From stephen at xemacs.org Tue Nov 1 04:57:15 2011 From: stephen at xemacs.org (Stephen J. Turnbull) Date: Tue, 01 Nov 2011 12:57:15 +0900 Subject: [Python-Dev] draft PEP: virtual environments In-Reply-To: <4EAF00A3.90400@oddbird.net> References: <4EAAF66F.9020603@oddbird.net> <20111029172342.61dbbd71@pitrou.net> <20111030132837.220c124d@pitrou.net> <4EAEC3CD.50408@oddbird.net> <4EAF00A3.90400@oddbird.net> Message-ID: <87obwwpk9w.fsf@uwakimon.sk.tsukuba.ac.jp> Carl Meyer writes: > > On 31 October 2011 16:08, Tres Seaver wrote: > >> I would say this is a perfect "opportunity to delegate," in this case > >> to the devotees of other cults^Wshells than bash. > > Good call - we'll stick with what we've got until such devotees > show up :-) That's fine, but either make sure it works with a POSIX-conformant /bin/sh, or make the shebang explicitly bash (bash is notoriously buggy in respect of being POSIX-compatible when named "sh"). From martin at v.loewis.de Tue Nov 1 08:25:20 2011 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Tue, 01 Nov 2011 08:25:20 +0100 Subject: [Python-Dev] draft PEP: virtual environments In-Reply-To: References: <4EAAF66F.9020603@oddbird.net> <20111029172342.61dbbd71@pitrou.net> <20111030133958.0c336199@pitrou.net> <20111030183551.2ce82c11@pitrou.net> <20111030235944.240770f2@pitrou.net> Message-ID: <4EAF9EE0.8080606@v.loewis.de> > Not a zip file specifically - just a binary stream which organises scripts to be > installed. If each class in a hierarchy has access to a binary stream, then > subclasses have access to the streams for base classes as well as their own > stream, and can install selectively from base class streams and their own stream. > > class Base: > scripts = ... # zip stream containing scripts A, B > > def install_scripts(self, stream): > # ... > > def setup_scripts(self): > self.install_scripts(self.scripts) > > class Derived: > scripts = ... 
# zip stream containing modified script B, new script C > > def setup_scripts(self): > self.install_scripts(Base.scripts) # adds A, B > self.install_scripts(self.scripts) # adds C, overwrites B I'm not sure how many scripts you are talking about, and how long they are. Assuming there are few, and assuming they are short, I'd not make them separate source files again, but put them into string literals instead: scripts = { 'start':'''\ #!/bin/sh echo start ''', 'stop':'''\ #!/bin/sh echo stop ''' } Then, your install_scripts would take a dictionary filename:script contents. That's just as easily extensible. Regards, Martin From techtonik at gmail.com Tue Nov 1 08:40:15 2011 From: techtonik at gmail.com (anatoly techtonik) Date: Tue, 1 Nov 2011 10:40:15 +0300 Subject: [Python-Dev] cpython (3.2): adjust braces a bit In-Reply-To: References: <4EA19390.9080604@trueblade.com> Message-ID: On Fri, Oct 21, 2011 at 8:17 PM, Benjamin Peterson wrote: > 2011/10/21 Tres Seaver : >> -----BEGIN PGP SIGNED MESSAGE----- >> Hash: SHA1 >> >> On 10/21/2011 12:31 PM, Benjamin Peterson wrote: >>> 2011/10/21 Eric V. Smith : >>>> What's the logic for adding some braces, but removing others? >>> >>> No braces if everything is a one-liner, otherwise braces >>> everywhere. >> >> Hmm, PEP 7 doesn't show any example of the one-liner exception. Given >> that it tends to promote errors, particularly among >> indentation-conditioned Python programmers (adding another statement >> at the same indentation level), why not just have braces everywhere? > > Because we're not writing Python? Right. Only CPython here. Where can I find The True Python? =) BTW, some of this stuff may find its way into PEP-7 http://google-styleguide.googlecode.com/svn/trunk/cppguide.xml?showone=Conditionals#Conditionals -- anatoly t.
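Returning to the virtual-environments thread: Martin's suggestion of passing install_scripts a dictionary mapping filenames to script contents could be sketched roughly as follows. The names (`SCRIPTS`, `install_scripts`) and the chmod-755 policy are illustrative assumptions for this sketch, not part of any actual packaging API:

```python
import os
import stat
import tempfile

# Hypothetical names for illustration only -- not an actual packaging API.
# The dictionary maps filename -> script contents, as Martin suggests.
SCRIPTS = {
    'start': '#!/bin/sh\necho start\n',
    'stop': '#!/bin/sh\necho stop\n',
}

def install_scripts(scripts, target_dir):
    """Write each filename:contents pair into target_dir, marked executable."""
    os.makedirs(target_dir, exist_ok=True)
    for name, contents in scripts.items():
        path = os.path.join(target_dir, name)
        with open(path, 'w') as f:
            f.write(contents)
        # chmod 755 so the installed scripts are runnable by everyone
        os.chmod(path, stat.S_IRWXU | stat.S_IRGRP | stat.S_IXGRP |
                 stat.S_IROTH | stat.S_IXOTH)

# A derived class can merge the base dictionary with its own before
# installing: here 'stop' is overridden and 'start' is inherited.
merged = dict(SCRIPTS, stop='#!/bin/sh\necho graceful stop\n')

demo_dir = tempfile.mkdtemp()
install_scripts(merged, demo_dir)
installed = sorted(os.listdir(demo_dir))
```

This mirrors the Base/Derived override pattern from the earlier message: subclasses extend or replace entries by key before a single install pass.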
From vinay_sajip at yahoo.co.uk Tue Nov 1 09:15:40 2011 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Tue, 1 Nov 2011 08:15:40 +0000 (UTC) Subject: [Python-Dev] draft PEP: virtual environments References: <4EAAF66F.9020603@oddbird.net> <20111029172342.61dbbd71@pitrou.net> <20111030133958.0c336199@pitrou.net> <20111030183551.2ce82c11@pitrou.net> <20111030235944.240770f2@pitrou.net> <4EAF9EE0.8080606@v.loewis.de> Message-ID: Martin v. Löwis v.loewis.de> writes: > I'm not sure how many scripts you are talking about, and how long they > are. Assuming there are few, and assuming they are short, I'd not make > them separate source files again, but put them into string literals instead: > > scripts = { > 'start':'''\ > #!/bin/sh > echo start > ''', > 'stop':'''\ > #!/bin/sh > echo stop > ''' > } > > Then, your install_scripts would take a dictionary filename:script > contents. That's just as easily extensible. True, but while the default scripts are not *too* long, third party scripts might not be amenable to this treatment. Plus, there can be binary executables in there too: at the moment, the pysetup3 script on Windows is shipped as a stub executable pysetup3.exe and a script pysetup3-script.py (since we can't rely on the PEP 397 launcher being available, this is the only way of being sure that the correct Python gets to run the script). I've changed the implementation now to use a directory tree, and the API takes the absolute pathname of the directory containing the scripts. Regards, Vinay Sajip From daveabailey at gmail.com Tue Nov 1 06:59:28 2011 From: daveabailey at gmail.com (David Bailey) Date: Mon, 31 Oct 2011 22:59:28 -0700 Subject: [Python-Dev] PEP 397 and idle Message-ID: python-dev I am being forced to support multiple versions of python on Windows platforms. I have been using PEP 397 and the execution of *.py files works great. Thank you!! My problem is idle. The various versions of idle have the same problem as the various versions of python.
We were using an editor that allowed python selection, however, they stopped supporting python and we are back using idle. Any suggestions on switching versions of idle would be appreciated. Thanks Dave Bailey -------------- next part -------------- An HTML attachment was scrubbed... URL: From amauryfa at gmail.com Tue Nov 1 14:25:28 2011 From: amauryfa at gmail.com (Amaury Forgeot d'Arc) Date: Tue, 1 Nov 2011 14:25:28 +0100 Subject: [Python-Dev] PEP 397 and idle In-Reply-To: References: Message-ID: Hi, 2011/11/1 David Bailey > python-dev > > I am being forced to support multiple versions of python on Windows > platforms. I have been using PEP 397 and the execution of *.py files works > great. Thank you!! > My problem is idle. The various versions of idle have the same problem as > the various versions of python. We were using an editor that allowed python > selection, however, they stopped supporting python and we are back using > idle. Any suggestions on switching versions of idle would be appreciated. > The python-dev mailing list is for the development *of* python. For development *with* python, please ask your question on the python-list mailing list, or the comp.lang.python newsgroup. There are many friendly people there ready to answer your questions. Thank you! -- Amaury Forgeot d'Arc -------------- next part -------------- An HTML attachment was scrubbed... URL: From solipsis at pitrou.net Tue Nov 1 14:39:36 2011 From: solipsis at pitrou.net (Antoine Pitrou) Date: Tue, 1 Nov 2011 14:39:36 +0100 Subject: [Python-Dev] PEP 397 and idle References: Message-ID: <20111101143936.0bd7890a@pitrou.net> On Tue, 1 Nov 2011 14:25:28 +0100 "Amaury Forgeot d'Arc" wrote: > Hi, > > 2011/11/1 David Bailey > > > python-dev > > > > I am being forced to support multiple versions of python on Windows > > platforms. I have been using PEP 397 and the execution of *.py files works > > great. Thank you!! > > My problem is idle. 
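For readers unfamiliar with PEP 397: the launcher David is using selects an interpreter based on the script's shebang line. A deliberately simplified sketch of that selection idea is below — this is not the launcher's actual code, which also handles absolute interpreter paths, command-line options and a configurable default; the `default` fallback here is an assumption for illustration:

```python
import re

def pick_version(first_line, default='2'):
    """Pick a Python version from a script's shebang line.

    Toy sketch of PEP 397-style selection; 'default' is what a
    real launcher would read from configuration.
    """
    m = re.match(r'#!.*\bpython(\d(?:\.\d+)?)?', first_line)
    if m is None:
        return default            # no python shebang at all
    return m.group(1) or default  # bare "python" also falls back

# The two hello.py variants discussed in this thread:
v2 = pick_version('#!/usr/bin/env python2')
v3 = pick_version('#!/usr/bin/env python3')
```

With logic like this, double-clicking the `python3` variant of hello.py runs it under 3.x while the `python2` variant keeps working, which is exactly the behaviour David reports.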
The various versions of idle have the same problem as > > the various versions of python. We were using an editor that allowed python > > selection, however, they stopped supporting python and we are back using > > idle. Any suggestions on switching versions of idle would be appreciated. > > > > The python-dev mailing list is for the development *of* python. > For development *with* python, please ask your question on the python-list > mailing list, or the comp.lang.python newsgroup. Given the question is about an in-progress PEP, it actually seems quite appropriate for python-dev. Also, until now, IDLE is supposed to be developed in the stdlib (although concretely it's not developed anymore). Regards Antoine. From carl at oddbird.net Tue Nov 1 16:30:30 2011 From: carl at oddbird.net (Carl Meyer) Date: Tue, 01 Nov 2011 09:30:30 -0600 Subject: [Python-Dev] draft PEP: virtual environments In-Reply-To: <87obwwpk9w.fsf@uwakimon.sk.tsukuba.ac.jp> References: <4EAAF66F.9020603@oddbird.net> <20111029172342.61dbbd71@pitrou.net> <20111030132837.220c124d@pitrou.net> <4EAEC3CD.50408@oddbird.net> <4EAF00A3.90400@oddbird.net> <87obwwpk9w.fsf@uwakimon.sk.tsukuba.ac.jp> Message-ID: <4EB01096.70008@oddbird.net> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On 10/31/2011 09:57 PM, Stephen J. Turnbull wrote: > That's fine, but either make sure it works with a POSIX-conformant > /bin/sh, or make the shebang explicitly bash (bash is notoriously > buggy in respect of being POSIX-compatible when named "sh"). It has no shebang line, it must be sourced not run (its only purpose is to modify the current shell environment). 
Carl -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.10 (GNU/Linux) Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/ iEYEARECAAYFAk6wEJYACgkQ8W4rlRKtE2dNGQCguHy8iYMgWIJyaQqABObt5ecv esIAnjmuHYH+G8JBGBzcwZzj8sofPinc =MR6D -----END PGP SIGNATURE----- From p.f.moore at gmail.com Tue Nov 1 17:29:49 2011 From: p.f.moore at gmail.com (Paul Moore) Date: Tue, 1 Nov 2011 16:29:49 +0000 Subject: [Python-Dev] draft PEP: virtual environments In-Reply-To: <4EAF00A3.90400@oddbird.net> References: <4EAAF66F.9020603@oddbird.net> <20111029172342.61dbbd71@pitrou.net> <20111030132837.220c124d@pitrou.net> <4EAEC3CD.50408@oddbird.net> <4EAF00A3.90400@oddbird.net> Message-ID: On 31 October 2011 20:10, Carl Meyer wrote: >> For Windows, can you point me at the nt scripts? If they aren't too >> complex, I'd be willing to port to Powershell. > > Thanks! They are here: > https://bitbucket.org/vinay.sajip/pythonv/src/6d057cfaaf53/Lib/venv/scripts/nt The attached should work. Untested at the moment, I'm afraid, as I don't have access to a PC with the venv branch available. But they aren't complex, so they should be fine. Paul. -------------- next part -------------- A non-text attachment was scrubbed... Name: Deactivate.ps1 Type: application/octet-stream Size: 468 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: Activate.ps1 Type: application/octet-stream Size: 996 bytes Desc: not available URL: From p.f.moore at gmail.com Tue Nov 1 17:40:54 2011 From: p.f.moore at gmail.com (Paul Moore) Date: Tue, 1 Nov 2011 16:40:54 +0000 Subject: [Python-Dev] draft PEP: virtual environments In-Reply-To: References: <4EAAF66F.9020603@oddbird.net> <20111029172342.61dbbd71@pitrou.net> <20111030132837.220c124d@pitrou.net> <4EAEC3CD.50408@oddbird.net> <4EAF00A3.90400@oddbird.net> Message-ID: On 1 November 2011 16:29, Paul Moore wrote: > On 31 October 2011 20:10, Carl Meyer wrote: >>> For Windows, can you point me at the nt scripts? If they aren't too >>> complex, I'd be willing to port to Powershell. >> >> Thanks! They are here: >> https://bitbucket.org/vinay.sajip/pythonv/src/6d057cfaaf53/Lib/venv/scripts/nt > > The attached should work. Untested at the moment, I'm afraid, as I > don't have access to a PC with the venv branch available. But they > aren't complex, so they should be fine. By the way, these do not need to be dot-sourced to activate/deactivate the venv, but they do need to be dot-sourced to enable the prompt change. As the prompt is more of a cosmetic thing, I'm not sure how crucial that is... Paul. From p.f.moore at gmail.com Tue Nov 1 17:43:28 2011 From: p.f.moore at gmail.com (Paul Moore) Date: Tue, 1 Nov 2011 16:43:28 +0000 Subject: [Python-Dev] draft PEP: virtual environments In-Reply-To: References: <4EAAF66F.9020603@oddbird.net> <20111029172342.61dbbd71@pitrou.net> <20111030132837.220c124d@pitrou.net> <4EAEC3CD.50408@oddbird.net> <4EAF00A3.90400@oddbird.net> Message-ID: On 1 November 2011 16:40, Paul Moore wrote: > On 1 November 2011 16:29, Paul Moore wrote: >> On 31 October 2011 20:10, Carl Meyer wrote: >>>> For Windows, can you point me at the nt scripts? If they aren't too >>>> complex, I'd be willing to port to Powershell. >>> >>> Thanks! 
They are here: >>> https://bitbucket.org/vinay.sajip/pythonv/src/6d057cfaaf53/Lib/venv/scripts/nt >> >> The attached should work. Untested at the moment, I'm afraid, as I >> don't have access to a PC with the venv branch available. But they >> aren't complex, so they should be fine. > > By the way, these do not need to be dot-sourced to activate/deactivate > the venv, but they do need to be dot-sourced to enable the prompt > change. As the prompt is more of a cosmetic thing, I'm not sure how > crucial that is... ... and of course, to prove that anything untested is wrong, here's a minor fix to deactivate.ps1 :-) Paul. -------------- next part -------------- A non-text attachment was scrubbed... Name: Deactivate.ps1 Type: application/octet-stream Size: 540 bytes Desc: not available URL: From daveabailey at gmail.com Tue Nov 1 19:20:24 2011 From: daveabailey at gmail.com (David Bailey) Date: Tue, 1 Nov 2011 11:20:24 -0700 Subject: [Python-Dev] PEP 397 and idle In-Reply-To: References: Message-ID: Amaury, Maybe this belongs on some blog. I don't know. I was responding to PEP 397, seems to me that idle was left out. I am not being critical of what you guys are doing. I love python. As a developer, I see a problem. You are correct, I have no technical issue. I do believe it is still a development issue that is more of a marketing issue than a technical issue. I am tired of ruby this and ruby on rails that. I love python and want it to succeed. If you want to increase the population of windows users of python, make idle easier to use or fix print in 3.X or both. The point I was trying to make, is that to the world outside of python-dev, idle is part of python, especially on a windows platform. I am not trying to say that emacs should be part of gcc or anything like that. I have taught python to a number of windows users and the editor is the first big hurdle to using python. On linux, emacs vs vim might be an issue but the editor is not an issue. 
My guess is that very few python-dev developers use windows. I wouldn't. It is fascinating to watch a new python user on windows. The typical scenario is that they download both a version of 3.X and 2.6 or 2.7 because they are not sure which is correct. I tell them to uninstall 3.X but there is a lot of push back because they don't want the "old stuff". I grew up using Unix, and it is very hard to watch someone trying to use microsoft word to write a python 'hello world'. Every book they read says type print "hello world" into hello.py. They will try any way they know how. Then they double click on hello.py and if they are lucky and have a slow machine they can see a syntax error flash on their screen from a command window that pops up and then goes away. The people with fast machines just throw their hands in the air. Some dig deeper, and migrate to idle. Idle generates the same syntax error but with persistence. They try all the different combinations of quotes, because that seems to be the indicated syntax error. Then they start asking about Ruby. So we install PEP 397 and change the hello.py to #!/usr/bin/env python2 print "hello world" raw_input('>') and #!/usr/bin/env python3 print ("hello world") input('>') It now executes correctly when they double click and the command window stays open. Python works and everyone is happy, but idle is broken and continues to generate a syntax error. At this point they don't have warm fuzzy feelings about python. These people were thinking that they were going to use python in their job, but now it is too scary. They don't understand how python works but idle does not. It is truly unfortunate that print breaks between versions of python. The uninitiated do not expect that. So once I get to the point of saying " ok now lets first create .bat files to start different versions of idle, their eyes start to glaze over. 
And then when I say " OK now lets put copies of these .bat files in all the python folders on the desktop" they start looking out the real windows. What once took 5 min in a training now takes the whole hour if someone has loaded two versions, and the whole class is lost to VB or something worse. Idle needs to be smart so it runs the correct version of python, or generates a print deprecated message or something that gives a new user a clue. This is more of a marketing issue than a technical issue. You can develop the greatest new version of python but if a new python user can't get past the first page of the python book they bought, they will not bother to go to page two and the greatest new version that you guys develop is lost all because of a single keyword print that has no technical issues. Dave Bailey On Tue, Nov 1, 2011 at 6:25 AM, Amaury Forgeot d'Arc wrote: > Hi, > > 2011/11/1 David Bailey > >> python-dev >> >> I am being forced to support multiple versions of python on Windows >> platforms. I have been using PEP 397 and the execution of *.py files works >> great. Thank you!! >> My problem is idle. The various versions of idle have the same problem as >> the various versions of python. We were using an editor that allowed python >> selection, however, they stopped supporting python and we are back using >> idle. Any suggestions on switching versions of idle would be appreciated. >> > > The python-dev mailing list is for the development *of* python. > For development *with* python, please ask your question on the python-list > mailing list, or the comp.lang.python newsgroup. > There are many friendly people there ready to answer your questions. > Thank you! > > -- > Amaury Forgeot d'Arc > -------------- next part -------------- An HTML attachment was scrubbed... URL: From barry at python.org Tue Nov 1 21:31:47 2011 From: barry at python.org (Barry Warsaw) Date: Tue, 1 Nov 2011 16:31:47 -0400 Subject: [Python-Dev] Code cleanups in stable branches?
In-Reply-To: <4EAED974.602@netwok.org> References: <4EAED974.602@netwok.org> Message-ID: <20111101163147.5ca8f2fe@resist> On Oct 31, 2011, at 06:23 PM, Éric Araujo wrote: >I thought that patches that clean up code but don't fix actual bugs were >not done in stable branches. Has this changed? I hope not. Sure, if they fix actual bugs, that's fine, but as MvL often points out, even innocent looking changes can break code *somewhere*. We don't lose by being conservative with our stable branches. -Barry From tjreedy at udel.edu Tue Nov 1 22:28:54 2011 From: tjreedy at udel.edu (Terry Reedy) Date: Tue, 01 Nov 2011 17:28:54 -0400 Subject: [Python-Dev] PEP 397 and idle In-Reply-To: References: Message-ID: On 11/1/2011 2:20 PM, David Bailey wrote: > population of windows users of python, make idle easier to use or fix > print in 3.X or both. print is fixed in 3.x. This is not the place to argue otherwise. If you want to rant against print as function, go to python-list. If one looks up 'print' in the index, it is clearly and immediately identified as a "built-in function" (at least in the Windows help version of the docs). IDLE is as trivial to use as anything. A Start menu icon is installed. If used often, it appears in the frequently used programs list. I presume a shortcut can be copied to the desktop. I just have it pinned to my taskbar. One click and it starts. Could not be easier. Now, it would be better if the icons were labelled by version. I thought that had been agreed on, and I intend to request it again. > is that very few python-dev developers use windows. I wouldn't. I am an exception; I have used it for a decade and still do. And I still plan to work on improving it. > fascinating to watch a new python user on windows. The typical scenario > is that they download both a version of 3.X and 2.6 or 2.7 because they > are not sure which is correct. I tell them to uninstall 3.X WRONG, WRONG, WRONG.
New users should install the most recent 3.x, learn it, and only bother with 2.7 if they actually NEED it. > but there is a lot of push back because they don't want the "old stuff". They are correct about that. 3.x is easier to learn because of the removal of obsolete junk. > microsoft word to write a python 'hello world'. That works if, AND ONLY IF, they know to save as plain vanilla text mode. Perhaps this needs to be emphasized better. Notepad works better. IDLE is much, much better yet. > Every book they read says type print "hello world" into hello.py. Only old 2.x books. Any 3.x book (like Mark Summerfield's) will say 'print("Hello world")' Do the people you are talking about expect that everything in a Windows XP book applies without change to Windows 7? > They will try any way they know how. Except look in the doc, with its index? Or type 'help(print)'? Any decent beginner book or class should explain how to use both the docs and help(). I hope our tutorial does. > Then they double click on hello.py This and PEP 397 have nothing to do with IDLE. It is for interactive use and editing. It would be helpful if the right-click context menu for .py files had an 'Edit with IDLEx.y' for each version installed, instead of one 'Edit with IDLE'. But there is no 'correct' version for editing. It depends on what the user intends to do, and what version the user wants the file to run with *after* editing. IDLE could be enhanced to look at the first line of files that it is opening and if there is a conflicting shebang line, ask if the user wants to open it with a different version of Python and IDLE. This should NOT, however, be automatic as the user might want to edit the file (including the first line) to work with the Python version already running. > #!/usr/bin/env python3 > print ("hello world") > input('>') > > It now executes correctly when they double click and the command window > stays open.
Python works and everyone is happy, but idle is broken and > continues to generate a syntax error. IDLE 3.2 runs the above fine. I have no idea what you are talking about. > It is truly unfortunate that print breaks between versions of python. > The uninitiated do not expect that. So they should start with 3.2 and never learn about the old version unless they really, really have to. > " ok now lets first create .bat files to start different versions of > idle, Why? The installers already create version-specific icons (which, as I said, should be better labelled), and beginners should start with the modern one. > their eyes start to glaze over. No wonder. > And then when I say " OK now lets > put copies of these .bat files in all the python folders on the desktop" Why? This makes no sense to me, an experienced IDLE user. > they start looking out the real windows. No wonder. > What once took 5 min in a > training now takes the whole hour if someone has loaded two versions, So they should not do that, and you should not mess their heads with talk about unneeded .bat files. > Idle needs to be smart so it runs the correct version of python, IDLE does not run Python, it is run by Python, and the Python it is run by runs *its* version of IDLE in *its* stdlib. -- Terry Jan Reedy From ncoghlan at gmail.com Tue Nov 1 23:20:33 2011 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 2 Nov 2011 08:20:33 +1000 Subject: [Python-Dev] PEP 397 and idle In-Reply-To: References: Message-ID: On Wed, Nov 2, 2011 at 7:28 AM, Terry Reedy wrote: > Now, it would be better if the icons were labelled by version. I thought > that had been agreed on, and I intend to request it again. It has, it really just needs a patch put forward with specific installer changes in it. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com |
Brisbane, Australia From vinay_sajip at yahoo.co.uk Wed Nov 2 12:37:52 2011 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Wed, 2 Nov 2011 11:37:52 +0000 (UTC) Subject: [Python-Dev] Unexpected behaviour in compileall Message-ID: I just started getting errors in my PEP 404 / pythonv branch, but they don't at first glance appear related to the functionality of this branch. What I'm seeing is that during installation, some of the .pyc/.pyo files written by compileall have mode 600 rather than the expected 644, with the result that test_compileall fails when run from the installed Python as an unprivileged user. If I manually do sudo chmod a+r /usr/local/lib/python3.3/__pycache__/* then test_compileall works again. I added a diagnostic to compileall.py, here's an extract from the log of the subsequent installation: Listing '/usr/local/lib/python3.3'... Compiling '/usr/local/lib/python3.3/__future__.py'... Mode of [...]/__pycache__/__future__.cpython-33.pyc is 644 Compiling '/usr/local/lib/python3.3/__phello__.foo.py'... Mode of [...]/__pycache__/__phello__.foo.cpython-33.pyc is 644 Compiling '/usr/local/lib/python3.3/_compat_pickle.py'... Mode of [...]/__pycache__/_compat_pickle.cpython-33.pyc is 644 Compiling '/usr/local/lib/python3.3/_dummy_thread.py'... Mode of [...]/__pycache__/_dummy_thread.cpython-33.pyc is 644 Compiling '/usr/local/lib/python3.3/_markupbase.py'... Mode of [...]/__pycache__/_markupbase.cpython-33.pyc is 644 Compiling '/usr/local/lib/python3.3/_pyio.py'... Mode of [...]/__pycache__/_pyio.cpython-33.pyc is 644 Compiling '/usr/local/lib/python3.3/_strptime.py'... Mode of [...]/__pycache__/_strptime.cpython-33.pyc is 644 Compiling '/usr/local/lib/python3.3/_sysconfigdata.py'... Mode of [...]/__pycache__/_sysconfigdata.cpython-33.pyc is 600 Compiling '/usr/local/lib/python3.3/_threading_local.py'... Mode of [...]/__pycache__/_threading_local.cpython-33.pyc is 644 Compiling '/usr/local/lib/python3.3/_weakrefset.py'... 
Mode of [...]/__pycache__/_weakrefset.cpython-33.pyc is 600 Compiling '/usr/local/lib/python3.3/abc.py'... Mode of [...]/__pycache__/abc.cpython-33.pyc is 600 Compiling '/usr/local/lib/python3.3/aifc.py'... Mode of [...]/__pycache__/aifc.cpython-33.pyc is 644 The 600s and 644s are interspersed with no pattern immediately apparent. All the source files have mode 644, as expected. This happens on two different Posix machines - Ubuntu Natty and OS X Leopard - so doesn't seem to be related to the external environment. Can anyone shed any light as to what might be going on? Regards, Vinay Sajip From neologix at free.fr Wed Nov 2 14:58:07 2011 From: neologix at free.fr (=?ISO-8859-1?Q?Charles=2DFran=E7ois_Natali?=) Date: Wed, 2 Nov 2011 14:58:07 +0100 Subject: [Python-Dev] Unexpected behaviour in compileall In-Reply-To: References: Message-ID: 2011/11/2 Vinay Sajip : > I just started getting errors in my PEP 404 / pythonv branch, but they don't > at first glance appear related to the functionality of this branch. What I'm > seeing is that during installation, some of the .pyc/.pyo files written by > compileall have mode 600 rather than the expected 644, with the result that > test_compileall fails when run from the installed Python as an unprivileged > user. If I manually do It's a consequence of http://hg.python.org/cpython/rev/740baff4f169. I'll fix that. From vinay_sajip at yahoo.co.uk Wed Nov 2 17:13:43 2011 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Wed, 2 Nov 2011 16:13:43 +0000 (UTC) Subject: [Python-Dev] Unexpected behaviour in compileall References: Message-ID: Charles-Fran?ois Natali free.fr> writes: > It's a consequence of http://hg.python.org/cpython/rev/740baff4f169. > I'll fix that. Should a new issue be opened (or #13303 re-opened) pending this fix? 
Regards, Vinay Sajip From brett at python.org Wed Nov 2 18:07:07 2011 From: brett at python.org (Brett Cannon) Date: Wed, 2 Nov 2011 10:07:07 -0700 Subject: [Python-Dev] Unexpected behaviour in compileall In-Reply-To: References: Message-ID: On Wed, Nov 2, 2011 at 09:13, Vinay Sajip wrote: > Charles-François Natali free.fr> writes: > > > It's a consequence of http://hg.python.org/cpython/rev/740baff4f169. > > I'll fix that. > > Should a new issue be opened (or #13303 re-opened) pending this fix? > Re-open the issue. > > Regards, > > Vinay Sajip > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > http://mail.python.org/mailman/options/python-dev/brett%40python.org > -------------- next part -------------- An HTML attachment was scrubbed... URL: From raymond.hettinger at gmail.com Wed Nov 2 22:50:19 2011 From: raymond.hettinger at gmail.com (Raymond Hettinger) Date: Wed, 2 Nov 2011 14:50:19 -0700 Subject: [Python-Dev] Code cleanups in stable branches? In-Reply-To: <20111101163147.5ca8f2fe@resist> References: <4EAED974.602@netwok.org> <20111101163147.5ca8f2fe@resist> Message-ID: <2BA9EBA9-3750-4827-98E9-71002A1D9C2A@gmail.com> On Nov 1, 2011, at 1:31 PM, Barry Warsaw wrote: > On Oct 31, 2011, at 06:23 PM, Éric Araujo wrote: > >> I thought that patches that clean up code but don't fix actual bugs were >> not done in stable branches. Has this changed? > > I hope not. Sure, if they fix actual bugs, that's fine, but as MvL often > points out, even innocent looking changes can break code *somewhere*. We > don't lose by being conservative with our stable branches. > > -Barry I concur with Barry and MvL. Random code cleanups increase the risk of introducing new bugs.
Raymond From derek.shockey at gmail.com Thu Nov 3 03:32:38 2011 From: derek.shockey at gmail.com (Derek Shockey) Date: Wed, 2 Nov 2011 19:32:38 -0700 Subject: [Python-Dev] ints not overflowing into longs? Message-ID: I just found an unexpected behavior and I'm wondering if it is a bug. In my 2.7.2 interpreter on OS X, built and installed via MacPorts, it appears that integers are not correctly overflowing into longs and instead are yielding bizarre results. I can only reproduce this when using the exponent operator with two ints (declaring either operand explicitly as long prevents the behavior). >>> 2**100 0 >>> 2**100L 1267650600228229401496703205376L >>> 20**20 -2101438300051996672 >>> 20L**20 104857600000000000000000000L >>> 10**20 7766279631452241920 >>> 10L**20L 100000000000000000000L To confirm I'm not crazy, I tried in the 2.7.1 and 2.6.7 installations included in OS X 10.7, and also a 2.7.2+ (not sure what the + is) on an Ubuntu machine and didn't see this behavior. This looks like some kind of truncation error, but I don't know much about the internals of Python and have no idea what's going on. I assume since it's only in my MacPorts installation, it must be a build configuration issue that is specific to OS X, perhaps only 10.7, or MacPorts. Am I doing something wrong, and is there a way to fix it before I compile? I couldn't find any references to this problem as a known issue. Thanks, Derek From guido at python.org Thu Nov 3 03:41:30 2011 From: guido at python.org (Guido van Rossum) Date: Wed, 2 Nov 2011 19:41:30 -0700 Subject: [Python-Dev] ints not overflowing into longs? In-Reply-To: References: Message-ID: Apparently Macports is still using a buggy compiler. I reported a similar issue before and got this reply from Ned Deily: """ Thanks for the pointer. That looks like a duplicate of Issue11149 (and Issue12701). Another manifestation of this was reported in Issue13061 which also originated from MacPorts.
I'll remind them that the configure change is likely needed for all Pythons. It's still safest to stick with good old gcc-4.2 on OS X at the moment. """ (Those issues are on bugs.python.org.) --Guido On Wed, Nov 2, 2011 at 7:32 PM, Derek Shockey wrote: > I just found an unexpected behavior and I'm wondering if it is a bug. > In my 2.7.2 interpreter on OS X, built and installed via MacPorts, it > appears that integers are not correctly overflowing into longs and > instead are yielding bizarre results. I can only reproduce this when > using the exponent operator with two ints (declaring either operand > explicitly as long prevents the behavior). > >>>> 2**100 > 0 >>>> 2**100L > 1267650600228229401496703205376L > >>>> 20**20 > -2101438300051996672 >>>> 20L**20 > 104857600000000000000000000L > >>>> 10**20 > 7766279631452241920 >>>> 10L**20L > 100000000000000000000L > > To confirm I'm not crazy, I tried in the 2.7.1 and 2.6.7 installations > included in OS X 10.7, and also a 2.7.2+ (not sure what the + is) on > an Ubuntu machine and didn't see this behavior. This looks like some > kind of truncation error, but I don't know much about the internals of > Python and have no idea what's going on. I assume since it's only in > my MacPorts installation, it must be build configuration issue that is > specific to OS X, perhaps only 10.7, or MacPorts. > > Am I doing something wrong, and is there a way to fix it before I > compile? I could find any references to this problem as a known issue. 
> > Thanks, > Derek > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/guido%40python.org > -- --Guido van Rossum (python.org/~guido) From derek.shockey at gmail.com Thu Nov 3 05:37:02 2011 From: derek.shockey at gmail.com (Derek Shockey) Date: Wed, 2 Nov 2011 21:37:02 -0700 Subject: [Python-Dev] ints not overflowing into longs? In-Reply-To: References: Message-ID: Thank you, I narrowed it down from there and got a properly working build. I gather the problem is that in Xcode 4.2 the default compiler was changed to clang, but the version of clang bundled with it has a bug that breaks overflows in intobject.c. In case anyone else hits this, I fixed this in MacPorts by forcing it to use gcc. Edit the portfile (port edit python27) and add this anywhere after the 5th or so line: configure.compiler llvm-gcc-4 -Derek On Wed, Nov 2, 2011 at 7:41 PM, Guido van Rossum wrote: > Apparently Macports is still using a buggy compiler. I reported a > similar issue before and got this reply from Ned Delly: > > """ > Thanks for the pointer. ?That looks like a duplicate of Issue11149 (and > Issue12701). ?Another manifestation of this was reported in Issue13061 > which also originated from MacPorts. ?I'll remind them that the > configure change is likely needed for all Pythons. ?It's still safest to > stick with good old gcc-4.2 on OS X at the moment. > """ > > (Those issues are on bugs.python.org.) > > --Guido > > On Wed, Nov 2, 2011 at 7:32 PM, Derek Shockey wrote: >> I just found an unexpected behavior and I'm wondering if it is a bug. >> In my 2.7.2 interpreter on OS X, built and installed via MacPorts, it >> appears that integers are not correctly overflowing into longs and >> instead are yielding bizarre results. 
I can only reproduce this when >> using the exponent operator with two ints (declaring either operand >> explicitly as long prevents the behavior). >> >>>>> 2**100 >> 0 >>>>> 2**100L >> 1267650600228229401496703205376L >> >>>>> 20**20 >> -2101438300051996672 >>>>> 20L**20 >> 104857600000000000000000000L >> >>>>> 10**20 >> 7766279631452241920 >>>>> 10L**20L >> 100000000000000000000L >> >> To confirm I'm not crazy, I tried in the 2.7.1 and 2.6.7 installations >> included in OS X 10.7, and also a 2.7.2+ (not sure what the + is) on >> an Ubuntu machine and didn't see this behavior. This looks like some >> kind of truncation error, but I don't know much about the internals of >> Python and have no idea what's going on. I assume since it's only in >> my MacPorts installation, it must be build configuration issue that is >> specific to OS X, perhaps only 10.7, or MacPorts. >> >> Am I doing something wrong, and is there a way to fix it before I >> compile? I could find any references to this problem as a known issue. >> >> Thanks, >> Derek >> _______________________________________________ >> Python-Dev mailing list >> Python-Dev at python.org >> http://mail.python.org/mailman/listinfo/python-dev >> Unsubscribe: http://mail.python.org/mailman/options/python-dev/guido%40python.org >> > > > > -- > --Guido van Rossum (python.org/~guido) > From solipsis at pitrou.net Thu Nov 3 12:30:02 2011 From: solipsis at pitrou.net (Antoine Pitrou) Date: Thu, 3 Nov 2011 12:30:02 +0100 Subject: [Python-Dev] ints not overflowing into longs? References: Message-ID: <20111103123002.1e7fc789@pitrou.net> On Wed, 2 Nov 2011 19:41:30 -0700 Guido van Rossum wrote: > Apparently Macports is still using a buggy compiler. If I understand things correctly, this is technically not a buggy compiler but Python making optimistic assumptions about the C standard. (from issue11149: "clang (as with gcc 4.x) assumes signed integer overflow is undefined. 
But Python depends on the fact that signed integer overflow wraps") I'd happily call that a buggy C standard, though :-) Regards Antoine. > I reported a > similar issue before and got this reply from Ned Delly: > > """ > Thanks for the pointer. That looks like a duplicate of Issue11149 (and > Issue12701). Another manifestation of this was reported in Issue13061 > which also originated from MacPorts. I'll remind them that the > configure change is likely needed for all Pythons. It's still safest to > stick with good old gcc-4.2 on OS X at the moment. > """ > > (Those issues are on bugs.python.org.) > > --Guido > > On Wed, Nov 2, 2011 at 7:32 PM, Derek Shockey wrote: > > I just found an unexpected behavior and I'm wondering if it is a bug. > > In my 2.7.2 interpreter on OS X, built and installed via MacPorts, it > > appears that integers are not correctly overflowing into longs and > > instead are yielding bizarre results. I can only reproduce this when > > using the exponent operator with two ints (declaring either operand > > explicitly as long prevents the behavior). > > > >>>> 2**100 > > 0 > >>>> 2**100L > > 1267650600228229401496703205376L > > > >>>> 20**20 > > -2101438300051996672 > >>>> 20L**20 > > 104857600000000000000000000L > > > >>>> 10**20 > > 7766279631452241920 > >>>> 10L**20L > > 100000000000000000000L > > > > To confirm I'm not crazy, I tried in the 2.7.1 and 2.6.7 installations > > included in OS X 10.7, and also a 2.7.2+ (not sure what the + is) on > > an Ubuntu machine and didn't see this behavior. This looks like some > > kind of truncation error, but I don't know much about the internals of > > Python and have no idea what's going on. I assume since it's only in > > my MacPorts installation, it must be build configuration issue that is > > specific to OS X, perhaps only 10.7, or MacPorts. > > > > Am I doing something wrong, and is there a way to fix it before I > > compile? I could find any references to this problem as a known issue. 
> > > > Thanks, > > Derek > > _______________________________________________ > > Python-Dev mailing list > > Python-Dev at python.org > > http://mail.python.org/mailman/listinfo/python-dev > > Unsubscribe: http://mail.python.org/mailman/options/python-dev/guido%40python.org > > > > > > -- > --Guido van Rossum (python.org/~guido) From victor.stinner at haypocalc.com Thu Nov 3 13:00:18 2011 From: victor.stinner at haypocalc.com (Victor Stinner) Date: Thu, 03 Nov 2011 13:00:18 +0100 Subject: [Python-Dev] ints not overflowing into longs? In-Reply-To: References: Message-ID: <1393916.XNIxN3dLtd@dsk000552> Le Mercredi 2 Novembre 2011 19:32:38 Derek Shockey a écrit : > I just found an unexpected behavior and I'm wondering if it is a bug. > In my 2.7.2 interpreter on OS X, built and installed via MacPorts, it > appears that integers are not correctly overflowing into longs and > instead are yielding bizarre results. I can only reproduce this when > using the exponent operator with two ints (declaring either operand > explicitly as long prevents the behavior). > > >>> 2**100 > > 0 This issue has already been fixed twice in Python 2.7 branch: int_pow() has been fixed and -fwrapv is now used for Clang. http://bugs.python.org/issue11149 http://bugs.python.org/issue12973 It is maybe time for a new release? :-) Victor From van.lindberg at gmail.com Thu Nov 3 13:36:12 2011 From: van.lindberg at gmail.com (VanL) Date: Thu, 03 Nov 2011 07:36:12 -0500 Subject: [Python-Dev] draft PEP: virtual environments In-Reply-To: References: <4EAAF66F.9020603@oddbird.net> <20111029172342.61dbbd71@pitrou.net> <20111030132837.220c124d@pitrou.net> <4EAEC3CD.50408@oddbird.net> <4EAF00A3.90400@oddbird.net> Message-ID: For what it's worth On 11/1/2011 11:43 AM, Paul Moore wrote: > On 1 November 2011 16:40, Paul Moore wrote: >> On 1 November 2011 16:29, Paul Moore wrote: >>> On 31 October 2011 20:10, Carl Meyer wrote: >>>>> For Windows, can you point me at the nt scripts? 
If they aren't too >>>>> complex, I'd be willing to port to Powershell. For what it's worth, there have been a number of efforts in this direction: https://bitbucket.org/guillermooo/virtualenvwrapper-powershell https://bitbucket.org/vanl/virtualenvwrapper-powershell (Both different implementations) From brian.curtin at gmail.com Thu Nov 3 15:59:24 2011 From: brian.curtin at gmail.com (Brian Curtin) Date: Thu, 3 Nov 2011 09:59:24 -0500 Subject: [Python-Dev] Buildbot failures In-Reply-To: <4EA319DA.2000904@gmail.com> References: <20111021230808.7c101aec@pitrou.net> <4EA319DA.2000904@gmail.com> Message-ID: On Sat, Oct 22, 2011 at 14:30, Andrea Crotti wrote: > On 10/21/2011 10:08 PM, Antoine Pitrou wrote: >> >> Hello, >> >> There are currently a bunch of various buildbot failures on all 3 >> branches. I would remind committers to regularly take a look at the >> buildbots, so that these failures get solved reasonably fast. >> >> Regards >> >> Antoine. > > In my previous workplace if someone broke a build committing something wrong > he/she > had to bring cake for everyone next meeting. > > The cake is not really feasible I guess, but isn't it possible to notify the > developer that > broke the build? You just have to keep track and bring all of the cakes that you owe to PyCon. From merwok at netwok.org Thu Nov 3 16:36:01 2011 From: merwok at netwok.org (=?UTF-8?B?w4lyaWMgQXJhdWpv?=) Date: Thu, 03 Nov 2011 16:36:01 +0100 Subject: [Python-Dev] ints not overflowing into longs? In-Reply-To: References: Message-ID: <4EB2B4E1.90409@netwok.org> Hi Derek, > I tried in the 2.7.1 and 2.6.7 installations included in OS X 10.7, > and also a 2.7.2+ (not sure what the + is) The + means that it's 2.7.2 + some commits, in other words the in-development version that will become 2.7.3. This bit of info seems to be missing from the doc. 
Regards From martin at v.loewis.de Thu Nov 3 18:14:42 2011 From: martin at v.loewis.de (martin at v.loewis.de) Date: Thu, 03 Nov 2011 18:14:42 +0100 Subject: [Python-Dev] Unicode exception indexing Message-ID: <20111103181442.Horde.6eRyY1NNcXdOsswC8zqHn-A@webmail.df.eu> There is a backwards compatibility issue with PEP 393 and Unicode exceptions: the start and end indices: are they Py_UNICODE indices, or code point indices? On the one hand, these indices are used in formatting error messages such as "codec can't encode character \u%04x in position %d", suggesting they are regular indices into the string (counting code points). On the other hand, they are used by error handlers to lookup the character, and existing error handlers (including the ones we have now) use PyUnicode_AsUnicode to find the character. This suggests that the indices should be Py_UNICODE indices, for compatibility (and they currently do work in this way). The indices can only be different if the string is an UCS-4 string, and Py_UNICODE is a two-byte type (i.e. on Windows). So what should it be? As a compromise, it would be possible to convert between these indices, by counting the non-BMP characters that precede the index if the indices might differ. That would be expensive to compute, but provide backwards compatibility to the C API. It's less clear what backwards compatibility to Python code would require - most likely, people would use the indices for slicing operations (rather than performing an UTF-16 conversion and performing indexing on that). Regards, Martin From derek.shockey at gmail.com Thu Nov 3 18:44:06 2011 From: derek.shockey at gmail.com (Derek Shockey) Date: Thu, 3 Nov 2011 10:44:06 -0700 Subject: [Python-Dev] ints not overflowing into longs? In-Reply-To: <20111103123002.1e7fc789@pitrou.net> References: <20111103123002.1e7fc789@pitrou.net> Message-ID: I believe you're right. 
The 2.7.2 MacPorts portfile definitely passes the -fwrapv flag to clang, but the bad behavior still occurs with exponents. I verified the current head of the 2.7 branch does not have this problem when built with clang, so I'm assuming that issue12973 resolved this with a patch to int_pow() and that it will be out in the next release. -Derek On Thu, Nov 3, 2011 at 4:30 AM, Antoine Pitrou wrote: > On Wed, 2 Nov 2011 19:41:30 -0700 > Guido van Rossum wrote: >> Apparently Macports is still using a buggy compiler. > > If I understand things correctly, this is technically not a buggy > compiler but Python making optimistic assumptions about the C standard. > (from issue11149: "clang (as with gcc 4.x) assumes signed integer > overflow is undefined. But Python depends on the fact that signed > integer overflow wraps") > > I'd happily call that a buggy C standard, though :-) > > Regards > > Antoine. > > >> I reported a >> similar issue before and got this reply from Ned Delly: >> >> """ >> Thanks for the pointer. ?That looks like a duplicate of Issue11149 (and >> Issue12701). ?Another manifestation of this was reported in Issue13061 >> which also originated from MacPorts. ?I'll remind them that the >> configure change is likely needed for all Pythons. ?It's still safest to >> stick with good old gcc-4.2 on OS X at the moment. >> """ >> >> (Those issues are on bugs.python.org.) >> >> --Guido >> >> On Wed, Nov 2, 2011 at 7:32 PM, Derek Shockey wrote: >> > I just found an unexpected behavior and I'm wondering if it is a bug. >> > In my 2.7.2 interpreter on OS X, built and installed via MacPorts, it >> > appears that integers are not correctly overflowing into longs and >> > instead are yielding bizarre results. I can only reproduce this when >> > using the exponent operator with two ints (declaring either operand >> > explicitly as long prevents the behavior). 
>> > >> >>>> 2**100 >> > 0 >> >>>> 2**100L >> > 1267650600228229401496703205376L >> > >> >>>> 20**20 >> > -2101438300051996672 >> >>>> 20L**20 >> > 104857600000000000000000000L >> > >> >>>> 10**20 >> > 7766279631452241920 >> >>>> 10L**20L >> > 100000000000000000000L >> > >> > To confirm I'm not crazy, I tried in the 2.7.1 and 2.6.7 installations >> > included in OS X 10.7, and also a 2.7.2+ (not sure what the + is) on >> > an Ubuntu machine and didn't see this behavior. This looks like some >> > kind of truncation error, but I don't know much about the internals of >> > Python and have no idea what's going on. I assume since it's only in >> > my MacPorts installation, it must be build configuration issue that is >> > specific to OS X, perhaps only 10.7, or MacPorts. >> > >> > Am I doing something wrong, and is there a way to fix it before I >> > compile? I could find any references to this problem as a known issue. >> > >> > Thanks, >> > Derek >> > _______________________________________________ >> > Python-Dev mailing list >> > Python-Dev at python.org >> > http://mail.python.org/mailman/listinfo/python-dev >> > Unsubscribe: http://mail.python.org/mailman/options/python-dev/guido%40python.org >> > >> >> >> >> -- >> --Guido van Rossum (python.org/~guido) > > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/derek.shockey%40gmail.com > From stefan at bytereef.org Thu Nov 3 19:07:52 2011 From: stefan at bytereef.org (Stefan Krah) Date: Thu, 3 Nov 2011 19:07:52 +0100 Subject: [Python-Dev] ints not overflowing into longs? In-Reply-To: References: <20111103123002.1e7fc789@pitrou.net> Message-ID: <20111103180752.GA18201@sleipnir.bytereef.org> Derek Shockey wrote: > I believe you're right. 
The 2.7.2 MacPorts portfile definitely passes > the -fwrapv flag to clang, but the bad behavior still occurs with > exponents. Really? Even without the fix for issue12973 the -fwrapv flag should be sufficient, as reported in issue13061 and Issue11149. For clang version 3.0 (trunk 139691) on FreeBSD this is the case. Stefan Krah From victor.stinner at haypocalc.com Thu Nov 3 20:16:21 2011 From: victor.stinner at haypocalc.com (Victor Stinner) Date: Thu, 3 Nov 2011 20:16:21 +0100 Subject: [Python-Dev] Unicode exception indexing In-Reply-To: <20111103181442.Horde.6eRyY1NNcXdOsswC8zqHn-A@webmail.df.eu> References: <20111103181442.Horde.6eRyY1NNcXdOsswC8zqHn-A@webmail.df.eu> Message-ID: <201111032016.21584.victor.stinner@haypocalc.com> Le jeudi 3 novembre 2011 18:14:42, martin at v.loewis.de a ?crit : > There is a backwards compatibility issue with PEP 393 and Unicode > exceptions: the start and end indices: are they Py_UNICODE indices, or > code point indices? Oh oh. That's exactly why I didn't want to start to work on this issue. http://bugs.python.org/issue13064 In a Python error handler, exc.object[exc.start:exc.end] should be used to get the unencodable/undecodable substring. In a C error handler, it depends if you use a Py_UNICODE* pointer or PyUnicode_Substring() / PyUnicode_READ. Using google.fr/codesearch, I found some user error handlers implemented in Python: * straw: "html_replace" * Nuxeo: "latin9_fallback" * peerscape: "htmlentityescape" * pymt: "cssescape" * .... I found no error implemented in C (not any call to PyCodec_RegisterError). > So what should it be? I suggest to use code point indices. Code point indices is also now more "natural" with the PEP 393. Because it is an incompatible change, it should be documented in the PEP and in the "What's new in Python 3.3" document. > As a compromise, it would be possible to convert between these indices, > by counting the non-BMP characters that precede the index if the indices > might differ. 
I started such a hack for the UTF-8 codec... It is really tricky, we should not do that! > That would be expensive to compute Yeah, O(n) should be avoided when it is possible. -- FYI I implemented a proof-of-concept in Python of the surrogateescape error handler for Python 2 (for Mercurial): https://bitbucket.org/haypo/misc/src/tip/python/surrogateescape.py Victor From stefan_ml at behnel.de Thu Nov 3 20:32:03 2011 From: stefan_ml at behnel.de (Stefan Behnel) Date: Thu, 03 Nov 2011 20:32:03 +0100 Subject: [Python-Dev] Buildbot failures In-Reply-To: References: <20111021230808.7c101aec@pitrou.net> <4EA319DA.2000904@gmail.com> Message-ID: Brian Curtin, 03.11.2011 15:59: > On Sat, Oct 22, 2011 at 14:30, Andrea Crotti wrote: >> On 10/21/2011 10:08 PM, Antoine Pitrou wrote: >>> >>> Hello, >>> >>> There are currently a bunch of various buildbot failures on all 3 >>> branches. I would remind committers to regularly take a look at the >>> buildbots, so that these failures get solved reasonably fast. >>> >>> Regards >>> >>> Antoine. >> >> In my previous workplace if someone broke a build committing something wrong >> he/she >> had to bring cake for everyone next meeting. >> >> The cake is not really feasible I guess, but isn't it possible to notify the >> developer that >> broke the build? > > You just have to keep track and bring all of the cakes that you owe to PyCon. Did you mean "PieCon"? Stefan From solipsis at pitrou.net Thu Nov 3 20:29:50 2011 From: solipsis at pitrou.net (Antoine Pitrou) Date: Thu, 3 Nov 2011 20:29:50 +0100 Subject: [Python-Dev] Unicode exception indexing References: <20111103181442.Horde.6eRyY1NNcXdOsswC8zqHn-A@webmail.df.eu> Message-ID: <20111103202950.04be04a8@pitrou.net> On Thu, 03 Nov 2011 18:14:42 +0100 martin at v.loewis.de wrote: > There is a backwards compatibility issue with PEP 393 and Unicode exceptions: > the start and end indices: are they Py_UNICODE indices, or code point indices? 
> > On the one hand, these indices are used in formatting error messages such as > "codec can't encode character \u%04x in position %d", suggesting they > are regular > indices into the string (counting code points). > > On the other hand, they are used by error handlers to lookup the character, > and existing error handlers (including the ones we have now) use > PyUnicode_AsUnicode to find the character. This suggests that the indices > should be Py_UNICODE indices, for compatibility (and they currently do > work in this way). But what about error handlers written in Python? > The indices can only be different if the string is an UCS-4 string, and > Py_UNICODE is a two-byte type (i.e. on Windows). > > So what should it be? I'd say let's do the Right Thing and accept the small compatibility breach (surrogates on UCS-2 builds). Regards Antoine. From derek.shockey at gmail.com Thu Nov 3 21:30:18 2011 From: derek.shockey at gmail.com (Derek Shockey) Date: Thu, 3 Nov 2011 13:30:18 -0700 Subject: [Python-Dev] ints not overflowing into longs? In-Reply-To: <20111103180752.GA18201@sleipnir.bytereef.org> References: <20111103123002.1e7fc789@pitrou.net> <20111103180752.GA18201@sleipnir.bytereef.org> Message-ID: You're right; among my many tests I think I muddled the situation with a stray CFLAGS variable in my environment. Apologies for the misinformation. The current MacPorts portfile does not add -fwrapv. Adding -fwrapv to OPT in the Makefile solves the problem. I confirmed by manually building the v2.7.2 tag with clang and -fwrapv, and the overflow behavior is correct. I've notified the MacPorts package maintainer. -Derek On Thu, Nov 3, 2011 at 11:07 AM, Stefan Krah wrote: > Derek Shockey wrote: >> I believe you're right. The 2.7.2 MacPorts portfile definitely passes >> the -fwrapv flag to clang, but the bad behavior still occurs with >> exponents. > > Really? 
Even without the fix for issue12973 the -fwrapv flag > should be sufficient, as reported in issue13061 and Issue11149. > > For clang version 3.0 (trunk 139691) on FreeBSD this is the case. > > > Stefan Krah > > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/derek.shockey%40gmail.com > From guido at python.org Thu Nov 3 22:09:37 2011 From: guido at python.org (Guido van Rossum) Date: Thu, 3 Nov 2011 14:09:37 -0700 Subject: [Python-Dev] Unicode exception indexing In-Reply-To: <20111103202950.04be04a8@pitrou.net> References: <20111103181442.Horde.6eRyY1NNcXdOsswC8zqHn-A@webmail.df.eu> <20111103202950.04be04a8@pitrou.net> Message-ID: On Thu, Nov 3, 2011 at 12:29 PM, Antoine Pitrou wrote: > On Thu, 03 Nov 2011 18:14:42 +0100 > martin at v.loewis.de wrote: >> There is a backwards compatibility issue with PEP 393 and Unicode exceptions: >> the start and end indices: are they Py_UNICODE indices, or code point indices? >> >> On the one hand, these indices are used in formatting error messages such as >> "codec can't encode character \u%04x in position %d", suggesting they >> are regular >> indices into the string (counting code points). >> >> On the other hand, they are used by error handlers to lookup the character, >> and existing error handlers (including the ones we have now) use >> PyUnicode_AsUnicode to find the character. This suggests that the indices >> should be Py_UNICODE indices, for compatibility (and they currently do >> work in this way). > > But what about error handlers written in Python? > >> The indices can only be different if the string is an UCS-4 string, and >> Py_UNICODE is a two-byte type (i.e. on Windows). >> >> So what should it be? > > I'd say let's do the Right Thing and accept the small compatibility > breach (surrogates on UCS-2 builds). 
+1 -- --Guido van Rossum (python.org/~guido) From tjreedy at udel.edu Thu Nov 3 22:19:10 2011 From: tjreedy at udel.edu (Terry Reedy) Date: Thu, 03 Nov 2011 17:19:10 -0400 Subject: [Python-Dev] Unicode exception indexing In-Reply-To: <201111032016.21584.victor.stinner@haypocalc.com> References: <20111103181442.Horde.6eRyY1NNcXdOsswC8zqHn-A@webmail.df.eu> <201111032016.21584.victor.stinner@haypocalc.com> Message-ID: On 11/3/2011 3:16 PM, Victor Stinner wrote: > Le jeudi 3 novembre 2011 18:14:42, martin at v.loewis.de a ?crit : >> There is a backwards compatibility issue with PEP 393 and Unicode >> exceptions: the start and end indices: are they Py_UNICODE indices, or >> code point indices? I had the impression that we were abolishing the wide versus narrow build difference and that this issue would disappear. I must have missed something. >> So what should it be? > > I suggest to use code point indices. Code point indices is also now more > "natural" with the PEP 393. I think we should look forward, not backwards. Error messages are defined as undefined ;-). So I think we should do what is right for the new implementation. I suspect that means that I am agreeing with both Victor and Antoine. > Because it is an incompatible change, it should be documented in the PEP and > in the "What's new in Python 3.3" document. ... > Yeah, O(n) should be avoided when is it possible. Definitely to both. 
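[To make the narrow-versus-wide indexing difference under discussion concrete, here is a small sketch. It runs on a PEP 393 build (Python 3.3+); the narrow-build behaviour noted in the comments is that of pre-PEP-393 16-bit builds such as 3.2 on Windows.]

```python
# One non-BMP character, indexed after PEP 393 versus on a narrow
# (UTF-16) build such as Python 3.2 on Windows.
s = "a\U0001F600b"  # U+1F600 lies outside the Basic Multilingual Plane

# PEP 393 (Python 3.3+): one code point == one index position.
assert len(s) == 3
assert s[1] == "\U0001F600"

# On a narrow build the same string was stored as a surrogate pair, so
# len(s) was 4 and s[1] was the lone high surrogate '\ud83d'.  This is
# exactly the mismatch behind the start/end question: an error at the
# trailing 'b' sits at index 2 in code points but 3 in Py_UNICODE units.
```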
-- Terry Jan Reedy From martin at v.loewis.de Thu Nov 3 22:43:30 2011 From: martin at v.loewis.de (=?UTF-8?B?Ik1hcnRpbiB2LiBMw7Z3aXMi?=) Date: Thu, 03 Nov 2011 22:43:30 +0100 Subject: [Python-Dev] Unicode exception indexing In-Reply-To: References: <20111103181442.Horde.6eRyY1NNcXdOsswC8zqHn-A@webmail.df.eu> <201111032016.21584.victor.stinner@haypocalc.com> Message-ID: <4EB30B02.9010609@v.loewis.de> Am 03.11.2011 22:19, schrieb Terry Reedy: > On 11/3/2011 3:16 PM, Victor Stinner wrote: >> Le jeudi 3 novembre 2011 18:14:42, martin at v.loewis.de a ?crit : >>> There is a backwards compatibility issue with PEP 393 and Unicode >>> exceptions: the start and end indices: are they Py_UNICODE indices, or >>> code point indices? > > I had the impression that we were abolishing the wide versus narrow > build difference and that this issue would disappear. I must have missed > something. Most certainly. The Py_UNICODE type continues to exist for backwards compatibility. It is now always a typedef for wchar_t, which makes it a 16-bit type on Windows. Regards, Martin From martin at v.loewis.de Thu Nov 3 22:47:00 2011 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Thu, 03 Nov 2011 22:47:00 +0100 Subject: [Python-Dev] Unicode exception indexing In-Reply-To: <20111103202950.04be04a8@pitrou.net> References: <20111103181442.Horde.6eRyY1NNcXdOsswC8zqHn-A@webmail.df.eu> <20111103202950.04be04a8@pitrou.net> Message-ID: <4EB30BD4.6030703@v.loewis.de> >> On the one hand, these indices are used in formatting error messages such as >> "codec can't encode character \u%04x in position %d", suggesting they >> are regular >> indices into the string (counting code points). >> >> On the other hand, they are used by error handlers to lookup the character, >> and existing error handlers (including the ones we have now) use >> PyUnicode_AsUnicode to find the character. 
This suggests that the indices >> should be Py_UNICODE indices, for compatibility (and they currently do >> work in this way). > > But what about error handlers written in Python? I'm working on a patch where a C error handler using PyUnicodeEncodeError_GetStart gets a different value than a Python error handler accessing .start. The _GetStart/_GetEnd functions would take the value from the exception object, and adjust it before returning it. The implementation is fairly straightforward, just a little expensive (in the case of non-BMP strings on Windows). Regards, Martin From martin at v.loewis.de Thu Nov 3 22:51:27 2011 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Thu, 03 Nov 2011 22:51:27 +0100 Subject: [Python-Dev] Unicode exception indexing In-Reply-To: <201111032016.21584.victor.stinner@haypocalc.com> References: <20111103181442.Horde.6eRyY1NNcXdOsswC8zqHn-A@webmail.df.eu> <201111032016.21584.victor.stinner@haypocalc.com> Message-ID: <4EB30CDF.70000@v.loewis.de> > I started such hack for the UTF-8 codec... It is really tricky, we should not > do that! With the proper encapsulation, it's not that tricky. I have written functions PyUnicode_IndexToWCharIndex and PyUnicode_WCharIndexToIndex, and PyUnicodeEncodeError_GetStart and friends would use that function. I'd also need new functions PyUnicodeEncodeError_GetStartIndex to access the "true" start field. >> That would be expensive to compute > > Yeah, O(n) should be avoided when is it possible. Ok. I'll wait half a day or so for people to reconsider (now knowing that it's actually feasible to be fully backwards compatible); if nobody speaks up, I go ahead and accept the breakage. 
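[For context on the Python-level handlers discussed in this thread: a custom error handler registered from Python receives the exception and slices exc.object with exc.start/exc.end, which index by code point. A minimal sketch follows; the handler name "hexescape" and its escape format are illustrative inventions, not a standard handler.]

```python
import codecs

def hexescape(exc):
    # Illustrative handler: replace each unencodable character with a
    # \x... escape.  Only the registration mechanism and the
    # exc.start/exc.end slicing follow the real codecs API.
    if not isinstance(exc, UnicodeEncodeError):
        raise exc
    bad = exc.object[exc.start:exc.end]   # code-point slicing
    return "".join("\\x{:x}".format(ord(ch)) for ch in bad), exc.end

codecs.register_error("hexescape", hexescape)

encoded = "caf\u00e9 \U0001F600".encode("ascii", errors="hexescape")
# 'é' is escaped as \xe9 and the non-BMP U+1F600 as \x1f600; with
# code-point indices, the non-BMP character is a single start:end slice.
```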
Regards, Martin From tjreedy at udel.edu Thu Nov 3 23:21:24 2011 From: tjreedy at udel.edu (Terry Reedy) Date: Thu, 03 Nov 2011 18:21:24 -0400 Subject: [Python-Dev] Unicode exception indexing In-Reply-To: <4EB30B02.9010609@v.loewis.de> References: <20111103181442.Horde.6eRyY1NNcXdOsswC8zqHn-A@webmail.df.eu> <201111032016.21584.victor.stinner@haypocalc.com> <4EB30B02.9010609@v.loewis.de> Message-ID: <4EB313E4.1060000@udel.edu> On 11/3/2011 5:43 PM, "Martin v. Löwis" wrote: >> I had the impression that we were abolishing the wide versus narrow >> build difference and that this issue would disappear. I must have missed >> something. > > Most certainly. The Py_UNICODE type continues to exist for backwards > compatibility. It is now always a typedef for wchar_t, which makes it > a 16-bit type on Windows. Thank you for answering: My revised impression now is that any string I create with Python code in Python 3.3+ (as distributed, without extensions or ctypes calls) will use the new implementation and will index and slice correctly, even with extended chars. So indexing is only an issue for those writing or using C-coded extensions with the old unicode C-API on systems with a 16-bit wchar_t. Correct? --- Terry Jan Reedy From ncoghlan at gmail.com Thu Nov 3 23:24:44 2011 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 4 Nov 2011 08:24:44 +1000 Subject: [Python-Dev] Unicode exception indexing In-Reply-To: <4EB30CDF.70000@v.loewis.de> References: <20111103181442.Horde.6eRyY1NNcXdOsswC8zqHn-A@webmail.df.eu> <201111032016.21584.victor.stinner@haypocalc.com> <4EB30CDF.70000@v.loewis.de> Message-ID: Your approach (doing the right thing for both Python and C, new API to avoid the C performance problem) sounds good to me. -- Nick Coghlan (via Gmail on Android, so likely to be more terse than usual) On Nov 4, 2011 7:58 AM, Martin v. Löwis wrote: > > I started such hack for the UTF-8 codec... It is really tricky, we > should not > > do that! 
> > With the proper encapsulation, it's not that tricky. I have written > functions PyUnicode_IndexToWCharIndex and PyUnicode_WCharIndexToIndex, > and PyUnicodeEncodeError_GetStart and friends would use that function. > I'd also need new functions PyUnicodeEncodeError_GetStartIndex to access > the "true" start field. > > >> That would be expensive to compute > > > > Yeah, O(n) should be avoided when is it possible. > > Ok. I'll wait half a day or so for people to reconsider (now knowing > that it's actually feasible to be fully backwards compatible); if nobody > speaks up, I go ahead and accept the breakage. > > Regards, > Martin > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > http://mail.python.org/mailman/options/python-dev/ncoghlan%40gmail.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: From solipsis at pitrou.net Fri Nov 4 03:18:32 2011 From: solipsis at pitrou.net (Antoine Pitrou) Date: Fri, 4 Nov 2011 03:18:32 +0100 Subject: [Python-Dev] Unicode exception indexing In-Reply-To: <4EB30BD4.6030703@v.loewis.de> References: <20111103181442.Horde.6eRyY1NNcXdOsswC8zqHn-A@webmail.df.eu> <20111103202950.04be04a8@pitrou.net> <4EB30BD4.6030703@v.loewis.de> Message-ID: <20111104031832.2aed713d@pitrou.net> On Thu, 03 Nov 2011 22:47:00 +0100 "Martin v. Löwis" wrote: > >> On the one hand, these indices are used in formatting error messages such as > >> "codec can't encode character \u%04x in position %d", suggesting they > >> are regular > >> indices into the string (counting code points). > >> > >> On the other hand, they are used by error handlers to lookup the character, > >> and existing error handlers (including the ones we have now) use > >> PyUnicode_AsUnicode to find the character. 
This suggests that the indices > >> should be Py_UNICODE indices, for compatibility (and they currently do > >> work in this way). > > > > But what about error handlers written in Python? > > I'm working on a patch where an C error handler using > PyUnicodeEncodeError_GetStart gets a different value than a Python > error handler accessing .start. The _GetStart/_GetEnd functions would > take the value from the exception object, and adjust it before returning > it. Is it worth the hassle? We can just port our existing error handlers, and I guess the few third-party error handlers written in C (if any) can bear the transition. Regards Antoine. From martin at v.loewis.de Fri Nov 4 08:39:54 2011 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Fri, 04 Nov 2011 08:39:54 +0100 Subject: [Python-Dev] Unicode exception indexing In-Reply-To: <20111104031832.2aed713d@pitrou.net> References: <20111103181442.Horde.6eRyY1NNcXdOsswC8zqHn-A@webmail.df.eu> <20111103202950.04be04a8@pitrou.net> <4EB30BD4.6030703@v.loewis.de> <20111104031832.2aed713d@pitrou.net> Message-ID: <4EB396CA.2060801@v.loewis.de> > Is it worth the hassle? We can just port our existing error handlers, > and I guess the few third-party error handlers written in C (if any) > can bear the transition. That was my question exactly. As the author of PEP 393, I was leaning towards full backwards compatibility, but you, Victor, and Guido tell me not to worry - so I won't :-) Regards, Martin From stefan_ml at behnel.de Fri Nov 4 09:35:56 2011 From: stefan_ml at behnel.de (Stefan Behnel) Date: Fri, 04 Nov 2011 09:35:56 +0100 Subject: [Python-Dev] Unicode exception indexing In-Reply-To: <4EB396CA.2060801@v.loewis.de> References: <20111103181442.Horde.6eRyY1NNcXdOsswC8zqHn-A@webmail.df.eu> <20111103202950.04be04a8@pitrou.net> <4EB30BD4.6030703@v.loewis.de> <20111104031832.2aed713d@pitrou.net> <4EB396CA.2060801@v.loewis.de> Message-ID: "Martin v. 
Löwis", 04.11.2011 08:39: >> Is it worth the hassle? We can just port our existing error handlers, >> and I guess the few third-party error handlers written in C (if any) >> can bear the transition. > > That was my question exactly. As the author of PEP 393, I was leaning > towards full backwards compatibility, but you, Victor, and Guido tell > me not to worry - so I won't :-) +1, FWIW. Stefan From merwok at netwok.org Fri Nov 4 18:05:17 2011 From: merwok at netwok.org (=?UTF-8?B?w4lyaWMgQXJhdWpv?=) Date: Fri, 04 Nov 2011 18:05:17 +0100 Subject: [Python-Dev] Code cleanups in stable branches? In-Reply-To: <2BA9EBA9-3750-4827-98E9-71002A1D9C2A@gmail.com> References: <4EAED974.602@netwok.org> <20111101163147.5ca8f2fe@resist> <2BA9EBA9-3750-4827-98E9-71002A1D9C2A@gmail.com> Message-ID: <4EB41B4D.9060101@netwok.org> Nick and Brett share the opinion that some code cleanups can be considered bugfixes, whereas MvL, Barry and Raymond defend that we never know what can get broken and it's not worth risking it. I have added a comment on #13283 (removal of two unused variable in locale.py) to restate this policy, but haven't asked for a revert; I did not comment on #10519 (Avoid unnecessary recursive function calls), as it can be considered a bugfix and Raymond took part in the discussion and thus implicitly approved the change. Regards From status at bugs.python.org Fri Nov 4 18:07:30 2011 From: status at bugs.python.org (Python tracker) Date: Fri, 4 Nov 2011 18:07:30 +0100 (CET) Subject: [Python-Dev] Summary of Python tracker Issues Message-ID: <20111104170730.2881C1CDA4@psf.upfronthosting.co.za> ACTIVITY SUMMARY (2011-10-28 - 2011-11-04) Python tracker at http://bugs.python.org/ To view or respond to any of the issues listed below, click on the issue. Do NOT respond to this message.
Issues counts and deltas: open 3118 (+10) closed 22006 (+45) total 25124 (+55) Open issues with patches: 1324 Issues opened (42) ================== #12119: test_distutils failure http://bugs.python.org/issue12119 reopened by eric.araujo #12342: characters with ord above 65535 fail to display in IDLE http://bugs.python.org/issue12342 reopened by flox #13290: get vars for object with __slots__ http://bugs.python.org/issue13290 opened by JBernardo #13292: missing versionadded for bytearray http://bugs.python.org/issue13292 opened by flox #13294: http.server - HEAD request when no resource is defined. http://bugs.python.org/issue13294 opened by karlcow #13297: xmlrpc.client could accept bytes for input and output http://bugs.python.org/issue13297 opened by flox #13298: Result type depends on order of operands for bytes and bytearr http://bugs.python.org/issue13298 opened by ncoghlan #13299: namedtuple row factory for sqlite3 http://bugs.python.org/issue13299 opened by ncoghlan #13300: IDLE 3.3 Restart Shell command fails http://bugs.python.org/issue13300 opened by ned.deily #13301: the script Tools/i18n/msgfmt.py allows arbitrary code executio http://bugs.python.org/issue13301 opened by izi #13302: Clarification needed in C API arg parsing http://bugs.python.org/issue13302 opened by sandro.tosi #13303: Sporadic importlib failures: FileNotFoundError on os.rename() http://bugs.python.org/issue13303 opened by haypo #13305: datetime.strftime("%Y") not consistent for years < 1000 http://bugs.python.org/issue13305 opened by flox #13306: Add diagnostic tools to importlib? 
http://bugs.python.org/issue13306 opened by ncoghlan #13309: test_time fails: time data 'LMT' does not match format '%Z' http://bugs.python.org/issue13309 opened by flox #13311: asyncore handle_read should call recv http://bugs.python.org/issue13311 opened by xdegaye #13312: test_time fails: strftime('%Y', y) for negative year http://bugs.python.org/issue13312 opened by flox #13313: test_time fails: tzset() do not change timezone http://bugs.python.org/issue13313 opened by flox #13314: ImportError ImportError: Import by filename, should be deferre http://bugs.python.org/issue13314 opened by Rob.Bairos #13316: build_py_2to3 does not convert when there was an error in the http://bugs.python.org/issue13316 opened by simohe #13317: building with 2to3 generates wrong import paths because build_ http://bugs.python.org/issue13317 opened by simohe #13319: IDLE: Menu accelerator conflict between Format and Options http://bugs.python.org/issue13319 opened by serwy #13320: _remove_visual_c_ref in distutils.msvc9compiler causes DLL loa http://bugs.python.org/issue13320 opened by Inverness #13321: fstat doesn't accept an object with "fileno" method http://bugs.python.org/issue13321 opened by anacrolix #13322: buffered read() and write() does not raise BlockingIOError http://bugs.python.org/issue13322 opened by sbt #13323: urllib2 does not correctly handle multiple www-authenticate he http://bugs.python.org/issue13323 opened by dfischer #13325: no address in the representation of asyncore dispatcher after http://bugs.python.org/issue13325 opened by xdegaye #13326: make clean failed on OpenBSD http://bugs.python.org/issue13326 opened by rpointel #13327: Update utime API to not require explicit None argument http://bugs.python.org/issue13327 opened by brian.curtin #13328: pdb shows code from wrong module http://bugs.python.org/issue13328 opened by yak #13329: Runs normal as console script but falls as CGI http://bugs.python.org/issue13329 opened by Nick.Rowan #13330: Attempt full 
test coverage of LocaleTextCalendar.formatweekday http://bugs.python.org/issue13330 opened by Sean.Fleming #13331: Packaging cannot install resource directory trees specified in http://bugs.python.org/issue13331 opened by vinay.sajip #13332: execfile fixer produces code that does not close the file http://bugs.python.org/issue13332 opened by smarnach #13333: utf-7 inconsistent with surrogates http://bugs.python.org/issue13333 opened by pitrou #13335: Service application hang in python25.dll http://bugs.python.org/issue13335 opened by chandra #13336: packaging.command.Command.copy_file doesn't implement preserve http://bugs.python.org/issue13336 opened by vinay.sajip #13337: IGNORE_CASE doctest option flag http://bugs.python.org/issue13337 opened by Gerald.Dalley #13338: Not all enumerations used in _Py_ANNOTATE_MEMORY_ORDER http://bugs.python.org/issue13338 opened by flub #13340: list.index does not accept None as start or stop http://bugs.python.org/issue13340 opened by Carl.Friedrich.Bolz #13341: Incorrect documentation for "u" PyArg_Parse format unit http://bugs.python.org/issue13341 opened by Ilya.Novoselov #13342: input() builtin always uses "strict" error handler http://bugs.python.org/issue13342 opened by stefanholek Most recent 15 issues with no replies (15) ========================================== #13341: Incorrect documentation for "u" PyArg_Parse format unit http://bugs.python.org/issue13341 #13340: list.index does not accept None as start or stop http://bugs.python.org/issue13340 #13338: Not all enumerations used in _Py_ANNOTATE_MEMORY_ORDER http://bugs.python.org/issue13338 #13336: packaging.command.Command.copy_file doesn't implement preserve http://bugs.python.org/issue13336 #13330: Attempt full test coverage of LocaleTextCalendar.formatweekday http://bugs.python.org/issue13330 #13325: no address in the representation of asyncore dispatcher after http://bugs.python.org/issue13325 #13319: IDLE: Menu accelerator conflict between Format and Options 
http://bugs.python.org/issue13319 #13313: test_time fails: tzset() do not change timezone http://bugs.python.org/issue13313 #13300: IDLE 3.3 Restart Shell command fails http://bugs.python.org/issue13300 #13297: xmlrpc.client could accept bytes for input and output http://bugs.python.org/issue13297 #13294: http.server - HEAD request when no resource is defined. http://bugs.python.org/issue13294 #13292: missing versionadded for bytearray http://bugs.python.org/issue13292 #13290: get vars for object with __slots__ http://bugs.python.org/issue13290 #13282: the table of contents in epub file is too long http://bugs.python.org/issue13282 #13277: tzinfo subclasses information http://bugs.python.org/issue13277 Most recent 15 issues waiting for review (15) ============================================= #13330: Attempt full test coverage of LocaleTextCalendar.formatweekday http://bugs.python.org/issue13330 #13328: pdb shows code from wrong module http://bugs.python.org/issue13328 #13327: Update utime API to not require explicit None argument http://bugs.python.org/issue13327 #13325: no address in the representation of asyncore dispatcher after http://bugs.python.org/issue13325 #13311: asyncore handle_read should call recv http://bugs.python.org/issue13311 #13305: datetime.strftime("%Y") not consistent for years < 1000 http://bugs.python.org/issue13305 #13303: Sporadic importlib failures: FileNotFoundError on os.rename() http://bugs.python.org/issue13303 #13301: the script Tools/i18n/msgfmt.py allows arbitrary code executio http://bugs.python.org/issue13301 #13300: IDLE 3.3 Restart Shell command fails http://bugs.python.org/issue13300 #13297: xmlrpc.client could accept bytes for input and output http://bugs.python.org/issue13297 #13281: Make robotparser.RobotFileParser ignore blank lines http://bugs.python.org/issue13281 #13256: Document and test new socket options http://bugs.python.org/issue13256 #13254: maildir.items() broken http://bugs.python.org/issue13254 #13249: 
argparse.ArgumentParser() lists arguments in the wrong order http://bugs.python.org/issue13249 #13247: os.path.abspath returns unicode paths as question marks http://bugs.python.org/issue13247 Top 10 most discussed issues (10) ================================= #12498: asyncore.dispatcher_with_send, disconnection problem + miss-co http://bugs.python.org/issue12498 16 msgs #13303: Sporadic importlib failures: FileNotFoundError on os.rename() http://bugs.python.org/issue13303 16 msgs #13326: make clean failed on OpenBSD http://bugs.python.org/issue13326 13 msgs #13305: datetime.strftime("%Y") not consistent for years < 1000 http://bugs.python.org/issue13305 11 msgs #12342: characters with ord above 65535 fail to display in IDLE http://bugs.python.org/issue12342 10 msgs #13322: buffered read() and write() does not raise BlockingIOError http://bugs.python.org/issue13322 10 msgs #12939: Add new io.FileIO using the native Windows API http://bugs.python.org/issue12939 8 msgs #13281: Make robotparser.RobotFileParser ignore blank lines http://bugs.python.org/issue13281 8 msgs #13309: test_time fails: time data 'LMT' does not match format '%Z' http://bugs.python.org/issue13309 7 msgs #13327: Update utime API to not require explicit None argument http://bugs.python.org/issue13327 7 msgs Issues closed (42) ================== #2892: improve cElementTree iterparse error handling http://bugs.python.org/issue2892 closed by python-dev #5661: asyncore should catch EPIPE while sending() and receiving() http://bugs.python.org/issue5661 closed by neologix #5875: test_distutils failing on OpenSUSE 10.3, Py3k http://bugs.python.org/issue5875 closed by eric.araujo #6434: buffer overflow in Zipfile when wrinting more than 2gig file http://bugs.python.org/issue6434 closed by nadeem.vawda #6655: etree iterative find[text] http://bugs.python.org/issue6655 closed by flox #7334: ElementTree: file locking in Jython 2.5 (OSError on Windows) http://bugs.python.org/issue7334 closed by python-dev 
#8047: Serialiser in ElementTree returns unicode strings in Py3k http://bugs.python.org/issue8047 closed by flox #8277: ElementTree won't parse comments http://bugs.python.org/issue8277 closed by flox #9897: multiprocessing problems http://bugs.python.org/issue9897 closed by neologix #10519: setobject.c no-op typo http://bugs.python.org/issue10519 closed by python-dev #10570: curses.tigetstr() returns bytes, but curses.tparm() expects a http://bugs.python.org/issue10570 closed by haypo #10817: urllib.request.urlretrieve never raises ContentTooShortError i http://bugs.python.org/issue10817 closed by orsenthil #12008: HtmlParser non-strict goes wrong with unquoted attributes http://bugs.python.org/issue12008 closed by ezio.melotti #12760: Add create mode to open() http://bugs.python.org/issue12760 closed by neologix #12797: io.FileIO and io.open should support openat http://bugs.python.org/issue12797 closed by rosslagerwall #13140: ThreadingMixIn.daemon_threads is not honored when parent is da http://bugs.python.org/issue13140 closed by python-dev #13147: Multiprocessing Pool.map_async() does not have an error_callba http://bugs.python.org/issue13147 closed by orsenthil #13218: test_ssl failures on Debian/Ubuntu http://bugs.python.org/issue13218 closed by barry #13246: Py_UCS4_strlen and friends needn't be public http://bugs.python.org/issue13246 closed by python-dev #13257: Move importlib over to PEP 3151 exceptions http://bugs.python.org/issue13257 closed by brett.cannon #13265: IDLE crashes when printing some unprintable characters. 
http://bugs.python.org/issue13265 closed by ned.deily #13274: heapq pure python version uses islice without guarding for neg http://bugs.python.org/issue13274 closed by rhettinger #13279: Add memcmp into unicode_compare for optimizing comparisons http://bugs.python.org/issue13279 closed by loewis #13280: argparse should use the new Formatter class http://bugs.python.org/issue13280 closed by rhettinger #13283: removal of two unused variable in locale.py http://bugs.python.org/issue13283 closed by python-dev #13287: urllib.request exposes too many names http://bugs.python.org/issue13287 closed by orsenthil #13288: SSL module doesn't allow access to cert issuer information http://bugs.python.org/issue13288 closed by pitrou #13289: a spell error in standard lib SocketServer's comment http://bugs.python.org/issue13289 closed by ezio.melotti #13291: latent NameError in xmlrpc package http://bugs.python.org/issue13291 closed by python-dev #13293: xmlrpc.client encode error http://bugs.python.org/issue13293 closed by flox #13295: Fix HTML produced by http.server http://bugs.python.org/issue13295 closed by ezio.melotti #13296: IDLE: __future__ flags don't clear on shell restart http://bugs.python.org/issue13296 closed by ned.deily #13304: test_site assumes that site.ENABLE_USER_SITE is True http://bugs.python.org/issue13304 closed by ned.deily #13307: bdist_rpm: INSTALLED_FILES does not use __pycache__ http://bugs.python.org/issue13307 closed by pitrou #13308: fix test_httpservers failures when run as root http://bugs.python.org/issue13308 closed by neologix #13310: asyncore handling of out-of-band data fails http://bugs.python.org/issue13310 closed by neologix #13315: tarfile extract fails on OS X system python due to chown of gi http://bugs.python.org/issue13315 closed by ned.deily #13318: Shelve second tier array subscript "[ ]" key creation doesn't http://bugs.python.org/issue13318 closed by ned.deily #13324: fcntl module doesn't support F_NOCACHE (OS X specific)
results http://bugs.python.org/issue13324 closed by neologix #13334: Erroneous Size check in _PyString_Resize http://bugs.python.org/issue13334 closed by amaury.forgeotdarc #13339: Missing semicolon at Modules/posixsubprocess.c:4511 http://bugs.python.org/issue13339 closed by rosslagerwall #670664: HTMLParser.py - more robust SCRIPT tag parsing http://bugs.python.org/issue670664 closed by ezio.melotti From victor.stinner at haypocalc.com Fri Nov 4 19:08:09 2011 From: victor.stinner at haypocalc.com (Victor Stinner) Date: Fri, 4 Nov 2011 19:08:09 +0100 Subject: [Python-Dev] [Python-checkins] cpython: Port code page codec to Unicode API. In-Reply-To: References: Message-ID: <201111041908.09455.victor.stinner@haypocalc.com> Le vendredi 4 novembre 2011 18:23:26, martin.v.loewis a écrit : > http://hg.python.org/cpython/rev/9191f804d376 > changeset: 73353:9191f804d376 > parent: 73351:2bec7c452b39 > user: Martin v. Löwis > date: Fri Nov 04 18:23:06 2011 +0100 > summary: > Port code page codec to Unicode API. Oh please, try to avoid introducing tabs when a file uses spaces. All C files are supposed to use an indentation of 4 spaces. Victor From eric at trueblade.com Fri Nov 4 21:21:46 2011 From: eric at trueblade.com (Eric V. Smith) Date: Fri, 04 Nov 2011 16:21:46 -0400 Subject: [Python-Dev] [Python-checkins] cpython (2.7): Inline the advisory text on how to use the shelve module. In-Reply-To: References: Message-ID: <4EB4495A.7050706@trueblade.com> On 11/4/2011 4:08 PM, raymond.hettinger wrote: > - .. note:: > + Like file objects, shelve objects should closed explicitly to assure > + that the peristent data is flushed to disk. Missing "be" there, I think: "should be closed". Eric. From ezio.melotti at gmail.com Fri Nov 4 21:25:00 2011 From: ezio.melotti at gmail.com (Ezio Melotti) Date: Fri, 04 Nov 2011 22:25:00 +0200 Subject: [Python-Dev] [Python-checkins] cpython (2.7): Inline the advisory text on how to use the shelve module. 
In-Reply-To: <4EB4495A.7050706@trueblade.com> References: <4EB4495A.7050706@trueblade.com> Message-ID: <4EB44A1C.4000200@gmail.com> On 04/11/2011 22.21, Eric V. Smith wrote: > On 11/4/2011 4:08 PM, raymond.hettinger wrote: > >> - .. note:: >> + Like file objects, shelve objects should closed explicitly to assure >> + that the peristent data is flushed to disk. > Missing "be" there, I think: "should be closed". > > Eric. And on the next line it should be 'persistent'. Best Regards, Ezio Melotti From eric at trueblade.com Fri Nov 4 21:34:30 2011 From: eric at trueblade.com (Eric V. Smith) Date: Fri, 04 Nov 2011 16:34:30 -0400 Subject: [Python-Dev] [Python-checkins] cpython (2.7): Inline the advisory text on how to use the shelve module. In-Reply-To: <4EB44A1C.4000200@gmail.com> References: <4EB4495A.7050706@trueblade.com> <4EB44A1C.4000200@gmail.com> Message-ID: <4EB44C56.6080908@trueblade.com> On 11/4/2011 4:25 PM, Ezio Melotti wrote: > On 04/11/2011 22.21, Eric V. Smith wrote: >> On 11/4/2011 4:08 PM, raymond.hettinger wrote: >> >>> - .. note:: >>> + Like file objects, shelve objects should closed explicitly to assure >>> + that the peristent data is flushed to disk. >> Missing "be" there, I think: "should be closed". >> >> Eric. > > And on the next line it should be 'persistent'. And I'd argue that it should be "ensure" instead of "assure": you "ensure" an event occurs, you "assure" a person that it does. But it's a nit I wouldn't normally object to. Ezio is just making me re-read the sentence! Eric. 
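The wording thread above concerns the shelve docs; the behaviour being documented is easy to demonstrate. A minimal sketch (my example, not part of the patch under review; the file names are made up) showing that closing a shelf explicitly ensures the persistent data reaches the disk:

```python
import os
import shelve
import tempfile

# Hypothetical scratch location, used only for this demonstration.
tmpdir = tempfile.mkdtemp()
path = os.path.join(tmpdir, "demo_shelf")

db = shelve.open(path)
db["answer"] = 42
db.close()  # closing explicitly flushes pending writes to disk

db = shelve.open(path)
assert db["answer"] == 42  # the value survived the close/reopen cycle
db.close()
```

Without the explicit close(), whether the last writes are actually on disk depends on the buffering of the underlying dbm module.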
From tjreedy at udel.edu Fri Nov 4 23:11:06 2011 From: tjreedy at udel.edu (Terry Reedy) Date: Fri, 04 Nov 2011 18:11:06 -0400 Subject: [Python-Dev] Unicode exception indexing In-Reply-To: <4EB396CA.2060801@v.loewis.de> References: <20111103181442.Horde.6eRyY1NNcXdOsswC8zqHn-A@webmail.df.eu> <20111103202950.04be04a8@pitrou.net> <4EB30BD4.6030703@v.loewis.de> <20111104031832.2aed713d@pitrou.net> <4EB396CA.2060801@v.loewis.de> Message-ID: On 11/4/2011 3:39 AM, "Martin v. Löwis" wrote: >> Is it worth the hassle? We can just port our existing error handlers, >> and I guess the few third-party error handlers written in C (if any) >> can bear the transition. > > That was my question exactly. As the author of PEP 393, I was leaning > towards full backwards compatibility, but you, Victor, and Guido tell > me not to worry - so I won't :-) While we need to keep the old api, I do not think we need to encourage its continued use by actively supporting it with new code. When 3.3 comes out, I think it should be socially OK to write C code only for 3.3+ by only using the new api. -- Terry Jan Reedy From pje at telecommunity.com Sat Nov 5 03:24:52 2011 From: pje at telecommunity.com (PJ Eby) Date: Fri, 4 Nov 2011 22:24:52 -0400 Subject: [Python-Dev] Packaging and binary distributions In-Reply-To: References: Message-ID: On Sun, Oct 30, 2011 at 6:52 PM, Paul Moore wrote: > On 30 October 2011 18:04, Ned Deily wrote: > > Has anyone analyzed the current packages on PyPI to see how many provide > > binary distributions and in what format? > > A very quick and dirty check: > > dmg: 5 > rpm: 12 > msi: 23 > dumb: 132 > wininst: 364 > egg: 2570 > > That's number of packages with binary distributions in that format. > It's hard to be sure about egg distributions, as many of these could > be pure-python (there's no way I know, from the PyPI metadata, to > check this). > FYI, the egg filename will contain a distutils platform identifier (e.g. 'win32', 'macosx', 'linux', etc.) 
after the 'py2.x' tag if the egg is platform-specific. Otherwise, it's pure Python. -------------- next part -------------- An HTML attachment was scrubbed... URL: From pje at telecommunity.com Sat Nov 5 03:30:14 2011 From: pje at telecommunity.com (PJ Eby) Date: Fri, 4 Nov 2011 22:30:14 -0400 Subject: [Python-Dev] Packaging and binary distributions In-Reply-To: References: Message-ID: Urgh. I guess that was already answered. Guess this'll teach me not to reply to a thread before waiting for ALL the messages to download over a low-bandwidth connection... (am on the road at the moment and catching up on stuff in spare cycles - sorry for the noise) On Fri, Nov 4, 2011 at 10:24 PM, PJ Eby wrote: > On Sun, Oct 30, 2011 at 6:52 PM, Paul Moore wrote: > >> On 30 October 2011 18:04, Ned Deily wrote: >> > Has anyone analyzed the current packages on PyPI to see how many provide >> > binary distributions and in what format? >> >> A very quick and dirty check: >> >> dmg: 5 >> rpm: 12 >> msi: 23 >> dumb: 132 >> wininst: 364 >> egg: 2570 >> >> That's number of packages with binary distributions in that format. >> It's hard to be sure about egg distributions, as many of these could >> be pure-python (there's no way I know, from the PyPI metadata, to >> check this). >> > > FYI, the egg filename will contain a distutils platform identifier (e.g. > 'win32', 'macosx', 'linux', etc.) after the 'py2.x' tag if the egg is > platform-specific. Otherwise, it's pure Python. > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Sat Nov 5 07:39:07 2011 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sat, 5 Nov 2011 16:39:07 +1000 Subject: [Python-Dev] Code cleanups in stable branches? 
In-Reply-To: <4EB41B4D.9060101@netwok.org> References: <4EAED974.602@netwok.org> <20111101163147.5ca8f2fe@resist> <2BA9EBA9-3750-4827-98E9-71002A1D9C2A@gmail.com> <4EB41B4D.9060101@netwok.org> Message-ID: On Sat, Nov 5, 2011 at 3:05 AM, Éric Araujo wrote: > Nick and Brett share the opinion that some code cleanups can be > considered bugfixes, whereas MvL, Barry and Raymond defend that we never > know what can get broken and it's not worth risking it. > > I have added a comment on #13283 (removal of two unused variable in > locale.py) to restate this policy, but haven't asked for a revert; I did > not comment on #10519 (Avoid unnecessary recursive function calls), as > it can be considered a bugfix and Raymond took part in the discussion > and thus implicitly approved the change. There's always going to be a grey area where it's a judgment call - my opinion is basically the same as what you state in your second paragraph. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From victor.stinner at haypocalc.com Sat Nov 5 17:21:09 2011 From: victor.stinner at haypocalc.com (Victor Stinner) Date: Sat, 5 Nov 2011 17:21:09 +0100 Subject: [Python-Dev] PyDict_Get/SetItem and dict subclasses Message-ID: <201111051721.09350.victor.stinner@haypocalc.com> Hi, PyDict_GetItem() and PyDict_SetItem() don't call __getitem__ and __setitem__ for dict subclasses. Is there a reason for that? I found this surprising behaviour when I replaced a dict by a custom dict checking the key type on set. But my __setitem__ was not called because the function using the dict was implemented in C (and I didn't know that ;-)). 
Victor From benjamin at python.org Sat Nov 5 17:34:47 2011 From: benjamin at python.org (Benjamin Peterson) Date: Sat, 5 Nov 2011 12:34:47 -0400 Subject: [Python-Dev] PyDict_Get/SetItem and dict subclasses In-Reply-To: <201111051721.09350.victor.stinner@haypocalc.com> References: <201111051721.09350.victor.stinner@haypocalc.com> Message-ID: 2011/11/5 Victor Stinner : > Hi, > > PyDict_GetItem() and PyDict_SetItem() don't call __getitem__ and __setitem__ > for dict subclasses. Is there a reason for that? It's the same reason that PyUnicode_Concat doesn't call __add__ on unicode subclasses or PyList_Append doesn't call "append" on list subclasses. It's a concrete API. Code which expects subclasses should use PyObject_GetItem and friends, the abstract API. -- Regards, Benjamin From merwok at netwok.org Sat Nov 5 17:34:59 2011 From: merwok at netwok.org (=?UTF-8?B?w4lyaWMgQXJhdWpv?=) Date: Sat, 05 Nov 2011 17:34:59 +0100 Subject: [Python-Dev] PyDict_Get/SetItem and dict subclasses In-Reply-To: <201111051721.09350.victor.stinner@haypocalc.com> References: <201111051721.09350.victor.stinner@haypocalc.com> Message-ID: <4EB565B3.6020600@netwok.org> Hi Victor, > PyDict_GetItem() and PyDict_SetItem() don't call __getitem__ and __setitem__ > for dict subclasses. Is there a reason for that? http://bugs.python.org/issue10977 "Currently, the concrete object C API bypasses any methods defined on subclasses of builtin types." Cheers From solipsis at pitrou.net Sat Nov 5 23:26:11 2011 From: solipsis at pitrou.net (Antoine Pitrou) Date: Sat, 5 Nov 2011 23:26:11 +0100 Subject: [Python-Dev] Why does _PyUnicode_FromId return a new reference? Message-ID: <20111105232611.4589e243@pitrou.net> Given it returns an eternal object, and it's almost always used temporarily (for attribute lookup, string joining, etc.), it would seem more practical for it to return a borrowed reference. Regards Antoine. 
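Coming back to the PyDict_Get/SetItem thread above: the concrete-API bypass Victor ran into is observable from pure Python, because C-implemented methods such as dict.update() insert items through the concrete dict API rather than through an overridden __setitem__. A small sketch (my example, not from the thread):

```python
class CheckedDict(dict):
    """A dict subclass that only accepts str keys via __setitem__."""
    def __setitem__(self, key, value):
        if not isinstance(key, str):
            raise TypeError("keys must be str")
        super().__setitem__(key, value)

d = CheckedDict()
d["ok"] = 1         # subscript assignment goes through __setitem__, so it is checked
d.update({2: "x"})  # dict.update() inserts at the C level: no TypeError raised
assert 2 in d       # the non-str key slipped in, bypassing the check
```

This is why code that must respect overridden methods should go through the abstract API (PyObject_SetItem and friends) on the C side, or subclass collections.UserDict instead of dict on the Python side.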
From martin at v.loewis.de Sun Nov 6 08:08:46 2011 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Sun, 06 Nov 2011 08:08:46 +0100 Subject: [Python-Dev] Why does _PyUnicode_FromId return a new reference? In-Reply-To: <20111105232611.4589e243@pitrou.net> References: <20111105232611.4589e243@pitrou.net> Message-ID: <4EB6327E.2060607@v.loewis.de> Am 05.11.2011 23:26, schrieb Antoine Pitrou: > > Given it returns an eternal object, and it's almost always used > temporarily (for attribute lookup, string joining, etc.), it would seem > more practical for it to return a borrowed reference. For purity reasons: all PyUnicode_From* functions return new references (most of them actually return new objects most of the time); having PyUnicode_FromId return a borrowed reference would break uniformity. I personally find it difficult to remember which functions return borrowed references, and wish there were fewer of them in the API (with PyArg_ParseTuple being the notable exception where borrowed references are a good idea). Now, practicality beats purity, so the real answer is: I just didn't consider that it might return a borrowed reference. Regards, Martin From martin at v.loewis.de Sun Nov 6 08:53:51 2011 From: martin at v.loewis.de (=?UTF-8?B?Ik1hcnRpbiB2LiBMw7Z3aXMi?=) Date: Sun, 06 Nov 2011 08:53:51 +0100 Subject: [Python-Dev] Packaging and binary distributions In-Reply-To: References: <4EAE7BB2.2050502@v.loewis.de> Message-ID: <4EB63D0F.9010505@v.loewis.de> > I agree in principle, but one thing you get with setup.cfg which seems harder to > achieve with MSI is the use of Python to do things at installation time. For > example, with setup.cfg hooks, you can use ctypes to make Windows API calls at > installation time to decide where to put things. While this same flexibility > exists in the MSI format (with custom actions and so forth) it's not as readily > accessible to someone who wants to use Python to code this type of installation > logic. 
Again, that's a bdist_msi implementation issue. It could generate custom actions that run the "proper" setup.cfg hooks (I presume - I have no idea what a setup.cfg hook actually is). Regards, Martin From martin at v.loewis.de Sun Nov 6 09:10:59 2011 From: martin at v.loewis.de (=?ISO-8859-15?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Sun, 06 Nov 2011 09:10:59 +0100 Subject: [Python-Dev] PEP 382 specification and implementation complete Message-ID: <4EB64113.3050108@v.loewis.de> I had announced this to import-sig already; now python-dev. I have now written an implementation of PEP 382, and fixed some details of the PEP in the process. The implementation is available at http://hg.python.org/features/pep-382-2/ With this PEP, a Python package P can consist of either a P/__init__.py directory and module, or multiple P.pyp directories, or both. Using a directory suffix resulted from the following requirements/observations: - people apparently prefer an approach where a directory has to be declared as a Python package (several people commented this after my PyConDE talk, expressing dislike of how Java packages are "unflagged" directories) - people also commented that any declaration should indicate that this is about Python, hence the choice of .pyp as the directory suffix. - in choosing between a file marker inside of the directory (e.g. zope-interfaces.pyp) and a directory suffix, the directory suffix wins for simplicity reasons. A file marker would have to have a name which wouldn't matter except that it needs to be unique - which is a confusing requirement that people likely would fail to meet. In the new form, the PEP was much easier to implement than in the first version (plus I understand import.c better now). This implementation now features .pyp directories, zipimporter support, documentation and test cases. As the next step, I'd like to advance this to ask for pronouncement. 
Regards, Martin From petri at digip.org Sun Nov 6 08:49:27 2011 From: petri at digip.org (Petri Lehtinen) Date: Sun, 6 Nov 2011 09:49:27 +0200 Subject: [Python-Dev] None as slice params to list.index() and tuple.index() Message-ID: <20111106074927.GA2146@ihaa> list.index() and tuple.index() don't currently accept None as slice parameters, as reported in http://bugs.python.org/issue13340. For example: >>> [1, 2, 3].index(2, None, None) Traceback (most recent call last): File "", line 1, in TypeError: slice indices must be integers or None or have an __index__ method The error message of list.index() and tuple.index() (as a consequence of using _PyEval_SliceIndex() for parsing arguments) indicates that None is a valid value. Currently, find(), rfind(), index(), rindex(), count(), startswith() and endswith() of str, bytes and bytearray accept None. Should list.index() and tuple.index() accept it, too? I'm bringing this up because I already committed a fix that enables the None arguments, but Raymond pointed out that maybe they shouldn't accept None at all. I also committed the fix to 3.2 and 2.7 based on discussion in http://bugs.python.org/issue11828#msg133532, in which Raymond says that for string functions, this can be considered a bug because the exception's error message makes a promise that None is accepted. Petri From ncoghlan at gmail.com Sun Nov 6 13:29:56 2011 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sun, 6 Nov 2011 22:29:56 +1000 Subject: [Python-Dev] PEP 382 specification and implementation complete In-Reply-To: <4EB64113.3050108@v.loewis.de> References: <4EB64113.3050108@v.loewis.de> Message-ID: On Sun, Nov 6, 2011 at 6:10 PM, "Martin v. Löwis" wrote: > I had announced this to import-sig already; now python-dev. > > I have now written an implementation of PEP 382, and fixed some details > of the PEP in the process. 
The implementation is available at > > http://hg.python.org/features/pep-382-2/ > > With this PEP, a Python package P can consist of either a P/__init__.py > directory and module, or multiple P.pyp directories, or both. Using a > directory suffix resulted from the following requirements/observations: > - people apparently prefer an approach where a directory has to be > declared as a Python package (several people commented this after > my PyConDE talk, expressing dislike of how Java packages are > "unflagged" directories) > - people also commented that any declaration should indicate that this > is about Python, hence the choice of .pyp as the directory suffix. > - in choosing between a file marker inside of the directory (e.g. > zope-interfaces.pyp) and a directory suffix, the directory suffix > wins for simplicity reasons. A file marker would have to have a > name which wouldn't matter except that it needs to be unique - which > is a confusing requirement that people likely would fail to meet. I finally got around to doing the search Barry and I promised for the previously raised objections to the directory suffix approach. They were in the "Rejected Alternatives" section of PJE's proposed redraft of PEP 382 (before he decided to create PEP 402 as a competing proposal): ============= * Another approach considered during revisions to this PEP was to simply rename package directories to add a suffix like ``.ns`` or ``-ns``, to indicate their namespaced nature. This would effect a small performance improvement for the initial import of a namespace package, avoid the need to create empty ``*.ns`` files, and even make it clearer that the directory involved is a namespace portion. The downsides, however, are also plentiful. If a package starts its life as a normal package, it must be renamed when it becomes a namespace, with the implied consequences for revision control tools.
Further, there is an immense body of existing code (including the distutils and many other packaging tools) that expect a package directory's name to be the same as the package name. And porting existing Python 2.x namespace packages to Python 3 would require widespread directory renaming as well. In short, this approach would require a vastly larger number of changes to both the standard library and third-party code, for a tiny potential performance improvement and a small increase in clarity. It was therefore rejected on "practicality vs. purity" grounds. ============= I think this was based on the assumption that *existing* namespace package approaches would break under the new scheme. Since that is not the case, I suspect those previous objections were overstated (and all packaging related code manages to cope well enough with modules where the file name doesn't match the package name) Cheers, Nick. -- Nick Coghlan?? |?? ncoghlan at gmail.com?? |?? Brisbane, Australia From le.mognon at gmail.com Sun Nov 6 16:36:48 2011 From: le.mognon at gmail.com (Martin Goudreau) Date: Sun, 6 Nov 2011 15:36:48 +0000 (UTC) Subject: [Python-Dev] genious hack in python References: <20110923155451.GC21909@iskra.aviel.ru> Message-ID: Write Oleg, If there is a better way to implemant this? sure. But the idea is still good. From pje at telecommunity.com Sun Nov 6 16:38:54 2011 From: pje at telecommunity.com (PJ Eby) Date: Sun, 6 Nov 2011 10:38:54 -0500 Subject: [Python-Dev] PEP 382 specification and implementation complete In-Reply-To: References: <4EB64113.3050108@v.loewis.de> Message-ID: On Sun, Nov 6, 2011 at 7:29 AM, Nick Coghlan wrote: > I think this was based on the assumption that *existing* namespace > package approaches would break under the new scheme. 
Since that is not > the case, I suspect those previous objections were overstated (and all > packaging related code manages to cope well enough with modules where > the file name doesn't match the package name) > I was actually referring to all the code that does things like split package names on '.' and then use os.path.join, or that makes assumptions which are the moral equivalent of that. PEP 402's version of namespace packages should break less of that sort of code than adding a directory name extension. -------------- next part -------------- An HTML attachment was scrubbed... URL: From solipsis at pitrou.net Sun Nov 6 17:39:02 2011 From: solipsis at pitrou.net (Antoine Pitrou) Date: Sun, 06 Nov 2011 17:39:02 +0100 Subject: [Python-Dev] PyDict_Get/SetItem and dict subclasses In-Reply-To: <4EB565B3.6020600@netwok.org> References: <201111051721.09350.victor.stinner@haypocalc.com> <4EB565B3.6020600@netwok.org> Message-ID: Le 05/11/2011 17:34, ?ric Araujo a ?crit : > Hi Victor, > >> PyDict_GetItem() and PyDict_SetItem() don't call __getitem__ and __setitem__ >> for dict subclasses. Is there a reason for that? > > http://bugs.python.org/issue10977 ?Currently, the concrete object C API > bypasses any methods defined on subclasses of builtin types.? I think that's the correct behaviour. If you expect to get an arbitrary mapping, just use the abstract API. You should use PyDict_GetItem when you know the object is exactly a dict (generally because you have created it yourself, or you know at least where and how it was created). Regards Antoine. From solipsis at pitrou.net Sun Nov 6 17:40:25 2011 From: solipsis at pitrou.net (Antoine Pitrou) Date: Sun, 06 Nov 2011 17:40:25 +0100 Subject: [Python-Dev] Why does _PyUnicode_FromId return a new reference? In-Reply-To: <4EB6327E.2060607@v.loewis.de> References: <20111105232611.4589e243@pitrou.net> <4EB6327E.2060607@v.loewis.de> Message-ID: Le 06/11/2011 08:08, "Martin v. 
Löwis" a écrit : > Am 05.11.2011 23:26, schrieb Antoine Pitrou: >> >> Given it returns an eternal object, and it's almost always used >> temporarily (for attribute lookup, string joining, etc.), it would seem >> more practical for it to return a borrowed reference. > > For purity reasons: all PyUnicode_From* functions return new references > (most of them return actually new objects most of the time); having > PyUnicode_FromId return a borrowed reference would break uniformity. > I personally find it difficult to remember which functions return borrowed > references, and wish there were fewer of them in the API I agree with this general sentiment. For PyUnicode_FromId, though, I think it makes sense to return a borrowed reference. Regards Antoine. From solipsis at pitrou.net Sun Nov 6 20:16:44 2011 From: solipsis at pitrou.net (Antoine Pitrou) Date: Sun, 6 Nov 2011 20:16:44 +0100 Subject: [Python-Dev] None as slice params to list.index() and tuple.index() References: <20111106074927.GA2146@ihaa> Message-ID: <20111106201644.03da0302@pitrou.net> On Sun, 6 Nov 2011 09:49:27 +0200 Petri Lehtinen wrote: > list.index() and tuple.index() don't currently accept None as slice > parameters, as reported in http://bugs.python.org/issue13340. For > example: > > >>> [1, 2, 3].index(2, None, None) > Traceback (most recent call last): > File "<stdin>", line 1, in <module> > TypeError: slice indices must be integers or None or have an __index__ method > > The error message of list.index() and tuple.index() (as a consequence > of using _PyEval_SliceIndex() for parsing arguments) indicates that > None is a valid value. > > Currently, find(), rfind(), index(), rindex(), count(), startswith() > and endswith() of str, bytes and bytearray accept None. Should > list.index() and tuple.index() accept it, too? Either that or fix the error message. I can't find much benefit in accepting None, that said (nor in refusing it). Regards Antoine.
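[Editorial aside: the asymmetry discussed in this thread — str.index() accepting None endpoints while list.index() and tuple.index() reject them — can be emulated in pure Python. This is a minimal illustrative sketch; the helper name is made up and is not anything proposed on the thread.]

```python
def index_allowing_none(seq, value, start=None, stop=None):
    """Sketch: give list.index()/tuple.index() the None handling that
    str.index() already has, by mapping None endpoints to their
    concrete defaults before delegating to the built-in method."""
    if start is None:
        start = 0
    if stop is None:
        stop = len(seq)
    return seq.index(value, start, stop)

# str.index() already accepts None endpoints:
print("abcabc".index("b", None, None))                # -> 1
# The wrapper gives lists and tuples the same behaviour:
print(index_allowing_none([1, 2, 3], 2, None, None))  # -> 1
```

Negative or out-of-range integer endpoints are still handled by the underlying seq.index() call, so only the None-to-default mapping is added here.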
From robertc at robertcollins.net Sun Nov 6 20:39:22 2011 From: robertc at robertcollins.net (Robert Collins) Date: Mon, 7 Nov 2011 08:39:22 +1300 Subject: [Python-Dev] None as slice params to list.index() and tuple.index() In-Reply-To: <20111106201644.03da0302@pitrou.net> References: <20111106074927.GA2146@ihaa> <20111106201644.03da0302@pitrou.net> Message-ID: On Mon, Nov 7, 2011 at 8:16 AM, Antoine Pitrou wrote: > Either that or fix the error message. I can't find much benefit in > accepting None, that said (nor in refusing it). Its very convenient when working with slices to not have to special case the end points. +1 on accepting None, FWIW. -Rob From benjamin at python.org Sun Nov 6 20:46:33 2011 From: benjamin at python.org (Benjamin Peterson) Date: Sun, 6 Nov 2011 14:46:33 -0500 Subject: [Python-Dev] [Python-checkins] cpython: Fix #13327. Remove the need for an explicit None as the second argument to In-Reply-To: References: Message-ID: 2011/11/6 brian.curtin : > - > - ? ?if (!PyArg_ParseTuple(args, "O&O:utime", > + ? ?PyObject* arg = NULL; You could set arg = Py_None here. > + > + ? ?if (!PyArg_ParseTuple(args, "O&|O:utime", > ? ? ? ? ? ? ? ? ? ? ? ? ? PyUnicode_FSConverter, &opath, &arg)) > ? ? ? ? return NULL; > ? ? path = PyBytes_AsString(opath); > - ? ?if (arg == Py_None) { > + ? ?if (!arg || (arg == Py_None)) { And then not have to change this. -- Regards, Benjamin From brian.curtin at gmail.com Sun Nov 6 20:54:11 2011 From: brian.curtin at gmail.com (Brian Curtin) Date: Sun, 6 Nov 2011 13:54:11 -0600 Subject: [Python-Dev] [Python-checkins] cpython: Fix #13327. Remove the need for an explicit None as the second argument to In-Reply-To: References: Message-ID: On Sun, Nov 6, 2011 at 13:46, Benjamin Peterson wrote: > 2011/11/6 brian.curtin : >> - >> - ? ?if (!PyArg_ParseTuple(args, "O&O:utime", >> + ? ?PyObject* arg = NULL; > > You could set arg = Py_None here. >> + >> + ? ?if (!PyArg_ParseTuple(args, "O&|O:utime", >> ? ? ? ? ? ? ? ? ? ? ? ? ? 
PyUnicode_FSConverter, &opath, &arg)) >> return NULL; >> path = PyBytes_AsString(opath); >> - if (arg == Py_None) { >> + if (!arg || (arg == Py_None)) { > > And then not have to change this. Ah, good point. I'm going to be making this same change to the other functions in the utime family, so I'll look at updating this one and change the others accordingly. From raymond.hettinger at gmail.com Sun Nov 6 20:56:41 2011 From: raymond.hettinger at gmail.com (Raymond Hettinger) Date: Sun, 6 Nov 2011 11:56:41 -0800 Subject: [Python-Dev] None as slice params to list.index() and tuple.index() In-Reply-To: <20111106074927.GA2146@ihaa> References: <20111106074927.GA2146@ihaa> Message-ID: <0C2EB7D1-F6A4-4240-B71E-CD391AA0BFDD@gmail.com> On Nov 6, 2011, at 12:49 AM, Petri Lehtinen wrote: > Currently, find(), rfind(), index(), rindex(), count(), startswith() > and endswith() of str, bytes and bytearray accept None. Should > list.index() and tuple.index() accept it, too? The string methods accept None as a historical artifact of being in string.py where optional arguments defaulted to None. That doesn't imply that you should change every other API that accepts a start argument. The list.index() API is ancient and stable. There has been little or no demonstrated need for its start argument to be None. Also, the list API does not exist in isolation. It shows up in strings, the sequence ABC, and every API that aspires to be list-like. Overall, I'm -1 on this change and find it to be gratuitous. We have *way* too many micro API changes of dubious benefit. Also, the change should not have been applied to Py2.7 and Py3.2. We don't backport API changes. That would just make Jython and IronPython become non-compliant in mid-stream. Raymond -------------- next part -------------- An HTML attachment was scrubbed...
URL: From martin at v.loewis.de Sun Nov 6 21:51:49 2011 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Sun, 06 Nov 2011 21:51:49 +0100 Subject: [Python-Dev] PEP 382 specification and implementation complete In-Reply-To: References: <4EB64113.3050108@v.loewis.de> Message-ID: <4EB6F365.20508@v.loewis.de> > I think this was based on the assumption that *existing* namespace > package approaches would break under the new scheme. Since that is not > the case, I suspect those previous objections were overstated (and all > packaging related code manages to cope well enough with modules where > the file name doesn't match the package name) I just drafted a message rebutting Phillip's objection, when I then found that you disagree as well :-) One elaboration: > The downsides, however, are also plentiful. If a package starts > its life as a normal package, it must be renamed when it becomes > a namespace, with the implied consequences for revision control > tools. If a package starts out as a regular (P/__init__.py) package (possibly with __init__.py being empty), then making it a namespace package actually requires no change at all - that single __init__.py could continue to exist. Developers may decide to still delete __init__.py and rename the package to .pyp (with VCS consequences), but that would be their choice deciding whether purity beats practicality in their case (where I think most developers actually would rename the directory once they can drop support for pre-3.3 releases). 
Regards, Martin From martin at v.loewis.de Sun Nov 6 22:00:05 2011 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Sun, 06 Nov 2011 22:00:05 +0100 Subject: [Python-Dev] PEP 382 specification and implementation complete In-Reply-To: References: <4EB64113.3050108@v.loewis.de> Message-ID: <4EB6F555.8010509@v.loewis.de> Am 06.11.2011 16:38, schrieb PJ Eby: > On Sun, Nov 6, 2011 at 7:29 AM, Nick Coghlan > wrote: > > I think this was based on the assumption that *existing* namespace > package approaches would break under the new scheme. Since that is not > the case, I suspect those previous objections were overstated (and all > packaging related code manages to cope well enough with modules where > the file name doesn't match the package name) > > > I was actually referring to all the code that does things like split > package names on '.' and then use os.path.join, or that makes > assumptions which are the moral equivalent of that. PEP 402's version > of namespace packages should break less of that sort of code than adding > a directory name extension. I think tools emulating the import mechanism will break no matter what change is made: the whole point of changing it is that it does something new that didn't work before. I think adjusting the tools will be straight-forward: they already need to recognize that an imported name could come either from a file (with various extensions), or a directory with special properties. So extending this should be "easy". Also, the number of tools that emulate the Python import algorithm is rather small. Tools that merely inspect __path__ after importing a package will continue to work just fine even under PEP 382. 
Regards, Martin From martin at v.loewis.de Sun Nov 6 22:26:34 2011 From: martin at v.loewis.de (=?UTF-8?B?Ik1hcnRpbiB2LiBMw7Z3aXMi?=) Date: Sun, 06 Nov 2011 22:26:34 +0100 Subject: [Python-Dev] PyDict_Get/SetItem and dict subclasses In-Reply-To: References: <201111051721.09350.victor.stinner@haypocalc.com> <4EB565B3.6020600@netwok.org> Message-ID: <4EB6FB8A.10000@v.loewis.de> Am 06.11.2011 17:39, schrieb Antoine Pitrou: > Le 05/11/2011 17:34, Éric Araujo a écrit : >> Hi Victor, >> >>> PyDict_GetItem() and PyDict_SetItem() don't call __getitem__ and >>> __setitem__ >>> for dict subclasses. Is there a reason for that? >> >> http://bugs.python.org/issue10977 “Currently, the concrete object C API >> bypasses any methods defined on subclasses of builtin types.” > > I think that's the correct behaviour. If you expect to get an arbitrary > mapping, just use the abstract API. You should use PyDict_GetItem when > you know the object is exactly a dict (generally because you have > created it yourself, or you know at least where and how it was created). If anybody has spare time at their hands, they should go through the code base and eliminate all uses of concrete API where it's not certain that the object really is of the base class (unless I missed that somebody already did, and that any remaining occurrences would be just minor bugs). Regards, Martin From raymond.hettinger at gmail.com Mon Nov 7 06:51:33 2011 From: raymond.hettinger at gmail.com (Raymond Hettinger) Date: Sun, 6 Nov 2011 21:51:33 -0800 Subject: [Python-Dev] PyDict_Get/SetItem and dict subclasses In-Reply-To: <4EB6FB8A.10000@v.loewis.de> References: <201111051721.09350.victor.stinner@haypocalc.com> <4EB565B3.6020600@netwok.org> <4EB6FB8A.10000@v.loewis.de> Message-ID: <1DCF546C-F0C4-4B26-A956-FD9947434545@gmail.com> On Nov 6, 2011, at 1:26 PM, Martin v.
Löwis wrote: > Am 06.11.2011 17:39, schrieb Antoine Pitrou: >> Le 05/11/2011 17:34, Éric Araujo a écrit : >>> Hi Victor, >>> >>>> PyDict_GetItem() and PyDict_SetItem() don't call __getitem__ and >>>> __setitem__ >>>> for dict subclasses. Is there a reason for that? >>> >>> http://bugs.python.org/issue10977 “Currently, the concrete object C API >>> bypasses any methods defined on subclasses of builtin types.” >> >> I think that's the correct behaviour. If you expect to get an arbitrary >> mapping, just use the abstract API. You should use PyDict_GetItem when >> you know the object is exactly a dict (generally because you have >> created it yourself, or you know at least where and how it was created). > > If anybody has spare time at their hands, they should go through the > code base and eliminate all uses of concrete API where it's not certain > that the object really is of the base class (unless I missed that > somebody already did, and that any remaining occurrences would be just > minor bugs). Also check uses of PyList_SetItem and other uses of the concrete API. Raymond From roundup-admin at psf.upfronthosting.co.za Mon Nov 7 09:55:13 2011 From: roundup-admin at psf.upfronthosting.co.za (Python tracker) Date: Mon, 07 Nov 2011 08:55:13 +0000 Subject: [Python-Dev] Failed issue tracker submission Message-ID: <20111107085513.767BF1DE4D@psf.upfronthosting.co.za> The node specified by the designator in the subject of your message ("13661") does not exist. Subject was: "[issue13661] [status=closed; resolution=fixed; stage=committed/rejected]" Mail Gateway Help ================= Incoming messages are examined for multiple parts: . In a multipart/mixed message or part, each subpart is extracted and examined. The text/plain subparts are assembled to form the textual body of the message, to be stored in the file associated with a "msg" class node. Any parts of other types are each stored in separate files and given "file" class nodes that are linked to the "msg" node. .
In a multipart/alternative message or part, we look for a text/plain subpart and ignore the other parts. . A message/rfc822 is treated similar to multipart/mixed (except for special handling of the first text part) if unpack_rfc822 is set in the mailgw config section. Summary ------- The "summary" property on message nodes is taken from the first non-quoting section in the message body. The message body is divided into sections by blank lines. Sections where the second and all subsequent lines begin with a ">" or "|" character are considered "quoting sections". The first line of the first non-quoting section becomes the summary of the message. Addresses --------- All of the addresses in the To: and Cc: headers of the incoming message are looked up among the user nodes, and the corresponding users are placed in the "recipients" property on the new "msg" node. The address in the From: header similarly determines the "author" property of the new "msg" node. The default handling for addresses that don't have corresponding users is to create new users with no passwords and a username equal to the address. (The web interface does not permit logins for users with no passwords.) If we prefer to reject mail from outside sources, we can simply register an auditor on the "user" class that prevents the creation of user nodes with no passwords. Actions ------- The subject line of the incoming message is examined to determine whether the message is an attempt to create a new item or to discuss an existing item. A designator enclosed in square brackets is sought as the first thing on the subject line (after skipping any "Fwd:" or "Re:" prefixes). If an item designator (class name and id number) is found there, the newly created "msg" node is added to the "messages" property for that item, and any new "file" nodes are added to the "files" property for the item.
If just an item class name is found there, we attempt to create a new item of that class with its "messages" property initialized to contain the new "msg" node and its "files" property initialized to contain any new "file" nodes. Triggers -------- Both cases may trigger detectors (in the first case we are calling the set() method to add the message to the item's spool; in the second case we are calling the create() method to create a new node). If an auditor raises an exception, the original message is bounced back to the sender with the explanatory message given in the exception. $Id: mailgw.py,v 1.196 2008-07-23 03:04:44 richard Exp $ From vinay_sajip at yahoo.co.uk Mon Nov 7 10:26:09 2011 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Mon, 7 Nov 2011 09:26:09 +0000 (UTC) Subject: [Python-Dev] Packaging and binary distributions References: <4EAE7BB2.2050502@v.loewis.de> <4EB63D0F.9010505@v.loewis.de> Message-ID: Martin v. Löwis v.loewis.de> writes: > > Again, that's a bdist_msi implementation issue. It could generate custom > actions that run the "proper" setup.cfg hooks (I presume - I have no > idea what a setup.cfg hook actually is). > I know that custom hooks are quite powerful, but my comment was about having the functionality in Python. Here's an example of a working hooks.py: import os import sys if os.name == 'nt': def get_personal_path(): from ctypes import (wintypes, windll, create_unicode_buffer, WinError, c_int, HRESULT) from ctypes.wintypes import HWND, HANDLE, DWORD, LPWSTR, MAX_PATH CSIDL_PERSONAL = 5 # We use an older API to remain XP-compatible.
SHGetFolderPath = windll.shell32.SHGetFolderPathW SHGetFolderPath.argtypes = [HWND, c_int, HANDLE, DWORD, LPWSTR] SHGetFolderPath.restype = DWORD path = create_unicode_buffer(MAX_PATH) hr = SHGetFolderPath(0, CSIDL_PERSONAL, 0, 0, path) if hr != 0: raise WinError() return path.value path = get_personal_path() del get_personal_path # Assume ~\Documents\WindowsPowerShell\Modules is in $PSModulePath, # which should be true in a default installation of PowerShell 2.0. psroot = os.path.join(path, 'WindowsPowerShell') psmodules = os.path.join(psroot, 'Modules') psscripts = os.path.join(psroot, 'Scripts') def setup(config): files = config['files'] if os.name != 'nt': files_to_add = 'virtualenvwrapper.sh = {scripts}' else: files_to_add = ('winfiles/ *.ps* = ' '{psmodules}/virtualenvwrapper\n' 'winfiles/ vew_profile.ps1 = {psscripts}') if 'resources' not in files: files['resources'] = files_to_add else: files['resources'] += '\n%s' % files_to_add def pre_install_data(cmd): if os.name == 'nt': cmd.categories['psmodules'] = psmodules cmd.categories['psscripts'] = psscripts cmd.categories['psroot'] = psroot which works with the following setup.cfg: [global] setup_hooks = hooks.setup [install_data] pre-hook.win32 = hooks.pre_install_data categories = cat1 = /path/one # comment cat2 = /path/two #[install_dist] #post-hook.win32 = hooks.post_install_dist [metadata] name = nemo version = 0.1 summary = New Environments Made, Obviously description = A tool to manage virtual environments download_url = UNKNOWN home_page = https://bitbucket.org/vinay.sajip/nemo author = Vinay Sajip author_email = vinay_sajip at yahoo.co.uk license = BSD classifier = Development Status :: 3 - Alpha Programming Language :: Python :: 3 Operating System :: OS Independent Intended Audience :: System Administrators Intended Audience :: Developers License :: OSI Approved :: BSD License requires_python = >= 3.3 [files] packages = nemo virtualenvwrapper scripts = nemo = nemo.main extra_files = hooks.py 
winfiles/* # Additional esources are added in hooks based on platform resources = nemo/scripts/** = {purelib} I'm curious to know how this level of flexibility can be achieved with the MSI format: I know one can code the equivalent logic in C (for example) in a custom action, but don't know how you can keep the logic in Python. Regards, Vinay Sajip From p.f.moore at gmail.com Mon Nov 7 11:22:27 2011 From: p.f.moore at gmail.com (Paul Moore) Date: Mon, 7 Nov 2011 10:22:27 +0000 Subject: [Python-Dev] Packaging and binary distributions In-Reply-To: References: <4EAE7BB2.2050502@v.loewis.de> <4EB63D0F.9010505@v.loewis.de> Message-ID: On 7 November 2011 09:26, Vinay Sajip wrote: > Martin v. L?wis v.loewis.de> writes: > >> >> Again, that's a bdist_msi implementation issue. It could generate custom >> actions that run the "proper" setup.cfg hooks (I presume - I have no >> idea what a setup.cfg hook actually is). >> > > I know that custom hooks are quite powerful, but my comment was about having > the functionality in Python. Here's an example of a working hooks.py: It seems to me that there are two separate things going on in this sample. It's not 100% clear that they are separate, at first glance, as packaging currently doesn't make a strong distinction between things going on at "build" time, and things going on at "install" time. This is essentially because the idea of binary installs is not fundamental to the design. (Thanks for sharing this example, btw, I hadn't really spotted this issue until I saw the code here). Suppose you have two people involved - the "packager" who uses the source code to create a binary distribution (MSI, wininst, zip, doesn't matter - conceptually, it's a set of "final" files that need no further processing and can just be put in the correct locations on the target PC) and the "end user" who takes that binary distribution and installs it on his PC. 
Some of the hook code is designed to run at "build" time (the stuff that adds the right resource files). This can be run on the packager's machine quite happily, as long as the packager is using the same OS as the end user. However, other parts of the hook code (the stuff that defines the custom categories) must run on the end user's PC, as it detects specific aspects of the target PC configuration. I think Martin is only really interested in the second type of hook here. I know that I am, insofar as they are the only type I would expect to need to support if I were building a new binary distribution format. But without the two types being more clearly separated, it's not obvious that it's possible to "just support one type" in quite that sense... Paul. PS There are subtleties here, of course - byte-compiling .py files is probably an install-time action rather than a build-time one, so my "no further processing required" comment isn't 100% true. But the basic principle certainly applies. From vinay_sajip at yahoo.co.uk Mon Nov 7 11:38:23 2011 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Mon, 7 Nov 2011 10:38:23 +0000 (GMT) Subject: [Python-Dev] Packaging and binary distributions References: <4EAE7BB2.2050502@v.loewis.de> <4EB63D0F.9010505@v.loewis.de> Message-ID: <1320662303.42494.YahooMailNeo@web25805.mail.ukl.yahoo.com> > > It seems to me that there are two separate things going on in this > sample. It's not 100% clear that they are separate, at first glance, > as packaging currently doesn't make a strong distinction between > things going on at "build" time, and things going on at > "install" > time. This is essentially because the idea of binary installs is not > fundamental to the design. (Thanks for sharing this example, btw, I > hadn't really spotted this issue until I saw the code here). 
> > Suppose you have two people involved - the "packager" who uses the > source code to create a binary distribution (MSI, wininst, zip, > doesn't matter - conceptually, it's a set of "final" files > that need > no further processing and can just be put in the correct locations on > the target PC) and the "end user" who takes that binary distribution > and installs it on his PC. > > Some of the hook code is designed to run at "build" time (the stuff > that adds the right resource files). This can be run on the packager's > machine quite happily, as long as the packager is using the same OS as > the end user. However, other parts of the hook code (the stuff that > defines the custom categories) must run on the end user's PC, as it > detects specific aspects of the target PC configuration. In this case at least, the code *all* runs at installation time: the distributed package contains all files for all platforms, and at installation time the choice is made as to which files to actually install from the installation directory to the target directories. While this might not be ideal for all packagers, the only downside of having all files for all platforms available in a single distribution is disk space - an increasingly cheap commodity. OTOH there is some advantage in having a single package which would be usable on all platforms (supported by the package being distributed), albeit perhaps for a particular version of Python. > I think Martin is only really interested in the second type of hook > here. I know that I am, insofar as they are the only type I would > expect to need to support if I were building a new binary distribution > format. But without the two types being more clearly separated, it's > not obvious that it's possible to "just support one type" in quite > that sense... In terms of the flexibility required, the code to determine the "personal path" is being run at installation time, to determine the target folder for the PowerShell scripts. 
It's this kind of flexibility (by which I mean Python coded logic) that I don't see how to easily provide in the MSI format, short of recoding in e.g. C, in a custom action DLL or EXE. (This latter approach is what I've used in the PEP 397 launcher MSI.) Regards, Vinay Sajip From merwok at netwok.org Mon Nov 7 18:24:25 2011 From: merwok at netwok.org (=?UTF-8?B?w4lyaWMgQXJhdWpv?=) Date: Mon, 07 Nov 2011 18:24:25 +0100 Subject: [Python-Dev] [Python-checkins] cpython: quote the type name for improved readability In-Reply-To: References: Message-ID: <4EB81449.6040004@netwok.org> Hi, > http://hg.python.org/cpython/rev/bbc929bc2224 > user: Philip Jenvey > summary: > quote the type name for improved readability > > files: > Python/bltinmodule.c | 2 +- > 1 files changed, 1 insertions(+), 1 deletions(-) > > > diff --git a/Python/bltinmodule.c b/Python/bltinmodule.c > --- a/Python/bltinmodule.c > +++ b/Python/bltinmodule.c > @@ -1121,7 +1121,7 @@ > return NULL; > if (!PyIter_Check(it)) { > PyErr_Format(PyExc_TypeError, > - "%.200s object is not an iterator", > + "'%.200s' object is not an iterator", > it->ob_type->tp_name); > return NULL; > } What about displaying the repr of the type object? 
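[Editorial aside: the effect of the change discussed here — quoting the type name in the "object is not an iterator" message — is observable from pure Python on interpreters that include it; older builds may print the unquoted form. A small illustrative probe, with a made-up class name:]

```python
class Widget:
    pass  # defines no __next__, so Widget instances are not iterators

try:
    next(Widget())
except TypeError as exc:
    # With the committed change the type name is quoted:
    # 'Widget' object is not an iterator
    print(exc)

# Éric's suggested alternative would display the repr of the type instead,
# which for a class looks like:
print(repr(type(Widget())))  # e.g. <class '__main__.Widget'>
```

As Stefan notes, either form appears verbatim in doctests, which is why changing such messages is not free.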
From stefan_ml at behnel.de Mon Nov 7 18:46:25 2011 From: stefan_ml at behnel.de (Stefan Behnel) Date: Mon, 07 Nov 2011 18:46:25 +0100 Subject: [Python-Dev] [Python-checkins] cpython: quote the type name for improved readability In-Reply-To: <4EB81449.6040004@netwok.org> References: <4EB81449.6040004@netwok.org> Message-ID: ?ric Araujo, 07.11.2011 18:24: >> http://hg.python.org/cpython/rev/bbc929bc2224 > >> user: Philip Jenvey >> summary: >> quote the type name for improved readability >> >> files: >> Python/bltinmodule.c | 2 +- >> 1 files changed, 1 insertions(+), 1 deletions(-) >> >> >> diff --git a/Python/bltinmodule.c b/Python/bltinmodule.c >> --- a/Python/bltinmodule.c >> +++ b/Python/bltinmodule.c >> @@ -1121,7 +1121,7 @@ >> return NULL; >> if (!PyIter_Check(it)) { >> PyErr_Format(PyExc_TypeError, >> - "%.200s object is not an iterator", >> + "'%.200s' object is not an iterator", >> it->ob_type->tp_name); >> return NULL; >> } > > What about displaying the repr of the type object? While I agree that this is more readable, quoted type names are rather rare if not pretty much unprecedented in core exception messages, so this is definitely not the only place that would need changing. However, note that arbitrarily changing exception messages always breaks someone's doctests, so my personal preference would be to keep it as it was. Stefan From michael at walle.cc Mon Nov 7 23:37:46 2011 From: michael at walle.cc (Michael Walle) Date: Mon, 7 Nov 2011 23:37:46 +0100 Subject: [Python-Dev] ctypes: alignment of (simple) types Message-ID: <201111072337.46438.michael@walle.cc> Hi all, gcc allows to set alignments for typedefs like: typedef double MyDouble __attribute__((__aligned__(8))); Now if i use this new type within a structure: struct s { char c; MyDouble d; }; The following holds: sizeof(struct s) == 16 and offsetof(struct s, d) == 8. 
ctypes doesn't seem to support this, although I saw a 'padded' function on the ctypes-users mailing list which dynamically inserts padding fields. I would consider this more or less a hack :) What do you think about adding a special attribute '_align_' which, if set for a data type, overrides the hardcoded align property of that type? -- Michael From amauryfa at gmail.com Tue Nov 8 00:39:27 2011 From: amauryfa at gmail.com (Amaury Forgeot d'Arc) Date: Tue, 8 Nov 2011 00:39:27 +0100 Subject: [Python-Dev] ctypes: alignment of (simple) types In-Reply-To: <201111072337.46438.michael@walle.cc> References: <201111072337.46438.michael@walle.cc> Message-ID: Hi, 2011/11/7 Michael Walle > What do you think about adding a special attribute '_align_' which, if set > for > a data type overrides the hardcoded align property of that type? > It's a good idea. But you should also consider the other feature requests around custom alignments. IMO a good thing would be a way to specify a function that computes sizes and alignments, that one can override to implement specific compiler features. -- Amaury Forgeot d'Arc From guido at python.org Tue Nov 8 09:41:42 2011 From: guido at python.org (Guido van Rossum) Date: Tue, 8 Nov 2011 00:41:42 -0800 Subject: [Python-Dev] None as slice params to list.index() and tuple.index() In-Reply-To: <0C2EB7D1-F6A4-4240-B71E-CD391AA0BFDD@gmail.com> References: <20111106074927.GA2146@ihaa> <0C2EB7D1-F6A4-4240-B71E-CD391AA0BFDD@gmail.com> Message-ID: Hm. I agree with Raymond that this should be treated as a feature request and not "fixed" in 2.7 / 3.2. (However the mention of 'find' in the error message for 'index' is a bug and should be fixed.) As for the feature request, I think that allowing None in more places is more regular and consistent across interfaces.
I note that the slice() object also represents "missing" or "default" values as None, so it is not just a carryover from the old string.py. So, +1 on the feature for 3.3; -1 on the "fix" in 3.2 or 2.7. --Guido On Sun, Nov 6, 2011 at 11:56 AM, Raymond Hettinger wrote: > > On Nov 6, 2011, at 12:49 AM, Petri Lehtinen wrote: > > Currently, find(), rfind(), index(), rindex(), count(), startswith() > and endswith() of str, bytes and bytearray accept None. Should > list.index() and tuple.index() accept it, too? > > The string methods accept None as a historical artifact > of being in string.py where optional arguments defaulted to None. > That doesn't imply that you should change every other API that > accepts a start argument. > The list.index() API is ancient and stable. There has been little or > no demonstrated need for its start argument to be None. > Also, the list API does not exist in isolation. It shows up in > strings, the sequence ABC, and every API that aspires to > be list-like. > Overall, I'm -1 on this change and find it to be gratuitous. > We have *way* too many micro API changes of dubious benefit. > Also, the change should not have been applied to Py2.7 and Py3.2. > We don't backport API changes. That would just make Jython > and IronPython become non-compliant in mid-stream. > > Raymond > > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > http://mail.python.org/mailman/options/python-dev/guido%40python.org > > -- --Guido van Rossum (python.org/~guido) From vinay_sajip at yahoo.co.uk Tue Nov 8 11:02:14 2011 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Tue, 8 Nov 2011 10:02:14 +0000 (UTC) Subject: [Python-Dev] Regression test coupling Message-ID: I ran into an error today related to the use of support.TESTFN throughout the regression test suite.
In my Windows tests, test_base64 passed, but left a file (named by support.TESTFN) lying around: 'test_base64' left behind file '@test_3532_tmp' Much later in the run, a set of unrelated tests started failing, all apparently because of the left-behind file - for example, test_mailbox.setUp failed while trying to make a directory named by support.TESTFN: File "c:\Users\Vinay\Projects\pythonv\lib\mailbox.py", line 270, in __init__ os.mkdir(self._path, 0o700) PermissionError: [Error 5] Access is denied: 'c:\\Users\\Vinay\\Projects\\pythonv\\build\\test_python_3532\\@test_3532_tmp' Sorry if this has come up before, but why do we couple the tests in this way, so that failure to clean up in one test causes drive-by failures in other, unrelated tests? Regards, Vinay Sajip From ncoghlan at gmail.com Tue Nov 8 13:37:06 2011 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 8 Nov 2011 22:37:06 +1000 Subject: [Python-Dev] Regression test coupling In-Reply-To: References: Message-ID: On Tue, Nov 8, 2011 at 8:02 PM, Vinay Sajip wrote: > Sorry if this has come up before, but why do we couple the tests in this way, so > that failure to clean up in one test causes drive-by failures in other, > unrelated tests? Personally, I just use the tempfile module in tests that I write (historically via test.script_helper.temp_dir, these days via the public tempfile.TemporaryDirectory API). Given the other things regrtest cleans up between tests, I'm not sure why it doesn't also kill TESTFN, though. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From vinay_sajip at yahoo.co.uk Tue Nov 8 14:05:03 2011 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Tue, 8 Nov 2011 13:05:03 +0000 (UTC) Subject: [Python-Dev] Regression test coupling References: Message-ID: Nick Coghlan gmail.com> writes: > Given the other things regrtest cleans up between tests, I'm not sure > why it doesn't also kill TESTFN, though.
Well, there's a function regrtest.cleanup_test_droppings which aims to do just this, and it's called in a finally: block from regrtest.runtest. It's supposed to print a message if removal fails, and appears to be what prints the "left behind" message. In my case no "couldn't remove" message was printed, and yet the file was there later - whether it wasn't properly removed, or whether it was created in an intervening test, is not easy to determine :-( Regards, Vinay Sajip From jcea at jcea.es Tue Nov 8 16:49:43 2011 From: jcea at jcea.es (Jesus Cea) Date: Tue, 08 Nov 2011 16:49:43 +0100 Subject: [Python-Dev] Merging 3.2 to 3.3 is messy because "Misc/NEWS" Message-ID: <4EB94F97.6020002@jcea.es> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 When merging from 3.2 to 3.3 "Misc/NEWS" always conflicts (lately). Instead of copy&pasting the text manually between versions, does anybody have a better workflow? Since any change applied to 3.2 should be applied to 3.3 too (except very few cases), Mercurial merge machinery should be able to merge both versions except when the changes are very near the version headers. I haven't checked, but I guess that the problem is that the different issues have been added in different positions in the file, so both branches are diverging, instead of diverging only in the Python versions referenced. If that is the case, could it be acceptable to reorganize the 3.3 version to ease future merges? Would that solve it? Ideas? - -- Jesus Cea Avion _/_/ _/_/_/ _/_/_/ jcea at jcea.es - http://www.jcea.es/ _/_/ _/_/ _/_/ _/_/ _/_/ jabber / xmpp:jcea at jabber.org _/_/ _/_/ _/_/_/_/_/ .
_/_/ _/_/ _/_/ _/_/ _/_/ "Things are not so easy" _/_/ _/_/ _/_/ _/_/ _/_/ _/_/ "My name is Dump, Core Dump" _/_/_/ _/_/_/ _/_/ _/_/ "El amor es poner tu felicidad en la felicidad de otro" - Leibniz -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.10 (GNU/Linux) Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/ iQCVAwUBTrlPl5lgi5GaxT1NAQKVggP/bn6vUhQlHjEYg+pFEInnVXYSudamPafP m6bgX6hKS/MtaixVJGlRnAwJ6UQ/nftjmVn80Yd7CsxnsyPApUZVgzkaLMLOhh++ H08gwxgoh1skciYmtyjsy4Vi4xi/4tehu2IVc73SVXkLVbnkc4z1c2Xmsu4TZ2ai r2ncgxRkHgw= =pCHL -----END PGP SIGNATURE----- From s.brunthaler at uci.edu Tue Nov 8 17:45:25 2011 From: s.brunthaler at uci.edu (stefan brunthaler) Date: Tue, 8 Nov 2011 08:45:25 -0800 Subject: [Python-Dev] Python 3 optimizations, continued, continued again... Message-ID: Hi guys, while there is at least some interest in incorporating my optimizations, response has still been low. I figure that the changes are probably too much for a single big incorporation step. On a recent flight, I thought about cutting it down to make it more easily digestible. The basic idea is to remove the optimized interpreter dispatch loop and advanced instruction format and use the existing ones. Currently (rev. ca8a0dfb2176), opcode.h uses 109 of potentially available 255 instructions using the current instruction format. Hence, up to 149 instruction opcodes could be given to optimized instruction derivatives. Consequently, a possible change would require changing: a) opcode.h to add new instruction opcodes, b) ceval.c to include the new instruction opcodes in PyEval_EvalFrameEx, c) abstract.c, object.c (possibly other files) to add the quickening/rewriting function calls. If this is more interesting, I could start evaluating which instruction opcodes should be allocated to which derivatives to get the biggest benefit.
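The quickening/rewriting idea referred to here can be illustrated with a toy interpreter loop. This is purely illustrative Python (the real work happens in ceval.c's dispatch loop, and the opcode names below are invented):

```python
# Toy model of quickening: a generic instruction rewrites itself into a
# specialized derivative the first time it executes, so subsequent
# executions of the same code take the fast path directly.
GENERIC_ADD, INT_ADD = 0, 1    # imagine INT_ADD occupying a spare opcode

def run(code, stack):
    pc = 0
    while pc < len(code):
        op = code[pc]
        if op == GENERIC_ADD:
            b, a = stack.pop(), stack.pop()
            if isinstance(a, int) and isinstance(b, int):
                code[pc] = INT_ADD        # quicken: rewrite in place
            stack.append(a + b)
        elif op == INT_ADD:               # specialized derivative
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        pc += 1
    return stack

code = [GENERIC_ADD, GENERIC_ADD]
print(run(code, [1, 2, 3]))   # [6]
print(code)                   # [1, 1] -- both sites were quickened
```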
This is a lot easier to implement (because I can re-use the existing instruction implementations) and can easily be made to be conditionally compile-able, similar to the computed-gotos option. Since the changes are minimal it is also simpler to understand and deal with for everybody else, too. On the "downside", however, not all optimizations are possible and/or make sense in the given limit of instructions (no data-object inlining and no reference-count elimination.) How does that sound? Have a nice day, --stefan From benjamin at python.org Tue Nov 8 19:36:56 2011 From: benjamin at python.org (Benjamin Peterson) Date: Tue, 8 Nov 2011 13:36:56 -0500 Subject: [Python-Dev] Python 3 optimizations, continued, continued again... In-Reply-To: References: Message-ID: 2011/11/8 stefan brunthaler : > How does that sound? I think I can hear real patches and benchmarks most clearly. -- Regards, Benjamin From barry at python.org Tue Nov 8 19:50:25 2011 From: barry at python.org (Barry Warsaw) Date: Tue, 8 Nov 2011 13:50:25 -0500 Subject: [Python-Dev] Merging 3.2 to 3.3 is messy because "Misc/NEWS" In-Reply-To: <4EB94F97.6020002@jcea.es> References: <4EB94F97.6020002@jcea.es> Message-ID: <20111108135025.632b7a5a@limelight.wooz.org> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256 On Nov 08, 2011, at 04:49 PM, Jesus Cea wrote: >When merging from 3.2 to 3.3 "Misc/NEWS" always conflicts (lately). >Instead of copy&paste the test manually between versions, has anybody >a better workflow?. Does Mercurial support custom merge plugins? I know for example, Bazaar has a custom merge plugin for dealing with debian/changelogs which has greatly reduced conflicts when doing similar operations there. Seems like that would be the best way to go for Misc/NEWS. Barring that, I always keep a copy of the unconflicted Misc/NEWS around to consult when manually resolving the conflicts. 
- -Barry -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.11 (GNU/Linux) iQIcBAEBCAAGBQJOuXnxAAoJEBJutWOnSwa/9iAP/03KIeGBA/3qTyyfaEhen2o/ dYXvTrabjCI9S0tcbYyU3oUpUtGX8kxjOaIklwKBxpF82CmAES96jJh+gIf/5qUX K3aGQe3Xhr7yezFA5fWiE52rUqtFenPDTz9R+06xzDvoBM/MkEZRcl3KDZloYeI+ wm+h13x7mDp2MEJwRbYGFBe6ydW3phraMbNdC6zu2CbXQ8ttcKm3sohbVL4IEzHb rKVMJFiub1fu270UCdRHClzeGovqytbjFmiFTM91qNRR/xi5Wky/9RaKT/ar4w+r tr19ZCRt+9TtdluW1iJ3I8C+ygzKQH+d6vgpdyxfzLoq8RVnIVpVxWLQZz3efm1I yvBUtsxNsZeEEnvtm6qgBWB+KRzMVqmZLxf/kJgSY1+ybWdrbV6g+cWk5y0UMZNQ hlEE44S6/wCKl9hjUgFufw1ox4bXJpYgyc10cIrIwnL1jjoxIqrTV06GtwqI6JJO O1/1UJQ/LfM8P79deZeflYAdxarUEewOPqruWSBFt1Hv+L10DF1N2IebDluctLzX KD/WJA8smKiWwzo0TAHhniQL8Ckxr/7SJNeRq0q+LrypVz86okZExFWOjHm6zWih cpcFCJ+0tX6ajS++9nzOTJGGQ166fMC86HTLnP7565Gg57G/JoeLRCTnLkODdb6X NKuRRPZIK0daJVEmiUpE =s7Wv -----END PGP SIGNATURE----- From tjreedy at udel.edu Tue Nov 8 19:51:44 2011 From: tjreedy at udel.edu (Terry Reedy) Date: Tue, 08 Nov 2011 13:51:44 -0500 Subject: [Python-Dev] Merging 3.2 to 3.3 is messy because "Misc/NEWS" In-Reply-To: <4EB94F97.6020002@jcea.es> References: <4EB94F97.6020002@jcea.es> Message-ID: On 11/8/2011 10:49 AM, Jesus Cea wrote: > -----BEGIN PGP SIGNED MESSAGE----- > Hash: SHA1 > > When merging from 3.2 to 3.3 "Misc/NEWS" always conflicts (lately). > Instead of copy&paste the test manually between versions, has anybody > a better workflow?. If a bug is fixed in 3.2.latest, then it will not be new in 3.3.0, so perhaps it should not be added there. NEWS could just refer back to previous sections. Then 3.3.0 News would only be new features and the occasional ambiguous item not fixed before. 
-- Terry Jan Reedy From martin at v.loewis.de Tue Nov 8 20:34:01 2011 From: martin at v.loewis.de (=?UTF-8?B?Ik1hcnRpbiB2LiBMw7Z3aXMi?=) Date: Tue, 08 Nov 2011 20:34:01 +0100 Subject: [Python-Dev] Packaging and binary distributions In-Reply-To: References: <4EAE7BB2.2050502@v.loewis.de> <4EB63D0F.9010505@v.loewis.de> Message-ID: <4EB98429.1080704@v.loewis.de> > I'm curious to know how this level of flexibility can be achieved with the > MSI format: I know one can code the equivalent logic in C (for example) in > a custom action, but don't know how you can keep the logic in Python. I'd provide a fixed custom action which gets hold of the installer session, and then runs a Python script. IIUC, it should be possible to map categories to entries in the Directory table, so that the Python script would actually configure the installer process before the installer actually starts installing the files. The DLL could be part of packaging, similar to how the bdist_wininst executable is part of distutils. Regards, Martin From cs at zip.com.au Tue Nov 8 21:19:10 2011 From: cs at zip.com.au (Cameron Simpson) Date: Wed, 9 Nov 2011 07:19:10 +1100 Subject: [Python-Dev] Merging 3.2 to 3.3 is messy because "Misc/NEWS" In-Reply-To: <20111108135025.632b7a5a@limelight.wooz.org> References: <20111108135025.632b7a5a@limelight.wooz.org> Message-ID: <20111108201910.GA469@cskk.homeip.net> On 08Nov2011 13:50, Barry Warsaw wrote: | On Nov 08, 2011, at 04:49 PM, Jesus Cea wrote: | >When merging from 3.2 to 3.3 "Misc/NEWS" always conflicts (lately). | >Instead of copy&paste the test manually between versions, has anybody | >a better workflow?. | | Does Mercurial support custom merge plugins? I know for example, Bazaar has a | custom merge plugin for dealing with debian/changelogs which has greatly | reduced conflicts when doing similar operations there. Yes it does. I use this facility to merge timesheet files maintained on separate hosts (home machine, travelling laptop) in my hgbox script.
The hgrc says: [merge-patterns] timesheets-cameron/2* = merge-dumb dailylog-cameron/2*/[A-Z]* = merge-dumb so it is easy to specify a particular tool to merge particular files. Cheers, -- Cameron Simpson DoD#743 http://www.cskk.ezoshosting.com/cs/ It is a tale told by an idiot, full of sound and fury, signifying nothing. - William Shakespeare From cs at zip.com.au Tue Nov 8 21:20:45 2011 From: cs at zip.com.au (Cameron Simpson) Date: Wed, 9 Nov 2011 07:20:45 +1100 Subject: [Python-Dev] Merging 3.2 to 3.3 is messy because "Misc/NEWS" In-Reply-To: <20111108201910.GA469@cskk.homeip.net> References: <20111108201910.GA469@cskk.homeip.net> Message-ID: <20111108202045.GA1326@cskk.homeip.net> On 09Nov2011 07:19, I wrote: | Yes it does. I use this facility to merge timesheet files maintained on | separate hosts (home machine, travelling laptop) in my hgbox script. | The hgrc says: | | [merge-patterns] | timesheets-cameron/2* = merge-dumb | dailylog-cameron/2*/[A-Z]* = merge-dumb | | so it is easy to specify a particular tool to merge particular files. Oh yes, the associated clause: [merge-tools] merge-dumb.args = $local $other > $output to specify how merge-dumb is invoked. Cheers, -- Cameron Simpson DoD#743 http://www.cskk.ezoshosting.com/cs/ I was gratified to be able to answer promptly and I did. I said I didn't know. - Mark Twain From g.brandl at gmx.net Tue Nov 8 21:47:38 2011 From: g.brandl at gmx.net (Georg Brandl) Date: Tue, 08 Nov 2011 21:47:38 +0100 Subject: [Python-Dev] cpython: Remove the old style [...] to denote optional args and show the defaults. In-Reply-To: References: Message-ID: On 08.11.2011 21:30, brian.curtin wrote: > http://hg.python.org/cpython/rev/60ae7979fec8 > changeset: 73463:60ae7979fec8 > user: Brian Curtin > date: Tue Nov 08 14:30:02 2011 -0600 > summary: > Remove the old style [...] to denote optional args and show the defaults.
> > files: > Doc/library/os.rst | 12 ++++++------ > 1 files changed, 6 insertions(+), 6 deletions(-) > > > diff --git a/Doc/library/os.rst b/Doc/library/os.rst > --- a/Doc/library/os.rst > +++ b/Doc/library/os.rst > @@ -872,7 +872,7 @@ > .. versionadded:: 3.3 > > > -.. function:: futimesat(dirfd, path[, (atime, mtime)]) > +.. function:: futimesat(dirfd, path, (atime, mtime)=None) > > Like :func:`utime` but if *path* is relative, it is taken as relative to *dirfd*. > If *path* is relative and *dirfd* is the special value :data:`AT_FDCWD`, then *path* Hmm, while the [] are old style, they are still correct when the function doesn't support kwargs. Please revert. (Also, the syntax ``(atime, mtime)=None`` would not be valid Python and is at best confusing.) Georg From brian.curtin at gmail.com Tue Nov 8 21:55:49 2011 From: brian.curtin at gmail.com (Brian Curtin) Date: Tue, 8 Nov 2011 14:55:49 -0600 Subject: [Python-Dev] cpython: Remove the old style [...] to denote optional args and show the defaults. In-Reply-To: References: Message-ID: On Tue, Nov 8, 2011 at 14:47, Georg Brandl wrote: > On 08.11.2011 21:30, brian.curtin wrote: >> http://hg.python.org/cpython/rev/60ae7979fec8 >> changeset: 73463:60ae7979fec8 >> user: Brian Curtin >> date: Tue Nov 08 14:30:02 2011 -0600 >> summary: >> Remove the old style [...] to denote optional args and show the defaults. >> >> files: >> Doc/library/os.rst | 12 ++++++------ >> 1 files changed, 6 insertions(+), 6 deletions(-) >> >> >> diff --git a/Doc/library/os.rst b/Doc/library/os.rst >> --- a/Doc/library/os.rst >> +++ b/Doc/library/os.rst >> @@ -872,7 +872,7 @@ >> .. versionadded:: 3.3 >> >> >> -.. function:: futimesat(dirfd, path[, (atime, mtime)]) >> +.. function:: futimesat(dirfd, path, (atime, mtime)=None) >> >> Like :func:`utime` but if *path* is relative, it is taken as relative to *dirfd*. >>
If *path* is relative and *dirfd* is the special value :data:`AT_FDCWD`, then *path* > > Hmm, while the [] are old style, they are still correct when the function > doesn't support kwargs. Please revert. > > (Also, the syntax ``(atime, mtime)=None`` would not be valid Python and is > at best confusing.) > > Georg Backed out. http://hg.python.org/cpython/rev/2636df45b630 From vinay_sajip at yahoo.co.uk Tue Nov 8 22:42:41 2011 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Tue, 8 Nov 2011 21:42:41 +0000 (GMT) Subject: [Python-Dev] Packaging and binary distributions In-Reply-To: <4EB98429.1080704@v.loewis.de> References: <4EAE7BB2.2050502@v.loewis.de> <4EB63D0F.9010505@v.loewis.de> <4EB98429.1080704@v.loewis.de> Message-ID: <1320788561.63689.YahooMailNeo@web25808.mail.ukl.yahoo.com> > I'd provide a fixed custom action which gets hold of the installer > session, and then runs a Python script. IIUC, it should be possible > to map categories to entries in the Directory table, so that the > Python script would actually configure the installer process before > the installer actually starts installing the files. The DLL could be > part of packaging, similar to how the bdist_wininst executable is > part of distutils. Presumably the code in the DLL would need to be independent of Python, and find the correct Python version to run? Perhaps a variable in the .MSI could serve to indicate the version dependency. It's certainly feasible, but needs specifying in more detail ...
Regards, Vinay Sajip From victor.stinner at haypocalc.com Wed Nov 9 01:41:56 2011 From: victor.stinner at haypocalc.com (Victor Stinner) Date: Wed, 9 Nov 2011 01:41:56 +0100 Subject: [Python-Dev] Emit a BytesWarning on bytes filenames on Windows In-Reply-To: <4EAB9355.8010906@gmail.com> References: <201110290052.41619.victor.stinner@haypocalc.com> <4EAB9355.8010906@gmail.com> Message-ID: <201111090141.56084.victor.stinner@haypocalc.com> On Saturday 29 October 2011 07:47:01, you wrote: > Therefore, as you imply, I think the solution to this issue is to start > the process of deprecating the bytes version of the api in py3k with a > view to removing it completely - possibly with a less aggressive > timeline than normal. In Python 2.7, I think documenting the issue and > a recommendation to always use unicode is sufficient (ie, we can't > deprecate it and a new BytesWarning seems gratuitous.) I wrote a patch to implement the deprecation: http://bugs.python.org/issue13374 Victor From ncoghlan at gmail.com Wed Nov 9 01:43:04 2011 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 9 Nov 2011 10:43:04 +1000 Subject: [Python-Dev] draft PEP: virtual environments In-Reply-To: <4EAAF66F.9020603@oddbird.net> References: <4EAAF66F.9020603@oddbird.net> Message-ID: On Sat, Oct 29, 2011 at 4:37 AM, Carl Meyer wrote: > > Why not modify sys.prefix? > - -------------------------- > > As discussed above under `Backwards Compatibility`_, this PEP proposes > to add ``sys.site_prefix`` as "the prefix relative to which > site-package directories are found". This maintains compatibility with > the documented meaning of ``sys.prefix`` (as the location relative to > which the standard library can be found), but means that code assuming > that site-packages directories are found relative to ``sys.prefix`` > will not respect the virtual environment correctly.
> > Since it is unable to modify ``distutils``/``sysconfig``, > `virtualenv`_ is forced to instead re-point ``sys.prefix`` at the > virtual environment. > > An argument could be made that this PEP should follow virtualenv's > lead here (and introduce something like ``sys.base_prefix`` to point > to the standard library and header files), since virtualenv already > does this and it doesn't appear to have caused major problems with > existing code. > > Another argument in favor of this is that it would be preferable to > err on the side of greater, rather than lesser, isolation. Changing > ``sys.prefix`` to point to the virtual environment and introducing a > new ``sys.base_prefix`` attribute would err on the side of greater > isolation in the face of existing code's use of ``sys.prefix``. I'm actually finding I quite like the virtualenv scheme of having "sys.prefix" refer to the virtual environment and "sys.real_prefix" refer to the interpreter's default environment. If pyvenv used the same naming scheme, then a lot of code designed to work with virtualenv would probably "just work" with pyvenv as well. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From carl at oddbird.net Wed Nov 9 01:57:49 2011 From: carl at oddbird.net (Carl Meyer) Date: Tue, 08 Nov 2011 17:57:49 -0700 Subject: [Python-Dev] draft PEP: virtual environments In-Reply-To: References: <4EAAF66F.9020603@oddbird.net> Message-ID: <4EB9D00D.1000401@oddbird.net> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On 11/08/2011 05:43 PM, Nick Coghlan wrote: > I'm actually finding I quite like the virtualenv scheme of having > "sys.prefix" refer to the virtual environment and "sys.real_prefix" > refer to the interpreter's default environment. If pyvenv used the same > naming scheme, then a lot of code designed to work with virtualenv > would probably "just work" with pyvenv as well. Indeed.
I've already been convinced (see my reply to Chris McDonough earlier) that this is the more practical approach. I've already updated my copy of the PEP on Bitbucket (https://bitbucket.org/carljm/python-peps/src/0936d8e00e5b/pep-0404.txt) to reflect this switch, working (slowly) on an update of the reference implementation. Carl -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.10 (GNU/Linux) Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/ iEYEARECAAYFAk650A0ACgkQ8W4rlRKtE2cYuACgk5oRU54R+w4jHAynvW/QAxNU mQQAoI0zM4wzpPdOa0RIvEuAkUCmm+jT =RMyV -----END PGP SIGNATURE----- From stefan_ml at behnel.de Wed Nov 9 09:25:59 2011 From: stefan_ml at behnel.de (Stefan Behnel) Date: Wed, 09 Nov 2011 09:25:59 +0100 Subject: [Python-Dev] [Python-checkins] cpython: quote the type name for improved readability In-Reply-To: References: <4EB81449.6040004@netwok.org> Message-ID: Stefan Behnel, 07.11.2011 18:46: > Éric Araujo, 07.11.2011 18:24: >>> http://hg.python.org/cpython/rev/bbc929bc2224 >> >>> user: Philip Jenvey >>> summary: >>> quote the type name for improved readability >>> >>> files: >>> Python/bltinmodule.c | 2 +- >>> 1 files changed, 1 insertions(+), 1 deletions(-) >>> >>> >>> diff --git a/Python/bltinmodule.c b/Python/bltinmodule.c >>> --- a/Python/bltinmodule.c >>> +++ b/Python/bltinmodule.c >>> @@ -1121,7 +1121,7 @@ >>> return NULL; >>> if (!PyIter_Check(it)) { >>> PyErr_Format(PyExc_TypeError, >>> - "%.200s object is not an iterator", >>> + "'%.200s' object is not an iterator", >>> it->ob_type->tp_name); >>> return NULL; >>> } >> >> What about displaying the repr of the type object? > > While I agree that this is more readable, quoted type names are rather rare > if not pretty much unprecedented in core exception messages, so this is > definitely not the only place that would need changing. > > However, note that arbitrarily changing exception messages always breaks > someone's doctests, so my personal preference would be to keep it as it was. ...
and I just noticed that it did break a doctest in Cython's regression test suite. Should I change the test, or will this be taken back? Stefan From martin at v.loewis.de Wed Nov 9 10:44:50 2011 From: martin at v.loewis.de (=?UTF-8?B?Ik1hcnRpbiB2LiBMw7Z3aXMi?=) Date: Wed, 09 Nov 2011 10:44:50 +0100 Subject: [Python-Dev] [Python-checkins] cpython: quote the type name for improved readability In-Reply-To: References: <4EB81449.6040004@netwok.org> Message-ID: <4EBA4B92.6000409@v.loewis.de> On 09.11.2011 09:25, Stefan Behnel wrote: > Stefan Behnel, 07.11.2011 18:46: >> Éric Araujo, 07.11.2011 18:24: >>>> http://hg.python.org/cpython/rev/bbc929bc2224 >>> >>>> user: Philip Jenvey >>>> summary: >>>> quote the type name for improved readability >>>> >>>> files: >>>> Python/bltinmodule.c | 2 +- >>>> 1 files changed, 1 insertions(+), 1 deletions(-) >>>> >>>> >>>> diff --git a/Python/bltinmodule.c b/Python/bltinmodule.c >>>> --- a/Python/bltinmodule.c >>>> +++ b/Python/bltinmodule.c >>>> @@ -1121,7 +1121,7 @@ >>>> return NULL; >>>> if (!PyIter_Check(it)) { >>>> PyErr_Format(PyExc_TypeError, >>>> - "%.200s object is not an iterator", >>>> + "'%.200s' object is not an iterator", >>>> it->ob_type->tp_name); >>>> return NULL; >>>> } >>> >>> What about displaying the repr of the type object? >> >> While I agree that this is more readable, quoted type names are rather >> rare >> if not pretty much unprecedented in core exception messages, so this is >> definitely not the only place that would need changing. >> >> However, note that arbitrarily changing exception messages always breaks >> someone's doctests, so my personal preference would be to keep it as >> it was. > > ... and I just noticed that it did break a doctest in Cython's > regression test suite. Should I change the test, or will this be taken > back? I recommend reverting the change. I fail to see why quoting the name improves readability.
Regards, Martin From solipsis at pitrou.net Wed Nov 9 11:19:13 2011 From: solipsis at pitrou.net (Antoine Pitrou) Date: Wed, 9 Nov 2011 11:19:13 +0100 Subject: [Python-Dev] [Python-checkins] cpython: quote the type name for improved readability References: <4EB81449.6040004@netwok.org> <4EBA4B92.6000409@v.loewis.de> Message-ID: <20111109111913.177278e7@pitrou.net> On Wed, 09 Nov 2011 10:44:50 +0100 "Martin v. Löwis" wrote: > On 09.11.2011 09:25, Stefan Behnel wrote: > > Stefan Behnel, 07.11.2011 18:46: > >> Éric Araujo, 07.11.2011 18:24: > >>>> http://hg.python.org/cpython/rev/bbc929bc2224 > >>> > >>>> user: Philip Jenvey > >>>> summary: > >>>> quote the type name for improved readability > >>>> > >>>> files: > >>>> Python/bltinmodule.c | 2 +- > >>>> 1 files changed, 1 insertions(+), 1 deletions(-) > >>>> > >>>> > >>>> diff --git a/Python/bltinmodule.c b/Python/bltinmodule.c > >>>> --- a/Python/bltinmodule.c > >>>> +++ b/Python/bltinmodule.c > >>>> @@ -1121,7 +1121,7 @@ > >>>> return NULL; > >>>> if (!PyIter_Check(it)) { > >>>> PyErr_Format(PyExc_TypeError, > >>>> - "%.200s object is not an iterator", > >>>> + "'%.200s' object is not an iterator", > >>>> it->ob_type->tp_name); > >>>> return NULL; > >>>> } > >>> > >>> What about displaying the repr of the type object? > >> > >> While I agree that this is more readable, quoted type names are rather > >> rare > >> if not pretty much unprecedented in core exception messages, so this is > >> definitely not the only place that would need changing. > >> > >> However, note that arbitrarily changing exception messages always breaks > >> someone's doctests, so my personal preference would be to keep it as > >> it was. > > > > ... and I just noticed that it did break a doctest in Cython's > > regression test suite. Should I change the test, or will this be taken > > back? > > I recommend reverting the change. I fail to see why quoting the name > improves readability. It does if the name is "Throatwobbler Mangrove".
Regards Antoine. From victor.stinner at haypocalc.com Wed Nov 9 11:15:25 2011 From: victor.stinner at haypocalc.com (Victor Stinner) Date: Wed, 09 Nov 2011 11:15:25 +0100 Subject: [Python-Dev] [Python-checkins] cpython: Change decoders to use Unicode API instead of Py_UNICODE. In-Reply-To: References: Message-ID: <2482698.ZYMEVpgivP@dsk000552> First of all, thanks for having upgraded this huge part (codecs) to the new Unicode API! > +static int > +unicode_widen(PyObject **p_unicode, int maxchar) > +{ > + PyObject *result; > + assert(PyUnicode_IS_READY(*p_unicode)); > + if (maxchar <= PyUnicode_MAX_CHAR_VALUE(*p_unicode)) > + return 0; > + result = PyUnicode_New(PyUnicode_GET_LENGTH(*p_unicode), > + maxchar); > + if (result == NULL) > + return -1; > + PyUnicode_CopyCharacters(result, 0, *p_unicode, 0, > + PyUnicode_GET_LENGTH(*p_unicode)); > + Py_DECREF(*p_unicode); > + *p_unicode = result; > + return 0; > +} PyUnicode_CopyCharacters() result must be checked. If you are sure that the call cannot fail, use copy_characters() which uses assertions in debug mode (and no checks in release mode). > -#ifndef DONT_MAKE_RESULT_READY > - if (_PyUnicode_READY_REPLACE(&v)) { > - Py_DECREF(v); > - return NULL; > - } > -#endif Why did you remove this call from PyUnicode_DecodeRawUnicodeEscape(), _PyUnicode_DecodeUnicodeInternal(), PyUnicode_DecodeASCII() and PyUnicode_DecodeCharmap()? It may reuse latin1 character singletons to share a little bit more memory (there is already a special case for empty strings). "_PyUnicode_READY_REPLACE" is maybe not the best name :-) Victor From victor.stinner at haypocalc.com Wed Nov 9 11:14:50 2011 From: victor.stinner at haypocalc.com (Victor Stinner) Date: Wed, 09 Nov 2011 11:14:50 +0100 Subject: [Python-Dev] unicode_internal codec and the PEP 393 Message-ID: <2727750.c5cKp8gfbm@dsk000552> Hi, The unicode_internal decoder doesn't decode surrogate pairs and so test_unicode.UnicodeTest.test_codecs() is failing on Windows (16-bit wchar_t).
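For reference, the surrogate pairs involved can be reproduced with the standard UTF-16 splitting arithmetic — nothing here is specific to the unicode_internal codec:

```python
def surrogate_pair(cp):
    """Split a non-BMP code point into a UTF-16 high/low surrogate pair."""
    assert cp > 0xFFFF
    cp -= 0x10000
    return 0xD800 + (cp >> 10), 0xDC00 + (cp & 0x3FF)

# U+00030003, one of the characters in the failing test, becomes the
# pair U+D880/U+DC03 -- exactly what shows up undecoded in the traceback.
hi, lo = surrogate_pair(0x00030003)
print(hex(hi), hex(lo))   # 0xd880 0xdc03
```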
I don't know if this codec is still relevant with the PEP 393 because the internal representation now depends on the maximum character (Py_UCS1*, Py_UCS2* or Py_UCS4*), whereas it was a fixed size with Python <= 3.2 (Py_UNICODE*). Should we: * Drop this codec (public and documented, but I don't know if it is used) * Use wchar_t* (Py_UNICODE*) to provide a result similar to Python 3.2, and so fix the decoder to handle surrogate pairs * Use the real representation (Py_UCS1*, Py_UCS2* or Py_UCS4* string) ? The failure on Windows: FAIL: test_codecs (test.test_unicode.UnicodeTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "D:\Buildslave\3.x.moore-windows\build\lib\test\test_unicode.py", line 1408, in test_codecs self.assertEqual(str(u.encode(encoding),encoding), u) AssertionError: '\ud800\udc01\ud840\udc02\ud880\udc03\ud8c0\udc04\ud900\udc05' != '\U00010001\U00020002\U00030003\U00040004\U00050005' Victor From ncoghlan at gmail.com Wed Nov 9 11:47:55 2011 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 9 Nov 2011 20:47:55 +1000 Subject: [Python-Dev] [Python-checkins] cpython: quote the type name for improved readability In-Reply-To: <20111109111913.177278e7@pitrou.net> References: <4EB81449.6040004@netwok.org> <4EBA4B92.6000409@v.loewis.de> <20111109111913.177278e7@pitrou.net> Message-ID: On Wed, Nov 9, 2011 at 8:19 PM, Antoine Pitrou wrote: > On Wed, 09 Nov 2011 10:44:50 +0100 > "Martin v. Löwis" wrote: >> I recommend reverting the change. I fail to see why quoting the name >> improves readability. > > It does if the name is "Throatwobbler Mangrove".
The readability argument doesn't really sell me, but the consistency one does: Python 3.2: >>> iter(1) Traceback (most recent call last): File "", line 1, in TypeError: 'int' object is not iterable >>> next(1) Traceback (most recent call last): File "", line 1, in TypeError: int object is not an iterator We generally do quote the type names in error messages, so this change is just bringing next() into line with other operations: >>> 1 + '' Traceback (most recent call last): File "", line 1, in TypeError: unsupported operand type(s) for +: 'int' and 'str' Cheers, Nick. -- Nick Coghlan?? |?? ncoghlan at gmail.com?? |?? Brisbane, Australia From greg.ewing at canterbury.ac.nz Wed Nov 9 11:59:09 2011 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Wed, 09 Nov 2011 23:59:09 +1300 Subject: [Python-Dev] [SPAM: 3.000] [issue11682] PEP 380 reference implementation for 3.3 In-Reply-To: <1320833824.19.0.616075685755.issue11682@psf.upfronthosting.co.za> References: <1320833824.19.0.616075685755.issue11682@psf.upfronthosting.co.za> Message-ID: <4EBA5CFD.1080500@canterbury.ac.nz> Nick Coghlan wrote: > > In reviewing Zbyszek's doc updates and comparing them against the Grammar, I > discovered a gratuitous change in the implementation: it allows a bare (i.e. no > parentheses) 'yield from' as an argument to a function. > > I'll add a new test to ensure "yield from x" requires parentheses whenever > "yield x" requires them (and fix the Grammar file on the implementation branch > accordingly). Wait a minute, there's nothing in the PEP as accepted that mentions any such restriction. My intention was that it should be as easy as possible to replace any function call 'f(x)' with 'yield from f(x)'. If parentheses are required, then instead of f(yield from g(x)) we would have to write f((yield from g(x))) I can't see how this is an improvement in readability or clarity. 
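The two spellings Greg contrasts can be checked mechanically with compile(), which only needs syntactic validity (a sketch against the grammar as eventually shipped; f and h are hypothetical names):

```python
def compiles(src):
    """Return True if *src* is syntactically valid Python."""
    try:
        compile(src, '<sketch>', 'exec')
        return True
    except SyntaxError:
        return False

# No parentheses needed as a statement or assignment right-hand side:
assert compiles('def g():\n    result = yield from h()\n')
# As a call argument, the extra parentheses Greg shows are required:
assert compiles('def g():\n    f((yield from h()))\n')
assert not compiles('def g():\n    f(yield from h())\n')
```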
-- Greg From petri at digip.org Wed Nov 9 12:19:45 2011 From: petri at digip.org (Petri Lehtinen) Date: Wed, 9 Nov 2011 13:19:45 +0200 Subject: [Python-Dev] Merging 3.2 to 3.3 is messy because "Misc/NEWS" In-Reply-To: References: <4EB94F97.6020002@jcea.es> Message-ID: <20111109111944.GA2186@ihaa> Terry Reedy wrote: > On 11/8/2011 10:49 AM, Jesus Cea wrote: > >-----BEGIN PGP SIGNED MESSAGE----- > >Hash: SHA1 > > > >When merging from 3.2 to 3.3 "Misc/NEWS" always conflicts (lately). > >Instead of copy&paste the test manually between versions, has anybody > >a better workflow?. > > If a bug is fixed in 3.2.latest, then it will not be new in 3.3.0, > so perhaps it should not be added there. NEWS could just refer back > to previous sections. Then 3.3.0 News would only be new features and > the occasional ambiguous item not fixed before. In this case, we would have to *remove* entries from Misc/NEWS after merging to 3.3, right? From ncoghlan at gmail.com Wed Nov 9 12:21:45 2011 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 9 Nov 2011 21:21:45 +1000 Subject: [Python-Dev] [SPAM: 3.000] [issue11682] PEP 380 reference implementation for 3.3 In-Reply-To: <4EBA5CFD.1080500@canterbury.ac.nz> References: <1320833824.19.0.616075685755.issue11682@psf.upfronthosting.co.za> <4EBA5CFD.1080500@canterbury.ac.nz> Message-ID: On Wed, Nov 9, 2011 at 8:59 PM, Greg Ewing wrote: > Nick Coghlan wrote: >> I'll add a new test to ensure "yield from x" requires parentheses whenever >> "yield x" requires them (and fix the Grammar file on the implementation >> branch >> accordingly). > > Wait a minute, there's nothing in the PEP as accepted that > mentions any such restriction. It doesn't need to be mentioned in the PEP - it's inherited due to the fact that the PEP builds on top of yield expressions. There is no excuse for the two expressions being inconsistent in their parenthesis requirements when they are so similar syntactically. 
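The consistency point is easy to verify: a bare yield accepts an unparenthesised tuple, while yield from takes a single expression and rejects one (a sketch using compile() for syntax-only checks; x and y are hypothetical names):

```python
def is_valid(src):
    """Return True if *src* parses without a SyntaxError."""
    try:
        compile(src, '<sketch>', 'exec')
        return True
    except SyntaxError:
        return False

assert is_valid('def g():\n    yield x, y\n')           # yields the tuple (x, y)
assert not is_valid('def g():\n    yield from x, y\n')  # SyntaxError: single expression only
```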
For ordinary yield expressions, "f(yield x, y)" has to be disallowed since it is genuinely ambiguous (does it mean "f(yield (x, y))" or "f((yield x), y)"?), and it then becomes something of a pain to allow any yield argument other than an unparenthesised tuple when a yield expression is provided as the sole argument to a call. So, from my point of view, it is absolutely a requirement that 'yield from' expressions be parenthesised in the multiple argument case. The previous grammar allowed confusing constructs like "f(yield from x, y)" (interpreting it as "f((yield from x), y) while still issuing a syntax error (as expected) for "yield from x, y". You *could* try to make a case that we should allow "f(yield from x)", but it isn't then clear to the casual reader why that's OK and "f(yield x)" gets disallowed. And that brings us back to the complexity of creating a special yield expression variant that's allowed as the sole argument to a function call without additional parentheses, but without introducing ambiguity into the grammar. It's simpler and cleaner to just make the rules for the two constructs identical - if it's a subexpression, you have to add the extra parentheses, if it's a statement or the sole expression on the RHS of an assignment, you don't. That said, it's likely *possible* to make those parentheses optional in the single argument call case (as we do for generator expressions), but do you really want to hold up the PEP 380 implementation for a minor detail like that? Besides, since it would affect the grammar definition for ordinary yield expressions as well, it might even need a new PEP. Cheers, Nick. -- Nick Coghlan?? |?? ncoghlan at gmail.com?? |?? 
Brisbane, Australia From ncoghlan at gmail.com Wed Nov 9 12:28:54 2011 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 9 Nov 2011 21:28:54 +1000 Subject: [Python-Dev] Merging 3.2 to 3.3 is messy because "Misc/NEWS" In-Reply-To: References: <4EB94F97.6020002@jcea.es> Message-ID: On Wed, Nov 9, 2011 at 4:51 AM, Terry Reedy wrote: > If a bug is fixed in 3.2.latest, then it will not be new in 3.3.0, so > perhaps it should not be added there. NEWS could just refer back to previous > sections. Then 3.3.0 News would only be new features and the occasional > ambiguous item not fixed before. The 3.2.x maintenance release sections are not present in the 3.3 NEWS file - it jumps straight from 3.2 to 3.3a1. So bug fixes should be recorded in both places - the 3.3a1 notes record the deltas against 3.2.0, not against whatever the latest release of 3.2 happens to be when 3.3 is released. Cheers, Nick. -- Nick Coghlan?? |?? ncoghlan at gmail.com?? |?? Brisbane, Australia From solipsis at pitrou.net Wed Nov 9 12:24:30 2011 From: solipsis at pitrou.net (Antoine Pitrou) Date: Wed, 9 Nov 2011 12:24:30 +0100 Subject: [Python-Dev] Merging 3.2 to 3.3 is messy because "Misc/NEWS" References: <4EB94F97.6020002@jcea.es> Message-ID: <20111109122430.0e081988@pitrou.net> On Tue, 08 Nov 2011 13:51:44 -0500 Terry Reedy wrote: > On 11/8/2011 10:49 AM, Jesus Cea wrote: > > -----BEGIN PGP SIGNED MESSAGE----- > > Hash: SHA1 > > > > When merging from 3.2 to 3.3 "Misc/NEWS" always conflicts (lately). > > Instead of copy&paste the test manually between versions, has anybody > > a better workflow?. > > If a bug is fixed in 3.2.latest, then it will not be new in 3.3.0, so > perhaps it should not be added there. NEWS could just refer back to > previous sections. So people who download 3.3 would also have to download 3.2 to get the complete list of changes? That's confusing. 
From guido at python.org Wed Nov 9 16:54:46 2011 From: guido at python.org (Guido van Rossum) Date: Wed, 9 Nov 2011 07:54:46 -0800 Subject: [Python-Dev] [SPAM: 3.000] [issue11682] PEP 380 reference implementation for 3.3 In-Reply-To: References: <1320833824.19.0.616075685755.issue11682@psf.upfronthosting.co.za> <4EBA5CFD.1080500@canterbury.ac.nz> Message-ID: I see this as inevitable. By the time the parser sees 'yield' it has made its choices; the 'from' keyword cannot modify that. So whenever "yield expr" must be parenthesized, "yield from expr" must too. (And yes, there are parsing systems that don't have this restriction. But Python's does and we like it that way.) At the same time, "yield expr, expr" works; but does "yield from expr, expr" mean anything? Finally, glad to see work on this PEP proceeding; I'm looking forward to using the fruits of that labor! --Guido On Wed, Nov 9, 2011 at 3:21 AM, Nick Coghlan wrote: > On Wed, Nov 9, 2011 at 8:59 PM, Greg Ewing wrote: >> Nick Coghlan wrote: >>> I'll add a new test to ensure "yield from x" requires parentheses whenever >>> "yield x" requires them (and fix the Grammar file on the implementation >>> branch >>> accordingly). >> >> Wait a minute, there's nothing in the PEP as accepted that >> mentions any such restriction. > > It doesn't need to be mentioned in the PEP - it's inherited due to the > fact that the PEP builds on top of yield expressions. There is no > excuse for the two expressions being inconsistent in their parenthesis > requirements when they are so similar syntactically. > > For ordinary yield expressions, "f(yield x, y)" has to be disallowed > since it is genuinely ambiguous (does it mean "f(yield (x, y))" or > "f((yield x), y)"?), and it then becomes something of a pain to allow > any yield argument other than an unparenthesised tuple when a yield > expression is provided as the sole argument to a call. 
> > So, from my point of view, it is absolutely a requirement that 'yield > from' expressions be parenthesised in the multiple argument case. The > previous grammar allowed confusing constructs like "f(yield from x, > y)" (interpreting it as "f((yield from x), y) while still issuing a > syntax error (as expected) for "yield from x, y". > > You *could* try to make a case that we should allow "f(yield from x)", > but it isn't then clear to the casual reader why that's OK and > "f(yield x)" gets disallowed. And that brings us back to the > complexity of creating a special yield expression variant that's > allowed as the sole argument to a function call without additional > parentheses, but without introducing ambiguity into the grammar. > > It's simpler and cleaner to just make the rules for the two constructs > identical - if it's a subexpression, you have to add the extra > parentheses, if it's a statement or the sole expression on the RHS of > an assignment, you don't. > > That said, it's likely *possible* to make those parentheses optional > in the single argument call case (as we do for generator expressions), > but do you really want to hold up the PEP 380 implementation for a > minor detail like that? Besides, since it would affect the grammar > definition for ordinary yield expressions as well, it might even need > a new PEP. > > Cheers, > Nick. > > -- > Nick Coghlan?? |?? ncoghlan at gmail.com?? |?? 
Brisbane, Australia > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/guido%40python.org > -- --Guido van Rossum (python.org/~guido) From barry at python.org Wed Nov 9 17:14:57 2011 From: barry at python.org (Barry Warsaw) Date: Wed, 9 Nov 2011 11:14:57 -0500 Subject: [Python-Dev] PEP 405 (proposed): Python 2.8 Release Schedule Message-ID: <20111109111457.2f695e3a@limelight.wooz.org> I think we should have an official pronouncement about Python 2.8, and PEPs are as official as it gets 'round here. Thus I propose the following. If there are no objections , I'll commit this taking the next available number. Cheers, -Barry PEP: 405 Title: Python 2.8 Release Schedule Version: $Revision$ Last-Modified: $Date$ Author: Barry Warsaw Status: Final Type: Informational Content-Type: text/x-rst Created: 2011-11-09 Python-Version: 2.8 Abstract ======== This document describes the development and release schedule for Python 2.8. Release Schedule ================ The current schedule is: - 2.8 final Never Official pronouncement ====================== There will never be an official Python 2.8 release. Upgrade path ============ The official upgrade path from Python 2.7 is to Python 3. Copyright ========= This document has been placed in the public domain. .. Local Variables: mode: indented-text indent-tabs-mode: nil sentence-end-double-space: t fill-column: 70 coding: utf-8 End: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 836 bytes Desc: not available URL: From benjamin at python.org Wed Nov 9 17:18:00 2011 From: benjamin at python.org (Benjamin Peterson) Date: Wed, 9 Nov 2011 11:18:00 -0500 Subject: [Python-Dev] PEP 405 (proposed): Python 2.8 Release Schedule In-Reply-To: <20111109111457.2f695e3a@limelight.wooz.org> References: <20111109111457.2f695e3a@limelight.wooz.org> Message-ID: 2011/11/9 Barry Warsaw : > I think we should have an official pronouncement about Python 2.8, and PEPs > are as official as it gets 'round here. ?Thus I propose the following. ?If > there are no objections , I'll commit this taking the next available > number. > > Cheers, > -Barry > > PEP: 405 > Title: Python 2.8 Release Schedule I don't know why this PEP is necessary, but I think a more appropriate title would be "2.x is in maintenance only mode". > Version: $Revision$ > Last-Modified: $Date$ > Author: Barry Warsaw > Status: Final > Type: Informational > Content-Type: text/x-rst > Created: 2011-11-09 > Python-Version: 2.8 -- Regards, Benjamin From amauryfa at gmail.com Wed Nov 9 17:18:45 2011 From: amauryfa at gmail.com (Amaury Forgeot d'Arc) Date: Wed, 9 Nov 2011 17:18:45 +0100 Subject: [Python-Dev] PEP 405 (proposed): Python 2.8 Release Schedule In-Reply-To: <20111109111457.2f695e3a@limelight.wooz.org> References: <20111109111457.2f695e3a@limelight.wooz.org> Message-ID: Hi, 2011/11/9 Barry Warsaw > I think we should have an official pronouncement about Python 2.8, and PEPs > are as official as it gets 'round here. > Do we need to designate a release manager? -- Amaury Forgeot d'Arc -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From victor.stinner at haypocalc.com Wed Nov 9 17:46:12 2011 From: victor.stinner at haypocalc.com (Victor Stinner) Date: Wed, 09 Nov 2011 17:46:12 +0100 Subject: [Python-Dev] PEP 405 (proposed): Python 2.8 Release Schedule In-Reply-To: References: <20111109111457.2f695e3a@limelight.wooz.org> Message-ID: <1735926.tPXblSBuLR@dsk000552> Le Mercredi 9 Novembre 2011 17:18:45 Amaury Forgeot d'Arc a ?crit : > Hi, > > 2011/11/9 Barry Warsaw > > > I think we should have an official pronouncement about Python 2.8, and > > PEPs are as official as it gets 'round here. > > Do we need to designate a release manager? random.choice() should help in this case. Victor From barry at python.org Wed Nov 9 17:51:58 2011 From: barry at python.org (Barry Warsaw) Date: Wed, 9 Nov 2011 11:51:58 -0500 Subject: [Python-Dev] PEP 405 (proposed): Python 2.8 Release Schedule In-Reply-To: References: <20111109111457.2f695e3a@limelight.wooz.org> Message-ID: <20111109115158.3b9739a5@limelight.wooz.org> On Nov 09, 2011, at 05:18 PM, Amaury Forgeot d'Arc wrote: >2011/11/9 Barry Warsaw > >> I think we should have an official pronouncement about Python 2.8, and PEPs >> are as official as it gets 'round here. >> > >Do we need to designate a release manager? I'd happily serve as the un-release manager. :) -Barry -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 836 bytes Desc: not available URL: From ethan at stoneleaf.us Wed Nov 9 17:24:14 2011 From: ethan at stoneleaf.us (Ethan Furman) Date: Wed, 09 Nov 2011 08:24:14 -0800 Subject: [Python-Dev] PEP 405 (proposed): Python 2.8 Release Schedule In-Reply-To: References: <20111109111457.2f695e3a@limelight.wooz.org> Message-ID: <4EBAA92E.8020309@stoneleaf.us> Benjamin Peterson wrote: > 2011/11/9 Barry Warsaw : >> I think we should have an official pronouncement about Python 2.8, and PEPs >> are as official as it gets 'round here. Thus I propose the following. 
If >> there are no objections , I'll commit this taking the next available >> number. >> >> Cheers, >> -Barry >> >> PEP: 405 >> Title: Python 2.8 Release Schedule > > I don't know why this PEP is necessary, but I think a more appropriate > title would be "2.x is in maintenance only mode". I think somebody searching will more easily find "Python 2.8 Release Schedule" rather than "2.x is in maintenance only mode". ~Ethan~ From barry at python.org Wed Nov 9 17:53:00 2011 From: barry at python.org (Barry Warsaw) Date: Wed, 9 Nov 2011 11:53:00 -0500 Subject: [Python-Dev] PEP 405 (proposed): Python 2.8 Release Schedule In-Reply-To: References: <20111109111457.2f695e3a@limelight.wooz.org> Message-ID: <20111109115300.0fdf3994@limelight.wooz.org> On Nov 09, 2011, at 11:18 AM, Benjamin Peterson wrote: >I don't know why this PEP is necessary, but I think a more appropriate >title would be "2.x is in maintenance only mode". Okay, it's a little tongue-in-cheek, but the practical reason is that I want it to be the top hit when someone searches for "Python 2.8". Cheers, -Barry -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 836 bytes Desc: not available URL: From tjreedy at udel.edu Wed Nov 9 20:14:38 2011 From: tjreedy at udel.edu (Terry Reedy) Date: Wed, 09 Nov 2011 14:14:38 -0500 Subject: [Python-Dev] Merging 3.2 to 3.3 is messy because "Misc/NEWS" In-Reply-To: References: <4EB94F97.6020002@jcea.es> Message-ID: On 11/9/2011 6:28 AM, Nick Coghlan wrote: > So bug fixes should be recorded in both places - the 3.3a1 notes > record the deltas against 3.2.0, not against whatever the latest > release of 3.2 happens to be when 3.3 is released. OK, I see that now. Idea 2: If "What's New in Python 3.3 Alpha 1?" 
had two major sections" NEW FEATURES and BUG FIXES (since 3.2.0) with duplicated subheaders, and if the subheaders were tagged, like Core and Builtins -- Features Core and Builtins -- Fixes and if the subheaders for 3.2.z, z>=1 had the latter tags, then the context for merges of bug fixes would not be disturbed by the interposition of feature items. There would only be a problem when the first merge of a subsection for a 3.2.z, z>=2 release is not the first item in the corresponding section for 3.3.0alpha1. Idea 3: Someone else suggested a file-specific merge. If the 3.2.z patch simply inserts an item under "Some Header", with no deletions, the custom merge inserts the item under the first occurrence of "Some Header" in the 3.3 file. Ignoring what comes after in both files should prevent new feature items from blocking the merge. Otherwise, use the normal merge. -- Terry Jan Reedy From tjreedy at udel.edu Wed Nov 9 20:59:56 2011 From: tjreedy at udel.edu (Terry Reedy) Date: Wed, 09 Nov 2011 14:59:56 -0500 Subject: [Python-Dev] PEP 405 (proposed): Python 2.8 Release Schedule In-Reply-To: <20111109111457.2f695e3a@limelight.wooz.org> References: <20111109111457.2f695e3a@limelight.wooz.org> Message-ID: On 11/9/2011 11:14 AM, Barry Warsaw wrote: > I think we should have an official pronouncement about Python 2.8, http://python.org/download/releases/2.7.2/ (and similar) already say: The Python 2.7 series is scheduled to be the last major version in the 2.x series before 2.x moves into an extended maintenance period. If Guido is ready to "pound the final nail in the coffin", delete 'scheduled to be', and change "series before 2.x moves into" to "series. 2.x is in", fine with me. I am not sure a PEP is also needed, but OK, with revision. > and PEPs are as official as it gets 'round here. > Thus I propose the following. If there are no objections, There are ;-). The title is misleading, and the whole thing reads like an April Fools Joke. 
If I were looking for information, I would be slightly annoyed. > PEP: 405 > Title: Python 2.8 Release Schedule Title: Python 2.8 Will Never Happen tells anyone searching what they need to know immediately. They would only need to click on a link if they wanted to know 'why'. > Version: $Revision$ > Last-Modified: $Date$ > Author: Barry Warsaw Guido should be the first author. > Status: Final > Type: Informational So let us be informative from the title on. > Content-Type: text/x-rst > Created: 2011-11-09 > Python-Version: 2.8 > > Abstract > ======== > > This document describes the development and release schedule for Python 2.8. More non-informative teasing. Instead, I suggest replacing everything with short, sweet, and informative. Rationale ========= For backward-compatibility reasons, the Python 2 series is burdened with several obsolete and duplicate features that were removed in Python 3. In addition, the primary character set was expanded from ascii to unicode. While bug fixes continue for 2.7, new developments go into Python 3.x versions. -- Terry Jan Reedy From brian.curtin at gmail.com Wed Nov 9 21:23:21 2011 From: brian.curtin at gmail.com (Brian Curtin) Date: Wed, 9 Nov 2011 14:23:21 -0600 Subject: [Python-Dev] PEP 405 (proposed): Python 2.8 Release Schedule In-Reply-To: <20111109111457.2f695e3a@limelight.wooz.org> References: <20111109111457.2f695e3a@limelight.wooz.org> Message-ID: On Wed, Nov 9, 2011 at 10:14, Barry Warsaw wrote: > I think we should have an official pronouncement about Python 2.8, and PEPs > are as official as it gets 'round here. Thus I propose the following. If > there are no objections , I'll commit this taking the next available > number. 
> > Cheers, > -Barry > > PEP: 405 > Title: Python 2.8 Release Schedule > Version: $Revision$ > Last-Modified: $Date$ > Author: Barry Warsaw > Status: Final > Type: Informational > Content-Type: text/x-rst > Created: 2011-11-09 > Python-Version: 2.8 > > > Abstract > ======== > > This document describes the development and release schedule for Python > 2.8. > > > Release Schedule > ================ > > The current schedule is: > > - 2.8 final Never > > > Official pronouncement > ====================== > > There will never be an official Python 2.8 release. > > > Upgrade path > ============ > > The official upgrade path from Python 2.7 is to Python 3. > > > Copyright > ========= > > This document has been placed in the public domain. > > > > .. > Local Variables: > mode: indented-text > indent-tabs-mode: nil > sentence-end-double-space: t > fill-column: 70 > coding: utf-8 > End: +1. post it as-is. -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.f.moore at gmail.com Wed Nov 9 21:58:30 2011 From: p.f.moore at gmail.com (Paul Moore) Date: Wed, 9 Nov 2011 20:58:30 +0000 Subject: [Python-Dev] PEP 405 (proposed): Python 2.8 Release Schedule In-Reply-To: References: <20111109111457.2f695e3a@limelight.wooz.org> Message-ID: On Wed, Nov 9, 2011 at 10:14, Barry Warsaw wrote: > > I think we should have an official pronouncement about Python 2.8, and > PEPs > are as official as it gets 'round here. ?Thus I propose the following. ?If > there are no objections , I'll commit this taking the next available > number. +1 on having a PEP +0 on posting it as it stands I see why people feel that something more precise and/or formal would get the message across better, but the mildly tongue-in-cheek tone isn't really inappropriate for a language named after Monty Python :-) +1 for Cardinal Biggles as release manager. 
Paul From martin at v.loewis.de Wed Nov 9 22:03:52 2011 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Wed, 09 Nov 2011 22:03:52 +0100 Subject: [Python-Dev] unicode_internal codec and the PEP 393 In-Reply-To: <2727750.c5cKp8gfbm@dsk000552> References: <2727750.c5cKp8gfbm@dsk000552> Message-ID: <4EBAEAB8.2060509@v.loewis.de> > The unicode_internal decoder doesn't decode surrogate pairs and so > test_unicode.UnicodeTest.test_codecs() is failing on Windows (16-bit wchar_t). > I don't know if this codec is still revelant with the PEP 393 because the > internal representation is now depending on the maximum character (Py_UCS1*, > Py_UCS2* or Py_UCS4*), whereas it was a fixed size with Python <= 3.2 > (Py_UNICODE*). The current status is the way it is because we (Torsten and me) didn't bother figuring out the purpose of the internal codec. > Should we: > > * Drop this codec (public and documented, but I don't know if it is used) > * Use wchar_t* (Py_UNICODE*) to provide a result similar to Python 3.2, and > so fix the decoder to handle surrogate pairs > * Use the real representation (Py_UCS1*, Py_UCS2 or Py_UCS4* string) It's described as "Return the internal representation of the operand". That would suggest that the last choice (i.e. return the real internal representation) would be best, except that this doesn't round-trip. Adding a prefix byte indicating the kind (and perhaps also the ASCII flag) would then be closest to the real representation. As that is likely not very useful, and might break some applications of the encoding (if there are any at all) which might expect to pass unicode-internal strings across Python versions, I would then also deprecate the encoding. 
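The kind-dependent representation under discussion is visible from pure Python: equal-length strings grow as their widest character forces 1-, 2-, or 4-byte storage. Exact sizes are CPython- and platform-specific, so only the ordering is asserted in this sketch:

```python
import sys

latin = 'a' * 10             # all code points < 256   -> 1 byte each (Py_UCS1)
bmp = '\u0100' * 10          # BMP, >= 256             -> 2 bytes each (Py_UCS2)
astral = '\U00010000' * 10   # beyond the BMP          -> 4 bytes each (Py_UCS4)

sizes = [sys.getsizeof(s) for s in (latin, bmp, astral)]
# Same length, strictly increasing memory footprint:
assert sizes[0] < sizes[1] < sizes[2]
```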
Regards, Martin From martin at v.loewis.de Wed Nov 9 22:10:54 2011 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Wed, 09 Nov 2011 22:10:54 +0100 Subject: [Python-Dev] [Python-checkins] cpython: quote the type name for improved readability In-Reply-To: <20111109111913.177278e7@pitrou.net> References: <4EB81449.6040004@netwok.org> <4EBA4B92.6000409@v.loewis.de> <20111109111913.177278e7@pitrou.net> Message-ID: <4EBAEC5E.1040406@v.loewis.de> >> I recommend reverting the change. I fail to see why quoting the name >> improves readability. > > It does if the name is "Throatwobbler Mangrove". But that can't be - the type name ought to be an identifier, so it can't have spaces. It might be possible to create deliberately confusing error messages, though, such as py> class your:pass ... py> next(your()) Traceback (most recent call last): File "", line 1, in TypeError: your object is not an iterator Regards, Martin From jxo6948 at rit.edu Wed Nov 9 22:14:12 2011 From: jxo6948 at rit.edu (John O'Connor) Date: Wed, 9 Nov 2011 16:14:12 -0500 Subject: [Python-Dev] PEP 405 (proposed): Python 2.8 Release Schedule In-Reply-To: References: <20111109111457.2f695e3a@limelight.wooz.org> Message-ID: > +1 for Cardinal Biggles as release manager. +1 From barry at python.org Wed Nov 9 22:55:11 2011 From: barry at python.org (Barry Warsaw) Date: Wed, 9 Nov 2011 16:55:11 -0500 Subject: [Python-Dev] PEP 405 (proposed): Python 2.8 Release Schedule In-Reply-To: References: <20111109111457.2f695e3a@limelight.wooz.org> Message-ID: <20111109165511.19b8d1ee@limelight.wooz.org> On Nov 09, 2011, at 08:58 PM, Paul Moore wrote: >I see why people feel that something more precise and/or formal would >get the message across better, but the mildly tongue-in-cheek tone >isn't really inappropriate for a language named after Monty Python :-) *Thank you* :) >+1 for Cardinal Biggles as release manager. Brilliant! 
-Barry -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 836 bytes Desc: not available URL: From victor.stinner at haypocalc.com Wed Nov 9 22:49:35 2011 From: victor.stinner at haypocalc.com (Victor Stinner) Date: Wed, 9 Nov 2011 22:49:35 +0100 Subject: [Python-Dev] unicode_internal codec and the PEP 393 In-Reply-To: <4EBAEAB8.2060509@v.loewis.de> References: <2727750.c5cKp8gfbm@dsk000552> <4EBAEAB8.2060509@v.loewis.de> Message-ID: <201111092249.35994.victor.stinner@haypocalc.com> Le mercredi 9 novembre 2011 22:03:52, vous avez ?crit : > > > Should we: > > * Drop this codec (public and documented, but I don't know if it is > > used) * Use wchar_t* (Py_UNICODE*) to provide a result similar to > > Python 3.2, and > > > > so fix the decoder to handle surrogate pairs > > > > * Use the real representation (Py_UCS1*, Py_UCS2 or Py_UCS4* string) > > It's described as "Return the internal representation of the operand". > That would suggest that the last choice (i.e. return the real internal > representation) would be best, except that this doesn't round-trip. > Adding a prefix byte indicating the kind (and perhaps also the ASCII > flag) would then be closest to the real representation. > > As that is likely not very useful, and might break some applications > of the encoding (if there are any at all) which might expect to > pass unicode-internal strings across Python versions, I would then > also deprecate the encoding. After a quick search on Google codesearch (before it disappears!), I don't think that "encoding" a Unicode string to its internal PEP-393 representation would satisfy any program. It looks like wchar_t* is a better candidate. Programs use maybe unicode_internal to decode strings coming from libraries using wchar_t* (and no PyUnicodeObject). 
taskcoach, drag & drop code using wxPython: data = self.__thunderbirdMailDataObject.GetData() # We expect the data to be encoded with 'unicode_internal', # but on Fedora it can also be 'utf-16', be prepared: try: data = data.decode('unicode_internal') except UnicodeDecodeError: data = data.decode('utf-16') => thunderbirdMailDataObject.GetData() result type should be a Unicode, not bytes hydrat, tokenizer: def bytes(str): return filter(lambda x: x != '\x00', str.encode('unicode_internal')) => this algorithm is really strange... djebel, fscache/rst.py class RstDocument(object): ... def __init__(self, path, options={}): opts = {'input_encoding': 'euc-jp', 'output_encoding': 'unicode_internal', 'doctitle_xform': True, 'file_insertion_enabled': True} ... doctree = core.publish_doctree(source=file(path, 'rb').read(), ..., settings_overrides=opts) ... content = parts['html_body'] or u'' if not isinstance(content, unicode): content = unicode(content, 'unicode_internal') if not isinstance(title, unicode): title = unicode(title, 'unicode_internal') ... => I don't understand this code Victor From ncoghlan at gmail.com Wed Nov 9 23:01:35 2011 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 10 Nov 2011 08:01:35 +1000 Subject: [Python-Dev] Merging 3.2 to 3.3 is messy because "Misc/NEWS" In-Reply-To: References: <4EB94F97.6020002@jcea.es> Message-ID: On Thu, Nov 10, 2011 at 5:14 AM, Terry Reedy wrote: > On 11/9/2011 6:28 AM, Nick Coghlan wrote: > >> So bug fixes should be recorded in both places - the 3.3a1 notes >> record the deltas against 3.2.0, not against whatever the latest >> release of 3.2 happens to be when 3.3 is released. > > OK, I see that now. > > Idea 2: If "What's New in Python 3.3 Alpha 1?" 
had two major sections" NEW > FEATURES and BUG FIXES (since 3.2.0) with duplicated subheaders, and if the > subheaders were tagged, like > Core and Builtins -- Features > Core and Builtins -- Fixes > and if the subheaders for 3.2.z, z>=1 had the latter tags, then the context > for merges of bug fixes would not be disturbed by the interposition of > feature items. There would only be a problem when the first merge of a > subsection for a 3.2.z, z>=2 release is not the first item in the > corresponding section for 3.3.0alpha1. Alas, things sometimes don't divide that cleanly - sometimes an API change/addition is part of fixing a bug rather than a new feature in its own right. A custom merge function along the lines of the one you suggest (i.e. if it's after a section header in 3.2.x Misc/NEWS, add it after the first occurrence of the same header in 3.3) is probably the way to go. Cheers, Nick. -- Nick Coghlan?? |?? ncoghlan at gmail.com?? |?? Brisbane, Australia From greg.ewing at canterbury.ac.nz Wed Nov 9 23:13:15 2011 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Thu, 10 Nov 2011 11:13:15 +1300 Subject: [Python-Dev] [SPAM: 3.000] [issue11682] PEP 380 reference implementation for 3.3 In-Reply-To: References: <1320833824.19.0.616075685755.issue11682@psf.upfronthosting.co.za> <4EBA5CFD.1080500@canterbury.ac.nz> Message-ID: <4EBAFAFB.3010406@canterbury.ac.nz> Guido van Rossum wrote: > I see this as inevitable. By the time the parser sees 'yield' it has > made its choices; the 'from' keyword cannot modify that. So whenever > "yield expr" must be parenthesized, "yield from expr" must too. This is patently untrue, because by version of the grammar allows 'f(yield from x)', while disallowing 'f(yield x)'. I made a conscious decision to do that, and I'm a bit alarmed at this decision being overridden at the last moment with no debate. > At the same time, "yield expr, expr" works; Um, no, that's a syntax error in any context, as far as I can see. 
> but does "yield from expr, expr" mean anything? In my current grammar, it's a syntax error on its own, but 'f(yield from x, y)' parses as 'f((yield from x), y)', which seems like a reasonable interpretation to me. What's not quite so reasonable is that if you have an expression such as f(x) + g(y) and you decide to turn f into a generator, the obvious way to rewrite it would be yield from f(x) + g(y) but that unfortunately parses as yield from (f(x) + g(y)) If I'd thought about this more at the time, I would probably have tried to make the argument to yield-from something further down in the expression hierarchy, such as a power. That might be tricky to achieve while keeping the existing behaviour of 'yield', though. -- Greg From jcea at jcea.es Wed Nov 9 23:21:35 2011 From: jcea at jcea.es (Jesus Cea) Date: Wed, 09 Nov 2011 23:21:35 +0100 Subject: [Python-Dev] Merging 3.2 to 3.3 is messy because "Misc/NEWS" In-Reply-To: References: <4EB94F97.6020002@jcea.es> Message-ID: <4EBAFCEF.5040004@jcea.es> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On 08/11/11 19:51, Terry Reedy wrote: > If a bug is fixed in 3.2.latest, then it will not be new in 3.3.0, > so perhaps it should not be added there. NEWS could just refer back > to previous sections. Then 3.3.0 News would only be new features > and the occasional ambiguous item not fixed before. I am confused. My usual usage case is this: 1. I fix something in 3.2. 2. I merge that fix into 3.3. Everything goes smooth except Misc/NEWS. 3. Recover the original 3.3 Misc/NEWS, and add manually what I added to the 3.2 Misc/NEWS. Mark the file as "resolved" and commit. I would like to avoid (3). A custom merge script is an option, but seems complicated and error prone. I have the feeling that structuring Misc/NEWS in the right way could solve it automatically. Something like adding patches to be up-ported/merged at the end of the section, 3.3 only patches be added at the beginning, so they don't conflict. 
A bit of discipline and, voila, automatic flawless merges! :-) - -- Jesus Cea Avion _/_/ _/_/_/ _/_/_/ jcea at jcea.es - http://www.jcea.es/ _/_/ _/_/ _/_/ _/_/ _/_/ jabber / xmpp:jcea at jabber.org _/_/ _/_/ _/_/_/_/_/ . _/_/ _/_/ _/_/ _/_/ _/_/ "Things are not so easy" _/_/ _/_/ _/_/ _/_/ _/_/ _/_/ "My name is Dump, Core Dump" _/_/_/ _/_/_/ _/_/ _/_/ "El amor es poner tu felicidad en la felicidad de otro" - Leibniz -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.10 (GNU/Linux) Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/ iQCVAwUBTrr87plgi5GaxT1NAQL5GgQAnKsWb4TM1oXXo4Dg84XFIHoKpxQQwRWq oKFIaddNOaZ3wp+ccR0G2aoi+LX2BrsEn3sBL7RXRXVFPludGDvonWcvHar/2DLw E52jDytiMd0gED5TkyqPdck3s6NhUCaZz1qfncI9jHkb2/rznXiBK0mLD+suRleu f+AQ6yoPD2o= =ruC4 -----END PGP SIGNATURE----- From jcea at jcea.es Wed Nov 9 23:24:33 2011 From: jcea at jcea.es (Jesus Cea) Date: Wed, 09 Nov 2011 23:24:33 +0100 Subject: [Python-Dev] Merging 3.2 to 3.3 is messy because "Misc/NEWS" In-Reply-To: References: <4EB94F97.6020002@jcea.es> Message-ID: <4EBAFDA1.3070505@jcea.es> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On 09/11/11 12:28, Nick Coghlan wrote: > The 3.2.x maintenance release sections are not present in the 3.3 > NEWS file - it jumps straight from 3.2 to 3.3a1. > > So bug fixes should be recorded in both places - the 3.3a1 notes > record the deltas against 3.2.0, not against whatever the latest > release of 3.2 happens to be when 3.3 is released. But fixes in 3.2.x should be incorporated to 3.3, and be in the Misc/NEWS. The general rule is that everything committed to 3.2 is merged to 3.3 (with very few exceptions, managed via dummy merges, like changing the version from "3.2.2" to "3.2.3") - -- Jesus Cea Avion _/_/ _/_/_/ _/_/_/ jcea at jcea.es - http://www.jcea.es/ _/_/ _/_/ _/_/ _/_/ _/_/ jabber / xmpp:jcea at jabber.org _/_/ _/_/ _/_/_/_/_/ . 
_/_/ _/_/ _/_/ _/_/ _/_/ "Things are not so easy" _/_/ _/_/ _/_/ _/_/ _/_/ _/_/ "My name is Dump, Core Dump" _/_/_/ _/_/_/ _/_/ _/_/ "El amor es poner tu felicidad en la felicidad de otro" - Leibniz -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.10 (GNU/Linux) Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/ iQCVAwUBTrr9oZlgi5GaxT1NAQIPUAP/YQJ4gG1ES0x2LiFN8Hinvk9snUqHJMjC GrgjTGeqAcFBt6vZxVC7UujKwync4BSVPtXX/Fogzj/P3yjN2hNRf/YoL5kHqIID W8HdY0n8ncfiy3ekgpa3i+8Ie4lnSw4OxnxEZMnGkKoH38HCPaRmH1yvcGQsy2yL ZgTwVOM/eUk= =bNy+ -----END PGP SIGNATURE----- From solipsis at pitrou.net Wed Nov 9 23:32:58 2011 From: solipsis at pitrou.net (Antoine Pitrou) Date: Wed, 9 Nov 2011 23:32:58 +0100 Subject: [Python-Dev] Merging 3.2 to 3.3 is messy because "Misc/NEWS" References: <4EB94F97.6020002@jcea.es> <4EBAFCEF.5040004@jcea.es> Message-ID: <20111109233258.52fbc08e@pitrou.net> On Wed, 09 Nov 2011 23:21:35 +0100 Jesus Cea wrote: > > On 08/11/11 19:51, Terry Reedy wrote: > > If a bug is fixed in 3.2.latest, then it will not be new in 3.3.0, > > so perhaps it should not be added there. NEWS could just refer back > > to previous sections. Then 3.3.0 News would only be new features > > and the occasional ambiguous item not fixed before. > > I am confused. My usual usage case is this: > > 1. I fix something in 3.2. > 2. I merge that fix into 3.3. Everything goes smooth except Misc/NEWS. > 3. Recover the original 3.3 Misc/NEWS, and add manually what I added > to the 3.2 Misc/NEWS. Mark the file as "resolved" and commit. > > I would like to avoid (3). You can avoid (3) by resolving the conflict by hand instead. It isn't as smooth as if the merge had gone through without any conflict, but I find it relatively painless in practice. FYI, Twisted has chosen a totally different approach: http://twistedmatrix.com/trac/wiki/ReviewProcess#Newsfiles ?[...] If we just let each author add to the NEWS files on every commit, though, we would run into lots of spurious conflicts. 
To avoid this, we have come up with a scheme involving separate files for each change. Changes must be accompanied by an entry in at least one topfiles directory. [...] An entry must be a file named <ticket number>.<change type>. You should replace <ticket number> with the ticket number which is being resolved by the change (if multiple tickets are resolved, multiple files with the same contents should be added)" Regards Antoine. From timothy.c.delaney at gmail.com Wed Nov 9 23:43:28 2011 From: timothy.c.delaney at gmail.com (Tim Delaney) Date: Thu, 10 Nov 2011 09:43:28 +1100 Subject: [Python-Dev] [SPAM: 3.000] [issue11682] PEP 380 reference implementation for 3.3 In-Reply-To: <4EBAFAFB.3010406@canterbury.ac.nz> References: <1320833824.19.0.616075685755.issue11682@psf.upfronthosting.co.za> <4EBA5CFD.1080500@canterbury.ac.nz> <4EBAFAFB.3010406@canterbury.ac.nz> Message-ID: On 10 November 2011 09:13, Greg Ewing wrote: > This is patently untrue, because my version of the grammar > allows 'f(yield from x)', while disallowing 'f(yield x)'. > > I made a conscious decision to do that, and I'm a bit alarmed > at this decision being overridden at the last moment with no > debate. We have precedent for being more restrictive initially, and relaxing those restrictions later. I suggest that the more restrictive implementation go in now so that people can start playing with it. If the discussion comes to a consensus on more relaxed syntax, that can be added later (either in 3.3 or a later release). Tim Delaney -------------- next part -------------- An HTML attachment was scrubbed...
URL: From martin at v.loewis.de Wed Nov 9 23:45:16 2011 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Wed, 09 Nov 2011 23:45:16 +0100 Subject: [Python-Dev] unicode_internal codec and the PEP 393 In-Reply-To: <201111092249.35994.victor.stinner@haypocalc.com> References: <2727750.c5cKp8gfbm@dsk000552> <4EBAEAB8.2060509@v.loewis.de> <201111092249.35994.victor.stinner@haypocalc.com> Message-ID: <4EBB027C.2090503@v.loewis.de> > After a quick search on Google codesearch (before it disappears!), I don't > think that "encoding" a Unicode string to its internal PEP-393 representation > would satisfy any program. It looks like wchar_t* is a better candidate. Ok. Making it Py_UNICODE, documenting that, and deprecating the encoding sounds fine to me as well. Regards, Martin From ncoghlan at gmail.com Wed Nov 9 23:58:42 2011 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 10 Nov 2011 08:58:42 +1000 Subject: [Python-Dev] PEP 405 (proposed): Python 2.8 Release Schedule In-Reply-To: <20111109165511.19b8d1ee@limelight.wooz.org> References: <20111109111457.2f695e3a@limelight.wooz.org> <20111109165511.19b8d1ee@limelight.wooz.org> Message-ID: On Thu, Nov 10, 2011 at 7:55 AM, Barry Warsaw wrote: >>+1 for Cardinal Biggles as release manager. Now you need to persuade Vinay to let you trade PEP numbers with the pyvenv PEP. Having an unrelease schedule as PEP 404 is too good an opportunity to pass up :) Getting boring for a moment, I suggest including the following new section just before the copyright section:

And Now For Something Completely Different
==========================================

Sorry, sorry, that's just being too silly. While the language may be named after a British comedy troupe (and the overall tone of this PEP reflects that), there are some serious reasons that explain why there won't be an official 2.8 release from the CPython development team.
If a search for "Python 2.8" brought you to this document, you may not be aware of the underlying problems in the design of Python 2.x that led to the creation of the 3.x series. First and foremost, Python 2.x is a language with ASCII text at its core. The main text manipulation interfaces, the standard I/O stack and many other elements of the standard library are built around that assumption. While Unicode is supported, it's quite clearly an add-on feature rather than something that is fundamental to the language. Python 3.x changes that core assumption, instead building the language around Unicode text. This affects the builtin ``str`` type (which is now Unicode text rather than 8-bit data), the standard I/O stack (which now supports Unicode encoding concepts directly), what identifier and module names are legal (with most Unicode alphanumeric characters being supported) and several other aspects of the language. With the text handling and associated I/O changes breaking backwards compatibility *anyway*, Guido took the opportunity to finally eliminate some other design defects in Python 2.x that had been preserved solely for backwards compatibility reasons. These changes include:

- complete removal of support for "classic" (i.e. pre-2.2 style) class semantics
- the separate ``int`` (machine level integer) and ``long`` (arbitrarily large) integer types have been merged into a single ``int`` type (that supports arbitrarily large values)
- integer division now promotes non-integer results to binary floating values automatically
- the error prone ``except Exception, exc:`` syntax has been removed (in favour of the more explicit ``except Exception as exc:``)
- ``print`` and ``exec`` are now ordinary functions rather than statements
- the backtick based ```x``` alternate spelling of ``repr(x)`` has been removed
- the ``<>`` alternate spelling of ``!=`` has been removed
- implicit relative imports have been removed
- star imports (i.e.
``from x import *``) are now permitted only in module level code
- implicit ordering comparisons between objects of different types have been removed
- list comprehensions no longer leak their iteration variables into the surrounding scope
- many APIs that previously returned lists now return iterators or lightweight views instead (e.g. ``map`` produces an iterator, ``range`` creates a virtual sequence, ``dict.keys`` a view of the original dict)
- iterator advancement is now via a protocol-based builtin (``next()`` invoking ``__next__()``) rather than an ordinary method call
- some rarely needed builtins have been relocated to standard library modules (``reduce`` is now ``functools.reduce``, ``reload`` is now ``imp.reload``)
- some areas of the standard library have been rearranged in an attempt to make the naming schemes more intuitive

More details on the backwards incompatible changes relative to the 2.x series can be found in the `Python 3.0 What's New`_ document. With the 3.x Unicode based architecture providing a significantly better foundation for a language with a global audience, all new features will appear solely in the Python 3.x series. However, as detailed elsewhere, the 2.7 release will still be supported with bug fixes and maintenance releases for several years.

.. _`Python 3.0 What's New`: http://docs.python.org/py3k/whatsnew/3.0.html

-- Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia From guido at python.org Thu Nov 10 00:11:22 2011 From: guido at python.org (Guido van Rossum) Date: Wed, 9 Nov 2011 15:11:22 -0800 Subject: [Python-Dev] [SPAM: 3.000] [issue11682] PEP 380 reference implementation for 3.3 In-Reply-To: <4EBAFAFB.3010406@canterbury.ac.nz> References: <1320833824.19.0.616075685755.issue11682@psf.upfronthosting.co.za> <4EBA5CFD.1080500@canterbury.ac.nz> <4EBAFAFB.3010406@canterbury.ac.nz> Message-ID: On Wed, Nov 9, 2011 at 2:13 PM, Greg Ewing wrote: > Guido van Rossum wrote: >> >> I see this as inevitable.
By the time the parser sees 'yield' it has >> made its choices; the 'from' keyword cannot modify that. So whenever >> "yield expr" must be parenthesized, "yield from expr" must too. > > This is patently untrue, because my version of the grammar > allows 'f(yield from x)', while disallowing 'f(yield x)'. > > I made a conscious decision to do that, and I'm a bit alarmed > at this decision being overridden at the last moment with no > debate. We're having the debate now. :-) I can't find anywhere in the PEP where it says what the operator priority of "yield from" is, so you can't blame me for thinking the priority should be the same as for "yield". >> At the same time, "yield expr, expr" works; > > Um, no, that's a syntax error in any context, as far as I > can see. Actually it is valid, meaning "yield (expr, expr)" in any context where "yield expr" is valid (please let me know if there are any contexts where that isn't true):

>>> def foo():
...     yield 1, 1
...
>>> def foo():
...     if (yield 1, 1): pass
...
>>> def foo():
...     bar((yield 1, 1))
...
>>> def foo():
...     x = yield 1, 1
...

>> but does "yield from expr, expr" mean anything? > > In my current grammar, it's a syntax error on its own, > but 'f(yield from x, y)' parses as 'f((yield from x), y)', > which seems like a reasonable interpretation to me. Once you realize that "yield from x, y" has no meaning, sure. But without thinking deeper about that I can't prove that we'll never find a meaning for it. We had a similar limitation for "with a, b:" -- initially it was illegal, eventually we gave it a meaning. > What's not quite so reasonable is that if you have an > expression such as > > >     f(x) + g(y) > > and you decide to turn f into a generator, the obvious > way to rewrite it would be > > >     yield from f(x) + g(y) > > but that unfortunately parses as > > >     yield from (f(x) + g(y)) Well duh.
:-) > If I'd thought about this more at the time, I would > probably have tried to make the argument to yield-from > something further down in the expression hierarchy, > such as a power. That might be tricky to achieve > while keeping the existing behaviour of 'yield', > though. IMO that would be the wrong priority for a keyword-based operator. Note that 'not' also has a very low priority. -- --Guido van Rossum (python.org/~guido) From barry at python.org Thu Nov 10 01:05:14 2011 From: barry at python.org (Barry Warsaw) Date: Wed, 9 Nov 2011 19:05:14 -0500 Subject: [Python-Dev] PEP 405 (proposed): Python 2.8 Release Schedule In-Reply-To: References: <20111109111457.2f695e3a@limelight.wooz.org> <20111109165511.19b8d1ee@limelight.wooz.org> Message-ID: <20111109190514.712f987e@limelight.wooz.org> On Nov 10, 2011, at 08:58 AM, Nick Coghlan wrote: >Now you need to persuade Vinay to let you trade PEP numbers with the >pyvenv PEP. Having an unrelease schedule as PEP 404 is too good an >opportunity to pass up :) Brilliant suggestion! Vinay? :) >Getting boring for a moment, I suggest including the following new >section just before the copyright section: You have a very good point. I think I'd like a shorter 'serious' section though. Let me see what I can put together. -Barry -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 836 bytes Desc: not available URL: From greg.ewing at canterbury.ac.nz Thu Nov 10 01:13:06 2011 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Thu, 10 Nov 2011 13:13:06 +1300 Subject: [Python-Dev] PEP 405 (proposed): Python 2.8 Release Schedule In-Reply-To: References: <20111109111457.2f695e3a@limelight.wooz.org> Message-ID: <4EBB1712.1000509@canterbury.ac.nz> On 10/11/11 05:18, Amaury Forgeot d'Arc wrote: > Do we need to designate a release manager? I nominate John Cleese. Although he's undoubtedly a busy man, this shouldn't take up too much of his time. 
-- Greg From timothy.c.delaney at gmail.com Thu Nov 10 01:15:14 2011 From: timothy.c.delaney at gmail.com (Tim Delaney) Date: Thu, 10 Nov 2011 11:15:14 +1100 Subject: [Python-Dev] PEP 405 (proposed): Python 2.8 Release Schedule In-Reply-To: <20111109190514.712f987e@limelight.wooz.org> References: <20111109111457.2f695e3a@limelight.wooz.org> <20111109165511.19b8d1ee@limelight.wooz.org> <20111109190514.712f987e@limelight.wooz.org> Message-ID: On 10 November 2011 11:05, Barry Warsaw wrote: > On Nov 10, 2011, at 08:58 AM, Nick Coghlan wrote: > > >Now you need to persuade Vinay to let you trade PEP numbers with the > >pyvenv PEP. Having an unrelease schedule as PEP 404 is too good an > >opportunity to pass up :) > > Brilliant suggestion! Vinay? :) 410 Gone would be more appropriate IMO. But 404 does have more mindshare. Tim Delaney -------------- next part -------------- An HTML attachment was scrubbed... URL: From greg.ewing at canterbury.ac.nz Thu Nov 10 01:34:30 2011 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Thu, 10 Nov 2011 13:34:30 +1300 Subject: [Python-Dev] [SPAM: 3.000] [issue11682] PEP 380 reference implementation for 3.3 In-Reply-To: References: <1320833824.19.0.616075685755.issue11682@psf.upfronthosting.co.za> <4EBA5CFD.1080500@canterbury.ac.nz> <4EBAFAFB.3010406@canterbury.ac.nz> Message-ID: <4EBB1C16.6050901@canterbury.ac.nz> On 10/11/11 12:11, Guido van Rossum wrote: > Actually it is valid, meaning "yield (expr, expr)" in any context > where "yield expr" is valid Hmmm, it seems you're right. I was testing it using my patched yield-from version of Python, where it has apparently become a syntax error. I didn't mean to break it, sorry! 
-- Greg From greg.ewing at canterbury.ac.nz Thu Nov 10 01:35:10 2011 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Thu, 10 Nov 2011 13:35:10 +1300 Subject: [Python-Dev] [SPAM: 3.000] [issue11682] PEP 380 reference implementation for 3.3 In-Reply-To: References: <1320833824.19.0.616075685755.issue11682@psf.upfronthosting.co.za> <4EBA5CFD.1080500@canterbury.ac.nz> <4EBAFAFB.3010406@canterbury.ac.nz> Message-ID: <4EBB1C3E.20300@canterbury.ac.nz> On 10/11/11 11:43, Tim Delaney wrote: > We have precedent for being more restrictive initially, and relaxing those > restrictions later. > > I suggest that the more restrictive implementation go in now so that people > can start playing with it. If the discussion comes to a consensus on more > relaxed syntax, that can be added later (either in 3.3 or a later release). That's fair enough. I'll shut up now. -- Greg From ncoghlan at gmail.com Thu Nov 10 02:42:10 2011 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 10 Nov 2011 11:42:10 +1000 Subject: [Python-Dev] Merging 3.2 to 3.3 is messy because "Misc/NEWS" In-Reply-To: <20111109233258.52fbc08e@pitrou.net> References: <4EB94F97.6020002@jcea.es> <4EBAFCEF.5040004@jcea.es> <20111109233258.52fbc08e@pitrou.net> Message-ID: On Thu, Nov 10, 2011 at 8:32 AM, Antoine Pitrou wrote: > FYI, Twisted has chosen a totally different approach: > http://twistedmatrix.com/trac/wiki/ReviewProcess#Newsfiles > > ?[...] If we just let each author add to the NEWS files on every > commit, though, we would run into lots of spurious conflicts. To avoid > this, we have come up with a scheme involving separate files for each > change. > > Changes must be accompanied by an entry in at least one topfiles > directory. [...] > An entry must be a file named .. You should > replace with the ticket number which is being resolved > by the change (if multiple tickets are resolved, multiple files with > the same contents should be added)? 
An approach like that would also have the virtue of avoiding conflicts if you set up a NEWS file change in a feature branch that ends up being in use for longer than you originally thought. Cheers, Nick. -- Nick Coghlan?? |?? ncoghlan at gmail.com?? |?? Brisbane, Australia From ncoghlan at gmail.com Thu Nov 10 02:50:34 2011 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 10 Nov 2011 11:50:34 +1000 Subject: [Python-Dev] [SPAM: 3.000] [issue11682] PEP 380 reference implementation for 3.3 In-Reply-To: <4EBB1C3E.20300@canterbury.ac.nz> References: <1320833824.19.0.616075685755.issue11682@psf.upfronthosting.co.za> <4EBA5CFD.1080500@canterbury.ac.nz> <4EBAFAFB.3010406@canterbury.ac.nz> <4EBB1C3E.20300@canterbury.ac.nz> Message-ID: On Thu, Nov 10, 2011 at 10:35 AM, Greg Ewing wrote: > On 10/11/11 11:43, Tim Delaney wrote: >> >> We have precedent for being more restrictive initially, and relaxing those >> restrictions later. >> >> I suggest that the more restrictive implementation go in now so that >> people >> can start playing with it. If the discussion comes to a consensus on more >> relaxed syntax, that can be added later (either in 3.3 or a later >> release). > > That's fair enough. I'll shut up now. No worries - given the dance you had to go through in the Grammar file to make it work in the first place, I should have realised you'd done it deliberately. (The mention of the 'yield_from' node in the doc patch I was reviewing is actually what got me looking into this). As I said earlier, I'd actually be amenable to making it legal to omit the extra parentheses for both yield & yield from in the single argument case where there's no ambiguity (following the generator expression precedent), but that's a tricky change given the parser limitations. The way your patch tried to do it also allowed "f(yield from x, 1)" which strikes me as being far too confusing to a human reader, even if the parser understands it. Cheers, Nick. -- Nick Coghlan?? |?? 
ncoghlan at gmail.com?? |?? Brisbane, Australia From greg.ewing at canterbury.ac.nz Thu Nov 10 04:33:11 2011 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Thu, 10 Nov 2011 16:33:11 +1300 Subject: [Python-Dev] [SPAM: 3.000] [issue11682] PEP 380 reference implementation for 3.3 In-Reply-To: References: <1320833824.19.0.616075685755.issue11682@psf.upfronthosting.co.za> <4EBA5CFD.1080500@canterbury.ac.nz> <4EBAFAFB.3010406@canterbury.ac.nz> <4EBB1C3E.20300@canterbury.ac.nz> Message-ID: <4EBB45F7.1040102@canterbury.ac.nz> On 10/11/11 14:50, Nick Coghlan wrote: > I'd actually be amenable to making it legal to omit > the extra parentheses for both yield& yield from in the single > argument case where there's no ambiguity... > > The way your patch tried to do it also allowed "f(yield > from x, 1)" which strikes me as being far too confusing Since 'yield from' is intended mainly for delegating to another generator, the 'x' there will usually be a function call, so you'll be looking at something like f(yield from g(x), 1) which doesn't look very confusing to me, but maybe I'm not impartial enough to judge. In any case, I'm now pursuing cofunctions as a better way of doing lightweight threading, so this issue probably doesn't matter so much. -- Greg From stephen at xemacs.org Thu Nov 10 04:49:48 2011 From: stephen at xemacs.org (Stephen J. Turnbull) Date: Thu, 10 Nov 2011 12:49:48 +0900 Subject: [Python-Dev] Merging 3.2 to 3.3 is messy because "Misc/NEWS" In-Reply-To: <4EBAFCEF.5040004@jcea.es> References: <4EB94F97.6020002@jcea.es> <4EBAFCEF.5040004@jcea.es> Message-ID: <87ty6cmyar.fsf@uwakimon.sk.tsukuba.ac.jp> Jesus Cea writes: > A bit of discipline and, voila, automatic flawless merges! :-) Doesn't work that way, sorry. The discipline required is isomorphic to that required by threaded programs. See multiple threads on python-ideas for why that is less than automatic or flawless. :-) Antoine's suggestion looks to be analogous to STM. 
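[Editorial aside: the per-change news-file scheme described earlier in this thread (each fix recorded in its own ``<ticket>.<type>`` file, aggregated into NEWS at release time) can be sketched roughly as below. This is only an illustration of the idea: the file-naming convention, section headers, and aggregation logic are assumptions, not Twisted's actual ``topfiles`` tooling or anything that exists in CPython.]

```python
import os
import tempfile

# Map a change-type suffix to a NEWS section header (illustrative only).
TYPE_HEADERS = [("feature", "New Features"), ("bugfix", "Bug Fixes")]

def build_news(topfiles_dir):
    """Aggregate <ticket>.<type> files into a single NEWS fragment.

    Because every change lives in its own file, two branches fixing
    different issues never edit the same line of a shared NEWS file,
    so merges between them cannot conflict.
    """
    entries = {kind: [] for kind, _ in TYPE_HEADERS}
    for name in sorted(os.listdir(topfiles_dir)):
        ticket, sep, kind = name.partition(".")
        if not sep or kind not in entries:
            continue  # skip files that don't follow the naming scheme
        with open(os.path.join(topfiles_dir, name)) as f:
            entries[kind].append("- Issue #%s: %s" % (ticket, f.read().strip()))
    lines = []
    for kind, header in TYPE_HEADERS:
        if entries[kind]:
            lines += [header, "-" * len(header)]
            lines += entries[kind]
            lines.append("")
    return "\n".join(lines)

# Two independent changes, recorded without touching a shared NEWS file
# (issue numbers borrowed from this week's tracker summary):
with tempfile.TemporaryDirectory() as d:
    with open(os.path.join(d, "13344.bugfix"), "w") as f:
        f.write("closed sockets raise EBADF again.\n")
    with open(os.path.join(d, "13374.feature"), "w") as f:
        f.write("deprecate the Windows ANSI API in the nt module.\n")
    news = build_news(d)
print(news)
```

The release script then prepends the generated fragment to Misc/NEWS and deletes the per-change files, so the next release starts from an empty directory.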
From solipsis at pitrou.net Thu Nov 10 04:52:58 2011 From: solipsis at pitrou.net (Antoine Pitrou) Date: Thu, 10 Nov 2011 04:52:58 +0100 Subject: [Python-Dev] Merging 3.2 to 3.3 is messy because "Misc/NEWS" References: <4EB94F97.6020002@jcea.es> <4EBAFCEF.5040004@jcea.es> <87ty6cmyar.fsf@uwakimon.sk.tsukuba.ac.jp> Message-ID: <20111110045258.0ce58174@pitrou.net> On Thu, 10 Nov 2011 12:49:48 +0900 "Stephen J. Turnbull" wrote: > > > A bit of discipline and, voila, automatic flawless merges! :-) > > Doesn't work that way, sorry. The discipline required is isomorphic to > that required by threaded programs. See multiple threads on > python-ideas for why that is less than automatic or flawless. :-) > > Antoine's suggestion looks to be analogous to STM. It's more like shared-nothing, actually. (but that's not my suggestion, just a mention of how another project deals with the issue; I find the current situation quite manageable myself) Regards Antoine. From p.f.moore at gmail.com Thu Nov 10 09:16:08 2011 From: p.f.moore at gmail.com (Paul Moore) Date: Thu, 10 Nov 2011 08:16:08 +0000 Subject: [Python-Dev] [SPAM: 3.000] [issue11682] PEP 380 reference implementation for 3.3 In-Reply-To: References: <1320833824.19.0.616075685755.issue11682@psf.upfronthosting.co.za> <4EBA5CFD.1080500@canterbury.ac.nz> <4EBAFAFB.3010406@canterbury.ac.nz> Message-ID: On 9 November 2011 23:11, Guido van Rossum wrote: > On Wed, Nov 9, 2011 at 2:13 PM, Greg Ewing wrote: >> In my current grammar, it's a syntax error on its own, >> but 'f(yield from x, y)' parses as 'f((yield from x), y)', >> which seems like a reasonable interpretation to me. > > Once you realize that "yield from x, y" has no meaning, sure. But > without thinking deeper about that I can't prove that we'll never find > a meaning for it. We had a similar limitation for "with a, b:" -- > initially it was illegal, eventually we gave it a meaning. 
Without the context of this thread, my immediate thought would be that yield from x, y is some sort of chaining construct. But I have no vested interest in arguing for this, it's just for information. Paul From techtonik at gmail.com Thu Nov 10 12:09:31 2011 From: techtonik at gmail.com (anatoly techtonik) Date: Thu, 10 Nov 2011 14:09:31 +0300 Subject: [Python-Dev] PEP 405 (proposed): Python 2.8 Release Schedule In-Reply-To: References: <20111109111457.2f695e3a@limelight.wooz.org> <20111109165511.19b8d1ee@limelight.wooz.org> Message-ID: On Thu, Nov 10, 2011 at 1:58 AM, Nick Coghlan wrote: > > Getting boring for a moment, I suggest including the following new > section just before the copyright section: I'd also include a "roadmap" section with all 2.x wannabes that are not going to be be released with 2.8. And a special epilogue chapter listing all missing stdlib wannabes and fixes that were never implemented, because they break backward 2.x compatibility. -- anatoly t. From vinay_sajip at yahoo.co.uk Thu Nov 10 15:17:59 2011 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Thu, 10 Nov 2011 14:17:59 +0000 (UTC) Subject: [Python-Dev] PEP 405 (proposed): Python 2.8 Release Schedule References: <20111109111457.2f695e3a@limelight.wooz.org> <20111109165511.19b8d1ee@limelight.wooz.org> <20111109190514.712f987e@limelight.wooz.org> Message-ID: Barry Warsaw python.org> writes: > > On Nov 10, 2011, at 08:58 AM, Nick Coghlan wrote: > > >Now you need to persuade Vinay to let you trade PEP numbers with the > >pyvenv PEP. Having an unrelease schedule as PEP 404 is too good an > >opportunity to pass up :) > > Brilliant suggestion! Vinay? :) > Actually you need Carl Meyer's agreement, not mine - he's the one writing the PEP. 
But I'm in favour :-) Regards, Vinay Sajip From carl at oddbird.net Thu Nov 10 18:46:14 2011 From: carl at oddbird.net (Carl Meyer) Date: Thu, 10 Nov 2011 10:46:14 -0700 Subject: [Python-Dev] PEP 405 (proposed): Python 2.8 Release Schedule In-Reply-To: References: <20111109111457.2f695e3a@limelight.wooz.org> <20111109165511.19b8d1ee@limelight.wooz.org> <20111109190514.712f987e@limelight.wooz.org> Message-ID: <4EBC0DE6.9080106@oddbird.net> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On 11/10/2011 07:17 AM, Vinay Sajip wrote: > Barry Warsaw python.org> writes: >> On Nov 10, 2011, at 08:58 AM, Nick Coghlan wrote: >>> Now you need to persuade Vinay to let you trade PEP numbers with the >>> pyvenv PEP. Having an unrelease schedule as PEP 404 is too good an >>> opportunity to pass up :) >> >> Brilliant suggestion! Vinay? :) >> > > Actually you need Carl Meyer's agreement, not mine - he's the one writing the > PEP. But I'm in favour :-) No objection here. Carl -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.10 (GNU/Linux) Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/ iEYEARECAAYFAk68DeYACgkQ8W4rlRKtE2c9pACgvYw22k3HQOgjmRjNk+F5AdW4 QIcAoLgzdPb8PNNHqqEdGYWGeMp0lD3I =u9HS -----END PGP SIGNATURE----- From eliben at gmail.com Fri Nov 11 09:39:33 2011 From: eliben at gmail.com (Eli Bendersky) Date: Fri, 11 Nov 2011 10:39:33 +0200 Subject: [Python-Dev] order of Misc/ACKS Message-ID: The PS: at the top of Misc/ACKS says: PS: In the standard Python distribution, this file is encoded in UTF-8 and the list is in rough alphabetical order by last names. However, the last 3 names in the list don't appear to be part of that alphabetical order. Is this somehow intentional, or just a mistake? 
Eli From ezio.melotti at gmail.com Fri Nov 11 10:56:31 2011 From: ezio.melotti at gmail.com (Ezio Melotti) Date: Fri, 11 Nov 2011 11:56:31 +0200 Subject: [Python-Dev] order of Misc/ACKS In-Reply-To: References: Message-ID: <4EBCF14F.7090807@gmail.com> Hi, On 11/11/2011 10.39, Eli Bendersky wrote: > The PS: at the top of Misc/ACKS says: > > PS: In the standard Python distribution, this file is encoded in UTF-8 > and the list is in rough alphabetical order by last names. > > However, the last 3 names in the list don't appear to be part of that > alphabetical order. Is this somehow intentional, or just a mistake? Only the last two are out of place, and should be fixed. The 'Å' in "Peter Åstrand" sorts after 'Z'. See http://mail.python.org/pipermail/python-dev/2010-August/102961.html for a discussion about the order of Misc/ACKS. Best Regards, Ezio Melotti > Eli > From martin at v.loewis.de Fri Nov 11 11:59:33 2011 From: martin at v.loewis.de (=?UTF-8?B?Ik1hcnRpbiB2LiBMw7Z3aXMi?=) Date: Fri, 11 Nov 2011 11:59:33 +0100 Subject: [Python-Dev] order of Misc/ACKS In-Reply-To: <4EBCF14F.7090807@gmail.com> References: <4EBCF14F.7090807@gmail.com> Message-ID: <4EBD0015.1010805@v.loewis.de> Am 11.11.2011 10:56, schrieb Ezio Melotti: > Hi, > > On 11/11/2011 10.39, Eli Bendersky wrote: >> The PS: at the top of Misc/ACKS says: >> >> PS: In the standard Python distribution, this file is encoded in UTF-8 >> and the list is in rough alphabetical order by last names. >> >> However, the last 3 names in the list don't appear to be part of that >> alphabetical order. Is this somehow intentional, or just a mistake? > > Only the last two are out of place, and should be fixed. The 'Å' in > "Peter Åstrand" sorts after 'Z'. > See http://mail.python.org/pipermail/python-dev/2010-August/102961.html > for a discussion about the order of Misc/ACKS. The key point here is that it is *rough* alphabetic order.
IMO, sorting accented characters along with their unaccented versions would be fine as well, and be more practical. In general, it's not possible to provide a "correct" alphabetic order. For example, in German, 'ö' sorts after 'o', whereas in Swedish, it sorts after 'z'. In fact, in German, we have two different ways of sorting the ö: one is to treat it as a letter after o, and the other is to treat it as equivalent to oe. Regards, Martin From victor.stinner at haypocalc.com Fri Nov 11 13:05:14 2011 From: victor.stinner at haypocalc.com (Victor Stinner) Date: Fri, 11 Nov 2011 13:05:14 +0100 Subject: [Python-Dev] unicode_internal codec and the PEP 393 In-Reply-To: <4EBB027C.2090503@v.loewis.de> References: <2727750.c5cKp8gfbm@dsk000552> <4EBAEAB8.2060509@v.loewis.de> <201111092249.35994.victor.stinner@haypocalc.com> <4EBB027C.2090503@v.loewis.de> Message-ID: <4EBD0F7A.8000101@haypocalc.com> Le 09/11/2011 23:45, "Martin v. Löwis" a écrit : >> After a quick search on Google codesearch (before it disappears!), I don't >> think that "encoding" a Unicode string to its internal PEP-393 representation >> would satisfy any program. It looks like wchar_t* is a better candidate. > > Ok. Making it Py_UNICODE, documenting that, and deprecating the encoding > sounds fine to me as well. Done. Victor From eliben at gmail.com Fri Nov 11 13:12:18 2011 From: eliben at gmail.com (Eli Bendersky) Date: Fri, 11 Nov 2011 14:12:18 +0200 Subject: [Python-Dev] order of Misc/ACKS In-Reply-To: <4EBD0015.1010805@v.loewis.de> References: <4EBCF14F.7090807@gmail.com> <4EBD0015.1010805@v.loewis.de> Message-ID: > The key point here is that it is *rough* alphabetic order. IMO, sorting > accented characters along with their unaccented versions would be fine > as well, and be more practical. In general, it's not possible to provide > a "correct" alphabetic order. For example, in German, 'ö' sorts after > 'o', whereas in Swedish, it sorts after 'z'.
In fact, in German, we have > two different ways of sorting the ö: one is to treat it as a letter > after o, and the other is to treat it as equivalent to oe. This is really interesting. I guess lexical ordering of alphabet letters is a locale thing, but Misc/ACKS isn't supposed to be any special locale. It makes me wonder whether it's possible to have a contradiction in the ordering, i.e. have a set of names that just can't be sorted in any order acceptable by everyone. We can then call it "the Misc/ACKS incompleteness theorem" ;-) Eli From status at bugs.python.org Fri Nov 11 18:07:28 2011 From: status at bugs.python.org (Python tracker) Date: Fri, 11 Nov 2011 18:07:28 +0100 (CET) Subject: [Python-Dev] Summary of Python tracker Issues Message-ID: <20111111170728.4EDA31CDDA@psf.upfronthosting.co.za> ACTIVITY SUMMARY (2011-11-04 - 2011-11-11) Python tracker at http://bugs.python.org/ To view or respond to any of the issues listed below, click on the issue. Do NOT respond to this message. Issues counts and deltas: open 3110 ( -8) closed 22056 (+50) total 25166 (+42) Open issues with patches: 1325 Issues opened (19) ================== #13340: list.index does not accept None as start or stop http://bugs.python.org/issue13340 reopened by rhettinger #13344: closed sockets don't raise EBADF anymore http://bugs.python.org/issue13344 opened by pitrou #13346: re.split() should behave like string.split() for maxsplit=0 an http://bugs.python.org/issue13346 opened by acg #13348: test_unicode_file fails: shutil.copy2 says "same file" http://bugs.python.org/issue13348 opened by flox #13349: Uninformal error message in index() and remove() functions http://bugs.python.org/issue13349 opened by petri.lehtinen #13354: tcpserver should document non-threaded long-living connections http://bugs.python.org/issue13354 opened by shevek #13355: random.triangular error when low = mode http://bugs.python.org/issue13355 opened by mark108 #13357: HTMLParser parses attributes incorrectly.
http://bugs.python.org/issue13357 opened by Michael.Brooks #13358: HTMLParser incorrectly handles cdata elements. http://bugs.python.org/issue13358 opened by Michael.Brooks #13359: urllib2 doesn't escape spaces in http requests http://bugs.python.org/issue13359 opened by davide.rizzo #13368: Possible problem in documentation of module subprocess, method http://bugs.python.org/issue13368 opened by eli.bendersky #13369: timeout with exit code 0 while re-running failed tests http://bugs.python.org/issue13369 opened by flox #13370: test_ctypes fails on osx 10.7 http://bugs.python.org/issue13370 opened by ronaldoussoren #13371: Some Carbon extensions don't build on OSX 10.7 http://bugs.python.org/issue13371 opened by ronaldoussoren #13372: handle_close called twice in poll2 http://bugs.python.org/issue13372 opened by xdegaye #13374: Deprecate usage of the Windows ANSI API in the nt module http://bugs.python.org/issue13374 opened by haypo #13376: readline: pre_input_hook not getting called http://bugs.python.org/issue13376 opened by scates #13378: Change the variable "nsmap" from global to instance (xml.etree http://bugs.python.org/issue13378 opened by Nekmo #13380: ctypes: add an internal function for reseting the ctypes cache http://bugs.python.org/issue13380 opened by meador.inge Most recent 15 issues with no replies (15) ========================================== #13380: ctypes: add an internal function for reseting the ctypes cache http://bugs.python.org/issue13380 #13372: handle_close called twice in poll2 http://bugs.python.org/issue13372 #13369: timeout with exit code 0 while re-running failed tests http://bugs.python.org/issue13369 #13354: tcpserver should document non-threaded long-living connections http://bugs.python.org/issue13354 #13336: packaging.command.Command.copy_file doesn't implement preserve http://bugs.python.org/issue13336 #13330: Attempt full test coverage of LocaleTextCalendar.formatweekday http://bugs.python.org/issue13330 #13325: no address in 
the representation of asyncore dispatcher after http://bugs.python.org/issue13325 #13294: http.server - HEAD request when no resource is defined. http://bugs.python.org/issue13294 #13282: the table of contents in epub file is too long http://bugs.python.org/issue13282 #13277: tzinfo subclasses information http://bugs.python.org/issue13277 #13276: distutils bdist_wininst created installer does not run the pos http://bugs.python.org/issue13276 #13272: 2to3 fix_renames doesn't rename string.lowercase/uppercase/let http://bugs.python.org/issue13272 #13231: sys.settrace - document 'some other code blocks' for 'call' ev http://bugs.python.org/issue13231 #13217: Missing header dependencies in Makefile http://bugs.python.org/issue13217 #13213: generator.throw() behavior http://bugs.python.org/issue13213 Most recent 15 issues waiting for review (15) ============================================= #13380: ctypes: add an internal function for reseting the ctypes cache http://bugs.python.org/issue13380 #13378: Change the variable "nsmap" from global to instance (xml.etree http://bugs.python.org/issue13378 #13374: Deprecate usage of the Windows ANSI API in the nt module http://bugs.python.org/issue13374 #13372: handle_close called twice in poll2 http://bugs.python.org/issue13372 #13371: Some Carbon extensions don't build on OSX 10.7 http://bugs.python.org/issue13371 #13359: urllib2 doesn't escape spaces in http requests http://bugs.python.org/issue13359 #13338: Not all enumerations used in _Py_ANNOTATE_MEMORY_ORDER http://bugs.python.org/issue13338 #13330: Attempt full test coverage of LocaleTextCalendar.formatweekday http://bugs.python.org/issue13330 #13328: pdb shows code from wrong module http://bugs.python.org/issue13328 #13325: no address in the representation of asyncore dispatcher after http://bugs.python.org/issue13325 #13323: urllib2 does not correctly handle multiple www-authenticate he http://bugs.python.org/issue13323 #13322: buffered read() and write() does not raise 
BlockingIOError http://bugs.python.org/issue13322 #13305: datetime.strftime("%Y") not consistent for years < 1000 http://bugs.python.org/issue13305 #13301: the script Tools/i18n/msgfmt.py allows arbitrary code executio http://bugs.python.org/issue13301 #13297: xmlrpc.client could accept bytes for input and output http://bugs.python.org/issue13297 Top 10 most discussed issues (10) ================================= #11812: transient socket failure to connect to 'localhost' http://bugs.python.org/issue11812 19 msgs #13193: test_packaging and test_distutils failures http://bugs.python.org/issue13193 17 msgs #13340: list.index does not accept None as start or stop http://bugs.python.org/issue13340 14 msgs #6397: Implementing Solaris "/dev/poll" in the "select" module http://bugs.python.org/issue6397 13 msgs #13322: buffered read() and write() does not raise BlockingIOError http://bugs.python.org/issue13322 10 msgs #13229: Improve tools for iterating over filesystem directories http://bugs.python.org/issue13229 9 msgs #13374: Deprecate usage of the Windows ANSI API in the nt module http://bugs.python.org/issue13374 8 msgs #13211: urllib2.HTTPError does not have 'reason' attribute. 
http://bugs.python.org/issue13211 7 msgs #13309: test_time fails: time data 'LMT' does not match format '%Z' http://bugs.python.org/issue13309 7 msgs #4489: shutil.rmtree is vulnerable to a symlink attack http://bugs.python.org/issue4489 5 msgs Issues closed (46) ================== #7777: Support needed for AF_RDS family http://bugs.python.org/issue7777 closed by neologix #8025: TypeError: string argument expected, got 'str' http://bugs.python.org/issue8025 closed by ezio.melotti #9896: Introspectable range objects http://bugs.python.org/issue9896 closed by python-dev #11937: Interix support http://bugs.python.org/issue11937 closed by jcea #12163: str.count http://bugs.python.org/issue12163 closed by petri.lehtinen #12260: Make install default to user site-packages http://bugs.python.org/issue12260 closed by eric.araujo #13149: optimization for append-only StringIO http://bugs.python.org/issue13149 closed by pitrou #13161: problems with help() documentation of __i*__ operators http://bugs.python.org/issue13161 closed by eli.bendersky #13191: Typo in argparse documentation http://bugs.python.org/issue13191 closed by eli.bendersky #13200: Add start, stop and step attributes to range objects http://bugs.python.org/issue13200 closed by eric.araujo #13237: subprocess docs should emphasise convenience functions http://bugs.python.org/issue13237 closed by ncoghlan #13254: maildir.items() broken http://bugs.python.org/issue13254 closed by python-dev #13284: email.utils.formatdate function does not handle timezones corr http://bugs.python.org/issue13284 closed by r.david.murray #13292: missing versionadded for bytearray http://bugs.python.org/issue13292 closed by flox #13300: IDLE 3.3 Restart Shell command fails http://bugs.python.org/issue13300 closed by ned.deily #13311: asyncore handle_read should call recv http://bugs.python.org/issue13311 closed by neologix #13321: fstat doesn't accept an object with "fileno" method http://bugs.python.org/issue13321 closed by 
petri.lehtinen #13326: make clean failed on OpenBSD http://bugs.python.org/issue13326 closed by python-dev #13327: Update utime API to not require explicit None argument http://bugs.python.org/issue13327 closed by brian.curtin #13335: Service application hang in python25.dll http://bugs.python.org/issue13335 closed by terry.reedy #13342: input() builtin always uses "strict" error handler http://bugs.python.org/issue13342 closed by pitrou #13343: Lambda keyword-only argument not updating co_freevars http://bugs.python.org/issue13343 closed by amaury.forgeotdarc #13345: Invisible Files in Windows 7 http://bugs.python.org/issue13345 closed by loewis #13347: .py extension not auto added http://bugs.python.org/issue13347 closed by ned.deily #13350: Use PyUnicode_FromFomat instead of PyUnicode_Format for fixed http://bugs.python.org/issue13350 closed by amaury.forgeotdarc #13351: Strange time complexity when creating nested lists http://bugs.python.org/issue13351 closed by quakes #13352: tutorial section 9.3.3 documentation problem http://bugs.python.org/issue13352 closed by eli.bendersky #13353: documentation problem in logging.handlers.TimedRotatingFileHan http://bugs.python.org/issue13353 closed by vinay.sajip #13356: test_logging warning on 2.7 http://bugs.python.org/issue13356 closed by python-dev #13360: UnicodeWarning raised on sequence and set comparisons http://bugs.python.org/issue13360 closed by flox #13361: getLogger does not check its argument http://bugs.python.org/issue13361 closed by python-dev #13362: Many PEP 8 errors http://bugs.python.org/issue13362 closed by benjamin.peterson #13363: Many usages of dict.keys(), dict.values(), dict.items() when t http://bugs.python.org/issue13363 closed by ned.deily #13364: Duplicated code in decimal module http://bugs.python.org/issue13364 closed by mark.dickinson #13365: str.expandtabs documentation is wrong http://bugs.python.org/issue13365 closed by eli.bendersky #13366: test_pep277 failures under WIndows 
http://bugs.python.org/issue13366 closed by python-dev #13367: PyCapsule_New's argument *must* not a NULL. http://bugs.python.org/issue13367 closed by benjamin.peterson #13373: Unexpected blocking call to multiprocessing.Queue.get with a t http://bugs.python.org/issue13373 closed by pitrou #13375: Provide a namedtuple style interface for os.walk values http://bugs.python.org/issue13375 closed by ncoghlan #13377: test_codecs "Segmentation fault" on Windows http://bugs.python.org/issue13377 closed by haypo #13379: Wrong Unicode version in unicodedata docstring http://bugs.python.org/issue13379 closed by ezio.melotti #13381: compile fails to compile a ast module object giving a incompre http://bugs.python.org/issue13381 closed by benjamin.peterson #13382: IDLE menu scroll bar does not scroll with OS X 10.4 Apple Tcl/ http://bugs.python.org/issue13382 closed by ned.deily #13383: UnicodeDecodeError in distutils.core.setup when version is uni http://bugs.python.org/issue13383 closed by ezio.melotti #13384: Unnecessary __future__ import in random module http://bugs.python.org/issue13384 closed by brian.curtin #1200313: HTMLParser fails to handle charref in attribute value http://bugs.python.org/issue1200313 closed by ezio.melotti From eliben at gmail.com Fri Nov 11 20:24:40 2011 From: eliben at gmail.com (Eli Bendersky) Date: Fri, 11 Nov 2011 21:24:40 +0200 Subject: [Python-Dev] documenting the Hg commit message hooks in the devguide Message-ID: Hi, Our Hg repo has some useful hooks on commit messages that allow to specify which issue to notify for commits, and which issue to close. AFAIU, it's currently documented only in the code of the hook (http://hg.python.org/hooks/file/tip/hgroundup.py). I think adding a short description into the devguide would be a good idea, probably here: http://docs.python.org/devguide/committing.html#commit-messages-and-news-entries Any objections/alternative ideas? 
Eli From brett at python.org Fri Nov 11 23:01:12 2011 From: brett at python.org (Brett Cannon) Date: Fri, 11 Nov 2011 14:01:12 -0800 Subject: [Python-Dev] documenting the Hg commit message hooks in the devguide In-Reply-To: References: Message-ID: On Fri, Nov 11, 2011 at 11:24, Eli Bendersky wrote: > Hi, > > Our Hg repo has some useful hooks on commit messages that allow to > specify which issue to notify for commits, and which issue to close. > AFAIU, it's currently documented only in the code of the hook > (http://hg.python.org/hooks/file/tip/hgroundup.py). > > I think adding a short description into the devguide would be a good > idea, probably here: > > http://docs.python.org/devguide/committing.html#commit-messages-and-news-entries > > Any objections/alternative ideas? > +1 from me on documenting. -------------- next part -------------- An HTML attachment was scrubbed... URL: From g.brandl at gmx.net Sat Nov 12 07:31:49 2011 From: g.brandl at gmx.net (Georg Brandl) Date: Sat, 12 Nov 2011 07:31:49 +0100 Subject: [Python-Dev] documenting the Hg commit message hooks in the devguide In-Reply-To: References: Message-ID: Am 11.11.2011 20:24, schrieb Eli Bendersky: > Hi, > > Our Hg repo has some useful hooks on commit messages that allow to > specify which issue to notify for commits, and which issue to close. > AFAIU, it's currently documented only in the code of the hook > (http://hg.python.org/hooks/file/tip/hgroundup.py). > > I think adding a short description into the devguide would be a good > idea, probably here: > http://docs.python.org/devguide/committing.html#commit-messages-and-news-entries > > Any objections/alternative ideas? No objections, it's a good idea. Georg From stephen at xemacs.org Sat Nov 12 08:03:50 2011 From: stephen at xemacs.org (Stephen J. 
Turnbull) Date: Sat, 12 Nov 2011 16:03:50 +0900 Subject: [Python-Dev] order of Misc/ACKS In-Reply-To: References: <4EBCF14F.7090807@gmail.com> <4EBD0015.1010805@v.loewis.de> Message-ID: <87obwhkejt.fsf@uwakimon.sk.tsukuba.ac.jp> Eli Bendersky writes: > special locale. It makes me wonder whether it's possible to have a > contradiction in the ordering, i.e. have a set of names that just > can't be sorted in any order acceptable by everyone. Yes, it is. The examples were already given in this thread. The Han-using languages also have this problem, and Japanese is nondeterministic all by itself (there are kanji names which for historical reasons are pronounced in several different ways, and therefore cannot be placed in phonetic order without additional information). The sensible thing is to just sort in Unicode code point order, I think. From larry at hastings.org Sat Nov 12 10:18:15 2011 From: larry at hastings.org (Larry Hastings) Date: Sat, 12 Nov 2011 01:18:15 -0800 Subject: [Python-Dev] order of Misc/ACKS In-Reply-To: <87obwhkejt.fsf@uwakimon.sk.tsukuba.ac.jp> References: <4EBCF14F.7090807@gmail.com> <4EBD0015.1010805@v.loewis.de> <87obwhkejt.fsf@uwakimon.sk.tsukuba.ac.jp> Message-ID: <4EBE39D7.8000700@hastings.org> On 11/11/2011 11:03 PM, Stephen J. Turnbull wrote: > The sensible thing is to just sort in Unicode code point order, I > think. I was going to suggest the official Unicode Collation Algorithm: http://unicode.org/reports/tr10/ But I peeked in the can, saw it was chock-a-block with worms, and declined to open it. /larry/ -------------- next part -------------- An HTML attachment was scrubbed...
URL: From g.brandl at gmx.net Sat Nov 12 10:24:43 2011 From: g.brandl at gmx.net (Georg Brandl) Date: Sat, 12 Nov 2011 10:24:43 +0100 Subject: [Python-Dev] order of Misc/ACKS In-Reply-To: <87obwhkejt.fsf@uwakimon.sk.tsukuba.ac.jp> References: <4EBCF14F.7090807@gmail.com> <4EBD0015.1010805@v.loewis.de> <87obwhkejt.fsf@uwakimon.sk.tsukuba.ac.jp> Message-ID: Am 12.11.2011 08:03, schrieb Stephen J. Turnbull: > Eli Bendersky writes: > > > special locale. It makes me wonder whether it's possible to have a > > contradiction in the ordering, i.e. have a set of names that just > > can't be sorted in any order acceptable by everyone. > > Yes, it is. The examples were already given in this thread. The > Han-using languages also have this problem, and Japanese is > nondeterministic all by itself (there are kanji names which for > historical reasons are pronounced in several different ways, and > therefore cannot be placed in phonetic order without additional > information). > > The sensible thing is to just sort in Unicode code point order, I > think. The sensible thing is to accept that there is no solution, and to stop worrying. Georg From barry at python.org Sat Nov 12 15:22:10 2011 From: barry at python.org (Barry Warsaw) Date: Sat, 12 Nov 2011 09:22:10 -0500 Subject: [Python-Dev] order of Misc/ACKS In-Reply-To: <87obwhkejt.fsf@uwakimon.sk.tsukuba.ac.jp> References: <4EBCF14F.7090807@gmail.com> <4EBD0015.1010805@v.loewis.de> <87obwhkejt.fsf@uwakimon.sk.tsukuba.ac.jp> Message-ID: <20111112092210.31197d7c@limelight.wooz.org> On Nov 12, 2011, at 04:03 PM, Stephen J. Turnbull wrote: >The sensible thing is to just sort in Unicode code point order, I >think.
M-x sort-lines-by-unicode-point-order RET -Barry From catch-all at masklinn.net Sat Nov 12 16:45:14 2011 From: catch-all at masklinn.net (Xavier Morel) Date: Sat, 12 Nov 2011 16:45:14 +0100 Subject: [Python-Dev] order of Misc/ACKS In-Reply-To: References: <4EBCF14F.7090807@gmail.com> <4EBD0015.1010805@v.loewis.de> <87obwhkejt.fsf@uwakimon.sk.tsukuba.ac.jp> Message-ID: <6892EAE0-43A7-410E-939A-0A1E1244FC57@masklinn.net> On 2011-11-12, at 10:24 , Georg Brandl wrote: > Am 12.11.2011 08:03, schrieb Stephen J. Turnbull: >> Eli Bendersky writes: >> >>> special locale. It makes me wonder whether it's possible to have a >>> contradiction in the ordering, i.e. have a set of names that just >>> can't be sorted in any order acceptable by everyone. >> >> Yes, it is. The examples were already given in this thread. The >> Han-using languages also have this problem, and Japanese is >> nondeterministic all by itself (there are kanji names which for >> historical reasons are pronounced in several different ways, and >> therefore cannot be placed in phonetic order without additional >> information). >> >> The sensible thing is to just sort in Unicode code point order, I >> think. > > The sensible thing is to accept that there is no solution, and to stop > worrying. The file could use the default collation order, that way it'd be incorrectly sorted for everybody.
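For reference, the code-point ordering suggested in this thread is simply what Python 3's sorted() produces on strings, and the "accents sort with their base letter" rule Martin describes can be approximated with unicodedata. A small illustrative sketch (the name list is made up for the example, not the actual Misc/ACKS contents):

```python
import unicodedata

# Plain sorted() on str compares raw Unicode code points, so an accented
# initial such as the 'Å' (U+00C5) in "Åstrand" lands after 'Z' (U+005A).
names = ["Melotti", "Zukowski", "Åstrand", "Löwis"]

by_codepoint = sorted(names)
# 'Löwis' still sorts under 'L' (the 'ö' only matters at position 2),
# but 'Åstrand' is pushed past 'Zukowski'.

def strip_accents(name):
    # Decompose accented characters (NFD) and drop the combining marks,
    # so 'Å' compares as 'A' and 'ö' as 'o'.
    decomposed = unicodedata.normalize("NFD", name)
    return "".join(ch for ch in decomposed if not unicodedata.combining(ch))

by_base_letter = sorted(names, key=strip_accents)

print(by_codepoint)    # ['Löwis', 'Melotti', 'Zukowski', 'Åstrand']
print(by_base_letter)  # ['Åstrand', 'Löwis', 'Melotti', 'Zukowski']
```

Neither ordering is "correct" in every locale, as the thread notes; the sketch only shows how far apart the two rough orderings can land the same list.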
From merwok at netwok.org Sat Nov 12 16:56:12 2011 From: merwok at netwok.org (=?UTF-8?Q?=C3=89ric_Araujo?=) Date: Sat, 12 Nov 2011 16:56:12 +0100 Subject: [Python-Dev] Merging 3.2 to 3.3 is messy because "Misc/NEWS" In-Reply-To: <4EB94F97.6020002@jcea.es> References: <4EB94F97.6020002@jcea.es> Message-ID: <9bf460d39f263e856f6ff5042f28dfc6@netwok.org> Hi, My usual merge tool is vimdiff, with a little configuration so that it shows only two panes instead of three (destination file on the left, file from the other branch on the right, and if I need to compare either with the common ancestor to see which chunks I want from each file, I use a log viewer). For Misc/NEWS, I have written a merge tool that opens Vim with the destination file in a pane and a diff containing the changes on the other branch in another pane. So instead of getting many diff hunks between 3.2 and 3.3, I get the 3.3 NEWS on the left and a diff corresponding to the 3.2 changes I'm merging on the right, so I can copy-paste NEWS entries. There may be duplicates when merging again after a push race, but it's still convenient. It's a shell script hard-coded to use Vim; I consider these bugs and intend to fix them to make it more widely useful. I initially wrote this script to handle translation branches. If you manage text and translations with Mercurial, say English content in the default branch and a French translation in a branch named 'fr', when you change the English text and merge, a standard merge tool is nearly useless, as the two files are near-completely different. With my tool, you get the French file on the left and a diff of the English changes on the right, and on you go. Please help yourself, and don't forget to look at the README for configuration information: https://bitbucket.org/Merwok/scripts-hg Ezio and I chatted a bit about this on IRC and he may try to write a Python parser for Misc/NEWS in order to write a fully automated merge tool.
Cheers From eliben at gmail.com Sat Nov 12 19:48:11 2011 From: eliben at gmail.com (Eli Bendersky) Date: Sat, 12 Nov 2011 20:48:11 +0200 Subject: [Python-Dev] documenting the Hg commit message hooks in the devguide In-Reply-To: References: Message-ID: >> Our Hg repo has some useful hooks on commit messages that allow to >> specify which issue to notify for commits, and which issue to close. >> AFAIU, it's currently documented only in the code of the hook >> (http://hg.python.org/hooks/file/tip/hgroundup.py). >> >> I think adding a short description into the devguide would be a good >> idea, probably here: >> http://docs.python.org/devguide/committing.html#commit-messages-and-news-entries >> >> Any objections/alternative ideas? > > No objections, it's a good idea. > Alright. Created issue 13388 to track this. Eli From solipsis at pitrou.net Sun Nov 13 01:23:59 2011 From: solipsis at pitrou.net (Antoine Pitrou) Date: Sun, 13 Nov 2011 01:23:59 +0100 Subject: [Python-Dev] Hashable memoryviews Message-ID: <20111113012359.51d01fbd@pitrou.net> Hello everyone and Benjamin, Currently, memoryview objects are unhashable: >>> hash(memoryview(b"")) Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: unhashable type: 'memoryview' Compare with Python 2.7: >>> hash(buffer("")) 0 memoryviews already support equality comparison: >>> b"" == memoryview(b"") True If the original object providing the buffer is hashable, then it seems to make sense for the memoryview object to be hashable. This came up while porting Twisted to Python 3. What do you think? Regards Antoine. From guido at python.org Sun Nov 13 02:15:08 2011 From: guido at python.org (Guido van Rossum) Date: Sat, 12 Nov 2011 17:15:08 -0800 Subject: [Python-Dev] Hashable memoryviews In-Reply-To: <20111113012359.51d01fbd@pitrou.net> References: <20111113012359.51d01fbd@pitrou.net> Message-ID: Aren't memoryview objects mutable? I think that the underlying memory can change, so it shouldn't be hashable.
On Sat, Nov 12, 2011 at 4:23 PM, Antoine Pitrou wrote: > > Hello everyone and Benjamin, > > Currently, memoryview objects are unhashable: > >>>> hash(memoryview(b"")) > Traceback (most recent call last): > File "<stdin>", line 1, in <module> > TypeError: unhashable type: 'memoryview' > > Compare with Python 2.7: > >>>> hash(buffer("")) > 0 > > memoryviews already support equality comparison: > >>>> b"" == memoryview(b"") > True > > If the original object providing the buffer is hashable, then it > seems to make sense for the memoryview object to be hashable. This came up > while porting Twisted to Python 3. > > What do you think? > > Regards > > Antoine. > > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/guido%40python.org > -- --Guido van Rossum (python.org/~guido) From solipsis at pitrou.net Sun Nov 13 02:19:27 2011 From: solipsis at pitrou.net (Antoine Pitrou) Date: Sun, 13 Nov 2011 02:19:27 +0100 Subject: [Python-Dev] Hashable memoryviews In-Reply-To: References: <20111113012359.51d01fbd@pitrou.net> Message-ID: <20111113021927.09f582f1@pitrou.net> On Sat, 12 Nov 2011 17:15:08 -0800 Guido van Rossum wrote: > Aren't memoryview objects mutable? I think that the underlying memory > can change, so it shouldn't be hashable. Only if the original object is itself mutable, otherwise the memoryview is read-only. I would propose the following algorithm: 1) try to calculate the original object's hash; if it fails, consider the memoryview unhashable (the buffer is probably mutable) 2) otherwise, calculate the memoryview's hash with the same algorithm as bytes objects (so that it's compatible with equality comparisons) Regards Antoine.
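The two-step rule Antoine proposes can be sketched at the Python level. This is only an illustration of the proposal as stated, not the eventual C implementation; the memoryview_hash helper is hypothetical, and it relies on the view's .obj attribute (the object backing the buffer, exposed by memoryview in Python 3.3 and later):

```python
# Hypothetical sketch of the proposed rule.
# Step 1: the underlying object must itself be hashable (i.e. presumably
#         immutable); otherwise refuse to hash the view.
# Step 2: hash the view's bytes, so equal buffers hash equally.

def memoryview_hash(view):
    try:
        hash(view.obj)  # view.obj is the object backing the buffer
    except TypeError:
        # mutable backing object (e.g. bytearray) -> unhashable view
        raise TypeError("unhashable type: 'memoryview'") from None
    return hash(view.tobytes())  # same algorithm as a bytes object

m = memoryview(b"abc")
assert memoryview_hash(m) == hash(b"abc")  # consistent with b"abc" == m

try:
    memoryview_hash(memoryview(bytearray(b"abc")))
except TypeError:
    print("mutable backing object -> unhashable")
```

Note that hashing view.tobytes() iterates the buffer in logical order, which is the property Antoine appeals to later in the thread when reversed slices come up.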
From ncoghlan at gmail.com Sun Nov 13 02:40:43 2011 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sun, 13 Nov 2011 11:40:43 +1000 Subject: [Python-Dev] Hashable memoryviews In-Reply-To: <20111113021927.09f582f1@pitrou.net> References: <20111113012359.51d01fbd@pitrou.net> <20111113021927.09f582f1@pitrou.net> Message-ID: On Sun, Nov 13, 2011 at 11:19 AM, Antoine Pitrou wrote: > On Sat, 12 Nov 2011 17:15:08 -0800 > Guido van Rossum wrote: >> Aren't memoryview objects mutable? I think that the underlying memory >> can change, so it shouldn't be hashable. > > Only if the original object is itself mutable, otherwise the memoryview > is read-only. > > I would propose the following algorithm: > 1) try to calculate the original object's hash; if it fails, consider > the memoryview unhashable (the buffer is probably mutable) > 2) otherwise, calculate the memoryview's hash with the same algorithm > as bytes objects (so that it's compatible with equality comparisons) Having a memory view be hashable if the object it references is hashable seems analogous to the way tuples are hashable if everything they reference is hashable, so +0 from me. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From solipsis at pitrou.net Sun Nov 13 02:38:22 2011 From: solipsis at pitrou.net (Antoine Pitrou) Date: Sun, 13 Nov 2011 02:38:22 +0100 Subject: [Python-Dev] Hashable memoryviews References: <20111113012359.51d01fbd@pitrou.net> Message-ID: <20111113023822.6ee78bff@pitrou.net> Thinking of it, an alternative would be to implement lazy slices of bytes objects (Twisted uses buffer() for zero-copy slices). Regards Antoine.
On Sun, 13 Nov 2011 01:23:59 +0100 Antoine Pitrou wrote: > > Hello everyone and Benjamin, > > Currently, memoryview objects are unhashable: > > >>> hash(memoryview(b"")) > Traceback (most recent call last): > File "<stdin>", line 1, in <module> > TypeError: unhashable type: 'memoryview' > > Compare with Python 2.7: > > >>> hash(buffer("")) > 0 > > memoryviews already support equality comparison: > > >>> b"" == memoryview(b"") > True > > If the original object providing the buffer is hashable, then it > seems to make sense for the memoryview object to be hashable. This came up > while porting Twisted to Python 3. > > What do you think? > > Regards > > Antoine. > > From guido at python.org Sun Nov 13 02:47:23 2011 From: guido at python.org (Guido van Rossum) Date: Sat, 12 Nov 2011 17:47:23 -0800 Subject: [Python-Dev] Hashable memoryviews In-Reply-To: References: <20111113012359.51d01fbd@pitrou.net> <20111113021927.09f582f1@pitrou.net> Message-ID: On Sat, Nov 12, 2011 at 5:40 PM, Nick Coghlan wrote: > On Sun, Nov 13, 2011 at 11:19 AM, Antoine Pitrou wrote: >> On Sat, 12 Nov 2011 17:15:08 -0800 >> Guido van Rossum wrote: >>> Aren't memoryview objects mutable? I think that the underlying memory >>> can change, so it shouldn't be hashable. >> >> Only if the original object is itself mutable, otherwise the memoryview >> is read-only. >> >> I would propose the following algorithm: >> 1) try to calculate the original object's hash; if it fails, consider >> the memoryview unhashable (the buffer is probably mutable) >> 2) otherwise, calculate the memoryview's hash with the same algorithm >> as bytes objects (so that it's compatible with equality comparisons) > > Having a memory view be hashable if the object it references is > hashable seems analogous to the way tuples are hashable if everything > they reference is hashable, so +0 from me. Yeah, that's ok with me too.
-- --Guido van Rossum (python.org/~guido) From stefan at bytereef.org Sun Nov 13 11:39:46 2011 From: stefan at bytereef.org (Stefan Krah) Date: Sun, 13 Nov 2011 11:39:46 +0100 Subject: [Python-Dev] Hashable memoryviews In-Reply-To: <20111113021927.09f582f1@pitrou.net> References: <20111113012359.51d01fbd@pitrou.net> <20111113021927.09f582f1@pitrou.net> Message-ID: <20111113103946.GA1569@sleipnir.bytereef.org> Antoine Pitrou wrote: > Only if the original object is itself mutable, otherwise the memoryview > is read-only. > > I would propose the following algorithm: > 1) try to calculate the original object's hash; if it fails, consider > the memoryview unhashable (the buffer is probably mutable) With slices or the new casts (See: http://bugs.python.org/issue5231, implemented in http://hg.python.org/features/pep-3118#memoryview ), it is possible to have different hashes for equal objects: >>> b1 = bytes([1,2,3,4]) >>> b2 = bytes([4,3,2,1]) >>> m1 = memoryview(b1) >>> m2 = memoryview(b2)[::-1] >>> m1 == m2 True >>> hash(b1) 4154562130492273536 >>> hash(b2) -1828484551660457336 Or: >>> a = array.array('L', [0]) >>> b = b'\x00\x00\x00\x00\x00\x00\x00\x00' >>> m_array = memoryview(a) >>> m_bytes = memoryview(b) >>> m_cast = m_array.cast('B') >>> m_bytes == m_cast True >>> hash(b) == hash(a) Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: unhashable type: 'array.array' Stefan Krah From solipsis at pitrou.net Sun Nov 13 11:49:11 2011 From: solipsis at pitrou.net (Antoine Pitrou) Date: Sun, 13 Nov 2011 11:49:11 +0100 Subject: [Python-Dev] Hashable memoryviews References: <20111113012359.51d01fbd@pitrou.net> <20111113021927.09f582f1@pitrou.net> <20111113103946.GA1569@sleipnir.bytereef.org> Message-ID: <20111113114911.0d1c9cc3@pitrou.net> On Sun, 13 Nov 2011 11:39:46 +0100 Stefan Krah wrote: > Antoine Pitrou wrote: > > Only if the original object is itself mutable, otherwise the memoryview > > is read-only.
> > > > I would propose the following algorithm: > > 1) try to calculate the original object's hash; if it fails, consider > > the memoryview unhashable (the buffer is probably mutable) > > With slices or the new casts (See: http://bugs.python.org/issue5231, > implemented in http://hg.python.org/features/pep-3118#memoryview ), > it is possible to have different hashes for equal objects: > > >>> b1 = bytes([1,2,3,4]) > >>> b2 = bytes([4,3,2,1]) > >>> m1 = memoryview(b1) > >>> m2 = memoryview(b2)[::-1] I don't understand this feature. How do you represent a reversed buffer using the buffer API, and how do you ensure that consumers (especially those written in C) see the buffer reversed? Regardless, it's simply a matter of getting the hash algorithm right (i.e. iterate in logical order rather than memory order). > >>> a = array.array('L', [0]) > >>> b = b'\x00\x00\x00\x00\x00\x00\x00\x00' > >>> m_array = memoryview(a) > >>> m_bytes = memoryview(b) > >>> m_cast = m_array.cast('B') > >>> m_bytes == m_cast > True > >>> hash(b) == hash(a) > Traceback (most recent call last): > File "", line 1, in > TypeError: unhashable type: 'array.array' In this case, the memoryview wouldn't be hashable either. Regards Antoine. From ncoghlan at gmail.com Sun Nov 13 11:54:22 2011 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sun, 13 Nov 2011 20:54:22 +1000 Subject: [Python-Dev] Hashable memoryviews In-Reply-To: <20111113103946.GA1569@sleipnir.bytereef.org> References: <20111113012359.51d01fbd@pitrou.net> <20111113021927.09f582f1@pitrou.net> <20111113103946.GA1569@sleipnir.bytereef.org> Message-ID: On Sun, Nov 13, 2011 at 8:39 PM, Stefan Krah wrote: > Antoine Pitrou wrote: >> Only if the original object is itself mutable, otherwise the memoryview >> is read-only. >> >> I would propose the following algorithm: >> 1) try to calculate the original object's hash; if it fails, consider >> ? 
the memoryview unhashable (the buffer is probably mutable) > > With slices or the new casts (See: http://bugs.python.org/issue5231, > implemented in http://hg.python.org/features/pep-3118#memoryview ), > it is possible to have different hashes for equal objects: Note that Antoine isn't suggesting that the underlying hash be *used* as the memoryview's hash (that would be calculated according to the same rules as the equality comparison). Instead, the ability to hash the underlying object would just gate whether or not you could hash the memoryview at all. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From ncoghlan at gmail.com Sun Nov 13 12:01:41 2011 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sun, 13 Nov 2011 21:01:41 +1000 Subject: [Python-Dev] Hashable memoryviews In-Reply-To: <20111113114911.0d1c9cc3@pitrou.net> References: <20111113012359.51d01fbd@pitrou.net> <20111113021927.09f582f1@pitrou.net> <20111113103946.GA1569@sleipnir.bytereef.org> <20111113114911.0d1c9cc3@pitrou.net> Message-ID: On Sun, Nov 13, 2011 at 8:49 PM, Antoine Pitrou wrote: > I don't understand this feature. How do you represent a reversed buffer > using the buffer API, and how do you ensure that consumers (especially > those written in C) see the buffer reversed? The values in the strides array are signed, so presumably just by specifying a "-1" for the relevant dimension (triggering all the usual failures if you encounter a buffer API consumer that can only handle C contiguous arrays). Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com |
Brisbane, Australia From stefan at bytereef.org Sun Nov 13 12:56:23 2011 From: stefan at bytereef.org (Stefan Krah) Date: Sun, 13 Nov 2011 12:56:23 +0100 Subject: [Python-Dev] Hashable memoryviews In-Reply-To: <20111113114911.0d1c9cc3@pitrou.net> References: <20111113012359.51d01fbd@pitrou.net> <20111113021927.09f582f1@pitrou.net> <20111113103946.GA1569@sleipnir.bytereef.org> <20111113114911.0d1c9cc3@pitrou.net> Message-ID: <20111113115623.GA1799@sleipnir.bytereef.org> Antoine Pitrou wrote: > > > I would propose the following algorithm: > > > 1) try to calculate the original object's hash; if it fails, consider > > > the memoryview unhashable (the buffer is probably mutable) > > > > With slices or the new casts (See: http://bugs.python.org/issue5231, > > implemented in http://hg.python.org/features/pep-3118#memoryview ), > > it is possible to have different hashes for equal objects: > > > > >>> b1 = bytes([1,2,3,4]) > > >>> b2 = bytes([4,3,2,1]) > > >>> m1 = memoryview(b1) > > >>> m2 = memoryview(b2)[::-1] > > I don't understand this feature. How do you represent a reversed buffer > using the buffer API, and how do you ensure that consumers (especially > those written in C) see the buffer reversed? In this case, view->buf points to the last memory location and view->strides is -1. In general, any PEP-3118 compliant consumer must only access elements of a buffer either directly via PyBuffer_GetPointer() or in an equivalent manner. Basically, this means that you start at view->buf (which may be *any* location in the memory block) and follow the strides until you reach the desired element. 
Objects/abstract.c:
===================

void* PyBuffer_GetPointer(Py_buffer *view, Py_ssize_t *indices)
{
    char* pointer;
    int i;
    pointer = (char *)view->buf;
    for (i = 0; i < view->ndim; i++) {
        pointer += view->strides[i]*indices[i];
        if ((view->suboffsets != NULL) && (view->suboffsets[i] >= 0)) {
            pointer = *((char**)pointer) + view->suboffsets[i];
        }
    }
    return (void*)pointer;
}

> Regardless, it's simply a matter of getting the hash algorithm right > (i.e. iterate in logical order rather than memory order). If you know how the original object computes the hash then this would work. It's not obvious to me how this would work beyond bytes objects though. > > >>> a = array.array('L', [0]) > > >>> b = b'\x00\x00\x00\x00\x00\x00\x00\x00' > > >>> m_array = memoryview(a) > > >>> m_bytes = memoryview(b) > > >>> m_cast = m_array.cast('B') > > >>> m_bytes == m_cast > > True > > >>> hash(b) == hash(a) > > Traceback (most recent call last): > > File "", line 1, in > > TypeError: unhashable type: 'array.array' > > In this case, the memoryview wouldn't be hashable either. Hmm, the point was that one could take the hash of m_bytes but not of m_cast, even though they are equal. Perhaps I misunderstood your proposal. I assumed that hash requests would be redirected to the original exporting object. As above, it would be possible to write a custom hash function for objects with type 'B'.
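[Archive note: Antoine's "iterate in logical order" point can be demonstrated from pure Python. The sketch below is an editorial addition, not code from the thread; it relies on memoryview slicing with negative steps, which only became available in Python 3.3.]

```python
# Equal views over differently-ordered memory: equality and tobytes()
# both walk the buffer in logical order, so hashing the logical byte
# sequence gives equal views equal hashes.
b1 = bytes([1, 2, 3, 4])
b2 = bytes([4, 3, 2, 1])

m1 = memoryview(b1)
m2 = memoryview(b2)[::-1]  # reversed view: negative stride, same logical content as m1

assert m1 == m2            # compared element-wise in logical order
assert m2.tobytes() == b1  # tobytes() serialises the logical order, not memory order

# A hash computed over the logical byte sequence is therefore consistent
# for equal views, even though hash(b1) and hash(b2) generally differ:
assert hash(m1.tobytes()) == hash(m2.tobytes())
```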
Stefan Krah From stefan at bytereef.org Sun Nov 13 13:05:24 2011 From: stefan at bytereef.org (Stefan Krah) Date: Sun, 13 Nov 2011 13:05:24 +0100 Subject: [Python-Dev] Hashable memoryviews In-Reply-To: References: <20111113012359.51d01fbd@pitrou.net> <20111113021927.09f582f1@pitrou.net> <20111113103946.GA1569@sleipnir.bytereef.org> Message-ID: <20111113120524.GB1799@sleipnir.bytereef.org> Nick Coghlan wrote: > > With slices or the new casts (See: http://bugs.python.org/issue5231, > > implemented in http://hg.python.org/features/pep-3118#memoryview ), > > it is possible to have different hashes for equal objects: > > Note that Antoine isn't suggesting that the underlying hash be *used* > as the memoryview's hash (that would be calculated according to the > same rules as the equality comparison). Instead, the ability to hash > the underlying object would just gate whether or not you could hash > the memoryview at all. I think they necessarily have to use the same hash, since: exporter = m1 ==> hash(exporter) = hash(m1) m1 = m2 ==> hash(m1) = hash(m2) Am I missing something? Stefan Krah From solipsis at pitrou.net Sun Nov 13 13:08:11 2011 From: solipsis at pitrou.net (Antoine Pitrou) Date: Sun, 13 Nov 2011 13:08:11 +0100 Subject: [Python-Dev] Hashable memoryviews References: <20111113012359.51d01fbd@pitrou.net> <20111113021927.09f582f1@pitrou.net> <20111113103946.GA1569@sleipnir.bytereef.org> <20111113120524.GB1799@sleipnir.bytereef.org> Message-ID: <20111113130811.5c125b48@pitrou.net> On Sun, 13 Nov 2011 13:05:24 +0100 Stefan Krah wrote: > Nick Coghlan wrote: > > > With slices or the new casts (See: http://bugs.python.org/issue5231, > > > implemented in http://hg.python.org/features/pep-3118#memoryview ), > > > it is possible to have different hashes for equal objects: > > > > Note that Antoine isn't suggesting that the underlying hash be *used* > > as the memoryview's hash (that would be calculated according to the > > same rules as the equality comparison). 
Instead, the ability to hash > > the underlying object would just gate whether or not you could hash > > the memoryview at all. > > I think they necessarily have to use the same hash, since: > > exporter = m1 ==> hash(exporter) = hash(m1) > m1 = m2 ==> hash(m1) = hash(m2) > > Am I missing something? The hash must simply be calculated using the same algorithm (which can even be shared as a subroutine). It's already the case for more complicated types: >>> hash(1) == hash(1.0) == hash(Decimal(1)) == hash(Fraction(1)) True Also, I think it's reasonable to limit hashability to one-dimensional memoryviews. Regards Antoine. From stefan_ml at behnel.de Sun Nov 13 13:16:59 2011 From: stefan_ml at behnel.de (Stefan Behnel) Date: Sun, 13 Nov 2011 13:16:59 +0100 Subject: [Python-Dev] Hashable memoryviews In-Reply-To: <20111113120524.GB1799@sleipnir.bytereef.org> References: <20111113012359.51d01fbd@pitrou.net> <20111113021927.09f582f1@pitrou.net> <20111113103946.GA1569@sleipnir.bytereef.org> <20111113120524.GB1799@sleipnir.bytereef.org> Message-ID: Stefan Krah, 13.11.2011 13:05: > Nick Coghlan wrote: >>> With slices or the new casts (See: http://bugs.python.org/issue5231, >>> implemented in http://hg.python.org/features/pep-3118#memoryview ), >>> it is possible to have different hashes for equal objects: >> >> Note that Antoine isn't suggesting that the underlying hash be *used* >> as the memoryview's hash (that would be calculated according to the >> same rules as the equality comparison). Instead, the ability to hash >> the underlying object would just gate whether or not you could hash >> the memoryview at all. > > I think they necessarily have to use the same hash, since: > > exporter = m1 ==> hash(exporter) = hash(m1) > m1 = m2 ==> hash(m1) = hash(m2) You can't expect the memoryview() to magically know what the underlying hash function is. 
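[Archive note: for readers of the archive, this is roughly where the discussion landed in CPython 3.3: a read-only contiguous view became hashable, with the hash defined over the logical byte sequence, while views over writable buffers were made unhashable, as in step 1 of Antoine's proposal. A small editorial sketch of that later behaviour:]

```python
# Behaviour of CPython 3.3+ (editorial sketch, not code from the thread):
b = b"\x00" * 8
m = memoryview(b)

# The hash is defined over the logical bytes, so it matches the bytes hash:
assert hash(m) == hash(b)

# A view over a writable buffer refuses to hash at all:
try:
    hash(memoryview(bytearray(b)))
except ValueError:
    pass  # writable buffers are unhashable
else:
    raise AssertionError("writable views are expected to be unhashable")
```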
The only guarantee you get is that iff two memoryview instances are looking at the same (subset of) data from two hashable objects (or the same object), you will get the same hash value for both. It may or may not correspond with the hash value that the buffer exporting objects would give you. Stefan From stefan at bytereef.org Sun Nov 13 14:13:19 2011 From: stefan at bytereef.org (Stefan Krah) Date: Sun, 13 Nov 2011 14:13:19 +0100 Subject: [Python-Dev] Hashable memoryviews In-Reply-To: <20111113130811.5c125b48@pitrou.net> References: <20111113012359.51d01fbd@pitrou.net> <20111113021927.09f582f1@pitrou.net> <20111113103946.GA1569@sleipnir.bytereef.org> <20111113120524.GB1799@sleipnir.bytereef.org> <20111113130811.5c125b48@pitrou.net> Message-ID: <20111113131319.GA2021@sleipnir.bytereef.org> Antoine Pitrou wrote: > Stefan Krah wrote: > > I think they necessarily have to use the same hash, since: > > > > exporter = m1 ==> hash(exporter) = hash(m1) > > m1 = m2 ==> hash(m1) = hash(m2) > > > > Am I missing something? > > The hash must simply be calculated using the same algorithm (which > can even be shared as a subroutine). It's already the case for more > complicated types: > > >>> hash(1) == hash(1.0) == hash(Decimal(1)) == hash(Fraction(1)) > True Yes, but we control those types. I was thinking more about third-party exporters. Then again, it would be possible to publish the unified hash function as part of the PEP. Perhaps we could simply use: PyBuffer_Hash = hash(obj.tobytes()) Since tobytes() follows the logical structure, it should work for non-contiguous and multidimensional arrays as well. Stefan Krah From stephen at xemacs.org Sun Nov 13 15:40:08 2011 From: stephen at xemacs.org (Stephen J. 
Turnbull) Date: Sun, 13 Nov 2011 23:40:08 +0900 Subject: [Python-Dev] order of Misc/ACKS In-Reply-To: <6892EAE0-43A7-410E-939A-0A1E1244FC57@masklinn.net> References: <4EBCF14F.7090807@gmail.com> <4EBD0015.1010805@v.loewis.de> <87obwhkejt.fsf@uwakimon.sk.tsukuba.ac.jp> <6892EAE0-43A7-410E-939A-0A1E1244FC57@masklinn.net> Message-ID: <87ipmojdbr.fsf@uwakimon.sk.tsukuba.ac.jp> Xavier Morel writes: > On 2011-11-12, at 10:24 , Georg Brandl wrote: > > Am 12.11.2011 08:03, schrieb Stephen J. Turnbull: > >> The sensible thing is to just sort in Unicode code point order, I > >> think. > > The sensible thing is to accept that there is no solution, and to stop > > worrying. > The file could use the default collation order, that way it'd be > incorrectly sorted for everybody. "What I tell you three times is true." From stefan_ml at behnel.de Sun Nov 13 19:48:39 2011 From: stefan_ml at behnel.de (Stefan Behnel) Date: Sun, 13 Nov 2011 19:48:39 +0100 Subject: [Python-Dev] _PyImport_FindExtensionObject() does not set _Py_PackageContext Message-ID: Hi, I noticed that _PyImport_FindExtensionObject() in Python/import.c does not set _Py_PackageContext when it calls the module init function for module reinitialisation. However, PyModule_Create2() still uses that variable to figure out the fully qualified module name. Was this intentionally left out or is it just an oversight? Stefan From stefan_ml at behnel.de Sun Nov 13 20:31:01 2011 From: stefan_ml at behnel.de (Stefan Behnel) Date: Sun, 13 Nov 2011 20:31:01 +0100 Subject: [Python-Dev] how to find the file path to an extension module at init time? Message-ID: Hi, in Python modules, the "__file__" attribute is provided by the runtime before executing the module body. For extension modules, it is set only after executing the init function. I wonder if there's any way to figure out where an extension module is currently being loaded from. 
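[Archive note: the module-level pattern at stake here typically looks like the following editorial sketch (the package layout and file names are invented). If __file__ only appears after the init function returns, this kind of code fails inside a compiled module:]

```python
# Typical module-level code that relies on __file__ during import:
import os

PKG_DIR = os.path.dirname(os.path.abspath(__file__))
DATA_FILE = os.path.join(PKG_DIR, "data", "defaults.cfg")  # bundled data lookup
```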
The _PyImport_LoadDynamicModule() function obviously knows it, but it does not pass that information on into the module init function. I'm asking specifically because I'd like to properly implement __file__ in Cython modules at module init time. There are cases where it could be faked (when compiling modules on the fly during import), but in general, it's not known at compile time where a module will get installed and run from, so I currently don't see how to do it without dedicated runtime support. That's rather unfortunate, because it's not so uncommon for packages to look up bundled data files relative to their own position using __file__, and that is pretty much always done in module level code. Another problem is that package local imports from __init__.py no longer work when it's compiled, likely because __path__ is missing on the new born module object in sys.modules. Here, it would also help if the path to the module (and to its package) was known early enough. Any ideas how this could currently be achieved? Or could this become a new feature in the future? Stefan From martin at v.loewis.de Sun Nov 13 21:34:06 2011 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Sun, 13 Nov 2011 21:34:06 +0100 Subject: [Python-Dev] Hashable memoryviews In-Reply-To: References: <20111113012359.51d01fbd@pitrou.net> <20111113021927.09f582f1@pitrou.net> <20111113103946.GA1569@sleipnir.bytereef.org> <20111113120524.GB1799@sleipnir.bytereef.org> Message-ID: <4EC029BE.6010602@v.loewis.de> > You can't expect the memoryview() to magically know what the underlying > hash function is. Hashable objects implementing the buffer interface could be required to make their hash implementation consistent with bytes hashing. IMO, that wouldn't be asking too much. There is already the issue that equality may not be transitive wrt. to buffer objects (e.g. a == memoryview(a) == memoryview(b) == b, but a != b). 
As that would be a bug in either a or b, failure to hash consistently would be a bug as well. Regards, Martin From martin at v.loewis.de Sun Nov 13 21:46:31 2011 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Sun, 13 Nov 2011 21:46:31 +0100 Subject: [Python-Dev] how to find the file path to an extension module at init time? In-Reply-To: References: Message-ID: <4EC02CA7.90302@v.loewis.de> > I'm asking specifically because I'd like to properly implement __file__ > in Cython modules at module init time. Why do you need to implement __file__? Python will set it eventually to its correct value, no? > Another problem is that package local imports from __init__.py no longer > work when it's compiled Does it actually work to have __init__ be an extension module? > Any ideas how this could currently be achieved? Currently, for Cython? I don't think that can work. > Or could this become a new feature in the future? Certainly. An approach similar to _Py_PackageContext should be possible. Regards, Martin From solipsis at pitrou.net Sun Nov 13 22:47:26 2011 From: solipsis at pitrou.net (Antoine Pitrou) Date: Sun, 13 Nov 2011 22:47:26 +0100 Subject: [Python-Dev] peps: And now for something completely different. References: Message-ID: <20111113224726.2eae2b3b@pitrou.net> On Sun, 13 Nov 2011 22:33:28 +0100 barry.warsaw wrote: > > +And Now For Something Completely Different > +========================================== So, is the release manager a man with two noses? > +Strings and bytes > +----------------- > + > +Python 2's basic original string type are called 8-bit strings, and > +they play a dual role in Python 2 as both ASCII text and as byte > +arrays. While Python 2 also has a unicode string type, the > +fundamental ambiguity of the core string type, coupled with Python 2's > +default behavior of supporting automatic coercion from 8-bit strings > +to unicodes when the two are combined, often leads to `UnicodeError`s. 
> +Python 3's standard string type is a unicode, and Python 3 adds a > +bytes type, but critically, no automatic coercion between bytes and > +unicodes is provided. Thus, the core interpreter, its I/O libraries, > +module names, etc. are clear in their distinction between unicode > +strings and bytes. This clarity is often a source of difficulty in > +transitioning existing code to Python 3, because many third party > +libraries and applications are themselves ambiguous in this > +distinction. Once migrated though, most `UnicodeError`s can be > +eliminated. First class unicode (*) support also makes Python much friendlier to non-ASCII natives when it comes to things like filesystem access or error reporting. (*) even though Tom Christiansen would disagree, but perhaps we can settle on first and a half > +Imports > +------- > + > +In Python 3, star imports (e.g. ``from x import *``) are only > +premitted in module level code. permitted Regards Antoine. From tjreedy at udel.edu Mon Nov 14 00:12:27 2011 From: tjreedy at udel.edu (Terry Reedy) Date: Sun, 13 Nov 2011 18:12:27 -0500 Subject: [Python-Dev] [Python-checkins] cpython (2.7): Normalize the keyword arguments documentation notation in re.rst. Closes issue In-Reply-To: References: Message-ID: <4EC04EDB.6020902@udel.edu> On 11/13/2011 5:52 PM, eli.bendersky wrote: > http://hg.python.org/cpython/rev/87ecfd5cd5d1 > changeset: 73541:87ecfd5cd5d1 > branch: 2.7 > parent: 73529:c3b063c82ae5 > user: Eli Bendersky > date: Mon Nov 14 01:02:20 2011 +0200 > summary: > Normalize the keyword arguments documentation notation in re.rst. Closes issue #12875 > -.. function:: compile(pattern[, flags=0]) > +.. function:: compile(pattern, flags=0) ... This issue and the patch are about parameters with *default* arguments, which makes a corresponding argument in a call *optional*. For Python functions, both required and optional arguments can be passed by position (unless disabled) or keyword. 
Which is to say, for Python functions, any argument can be a keyword argument. I suspect I am not the only person somewhat confused when people use 'keyword' to mean 'optional' or 'default'. tjr From eliben at gmail.com Mon Nov 14 06:07:20 2011 From: eliben at gmail.com (Eli Bendersky) Date: Mon, 14 Nov 2011 07:07:20 +0200 Subject: [Python-Dev] [Python-checkins] cpython (2.7): Normalize the keyword arguments documentation notation in re.rst. Closes issue In-Reply-To: <4EC04EDB.6020902@udel.edu> References: <4EC04EDB.6020902@udel.edu> Message-ID: >> http://hg.python.org/cpython/rev/87ecfd5cd5d1 >> changeset: ? 73541:87ecfd5cd5d1 >> branch: ? ? ?2.7 >> parent: ? ? ?73529:c3b063c82ae5 >> user: ? ? ? ?Eli Bendersky >> date: ? ? ? ?Mon Nov 14 01:02:20 2011 +0200 >> summary: >> ? Normalize the keyword arguments documentation notation in re.rst. Closes >> issue #12875 > >> -.. function:: compile(pattern[, flags=0]) >> +.. function:: compile(pattern, flags=0) > > ... > > This issue and the patch are about parameters with *default* arguments, > which makes a corresponding argument in a call *optional*. For Python > functions, both required and optional arguments can be passed by position > (unless disabled) or keyword. Which is to say, for Python functions, any > argument can be a keyword argument. I suspect I am not the only person > somewhat confused when people use 'keyword' to mean 'optional' or 'default'. > You're right, Terry. Sorry for the confusing commit message. By the way, I think you may be interested in the related http://bugs.python.org/issue13386 Eli From stefan_ml at behnel.de Mon Nov 14 09:18:33 2011 From: stefan_ml at behnel.de (Stefan Behnel) Date: Mon, 14 Nov 2011 09:18:33 +0100 Subject: [Python-Dev] how to find the file path to an extension module at init time? In-Reply-To: <4EC02CA7.90302@v.loewis.de> References: <4EC02CA7.90302@v.loewis.de> Message-ID: "Martin v. 
L?wis", 13.11.2011 21:46: >> I'm asking specifically because I'd like to properly implement __file__ >> in Cython modules at module init time. > > Why do you need to implement __file__? Python will set it eventually to > its correct value, no? Well, yes, eventually. However, almost all real world usages are at module init time, not afterwards. So having CPython set it after running through the module global code doesn't help much. >> Another problem is that package local imports from __init__.py no longer >> work when it's compiled > > Does it actually work to have __init__ be an extension module? I'm just starting to try it, and the problems I found so far were __file__ (in general), __path__ and relative imports (specifically). >> Any ideas how this could currently be achieved? > > Currently, for Cython? I don't think that can work. Hmm, it might work to put an empty module next to the 'real' extension and to import it to figure out the common directory of both. As long as it's still there after installation and the right one gets imported, that is. A relative import should help on versions that support it. Although that won't help in the __init__ case because a relative import will likely depend on __path__ being available first. Chicken and egg... Support in CPython would definitely help. >> Or could this become a new feature in the future? > > Certainly. An approach similar to _Py_PackageContext should be possible. Yes, and a "_Py_ModuleImportContext" would be rather trivial to do. Could that go into 3.3? What about 2.7? Could an exception be made there regarding new "features"? It's not likely to break anything, but it would help Cython. Stefan From victor.stinner at haypocalc.com Mon Nov 14 10:34:46 2011 From: victor.stinner at haypocalc.com (Victor Stinner) Date: Mon, 14 Nov 2011 10:34:46 +0100 Subject: [Python-Dev] peps: And now for something completely different. 
In-Reply-To: <20111113224726.2eae2b3b@pitrou.net> References: <20111113224726.2eae2b3b@pitrou.net> Message-ID: <201111141034.46809.victor.stinner@haypocalc.com> If the PEP 404 lists important changes between Python 2 and Python 3, the removal of old-style classes should also be mentioned because it is a change in the core language. Victor From ncoghlan at gmail.com Mon Nov 14 12:10:49 2011 From: ncoghlan at gmail.com (Nick Coghlan) Date: Mon, 14 Nov 2011 21:10:49 +1000 Subject: [Python-Dev] how to find the file path to an extension module at init time? In-Reply-To: References: <4EC02CA7.90302@v.loewis.de> Message-ID: On Mon, Nov 14, 2011 at 6:18 PM, Stefan Behnel wrote: >> Certainly. An approach similar to _Py_PackageContext should be possible. > > Yes, and a "_Py_ModuleImportContext" would be rather trivial to do. Could > that go into 3.3? What about 2.7? Could an exception be made there regarding > new "features"? It's not likely to break anything, but it would help Cython. Hmm, interesting call - fixing this doesn't actually require a new public API, since it's just a new data attribute protected by the import lock that is used to pass state information from importdl.c to moduleobject.c. I'm inclined to say "no" myself, but it's a close run thing.
So +1 for fixing it in 3.3, and -0 for calling it a bug rather than a missing feature and also fixing it in 2.7 Cheers, Nick. -- Nick Coghlan?? |?? ncoghlan at gmail.com?? |?? Brisbane, Australia From markflorisson88 at gmail.com Mon Nov 14 12:55:10 2011 From: markflorisson88 at gmail.com (mark florisson) Date: Mon, 14 Nov 2011 11:55:10 +0000 Subject: [Python-Dev] how to find the file path to an extension module at init time? In-Reply-To: References: <4EC02CA7.90302@v.loewis.de> Message-ID: On 14 November 2011 08:18, Stefan Behnel wrote: > "Martin v. L?wis", 13.11.2011 21:46: >>> >>> I'm asking specifically because I'd like to properly implement __file__ >>> in Cython modules at module init time. >> >> Why do you need to implement __file__? Python will set it eventually to >> its correct value, no? > > Well, yes, eventually. However, almost all real world usages are at module > init time, not afterwards. So having CPython set it after running through > the module global code doesn't help much. > Perhaps Cython could detect use of __file__ at module scope (if this package context function is not available), and if it's used it tries to use something akin to imp.find_module(__name__) to find the path to the file and set __file__ manually. It should handle dots out of the box and perhaps not rely on any __path__ attributes of packages (I think not many people change __path__ or use pkgutil anyway). Would this be feasible for python < 3.3? >>> Another problem is that package local imports from __init__.py no longer >>> work when it's compiled >> >> Does it actually work to have __init__ be an extension module? > > I'm just starting to try it, and the problems I found so far were __file__ > (in general), __path__ and relative imports (specifically). > > >>> Any ideas how this could currently be achieved? >> >> Currently, for Cython? I don't think that can work. 
> > Hmm, it might work to put an empty module next to the 'real' extension and > to import it to figure out the common directory of both. As long as it's > still there after installation and the right one gets imported, that is. A > relative import should help on versions that support it. Although that won't > help in the __init__ case because a relative import will likely depend on > __path__ being available first. Chicken and egg... > > Support in CPython would definitely help. > > >>> Or could this become a new feature in the future? >> >> Certainly. An approach similar to _Py_PackageContext should be possible. > > Yes, and a "_Py_ModuleImportContext" would be rather trivial to do. Could > that go into 3.3? What about 2.7? Could an exception be made there regarding > new "features"? It's not likely to break anything, but it would help Cython. > > Stefan > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > http://mail.python.org/mailman/options/python-dev/markflorisson88%40gmail.com > From stefan_ml at behnel.de Mon Nov 14 14:27:16 2011 From: stefan_ml at behnel.de (Stefan Behnel) Date: Mon, 14 Nov 2011 14:27:16 +0100 Subject: [Python-Dev] how to find the file path to an extension module at init time? In-Reply-To: References: <4EC02CA7.90302@v.loewis.de> Message-ID: mark florisson, 14.11.2011 12:55: > On 14 November 2011 08:18, Stefan Behnel wrote: >> "Martin v. L?wis", 13.11.2011 21:46: >>>> >>>> I'm asking specifically because I'd like to properly implement __file__ >>>> in Cython modules at module init time. >>> >>> Why do you need to implement __file__? Python will set it eventually to >>> its correct value, no? >> >> Well, yes, eventually. However, almost all real world usages are at module >> init time, not afterwards. So having CPython set it after running through >> the module global code doesn't help much. 
> > Perhaps Cython could detect use of __file__ at module scope (if this > package context function is not available), and if it's used it tries > to use something akin to imp.find_module(__name__) to find the path to > the file and set __file__ manually. It's problematic. Depending on the import hooks that are in use, a second search for the already loaded but not yet instantiated module may potentially trigger a second try to load it. And if the module is already put into sys.modules, a second search may not return a helpful result. Also, running a full import of "imp" along the way may have additional side effects, and the C-API doesn't have a way to just search for the path of a module. > Would this be feasible for python < 3.3? It would certainly not be more than a work-around, but it could be made to work "well enough" in many cases. I'd definitely prefer having something that really works in both 2.7.x and 3.3+ over a half-baked solution in all of Py2.x. Stefan From merwok at netwok.org Mon Nov 14 15:50:02 2011 From: merwok at netwok.org (=?UTF-8?B?w4lyaWMgQXJhdWpv?=) Date: Mon, 14 Nov 2011 15:50:02 +0100 Subject: [Python-Dev] [Python-checkins] cpython (merge 3.2 -> default): Fix memory leak with FLUFL-related syntax errors (!) In-Reply-To: References: Message-ID: <4EC12A9A.2000000@netwok.org> > changeset: 0feb5a5dbaeb > user: Antoine Pitrou > date: Sun Nov 13 01:02:02 2011 +0100 > summary: > Fix memory leak with FLUFL-related syntax errors (!) I don?t think it is allowed to criticize FLUFL-related code. As the FLUFL is flawless, so are flufly things. 
I hear the PSU has been notified; a messenger will come t From thom.ives at hp.com Tue Nov 15 00:31:12 2011 From: thom.ives at hp.com (Thom Ives) Date: Mon, 14 Nov 2011 23:31:12 +0000 (UTC) Subject: [Python-Dev] urllib.request.urlopen struggling in Windows 7 Message-ID: Previously, in python 2.6, I had made a lot of use of urllib.urlopen to capture web page content and then post process the data from the site I was downloading. Now, those routines, and the new routines I am trying to use for python 3.2 are running into what seems to be a windows only (maybe even windows 7 only problem). Using the following code with python 3.2.2 (64) on windows 7 ... import urllib.request fp = urllib.request.urlopen(URL_string_that_I_use) string = fp.read() fp.close() print(string.decode("utf8")) I get the following message: Traceback (most recent call last): File "TATest.py", line 5, in string = fp.read() File "d:\python32\lib\http\client.py", line 489, in read return self._read_chunked(amt) File "d:\python32\lib\http\client.py", line 553, in _read_chunked self._safe_read(2) # toss the CRLF at the end of the chunk File "d:\python32\lib\http\client.py", line 592, in _safe_read raise IncompleteRead(b''.join(s), amt) http.client.IncompleteRead: IncompleteRead(0 bytes read, 2 more expected) Using the following code instead ... import urllib.request fp = urllib.request.urlopen(URL_string_that_I_use) for Line in fp: print(Line.decode("utf8").rstrip('\n')) fp.close() I get a fair amount of the web page's content, but then the rest of the capture is thwarted by ... 
Traceback (most recent call last): File "TATest.py", line 9, in for Line in fp: File "d:\python32\lib\http\client.py", line 489, in read return self._read_chunked(amt) File "d:\python32\lib\http\client.py", line 545, in _read_chunked self._safe_read(2) # toss the CRLF at the end of the chunk File "d:\python32\lib\http\client.py", line 592, in _safe_read raise IncompleteRead(b''.join(s), amt) http.client.IncompleteRead: IncompleteRead(0 bytes read, 2 more expected) Trying to read another page yields ... Traceback (most recent call last): File "TATest.py", line 11, in print(Line.decode("utf8").rstrip('\n')) File "d:\python32\lib\encodings\cp1252.py", line 19, in encode return codecs.charmap_encode(input,self.errors,encoding_table)[0] UnicodeEncodeError: 'charmap' codec can't encode character '\x92' in position 21: character maps to I do believe this is a windows issue, but can python be made more robust to deal with what is causing it? When trying similar code on Linux, we do not encounter the problem. From martin at v.loewis.de Tue Nov 15 00:45:31 2011 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Tue, 15 Nov 2011 00:45:31 +0100 Subject: [Python-Dev] urllib.request.urlopen struggling in Windows 7 In-Reply-To: References: Message-ID: <4EC1A81B.7010805@v.loewis.de> > I do believe this is a windows issue, but can python be made more robust to deal > with what is causing it? I can't believe that it's a Windows issue, and neither can I believe that it's a Python issue (although this is more likely). Most likely, it's a server issue, i.e. the server somehow closes the connection without providing all the data it ought to provide. If that's the issue, Python can do nothing about it - you need to fix the server. 
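[Archive note: a client-side salvage is possible even when, as Martin says, the real bug is on the server: http.client.IncompleteRead carries the bytes received so far in its .partial attribute. The helper below is an editorial sketch, not code from the thread. The later UnicodeEncodeError in the report is a separate issue — the page is apparently not UTF-8, so decoding with the charset the server actually declares (or printing with errors="replace") sidesteps that traceback on a cp1252 console.]

```python
# Editorial sketch: keep whatever data arrived before a truncated
# chunked response, instead of losing it all to IncompleteRead.
import http.client
import urllib.request

def read_best_effort(url):
    fp = urllib.request.urlopen(url)
    try:
        return fp.read()
    except http.client.IncompleteRead as e:
        return e.partial  # bytes received before the connection broke
    finally:
        fp.close()
```

This only papers over the truncation on the client; it does not make the missing bytes appear, so the server still needs fixing.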
Regards, Martin From martin at v.loewis.de Tue Nov 15 01:33:50 2011 From: martin at v.loewis.de (=?UTF-8?B?Ik1hcnRpbiB2LiBMw7Z3aXMi?=) Date: Tue, 15 Nov 2011 01:33:50 +0100 Subject: [Python-Dev] how to find the file path to an extension module at init time? In-Reply-To: References: <4EC02CA7.90302@v.loewis.de> Message-ID: <4EC1B36E.8040507@v.loewis.de> >> Currently, for Cython? I don't think that can work. > > Hmm, it might work to put an empty module next to the 'real' extension > and to import it to figure out the common directory of both. As long as > it's still there after installation and the right one gets imported, > that is. A relative import should help on versions that support it. > Although that won't help in the __init__ case because a relative import > will likely depend on __path__ being available first. Chicken and egg... If there was an actual __init__.py that just had

    import __cinit__

then __cinit__ could copy __path__ from the already-loaded __init__, no? >> Certainly. An approach similar to _Py_PackageContext should be possible. > > Yes, and a "_Py_ModuleImportContext" would be rather trivial to do. > Could that go into 3.3? If somebody contributes a patch: sure. > What about 2.7? Certainly not. It would be a new feature, and there can't be new features in 2.7. > Could an exception be made there regarding new "features"? > It's not likely to break anything, but it would help Cython. "Not likely to break anything" really means, in practice, "it probably will break somebody's code". Policies are there to be followed, and this one I personally feel strongly about. If it means that users can get certain Cython features only with Python 3.x, all the better. Regards, Martin From stefan_ml at behnel.de Tue Nov 15 09:01:58 2011 From: stefan_ml at behnel.de (Stefan Behnel) Date: Tue, 15 Nov 2011 09:01:58 +0100 Subject: [Python-Dev] how to find the file path to an extension module at init time?
In-Reply-To: <4EC1B36E.8040507@v.loewis.de> References: <4EC02CA7.90302@v.loewis.de> <4EC1B36E.8040507@v.loewis.de> Message-ID: "Martin v. Löwis", 15.11.2011 01:33: >>>> Currently, for Cython? I don't think that can work. >> >> Hmm, it might work to put an empty module next to the 'real' extension >> and to import it to figure out the common directory of both. As long as >> it's still there after installation and the right one gets imported, >> that is. A relative import should help on versions that support it. >> Although that won't help in the __init__ case because a relative import >> will likely depend on __path__ being available first. Chicken and egg... > > If there was an actual __init__.py that just had > > import __cinit__ or rather from .__cinit__ import * (relative import only for CPythons that support it) > then __cinit__ could copy __path__ from the already-loaded __init__, no? Hmm, right - that should work. __init__ would be in sys.modules with a properly set __file__ and __path__, and __cinit__ (knowing its own package name anyway) could look it up there to find out its file system path. I don't think there's much code out there that actually uses __file__ to find out the name of the module rather than just its package directory. So it would (more or less) fix both problems for __init__ files, and it should work with Py2.4+. >>> Certainly. An approach similar to _Py_PackageContext should be possible. >> >> Yes, and a "_Py_ModuleImportContext" would be rather trivial to do. >> Could that go into 3.3? > > If somebody contributes a patch: sure. Ok, cool. >> What about 2.7? > > Certainly not. It would be a new feature, and there can't be new > features in 2.7. > >> Could an exception be made there regarding new "features"? > >> It's not likely to break anything, but it would help Cython. > > "Not likely to break anything" really means, in practice, > "it probably will break somebody's code".
Policies are > there to be followed, and this one I personally feel strongly > about. If it means that users can get certain Cython features > only with Python 3.x, all the better. Understandable. Stefan From regebro at gmail.com Tue Nov 15 10:22:06 2011 From: regebro at gmail.com (Lennart Regebro) Date: Tue, 15 Nov 2011 10:22:06 +0100 Subject: [Python-Dev] PEP 405 (proposed): Python 2.8 Release Schedule In-Reply-To: References: <20111109111457.2f695e3a@limelight.wooz.org> Message-ID: On Wed, Nov 9, 2011 at 17:18, Amaury Forgeot d'Arc wrote: > Hi, > 2011/11/9 Barry Warsaw >> >> I think we should have an official pronouncement about Python 2.8, and >> PEPs >> are as official as it gets 'round here. > > Do we need to designate a release manager? I volunteer. It's on my level of competence. //Lennart From merwok at netwok.org Tue Nov 15 13:42:46 2011 From: merwok at netwok.org (=?UTF-8?B?w4lyaWMgQXJhdWpv?=) Date: Tue, 15 Nov 2011 13:42:46 +0100 Subject: [Python-Dev] PEP 376 - contents of RECORD file In-Reply-To: References: Message-ID: <4EC25E46.4080901@netwok.org> Hi Paul, > Looking at a RECORD file installed by pysetup (on 3.3 trunk, on > Windows) all of the filenames seem to be absolute, even though the > package is pure-Python and so everything is under site-packages. > Looking at PEP 376, it looks like the paths should be relative to > site-packages. Two questions: > > 1. Am I reading this right? Is it a bug in pysetup? I believe so. > 2. Does it matter? Are relative paths needed, or is it just nice to have? I'll try to find the mailing list thread that led to this change.
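In the meantime, the mechanical part of the fix is easy to sketch. This is an untested illustration, not pysetup's actual code; the `base` argument stands in for whatever site-packages prefix the installer knows about:

```python
import os

def relativize(record_paths, base):
    """Rewrite absolute RECORD entries relative to *base* (e.g.
    site-packages); entries outside *base* keep their absolute form."""
    result = []
    for path in record_paths:
        try:
            rel = os.path.relpath(path, base)
        except ValueError:
            # e.g. paths on a different drive on Windows
            result.append(path)
            continue
        outside = rel == os.pardir or rel.startswith(os.pardir + os.sep)
        result.append(path if outside else rel)
    return result
```

Relative entries would also make an installed project relocatable together with its site-packages directory, which is presumably part of why PEP 376 asks for them.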
Regards From merwok at netwok.org Tue Nov 15 15:14:30 2011 From: merwok at netwok.org (=?UTF-8?B?w4lyaWMgQXJhdWpv?=) Date: Tue, 15 Nov 2011 15:14:30 +0100 Subject: [Python-Dev] [Python-checkins] cpython: Issue #6397: Support '/dev/poll' polling objects in select module, under In-Reply-To: References: Message-ID: <4EC273C6.6030208@netwok.org> Hi, > http://hg.python.org/cpython/rev/8f7ab4bf7ad9 > user: Jesus Cea > date: Mon Nov 14 19:07:41 2011 +0100 > summary: > Issue #6397: Support '/dev/poll' polling objects in select module, under Solaris & derivatives. > +.. _devpoll-objects: > + > +``/dev/poll`` Polling Objects > +---------------------------------------------- > + > + http://developers.sun.com/solaris/articles/using_devpoll.html > + http://developers.sun.com/solaris/articles/polling_efficient.html > + This markup creates a blockquote with the two links displayed on a single line. You probably meant to use a comment (start the block with ".. ") or, if the links are generally useful, move them to a seealso block later in the file. Regards From jcea at jcea.es Wed Nov 16 12:40:06 2011 From: jcea at jcea.es (=?utf-8?Q?Jes=C3=BAs_Cea?=) Date: Wed, 16 Nov 2011 12:40:06 +0100 Subject: [Python-Dev] Is Python insider blog dead? Message-ID: The Python Insider blog was a great idea, trying to open and expose python-dev to the world. A great and necessary idea. But the last post was in August. I wonder if the project is dead... Would be sad :-( http://blog.python.org/ Sent from my iPhone From senthil at uthcode.com Wed Nov 16 12:49:51 2011 From: senthil at uthcode.com (Senthil Kumaran) Date: Wed, 16 Nov 2011 19:49:51 +0800 Subject: [Python-Dev] Is Python insider blog dead? In-Reply-To: References: Message-ID: <20111116114951.GD1919@mathmagic> No. I think you are welcome to write something about the recent changes you made to Python.
-- Senthil On Wed, Nov 16, 2011 at 12:40:06PM +0100, Jesús Cea wrote: > The Python Insider blog was a great idea, trying to open and expose python-dev to the world. A great and necessary idea. > > But the last post was in August. > > I wonder if the project is dead... Would be sad :-( > > http://blog.python.org/ > > Sent from my iPhone > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/senthil%40uthcode.com From brian at python.org Wed Nov 16 14:23:03 2011 From: brian at python.org (Brian Curtin) Date: Wed, 16 Nov 2011 07:23:03 -0600 Subject: [Python-Dev] Is Python insider blog dead? In-Reply-To: References: Message-ID: On Wed, Nov 16, 2011 at 05:40, Jesús Cea wrote: > The Python Insider blog was a great idea, trying to open and expose python-dev > to the world. A great and necessary idea. > > But the last post was in August. > > I wonder if the project is dead... Would be sad :-( > > http://blog.python.org/ Not dead, there was just a period where I got a little too busy with real life, plus development seemed to slow down for a while. I have a few drafts working (like a post on all of the recent PEP activity) and a few more in my head, but I'd like for it to not be a one-man show :) I've been planning to do another push to get people from around here to write about their big commits, what's going on in the areas of code they work on, interesting bugs they've fixed, etc. Now that you mentioned this, I'll get going quicker and send out details in the next day or so. From victor.stinner at haypocalc.com Wed Nov 16 14:41:37 2011 From: victor.stinner at haypocalc.com (Victor Stinner) Date: Wed, 16 Nov 2011 14:41:37 +0100 Subject: [Python-Dev] Is Python insider blog dead?
In-Reply-To: References: Message-ID: <1959086.E6d5Q7CMGn@dsk000552> On Wednesday 16 November 2011 07:23:03, Brian Curtin wrote: > Not dead, there was just a period where I got a little too busy with real > life, plus development seemed to slow down for a while. I have a few drafts > working (like a post on all of the recent PEP activity) and a few more in > my head, but I'd like for it to not be a one-man show :) Some interesting topics for this blog:

- recently implemented PEPs: 393 (Unicode) and 3151 (exceptions)
- sys.platform and Linux 3
- deprecation of bytes filenames on Windows

For PEP 393, I still have a question: do old modules benefit from the memory reduction or not? If a string is created using the old API, it only uses a wchar_t* buffer. When the string is read using the new API, it is converted to use the best storage (UCS 1/2/4) and the wchar_t* buffer is freed, so the memory consumption is reduced. A problem may arise if you access the string again using the old API: Python recreates the wchar_t* buffer and so the string has two storages (double memory usage, which is worse than Python 3.2). But I don't understand whether this case can happen or not. FYI I added a "Deprecated" section to the What's New in Python 3.3 document: http://docs.python.org/dev/whatsnew/3.3.html#deprecated-modules-functions-and-methods. Victor From lists at cheimes.de Wed Nov 16 15:03:30 2011 From: lists at cheimes.de (Christian Heimes) Date: Wed, 16 Nov 2011 15:03:30 +0100 Subject: [Python-Dev] Is Python insider blog dead? In-Reply-To: <1959086.E6d5Q7CMGn@dsk000552> References: <1959086.E6d5Q7CMGn@dsk000552> Message-ID: On 16.11.2011 14:41, Victor Stinner wrote: > On Wednesday 16 November 2011 07:23:03, Brian Curtin wrote: >> Not dead, there was just a period where I got a little too busy with real >> life, plus development seemed to slow down for a while.
I have a few drafts >> working (like a post on all of the recent PEP activity) and a few more in >> my head, but I'd like for it to not be a one-man show :) > > Some interesting topics for this blog: > > - recent implemented PEP: 393 (Unicode) and 3151 (exceptions) > - sys.platform and Linux 3 I've already blogged about the Linux 3 topic two months ago. You are welcome to use my posting as a reference. http://lipyrary.blogspot.com/2011/09/python-and-linux-kernel-30-sysplatform.html Christian From p.f.moore at gmail.com Wed Nov 16 16:31:18 2011 From: p.f.moore at gmail.com (Paul Moore) Date: Wed, 16 Nov 2011 15:31:18 +0000 Subject: [Python-Dev] Is Python insider blog dead? In-Reply-To: References: Message-ID: On 16 November 2011 13:23, Brian Curtin wrote: > Not dead, there was just a period where I got a little too busy with real > life, plus development seemed to slow down for a while. I have a few drafts > working (like a post on all of the recent PEP activity) and a few more in my > head, but I'd like for it to not be a one-man show :) > I've been planning to do another push to get people from around here to > write about their big commits, what's going on in the areas of code they > work on, interesting bugs they've fixed, etc. Now that you mentioned this, > I'll get going quicker and send out details in the next day or so. I had planned to do an article on the new packaging features, but the discussion hasn't really come to any conclusions yet, and to be honest I don't think that everything has settled enough yet (particularly on binary installers)[1]. Maybe I could do a short post saying something along the lines of "packaging is coming in 3.3, it's available now in the form of distutils2, please try it out, work out what you like and what you have problems with, and come and contribute on python-dev"...? Paul. 
[1] Also real life has left me with little or no spare time once more :-) From brian at python.org Thu Nov 17 05:51:10 2011 From: brian at python.org (Brian Curtin) Date: Wed, 16 Nov 2011 22:51:10 -0600 Subject: [Python-Dev] blog.python.org - help wanted - topics, authors, etc. Message-ID: As Jesús mentioned earlier today, it has been a while since http://blog.python.org/ was last updated, and even before that it wasn't updated all that often. I'd like to try and get others involved so we can get a more steady flow going and highlight more of the work everyone is doing. The blog aims to keep people up-to-date on what's going on in the development of Python without having to follow every word of this mailing list, the bug tracker, IRC, etc. There are a number of topics that I think would be great for the blog, including but not limited to:

* Surveys - Raymond likes to poll people on twitter and has done a bunch of surveys over IRC, usually relating to ideas on APIs. I'd love to put some of these up on the blog and cast a wider net.
* New features - Introducing a new module, such as Victor's faulthandler, makes for a great post. As we get closer to 3.3, everyone will be stuffing the commit stream with new features, and introducing interesting ones on here would be great.
* PEPs - As we all know, PEP discussions can sometimes result in weeks-long debates with hundreds of 500-word responses. Summarizing a discussion down to a blog post would probably be helpful for a lot of people. I know I can't follow all of these PEPs all the time, but I'd like to know what's going on.
* Problems you're solving - Antoine did a nice post about his changes to remove polling from a number of areas in the code and why he did them. More explanations like this would be great.

We run the blog out of a Mercurial repository on BitBucket and do the writing in reStructuredText, then publish via Blogger.
There's also a great team of volunteer translators that can get your post out there in 10 languages (see the blog sidebar for the full list). We can also accept guest posts with zero process: you just write and we'll handle the back-end stuff and get your work published. I don't want to make people go through all kinds of hoops if they just want to make a one-time post about something they want to share. If you have any topics - specific or general - that you'd like to see covered, respond here and we'll add them on the tracker. If you're interested in writing, contact me and I'll get you up and running. From vikashagrawal1990 at gmail.com Thu Nov 17 11:07:34 2011 From: vikashagrawal1990 at gmail.com (vikash agrawal) Date: Thu, 17 Nov 2011 15:37:34 +0530 Subject: [Python-Dev] blog.python.org - help wanted - topics, authors, etc. In-Reply-To: References: Message-ID: Hi Brian, On Thu, Nov 17, 2011 at 10:21 AM, Brian Curtin wrote: > As Jesús mentioned earlier today, it has been a while since > http://blog.python.org/ was last updated, and even before that it > wasn't updated all that often. I'd like to try and get others involved > so we can get a more steady flow going and highlight more of the work > everyone is doing. > > The blog aims to keep people up-to-date on what's going on in the > development of Python without having to follow every word of this > mailing list, the bug tracker, IRC, etc. There are a number of topics > that I think would be great for the blog, including but not limited > to: > > * Surveys - Raymond likes to poll people on twitter and has done a > bunch of surveys over IRC, usually relating to ideas on APIs. I'd love > to put some of these up on the blog and cast a wider net. > * New features - Introducing a new module, such as Victor's > faulthandler, makes for a great post. As we get closer to 3.3, > everyone will be stuffing the commit stream with new features and > introducing interesting ones on here would be great.
> * PEPs - As we all know, PEP discussions can sometimes result in weeks-long > debates with hundreds of 500-word responses. Summarizing a > discussion down to a blog post would probably be helpful for a lot of > people. I know I can't follow all of these PEPs all the time, but I'd > like to know what's going on. > * Problems you're solving - Antoine did a nice post about his changes > to remove polling from a number of areas in the code and why he did > them. More explanations like this would be great. > > We run the blog out of a Mercurial repository on BitBucket and do the > writing in reStructuredText, then publish via Blogger. There's also a > great team of volunteer translators that can get your post out there > in 10 languages (see the blog sidebar for the full list). We can also > accept guest posts with zero process: you just write and we'll handle > the back-end stuff and get your work published. I don't want to make > people go through all kinds of hoops if they just want to make a > one-time post about something they want to share. > > If you have any topics - specific or general - that you'd like to see > covered, respond here and we'll add them on the tracker. > If you're interested in writing, contact me and I'll get you up and running > I think that if the Python blog is regularly updated, it will be a great resource for everyone :). Moreover, personally I feel that something like weekly (or bi-weekly) interviews with core Python developers would be great for newcomers like me. Also, I would love to lend a helping hand; just guide me on where I should start :) Regards Vikash Agrawal -- sent via HTC Sensation From adys.wh at gmail.com Thu Nov 17 22:09:36 2011 From: adys.wh at gmail.com (Jerome Leclanche) Date: Thu, 17 Nov 2011 21:09:36 +0000 Subject: [Python-Dev] blog.python.org - help wanted - topics, authors, etc.
In-Reply-To: References: Message-ID: > We run the blog out of a Mercurial repository on BitBucket and do the > writing in reStructuredText, then publish via Blogger. It sounds to me like this could be a reason why there are so few volunteers. Blog authors/writers usually don't want to write in rst and have to commit to a repo. JL From p.f.moore at gmail.com Thu Nov 17 22:42:52 2011 From: p.f.moore at gmail.com (Paul Moore) Date: Thu, 17 Nov 2011 21:42:52 +0000 Subject: [Python-Dev] blog.python.org - help wanted - topics, authors, etc. In-Reply-To: References: Message-ID: On 17 November 2011 21:09, Jerome Leclanche wrote: >> We run the blog out of a Mercurial repository on BitBucket and do the >> writing in reStructuredText, then publish via Blogger. > > It sounds to me like this could be a reason why there are so few > volunteers. Blog authors/writers usually don't want to write in rst > and have to commit to a repo. In the part you didn't quote: "We can also accept guest posts with zero process: you just write and we'll handle the back-end stuff and get your work published." There is no need to use any toolset you don't want to - just write up a post however you want and send it in. Paul. From adys.wh at gmail.com Thu Nov 17 22:47:53 2011 From: adys.wh at gmail.com (Jerome Leclanche) Date: Thu, 17 Nov 2011 21:47:53 +0000 Subject: [Python-Dev] blog.python.org - help wanted - topics, authors, etc. In-Reply-To: References: Message-ID: My apologies, I missed that bit :) JL On Thu, Nov 17, 2011 at 9:42 PM, Paul Moore wrote: > On 17 November 2011 21:09, Jerome Leclanche wrote: >>> We run the blog out of a Mercurial repository on BitBucket and do the >>> writing in reStructuredText, then publish via Blogger. >> >> It sounds to me like this could be a reason why there are so few >> volunteers. Blog authors/writers usually don't want to write in rst >> and have to commit to a repo. 
> > In the part you didn't quote: > > "We can also accept guest posts with zero process: you just write and > we'll handle the back-end stuff and get your work published." > > There is no need to use any toolset you don't want to - just write up > a post however you want and send it in. > Paul. > From merwok at netwok.org Fri Nov 18 16:10:17 2011 From: merwok at netwok.org (=?UTF-8?B?w4lyaWMgQXJhdWpv?=) Date: Fri, 18 Nov 2011 16:10:17 +0100 Subject: [Python-Dev] [Python-checkins] cpython (2.7): PDB now will properly escape backslashes in the names of modules it executes. In-Reply-To: References: Message-ID: <4EC67559.90409@netwok.org> Hi Jason, > http://hg.python.org/cpython/rev/f7dd5178f36a > branch: 2.7 > user: Jason R. Coombs > date: Thu Nov 17 18:03:24 2011 -0500 > summary: > PDB now will properly escape backslashes in the names of modules it executes. Fixes #7750 > diff --git a/Lib/test/test_pdb.py b/Lib/test/test_pdb.py > +class Tester7750(unittest.TestCase): I think we have an unwritten rule that test class and method names should tell something about what they test. (We do have things like TestWeirdBugs and test_12345, but I don't think it's a useful pattern to follow :) Not a big deal anyway. > + # if the filename has something that resolves to a python > + # escape character (such as \t), it will fail > + test_fn = '.\\test7750.py' > + > + msg = "issue7750 only applies when os.sep is a backslash" > + @unittest.skipUnless(os.path.sep == '\\', msg) > + def test_issue7750(self): > + with open(self.test_fn, 'w') as f: > + f.write('print("hello world")') > + cmd = [sys.executable, '-m', 'pdb', self.test_fn,] > + proc = subprocess.Popen(cmd, > + stdout=subprocess.PIPE, > + stdin=subprocess.PIPE, > + stderr=subprocess.STDOUT, > + ) > + stdout, stderr = proc.communicate('quit\n') > + self.assertNotIn('IOError', stdout, "pdb munged the filename") Why not check for assertIn(filename, stdout)?
(In other words, check for intended behavior rather than implementation of the erstwhile bug.) BTW, I've just tested that when giving a message argument to assertNotIn (the third argument), unittest still displays the other arguments to allow for easier debugging. I didn't know that; it's cool! > + def tearDown(self): > + if os.path.isfile(self.test_fn): > + os.remove(self.test_fn) In my own tests, I've become fond of using "self.addCleanup(os.remove, filename)": it's shorter than a tearDown and is right there on the line that follows or precedes the file creation. > if __name__ == '__main__': > test_main() > + unittest.main() This looks strange. Regards From stefan_ml at behnel.de Fri Nov 18 17:26:40 2011 From: stefan_ml at behnel.de (Stefan Behnel) Date: Fri, 18 Nov 2011 17:26:40 +0100 Subject: [Python-Dev] how to find the file path to an extension module at init time? In-Reply-To: References: <4EC02CA7.90302@v.loewis.de> <4EC1B36E.8040507@v.loewis.de> Message-ID: Stefan Behnel, 15.11.2011 09:01: > "Martin v. Löwis", 15.11.2011 01:33: >>>> An approach similar to _Py_PackageContext should be possible. >>> >>> Yes, and a "_Py_ModuleImportContext" would be rather trivial to do. >>> Could that go into 3.3? >> >> If somebody contributes a patch: sure. > > Ok, cool. Patch(es) uploaded to the bug tracker. http://bugs.python.org/issue13429 Stefan From stefan_ml at behnel.de Fri Nov 18 17:27:58 2011 From: stefan_ml at behnel.de (Stefan Behnel) Date: Fri, 18 Nov 2011 17:27:58 +0100 Subject: [Python-Dev] _PyImport_FindExtensionObject() does not set _Py_PackageContext In-Reply-To: References: Message-ID: Stefan Behnel, 13.11.2011 19:48: > I noticed that _PyImport_FindExtensionObject() in Python/import.c does not > set _Py_PackageContext when it calls the module init function for module > reinitialisation. However, PyModule_Create2() still uses that variable to > figure out the fully qualified module name. Was this intentionally left out > or is it just an oversight?
Assuming it was an oversight, I attached a patch to ticket 13429 on the bug tracker. http://bugs.python.org/issue13429 Stefan From status at bugs.python.org Fri Nov 18 18:07:31 2011 From: status at bugs.python.org (Python tracker) Date: Fri, 18 Nov 2011 18:07:31 +0100 (CET) Subject: [Python-Dev] Summary of Python tracker Issues Message-ID: <20111118170731.D35F81DEBF@psf.upfronthosting.co.za> ACTIVITY SUMMARY (2011-11-11 - 2011-11-18) Python tracker at http://bugs.python.org/ To view or respond to any of the issues listed below, click on the issue. Do NOT respond to this message. Issues counts and deltas: open 3115 ( +5) closed 22097 (+41) total 25212 (+46) Open issues with patches: 1330 Issues opened (32) ================== #12246: Support installation when running from an uninstalled Python http://bugs.python.org/issue12246 reopened by eric.araujo #13193: test_packaging and test_distutils failures http://bugs.python.org/issue13193 reopened by eric.araujo #13384: Unnecessary __future__ import in random module http://bugs.python.org/issue13384 reopened by rhettinger #13386: Document documentation conventions for optional args http://bugs.python.org/issue13386 opened by ezio.melotti #13390: Hunt memory allocations in addition to reference leaks http://bugs.python.org/issue13390 opened by pitrou #13393: Improve BufferedReader.read1() http://bugs.python.org/issue13393 opened by pitrou #13394: Patch to increase aifc lib test coverage with couple of minor http://bugs.python.org/issue13394 opened by Oleg.Plakhotnyuk #13396: new method random.getrandbytes() http://bugs.python.org/issue13396 opened by amaury.forgeotdarc #13397: Option for XMLRPC clients to automatically transform Fault exc http://bugs.python.org/issue13397 opened by rhettinger #13398: _cursesmodule does not build, doesn't find Python.h on Solaris http://bugs.python.org/issue13398 opened by automatthias #13399: Don't print traceback for unrecognized actions, commands and o http://bugs.python.org/issue13399 
opened by Arfrever #13400: packaging: build command should accept --compile, --no-compile http://bugs.python.org/issue13400 opened by Arfrever #13401: test_argparse fails with root permissions http://bugs.python.org/issue13401 opened by Arfrever #13402: Document absoluteness of sys.executable http://bugs.python.org/issue13402 opened by eric.araujo #13403: Option for XMLPRC Server to support HTTPS http://bugs.python.org/issue13403 opened by rhettinger #13404: Add support for system.methodSignature() to XMLRPC Server http://bugs.python.org/issue13404 opened by rhettinger #13405: Add DTrace probes http://bugs.python.org/issue13405 opened by jcea #13407: tarfile.getnames misses members again http://bugs.python.org/issue13407 opened by sengels #13408: Rename packaging.resources back to datafiles http://bugs.python.org/issue13408 opened by eric.araujo #13410: String formatting bug in interactive mode http://bugs.python.org/issue13410 opened by jayanth #13411: Hashable memoryviews http://bugs.python.org/issue13411 opened by pitrou #13412: Symbolic links omitted by os.listdir on some systems http://bugs.python.org/issue13412 opened by alexreg #13413: time.daylight incorrect behavior in linux glibc http://bugs.python.org/issue13413 opened by dimonb #13414: test_strftime failed on OpenBSD http://bugs.python.org/issue13414 opened by rpointel #13415: del os.environ[key] ignores errors http://bugs.python.org/issue13415 opened by haypo #13417: faster utf-8 decoding http://bugs.python.org/issue13417 opened by pitrou #13418: Embedded Python memory leak http://bugs.python.org/issue13418 opened by Asesh #13420: newer() function in dep_util.py discard changes in the same se http://bugs.python.org/issue13420 opened by d.amian #13421: PyCFunction_* are not documented anywhere http://bugs.python.org/issue13421 opened by jcea #13424: Add examples for open???s new opener argument http://bugs.python.org/issue13424 opened by eric.araujo #13425: 
http.client.HTTPMessage.getallmatchingheaders() always returns
     http://bugs.python.org/issue13425  opened by stachjankowski

#13429: provide __file__ to extension init function
     http://bugs.python.org/issue13429  opened by scoder


Most recent 15 issues with no replies (15)
==========================================

#13421: PyCFunction_* are not documented anywhere
     http://bugs.python.org/issue13421

#13417: faster utf-8 decoding
     http://bugs.python.org/issue13417

#13413: time.daylight incorrect behavior in linux glibc
     http://bugs.python.org/issue13413

#13408: Rename packaging.resources back to datafiles
     http://bugs.python.org/issue13408

#13403: Option for XMLPRC Server to support HTTPS
     http://bugs.python.org/issue13403

#13402: Document absoluteness of sys.executable
     http://bugs.python.org/issue13402

#13401: test_argparse fails with root permissions
     http://bugs.python.org/issue13401

#13400: packaging: build command should accept --compile, --no-compile
     http://bugs.python.org/issue13400

#13397: Option for XMLRPC clients to automatically transform Fault exc
     http://bugs.python.org/issue13397

#13372: handle_close called twice in poll2
     http://bugs.python.org/issue13372

#13369: timeout with exit code 0 while re-running failed tests
     http://bugs.python.org/issue13369

#13354: tcpserver should document non-threaded long-living connections
     http://bugs.python.org/issue13354

#13330: Attempt full test coverage of LocaleTextCalendar.formatweekday
     http://bugs.python.org/issue13330

#13325: no address in the representation of asyncore dispatcher after
     http://bugs.python.org/issue13325

#13277: tzinfo subclasses information
     http://bugs.python.org/issue13277


Most recent 15 issues waiting for review (15)
=============================================

#13429: provide __file__ to extension init function
     http://bugs.python.org/issue13429

#13420: newer() function in dep_util.py discard changes in the same se
     http://bugs.python.org/issue13420

#13417: faster utf-8 decoding
     http://bugs.python.org/issue13417

#13415: del os.environ[key] ignores errors
     http://bugs.python.org/issue13415

#13411: Hashable memoryviews
     http://bugs.python.org/issue13411

#13410: String formatting bug in interactive mode
     http://bugs.python.org/issue13410

#13405: Add DTrace probes
     http://bugs.python.org/issue13405

#13401: test_argparse fails with root permissions
     http://bugs.python.org/issue13401

#13396: new method random.getrandbytes()
     http://bugs.python.org/issue13396

#13394: Patch to increase aifc lib test coverage with couple of minor
     http://bugs.python.org/issue13394

#13393: Improve BufferedReader.read1()
     http://bugs.python.org/issue13393

#13390: Hunt memory allocations in addition to reference leaks
     http://bugs.python.org/issue13390

#13380: ctypes: add an internal function for reseting the ctypes cache
     http://bugs.python.org/issue13380

#13378: Change the variable "nsmap" from global to instance (xml.etree
     http://bugs.python.org/issue13378

#13372: handle_close called twice in poll2
     http://bugs.python.org/issue13372


Top 10 most discussed issues (10)
=================================

#13386: Document documentation conventions for optional args
     http://bugs.python.org/issue13386  23 msgs

#13412: Symbolic links omitted by os.listdir on some systems
     http://bugs.python.org/issue13412  13 msgs

#13410: String formatting bug in interactive mode
     http://bugs.python.org/issue13410  10 msgs

#13349: Uninformal error message in index() and remove() functions
     http://bugs.python.org/issue13349   8 msgs

#6727: ImportError when package is symlinked on Windows
     http://bugs.python.org/issue6727   7 msgs

#11836: multiprocessing.queues.SimpleQueue is undocumented
     http://bugs.python.org/issue11836   7 msgs

#13294: http.server: HEAD request should not return a body
     http://bugs.python.org/issue13294   7 msgs

#13396: new method random.getrandbytes()
     http://bugs.python.org/issue13396   7 msgs

#13398: _cursesmodule does not build, doesn't find Python.h on Solaris
     http://bugs.python.org/issue13398   7 msgs

#13405: Add DTrace probes
     http://bugs.python.org/issue13405   7 msgs


Issues closed (41)
==================

#4111: Add Systemtap/DTrace probes
     http://bugs.python.org/issue4111  closed by jcea

#6397: Implementing Solaris "/dev/poll" in the "select" module
     http://bugs.python.org/issue6397  closed by jcea

#7732: imp.find_module crashes Python if there exists a directory nam
     http://bugs.python.org/issue7732  closed by haypo

#7750: IOError when launching script under pdb with backslash in scri
     http://bugs.python.org/issue7750  closed by jason.coombs

#8793: IDLE crashes on opening invalid file
     http://bugs.python.org/issue8793  closed by ned.deily

#11112: UDPTimeoutTest derives from SocketTCPTest
     http://bugs.python.org/issue11112  closed by ezio.melotti

#12629: HTMLParser silently stops parsing with malformed attributes
     http://bugs.python.org/issue12629  closed by ezio.melotti

#12729: Python lib re cannot handle Unicode properly due to narrow/wid
     http://bugs.python.org/issue12729  closed by pitrou

#12767: document threading.Condition.notify
     http://bugs.python.org/issue12767  closed by python-dev

#12875: backport re.compile flags default value documentation
     http://bugs.python.org/issue12875  closed by eli.bendersky

#13064: Port codecs and error handlers to the new Unicode API
     http://bugs.python.org/issue13064  closed by haypo

#13170: distutils2 test failures
     http://bugs.python.org/issue13170  closed by eric.araujo

#13217: Missing header dependencies in Makefile
     http://bugs.python.org/issue13217  closed by pitrou

#13239: Remove <> operator from Grammar/Grammar
     http://bugs.python.org/issue13239  closed by python-dev

#13264: Monkeypatching using metaclass
     http://bugs.python.org/issue13264  closed by eric.araujo

#13297: xmlrpc.client could accept bytes for input and output
     http://bugs.python.org/issue13297  closed by python-dev

#13309: test_time fails: time data 'LMT' does not match format '%Z'
     http://bugs.python.org/issue13309  closed by flox

#13333: utf-7 inconsistent with surrogates
     http://bugs.python.org/issue13333  closed by pitrou

#13346: re.split() should behave like string.split() for maxsplit=0 an
     http://bugs.python.org/issue13346  closed by terry.reedy

#13357: HTMLParser parses attributes incorrectly.
     http://bugs.python.org/issue13357  closed by ezio.melotti

#13358: HTMLParser incorrectly handles cdata elements.
     http://bugs.python.org/issue13358  closed by ezio.melotti

#13374: Deprecate usage of the Windows ANSI API in the nt module
     http://bugs.python.org/issue13374  closed by haypo

#13385: Add an explicit re.NOFLAGS flag value to the re module
     http://bugs.python.org/issue13385  closed by eli.bendersky

#13387: suggest assertIs(type(obj), cls) for exact type checking
     http://bugs.python.org/issue13387  closed by ezio.melotti

#13388: document hg commit hooks in the devguide
     http://bugs.python.org/issue13388  closed by python-dev

#13389: Clear lists and dicts freelist in gc.collect()
     http://bugs.python.org/issue13389  closed by pitrou

#13391: string.strip Does Not Remove Zero-Width-Space (ZWSP)
     http://bugs.python.org/issue13391  closed by ezio.melotti

#13395: Python ISO-8859-1 encoding problem
     http://bugs.python.org/issue13395  closed by ezio.melotti

#13406: Deprecation warnings when running the test suite
     http://bugs.python.org/issue13406  closed by ezio.melotti

#13409: Invalid expression error if a regex ends with a backslash
     http://bugs.python.org/issue13409  closed by ezio.melotti

#13416: Python Tutorial, Section 3, Minor PEP 8 adjustment
     http://bugs.python.org/issue13416  closed by benjamin.peterson

#13419: import does not recognise SYMLINKDs on Windows 7
     http://bugs.python.org/issue13419  closed by brian.curtin

#13422: Subprocess: children hang due to open pipes
     http://bugs.python.org/issue13422  closed by gregory.p.smith

#13423: Ranges cannot be meaningfully compared for equality or hashed
     http://bugs.python.org/issue13423  closed by flox

#13426: Typos in pickle docs
     http://bugs.python.org/issue13426  closed by ezio.melotti

#13427: string comparison with ==
     http://bugs.python.org/issue13427  closed by flox

#13428: PyUnicode_FromFormatV: support width and precision for string
     http://bugs.python.org/issue13428  closed by haypo

#13430: Add a curry function to the functools module
     http://bugs.python.org/issue13430  closed by alex

#1745761: Bad attributes/data handling in SGMLib
     http://bugs.python.org/issue1745761  closed by ezio.melotti

#13392: Writing a pyc file is not atomic under Windows
     http://bugs.python.org/issue13392  closed by pitrou

#755670: improve HTMLParser attribute processing regexps
     http://bugs.python.org/issue755670  closed by ezio.melotti

From solipsis at pitrou.net  Fri Nov 18 21:14:38 2011
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Fri, 18 Nov 2011 21:14:38 +0100
Subject: [Python-Dev] Committing PEP 3155
Message-ID: <20111118211438.782ae82a@pitrou.net>

Hello,

I haven't seen any strong objections, so I would like to go ahead and
commit PEP 3155 (*) soon. Is anyone against it?

(*) "Qualified name for classes and functions"
    http://www.python.org/dev/peps/pep-3155/

Thank you

Antoine.

From barry at python.org  Sat Nov 19 00:15:28 2011
From: barry at python.org (Barry Warsaw)
Date: Fri, 18 Nov 2011 18:15:28 -0500
Subject: [Python-Dev] Committing PEP 3155
In-Reply-To: <20111118211438.782ae82a@pitrou.net>
References: <20111118211438.782ae82a@pitrou.net>
Message-ID: <20111118181528.022bb4d4@limelight.wooz.org>

On Nov 18, 2011, at 09:14 PM, Antoine Pitrou wrote:

>I haven't seen any strong objections, so I would like to go ahead and
>commit PEP 3155 (*) soon. Is anyone against it?
>
>(*) "Qualified name for classes and functions"
>    http://www.python.org/dev/peps/pep-3155/

I'm still not crazy about the attribute name, although I appreciate you
including the discussion in the PEP.  Have you identified a BDFOP that might
be able to pronounce on the choice?  Or perhaps Guido would like to weigh in?

The PEP says that the qualified name deliberately does not include the module
name, but it doesn't explain why.
I think it should (explain why).

I'd like the PEP to explain why this is a better solution than
re-establishing introspectability that was available through unbound
methods.

Cheers,
-Barry

From solipsis at pitrou.net  Sat Nov 19 00:54:39 2011
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Sat, 19 Nov 2011 00:54:39 +0100
Subject: [Python-Dev] Committing PEP 3155
References: <20111118211438.782ae82a@pitrou.net>
	<20111118181528.022bb4d4@limelight.wooz.org>
Message-ID: <20111119005439.5a9d9c0f@pitrou.net>

On Fri, 18 Nov 2011 18:15:28 -0500
Barry Warsaw wrote:
> On Nov 18, 2011, at 09:14 PM, Antoine Pitrou wrote:
>
> >I haven't seen any strong objections, so I would like to go ahead and
> >commit PEP 3155 (*) soon. Is anyone against it?
> >
> >(*) "Qualified name for classes and functions"
> >    http://www.python.org/dev/peps/pep-3155/
>
> I'm still not crazy about the attribute name, although I appreciate you
> including the discussion in the PEP.

Well, the other propositions still seem worse to me. "Qualified" is
reasonably accurate, and "qualname" is fairly short and convenient (I
would hate to type "__qualifiedname__" or "__qualified_name__" in full).
In the same vein, we have __repr__ which may seem weird at first
sight :)

> Have you identified a BDFOP that might
> be able to pronounce on the choice?

No. Perhaps I was irenic in hoping that no opposition == no need to get
an official pronouncement :-)

> The PEP says that the qualified name deliberately does not include the
> module name, but it doesn't explain why. I think it should (explain why).
>
> I'd like the PEP to explain why this is a better solution than
> re-establishing introspectability that was available through unbound
> methods.

I've added explanations for these two points. Do they satisfy your
expectations?

cheers

Antoine.
From barry at python.org  Sat Nov 19 01:15:18 2011
From: barry at python.org (Barry Warsaw)
Date: Fri, 18 Nov 2011 19:15:18 -0500
Subject: [Python-Dev] Committing PEP 3155
In-Reply-To: <20111119005439.5a9d9c0f@pitrou.net>
References: <20111118211438.782ae82a@pitrou.net>
	<20111118181528.022bb4d4@limelight.wooz.org>
	<20111119005439.5a9d9c0f@pitrou.net>
Message-ID: <20111118191518.2f6ba756@resist.wooz.org>

On Nov 19, 2011, at 12:54 AM, Antoine Pitrou wrote:

>I've added explanations for these two points. Do they satisfy your
>expectations?

Yep, thanks.
-Barry

From victor.stinner at haypocalc.com  Sat Nov 19 03:31:09 2011
From: victor.stinner at haypocalc.com (Victor Stinner)
Date: Sat, 19 Nov 2011 03:31:09 +0100
Subject: [Python-Dev] Committing PEP 3155
In-Reply-To: <20111118211438.782ae82a@pitrou.net>
References: <20111118211438.782ae82a@pitrou.net>
Message-ID: <4EC714ED.3030405@haypocalc.com>

> I haven't seen any strong objections, so I would like to go ahead and
> commit PEP 3155 (*) soon. Is anyone against it?

I'm not against it, but I have some questions.

Does you a working implementing?

Do you have a patch for issue #9276 using __qualname__? Maybe not a
fully working patch, but a proof-of-concept?

Could you add examples on instances? I suppose that it gives the same
result than classes:

C.__qualname__ == C().__qualname__
C.f.__qualname__ == C().f.__qualname__

On 19/11/2011 00:15, Barry Warsaw wrote:
> I'd like the PEP to explain why this is a better solution than
> re-establishing introspectability that was available through
> unbound methods.

__qualname__ works also on nested functions. Is it a new feature? Or was
it already possible in Python 2 to compute the qualified name?
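[For reference, a short sketch of what the PEP's __qualname__ yields once the
feature lands (Python 3.3+), including the nested-function case asked about
here; output follows the final PEP semantics:]

```python
class C:
    def f(self):
        pass

def outer():
    def inner():
        pass
    return inner

# __name__ alone is ambiguous; __qualname__ records the defining context
print(C.__qualname__)        # C
print(C.f.__qualname__)      # C.f
print(outer().__qualname__)  # outer.<locals>.inner
```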
Victor

From storchaka at gmail.com  Sat Nov 19 07:48:08 2011
From: storchaka at gmail.com (Serhiy Storchaka)
Date: Sat, 19 Nov 2011 08:48:08 +0200
Subject: [Python-Dev] Committing PEP 3155
In-Reply-To: <20111119005439.5a9d9c0f@pitrou.net>
References: <20111118211438.782ae82a@pitrou.net>
	<20111118181528.022bb4d4@limelight.wooz.org>
	<20111119005439.5a9d9c0f@pitrou.net>
Message-ID:

19.11.11 01:54, Antoine Pitrou wrote:
> Well, the other propositions still seem worse to me. "Qualified" is
> reasonably accurate, and "qualname" is fairly short and convenient (I
> would hate to type "__qualifiedname__" or "__qualified_name__" in full).
> In the same vein, we have __repr__ which may seem weird at first
> sight :)

What about __reprname__?

From ncoghlan at gmail.com  Sat Nov 19 09:29:06 2011
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Sat, 19 Nov 2011 18:29:06 +1000
Subject: [Python-Dev] Committing PEP 3155
In-Reply-To:
References: <20111118211438.782ae82a@pitrou.net>
	<20111118181528.022bb4d4@limelight.wooz.org>
	<20111119005439.5a9d9c0f@pitrou.net>
Message-ID:

On Sat, Nov 19, 2011 at 4:48 PM, Serhiy Storchaka wrote:
> 19.11.11 01:54, Antoine Pitrou wrote:
>>
>> Well, the other propositions still seem worse to me. "Qualified" is
>> reasonably accurate, and "qualname" is fairly short and convenient (I
>> would hate to type "__qualifiedname__" or "__qualified_name__" in full).
>> In the same vein, we have __repr__ which may seem weird at first
>> sight :)
>
> What about __reprname__?

Antoine only mentioned 'repr' as being an abbreviation for
'representation', just as 'qualname' will be an abbreviation for
'qualified name'. The "less ambiguous repr()" use case is just one
minor aspect of the new qualified names, even if it's the most
immediately visible, so using 'repr' in the attribute name would give
people all sorts of wrong ideas about the scope of its utility.

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |
Brisbane, Australia From stefan_ml at behnel.de Sat Nov 19 09:42:49 2011 From: stefan_ml at behnel.de (Stefan Behnel) Date: Sat, 19 Nov 2011 09:42:49 +0100 Subject: [Python-Dev] patch metadata - to use or not to use? Message-ID: Hi, I recently got some patches accepted for inclusion in 3.3, and each time, the patch metadata (such as my name and my commit comment) were stripped by applying the patch manually, instead of hg importing it. This makes it clear in the history who eventually reviewed and applied the patch, but less visible who wrote it (except for the entry in Misc/NEWS). I didn't see this mentioned in the dev-guide. Is it being considered the Right Way To Do It? Stefan From solipsis at pitrou.net Sat Nov 19 10:49:31 2011 From: solipsis at pitrou.net (Antoine Pitrou) Date: Sat, 19 Nov 2011 10:49:31 +0100 Subject: [Python-Dev] Committing PEP 3155 References: <20111118211438.782ae82a@pitrou.net> <4EC714ED.3030405@haypocalc.com> Message-ID: <20111119104931.6958d505@pitrou.net> On Sat, 19 Nov 2011 03:31:09 +0100 Victor Stinner wrote: > > I haven't seen any strong objections, so I would like to go ahead and > > commit PEP 3155 (*) soon. Is anyone against it? > > I'm not against it, but I have some questions. > > Does you a working implementing? I suppose the question is about a working implementation :) http://hg.python.org/features/pep-3155 > Do you have a patch for issue #9276 using __qualname__? Maybe not a > fully working patch, but a proof-of-concept? No. That's part of PEP 3154. > Could you add examples on instances? I suppose that it gives the same > result than classes: > > C.__qualname__ == C().__qualname__ > C.f.__qualname__ == C().f.__qualname__ No. You have to use C().__class__.__qualname__. Same as C().__class__.__name__, really. > Le 19/11/2011 00:15, Barry Warsaw a ?crit : > > I'd like the PEP to explain why this is a better solution than > > re-establishing introspectability that was available through > > unbound methods. 
> > __qualname__ works also on nested functions. Is it a new feature? Or was > it already possible in Python 2 to compute the qualified name? It's a new feature indeed. Regards Antoine. From solipsis at pitrou.net Sat Nov 19 10:51:49 2011 From: solipsis at pitrou.net (Antoine Pitrou) Date: Sat, 19 Nov 2011 10:51:49 +0100 Subject: [Python-Dev] patch metadata - to use or not to use? References: Message-ID: <20111119105149.548aacf8@pitrou.net> On Sat, 19 Nov 2011 09:42:49 +0100 Stefan Behnel wrote: > Hi, > > I recently got some patches accepted for inclusion in 3.3, and each time, > the patch metadata (such as my name and my commit comment) were stripped by > applying the patch manually, instead of hg importing it. This makes it > clear in the history who eventually reviewed and applied the patch, but > less visible who wrote it (except for the entry in Misc/NEWS). > > I didn't see this mentioned in the dev-guide. Is it being considered the > Right Way To Do It? It is common to add minor things to the patch when committing (such as a redacted NEWS entry, or a ACKS entry), in which cases you can't import it anyway. Often the name of the contributor is added to NEWS *and* to the commit message. Regards Antoine. From solipsis at pitrou.net Sat Nov 19 11:20:30 2011 From: solipsis at pitrou.net (Antoine Pitrou) Date: Sat, 19 Nov 2011 11:20:30 +0100 Subject: [Python-Dev] patch metadata - to use or not to use? References: Message-ID: <20111119112030.3463ea9b@pitrou.net> On Sat, 19 Nov 2011 09:42:49 +0100 Stefan Behnel wrote: > > I didn't see this mentioned in the dev-guide. Is it being considered the > Right Way To Do It? That said, to answer your question more generally, I think it's simply how we worked with SVN, and we haven't found any compelling reason to change. Regards Antoine. From ncoghlan at gmail.com Sat Nov 19 11:52:14 2011 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sat, 19 Nov 2011 20:52:14 +1000 Subject: [Python-Dev] patch metadata - to use or not to use? 
In-Reply-To: References: Message-ID: On Sat, Nov 19, 2011 at 6:42 PM, Stefan Behnel wrote: > Hi, > > I recently got some patches accepted for inclusion in 3.3, and each time, > the patch metadata (such as my name and my commit comment) were stripped by > applying the patch manually, instead of hg importing it. This makes it clear > in the history who eventually reviewed and applied the patch, but less > visible who wrote it (except for the entry in Misc/NEWS). > > I didn't see this mentioned in the dev-guide. Is it being considered the > Right Way To Do It? Generally speaking, it's more useful for the checkin metadata to reflect who actually did the checkin, since that's the most useful information for the tracker and buildbot integration. The question of did the original patch does matter in terms of giving appropriate credit (which is covered by NEWS and the commit message), but who did the checkin matters for immediate workflow reasons (i.e. who is responsible for dealing with any buildbot breakage, objections on python-dev, objections on the tracker, etc). One of the key aspects of having push rights is that we're the ones that take responsibility for the state of the central repo - if we stuff it up and break the build (either because we missed something on review, or due to cross-platform issues), that's *our* problem, not usually something the original patch contributor needs to worry about. We do have a guideline that says to always use the "--no-commit" flag with "hg import" and then run the tests before committing, so that may answer your question about whether or not it's an official policy. (That said, I don't know if the devguide actually says that explicitly anywhere - it's just reflected in the various workflow examples, as well as in the mailing list discussions that helped craft those examples) Cheers, Nick. -- Nick Coghlan?? |?? ncoghlan at gmail.com?? |?? 
Brisbane, Australia From guido at python.org Sat Nov 19 17:36:30 2011 From: guido at python.org (Guido van Rossum) Date: Sat, 19 Nov 2011 08:36:30 -0800 Subject: [Python-Dev] patch metadata - to use or not to use? In-Reply-To: References: Message-ID: On Sat, Nov 19, 2011 at 2:52 AM, Nick Coghlan wrote: > On Sat, Nov 19, 2011 at 6:42 PM, Stefan Behnel wrote: >> Hi, >> >> I recently got some patches accepted for inclusion in 3.3, and each time, >> the patch metadata (such as my name and my commit comment) were stripped by >> applying the patch manually, instead of hg importing it. This makes it clear >> in the history who eventually reviewed and applied the patch, but less >> visible who wrote it (except for the entry in Misc/NEWS). >> >> I didn't see this mentioned in the dev-guide. Is it being considered the >> Right Way To Do It? > > Generally speaking, it's more useful for the checkin metadata to > reflect who actually did the checkin, since that's the most useful > information for the tracker and buildbot integration. The question of > did the original patch does matter in terms of giving appropriate > credit (which is covered by NEWS and the commit message), but who did > the checkin matters for immediate workflow reasons (i.e. who is > responsible for dealing with any buildbot breakage, objections on > python-dev, objections on the tracker, etc). > > One of the key aspects of having push rights is that we're the ones > that take responsibility for the state of the central repo - if we > stuff it up and break the build (either because we missed something on > review, or due to cross-platform issues), that's *our* problem, not > usually something the original patch contributor needs to worry about. Well, it doesn't hurt to keep the patch author in the loop about those -- they may know their patch best and they may even learn something new, which might make their future patches better! Of course if they *don't* know how to fix an issue (e.g. 
if it's a platform-specific thing) then they shouldn't be blamed. > We do have a guideline that says to always use the "--no-commit" flag > with "hg import" and then run the tests before committing, so that may > answer your question about whether or not it's an official policy. > (That said, I don't know if the devguide actually says that explicitly > anywhere - it's just reflected in the various workflow examples, as > well as in the mailing list discussions that helped craft those > examples) I agree with this, but I also want to make sure the author of the patch always gets proper recognition (after all that's what motivates people to contribute!). I think that their name should always be in the description if it's not in the committer field. Use your best judgment or qualifying terms for patches that are co-productions of committer and original author. -- --Guido van Rossum (python.org/~guido) From petri at digip.org Sat Nov 19 20:41:13 2011 From: petri at digip.org (Petri Lehtinen) Date: Sat, 19 Nov 2011 21:41:13 +0200 Subject: [Python-Dev] patch metadata - to use or not to use? In-Reply-To: References: Message-ID: <20111119194113.GA2071@ihaa> Nick Coghlan wrote: > Generally speaking, it's more useful for the checkin metadata to > reflect who actually did the checkin, since that's the most useful > information for the tracker and buildbot integration. At least in git, the commit metadata contains both author and committer (at least if they differ). Maybe mercurial has this too? From dirkjan at ochtman.nl Sat Nov 19 22:41:24 2011 From: dirkjan at ochtman.nl (Dirkjan Ochtman) Date: Sat, 19 Nov 2011 22:41:24 +0100 Subject: [Python-Dev] patch metadata - to use or not to use? 
In-Reply-To: <20111119194113.GA2071@ihaa> References: <20111119194113.GA2071@ihaa> Message-ID: On Sat, Nov 19, 2011 at 20:41, Petri Lehtinen wrote: >> Generally speaking, it's more useful for the checkin metadata to >> reflect who actually did the checkin, since that's the most useful >> information for the tracker and buildbot integration. > > At least in git, the commit metadata contains both author and > committer (at least if they differ). Maybe mercurial has this too? It does not. Personally, I find it more appropriate to have the original patch author in the "official" metadata, mostly because I personally find it very satisfying to see my name in the changelog on hgweb and the like. My own experience with that makes me think that it's probably helpful in engaging contributors. Cheers, Dirkjan From vinay_sajip at yahoo.co.uk Sat Nov 19 23:06:20 2011 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Sat, 19 Nov 2011 22:06:20 +0000 (UTC) Subject: [Python-Dev] =?utf-8?q?Python_3=2C_new-style_classes_and_=5F=5Fcl?= =?utf-8?b?YXNzX18=?= Message-ID: I was looking through the errors which occur when running the test suite of Django's py3k branch under Python 3, and one particular set of errors caught my eye which is unrelated to the bytes/str dance. These errors occur in some Django utility code, which supplies a SimpleLazyObject (new-style) class [1]. This implements a proxy, which is initialised using a callable. The callable returns the object to be wrapped, and it's called when needed to set up the wrapped instance. The SimpleLazyObject needs to pretend to be the class of the wrapped object, e.g. for equality tests. This pretending is done by declaring __class__ as a property in SimpleLazyObject which fetches and returns the __class__ attribute of the wrapped object. 
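[The pattern being described can be sketched as follows — a simplified
stand-in for illustration, not Django's actual SimpleLazyObject; as the
replies in this thread go on to show, the technique itself is valid on
Python 3 as well:]

```python
class SimpleProxy(object):
    """A minimal lazy proxy: instantiates the wrapped object on demand
    and impersonates its class via a __class__ property."""

    def __init__(self, factory):
        self._factory = factory
        self._wrapped = None

    def _setup(self):
        # Call the factory the first time the wrapped object is needed
        if self._wrapped is None:
            self._wrapped = self._factory()

    @property
    def __class__(self):
        # isinstance() consults __class__, so the proxy "becomes" the
        # wrapped object's class for type checks
        self._setup()
        return self._wrapped.__class__

    def __eq__(self, other):
        self._setup()
        return self._wrapped == other


p = SimpleProxy(lambda: True)
print(isinstance(p, bool))  # True
print(p == True)            # True
```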
This approach doesn't work in Python 3, however: the property named __class__ doesn't show up in the class dict of SimpleLazyObject, and moreover, there are restrictions on what you can set __class__ to - e.g. Python complains if you try and set a __class__ attribute on the instance to anything other than a new-style class. What's the simplest way in Python 3 of implementing the equivalent approach to pretending to be a different class? Any pointers appreciated. Thanks and regards, Vinay Sajip [1] http://goo.gl/1Jlbj From nad at acm.org Sat Nov 19 23:07:55 2011 From: nad at acm.org (Ned Deily) Date: Sat, 19 Nov 2011 14:07:55 -0800 Subject: [Python-Dev] patch metadata - to use or not to use? References: <20111119194113.GA2071@ihaa> Message-ID: In article , Dirkjan Ochtman wrote: > On Sat, Nov 19, 2011 at 20:41, Petri Lehtinen wrote: > >> Generally speaking, it's more useful for the checkin metadata to > >> reflect who actually did the checkin, since that's the most useful > >> information for the tracker and buildbot integration. > > At least in git, the commit metadata contains both author and > > committer (at least if they differ). Maybe mercurial has this too? > It does not. > > Personally, I find it more appropriate to have the original patch > author in the "official" metadata, mostly because I personally find it > very satisfying to see my name in the changelog on hgweb and the like. > My own experience with that makes me think that it's probably helpful > in engaging contributors. As Nick pointed out, it's important that who did the checkin is recorded for python-dev workflow reasons. Ensuring that the original patch submitter is mentioned in the commit message and, as appropriate, in any Misc/NEWS item seems to me an appropriate and sufficient way to give that recognition. The NEWS file will eventually get installed on countless systems around the world: hard to beat that! 
WRT the original commit message, a more flexible approach to applying
patches is to use "hg qimport" rather than "hg import". It is then
possible to edit the patch, make the necessary changes to Misc/NEWS,
edit the original patch commit comment using "hg qrefresh -e" and then
commit the patch with "hg qfinish".

-- 
 Ned Deily,
 nad at acm.org

From fuzzyman at voidspace.org.uk  Sat Nov 19 23:25:14 2011
From: fuzzyman at voidspace.org.uk (Michael Foord)
Date: Sat, 19 Nov 2011 22:25:14 +0000
Subject: [Python-Dev] Python 3, new-style classes and __class__
In-Reply-To:
References:
Message-ID: <4EC82CCA.7040105@voidspace.org.uk>

On 19/11/2011 22:06, Vinay Sajip wrote:
> I was looking through the errors which occur when running the test suite of
> Django's py3k branch under Python 3, and one particular set of errors caught my
> eye which is unrelated to the bytes/str dance. These errors occur in some Django
> utility code, which supplies a SimpleLazyObject (new-style) class [1]. This
> implements a proxy, which is initialised using a callable. The callable returns
> the object to be wrapped, and it's called when needed to set up the wrapped
> instance.
>
> The SimpleLazyObject needs to pretend to be the class of the wrapped object,
> e.g. for equality tests. This pretending is done by declaring __class__ as a
> property in SimpleLazyObject which fetches and returns the __class__ attribute
> of the wrapped object. This approach doesn't work in Python 3, however: the
> property named __class__ doesn't show up in the class dict of SimpleLazyObject,
> and moreover, there are restrictions on what you can set __class__ to - e.g.
> Python complains if you try and set a __class__ attribute on the instance to
> anything other than a new-style class.
>
> What's the simplest way in Python 3 of implementing the equivalent approach to
> pretending to be a different class? Any pointers appreciated.

That works fine in Python 3 (mock.Mock does it):

>>> class Foo(object):
...     @property
...     def __class__(self):
...         return int
...
>>> a = Foo()
>>> isinstance(a, int)
True
>>> a.__class__
<class 'int'>

There must be something else going on here.

All the best,

Michael Foord

> Thanks and regards,
>
> Vinay Sajip
>
> [1] http://goo.gl/1Jlbj
>
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> http://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: http://mail.python.org/mailman/options/python-dev/fuzzyman%40voidspace.org.uk

-- 
http://www.voidspace.org.uk/

May you do good and not evil
May you find forgiveness for yourself and forgive others
May you share freely, never taking more than you give.
-- the sqlite blessing
http://www.sqlite.org/different.html

From vinay_sajip at yahoo.co.uk  Sun Nov 20 00:11:38 2011
From: vinay_sajip at yahoo.co.uk (Vinay Sajip)
Date: Sat, 19 Nov 2011 23:11:38 +0000 (UTC)
Subject: [Python-Dev] Python 3, new-style classes and __class__
References: <4EC82CCA.7040105@voidspace.org.uk>
Message-ID:

Michael Foord <fuzzyman at voidspace.org.uk> writes:

> That works fine in Python 3 (mock.Mock does it):
>
> >>> class Foo(object):
> ...     @property
> ...     def __class__(self):
> ...         return int
> ...
> >>> a = Foo()
> >>> isinstance(a, int)
> True
> >>> a.__class__
> <class 'int'>
>
> There must be something else going on here.

Michael, thanks for the quick response. Okay, I'll dig in a bit further:
the definition in SimpleLazyObject is

__class__ = property(new_method_proxy(operator.attrgetter("__class__")))

so perhaps the problem is something related to the specifics of the
definition. Here's what I found in initial exploration:

--------------------------------------------------------------------------
Python 2.7.2+ (default, Oct  4 2011, 20:06:09)
[GCC 4.6.1] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> from django.utils.functional import SimpleLazyObject
>>> fake_bool = SimpleLazyObject(lambda: True)
>>> fake_bool.__class__
<type 'bool'>
>>> fake_bool.__dict__
{'_setupfunc': <function <lambda> at 0xca9ed8>, '_wrapped': True}
>>> SimpleLazyObject.__dict__
dict_proxy({
    '__module__': 'django.utils.functional',
    '__nonzero__': ,
    '__deepcopy__': ,
    '__str__': ,
    '_setup': ,
    '__class__': ,
    '__hash__': ,
    '__unicode__': ,
    '__bool__': ,
    '__eq__': ,
    '__doc__': '\n    A lazy object initialised from any function.\n\n
    Designed for compound objects of unknown type. For builtins or
    objects of\n    known type, use django.utils.functional.lazy.\n    ',
    '__init__':
})
--------------------------------------------------------------------------
Python 3.2.2 (default, Sep  5 2011, 21:17:14)
[GCC 4.6.1] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> from django.utils.functional import SimpleLazyObject
>>> fake_bool = SimpleLazyObject(lambda : True)
>>> fake_bool.__class__
<class 'django.utils.functional.SimpleLazyObject'>
>>> fake_bool.__dict__
{
    '_setupfunc': <function <lambda> at 0x1c36ea8>,
    '_wrapped': <object object at 0x...>
}
>>> SimpleLazyObject.__dict__
dict_proxy({
    '__module__': 'django.utils.functional',
    '__nonzero__': ,
    '__deepcopy__': ,
    '__str__': ,
    '_setup': ,
    '__hash__': ,
    '__unicode__': ,
    '__bool__': ,
    '__eq__': ,
    '__doc__': '\n    A lazy object initialised from any function.\n\n
    Designed for compound objects of unknown type. For builtins or
    objects of\n    known type, use django.utils.functional.lazy.\n    ',
    '__init__':
})
--------------------------------------------------------------------------

In Python 3, there's no __class__ property as there is in Python 2,
the fake_bool's type isn't bool, and the callable to set up the wrapped
object never gets called (which is why _wrapped is not set to True, but to
an anonymous object - this is set in SimpleLazyObject.__init__).

Puzzling!
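[The class-body interaction with zero-argument super() that the thread goes
on to identify (issue 12370) and its workaround can be sketched as follows —
a simplified illustration, not Django's actual fix; on current Pythons the
bug is fixed and either super() spelling works:]

```python
_super = super  # module-level alias: the workaround suggested in issue 12370

class Wrapper(object):
    def __init__(self, wrapped):
        # Using the alias with the explicit two-argument form avoids
        # compiling a zero-argument super() call, and therefore avoids the
        # implicit __class__ closure cell that (before the fix) interfered
        # with a __class__ defined in the class body.
        _super(Wrapper, self).__init__()
        self._wrapped = wrapped

    @property
    def __class__(self):
        return self._wrapped.__class__


print(isinstance(Wrapper(3), int))  # True: the property is consulted
```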
Regards, Vinay Sajip From fuzzyman at voidspace.org.uk Sun Nov 20 03:13:27 2011 From: fuzzyman at voidspace.org.uk (Michael Foord) Date: Sun, 20 Nov 2011 02:13:27 +0000 Subject: [Python-Dev] Python 3, new-style classes and __class__ In-Reply-To: References: <4EC82CCA.7040105@voidspace.org.uk> Message-ID: On 19 November 2011 23:11, Vinay Sajip wrote: > Michael Foord voidspace.org.uk> writes: > > > That works fine in Python 3 (mock.Mock does it): > > > > >>> class Foo(object): > > ... @property > > ... def __class__(self): > > ... return int > > ... > > >>> a = Foo() > > >>> isinstance(a, int) > > True > > >>> a.__class__ > > > > > > There must be something else going on here. > > > > Michael, thanks for the quick response. Okay, I'll dig in a bit further: > the > definition in SimpleLazyObject is > > __class__ = property(new_method_proxy(operator.attrgetter("__class__"))) > > so perhaps the problem is something related to the specifics of the > definition. > Here's what I found in initial exploration: > > -------------------------------------------------------------------------- > Python 2.7.2+ (default, Oct 4 2011, 20:06:09) > [GCC 4.6.1] on linux2 > Type "help", "copyright", "credits" or "license" for more information. > >>> from django.utils.functional import SimpleLazyObject > >>> fake_bool = SimpleLazyObject(lambda: True) > >>> fake_bool.__class__ > > >>> fake_bool.__dict__ > {'_setupfunc': at 0xca9ed8>, '_wrapped': True} > >>> SimpleLazyObject.__dict__ > dict_proxy({ > '__module__': 'django.utils.functional', > '__nonzero__': , > '__deepcopy__': , > '__str__': , > '_setup': , > '__class__': , > '__hash__': , > '__unicode__': , > '__bool__': , > '__eq__': , > '__doc__': '\n A lazy object initialised from any function.\n\n > Designed for compound objects of unknown type. 
For builtins or > objects of\n known type, use django.utils.functional.lazy.\n ', > '__init__': > }) > -------------------------------------------------------------------------- > Python 3.2.2 (default, Sep 5 2011, 21:17:14) > [GCC 4.6.1] on linux2 > Type "help", "copyright", "credits" or "license" for more information. > >>> from django.utils.functional import SimpleLazyObject > >>> fake_bool = SimpleLazyObject(lambda : True) > >>> fake_bool.__class__ > > >>> fake_bool.__dict__ > { > '_setupfunc': at 0x1c36ea8>, > '_wrapped': > } > >>> SimpleLazyObject.__dict__ > dict_proxy({ > '__module__': 'django.utils.functional', > '__nonzero__': , > '__deepcopy__': , > '__str__': , > '_setup': , > '__hash__': , > '__unicode__': , > '__bool__': , > '__eq__': , > '__doc__': '\n A lazy object initialised from any function.\n\n > Designed for compound objects of unknown type. For builtins or > objects of\n known type, use django.utils.functional.lazy.\n ', > '__init__': > }) > -------------------------------------------------------------------------- > > In Python 3, there's no __class__ property as there is in Python 2, > the fake_bool's type isn't bool, and the callable to set up the wrapped > object never gets called (which is why _wrapped is not set to True, but to > an anonymous object - this is set in SimpleLazyObject.__init__). > > The Python compiler can do strange things with assignment to __class__ in the presence of super. This issue has now been fixed, but it may be what is biting you: http://bugs.python.org/issue12370 If this *is* the problem, then see the workaround suggested in the issue. (alias super to _super in the module scope and use the old style super calling convention.) Michael > Puzzling! 
> > Regards, > > Vinay Sajip > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > http://mail.python.org/mailman/options/python-dev/fuzzyman%40voidspace.org.uk > > -- http://www.voidspace.org.uk/ May you do good and not evil May you find forgiveness for yourself and forgive others May you share freely, never taking more than you give. -- the sqlite blessing http://www.sqlite.org/different.html From vinay_sajip at yahoo.co.uk Sun Nov 20 10:34:04 2011 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Sun, 20 Nov 2011 09:34:04 +0000 (UTC) Subject: [Python-Dev] =?utf-8?q?Python_3=2C_new-style_classes_and_=5F=5Fcl?= =?utf-8?b?YXNzX18=?= References: <4EC82CCA.7040105@voidspace.org.uk> Message-ID: Michael Foord voidspace.org.uk> writes: > The Python compiler can do strange things with assignment to __class__ in the > presence of super. This issue has now been fixed, but it may be what is > biting you: > > http://bugs.python.org/issue12370 > > If this *is* the problem, then see the workaround suggested in the issue. > Yes, that workaround worked. Good catch - thanks! Regards, Vinay Sajip From guido at python.org Sun Nov 20 17:35:26 2011 From: guido at python.org (Guido van Rossum) Date: Sun, 20 Nov 2011 08:35:26 -0800 Subject: [Python-Dev] Python 3, new-style classes and __class__ In-Reply-To: References: <4EC82CCA.7040105@voidspace.org.uk> Message-ID: Um, what?! __class__ *already* has a special meaning. Those examples violate that meaning. No wonder they get garbage results. The correct way to override isinstance is explained here: http://www.python.org/dev/peps/pep-3119/#overloading-isinstance-and-issubclass .
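The PEP 3119 route referred to here is class-level: the class being tested against (via `ABCMeta` and hooks like `__subclasshook__`) decides what counts as an instance. A minimal sketch of that mechanism — illustrative only, not taken from any of the code discussed in this thread; the `Sized` class below simply mimics the standard library ABC of the same name:

```python
from abc import ABCMeta

class Sized(metaclass=ABCMeta):
    @classmethod
    def __subclasshook__(cls, C):
        # Class-level duck typing: any class defining __len__ anywhere
        # in its MRO counts as a virtual subclass of Sized.
        if cls is Sized:
            if any('__len__' in B.__dict__ for B in C.__mro__):
                return True
        return NotImplemented

print(isinstance([1, 2, 3], Sized))   # True: list defines __len__
print(isinstance(42, Sized))          # False: int does not
```

The hook is consulted by `ABCMeta.__subclasscheck__`, so the policy belongs to the ABC itself and applies uniformly to every instance of a given class.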
--Guido On Sat, Nov 19, 2011 at 6:13 PM, Michael Foord wrote: > > > On 19 November 2011 23:11, Vinay Sajip wrote: >> >> Michael Foord voidspace.org.uk> writes: >> >> > That works fine in Python 3 (mock.Mock does it): >> > >> > ?>>> class Foo(object): >> > ... ?@property >> > ... ?def __class__(self): >> > ... ? return int >> > ... >> > ?>>> a = Foo() >> > ?>>> isinstance(a, int) >> > True >> > ?>>> a.__class__ >> > >> > >> > There must be something else going on here. >> > >> >> Michael, thanks for the quick response. Okay, I'll dig in a bit further: >> the >> definition in SimpleLazyObject is >> >> __class__ = property(new_method_proxy(operator.attrgetter("__class__"))) >> >> so perhaps the problem is something related to the specifics of the >> definition. >> Here's what I found in initial exploration: >> >> -------------------------------------------------------------------------- >> Python 2.7.2+ (default, Oct 4 2011, 20:06:09) >> [GCC 4.6.1] on linux2 >> Type "help", "copyright", "credits" or "license" for more information. >> >>> from django.utils.functional import SimpleLazyObject >> >>> fake_bool = SimpleLazyObject(lambda: True) >> >>> fake_bool.__class__ >> >> >>> fake_bool.__dict__ >> {'_setupfunc': at 0xca9ed8>, '_wrapped': True} >> >>> SimpleLazyObject.__dict__ >> dict_proxy({ >> ? ?'__module__': 'django.utils.functional', >> ? ?'__nonzero__': , >> ? ?'__deepcopy__': , >> ? ?'__str__': , >> ? ?'_setup': , >> ? ?'__class__': , >> ? ?'__hash__': , >> ? ?'__unicode__': , >> ? ?'__bool__': , >> ? ?'__eq__': , >> ? ?'__doc__': '\n A lazy object initialised from any function.\n\n >> ? ? ? ?Designed for compound objects of unknown type. For builtins or >> ? ? ? ?objects of\n known type, use django.utils.functional.lazy.\n ', >> ? 
?'__init__': >> }) >> -------------------------------------------------------------------------- >> Python 3.2.2 (default, Sep 5 2011, 21:17:14) >> [GCC 4.6.1] on linux2 >> Type "help", "copyright", "credits" or "license" for more information. >> >>> from django.utils.functional import SimpleLazyObject >> >>> fake_bool = SimpleLazyObject(lambda : True) >> >>> fake_bool.__class__ >> >> >>> fake_bool.__dict__ >> { >> ? ?'_setupfunc': at 0x1c36ea8>, >> ? ?'_wrapped': >> } >> >>> SimpleLazyObject.__dict__ >> dict_proxy({ >> ? ?'__module__': 'django.utils.functional', >> ? ?'__nonzero__': , >> ? ?'__deepcopy__': , >> ? ?'__str__': , >> ? ?'_setup': , >> ? ?'__hash__': , >> ? ?'__unicode__': , >> ? ?'__bool__': , >> ? ?'__eq__': , >> ? ?'__doc__': '\n A lazy object initialised from any function.\n\n >> ? ? ? ?Designed for compound objects of unknown type. For builtins or >> ? ? ? ?objects of\n known type, use django.utils.functional.lazy.\n ', >> ? ?'__init__': >> }) >> -------------------------------------------------------------------------- >> >> In Python 3, there's no __class__ property as there is in Python 2, >> the fake_bool's type isn't bool, and the callable to set up the wrapped >> object never gets called (which is why _wrapped is not set to True, but to >> an anonymous object - this is set in SimpleLazyObject.__init__). >> > > The Python compiler can do strange things with assignment to __class__ in > the presence of super. This issue has now been fixed, but it may be what is > biting you: > > ??? http://bugs.python.org/issue12370 > > If this *is* the problem, then see the workaround suggested in the issue. > (alias super to _super in the module scope and use the old style super > calling convention.) > > Michael > > >> >> Puzzling! 
>> >> Regards, >> >> Vinay Sajip >> >> _______________________________________________ >> Python-Dev mailing list >> Python-Dev at python.org >> http://mail.python.org/mailman/listinfo/python-dev >> Unsubscribe: >> http://mail.python.org/mailman/options/python-dev/fuzzyman%40voidspace.org.uk >> > > > > -- > > http://www.voidspace.org.uk/ > > May you do good and not evil > May you find forgiveness for yourself and forgive others > > May you share freely, never taking more than you give. > -- the sqlite blessing http://www.sqlite.org/different.html > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > http://mail.python.org/mailman/options/python-dev/guido%40python.org > > -- --Guido van Rossum (python.org/~guido) From fuzzyman at voidspace.org.uk Sun Nov 20 19:44:25 2011 From: fuzzyman at voidspace.org.uk (Michael Foord) Date: Sun, 20 Nov 2011 18:44:25 +0000 Subject: [Python-Dev] Python 3, new-style classes and __class__ In-Reply-To: References: <4EC82CCA.7040105@voidspace.org.uk> Message-ID: <2F92C28C-B2E8-4329-AB85-BF46939D766D@voidspace.org.uk> On 20 Nov 2011, at 16:35, Guido van Rossum wrote: > Um, what?! __class__ *already* has a special meaning. Those examples > violate that meaning. No wonder they get garbage results. > > The correct way to override isinstance is explained here: > http://www.python.org/dev/peps/pep-3119/#overloading-isinstance-and-issubclass > . > Proxy classes have been using __class__ as a descriptor for this purpose for years before ABCs were introduced. This worked fine up until Python 3 where the compiler magic broke it when super is used. That is now fixed anyway. If I understand correctly, ABCs are great for allowing classes of objects to pass isinstance checks (etc) - what proxy, lazy and mock objects need is to be able to allow individual instances to pass different isinstance checks. 
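The per-instance/per-class split described here can be made concrete. `__instancecheck__` lives on the class being tested against (via its metaclass), so it cannot be bolted onto a builtin like `int`; a per-instance `__class__` descriptor, by contrast, lets each instance impersonate a different class. A hypothetical sketch — `Impersonator` is invented for illustration:

```python
class Impersonator:
    """Each instance pretends, via __class__, to be a different class."""
    def __init__(self, pretend_cls):
        self._pretend = pretend_cls

    @property
    def __class__(self):
        # isinstance() falls back to __class__ when type() doesn't match.
        return self._pretend

a = Impersonator(int)
b = Impersonator(str)
print(isinstance(a, int))   # True
print(isinstance(b, str))   # True
print(isinstance(a, str))   # False: the check varies per instance
print(type(a) is type(b))   # True: both are really Impersonator
```

No class-level hook can express this, since `a` and `b` share a type yet answer different `isinstance` checks.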
All the best, Michael Foord > --Guido > > On Sat, Nov 19, 2011 at 6:13 PM, Michael Foord > wrote: >> >> >> On 19 November 2011 23:11, Vinay Sajip wrote: >>> >>> Michael Foord voidspace.org.uk> writes: >>> >>>> That works fine in Python 3 (mock.Mock does it): >>>> >>>> >>> class Foo(object): >>>> ... @property >>>> ... def __class__(self): >>>> ... return int >>>> ... >>>> >>> a = Foo() >>>> >>> isinstance(a, int) >>>> True >>>> >>> a.__class__ >>>> >>>> >>>> There must be something else going on here. >>>> >>> >>> Michael, thanks for the quick response. Okay, I'll dig in a bit further: >>> the >>> definition in SimpleLazyObject is >>> >>> __class__ = property(new_method_proxy(operator.attrgetter("__class__"))) >>> >>> so perhaps the problem is something related to the specifics of the >>> definition. >>> Here's what I found in initial exploration: >>> >>> -------------------------------------------------------------------------- >>> Python 2.7.2+ (default, Oct 4 2011, 20:06:09) >>> [GCC 4.6.1] on linux2 >>> Type "help", "copyright", "credits" or "license" for more information. >>>>>> from django.utils.functional import SimpleLazyObject >>>>>> fake_bool = SimpleLazyObject(lambda: True) >>>>>> fake_bool.__class__ >>> >>>>>> fake_bool.__dict__ >>> {'_setupfunc': at 0xca9ed8>, '_wrapped': True} >>>>>> SimpleLazyObject.__dict__ >>> dict_proxy({ >>> '__module__': 'django.utils.functional', >>> '__nonzero__': , >>> '__deepcopy__': , >>> '__str__': , >>> '_setup': , >>> '__class__': , >>> '__hash__': , >>> '__unicode__': , >>> '__bool__': , >>> '__eq__': , >>> '__doc__': '\n A lazy object initialised from any function.\n\n >>> Designed for compound objects of unknown type. 
For builtins or >>> objects of\n known type, use django.utils.functional.lazy.\n ', >>> '__init__': >>> }) >>> -------------------------------------------------------------------------- >>> Python 3.2.2 (default, Sep 5 2011, 21:17:14) >>> [GCC 4.6.1] on linux2 >>> Type "help", "copyright", "credits" or "license" for more information. >>>>>> from django.utils.functional import SimpleLazyObject >>>>>> fake_bool = SimpleLazyObject(lambda : True) >>>>>> fake_bool.__class__ >>> >>>>>> fake_bool.__dict__ >>> { >>> '_setupfunc': at 0x1c36ea8>, >>> '_wrapped': >>> } >>>>>> SimpleLazyObject.__dict__ >>> dict_proxy({ >>> '__module__': 'django.utils.functional', >>> '__nonzero__': , >>> '__deepcopy__': , >>> '__str__': , >>> '_setup': , >>> '__hash__': , >>> '__unicode__': , >>> '__bool__': , >>> '__eq__': , >>> '__doc__': '\n A lazy object initialised from any function.\n\n >>> Designed for compound objects of unknown type. For builtins or >>> objects of\n known type, use django.utils.functional.lazy.\n ', >>> '__init__': >>> }) >>> -------------------------------------------------------------------------- >>> >>> In Python 3, there's no __class__ property as there is in Python 2, >>> the fake_bool's type isn't bool, and the callable to set up the wrapped >>> object never gets called (which is why _wrapped is not set to True, but to >>> an anonymous object - this is set in SimpleLazyObject.__init__). >>> >> >> The Python compiler can do strange things with assignment to __class__ in >> the presence of super. This issue has now been fixed, but it may be what is >> biting you: >> >> http://bugs.python.org/issue12370 >> >> If this *is* the problem, then see the workaround suggested in the issue. >> (alias super to _super in the module scope and use the old style super >> calling convention.) >> >> Michael >> >> >>> >>> Puzzling! 
>>> >>> Regards, >>> >>> Vinay Sajip >>> >>> _______________________________________________ >>> Python-Dev mailing list >>> Python-Dev at python.org >>> http://mail.python.org/mailman/listinfo/python-dev >>> Unsubscribe: >>> http://mail.python.org/mailman/options/python-dev/fuzzyman%40voidspace.org.uk >>> >> >> >> >> -- >> >> http://www.voidspace.org.uk/ >> >> May you do good and not evil >> May you find forgiveness for yourself and forgive others >> >> May you share freely, never taking more than you give. >> -- the sqlite blessing http://www.sqlite.org/different.html >> >> _______________________________________________ >> Python-Dev mailing list >> Python-Dev at python.org >> http://mail.python.org/mailman/listinfo/python-dev >> Unsubscribe: >> http://mail.python.org/mailman/options/python-dev/guido%40python.org >> >> > > > > -- > --Guido van Rossum (python.org/~guido) > -- http://www.voidspace.org.uk/ May you do good and not evil May you find forgiveness for yourself and forgive others May you share freely, never taking more than you give. -- the sqlite blessing http://www.sqlite.org/different.html From guido at python.org Sun Nov 20 22:41:36 2011 From: guido at python.org (Guido van Rossum) Date: Sun, 20 Nov 2011 13:41:36 -0800 Subject: [Python-Dev] Python 3, new-style classes and __class__ In-Reply-To: <2F92C28C-B2E8-4329-AB85-BF46939D766D@voidspace.org.uk> References: <4EC82CCA.7040105@voidspace.org.uk> <2F92C28C-B2E8-4329-AB85-BF46939D766D@voidspace.org.uk> Message-ID: On Sun, Nov 20, 2011 at 10:44 AM, Michael Foord wrote: > > On 20 Nov 2011, at 16:35, Guido van Rossum wrote: > >> Um, what?! __class__ *already* has a special meaning. Those examples >> violate that meaning. No wonder they get garbage results. >> >> The correct way to override isinstance is explained here: >> http://www.python.org/dev/peps/pep-3119/#overloading-isinstance-and-issubclass >> . 
>> > > > Proxy classes have been using __class__ as a descriptor for this purpose for years before ABCs were introduced. This worked fine up until Python 3 where the compiler magic broke it when super is used. That is now fixed anyway. Hm, okay. Though it's disheartening that it took three releases of 3.x to figure this out. And there was a PEP even! > If I understand correctly, ABCs are great for allowing classes of objects to pass isinstance checks (etc) - what proxy, lazy and mock objects need is to be able to allow individual instances to pass different isinstance checks. Ah, oops. Yes, __instancecheck__ is for the class to override isinstance(inst, cls); for the *instance* to override apparently you'll need to mess with __class__. I guess my request at this point would be to replace '@__class__' with some other *legal* __identifier__ that doesn't clash with existing use -- I don't like the arbitrary use of @ here. --Guido > All the best, > > Michael Foord > >> --Guido >> >> On Sat, Nov 19, 2011 at 6:13 PM, Michael Foord >> wrote: >>> >>> >>> On 19 November 2011 23:11, Vinay Sajip wrote: >>>> >>>> Michael Foord voidspace.org.uk> writes: >>>> >>>>> That works fine in Python 3 (mock.Mock does it): >>>>> >>>>> ?>>> class Foo(object): >>>>> ... ?@property >>>>> ... ?def __class__(self): >>>>> ... ? return int >>>>> ... >>>>> ?>>> a = Foo() >>>>> ?>>> isinstance(a, int) >>>>> True >>>>> ?>>> a.__class__ >>>>> >>>>> >>>>> There must be something else going on here. >>>>> >>>> >>>> Michael, thanks for the quick response. Okay, I'll dig in a bit further: >>>> the >>>> definition in SimpleLazyObject is >>>> >>>> __class__ = property(new_method_proxy(operator.attrgetter("__class__"))) >>>> >>>> so perhaps the problem is something related to the specifics of the >>>> definition. 
>>>> Here's what I found in initial exploration: >>>> >>>> -------------------------------------------------------------------------- >>>> Python 2.7.2+ (default, Oct 4 2011, 20:06:09) >>>> [GCC 4.6.1] on linux2 >>>> Type "help", "copyright", "credits" or "license" for more information. >>>>>>> from django.utils.functional import SimpleLazyObject >>>>>>> fake_bool = SimpleLazyObject(lambda: True) >>>>>>> fake_bool.__class__ >>>> >>>>>>> fake_bool.__dict__ >>>> {'_setupfunc': at 0xca9ed8>, '_wrapped': True} >>>>>>> SimpleLazyObject.__dict__ >>>> dict_proxy({ >>>> ? ?'__module__': 'django.utils.functional', >>>> ? ?'__nonzero__': , >>>> ? ?'__deepcopy__': , >>>> ? ?'__str__': , >>>> ? ?'_setup': , >>>> ? ?'__class__': , >>>> ? ?'__hash__': , >>>> ? ?'__unicode__': , >>>> ? ?'__bool__': , >>>> ? ?'__eq__': , >>>> ? ?'__doc__': '\n A lazy object initialised from any function.\n\n >>>> ? ? ? ?Designed for compound objects of unknown type. For builtins or >>>> ? ? ? ?objects of\n known type, use django.utils.functional.lazy.\n ', >>>> ? ?'__init__': >>>> }) >>>> -------------------------------------------------------------------------- >>>> Python 3.2.2 (default, Sep 5 2011, 21:17:14) >>>> [GCC 4.6.1] on linux2 >>>> Type "help", "copyright", "credits" or "license" for more information. >>>>>>> from django.utils.functional import SimpleLazyObject >>>>>>> fake_bool = SimpleLazyObject(lambda : True) >>>>>>> fake_bool.__class__ >>>> >>>>>>> fake_bool.__dict__ >>>> { >>>> ? ?'_setupfunc': at 0x1c36ea8>, >>>> ? ?'_wrapped': >>>> } >>>>>>> SimpleLazyObject.__dict__ >>>> dict_proxy({ >>>> ? ?'__module__': 'django.utils.functional', >>>> ? ?'__nonzero__': , >>>> ? ?'__deepcopy__': , >>>> ? ?'__str__': , >>>> ? ?'_setup': , >>>> ? ?'__hash__': , >>>> ? ?'__unicode__': , >>>> ? ?'__bool__': , >>>> ? ?'__eq__': , >>>> ? ?'__doc__': '\n A lazy object initialised from any function.\n\n >>>> ? ? ? ?Designed for compound objects of unknown type. For builtins or >>>> ? ? ? 
?objects of\n known type, use django.utils.functional.lazy.\n ', >>>> ? ?'__init__': >>>> }) >>>> -------------------------------------------------------------------------- >>>> >>>> In Python 3, there's no __class__ property as there is in Python 2, >>>> the fake_bool's type isn't bool, and the callable to set up the wrapped >>>> object never gets called (which is why _wrapped is not set to True, but to >>>> an anonymous object - this is set in SimpleLazyObject.__init__). >>>> >>> >>> The Python compiler can do strange things with assignment to __class__ in >>> the presence of super. This issue has now been fixed, but it may be what is >>> biting you: >>> >>> ? ? http://bugs.python.org/issue12370 >>> >>> If this *is* the problem, then see the workaround suggested in the issue. >>> (alias super to _super in the module scope and use the old style super >>> calling convention.) >>> >>> Michael >>> >>> >>>> >>>> Puzzling! >>>> >>>> Regards, >>>> >>>> Vinay Sajip >>>> >>>> _______________________________________________ >>>> Python-Dev mailing list >>>> Python-Dev at python.org >>>> http://mail.python.org/mailman/listinfo/python-dev >>>> Unsubscribe: >>>> http://mail.python.org/mailman/options/python-dev/fuzzyman%40voidspace.org.uk >>>> >>> >>> >>> >>> -- >>> >>> http://www.voidspace.org.uk/ >>> >>> May you do good and not evil >>> May you find forgiveness for yourself and forgive others >>> >>> May you share freely, never taking more than you give. 
>>> -- the sqlite blessing http://www.sqlite.org/different.html >>> >>> _______________________________________________ >>> Python-Dev mailing list >>> Python-Dev at python.org >>> http://mail.python.org/mailman/listinfo/python-dev >>> Unsubscribe: >>> http://mail.python.org/mailman/options/python-dev/guido%40python.org >>> >>> >> >> >> >> -- >> --Guido van Rossum (python.org/~guido) >> > > > -- > http://www.voidspace.org.uk/ > > > May you do good and not evil > May you find forgiveness for yourself and forgive others > May you share freely, never taking more than you give. > -- the sqlite blessing > http://www.sqlite.org/different.html > > > > > > -- --Guido van Rossum (python.org/~guido) From fijall at gmail.com Mon Nov 21 11:36:33 2011 From: fijall at gmail.com (Maciej Fijalkowski) Date: Mon, 21 Nov 2011 12:36:33 +0200 Subject: [Python-Dev] PyPy 1.7 - widening the sweet spot In-Reply-To: References: Message-ID:

==================================
PyPy 1.7 - widening the sweet spot
==================================

We're pleased to announce the 1.7 release of PyPy. As has become a habit, this release brings a lot of bugfixes and performance improvements over the 1.6 release. However, unlike the previous releases, the focus has been on widening the "sweet spot" of PyPy. That is, classes of Python code that PyPy can greatly speed up should be vastly improved with this release.

You can download the 1.7 release here:

    http://pypy.org/download.html

What is PyPy?
=============

PyPy is a very compliant Python interpreter, almost a drop-in replacement for CPython 2.7. It's fast (see the `pypy 1.7 and cpython 2.7.1`_ performance comparison) due to its integrated tracing JIT compiler.

This release supports x86 machines running Linux 32/64, Mac OS X 32/64 or Windows 32. Windows 64 work is ongoing, but not yet natively supported.

The main topic of this release is widening the range of code which PyPy can greatly speed up.
On average on our benchmark suite, PyPy 1.7 is around **30%** faster than PyPy 1.6 and up to **20 times** faster on some benchmarks.

.. _`pypy 1.7 and cpython 2.7.1`: http://speed.pypy.org

Highlights
==========

* Numerous performance improvements. There are too many Python constructs that now behave faster to list them all.

* Bugfixes and compatibility fixes with CPython.

* Windows fixes.

* PyPy now comes with stackless features enabled by default. However, any loop using stackless features will interrupt the JIT for now, so there is no real performance improvement for stackless-based programs yet. Contact pypy-dev for info on how to help remove this restriction.

* The NumPy effort in PyPy has been renamed numpypy. In order to try it, simply write::

      import numpypy as numpy

  at the beginning of your program. There has been huge progress on numpy in PyPy since 1.6, the main feature being the implementation of dtypes.

* The JSON encoder (but not the decoder) has been replaced with a new one. It is written in pure Python, but is known to outperform CPython's C extension by up to **2 times** in some cases. It's about **20 times** faster than the one we had in 1.6.

* The memory footprint of some of our RPython modules has been drastically improved. This should benefit any application using, for example, cryptography, like tornado.

* There was some progress in exposing even more of the CPython C API via cpyext.

Things that didn't make it, expect in 1.8 soon
==============================================

There is ongoing work which, while it didn't make it into this release, is probably worth mentioning here. This is what you should probably expect in 1.8 some time soon:

* Specialized list implementation. There is a branch that implements lists of integers/floats/strings as compactly as array.array. This should drastically improve the performance/memory impact of some applications.

* The NumPy effort is progressing, with multi-dimensional arrays coming soon.
* There are two brand new JIT assembler backends, notably for the PowerPC and ARM processors.

Fundraising
===========

It's perhaps worth mentioning that we're running fundraising campaigns for the NumPy effort in PyPy and for Python 3 in PyPy. If you want to see either of those happen faster, we urge you to donate to the `numpy proposal`_ or the `py3k proposal`_. If you want PyPy to progress but trust us with the general direction, you can always donate to the `general pot`_.

.. _`numpy proposal`: http://pypy.org/numpydonate.html
.. _`py3k proposal`: http://pypy.org/py3donate.html
.. _`general pot`: http://pypy.org

From victor.stinner at haypocalc.com Mon Nov 21 12:53:17 2011 From: victor.stinner at haypocalc.com (Victor Stinner) Date: Mon, 21 Nov 2011 12:53:17 +0100 Subject: [Python-Dev] Chose a name for a "get unicode as wide character, borrowed reference" function Message-ID: <201111211253.17106.victor.stinner@haypocalc.com> Hi, With PEP 393, the Py_UNICODE type is now deprecated and scheduled for removal in Python 4. The PyUnicode_AsUnicode() and PyUnicode_AsUnicodeAndSize() functions are still commonly used on Windows to get the string as wchar_t* without having to care about freeing the memory: it's a borrowed reference (pointer). I would like to add a new PyUnicode_AsWideChar() function which would return the borrowed reference, exactly as PyUnicode_AsUnicode(). The problem is that "PyUnicode_AsWideChar" already exists in Python 3.2, as PyUnicode_AsWideCharString. Do you have a suggestion for a name for such a function? PyUnicode_AsWideCharBorrowed? PyUnicode_AsFastWideChar? PyUnicode_ToWideChar? PyUnicode_AsWchar_t?
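The owned-versus-borrowed distinction raised here is about C memory management, but the ownership idea has a rough Python-level analogy. This is purely illustrative — `UString` and its methods are invented names, not the C API under discussion: an "owned" result is a fresh buffer the caller is responsible for, while a "borrowed" one is cached on the object and lives only as long as the object does.

```python
class UString:
    """Illustration only: contrast an 'owned' result with a 'borrowed' one."""

    def __init__(self, text):
        self._text = text
        self._cache = None  # lazily built, object-owned buffer

    def as_widechar_string(self):
        # 'Owned' result: a fresh copy on every call; the caller is
        # responsible for it (the analogue of having to free() it).
        return list(self._text)

    def as_widechar(self):
        # 'Borrowed' result: built once, kept on the object, shared by
        # every caller, and released only when the object itself dies.
        if self._cache is None:
            self._cache = list(self._text)
        return self._cache

s = UString("abc")
print(s.as_widechar() is s.as_widechar())                # True: same buffer
print(s.as_widechar_string() is s.as_widechar_string())  # False: fresh copies
```

The trade-off mirrors the C discussion: the borrowed form is cheaper and needs no explicit cleanup, but it ties the result's lifetime (and representation) to the object's internals.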
Victor From solipsis at pitrou.net Mon Nov 21 16:04:06 2011 From: solipsis at pitrou.net (Antoine Pitrou) Date: Mon, 21 Nov 2011 16:04:06 +0100 Subject: [Python-Dev] Chose a name for a "get unicode as wide character, borrowed reference" function References: <201111211253.17106.victor.stinner@haypocalc.com> Message-ID: <20111121160406.46b9be3e@pitrou.net> On Mon, 21 Nov 2011 12:53:17 +0100 Victor Stinner wrote: > > I would like to add a new PyUnicode_AsWideChar() function which would return > the borrowed reference, exactly as PyUnicode_AsUnicode(). The problem is that > "PyUnicode_AsWideChar" already exists in Python 3.2, as > PyUnicode_AsWideCharString. This is not very clear. You are proposing to add a function which already exists, except that you have to free the pointer yourself? I don't think that's a good idea, the API is already large enough. Regards Antoine. From victor.stinner at haypocalc.com Mon Nov 21 16:53:10 2011 From: victor.stinner at haypocalc.com (Victor Stinner) Date: Mon, 21 Nov 2011 16:53:10 +0100 Subject: [Python-Dev] Chose a name for a "get unicode as wide character, borrowed reference" function In-Reply-To: <20111121160406.46b9be3e@pitrou.net> References: <201111211253.17106.victor.stinner@haypocalc.com> <20111121160406.46b9be3e@pitrou.net> Message-ID: <1479578.LIbA5bexKU@dsk000552> Le Lundi 21 Novembre 2011 16:04:06 Antoine Pitrou a écrit : > On Mon, 21 Nov 2011 12:53:17 +0100 > > Victor Stinner wrote: > > I would like to add a new PyUnicode_AsWideChar() function which would > > return the borrowed reference, exactly as PyUnicode_AsUnicode(). The > > problem is that "PyUnicode_AsWideChar" already exists in Python 3.2, as > > PyUnicode_AsWideCharString. > > This is not very clear. You are proposing to add a function which > already exists, except that you have to free the pointer yourself? > I don't think that's a good idea, the API is already large enough.
I want to rename PyUnicode_AsUnicode() and change its result type (Py_UNICODE* => wchar_t*). The result will be a "borrowed reference", i.e. you don't have to free the memory, it will be done when the Unicode string is destroyed (by Py_DECREF). The problem is that the Py_UNICODE type is now deprecated. Victor From solipsis at pitrou.net Mon Nov 21 16:55:05 2011 From: solipsis at pitrou.net (Antoine Pitrou) Date: Mon, 21 Nov 2011 16:55:05 +0100 Subject: [Python-Dev] Chose a name for a "get unicode as wide character, borrowed reference" function References: <201111211253.17106.victor.stinner@haypocalc.com> <20111121160406.46b9be3e@pitrou.net> <1479578.LIbA5bexKU@dsk000552> Message-ID: <20111121165505.69d10918@pitrou.net> On Mon, 21 Nov 2011 16:53:10 +0100 Victor Stinner wrote: > Le Lundi 21 Novembre 2011 16:04:06 Antoine Pitrou a écrit : > > On Mon, 21 Nov 2011 12:53:17 +0100 > > > > Victor Stinner wrote: > > > I would like to add a new PyUnicode_AsWideChar() function which would > > > return the borrowed reference, exactly as PyUnicode_AsUnicode(). The > > > problem is that "PyUnicode_AsWideChar" already exists in Python 3.2, as > > > PyUnicode_AsWideCharString. > > > > This is not very clear. You are proposing to add a function which > > already exists, except that you have to free the pointer yourself? > > I don't think that's a good idea, the API is already large enough. > > I want to rename PyUnicode_AsUnicode() and change its result type (Py_UNICODE* > => wchar_t*). The result will be a "borrowed reference", i.e. you don't have to > free the memory, it will be done when the Unicode string is destroyed (by > Py_DECREF). But this is almost the same as PyUnicode_AsWideCharString, right? From merwok at netwok.org Mon Nov 21 17:36:57 2011 From: merwok at netwok.org (=?UTF-8?B?w4lyaWMgQXJhdWpv?=) Date: Mon, 21 Nov 2011 17:36:57 +0100 Subject: [Python-Dev] patch metadata - to use or not to use?
In-Reply-To: References: Message-ID: <4ECA7E29.1050101@netwok.org> Hi, > I recently got some patches accepted for inclusion in 3.3, and each time, > the patch metadata (such as my name and my commit comment) were stripped by > applying the patch manually, instead of hg importing it. This makes it > clear in the history who eventually reviewed and applied the patch, but > less visible who wrote it (except for the entry in Misc/NEWS). We had a similar discussion on python-committers a while back, and the gist of the replies was that there is no such thing as a patch ready for commit, i.e. the core dev always edits something. As Antoine said, we've switched to Mercurial to ease contributions, but we still work with patches, not directly with changesets. That said, I remember that once I got a patch that was complete, and I just used hg import and hg push since it was so easy. I share the opinion that putting contributors' names in the spotlight is a good way to encourage them. Cheers From victor.stinner at haypocalc.com Mon Nov 21 18:02:36 2011 From: victor.stinner at haypocalc.com (Victor Stinner) Date: Mon, 21 Nov 2011 18:02:36 +0100 Subject: [Python-Dev] Chose a name for a "get unicode as wide character, borrowed reference" function In-Reply-To: <20111121165505.69d10918@pitrou.net> References: <201111211253.17106.victor.stinner@haypocalc.com> <1479578.LIbA5bexKU@dsk000552> <20111121165505.69d10918@pitrou.net> Message-ID: <10767768.kLA9kAqqlv@dsk000552> Le Lundi 21 Novembre 2011 16:55:05 Antoine Pitrou a écrit : > > I want to rename PyUnicode_AsUnicode() and change its result type > > (Py_UNICODE* => wchar_t*). The result will be a "borrowed reference", > > i.e. you don't have to free the memory, it will be done when the Unicode > > string is destroyed (by Py_DECREF). > > > > But this is almost the same as PyUnicode_AsWideCharString, right? > > You have to free the memory for PyUnicode_AsWideCharString().
With PyUnicode_AsWideCharXXX(), as with PyUnicode_AsUnicode(), you don't have to. The memory is handled by the Unicode object. Victor From solipsis at pitrou.net Mon Nov 21 18:04:01 2011 From: solipsis at pitrou.net (Antoine Pitrou) Date: Mon, 21 Nov 2011 18:04:01 +0100 Subject: [Python-Dev] Chose a name for a "get unicode as wide character, borrowed reference" function References: <201111211253.17106.victor.stinner@haypocalc.com> <1479578.LIbA5bexKU@dsk000552> <20111121165505.69d10918@pitrou.net> <10767768.kLA9kAqqlv@dsk000552> Message-ID: <20111121180401.6e243fa6@pitrou.net> On Mon, 21 Nov 2011 18:02:36 +0100 Victor Stinner wrote: > Le Lundi 21 Novembre 2011 16:55:05 Antoine Pitrou a écrit : > > > I want to rename PyUnicode_AsUnicode() and change its result type > > > (Py_UNICODE* => wchar_t*). The result will be a "borrowed reference", > > > i.e. you don't have to free the memory, it will be done when the Unicode > > > string is destroyed (by > > > Py_DECREF). > > > > But this is almost the same as PyUnicode_AsWideCharString, right? > > You have to free the memory for PyUnicode_AsWideCharString(). That's why I said "almost". I don't think it's a good idea to add this function, for two reasons: - the unicode API is already big enough, we don't need redundant functions with differing refcount behaviours - the internal wchar_t representation is certainly meant to disappear in the long term; adding an API which *relies* on that representation is silly, especially after we deliberately deprecated the Py_UNICODE APIs Regards Antoine.
From fuzzyman at voidspace.org.uk Mon Nov 21 18:22:48 2011 From: fuzzyman at voidspace.org.uk (Michael Foord) Date: Mon, 21 Nov 2011 17:22:48 +0000 Subject: [Python-Dev] Python 3, new-style classes and __class__ In-Reply-To: References: <4EC82CCA.7040105@voidspace.org.uk> <2F92C28C-B2E8-4329-AB85-BF46939D766D@voidspace.org.uk> Message-ID: <4ECA88E8.3090606@voidspace.org.uk> On 20/11/2011 21:41, Guido van Rossum wrote: > On Sun, Nov 20, 2011 at 10:44 AM, Michael Foord > wrote: >> On 20 Nov 2011, at 16:35, Guido van Rossum wrote: >> >>> Um, what?! __class__ *already* has a special meaning. Those examples >>> violate that meaning. No wonder they get garbage results. >>> >>> The correct way to override isinstance is explained here: >>> http://www.python.org/dev/peps/pep-3119/#overloading-isinstance-and-issubclass >>> . >>> >> >> Proxy classes have been using __class__ as a descriptor for this purpose for years before ABCs were introduced. This worked fine up until Python 3 where the compiler magic broke it when super is used. That is now fixed anyway. > Hm, okay. Though it's disheartening that it took three releases of 3.x > to figure this out. And there was a PEP even! > >> If I understand correctly, ABCs are great for allowing classes of objects to pass isinstance checks (etc) - what proxy, lazy and mock objects need is to be able to allow individual instances to pass different isinstance checks. > Ah, oops. Yes, __instancecheck__ is for the class to override > isinstance(inst, cls); for the *instance* to override apparently > you'll need to mess with __class__. > > I guess my request at this point would be to replace '@__class__' with > some other *legal* __identifier__ that doesn't clash with existing use > -- I don't like the arbitrary use of @ here. > The problem with using a valid identifier name is that it leaves open the possibility of the same "broken" behaviour (removing from the class namespace) for whatever name we pick. 
That means we should document the name used - and it's then more likely that users will start to rely on this odd (but documented) internal implementation detail. This in turn puts a burden on other implementations to use the same mechanism, even if this is less than ideal for them. This is why a deliberately invalid identifier was picked. All the best, Michael Foord > --Guido > >> All the best, >> >> Michael Foord >> >>> --Guido >>> >>> On Sat, Nov 19, 2011 at 6:13 PM, Michael Foord >>> wrote: >>>> >>>> On 19 November 2011 23:11, Vinay Sajip wrote: >>>>> Michael Foord voidspace.org.uk> writes: >>>>> >>>>>> That works fine in Python 3 (mock.Mock does it): >>>>>> >>>>>> >>> class Foo(object): >>>>>> ... @property >>>>>> ... def __class__(self): >>>>>> ... return int >>>>>> ... >>>>>> >>> a = Foo() >>>>>> >>> isinstance(a, int) >>>>>> True >>>>>> >>> a.__class__ >>>>>> >>>>>> >>>>>> There must be something else going on here. >>>>>> >>>>> Michael, thanks for the quick response. Okay, I'll dig in a bit further: >>>>> the >>>>> definition in SimpleLazyObject is >>>>> >>>>> __class__ = property(new_method_proxy(operator.attrgetter("__class__"))) >>>>> >>>>> so perhaps the problem is something related to the specifics of the >>>>> definition. >>>>> Here's what I found in initial exploration: >>>>> >>>>> -------------------------------------------------------------------------- >>>>> Python 2.7.2+ (default, Oct 4 2011, 20:06:09) >>>>> [GCC 4.6.1] on linux2 >>>>> Type "help", "copyright", "credits" or "license" for more information. 
>>>>>>>> from django.utils.functional import SimpleLazyObject >>>>>>>> fake_bool = SimpleLazyObject(lambda: True) >>>>>>>> fake_bool.__class__ >>>>> >>>>>>>> fake_bool.__dict__ >>>>> {'_setupfunc': at 0xca9ed8>, '_wrapped': True} >>>>>>>> SimpleLazyObject.__dict__ >>>>> dict_proxy({ >>>>> '__module__': 'django.utils.functional', >>>>> '__nonzero__':, >>>>> '__deepcopy__':, >>>>> '__str__':, >>>>> '_setup':, >>>>> '__class__':, >>>>> '__hash__':, >>>>> '__unicode__':, >>>>> '__bool__':, >>>>> '__eq__':, >>>>> '__doc__': '\n A lazy object initialised from any function.\n\n >>>>> Designed for compound objects of unknown type. For builtins or >>>>> objects of\n known type, use django.utils.functional.lazy.\n ', >>>>> '__init__': >>>>> }) >>>>> -------------------------------------------------------------------------- >>>>> Python 3.2.2 (default, Sep 5 2011, 21:17:14) >>>>> [GCC 4.6.1] on linux2 >>>>> Type "help", "copyright", "credits" or "license" for more information. >>>>>>>> from django.utils.functional import SimpleLazyObject >>>>>>>> fake_bool = SimpleLazyObject(lambda : True) >>>>>>>> fake_bool.__class__ >>>>> >>>>>>>> fake_bool.__dict__ >>>>> { >>>>> '_setupfunc': at 0x1c36ea8>, >>>>> '_wrapped': >>>>> } >>>>>>>> SimpleLazyObject.__dict__ >>>>> dict_proxy({ >>>>> '__module__': 'django.utils.functional', >>>>> '__nonzero__':, >>>>> '__deepcopy__':, >>>>> '__str__':, >>>>> '_setup':, >>>>> '__hash__':, >>>>> '__unicode__':, >>>>> '__bool__':, >>>>> '__eq__':, >>>>> '__doc__': '\n A lazy object initialised from any function.\n\n >>>>> Designed for compound objects of unknown type. 
For builtins or >>>>> objects of\n known type, use django.utils.functional.lazy.\n ', >>>>> '__init__': >>>>> }) >>>>> -------------------------------------------------------------------------- >>>>> >>>>> In Python 3, there's no __class__ property as there is in Python 2, >>>>> the fake_bool's type isn't bool, and the callable to set up the wrapped >>>>> object never gets called (which is why _wrapped is not set to True, but to >>>>> an anonymous object - this is set in SimpleLazyObject.__init__). >>>>> >>>> The Python compiler can do strange things with assignment to __class__ in >>>> the presence of super. This issue has now been fixed, but it may be what is >>>> biting you: >>>> >>>> http://bugs.python.org/issue12370 >>>> >>>> If this *is* the problem, then see the workaround suggested in the issue. >>>> (alias super to _super in the module scope and use the old style super >>>> calling convention.) >>>> >>>> Michael >>>> >>>> >>>>> Puzzling! >>>>> >>>>> Regards, >>>>> >>>>> Vinay Sajip >>>>> >>>>> _______________________________________________ >>>>> Python-Dev mailing list >>>>> Python-Dev at python.org >>>>> http://mail.python.org/mailman/listinfo/python-dev >>>>> Unsubscribe: >>>>> http://mail.python.org/mailman/options/python-dev/fuzzyman%40voidspace.org.uk >>>>> >>>> >>>> >>>> -- >>>> >>>> http://www.voidspace.org.uk/ >>>> >>>> May you do good and not evil >>>> May you find forgiveness for yourself and forgive others >>>> >>>> May you share freely, never taking more than you give. 
>>>> -- the sqlite blessing http://www.sqlite.org/different.html >>>> >>>> _______________________________________________ >>>> Python-Dev mailing list >>>> Python-Dev at python.org >>>> http://mail.python.org/mailman/listinfo/python-dev >>>> Unsubscribe: >>>> http://mail.python.org/mailman/options/python-dev/guido%40python.org >>>> >>>> >>> >>> >>> -- >>> --Guido van Rossum (python.org/~guido) >>> >> >> -- >> http://www.voidspace.org.uk/ >> >> >> May you do good and not evil >> May you find forgiveness for yourself and forgive others >> May you share freely, never taking more than you give. >> -- the sqlite blessing >> http://www.sqlite.org/different.html >> >> >> >> >> >> > > -- http://www.voidspace.org.uk/ May you do good and not evil May you find forgiveness for yourself and forgive others May you share freely, never taking more than you give. -- the sqlite blessing http://www.sqlite.org/different.html From guido at python.org Mon Nov 21 19:42:31 2011 From: guido at python.org (Guido van Rossum) Date: Mon, 21 Nov 2011 10:42:31 -0800 Subject: [Python-Dev] Python 3, new-style classes and __class__ In-Reply-To: <4ECA88E8.3090606@voidspace.org.uk> References: <4EC82CCA.7040105@voidspace.org.uk> <2F92C28C-B2E8-4329-AB85-BF46939D766D@voidspace.org.uk> <4ECA88E8.3090606@voidspace.org.uk> Message-ID: On Mon, Nov 21, 2011 at 9:22 AM, Michael Foord wrote: > On 20/11/2011 21:41, Guido van Rossum wrote: >> >> On Sun, Nov 20, 2011 at 10:44 AM, Michael Foord >> wrote: >>> >>> On 20 Nov 2011, at 16:35, Guido van Rossum wrote: >>>> Um, what?! __class__ *already* has a special meaning. Those examples >>>> violate that meaning. No wonder they get garbage results. >>>> >>>> The correct way to override isinstance is explained here: >>>> >>>> http://www.python.org/dev/peps/pep-3119/#overloading-isinstance-and-issubclass >>>> . >>>> >>> >>> Proxy classes have been using __class__ as a descriptor for this purpose >>> for years before ABCs were introduced.
This worked fine up until Python 3 >>> where the compiler magic broke it when super is used. That is now fixed >>> anyway. >> >> Hm, okay. Though it's disheartening that it took three releases of 3.x >> to figure this out. And there was a PEP even! >> >>> If I understand correctly, ABCs are great for allowing classes of objects >>> to pass isinstance checks (etc) - what proxy, lazy and mock objects need is >>> to be able to allow individual instances to pass different isinstance >>> checks. >> >> Ah, oops. Yes, __instancecheck__ is for the class to override >> isinstance(inst, cls); for the *instance* to override apparently >> you'll need to mess with __class__. >> >> I guess my request at this point would be to replace '@__class__' with >> some other *legal* __identifier__ that doesn't clash with existing use >> -- I don't like the arbitrary use of @ here. >> > > The problem with using a valid identifier name is that it leaves open the > possibility of the same "broken" behaviour (removing from the class > namespace) for whatever name we pick. > > That means we should document the name used - and it's then more likely that > users will start to rely on this odd (but documented) internal > implementation detail. This in turn puts a burden on other implementations > to use the same mechanism, even if this is less than ideal for them. > > This is why a deliberately invalid identifier was picked. Hm. There are many, many places in Python where a __special__ identifier is used in such a way that a user who stomps on it can cause themselves pain. This is why the language reference is quite serious about reserving *all* __special__ names and states that only documented uses of them are allowed (and at least implying that undocumented uses are not necessarily flagged as errors). While I see that PEP 3119 made a mistake in giving __class__ two different, incompatible special uses, I don't agree that this case is so special that we should use an "invalid" identifier. 
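[Editor's sketch of the two mechanisms being contrasted in this thread — a per-instance `__class__` property (the proxy/lazy/mock trick) versus PEP 3119's class-level `__instancecheck__` hook. Illustrative code only, not taken from mock or Django:]

```python
# Instance-level: a __class__ property makes isinstance() lie per instance,
# the pre-ABC trick used by proxy, lazy and mock objects.
class FakeInt:
    @property
    def __class__(self):
        return int

assert isinstance(FakeInt(), int)
assert type(FakeInt()) is not int  # the real type is unchanged

# Class-level: PEP 3119's __instancecheck__ hook on the metaclass lets a
# *class* decide which objects count as its instances.
class DuckMeta(type):
    def __instancecheck__(cls, obj):
        return hasattr(obj, "quack")

class Duck(metaclass=DuckMeta):
    pass

class Mallard:
    def quack(self):
        return "quack"

assert isinstance(Mallard(), Duck)       # no inheritance needed
assert not isinstance(object(), Duck)
```

The first mechanism answers "what does this one object claim to be?", the second "which objects does this class accept?" — which is why proxies need the former even though ABCs provide the latter.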
I don't see that the name use should actually be documented -- users should not make *any* use of undocumented __names__. Let's please continue the tradition of allowing experts to mess around creatively with internals. -- --Guido van Rossum (python.org/~guido) From guido at python.org Mon Nov 21 20:55:55 2011 From: guido at python.org (Guido van Rossum) Date: Mon, 21 Nov 2011 11:55:55 -0800 Subject: [Python-Dev] Committing PEP 3155 In-Reply-To: <20111118211438.782ae82a@pitrou.net> References: <20111118211438.782ae82a@pitrou.net> Message-ID: I've approved the latest version of this PEP. Congrats, Antoine! --Guido On Fri, Nov 18, 2011 at 12:14 PM, Antoine Pitrou wrote: > > Hello, > > I haven't seen any strong objections, so I would like to go ahead and > commit PEP 3155 (*) soon. Is anyone against it? > > (*) "Qualified name for classes and functions" > http://www.python.org/dev/peps/pep-3155/ > > Thank you > > Antoine. -- --Guido van Rossum (python.org/~guido) From victor.stinner at haypocalc.com Mon Nov 21 21:39:53 2011 From: victor.stinner at haypocalc.com (Victor Stinner) Date: Mon, 21 Nov 2011 21:39:53 +0100 Subject: [Python-Dev] PyUnicode_EncodeDecimal Message-ID: <201111212139.53637.victor.stinner@haypocalc.com> Hi, I'm trying to rewrite PyUnicode_EncodeDecimal() to upgrade it to the new Unicode API. The problem is that the function is not accessible in Python nor tested. Should we document and test it, leave it unchanged and deprecate it, or simply remove it? -- Python has a PyUnicode_EncodeDecimal() function. It was used in Python 2 by the int, long and complex constructors. In Python 3, the function is no longer used: it has been replaced by PyUnicode_TransformDecimalToASCII() in Python <= 3.2 and _PyUnicode_TransformDecimalAndSpaceToASCII() in Python 3.3. PyUnicode_EncodeDecimal() goes into an infinite loop if there is more than one unencodable character.
It's a known bug and there is a patch: http://bugs.python.org/issue13093 PyUnicode_EncodeDecimal() is undocumented and not tested: http://bugs.python.org/issue8646 Stefan Krah uses PyUnicode_EncodeDecimal() in his cdecimal project. See also "Malformed error message from float()" issue: http://bugs.python.org/issue10557 Python 3.3 now has 3 encoders to decimal: - PyUnicode_EncodeDecimal() - PyUnicode_TransformDecimalToASCII() - _PyUnicode_TransformDecimalAndSpaceToASCII() (new in 3.3) _PyUnicode_TransformDecimalAndSpaceToASCII() also replaces Unicode spaces with ASCII spaces. PyUnicode_EncodeDecimal() and PyUnicode_TransformDecimalToASCII() take Py_UNICODE* strings. PyUnicode_EncodeDecimal() requires an output buffer and it has no argument for the size of the output buffer. It is unsafe: it leads to buffer overflow if the buffer is too small. Victor From tjreedy at udel.edu Mon Nov 21 22:36:10 2011 From: tjreedy at udel.edu (Terry Reedy) Date: Mon, 21 Nov 2011 16:36:10 -0500 Subject: [Python-Dev] PyPy 1.7 - widening the sweet spot In-Reply-To: References: Message-ID: On 11/21/2011 5:36 AM, Maciej Fijalkowski wrote: > ================================== > PyPy 1.7 - widening the sweet spot > ================================== > > We're pleased to announce the 1.7 release of PyPy. As became a habit, this > release brings a lot of bugfixes and performance improvements over the 1.6 > release. However, unlike the previous releases, the focus has been on widening > the "sweet spot" of PyPy. That is, classes of Python code that PyPy can greatly > speed up should be vastly improved with this release. You can download the 1.7 > release here: > > http://pypy.org/download.html ... > The main topic of this release is widening the range of code which PyPy > can greatly speed up. On average on > our benchmark suite, PyPy 1.7 is around **30%** faster than PyPy 1.6 and up > to **20 times** faster on some benchmarks. > > ..
_`pypy 1.7 and cpython 2.7.1`: http://speed.pypy.org If I understand right, pypy is generally slower than cpython without jit and faster with jit. (There is obviously a spurious datapoint in the pypy-c timeline for raytracing-simple.) This site is a nice piece of work. ... > .. _`py3k proposal`: http://pypy.org/py3donate.html I strongly recommend that where it makes a difference, the pypy python3 project target 3.3. In particular, don't reproduce the buggy narrow-build behavior of 3.2 and before (perhaps pypy avoids this already). Do include the new unicode capi in cpyext. I anticipate that 3.3 will see more production use than 3.2 -- Terry Jan Reedy From amauryfa at gmail.com Mon Nov 21 23:46:13 2011 From: amauryfa at gmail.com (Amaury Forgeot d'Arc) Date: Mon, 21 Nov 2011 23:46:13 +0100 Subject: [Python-Dev] PyPy 1.7 - widening the sweet spot In-Reply-To: References: Message-ID: 2011/11/21 Terry Reedy > I strongly recommend that where it makes a difference, the pypy python3 > project target 3.3. In particular, don't reproduce the buggy narrow-build > behavior of 3.2 and before (perhaps pypy avoids this already). Do include > the new unicode capi in cpyext. I anticipate that 3.3 will see more > production use than 3.2 > In the current 2.7-compatible version, PyPy already uses wchar_t for its Unicode string, i.e. it is always a wide build with gcc and a narrow build on Windows. But this will certainly change for the 3.x port. PyPy already supports different internal representations for the same visible user type, and it makes sense to have 1-byte, 2-byte and 4-byte unicode types and try to choose the most efficient representation. As for the C API... getting a pointer out of a PyPy string already requires allocating and filling a new non-movable buffer (since all memory allocated by PyPy is movable). So cpyext could support the new API for sure, but it's unlikely to give any performance benefit to an extension module.
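[Editor's note: the "buggy narrow-build behavior" Terry mentions is easy to demonstrate from Python code. The snippet below shows the behaviour of CPython 3.3+ (PEP 393), where strings always index by code point on every platform; a 3.2 narrow build instead stored the same literal as two UTF-16 surrogate units.]

```python
astral = "\U00010400"  # DESERET CAPITAL LETTER LONG I, outside the BMP

# CPython 3.3+ (PEP 393), and wide builds of 3.2: one code point.
assert len(astral) == 1
assert ord(astral) == 0x10400
assert astral[0] == astral

# A 3.2 narrow build instead reported len() == 2 and exposed the
# surrogates '\ud801' and '\udc00' through indexing and slicing.
assert "\ud801\udc00" != astral  # the pair is a different string in Py3
```

This is why targeting the 3.3 semantics spares an implementation from reproducing the surrogate-leaking indexing behaviour.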
-- Amaury Forgeot d'Arc From victor.stinner at haypocalc.com Tue Nov 22 02:02:05 2011 From: victor.stinner at haypocalc.com (Victor Stinner) Date: Tue, 22 Nov 2011 02:02:05 +0100 Subject: [Python-Dev] PyUnicode_EncodeDecimal In-Reply-To: <201111212139.53637.victor.stinner@haypocalc.com> References: <201111212139.53637.victor.stinner@haypocalc.com> Message-ID: <201111220202.05187.victor.stinner@haypocalc.com> Le lundi 21 novembre 2011 21:39:53, Victor Stinner a écrit : > I'm trying to rewrite PyUnicode_EncodeDecimal() to upgrade it to the new > Unicode API. The problem is that the function is not accessible in Python > nor tested. I added tests for this function in Python 2.7, 3.2 and 3.3. > PyUnicode_EncodeDecimal() goes into an infinite loop if there is more than > one unencodable character. It's a known bug and there is a patch: > http://bugs.python.org/issue13093 I fixed this issue. I was wrong: it was not possible to DoS Python, the bug was not an infinite loop (but there was a bug in error handling). > PyUnicode_EncodeDecimal() requires an output buffer and it has no argument > for the size of the output buffer. It is unsafe: it leads to buffer > overflow if the buffer is too small. This function is broken by design if an error handler is specified: the caller cannot know the size of the output buffer, whereas the caller has to allocate this buffer. I propose to raise an error if an error handler (different from "strict") is specified and do this change in Python 2.7, 3.2 and 3.3. In Python 2.7 code base, PyUnicode_EncodeDecimal() is always called with errors=NULL. In Python 3.x, the function is no longer called. > Should we document and test it, leave it unchanged and > deprecate it, or simply remove it? If we change PyUnicode_EncodeDecimal() to reject error handlers different from strict, we can keep this function for some release and deprecate it.
The function is already deprecated because it uses the deprecated Py_UNICODE type. Victor From victor.stinner at haypocalc.com Tue Nov 22 02:08:17 2011 From: victor.stinner at haypocalc.com (Victor Stinner) Date: Tue, 22 Nov 2011 02:08:17 +0100 Subject: [Python-Dev] PyUnicode_Resize Message-ID: <201111220208.17514.victor.stinner@haypocalc.com> Hi, In Python 3.2, PyUnicode_Resize() expects a number of Py_UNICODE units, whereas Python 3.3 expects a number of characters. It is tricky to convert a number of Py_UNICODE units to a number of characters, so it is difficult to provide a backward compatibility PyUnicode_Resize() function taking a number of Py_UNICODE units in Python 3.3. Should we rename PyUnicode_Resize() in Python 3.3 to avoid surprising bugs? The issue only concerns Windows with non-BMP characters, so a very rare use case. The easiest solution is to do nothing in Python 3.3: the API changed, but it doesn't really matter. Developers just have to be careful on this particular issue (which is not well documented today). Victor From g.rodola at gmail.com Tue Nov 22 10:21:26 2011 From: g.rodola at gmail.com (Giampaolo Rodolà) Date: Tue, 22 Nov 2011 10:21:26 +0100 Subject: [Python-Dev] PyPy 1.7 - widening the sweet spot In-Reply-To: References: Message-ID: 2011/11/21 Terry Reedy : > I strongly recommend that where it makes a difference, the pypy python3 > project target 3.3. In particular, don't reproduce the buggy narrow-build > behavior of 3.2 and before (perhaps pypy avoids this already). Do include > the new unicode capi in cpyext. I anticipate that 3.3 will see more > production use than 3.2 Is there a reason in particular?
--- Giampaolo http://code.google.com/p/pyftpdlib/ http://code.google.com/p/psutil/ From stefan at bytereef.org Tue Nov 22 13:23:26 2011 From: stefan at bytereef.org (Stefan Krah) Date: Tue, 22 Nov 2011 13:23:26 +0100 Subject: [Python-Dev] PyUnicode_EncodeDecimal In-Reply-To: <201111220202.05187.victor.stinner@haypocalc.com> References: <201111212139.53637.victor.stinner@haypocalc.com> <201111220202.05187.victor.stinner@haypocalc.com> Message-ID: <20111122122326.GA14907@sleipnir.bytereef.org> Victor Stinner wrote: > > Should we document and test it, leave it unchanged and > > deprecate it, or simply remove it? > > If we change PyUnicode_EncodeDecimal() to reject error handlers different from > strict, we can keep this function for some release and deprecate it. The > function is already deprecated because it uses the deprecated Py_UNICODE type. I'd be fine with removing the function in 3.4. For consistency, it might be better to remove it in 4.0 together with all the other deprecated functions (at least I understood that this was the plan). Stefan Krah From victor.stinner at haypocalc.com Tue Nov 22 13:28:12 2011 From: victor.stinner at haypocalc.com (Victor Stinner) Date: Tue, 22 Nov 2011 13:28:12 +0100 Subject: [Python-Dev] PyUnicode_EncodeDecimal In-Reply-To: <201111220202.05187.victor.stinner@haypocalc.com> References: <201111212139.53637.victor.stinner@haypocalc.com> <201111220202.05187.victor.stinner@haypocalc.com> Message-ID: <201111221328.12379.victor.stinner@haypocalc.com> Le mardi 22 novembre 2011 02:02:05, Victor Stinner a écrit : > This function is broken by design if an error handler is specified: the > caller cannot know the size of the output buffer, whereas the caller has > to allocate this buffer. > > I propose to raise an error if an error handler (different from "strict") > is specified and do this change in Python 2.7, 3.2 and 3.3. > > In Python 2.7 code base, PyUnicode_EncodeDecimal() is always called with > errors=NULL.
In Python 3.x, the function is no longer called. I opened the following issue for this point: http://bugs.python.org/issue13452 Victor From stefan_ml at behnel.de Tue Nov 22 14:15:10 2011 From: stefan_ml at behnel.de (Stefan Behnel) Date: Tue, 22 Nov 2011 14:15:10 +0100 Subject: [Python-Dev] PyPy 1.7 - widening the sweet spot In-Reply-To: References: Message-ID: Giampaolo Rodolà, 22.11.2011 10:21: > 2011/11/21 Terry Reedy: >> I strongly recommend that where it makes a difference, the pypy python3 >> project target 3.3. In particular, don't reproduce the buggy narrow-build >> behavior of 3.2 and before (perhaps pypy avoids this already). Do include >> the new unicode capi in cpyext. I anticipate that 3.3 will see more >> production use than 3.2 > > Is there a reason in particular? Well, Py3 still has a lot to catch up in terms of widespread distribution compared to Py2.x, and new users will usually start using the most up to date release, which will soon be 3.3. Besides, 3.3 has received various optimisations that make it more suitable for production use than 3.2, including the above mentioned Unicode optimisations. Stefan From amauryfa at gmail.com Tue Nov 22 14:33:18 2011 From: amauryfa at gmail.com (Amaury Forgeot d'Arc) Date: Tue, 22 Nov 2011 14:33:18 +0100 Subject: [Python-Dev] PyUnicode_Resize In-Reply-To: <201111220208.17514.victor.stinner@haypocalc.com> References: <201111220208.17514.victor.stinner@haypocalc.com> Message-ID: 2011/11/22 Victor Stinner > Hi, > > In Python 3.2, PyUnicode_Resize() expects a number of Py_UNICODE units, > whereas Python 3.3 expects a number of characters. > > It is tricky to convert a number of Py_UNICODE units to a number of > characters, so it is difficult to provide a backward compatibility > PyUnicode_Resize() function taking a number of Py_UNICODE units in Python > 3.3. > > Should we rename PyUnicode_Resize() in Python 3.3 to avoid surprising bugs?
> The issue only concerns Windows with non-BMP characters, so a very rare use > case. > > The easiest solution is to do nothing in Python 3.3: the API changed, but > it > doesn't really matter. Developers just have to be careful on this > particular > issue (which is not well documented today). > +1. A note in the "Porting C code" section of whatsnew/3.3 should be enough. -- Amaury Forgeot d'Arc From fijall at gmail.com Tue Nov 22 15:46:41 2011 From: fijall at gmail.com (Maciej Fijalkowski) Date: Tue, 22 Nov 2011 16:46:41 +0200 Subject: [Python-Dev] PyPy 1.7 - widening the sweet spot In-Reply-To: References: Message-ID: On Tue, Nov 22, 2011 at 3:15 PM, Stefan Behnel wrote: > Giampaolo Rodolà, 22.11.2011 10:21: >> >> 2011/11/21 Terry Reedy: >>> >>> I strongly recommend that where it makes a difference, the pypy python3 >>> project target 3.3. In particular, don't reproduce the buggy narrow-build >>> behavior of 3.2 and before (perhaps pypy avoids this already). Do include >>> the new unicode capi in cpyext. I anticipate that 3.3 will see more >>> production use than 3.2 >> >> Is there a reason in particular? > > Well, Py3 still has a lot to catch up in terms of widespread distribution > compared to Py2.x, and new users will usually start using the most up to > date release, which will soon be 3.3. > > Besides, 3.3 has received various optimisations that make it more suitable > for production use than 3.2, including the above mentioned Unicode > optimisations. > > Stefan > PyPy's py3k branch targets Python 3.2 until 3.3 is released and very likely 3.3 afterwards. Optimizations are irrelevant really in the case of PyPy.
Cheers, fijal From barry at python.org Tue Nov 22 15:55:09 2011 From: barry at python.org (Barry Warsaw) Date: Tue, 22 Nov 2011 09:55:09 -0500 Subject: [Python-Dev] PyPy 1.7 - widening the sweet spot In-Reply-To: References: Message-ID: <20111122095509.474420a6@limelight.wooz.org> On Nov 22, 2011, at 02:15 PM, Stefan Behnel wrote: >Well, Py3 still has a lot to catch up in terms of widespread distribution >compared to Py2.x, and new users will usually start using the most up to date >release, which will soon be 3.3. > >Besides, 3.3 has received various optimisations that make it more suitable >for production use than 3.2, including the above mentioned Unicode >optimisations. 3.3 won't be released (according to PEP 398's current schedule) until August of next year. I think that's too long to wait before pushing for widespread adoption of Python 3. Hopefully, we're going to be making a dent in that in the next version of Ubuntu. We're actively starting to port a handful of desktop applications (including their dependency stacks) to Python 3.2, which I think is a fine release for doing so. I owe a blog post about this, but please do contact me if you want to get involved. Cheers, -Barry From stefan_ml at behnel.de Tue Nov 22 16:35:14 2011 From: stefan_ml at behnel.de (Stefan Behnel) Date: Tue, 22 Nov 2011 16:35:14 +0100 Subject: [Python-Dev] PyPy 1.7 - widening the sweet spot In-Reply-To: References: Message-ID: Maciej Fijalkowski, 22.11.2011 15:46: > On Tue, Nov 22, 2011 at 3:15 PM, Stefan Behnel wrote: >> Giampaolo Rodolà, 22.11.2011 10:21: >>> 2011/11/21 Terry Reedy: >>>> >>>> I strongly recommend that where it makes a difference, the pypy python3 >>>> project target 3.3. In particular, don't reproduce the buggy narrow-build >>>> behavior of 3.2 and before (perhaps pypy avoids this already). Do include >>>> the new unicode capi in cpyext. I anticipate that 3.3 will see more >>>> production use than 3.2 >>> >>> Is there a reason in particular?
>> Well, Py3 still has a lot to catch up in terms of widespread distribution >> compared to Py2.x, and new users will usually start using the most up to >> date release, which will soon be 3.3. >> >> Besides, 3.3 has received various optimisations that make it more suitable >> for production use than 3.2, including the above mentioned Unicode >> optimisations. Note that I was referring to Terry's "more production use" comment here, not to the "PyPy should target 3.3 instead of 3.2" part. > PyPy's py3k branch targets Python 3.2 until 3.3 is released and very > likely 3.3 afterwards. Optimizations are irrelevant really in the case > of PyPy. I admit that I wasn't very clear in my wording. As Terry pointed out, the Unicode changes in Py3.3 are not only for speed and/or memory performance improvements, they also improve the compliance of the Unicode implementation, which implies a behavioural change. Since PyPy appears to have implemented the current wide/narrow behaviour of Py2 and Py3.[012] already, I see no reason not to target 3.2 for the time being as it does not require substantial changes in this part. Compliance with Py3.3 will then require implementing the new behaviour. Stefan From solipsis at pitrou.net Tue Nov 22 17:08:32 2011 From: solipsis at pitrou.net (Antoine Pitrou) Date: Tue, 22 Nov 2011 17:08:32 +0100 Subject: [Python-Dev] cpython: fix wrong credit and issue id given in previous commit References: Message-ID: <20111122170832.15af0fb8@pitrou.net> On Tue, 22 Nov 2011 13:38:03 +0100 giampaolo.rodola wrote: > diff --git a/Misc/ACKS b/Misc/ACKS > --- a/Misc/ACKS > +++ b/Misc/ACKS > @@ -11,7 +11,7 @@ > PS: In the standard Python distribution, this file is encoded in UTF-8 > and the list is in rough alphabetical order by last names. > > -Matt Mulsow > +Chris Clark > Rajiv Abraham "The list is in rough alphabetical order by last names". Regards, Antoine. From stephen at xemacs.org Tue Nov 22 17:41:46 2011 From: stephen at xemacs.org (Stephen J.
Turnbull) Date: Wed, 23 Nov 2011 01:41:46 +0900 Subject: [Python-Dev] Promoting Python 3 [was: PyPy 1.7 - widening the sweet spot] In-Reply-To: <20111122095509.474420a6@limelight.wooz.org> References: <20111122095509.474420a6@limelight.wooz.org> Message-ID: <87mxboqfcl.fsf@uwakimon.sk.tsukuba.ac.jp> Barry Warsaw writes: > Hopefully, we're going to be making a dent in that in the next version of > Ubuntu. This is still a big mess in Gentoo and MacPorts, though.
MacPorts > hasn't done anything about creating a transition infrastructure AFAICT. What kind of "transition infrastructure" would it need? It's definitely not going to replace the Apple-provided Python out of the box, so setting `python` to a python3 is not going to happen. It doesn't define a `python3`, so maybe that? Is there a document somewhere on what kind of things distros need for a transition plan? > Gentoo has its "eselect python set VERSION" stuff, but it's very > dangerous to set to a Python 3 version, as many things go permanently > wonky once you do. (So far I've been able to work around problems > this creates, but it's not much fun.) MacPorts provides `port select`, which I believe has the same function (you need to install `python_select` for it to be configured for the Python group); the syntax is `port select --set python $VERSION`: > python --version Python 2.6.1 > sudo port select --set python python32 Selecting 'python32' for 'python' succeeded. 'python32' is now active. > python --version Python 3.2.2 From a.badger at gmail.com Tue Nov 22 18:13:58 2011 From: a.badger at gmail.com (Toshio Kuratomi) Date: Tue, 22 Nov 2011 09:13:58 -0800 Subject: [Python-Dev] Promoting Python 3 [was: PyPy 1.7 - widening the sweet spot] In-Reply-To: <87mxboqfcl.fsf@uwakimon.sk.tsukuba.ac.jp> References: <20111122095509.474420a6@limelight.wooz.org> <87mxboqfcl.fsf@uwakimon.sk.tsukuba.ac.jp> Message-ID: <20111122171358.GD6114@unaka.lan> On Wed, Nov 23, 2011 at 01:41:46AM +0900, Stephen J. Turnbull wrote: > Barry Warsaw writes: > > > Hopefully, we're going to be making a dent in that in the next version of > > Ubuntu. > > This is still a big mess in Gentoo and MacPorts, though. MacPorts > hasn't done anything about creating a transition infrastructure AFAICT. > Gentoo has its "eselect python set VERSION" stuff, but it's very > dangerous to set to a Python 3 version, as many things go permanently > wonky once you do.
(So far I've been able to work around problems > this creates, but it's not much fun.) I have no experience with this > in Debian, Red Hat (and derivatives) or *BSD, but I have to suspect > they're no better. (Well, maybe Red Hat has learned from its 1.5.2 > experience! :-) > For Fedora (and currently, Red Hat is based on Fedora -- a little more about that later, though), we have parallel python2 and python3 stacks. As time goes on we've slowly brought more python-3 compatible modules onto the python3 stack (I believe someone had the goal a year and a half ago to get a complete pylons web development stack running on python3 on Fedora which brought a lot of packages forward). Unlike Barry's work with Ubuntu, though, we're mostly chiselling around the edges; we're working at the level where there's a module that someone needs to run something (or run some optional features of something) that runs on python3. > I don't have any connections to the distros, so can't really offer to > help directly. I think it might be a good idea for users to lobby > (politely!) their distros to work on the transition. > Where distros aren't working on parallel stacks, there definitely needs to be some transition plan. With my experience with parallel stacks, the best help there is to 1) help upstreams port to py3k (If someone can get PIL's py3k support finished and into a released package, that would free up a few things). 2) open bugs or help with creating python3 packages of modules when the upstream support is there. Depending on what software Barry's talking about porting to python3, that could be a big incentive as well. Just like with the push in Fedora to have pylons run on python3, I think that having certain applications that run on python3 and therefore need to have stacks of modules that support it is one of the prime ways that distros become motivated to provide python3 packages and support. 
This is basically the "killer app" idea in a new venue :-) -Toshio -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 198 bytes Desc: not available URL: From dmalcolm at redhat.com Tue Nov 22 18:27:47 2011 From: dmalcolm at redhat.com (David Malcolm) Date: Tue, 22 Nov 2011 12:27:47 -0500 Subject: [Python-Dev] Promoting Python 3 [was: PyPy 1.7 - widening the sweet spot] In-Reply-To: <20111122171358.GD6114@unaka.lan> References: <20111122095509.474420a6@limelight.wooz.org> <87mxboqfcl.fsf@uwakimon.sk.tsukuba.ac.jp> <20111122171358.GD6114@unaka.lan> Message-ID: <1321982869.4311.26.camel@surprise> On Tue, 2011-11-22 at 09:13 -0800, Toshio Kuratomi wrote: > On Wed, Nov 23, 2011 at 01:41:46AM +0900, Stephen J. Turnbull wrote: > > Barry Warsaw writes: > > > > > Hopefully, we're going to be making a dent in that in the next version of > > > Ubuntu. > > > > This is still a big mess in Gentoo and MacPorts, though. MacPorts > > hasn't done anything about ceating a transition infrastructure AFAICT. > > Gentoo has its "eselect python set VERSION" stuff, but it's very > > dangerous to set to a Python 3 version, as many things go permanently > > wonky once you do. (So far I've been able to work around problems > > this creates, but it's not much fun.) I have no experience with this > > in Debian, Red Hat (and derivatives) or *BSD, but I have to suspect > > they're no better. (Well, maybe Red Hat has learned from its 1.5.2 > > experience! :-) > > > For Fedora (and currently, Red Hat is based on Fedora -- a little more about > that later, though), we have parallel python2 and python3 stacks. As time > goes on we've slowly brought more python-3 compatible modules onto the > python3 stack (I believe someone had the goal a year and a half ago to get > a complete pylons web development stack running on python3 on Fedora which > brought a lot of packages forward). 
FWIW, current status of Fedora's Python 3 stack can be seen here: http://fedoraproject.org/wiki/Python3 and that page may be of interest to other distributions - I know of at least one other distribution that's screen-scraping it ;) From g.rodola at gmail.com Tue Nov 22 21:20:10 2011 From: g.rodola at gmail.com (=?ISO-8859-1?Q?Giampaolo_Rodol=E0?=) Date: Tue, 22 Nov 2011 21:20:10 +0100 Subject: [Python-Dev] cpython: fix wrong credit and issue id given in previous commit In-Reply-To: <20111122170832.15af0fb8@pitrou.net> References: <20111122170832.15af0fb8@pitrou.net> Message-ID: Sorry, thanks (fixed). --- Giampaolo http://code.google.com/p/pyftpdlib/ http://code.google.com/p/psutil/ 2011/11/22 Antoine Pitrou : > On Tue, 22 Nov 2011 13:38:03 +0100 > giampaolo.rodola wrote: >> diff --git a/Misc/ACKS b/Misc/ACKS >> --- a/Misc/ACKS >> +++ b/Misc/ACKS >> @@ -11,7 +11,7 @@ >> PS: In the standard Python distribution, this file is encoded in UTF-8 >> and the list is in rough alphabetical order by last names. >> >> -Matt Mulsow >> +Chris Clark >> Rajiv Abraham > > "The list is in rough alphabetical order by last names". > > Regards > > Antoine. > > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/g.rodola%40gmail.com > From nadeem.vawda at gmail.com Tue Nov 22 21:26:40 2011 From: nadeem.vawda at gmail.com (Nadeem Vawda) Date: Tue, 22 Nov 2011 22:26:40 +0200 Subject: [Python-Dev] [Python-checkins] cpython: sort last committed name in alphabetical order In-Reply-To: References: Message-ID: Did you mean to also modify sched.py in this changeset?
From pjenvey at underboss.org Tue Nov 22 21:28:54 2011 From: pjenvey at underboss.org (Philip Jenvey) Date: Tue, 22 Nov 2011 12:28:54 -0800 Subject: [Python-Dev] PyPy 1.7 - widening the sweet spot In-Reply-To: References: Message-ID: <6F9490B9-DBEF-4CF6-89B7-26EA0C8A88E2@underboss.org> On Nov 22, 2011, at 7:35 AM, Stefan Behnel wrote: > Maciej Fijalkowski, 22.11.2011 15:46: >> PyPy's py3k branch targets Python 3.2 until 3.3 is released and very >> likely 3.3 afterwards. Optimizations are irrelevant really in the case >> of PyPy. > > I admit that I wasn't very clear in my wording. As Terry pointed out, the Unicode changes in Py3.3 are not only for speed and/or memory performance improvements, they also improve the compliance of the Unicode implementation, which implies a behavioural change. Since PyPy appears to have implemented the current wide/narrow behaviour of Py2 and Py3.[012] already, I see no reason not to target 3.2 for the time being as it does not require substantial changes in this part. Compliance with Py3.3 will then require implementing the new behaviour. One reason to target 3.2 for now is that it's not a moving target. There's overhead involved in managing modifications to the pure-Python standard lib needed for PyPy; tracking 3.3 changes as they happen would only exacerbate this. The plans to split the standard lib into its own repo separate from core CPython will of course help alternative implementations here. -- Philip Jenvey From amauryfa at gmail.com Tue Nov 22 21:29:04 2011 From: amauryfa at gmail.com (Amaury Forgeot d'Arc) Date: Tue, 22 Nov 2011 21:29:04 +0100 Subject: [Python-Dev] cpython: fix wrong credit and issue id given in previous commit In-Reply-To: References: <20111122170832.15af0fb8@pitrou.net> Message-ID: Hi, 2011/11/22 Giampaolo Rodolà > Sorry, thanks (fixed). > You also modified Lib/sched.py in the same commit. Was it intended? If not, please revert it.
-- Amaury Forgeot d'Arc -------------- next part -------------- An HTML attachment was scrubbed... URL: From amauryfa at gmail.com Tue Nov 22 21:43:22 2011 From: amauryfa at gmail.com (Amaury Forgeot d'Arc) Date: Tue, 22 Nov 2011 21:43:22 +0100 Subject: [Python-Dev] PyPy 1.7 - widening the sweet spot In-Reply-To: <6F9490B9-DBEF-4CF6-89B7-26EA0C8A88E2@underboss.org> References: <6F9490B9-DBEF-4CF6-89B7-26EA0C8A88E2@underboss.org> Message-ID: 2011/11/22 Philip Jenvey > One reason to target 3.2 for now is it's not a moving target. There's > overhead involved in managing modifications to the pure python standard lib > needed for PyPy, tracking 3.3 changes as they happen as well exacerbates > this. > > The plans to split the standard lib into its own repo separate from core > CPython will of course help alternative implementations here. > I don't see how it would help here. Copying the CPython Lib/ directory is not difficult, even though PyPy made slight modifications to the files, and even without any merge tool. OTOH when PyPy changed minor versions (from 2.7.0 to 2.7.2 IIRC) most of the work was to follow the various tiny fixes made to the built-in modules: _io, _ssl and _multiprocessing. -- Amaury Forgeot d'Arc -------------- next part -------------- An HTML attachment was scrubbed... URL: From solipsis at pitrou.net Tue Nov 22 22:32:11 2011 From: solipsis at pitrou.net (Antoine Pitrou) Date: Tue, 22 Nov 2011 22:32:11 +0100 Subject: [Python-Dev] cpython: fix compiler warning by implementing this more cleverly References: Message-ID: <20111122223211.7425c55b@pitrou.net> On Tue, 22 Nov 2011 21:29:43 +0100 benjamin.peterson wrote: > http://hg.python.org/cpython/rev/77ab830930ae > changeset: 73697:77ab830930ae > user: Benjamin Peterson > date: Tue Nov 22 15:29:32 2011 -0500 > summary: > fix compiler warning by implementing this more cleverly You mean "more obscurely"? Obfuscating the original intent in order to disable a compiler warning doesn't seem very wise to me. 
Regards Antoine. > files: > Objects/unicodeobject.c | 7 +------ > 1 files changed, 1 insertions(+), 6 deletions(-) > > > diff --git a/Objects/unicodeobject.c b/Objects/unicodeobject.c > --- a/Objects/unicodeobject.c > +++ b/Objects/unicodeobject.c > @@ -6164,12 +6164,7 @@ > kind = PyUnicode_KIND(unicode); > data = PyUnicode_DATA(unicode); > len = PyUnicode_GET_LENGTH(unicode); > - > - switch(kind) { > - case PyUnicode_1BYTE_KIND: expandsize = 4; break; > - case PyUnicode_2BYTE_KIND: expandsize = 6; break; > - case PyUnicode_4BYTE_KIND: expandsize = 10; break; > - } > + expandsize = kind * 2 + 2; > > if (len > PY_SSIZE_T_MAX / expandsize) > return PyErr_NoMemory(); > > -- > Repository URL: http://hg.python.org/cpython From benjamin at python.org Tue Nov 22 22:42:35 2011 From: benjamin at python.org (Benjamin Peterson) Date: Tue, 22 Nov 2011 16:42:35 -0500 Subject: [Python-Dev] cpython: fix compiler warning by implementing this more cleverly In-Reply-To: <20111122223211.7425c55b@pitrou.net> References: <20111122223211.7425c55b@pitrou.net> Message-ID: 2011/11/22 Antoine Pitrou : > On Tue, 22 Nov 2011 21:29:43 +0100 > benjamin.peterson wrote: >> http://hg.python.org/cpython/rev/77ab830930ae >> changeset: 73697:77ab830930ae >> user: Benjamin Peterson >> date: Tue Nov 22 15:29:32 2011 -0500 >> summary: >> fix compiler warning by implementing this more cleverly > > You mean "more obscurely"? > Obfuscating the original intent in order to disable a compiler warning > doesn't seem very wise to me. Well, I think it makes sense that the kind tells you how many bytes are in it.
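[For readers following the diff above: the constants in the removed switch are presumably the worst-case escape lengths for each PEP 393 kind (bytes per code point) — `\xXX` is 4 characters, `\uXXXX` is 6, `\UXXXXXXXX` is 10 — and a quick sketch (illustrative only, not CPython source) confirms the replacement formula reproduces them:]

```python
# Worst-case escape expansion per PEP 393 kind, as in the old switch:
# kind 1 -> "\xXX" (4 chars), kind 2 -> "\uXXXX" (6), kind 4 -> "\UXXXXXXXX" (10)
old_switch = {1: 4, 2: 6, 4: 10}

# The replacement expression from the commit under discussion:
for kind, expandsize in old_switch.items():
    assert kind * 2 + 2 == expandsize  # 1 -> 4, 2 -> 6, 4 -> 10

print("kind * 2 + 2 matches the old switch for kinds 1, 2 and 4")
```

The formula only coincides with the switch because the valid kinds happen to be exactly 1, 2 and 4, which is the crux of the "clever vs. obscure" disagreement that follows.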
-- Regards, Benjamin From barry at python.org Tue Nov 22 22:51:32 2011 From: barry at python.org (Barry Warsaw) Date: Tue, 22 Nov 2011 16:51:32 -0500 Subject: [Python-Dev] Promoting Python 3 [was: PyPy 1.7 - widening the sweet spot] In-Reply-To: <20111122171358.GD6114@unaka.lan> References: <20111122095509.474420a6@limelight.wooz.org> <87mxboqfcl.fsf@uwakimon.sk.tsukuba.ac.jp> <20111122171358.GD6114@unaka.lan> Message-ID: <20111122165132.029b9d75@limelight.wooz.org> On Nov 22, 2011, at 09:13 AM, Toshio Kuratomi wrote: >For Fedora (and currently, Red Hat is based on Fedora -- a little more about >that later, though), we have parallel python2 and python3 stacks. Debian (and thus Ubuntu) also has separate Python 2 and 3 stacks. In general, if you have a Python package (e.g. on PyPI) called 'foo', you'll have a Debian binary package called python-foo for the Python 2 version, and python3-foo for the Python 3 version. /usr/bin/python will always (modulo perhaps PEP 394) point to Python 2.x with Python 3 accessible via /usr/bin/python3. The minor version numbered Python binaries are also available. Debian's infrastructure makes it fairly easy to support multiple versions of Python at the same time, and of course to support both a Python 2 and 3 stack simultaneously. It's also fairly easy to switch the default Python version. Binary packages containing pure-Python are space efficient, sharing one copy of the Python source code for all supported versions. A symlink farm is used to manage the incompatibilities in .pyc files, but only for Python 2, since PEPs 3147 and 3149 solve this problem in a better way for Python 3 (no symlink farms necessary). The one additional complication though is that extension modules must be built for each supported version, and all .so's are included in a single binary package. E.g. if python-foo has an extension module, it will contain the 2.6 .so and the 2.7 .so. 
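[As an aside on the tagged-filename schemes Barry mentions: PEPs 3147 and 3149 embed an interpreter tag in `.pyc` and extension-module names, which is what lets one package directory serve several Python versions without symlink farms. A minimal illustration using today's `importlib`/`sysconfig` APIs — the exact tags shown in the comments are examples and vary per interpreter and platform:]

```python
import importlib.util
import sysconfig

# PEP 3147: bytecode is cached in __pycache__ under a version-tagged name,
# so multiple interpreters can share one source tree.
print(importlib.util.cache_from_source('foo.py'))
# e.g. __pycache__/foo.cpython-312.pyc

# PEP 3149: extension modules get a similarly tagged suffix.
print(sysconfig.get_config_var('EXT_SUFFIX'))
# e.g. .cpython-312-x86_64-linux-gnu.so
```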
For the next version of Ubuntu, we will be dropping Python 2.6 support, so our binary packages are rebuilt to contain only the 2.7 version of the extension module. Details on how Debian packages Python, including its deviations from upstream, are available here: http://wiki.debian.org/Python Ubuntu's deviations from Debian and other details are available here: https://wiki.ubuntu.com/Python >Unlike Barry's work with Ubuntu, though, we're mostly chiselling around the >edges; we're working at the level where there's a module that someone needs >to run something (or run some optional features of something) that runs on >python3. This is great, because it means Fedora's taking kind of a bottom-up approach, while Ubuntu is taking a more top-down approach. Working together, we'll change the world. :) The key here is that we push as many of the changes as possible as far upstream as possible. I know Toshio and David agree, we want to get upstream package authors and application developers to support Python 3 as much as possible. I hope there will be no cases where a distro has to fork a package or application to support Python 3, although we will do it if there's no other way. Most likely for Ubuntu though, that would be pushing the changes into Debian. >Where distros aren't working on parallel stacks, there definitely needs to >be some transition plan. With my experience with parallel stacks, the best >help there is to 1) help upstreams port to py3k (If someone can get PIL's >py3k support finished and into a released package, that would free up a few >things). 2) open bugs or help with creating python3 packages of modules >when the upstream support is there. +1 >Depending on what software Barry's talking about porting to python3, that >could be a big incentive as well. 
Just like with the push in Fedora to have >pylons run on python3, I think that having certain applications that run on >python3 and therefore need to have stacks of modules that support it is one >of the prime ways that distros become motivated to provide python3 packages >and support. This is basically the "killer app" idea in a new venue :-) Again, wholehearted +1. For now, we are not spending much time on server applications, though I've seen promising talk about Twisted ports to Python 3. We're looking specifically at desktop applications, such as Update Manager, Software Center, Computer Janitor, etc. Those may be fairly Ubuntu and/or Debian specific, but the actual applications themselves aren't too difficult to port. E.g. switching to Gnome object introspection, which already supports Python 3. We can easily identify the dependency stack for the desktop applications we're targeting, which leads us to looking at ports of the dependent libraries, and that benefits all Python users. Our goal is for the Ubuntu 14.04 LTS release (in April 2014) to have no Python 2 on the release images, or in our "main" archive, so everything you'd get on your desktop in a default install would be Python 3. For the upcoming 12.04 LTS release, I'd be happy if we had just a couple of Python 3 applications on the desktop by default. I see the work going on in Fedora/RedHat, Debian/Ubuntu, and other distributions as applying some positive momentum on pushing the Python community over the tipping point for Python 3 support. Cheers, -Barry -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 836 bytes Desc: not available URL: From barry at python.org Tue Nov 22 23:14:07 2011 From: barry at python.org (Barry Warsaw) Date: Tue, 22 Nov 2011 17:14:07 -0500 Subject: [Python-Dev] Promoting Python 3 [was: PyPy 1.7 - widening the sweet spot] In-Reply-To: <9A04EB61-4EA7-4D6C-8676-50AF6DD1373A@masklinn.net> References: <20111122095509.474420a6@limelight.wooz.org> <87mxboqfcl.fsf@uwakimon.sk.tsukuba.ac.jp> <9A04EB61-4EA7-4D6C-8676-50AF6DD1373A@masklinn.net> Message-ID: <20111122171407.5d52c9c0@limelight.wooz.org> On Nov 22, 2011, at 06:10 PM, Xavier Morel wrote: >It's definitely not going to replace the Apple-provided Python out of the >box, so setting `python` to a python3 is not going to happen. Nor should it! PEP 394 attempts to codify the Python project's recommendations for what version 'python' (e.g. /usr/bin/python) points to, but I think there is widespread agreement that it would be a mistake to point it at Python 3. >It doesn't define a `python3`, so maybe that? Is there a document >somewhere on what kind of things distros need for a transition plan? This is probably fairly distro specific, so I doubt any such document exists, or would be helpful. E.g. Debian's approach is fairly intimately tied to its build tools, rules files, and policies. There is, in fact, a separate Python policy document for Debian. What this means for Debian is that well-behaved distutils-based packages can be built for all available Python 2 versions with about 3 lines of code in your debian/rules file. You don't even really need to think about it, which is especially nice during Python version transitions. Of course, in Ubuntu, we'll never have to do one of those again . The tools are not quite there for Python 3, though they are being actively worked on. 
This means it takes more effort from the distro packager to get Python 2 and Python 3 binary packages built (assuming upstream supports both), and to built it for multiple versions of Python 3 (not an issue right now though, since 3.2 is the minimum version we're all targeting). It's definitely possible, but it's not as trivially easy as it usually is for Python 2. I fully expect that to improve over time. I do occasionally fiddle with MacPorts, and have used Gentoo's system in a previous life, but I don't really know enough about them to make anything other than general recommendations. OTOH, I think Fedora's and Debian's experience is that separate Python 2 and Python 3 stacks is the best way to avoid insanity for operating systems and their users. -Barry -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 836 bytes Desc: not available URL: From tjreedy at udel.edu Tue Nov 22 23:20:40 2011 From: tjreedy at udel.edu (Terry Reedy) Date: Tue, 22 Nov 2011 17:20:40 -0500 Subject: [Python-Dev] PyPy 1.7 - widening the sweet spot In-Reply-To: References: Message-ID: On 11/22/2011 10:35 AM, Stefan Behnel wrote: > Maciej Fijalkowski, 22.11.2011 15:46: >>>> 2011/11/21 Terry Reedy: >>>>> I strongly recommend that where it makes a difference, the pypy >>>>> python3 >>>>> project target 3.3. In particular, don't reproduce the buggy >>>>> narrow-build >>>>> behavior of 3.2 and before (perhaps pypy avoids this already). Do >>>>> include >>>>> the new unicode capi in cpyext. I anticipate that 3.3 will see more >>>>> production use than 3.2 [snip} >> PyPy's py3k branch targets Python 3.2 until 3.3 is released and very >> likely 3.3 afterwards. Optimizations are irrelevant really in the case >> of PyPy. > I admit that I wasn't very clear in my wording. 
As Terry pointed out, > the Unicode changes in Py3.3 are not only for speed and/or memory > performance improvements, they also improve the compliance of the > Unicode implementation, which implies a behavioral change. One of the major features of Python 3 is the expansion of the directly supported character set from ASCII to Unicode. Python's original narrow and wide build unicode implementation has problems that were somewhat tolerable in an optional, alternate text class but which are much less so for *the* text class. The general problem is that the two builds give different answers for operations and functions on strings containing non-BMP characters. These differences potentially affect anything that uses strings, such as the re module, unless it guards against them. One can view the narrow build results as wrong and buggy. Extended chars were practically non-existent when the implementation was written, but are becoming more common, and will be more so in the future. In any case, Python string code no longer works the same across all x.y builds. On *nix platforms that can have both narrow and wide builds, there can also be binary conflicts for extension modules. On Windows, there is no conflict because one is stuck with a buggy narrow build. This is all besides the space issue. In my view, Python 3.3 will be the first fully satisfactory Python 3 version. It should be the version of choice for any app doing full unicode text or document processing on platforms that include, in particular, Windows. > Since PyPy > appears to have implemented the current wide/narrow behavior of Py2 and > Py3.[012] already, I see no reason not to target 3.2 for the time being > as it does not require substantial changes in this part. Compliance with > Py3.3 will then require implementing the new behavior. Thinking about how PyPy will do that should start well before 3.3 is released.
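[To make the narrow-build discrepancy concrete: on a 3.2 narrow build an astral (non-BMP) character is stored as a UTF-16 surrogate pair, so length, indexing and slicing all see two items, while under PEP 393 the same code sees one. A short sketch — the assertions hold on a wide or 3.3+/PEP 393 build, and the comments note what a narrow build reports instead:]

```python
s = '\U00010101'  # an astral character, outside the BMP

# Wide / PEP 393 build: one code point.
assert len(s) == 1
assert s[0] == '\U00010101'

# A 3.2 narrow build would instead report len(s) == 2 and expose the
# UTF-16 surrogates: s[0] == '\ud800', s[1] == '\udd01' — which is why
# string-consuming code like re can silently misbehave on such builds.
```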
My impression from reading the PyPy and Python 3 page, linked in the original post, is that releasing PyPy fully ready for Python 3, with all listed phases completed, will take close to a year anyway. Hence my comment. -- Terry Jan Reedy From tjreedy at udel.edu Tue Nov 22 23:33:13 2011 From: tjreedy at udel.edu (Terry Reedy) Date: Tue, 22 Nov 2011 17:33:13 -0500 Subject: [Python-Dev] PyPy 1.7 - widening the sweet spot In-Reply-To: <6F9490B9-DBEF-4CF6-89B7-26EA0C8A88E2@underboss.org> References: <6F9490B9-DBEF-4CF6-89B7-26EA0C8A88E2@underboss.org> Message-ID: On 11/22/2011 3:28 PM, Philip Jenvey wrote: > One reason to target 3.2 for now is it's not a moving target. Neither is the basic design and behavior of the new unicode implementation. On 3.2 narrow builds, including Windows >>> len('\U00010101') 2 With 3.3, the answer will be, properly, 1. I suspect that becoming compatible with that, and all that it implies for many other examples, will be the biggest hurdle for PyPy becoming compatible with 3.3. -- Terry Jan Reedy From g.rodola at gmail.com Tue Nov 22 23:34:02 2011 From: g.rodola at gmail.com (=?ISO-8859-1?Q?Giampaolo_Rodol=E0?=) Date: Tue, 22 Nov 2011 23:34:02 +0100 Subject: [Python-Dev] [Python-checkins] cpython: sort last committed name in alphabetical order In-Reply-To: References: Message-ID: Nope, the commit involving sched was the previous one. This one was just an unrelated fix. --- Giampaolo http://code.google.com/p/pyftpdlib/ http://code.google.com/p/psutil/ 2011/11/22 Nadeem Vawda : > Did you mean to also modify sched.py in this changeset? 
> _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/g.rodola%40gmail.com > From g.rodola at gmail.com Tue Nov 22 23:49:22 2011 From: g.rodola at gmail.com (=?ISO-8859-1?Q?Giampaolo_Rodol=E0?=) Date: Tue, 22 Nov 2011 23:49:22 +0100 Subject: [Python-Dev] [Python-checkins] cpython: sort last committed name in alphabetical order In-Reply-To: References: Message-ID: You're right. I committed sched.py by accident. I'm going to revert it. --- Giampaolo http://code.google.com/p/pyftpdlib/ http://code.google.com/p/psutil/ 2011/11/22 Giampaolo Rodolà : > Nope, the commit involving sched was the previous one. > This one was just an unrelated fix. > > --- Giampaolo > http://code.google.com/p/pyftpdlib/ > http://code.google.com/p/psutil/ > > > 2011/11/22 Nadeem Vawda : >> Did you mean to also modify sched.py in this changeset? >> _______________________________________________ >> Python-Dev mailing list >> Python-Dev at python.org >> http://mail.python.org/mailman/listinfo/python-dev >> Unsubscribe: http://mail.python.org/mailman/options/python-dev/g.rodola%40gmail.com >> > From g.rodola at gmail.com Tue Nov 22 23:49:47 2011 From: g.rodola at gmail.com (=?ISO-8859-1?Q?Giampaolo_Rodol=E0?=) Date: Tue, 22 Nov 2011 23:49:47 +0100 Subject: [Python-Dev] cpython: fix wrong credit and issue id given in previous commit In-Reply-To: References: <20111122170832.15af0fb8@pitrou.net> Message-ID: 2011/11/22 Amaury Forgeot d'Arc : > Hi, > 2011/11/22 Giampaolo Rodolà >> >> Sorry, thanks (fixed). > > You also modified Lib/sched.py in the same commit. > Was it intended? If not, please revert it. > -- > Amaury Forgeot d'Arc You're right. I committed sched.py by accident. I'm going to revert it.
--- Giampaolo http://code.google.com/p/pyftpdlib/ http://code.google.com/p/psutil/ From amauryfa at gmail.com Wed Nov 23 00:00:07 2011 From: amauryfa at gmail.com (Amaury Forgeot d'Arc) Date: Wed, 23 Nov 2011 00:00:07 +0100 Subject: [Python-Dev] PyPy 1.7 - widening the sweet spot In-Reply-To: References: <6F9490B9-DBEF-4CF6-89B7-26EA0C8A88E2@underboss.org> Message-ID: 2011/11/22 Terry Reedy > On 11/22/2011 3:28 PM, Philip Jenvey wrote: > > One reason to target 3.2 for now is it's not a moving target. >> > > Neither is the basic design and behavior of the new unicode > implementation. On 3.2 narrow builds, including Windows > > >>> len('\U00010101') > 2 > > With 3.3, the answer will be, properly, 1. I suspect that becoming > compatible with that, and all that it implies for many other examples, will > be the biggest hurdle for PyPy becoming compatible with 3.3. PyPy currently defines unicode as arrays of wchar_t, so only Windows uses a narrow unicode build. It will probably change though, and for performance reasons it makes indeed sense to have different internal representations for the same external type. PyPy already does this for several types (there is a special version of dict specialized for string keys, and the 2.7 range() returns a list that does not need to allocate its items, and can turn into a "real" list as soon as you modify it), so I would not qualify this task as a big hurdle, compared to other optimizations done in similar areas. -- Amaury Forgeot d'Arc -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From solipsis at pitrou.net Tue Nov 22 22:43:01 2011 From: solipsis at pitrou.net (Antoine Pitrou) Date: Tue, 22 Nov 2011 22:43:01 +0100 Subject: [Python-Dev] cpython: fix compiler warning by implementing this more cleverly In-Reply-To: References: <20111122223211.7425c55b@pitrou.net> Message-ID: <20111122224301.1b72ccb7@pitrou.net> On Tue, 22 Nov 2011 16:42:35 -0500 Benjamin Peterson wrote: > 2011/11/22 Antoine Pitrou : > > On Tue, 22 Nov 2011 21:29:43 +0100 > > benjamin.peterson wrote: > >> http://hg.python.org/cpython/rev/77ab830930ae > >> changeset: ? 73697:77ab830930ae > >> user: ? ? ? ?Benjamin Peterson > >> date: ? ? ? ?Tue Nov 22 15:29:32 2011 -0500 > >> summary: > >> ? fix compiler warning by implementing this more cleverly > > > > You mean "more obscurely"? > > Obfuscating the original intent in order to disable a compiler warning > > doesn't seem very wise to me. > > Well, I think it makes sense that the kind tells you how many bytes are in it. Yes, but "kind * 2 + 2" looks like a magical formula, while the explicit switch let you check mentally that each estimate was indeed correct. Regards Antoine. From benjamin at python.org Wed Nov 23 01:42:24 2011 From: benjamin at python.org (Benjamin Peterson) Date: Tue, 22 Nov 2011 19:42:24 -0500 Subject: [Python-Dev] cpython: fix compiler warning by implementing this more cleverly In-Reply-To: <20111122224301.1b72ccb7@pitrou.net> References: <20111122223211.7425c55b@pitrou.net> <20111122224301.1b72ccb7@pitrou.net> Message-ID: 2011/11/22 Antoine Pitrou : > On Tue, 22 Nov 2011 16:42:35 -0500 > Benjamin Peterson wrote: >> 2011/11/22 Antoine Pitrou : >> > On Tue, 22 Nov 2011 21:29:43 +0100 >> > benjamin.peterson wrote: >> >> http://hg.python.org/cpython/rev/77ab830930ae >> >> changeset: ? 73697:77ab830930ae >> >> user: ? ? ? ?Benjamin Peterson >> >> date: ? ? ? ?Tue Nov 22 15:29:32 2011 -0500 >> >> summary: >> >> ? fix compiler warning by implementing this more cleverly >> > >> > You mean "more obscurely"? 
>> > Obfuscating the original intent in order to disable a compiler warning >> > doesn't seem very wise to me. >> >> Well, I think it makes sense that the kind tells you how many bytes are in it. > > Yes, but "kind * 2 + 2" looks like a magical formula, while the > explicit switch let you check mentally that each estimate was indeed > correct. I don't see how it's more magic than hardcoding 4, 6, and 10. Don't you have to mentally check that those are correct? -- Regards, Benjamin From solipsis at pitrou.net Wed Nov 23 01:46:15 2011 From: solipsis at pitrou.net (Antoine Pitrou) Date: Wed, 23 Nov 2011 01:46:15 +0100 Subject: [Python-Dev] cpython: fix compiler warning by implementing this more cleverly In-Reply-To: References: <20111122223211.7425c55b@pitrou.net> <20111122224301.1b72ccb7@pitrou.net> Message-ID: <20111123014615.45b20a8c@pitrou.net> On Tue, 22 Nov 2011 19:42:24 -0500 Benjamin Peterson wrote: > 2011/11/22 Antoine Pitrou : > > On Tue, 22 Nov 2011 16:42:35 -0500 > > Benjamin Peterson wrote: > >> 2011/11/22 Antoine Pitrou : > >> > On Tue, 22 Nov 2011 21:29:43 +0100 > >> > benjamin.peterson wrote: > >> >> http://hg.python.org/cpython/rev/77ab830930ae > >> >> changeset: ? 73697:77ab830930ae > >> >> user: ? ? ? ?Benjamin Peterson > >> >> date: ? ? ? ?Tue Nov 22 15:29:32 2011 -0500 > >> >> summary: > >> >> ? fix compiler warning by implementing this more cleverly > >> > > >> > You mean "more obscurely"? > >> > Obfuscating the original intent in order to disable a compiler warning > >> > doesn't seem very wise to me. > >> > >> Well, I think it makes sense that the kind tells you how many bytes are in it. > > > > Yes, but "kind * 2 + 2" looks like a magical formula, while the > > explicit switch let you check mentally that each estimate was indeed > > correct. > > I don't see how it's more magic than hardcoding 4, 6, and 10. Don't > you have to mentally check that those are correct? I don't know. 
Perhaps I'm saying that because I *have* already done the mental check :) Regards Antoine. From larry at hastings.org Wed Nov 23 04:50:53 2011 From: larry at hastings.org (Larry Hastings) Date: Tue, 22 Nov 2011 19:50:53 -0800 Subject: [Python-Dev] Python 3.4 Release Manager Message-ID: <4ECC6D9D.4020105@hastings.org> I've volunteered to be the Release Manager for Python 3.4. The FLUFL has already given it his Sloppy Wet Kiss Of Approval, and we talked to Georg and he was for it too. There's no formal process for selecting the RM, so I may already be stuck with the job, but I thought it best to pipe up on python-dev in case someone had a better idea. But look! I'm already practicing: NO YOU CAN'T CHECK THAT IN. How's that? Needs work? I look forward to seeing how the sausage is made, /larry/ From stephen at xemacs.org Wed Nov 23 04:51:32 2011 From: stephen at xemacs.org (Stephen J. Turnbull) Date: Wed, 23 Nov 2011 12:51:32 +0900 Subject: [Python-Dev] Promoting Python 3 [was: PyPy 1.7 - widening the sweet spot] In-Reply-To: <9A04EB61-4EA7-4D6C-8676-50AF6DD1373A@masklinn.net> References: <20111122095509.474420a6@limelight.wooz.org> <87mxboqfcl.fsf@uwakimon.sk.tsukuba.ac.jp> <9A04EB61-4EA7-4D6C-8676-50AF6DD1373A@masklinn.net> Message-ID: <87fwhfqywr.fsf@uwakimon.sk.tsukuba.ac.jp> Xavier Morel writes: > On 2011-11-22, at 17:41 , Stephen J. Turnbull wrote: > > Barry Warsaw writes: > >> Hopefully, we're going to be making a dent in that in the next version of > >> Ubuntu. > > This is still a big mess in Gentoo and MacPorts, though. MacPorts > > hasn't done anything about creating a transition infrastructure AFAICT. > What kind of "transition infrastructure" would it need? It's definitely > not going to replace the Apple-provided Python out of the box, so > setting `python` to a python3 is not going to happen.
I'm not sure what infrastructure is required, but I can't really see MacPorts volunteers doing a 100% conversion the way that Ubuntu's paid developers can. So there will be a long transition period, and I wouldn't be surprised if multiple versions of Python 2 and multiple versions of Python 3 will typically need to be simultaneously available to different ports. > It doesn't define a `python3`, so maybe that? A python3 symlink or script would help a little bit, but I don't think that's necessary or sufficient, because ports already can and do depend on Python x.y, not just Python x. > Is there a document somewhere on what kind of things distros need > for a transition plan? I'm hoping Barry's blog will be a good start. > Macports provide `port select` which I believe has the same function > (you need to install the `python_select` for it to be configured for > the Python group), the syntax is port `select --set python $VERSION`: Sure. I haven't had the nerve to do this on MacPorts because "port" is such a flaky thing (not so much port itself, but so many ports assume that the port maintainer's local configuration is what others' systems use, so I stay as vanilla as possible -- I rather doubt that many ports are ready for Python 3, and I'm not willing to be a guinea pig). The problem that I've run into with Gentoo is that *even when the ebuild is prepared for Python 3* assumptions about the Python current when the ebuild is installed/upgraded get baked into the installation (eg, print statement vs. print function), but some of the support scripts just call "python" or something like that. OTOH, a few ebuilds don't support Python 3 (or in an ebuild that nominally supports Python 3, upstream does something perfectly reasonable for Python 2 like assume that Latin-1 characters are acceptable in a ChangeLog, and the ebuild maintainer doesn't test under Python 3 so it slips through) so I have to do an eselect dance while emerging ...
and in the meantime things that expect Python 3 as the system Python break. So far, in Gentoo I've always been able to wiggle out of such problems by doing the eselect dance two or three times with the ebuild that is outdated, and then a couple of principal prerequisites or dependencies at most. Given my experience with MacPorts I *very much* expect similar issues with its ports. From raymond.hettinger at gmail.com Wed Nov 23 05:27:24 2011 From: raymond.hettinger at gmail.com (Raymond Hettinger) Date: Tue, 22 Nov 2011 20:27:24 -0800 Subject: [Python-Dev] Python 3.4 Release Manager In-Reply-To: <4ECC6D9D.4020105@hastings.org> References: <4ECC6D9D.4020105@hastings.org> Message-ID: <0BC15048-D383-41F9-862D-667DB1C50078@gmail.com> On Nov 22, 2011, at 7:50 PM, Larry Hastings wrote: > > > I've volunteered to be the Release Manager for Python 3.4. Awesome. Thanks for stepping up. > The FLUFL has already given it his Sloppy Wet Kiss Of Approval, Ewwww! > and we talked to Georg and he was for it too. There's no formal process for selecting the RM, so I may already be stuck with the job, but I thought it best to pipe up on python-dev in case someone had a better idea. > > But look! I'm already practicing: NO YOU CAN'T CHECK THAT IN. How's that? Needs work? You could try a more positive leadership style: THAT LOOKS GREAT, I'M SURE THE RM FOR PYTHON 3.5 WILL LOVE IT ;-) Raymond From a.badger at gmail.com Wed Nov 23 05:32:59 2011 From: a.badger at gmail.com (Toshio Kuratomi) Date: Tue, 22 Nov 2011 20:32:59 -0800 Subject: [Python-Dev] Python 3.4 Release Manager In-Reply-To: <0BC15048-D383-41F9-862D-667DB1C50078@gmail.com> References: <4ECC6D9D.4020105@hastings.org> <0BC15048-D383-41F9-862D-667DB1C50078@gmail.com> Message-ID: <20111123043258.GO6114@unaka.lan> On Tue, Nov 22, 2011 at 08:27:24PM -0800, Raymond Hettinger wrote: > > On Nov 22, 2011, at 7:50 PM, Larry Hastings wrote: > > But look! I'm already practicing: NO YOU CAN'T CHECK THAT IN. How's that? Needs work? 
> > You could try a more positive leadership style: THAT LOOKS GREAT, I'M SURE THE RM FOR PYTHON 3.5 WILL LOVE IT ;-) > Wow! My release engineering team needs to take classes from you guys! -Toshio From solipsis at pitrou.net Wed Nov 23 05:33:31 2011 From: solipsis at pitrou.net (Antoine Pitrou) Date: Wed, 23 Nov 2011 05:33:31 +0100 Subject: [Python-Dev] Python 3.4 Release Manager References: <4ECC6D9D.4020105@hastings.org> <0BC15048-D383-41F9-862D-667DB1C50078@gmail.com> Message-ID: <20111123053331.10f04486@pitrou.net> On Tue, 22 Nov 2011 20:27:24 -0800 Raymond Hettinger wrote: > > > > But look! I'm already practicing: NO YOU CAN'T CHECK THAT IN. How's that? Needs work? > > You could try a more positive leadership style: THAT LOOKS GREAT, I'M SURE THE RM FOR PYTHON 3.5 WILL LOVE IT ;-) How about: PHP 5.5 IS NOW OPEN FOR COMMIT ? From senthil at uthcode.com Wed Nov 23 05:50:19 2011 From: senthil at uthcode.com (Senthil Kumaran) Date: Wed, 23 Nov 2011 12:50:19 +0800 Subject: [Python-Dev] Python 3.4 Release Manager In-Reply-To: <4ECC6D9D.4020105@hastings.org> References: <4ECC6D9D.4020105@hastings.org> Message-ID: On Wed, Nov 23, 2011 at 11:50 AM, Larry Hastings wrote: > I've volunteered to be the Release Manager for Python 3.4. The FLUFL has That's cool. But just my thought, wouldn't someone who regularly commits, fixes bugs and feature requests be better for the RM role? Once a developer gets bored with those and wants more, they could take up the RM role. Is there anything wrong with this kind of thinking?
Thanks, Senthil From ncoghlan at gmail.com Wed Nov 23 07:32:31 2011 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 23 Nov 2011 16:32:31 +1000 Subject: [Python-Dev] Python 3.4 Release Manager In-Reply-To: References: <4ECC6D9D.4020105@hastings.org> Message-ID: On Wed, Nov 23, 2011 at 2:50 PM, Senthil Kumaran wrote: > On Wed, Nov 23, 2011 at 11:50 AM, Larry Hastings wrote: >> I've volunteered to be the Release Manager for Python 3.4. The FLUFL has > > That's cool. But just my thought, wouldn't someone > who regularly commits, fixes bugs and feature requests be better for the > RM role? Once a developer gets bored with those and wants more, they could > take up the RM role. Is there anything wrong with this kind of thinking? The main (thoroughly informal) criteria are having commit privileges, having shown some evidence of "getting it" when it comes to the release process and then actually putting your hand up to volunteer. Most people who pass the second criterion seem to demonstrate this odd reluctance to meet the third criterion ;) There's probably a fourth criterion of the other devs not going "Arrg, no, not *them*!", but, to my knowledge, that's never actually come up... I'm sure Larry will now be paying close attention as Georg shepherds 3.3 towards release next year, so it sounds like a perfectly reasonable idea to me. +1 Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com |
Brisbane, Australia From tjreedy at udel.edu Wed Nov 23 07:49:28 2011 From: tjreedy at udel.edu (Terry Reedy) Date: Wed, 23 Nov 2011 01:49:28 -0500 Subject: [Python-Dev] cpython: fix compiler warning by implementing this more cleverly In-Reply-To: References: <20111122223211.7425c55b@pitrou.net> <20111122224301.1b72ccb7@pitrou.net> Message-ID: On 11/22/2011 7:42 PM, Benjamin Peterson wrote: > 2011/11/22 Antoine Pitrou: >> On Tue, 22 Nov 2011 16:42:35 -0500 >> Benjamin Peterson wrote: >>> 2011/11/22 Antoine Pitrou: >>>> On Tue, 22 Nov 2011 21:29:43 +0100 >>>> benjamin.peterson wrote: >>>>> http://hg.python.org/cpython/rev/77ab830930ae >>>>> changeset: 73697:77ab830930ae >>>>> user: Benjamin Peterson >>>>> date: Tue Nov 22 15:29:32 2011 -0500 >>>>> summary: >>>>> fix compiler warning by implementing this more cleverly >>>> >>>> You mean "more obscurely"? >>>> Obfuscating the original intent in order to disable a compiler warning >>>> doesn't seem very wise to me. >>> >>> Well, I think it makes sense that the kind tells you how many bytes are in it. >> >> Yes, but "kind * 2 + 2" looks like a magical formula, while the >> explicit switch let you check mentally that each estimate was indeed >> correct. > > I don't see how it's more magic than hardcoding 4, 6, and 10. Don't > you have to mentally check that those are correct? I personally strongly prefer the one-line formula to the hardcoded magic numbers calculated from the formula. I find it much more readable. To me, the only justification for the switch would be if there is a serious worry about the kind being changed to something other than 1, 2, or 4. But the fact that this is checked with an assert that can be optimized away negates that. The one-liner could be followed by assert(kind==1 || kind==2 || kind==4) which would also serve to remind the reader of the possibilities. You could even follow the formula with /* 4, 6, or 10 */ I think you reverted too soon. 
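[Editor's note: the two shapes being debated above can be sketched in Python. This is an illustration only -- the real code is C, in CPython's unicodeobject.c -- and the function names are made up for the sketch.]

```python
# "kind" is the character storage size in bytes (1, 2, or 4). The estimate
# under discussion is the worst-case length of one escaped character: a
# two-character prefix such as "\U" plus 2 hex digits per byte.
def expandsize_formula(kind):
    assert kind in (1, 2, 4)
    return 2 + 2 * kind

def expandsize_switch(kind):
    # the explicit version, with the hardcoded 4, 6 and 10
    return {1: 4, 2: 6, 4: 10}[kind]

# The two agree for every legal kind:
for kind in (1, 2, 4):
    assert expandsize_formula(kind) == expandsize_switch(kind)
```

Either spelling yields 4, 6, and 10 for kinds 1, 2, and 4; the disagreement is purely about which one a reader can verify faster.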
-- Terry Jan Reedy From ncoghlan at gmail.com Wed Nov 23 08:07:15 2011 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 23 Nov 2011 17:07:15 +1000 Subject: [Python-Dev] cpython: fix compiler warning by implementing this more cleverly In-Reply-To: References: <20111122223211.7425c55b@pitrou.net> <20111122224301.1b72ccb7@pitrou.net> Message-ID: On Wed, Nov 23, 2011 at 4:49 PM, Terry Reedy wrote: > I personally strongly prefer the one-line formula to the hardcoded magic > numbers calculated from the formula. I find it much more readable. To me, > the only justification for the switch would be if there is a serious worry > about the kind being changed to something other than 1, 2, or 4. But the > fact that this is checked with an assert that can be optimized away negates > that. The one-liner could be followed by > assert(kind==1 || kind==2 || kind==4) > which would also serve to remind the reader of the possibilities. You could > even follow the formula with /* 4, 6, or 10 */ I think you reverted too > soon. +1 to what Terry said here, although I would add a genuinely explanatory comment that gives the calculation meaning: /* For each character, allow for "\U" prefix and 2 hex digits per byte */ expandsize = 2 + 2 * kind; Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From nad at acm.org Wed Nov 23 08:12:21 2011 From: nad at acm.org (Ned Deily) Date: Tue, 22 Nov 2011 23:12:21 -0800 Subject: [Python-Dev] Promoting Python 3 [was: PyPy 1.7 - widening the sweet spot] References: <20111122095509.474420a6@limelight.wooz.org> <87mxboqfcl.fsf@uwakimon.sk.tsukuba.ac.jp> <9A04EB61-4EA7-4D6C-8676-50AF6DD1373A@masklinn.net> <87fwhfqywr.fsf@uwakimon.sk.tsukuba.ac.jp> Message-ID: In article <87fwhfqywr.fsf at uwakimon.sk.tsukuba.ac.jp>, "Stephen J.
Turnbull" wrote: > I haven't had the nerve to do this on MacPorts because "port" is such > a flaky thing (not so much port itself, but so many ports assume that > the port maintainer's local configuration is what others' systems use, > so I stay as vanilla as possible -- I rather doubt that many ports are > ready for Python 3, and I'm not willing to be a guinea pig). I think your fears are unfounded. MacPorts' individual port files are supposed to be totally independent of the setting of 'port select'. In other words, there are separate ports for each Python version, i.e. py24-distribute, py25-distribute, py26-distribute, py27-distribute, py31-distribute, and py32-distribute. Or, for ports that are not principally Python packages, there may be port variants, i.e. +python27, +python32, etc. If you do find a port that somewhere uses an unversioned 'python', you should report it as a bug; they will fix that. Also, fairly recently, MacPorts introduced a python ports group infrastructure behind the scenes that makes it possible for them to maintain one meta portfile that will generate ports for each of the supported Python versions also supported by the package. The project has been busily converting Python package port files over to this new system and, thus, increasing the number of ports available for Python 3.2. Currently, I count 30 'py32' ports and 38 'py31' ports compared to 468 'py26' and 293 'py27' ports so, yes, there is still a lot to be done. But my observation of the MacPorts project is that they respond well to requests. If people request existing packages be made available for py32, or - even better - provide patches to do so, it will happen. Also right now besides the Python port group transition, the project has been swamped with issues arising from the Xcode 4 introduction for Lion, mandating the transition from gcc to clang or llvm-gcc.
-- Ned Deily, nad at acm.org From python-dev at masklinn.net Wed Nov 23 08:15:10 2011 From: python-dev at masklinn.net (Xavier Morel) Date: Wed, 23 Nov 2011 08:15:10 +0100 Subject: [Python-Dev] Promoting Python 3 [was: PyPy 1.7 - widening the sweet spot] In-Reply-To: <87fwhfqywr.fsf@uwakimon.sk.tsukuba.ac.jp> References: <20111122095509.474420a6@limelight.wooz.org> <87mxboqfcl.fsf@uwakimon.sk.tsukuba.ac.jp> <9A04EB61-4EA7-4D6C-8676-50AF6DD1373A@masklinn.net> <87fwhfqywr.fsf@uwakimon.sk.tsukuba.ac.jp> Message-ID: On 2011-11-23, at 04:51 , Stephen J. Turnbull wrote: > Xavier Morel writes: >> On 2011-11-22, at 17:41 , Stephen J. Turnbull wrote: >>> Barry Warsaw writes: > >>>> Hopefully, we're going to be making a dent in that in the next version of >>>> Ubuntu. > >>> This is still a big mess in Gentoo and MacPorts, though. MacPorts >>> hasn't done anything about creating a transition infrastructure AFAICT. > >> What kind of "transition infrastructure" would it need? It's definitely >> not going to replace the Apple-provided Python out of the box, so >> setting `python` to a python3 is not going to happen. > > Sure, but many things do shadow Apple-provided software if you set > PATH=/opt/local/bin:$PATH. > Some I'm sure do, but "many" is more doubtful, and I have not seen any do that in the Python ecosystem: macports definitely won't install a bare (unversioned) `python` without the user asking. > I'm not sure what infrastructure is required, but I can't really see > MacPorts volunteers doing a 100% conversion the way that Ubuntu's paid > developers can. So there will be a long transition period, and I > wouldn't be surprised if multiple versions of Python 2 and multiple > versions of Python 3 will typically need to be simultaneously > available to different ports. That's already the case so it's not much of a change. > >> It doesn't define a `python3`, so maybe that?
> A python3 symlink or script would help a little bit, but I don't think > that's necessary or sufficient, because ports already can and do > depend on Python x.y, not just Python x. Yes indeed, which is why I was wondering in the first place: other distributions are described as "fine" because they have separate Python2 and Python3 stacks, macports has a Python stack *per Python version* so why would it be more problematic when it should have even fewer conflicts? >> Macports provide `port select` which I believe has the same function >> (you need to install the `python_select` for it to be configured for >> the Python group), the syntax is port `select --set python $VERSION`: > > Sure. > > I haven't had the nerve to do this on MacPorts because "port" is such > a flaky thing (not so much port itself, but so many ports assume that > the port maintainer's local configuration is what others' systems use, > so I stay as vanilla as possible -- I rather doubt that many ports are > ready for Python 3, and I'm not willing to be a guinea pig). That is what I'd expect as well, I was just giving the corresponding tool to the gentoo version thereof. > The problem that I've run into with Gentoo is that *even when the > ebuild is prepared for Python 3* assumptions about the Python current > when the ebuild is installed/upgraded get baked into the installation > (eg, print statement vs. print function), but some of the support > scripts just call "python" or something like that. OTOH, a few > ebuilds don't support Python 3 (or in an ebuild that nominally supports > Python 3, upstream does something perfectly reasonable for Python 2 > like assume that Latin-1 characters are acceptable in a ChangeLog, and > the ebuild maintainer doesn't test under Python 3 so it slips through) > so I have to do an eselect dance while emerging ... and in the > meantime things that expect Python 3 as the system Python break.
> > So far, in Gentoo I've always been able to wiggle out of such problems > by doing the eselect dance two or three times with the ebuild that is > outdated, and then a couple of principal prerequisites or dependencies > at most. > > Given my experience with MacPorts I *very much* expect similar > issues with its ports. Yes I would as well, although: 1. A bare `python` call would always call into the Apple-provided Python, this has no reason to change so ports doing that should not be affected 2. Few ports should use Python (therefore assume things about Python) in their configuration/installation section (outside upstream's own assumptions): ports are tcl, not bash, so there shouldn't be too much reason to call Python from them From hodgestar+pythondev at gmail.com Wed Nov 23 10:06:18 2011 From: hodgestar+pythondev at gmail.com (Simon Cross) Date: Wed, 23 Nov 2011 11:06:18 +0200 Subject: [Python-Dev] Python 3.4 Release Manager In-Reply-To: References: <4ECC6D9D.4020105@hastings.org> Message-ID: On Wed, Nov 23, 2011 at 6:50 AM, Senthil Kumaran wrote: > That's cool. But just my thought, wouldn't someone > who regularly commits, fixes bugs and feature requests be better for the > RM role? Once a developer gets bored with those and wants more, they could > take up the RM role. Is there anything wrong with this kind of thinking? There is something to be said for letting those people continue to regularly commit and fix bugs rather than saddling them with the RM role. :) Schiavo Simon From stephen at xemacs.org Wed Nov 23 10:24:18 2011 From: stephen at xemacs.org (Stephen J.
Turnbull) Date: Wed, 23 Nov 2011 18:24:18 +0900 Subject: [Python-Dev] Python 3.4 Release Manager In-Reply-To: <20111123053331.10f04486@pitrou.net> References: <4ECC6D9D.4020105@hastings.org> <0BC15048-D383-41F9-862D-667DB1C50078@gmail.com> <20111123053331.10f04486@pitrou.net> Message-ID: <87ehwzqji5.fsf@uwakimon.sk.tsukuba.ac.jp> Antoine Pitrou writes: > On Tue, 22 Nov 2011 20:27:24 -0800 > Raymond Hettinger wrote: > > > > > > But look! I'm already practicing: NO YOU CAN'T CHECK THAT IN. How's that? Needs work? > > > > You could try a more positive leadership style: THAT LOOKS GREAT, I'M SURE THE RM FOR PYTHON 3.5 WILL LOVE IT ;-) > > How about: PHP 5.5 IS NOW OPEN FOR COMMIT ? I thought Larry's version was somewhat more encouraging. From victor.stinner at haypocalc.com Wed Nov 23 10:40:36 2011 From: victor.stinner at haypocalc.com (Victor Stinner) Date: Wed, 23 Nov 2011 10:40:36 +0100 Subject: [Python-Dev] cpython: fix compiler warning by implementing this more cleverly In-Reply-To: References: Message-ID: <9311648.o2tbS2xZmn@dsk000552> Le Mercredi 23 Novembre 2011 01:49:28 Terry Reedy a écrit : > The one-liner could be followed by > assert(kind==1 || kind==2 || kind==4) > which would also serve to remind the reader of the possibilities. For a ready string, kind must be 1, 2 or 4. We might rename "kind" to "charsize" because its value changed from 1, 2, 3 to 1, 2, 4 (to make it easy to compute the size of a string: length * kind). You are not supposed to see the secret kind==0 case. This value is only used for strings created by _PyUnicode_New() and not ready yet: str = _PyUnicode_New() /* use str */ assert(PyUnicode_KIND(str) == 0); if (PyUnicode_READY(str) < 0) /* error */ assert(PyUnicode_KIND(str) != 0); /* kind is 1, 2, 4 */ Thanks to the effort of t0rsten, Martin and me, almost all functions use the new API (PyUnicode_New). For example, PyUnicode_AsRawUnicodeEscapeString() starts by ensuring that the string is ready.
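[Editor's note: the invariant Victor describes -- for a ready string, the kind is the character size in bytes, picked from the widest character, and length * kind gives the buffer size -- can be modeled at the Python level. This is an illustrative sketch only; the real API is C, and `kind_of` is a made-up name.]

```python
def kind_of(s):
    """Model of the PEP 393 kind a ready string would use: the smallest
    character size in bytes (1, 2, or 4) that fits the widest character."""
    widest = max(map(ord, s), default=0)
    if widest < 1 << 8:
        return 1
    if widest < 1 << 16:
        return 2
    return 4

# length * kind gives the size of the canonical buffer, as Victor notes.
for s in ("abc", "caf\xe9", "\u20ac10", "\U0001d11e"):
    kind = kind_of(s)
    assert kind in (1, 2, 4)  # a ready string never shows the secret kind 0
    print(s, kind, len(s) * kind)
```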
For your information, PyUnicode_KIND() fails with an assertion error in debug mode if the string is not ready. -- I don't have an opinion about the one-liner vs the switch :-) But if you want to fix compiler warnings, you should use "enum PyUnicode_Kind" type and PyUnicode_WCHAR_KIND should be removed from the enum. Victor From dirkjan at ochtman.nl Wed Nov 23 10:52:16 2011 From: dirkjan at ochtman.nl (Dirkjan Ochtman) Date: Wed, 23 Nov 2011 10:52:16 +0100 Subject: [Python-Dev] Promoting Python 3 [was: PyPy 1.7 - widening the sweet spot] In-Reply-To: <87mxboqfcl.fsf@uwakimon.sk.tsukuba.ac.jp> References: <20111122095509.474420a6@limelight.wooz.org> <87mxboqfcl.fsf@uwakimon.sk.tsukuba.ac.jp> Message-ID: On Tue, Nov 22, 2011 at 17:41, Stephen J. Turnbull wrote: > This is still a big mess in Gentoo and MacPorts, though. MacPorts > hasn't done anything about creating a transition infrastructure AFAICT. > Gentoo has its "eselect python set VERSION" stuff, but it's very > dangerous to set to a Python 3 version, as many things go permanently > wonky once you do. (So far I've been able to work around problems > this creates, but it's not much fun.) Problems like what? > I don't have any connections to the distros, so can't really offer to > help directly. I think it might be a good idea for users to lobby > (politely!) their distros to work on the transition. Please create a connection to your distro by filing bugs as you encounter them? The Gentoo Python team is woefully understaffed (and I've been busy with some Real Life things, although that should improve in a couple more weeks), but we definitely care about providing an environment where you can successfully run python2 and python3 in parallel. Cheers, Dirkjan From stephen at xemacs.org Wed Nov 23 13:21:28 2011 From: stephen at xemacs.org (Stephen J.
Turnbull) Date: Wed, 23 Nov 2011 21:21:28 +0900 Subject: [Python-Dev] Promoting Python 3 [was: PyPy 1.7 - widening the sweet spot] In-Reply-To: References: <20111122095509.474420a6@limelight.wooz.org> <87mxboqfcl.fsf@uwakimon.sk.tsukuba.ac.jp> Message-ID: <87d3cjqbav.fsf@uwakimon.sk.tsukuba.ac.jp> Dirkjan Ochtman writes: > On Tue, Nov 22, 2011 at 17:41, Stephen J. Turnbull wrote: > > This is still a big mess in Gentoo and MacPorts, though. MacPorts > > hasn't done anything about creating a transition infrastructure AFAICT. > > Gentoo has its "eselect python set VERSION" stuff, but it's very > > dangerous to set to a Python 3 version, as many things go permanently > > wonky once you do. (So far I've been able to work around problems > > this creates, but it's not much fun.) > > Problems like what? Like those I explained later in the post, which you cut. But I'll repeat. Some ebuilds are not prepared for Python 3, so must be emerged with a Python 2 eselected (and sometimes they need a specific Python 2). Some which are prepared don't get linted often enough, so new ebuilds are DOA because of an accented character in a changelog triggering a Unicode exception or similar dumb things like that. > > I don't have any connections to the distros, so can't really offer to > > help directly. I think it might be a good idea for users to lobby > > (politely!) their distros to work on the transition. > > Please create a connection to your distro by filing bugs as you > encounter them? No, thank you. File bugs, maybe, although most of the bugs I encounter in Gentoo are already in the database (often with multiple regressions going back a year or more), I could do a little more of that. (Response in the past has not been encouraging.) But I don't have time for distro politics. Is lack of Python 3-readiness considered a bug by Gentoo?
From dirkjan at ochtman.nl Wed Nov 23 14:19:24 2011 From: dirkjan at ochtman.nl (Dirkjan Ochtman) Date: Wed, 23 Nov 2011 14:19:24 +0100 Subject: [Python-Dev] Promoting Python 3 [was: PyPy 1.7 - widening the sweet spot] In-Reply-To: <87d3cjqbav.fsf@uwakimon.sk.tsukuba.ac.jp> References: <20111122095509.474420a6@limelight.wooz.org> <87mxboqfcl.fsf@uwakimon.sk.tsukuba.ac.jp> <87d3cjqbav.fsf@uwakimon.sk.tsukuba.ac.jp> Message-ID: On Wed, Nov 23, 2011 at 13:21, Stephen J. Turnbull wrote: > > Problems like what? > > Like those I explained later in the post, which you cut. But I'll They were in a later post, I didn't cut them. :) > > Please create a connection to your distro by filing bugs as you > > encounter them? > > No, thank you. File bugs, maybe, although most of the bugs I > encounter in Gentoo are already in the database (often with multiple > regressions going back a year or more), I could do a little more of > that. (Response in the past has not been encouraging.) But I don't > have time for distro politics. I'm sorry for the lack of response in the past. I looked at Gentoo's Bugzilla and didn't find any related bugs you reported or were CC'ed on, can you name some of them? > Is lack of Python 3-readiness considered a bug by Gentoo? Definitely. Again, we are trying too hard to make things better, but there's a lot to do and going through version bumps sometimes wins out over addressing the hard problems. Be assured, though, that we're also trying to make progress on the latter. If you're ever on IRC, come hang out in #gentoo-python, where distro politics should be minimal and the crew is generally friendly and responsive. Cheers, Dirkjan From stephen at xemacs.org Wed Nov 23 15:45:26 2011 From: stephen at xemacs.org (Stephen J.
Turnbull) Date: Wed, 23 Nov 2011 23:45:26 +0900 Subject: [Python-Dev] Promoting Python 3 [was: PyPy 1.7 - widening the sweet spot] In-Reply-To: References: <20111122095509.474420a6@limelight.wooz.org> <87mxboqfcl.fsf@uwakimon.sk.tsukuba.ac.jp> <9A04EB61-4EA7-4D6C-8676-50AF6DD1373A@masklinn.net> <87fwhfqywr.fsf@uwakimon.sk.tsukuba.ac.jp> Message-ID: <87bos2rj7d.fsf@uwakimon.sk.tsukuba.ac.jp> Ned Deily writes: > In article <87fwhfqywr.fsf at uwakimon.sk.tsukuba.ac.jp>, > "Stephen J. Turnbull" wrote: > > I haven't had the nerve to do this on MacPorts because "port" is such > > a flaky thing (not so much port itself, but so many ports assume that > > the port maintainer's local configuration is what others' systems use, > > so I stay as vanilla as possible -- I rather doubt that many ports are > > ready for Python 3, and I'm not willing to be a guinea pig). > > I think your fears are unfounded. MacPorts' individual port files are > supposed to be totally independent of the setting of 'port select'. If you think I'm complaining or imagining things, you're missing the point. My fears are *not* unfounded. For personal use, I wanted Python 2.6 to be the default using "port select", and things went wonky. Some things just didn't work, or disappeared. Reverting to 2.5 fixed it, so I left it that way for a while. I tried it again with Python 2.7, same deal, different ports. Maybe those would have been considered bugs in "port select", I don't know. But reverting was easy, "fixed" things, and I won't try it with Python 3 (until I have a sacrificial system available). Also, the MacPorts solution is very resource intensive for users: I have *seven* Python stacks on the Mac where I'm typing this -- the only version of Python I've been able to eliminate once it has been installed so far is 3.0! (although I could probably get rid of 3.1 now).
It also leads to fragmentation (*all* of my 2.x stacks are incomplete, I can't do without any of them), and a couple of extra frustrating steps in finding the code that raised an exception or whatever. Not to mention that it's in my face daily: "port outdated" frequently lines up 3, occasionally 4 versions of the same port. This *only* happens with Python! And there's no way that many ports are ready for Python 3, because their upstreams aren't! I think that projects that would like to move to Python 3 are going to find they get pushback from Mac users who "don't need" *yet another* Python stack installed. Note that Gentoo has globally switched off the python USE flag[1] (I suspect that the issue is that one-time configuration utilities can pull in a whole Python stack that mostly duplicates Python content required for Gentoo to work at all). > Also right now besides the Python port group transition, the > project has been swamped with issues arising from the Xcode 4 > introduction for Lion, mandating the transition from gcc to clang > or llvm-gcc. Sure, I understand that kind of thing. That doesn't mean it improves the user experience with Python, especially Python 3. It helps if you can get widespread adoption at a similar pace across the board rather than uneven diffusion with a few niches moving really fast. It's like Lao Tse didn't quite say: the most successful leaders are those who hustle and get a few steps ahead of the crowd wherever it's heading. But you need a crowd moving in the same direction to execute that strategy! So I'd like to see people who *already* have the credibility with their distros advocate Python 3. If Ubuntu's going to lead, now's a good time to join them. (Other things being equal, of course -- but then, other things are never equal, so it may as well be now anyway.) If that doesn't happen, well, Python and Python 3 will survive. But I'd rather see them dominate. Footnotes: [1] According to the notes for the ibus ebuild.
From barry at python.org Wed Nov 23 16:24:04 2011 From: barry at python.org (Barry Warsaw) Date: Wed, 23 Nov 2011 10:24:04 -0500 Subject: [Python-Dev] Python 3.4 Release Manager In-Reply-To: <4ECC6D9D.4020105@hastings.org> References: <4ECC6D9D.4020105@hastings.org> Message-ID: <20111123102404.496fca93@limelight.wooz.org> On Nov 22, 2011, at 07:50 PM, Larry Hastings wrote: >I've volunteered to be the Release Manager for Python 3.4. The FLUFL has >already given it his Sloppy Wet Kiss Of Approval, I think you mistook that for my slackjaw droolings when you repeatedly ignored my warnings to run as far from it as possible. But you're persistent, I'll give you that. Looks like that persistence will be punis^H^H^H^H^Hrewarded with your first RMship! Congratulations (?). >and we talked to Georg and he was for it too. There's no formal process for >selecting the RM, so I may already be stuck with the job, but I thought it >best to pipe up on python-dev in case someone had a better idea. > >But look! I'm already practicing: NO YOU CAN'T CHECK THAT IN. How's that? >Needs work? > >I look forward to seeing how the sausage is made, Undoubtedly it will make you a vegan, but hopefully not a Go programmer! :) Seriously though, I think it's great for you to work with Georg through the 3.3 process, so you can take over for 3.4. And I also think it's great that someone new wants to do it. I kid, but the mechanics of releasing really isn't very difficult these days as we've continually automated the boring parts. The fun part is really making the decisions about what is a showstopper, what changes can go in at the last minute, etc. Like the president of the USA, I just hope your hair is already gray! -Barry From stephen at xemacs.org Wed Nov 23 19:57:12 2011 From: stephen at xemacs.org (Stephen J. 
Turnbull) Date: Thu, 24 Nov 2011 03:57:12 +0900 Subject: [Python-Dev] Promoting Python 3 [was: PyPy 1.7 - widening the sweet spot] In-Reply-To: References: <20111122095509.474420a6@limelight.wooz.org> <87mxboqfcl.fsf@uwakimon.sk.tsukuba.ac.jp> <87d3cjqbav.fsf@uwakimon.sk.tsukuba.ac.jp> Message-ID: <87aa7mr7jr.fsf@uwakimon.sk.tsukuba.ac.jp> Dirkjan Ochtman writes: > I'm sorry for the lack of response in the past. I looked at Gentoo's > Bugzilla and didn't find any related bugs you reported or were CC'ed > on, can you name some of them? This isn't about my bugs; I've been able to work through them satisfactorily. It's about what I perceive as a need for simultaneous improvement in Python 3 support across several distros, covering enough users to establish momentum. I don't think Python 3 needs to (or even can) replace Python 2 as the system python in the near future. But the "python" that is visible to users (at least on single-user systems) should be choosable by the user. eselect (on Gentoo) and port select (on MacPorts) *appear* to provide this, but it doesn't work very well. > > Is lack of Python 3-readiness considered a bug by Gentoo? > > Definitely. Again, we are trying to hard to make things better, but > there's a lot to do and going through version bumps sometimes wins out > over addressing the hard problems. Well, as I see it the two hard problems are (1) the stack-per-python- minor-version solution is ugly and unattractive to users, and (2) the "select python" utility needs to be safe. This probably means that python-using ebuilds need to complain if the Python they find isn't a Python 2 version by default. 
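[Editor's note: the guard suggested in that last sentence could look something like the following sketch. The helper name is made up for illustration; nothing like this exists in portage or eselect.]

```python
import sys

def is_expected_python(version_info=None, expected_major=2):
    """Return True if the interpreter is the major version the script
    was written for (Python 2, for most 2011-era build helpers)."""
    vi = sys.version_info if version_info is None else version_info
    return vi[0] == expected_major

# A python-using helper script could complain loudly up front, instead
# of failing in strange ways later under the wrong system Python:
if not is_expected_python():
    sys.stderr.write("warning: this script was written for Python 2, "
                     "but is running under %d.%d\n" % tuple(sys.version_info[:2]))
```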
From martin at v.loewis.de Wed Nov 23 20:58:23 2011 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Wed, 23 Nov 2011 20:58:23 +0100 Subject: [Python-Dev] PyUnicode_Resize In-Reply-To: <201111220208.17514.victor.stinner@haypocalc.com> References: <201111220208.17514.victor.stinner@haypocalc.com> Message-ID: <4ECD505F.8070209@v.loewis.de> > In Python 3.2, PyUnicode_Resize() expects a number of Py_UNICODE units, > whereas Python 3.3 expects a number of characters. Is that really the case? If the string is not ready (i.e. the kind is WCHAR_KIND), then it does count Py_UNICODE units, no? Callers are supposed to call PyUnicode_Resize only while the string is under construction, i.e. when it is not ready. If they resize it after it has been readied, changes to the Py_UNICODE representation wouldn't be reflected in the canonical representation, anyway. > Should we rename PyUnicode_Resize() in Python 3.3 to avoid surprising bugs? IIUC (and please correct me if I'm wrong) this issue won't cause memory corruption: if they specify a new size assuming it's Py_UNICODE units, but interpreted as code points, then the actual Py_UNICODE buffer can only be larger than expected - right? If so, callers could happily play with Py_UNICODE representation. It won't have the desired effect if the string was ready, but it won't crash Python, either. > The easiest solution is to do nothing in Python 3.3: the API changed, but it > doesn't really matter. Developers just have to be careful on this particular > issue (which is not well documented today). See above. I think there actually is no issue in the first place. Please do correct me if I'm wrong. 
Regards, Martin From pjenvey at underboss.org Wed Nov 23 22:13:38 2011 From: pjenvey at underboss.org (Philip Jenvey) Date: Wed, 23 Nov 2011 13:13:38 -0800 Subject: [Python-Dev] PyPy 1.7 - widening the sweet spot In-Reply-To: References: <6F9490B9-DBEF-4CF6-89B7-26EA0C8A88E2@underboss.org> Message-ID: On Nov 22, 2011, at 12:43 PM, Amaury Forgeot d'Arc wrote: > 2011/11/22 Philip Jenvey > One reason to target 3.2 for now is it's not a moving target. There's overhead involved in managing modifications to the pure python standard lib needed for PyPy, tracking 3.3 changes as they happen as well exacerbates this. > > The plans to split the standard lib into its own repo separate from core CPython will of course help alternative implementations here. > > I don't see how it would help here. > Copying the CPython Lib/ directory is not difficult, even though PyPy made slight modifications to the files, and even without any merge tool. Pulling in a separate stdlib as a subrepo under the PyPy repo would certainly make this whole process easier. But you're right, if we track CPython's default branch (3.3) we can make many if not all of the PyPy modifications upstream (until the 3.3rc1 code freeze) instead of in PyPy's modified-3.x directory. Maintaining that modified-3.x dir after every resync can be tedious. -- Philip Jenvey From guido at python.org Thu Nov 24 01:28:39 2011 From: guido at python.org (Guido van Rossum) Date: Wed, 23 Nov 2011 16:28:39 -0800 Subject: [Python-Dev] PEP 380 Message-ID: Mea culpa for not keeping track, but what's the status of PEP 380? I really want this in Python 3.3! -- --Guido van Rossum (python.org/~guido) From ncoghlan at gmail.com Thu Nov 24 05:06:43 2011 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 24 Nov 2011 14:06:43 +1000 Subject: [Python-Dev] PEP 380 In-Reply-To: References: Message-ID: On Thu, Nov 24, 2011 at 10:28 AM, Guido van Rossum wrote: > Mea culpa for not keeping track, but what's the status of PEP 380? 
I > really want this in Python 3.3! There are two relevant tracker issues (both with me for the moment). The main tracker issue for PEP 380 is here: http://bugs.python.org/issue11682 That's really just missing the doc updates - I haven't had a chance to look at Zbyszek's latest offering on that front, but it shouldn't be far off being complete (the *text* in his previous docs patch actually seemed reasonable - I mainly objected to the way it was organised). However, the PEP 380 test suite updates have a dependency on a new dis module feature that provides an iterator over a structured description of bytecode instructions: http://bugs.python.org/issue11816 I find Meador's suggestion to change the name of the new API to something involving the word "instruction" appealing, so I plan to do that, which will have a knock-on effect on the tests in the PEP 380 branch. However, even once I get that done, Raymond specifically said he wanted to review the dis module patch before I check it in, so I don't plan to commit it until he gives the OK (either because he reviewed it, or because he decides he's OK with it going in without his review and he can review and potentially update it in Mercurial any time before 3.3 is released). I currently plan to update my working branches for both of those on the 3rd of December, so hopefully they'll be ready to go within the next couple of weeks. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From fijall at gmail.com Thu Nov 24 13:20:38 2011 From: fijall at gmail.com (Maciej Fijalkowski) Date: Thu, 24 Nov 2011 14:20:38 +0200 Subject: [Python-Dev] PyPy 1.7 - widening the sweet spot In-Reply-To: References: <6F9490B9-DBEF-4CF6-89B7-26EA0C8A88E2@underboss.org> Message-ID: On Wed, Nov 23, 2011 at 11:13 PM, Philip Jenvey wrote: > > On Nov 22, 2011, at 12:43 PM, Amaury Forgeot d'Arc wrote: > >> 2011/11/22 Philip Jenvey >> One reason to target 3.2 for now is it's not a moving target.
There's overhead involved in managing modifications to the pure python standard lib needed for PyPy, tracking 3.3 changes as they happen as well exacerbates this. >> >> The plans to split the standard lib into its own repo separate from core CPython will of course help alternative implementations here. >> >> I don't see how it would help here. >> Copying the CPython Lib/ directory is not difficult, even though PyPy made slight modifications to the files, and even without any merge tool. > > Pulling in a separate stdlib as a subrepo under the PyPy repo would certainly make this whole process easier. > > But you're right, if we track CPython's default branch (3.3) we can make many if not all of the PyPy modifications upstream (until the 3.3rc1 code freeze) instead of in PyPy's modified-3.x directory. Maintaining that modified-3.x dir after every resync can be tedious. > > -- > Philip Jenvey The problem is not with maintaining the modified directory. The problem was always things like changing interface between the C version and the Python version or introduction of new stuff that does not run on pypy because it relies on refcounting. I don't see how having a subrepo helps here. From ncoghlan at gmail.com Thu Nov 24 13:46:01 2011 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 24 Nov 2011 22:46:01 +1000 Subject: [Python-Dev] PyPy 1.7 - widening the sweet spot In-Reply-To: References: <6F9490B9-DBEF-4CF6-89B7-26EA0C8A88E2@underboss.org> Message-ID: On Thu, Nov 24, 2011 at 10:20 PM, Maciej Fijalkowski wrote: > The problem is not with maintaining the modified directory. The > problem was always things like changing interface between the C > version and the Python version or introduction of new stuff that does > not run on pypy because it relies on refcounting. I don't see how > having a subrepo helps here.
Indeed, the main thing that can help on this front is to get more modules to the same state as heapq, io, datetime (and perhaps a few others that have slipped my mind) where the CPython repo actually contains both C and Python implementations and the test suite exercises both to make sure their interfaces remain suitably consistent (even though, during normal operation, CPython users will only ever hit the C accelerated version). This not only helps other implementations (by keeping a Python version of the module continuously up to date with any semantic changes), but can help people that are porting CPython to new platforms: the C extension modules are far more likely to break in that situation than the pure Python equivalents, and a relatively slow fallback is often going to be better than no fallback at all. (Note that ctypes based pure Python modules *aren't* particularly useful for this purpose, though - due to the libffi dependency, ctypes is one of the extension modules most likely to break when porting). Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From jcea at jcea.es Thu Nov 24 17:43:11 2011 From: jcea at jcea.es (Jesus Cea) Date: Thu, 24 Nov 2011 17:43:11 +0100 Subject: [Python-Dev] Long term development external named branches and periodic merges from python Message-ID: <4ECE741F.10303@jcea.es> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 I have a question and I would rather have an answer instead of actually trying and getting myself in a messy situation. Let's say we have the following scenario:

1. A programmer clones hg.python.org.
2. The programmer creates a named branch and starts to develop a new feature.
3. She adds her repository & named branch to the bugtracker.
4. From time to time, she posts updates in the tracker using the "Create Patch" button.

So far so good. Now, the question:

5. Development of the new feature is taking a long time, and the canonical Python version keeps moving forward.
The clone+branch and the original python version are diverging. Eventually there are changes in python that the programmer would like in her version, so she does a "pull" and then a merge of the original python branch into her named branch. 6. What would be posted in the bug tracker when she does a new "Create Patch"? Only her changes, her changes SINCE the merge, her changes plus merged changes or something else? What if the programmer cherry-picks changesets from the original python branch? Thanks! :-). - -- Jesus Cea Avion _/_/ _/_/_/ _/_/_/ jcea at jcea.es - http://www.jcea.es/ _/_/ _/_/ _/_/ _/_/ _/_/ jabber / xmpp:jcea at jabber.org _/_/ _/_/ _/_/_/_/_/ . _/_/ _/_/ _/_/ _/_/ _/_/ "Things are not so easy" _/_/ _/_/ _/_/ _/_/ _/_/ _/_/ "My name is Dump, Core Dump" _/_/_/ _/_/_/ _/_/ _/_/ "El amor es poner tu felicidad en la felicidad de otro" - Leibniz -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.10 (GNU/Linux) Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/ iQCVAwUBTs50H5lgi5GaxT1NAQJsTAP6AsUsLo2REdxxyVvPBDQ51GjZermCXD08 jOqKkKY9cre4OHx/+uZHEvO8j7RJ5X3o2/0Yl4OeDSTBDY8/eWINc9cgtuNqrJdW W27fu1+UTIpgl1oLh06P23ufOEWPWh90gsV6eiVnFlj7r+b3HkP7PNdZCmqU2+UW 92Ac9B1JOvU= =goYv -----END PGP SIGNATURE----- From merwok at netwok.org Thu Nov 24 18:08:53 2011 From: merwok at netwok.org (=?UTF-8?B?w4lyaWMgQXJhdWpv?=) Date: Thu, 24 Nov 2011 18:08:53 +0100 Subject: [Python-Dev] Long term development external named branches and periodic merges from python In-Reply-To: <4ECE741F.10303@jcea.es> References: <4ECE741F.10303@jcea.es> Message-ID: <4ECE7A25.5000701@netwok.org> Hi, > I have a question and I would rather have an answer instead of > actually trying and getting myself in a messy situation. Clones are cheap, trying is cheap! > Let say we have the following scenario: > > 1. A programer clones hg.python.org. > 2. Programer creates a named branch and start to develop a new feature. > 3. She adds her repository&named branch to the bugtracker. > 4.
From time to time, she posts updates in the tracker using the > "Create Patch" button. > > So far so good. Now, the question: > > 5. Development of the new feature is taking a long time, and python > canonical version keeps moving forward. The clone+branch and the > original python version are diverging. Eventually there are changes in > python that the programmer would like in her version, so she does a > "pull" and then a merge for the original python branch to her named > branch. I do this all the time. I work on a fix-nnnn branch, and once a week for example I pull and merge the base branch. Sometimes there are no conflicts except Misc/NEWS, sometimes I have to adapt my code because of other people's changes before I can commit the merge. > 6. What would be posted in the bug tracker when she does a new "Create > Patch"?. Only her changes, her changes SINCE the merge, her changes > plus merged changes or something else?. The diff would be equivalent to "hg diff -r base" and would contain all the changes she did to add the bug fix or feature. Merging only makes sure that the computed diff does not appear to touch unrelated files, IOW that it applies cleanly. (Barring bugs in Mercurial-Roundup integration, we have a few of these in the metatracker.) > What if the programmer cherrypick changesets from the original python > branch?. Then her branch will revert some changes done in the original branch. Therefore, cherry-picking is not a good idea. Regards From eliben at gmail.com Thu Nov 24 19:15:25 2011 From: eliben at gmail.com (Eli Bendersky) Date: Thu, 24 Nov 2011 20:15:25 +0200 Subject: [Python-Dev] file.readinto performance regression in Python 3.2 vs. 2.7? Message-ID: Hi there, I was doing some experiments with the buffer interface of bytearray today, for the purpose of quickly reading a file's contents into a bytearray which I can then modify. I decided to do some benchmarking and ran into surprising results.
Here are the functions I was timing:

def justread():
    # Just read a file's contents into a string/bytes object
    f = open(FILENAME, 'rb')
    s = f.read()

def readandcopy():
    # Read a file's contents and copy them into a bytearray.
    # An extra copy is done here.
    f = open(FILENAME, 'rb')
    b = bytearray(f.read())

def readinto():
    # Read a file's contents directly into a bytearray,
    # hopefully employing its buffer interface
    f = open(FILENAME, 'rb')
    b = bytearray(os.path.getsize(FILENAME))
    f.readinto(b)

FILENAME is the name of a 3.6MB text file. It is read in binary mode, however, for fullest compatibility between 2.x and 3.x. Now, running this under Python 2.7.2 I got these results ($1 just reflects the executable name passed to a bash script I wrote to automate these runs):

$1 -m timeit -s'import fileread_bytearray' 'fileread_bytearray.justread()'
1000 loops, best of 3: 461 usec per loop
$1 -m timeit -s'import fileread_bytearray' 'fileread_bytearray.readandcopy()'
100 loops, best of 3: 2.81 msec per loop
$1 -m timeit -s'import fileread_bytearray' 'fileread_bytearray.readinto()'
1000 loops, best of 3: 697 usec per loop

Which makes sense. The readinto() approach is much faster than copying the read buffer into the bytearray. But with Python 3.2.2 (built from the 3.2 branch today):

$1 -m timeit -s'import fileread_bytearray' 'fileread_bytearray.justread()'
1000 loops, best of 3: 336 usec per loop
$1 -m timeit -s'import fileread_bytearray' 'fileread_bytearray.readandcopy()'
100 loops, best of 3: 2.62 msec per loop
$1 -m timeit -s'import fileread_bytearray' 'fileread_bytearray.readinto()'
100 loops, best of 3: 2.69 msec per loop

Oops, readinto takes the same time as copying. This is a real shame, because readinto in conjunction with the buffer interface was supposed to avoid the redundant copy. Is there a real performance regression here, is this a well-known issue, or am I just missing something obvious? Eli P.S.
The machine is quad-core i7-2820QM, running 64-bit Ubuntu 10.04 -------------- next part -------------- An HTML attachment was scrubbed... URL: From solipsis at pitrou.net Thu Nov 24 19:29:33 2011 From: solipsis at pitrou.net (Antoine Pitrou) Date: Thu, 24 Nov 2011 19:29:33 +0100 Subject: [Python-Dev] file.readinto performance regression in Python 3.2 vs. 2.7? References: Message-ID: <20111124192933.393cf96f@pitrou.net> On Thu, 24 Nov 2011 20:15:25 +0200 Eli Bendersky wrote: > > Oops, readinto takes the same time as copying. This is a real shame, > because readinto in conjunction with the buffer interface was supposed to > avoid the redundant copy. > > Is there a real performance regression here, is this a well-known issue, or > am I just missing something obvious? Can you try with latest 3.3 (from the default branch)? Thanks Antoine. From eliben at gmail.com Thu Nov 24 19:53:30 2011 From: eliben at gmail.com (Eli Bendersky) Date: Thu, 24 Nov 2011 20:53:30 +0200 Subject: [Python-Dev] file.readinto performance regression in Python 3.2 vs. 2.7? In-Reply-To: <20111124192933.393cf96f@pitrou.net> References: <20111124192933.393cf96f@pitrou.net> Message-ID: On Thu, Nov 24, 2011 at 20:29, Antoine Pitrou wrote: > On Thu, 24 Nov 2011 20:15:25 +0200 > Eli Bendersky wrote: > > > > Oops, readinto takes the same time as copying. This is a real shame, > > because readinto in conjunction with the buffer interface was supposed to > > avoid the redundant copy. > > > > Is there a real performance regression here, is this a well-known issue, > or > > am I just missing something obvious? > > Can you try with latest 3.3 (from the default branch)? > Sure. 
Updated the default branch just now and built:

$1 -m timeit -s'import fileread_bytearray' 'fileread_bytearray.justread()'
1000 loops, best of 3: 1.14 msec per loop
$1 -m timeit -s'import fileread_bytearray' 'fileread_bytearray.readandcopy()'
100 loops, best of 3: 2.78 msec per loop
$1 -m timeit -s'import fileread_bytearray' 'fileread_bytearray.readinto()'
1000 loops, best of 3: 1.6 msec per loop

Strange. Although here, like in python 2, the performance of readinto is close to justread and much faster than readandcopy, but justread itself is much slower than in 2.7 and 3.2! Eli -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Thu Nov 24 21:55:40 2011 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 25 Nov 2011 06:55:40 +1000 Subject: [Python-Dev] Long term development external named branches and periodic merges from python In-Reply-To: <4ECE7A25.5000701@netwok.org> References: <4ECE741F.10303@jcea.es> <4ECE7A25.5000701@netwok.org> Message-ID: I've never been able to get the Create Patch button to work reliably with my BitBucket repo, so I still just run "hg diff -r default" locally and upload the patch directly. It would be nice if I could just specify both the feature branch *and* the branch to diff against rather than having to work out why Roundup is guessing wrong... -- Nick Coghlan (via Gmail on Android, so likely to be more terse than usual) -------------- next part -------------- An HTML attachment was scrubbed...
URL: From martin at v.loewis.de Thu Nov 24 22:23:26 2011 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Thu, 24 Nov 2011 22:23:26 +0100 Subject: [Python-Dev] Long term development external named branches and periodic merges from python In-Reply-To: References: <4ECE741F.10303@jcea.es> <4ECE7A25.5000701@netwok.org> Message-ID: <4ECEB5CE.5050309@v.loewis.de> Am 24.11.2011 21:55, schrieb Nick Coghlan: > I've never been able to get the Create Patch button to work reliably > with my BitBucket repo, so I still just run "hg diff -r default" locally > and upload the patch directly. Please submit a bug report to the meta tracker. > It would be nice if I could just specify both the feature branch *and* > the branch to diff against rather than having to work out why Roundup is > guessing wrong... Why would you not diff against the default branch? Regards, Martin From python-dev at masklinn.net Thu Nov 24 22:46:51 2011 From: python-dev at masklinn.net (Xavier Morel) Date: Thu, 24 Nov 2011 22:46:51 +0100 Subject: [Python-Dev] Long term development external named branches and periodic merges from python In-Reply-To: References: <4ECE741F.10303@jcea.es> <4ECE7A25.5000701@netwok.org> Message-ID: On 2011-11-24, at 21:55 , Nick Coghlan wrote: > I've never been able to get the Create Patch button to work reliably with > my BitBucket repo, so I still just run "hg diff -r default" locally and > upload the patch directly. Wouldn't it be simpler to just use MQ and upload the patch(es) from the series? Would be easier to keep in sync with the development tip too. From anacrolix at gmail.com Thu Nov 24 23:02:15 2011 From: anacrolix at gmail.com (Matt Joiner) Date: Fri, 25 Nov 2011 09:02:15 +1100 Subject: [Python-Dev] file.readinto performance regression in Python 3.2 vs. 2.7? In-Reply-To: References: <20111124192933.393cf96f@pitrou.net> Message-ID: What if you broke up the read and built the final string object up. 
I always assumed this is where the real gain was with read_into. On Nov 25, 2011 5:55 AM, "Eli Bendersky" wrote: > On Thu, Nov 24, 2011 at 20:29, Antoine Pitrou wrote: > >> On Thu, 24 Nov 2011 20:15:25 +0200 >> Eli Bendersky wrote: >> > >> > Oops, readinto takes the same time as copying. This is a real shame, >> > because readinto in conjunction with the buffer interface was supposed >> to >> > avoid the redundant copy. >> > >> > Is there a real performance regression here, is this a well-known >> issue, or >> > am I just missing something obvious? >> >> Can you try with latest 3.3 (from the default branch)? >> > > Sure. Updated the default branch just now and built: > > $1 -m timeit -s'import fileread_bytearray' 'fileread_bytearray.justread()' > 1000 loops, best of 3: 1.14 msec per loop > $1 -m timeit -s'import fileread_bytearray' > 'fileread_bytearray.readandcopy()' > 100 loops, best of 3: 2.78 msec per loop > $1 -m timeit -s'import fileread_bytearray' 'fileread_bytearray.readinto()' > 1000 loops, best of 3: 1.6 msec per loop > > Strange. Although here, like in python 2, the performance of readinto is > close to justread and much faster than readandcopy, but justread itself is > much slower than in 2.7 and 3.2! > > Eli > > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > http://mail.python.org/mailman/options/python-dev/anacrolix%40gmail.com > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From eliben at gmail.com Thu Nov 24 23:41:02 2011 From: eliben at gmail.com (Eli Bendersky) Date: Fri, 25 Nov 2011 00:41:02 +0200 Subject: [Python-Dev] file.readinto performance regression in Python 3.2 vs. 2.7? In-Reply-To: References: <20111124192933.393cf96f@pitrou.net> Message-ID: On Fri, Nov 25, 2011 at 00:02, Matt Joiner wrote: > What if you broke up the read and built the final string object up. 
I > always assumed this is where the real gain was with read_into. > Matt, I'm not sure what you mean by this - can you suggest the code? > > Also, I'd be happy to know if anyone else reproduces this as well on other > machines/OSes. > > Eli > > From anacrolix at gmail.com Fri Nov 25 01:49:19 2011 From: anacrolix at gmail.com (Matt Joiner) Date: Fri, 25 Nov 2011 11:49:19 +1100 Subject: [Python-Dev] file.readinto performance regression in Python 3.2 vs. 2.7? In-Reply-To: References: <20111124192933.393cf96f@pitrou.net> Message-ID: Eli, Example coming shortly, the differences are quite significant. On Fri, Nov 25, 2011 at 9:41 AM, Eli Bendersky wrote: > On Fri, Nov 25, 2011 at 00:02, Matt Joiner wrote: >> >> What if you broke up the read and built the final string object up. I >> always assumed this is where the real gain was with read_into. > > Matt, I'm not sure what you mean by this - can you suggest the code? > > Also, I'd be happy to know if anyone else reproduces this as well on other > machines/OSes. > > Eli > > From anacrolix at gmail.com Fri Nov 25 02:02:17 2011 From: anacrolix at gmail.com (Matt Joiner) Date: Fri, 25 Nov 2011 12:02:17 +1100 Subject: [Python-Dev] file.readinto performance regression in Python 3.2 vs. 2.7? In-Reply-To: References: <20111124192933.393cf96f@pitrou.net> Message-ID: It's my impression that the readinto method does not fully support the buffer interface I was expecting.
My workflow in the repo is:
- update default from hg.python.org/cpython
- merge into get_opinfo branch from default
- merge into pep380 branch from the get_opinfo branch

So, after merging into the pep380 branch, "hg diff -r default" gives a full patch from default -> pep380 (including the dis module updates), while "hg diff -r get_opinfo" gives a patch that assumes the dis changes have already been applied separately. I'm now wondering if doing an explicit "hg merge default" before I do the merges from the get_opinfo branch in my sandbox might be enough to get the patch generator back on track... Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From ncoghlan at gmail.com Thu Nov 24 23:46:23 2011 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 25 Nov 2011 08:46:23 +1000 Subject: [Python-Dev] Long term development external named branches and periodic merges from python In-Reply-To: References: <4ECE741F.10303@jcea.es> <4ECE7A25.5000701@netwok.org> Message-ID: On Fri, Nov 25, 2011 at 7:46 AM, Xavier Morel wrote: > On 2011-11-24, at 21:55 , Nick Coghlan wrote: >> I've never been able to get the Create Patch button to work reliably with >> my BitBucket repo, so I still just run "hg diff -r default" locally and >> upload the patch directly. > Wouldn't it be simpler to just use MQ and upload the patch(es) from the series? Would be easier to keep in sync with the development tip too. From my (admittedly limited) experience, using MQ means I can only effectively collaborate with other people also using MQ (e.g. the Roundup integration doesn't work if the only thing that is published on BitBucket is a patch queue). I'll stick with named branches until MQ becomes a builtin Hg feature that better integrates with other tools. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com |
Brisbane, Australia From jcea at jcea.es Fri Nov 25 01:01:13 2011 From: jcea at jcea.es (Jesus Cea) Date: Fri, 25 Nov 2011 01:01:13 +0100 Subject: [Python-Dev] 404 in (important) documentation in www.python.org and contributor agreement Message-ID: <4ECEDAC9.7040903@jcea.es> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 Trying to clear the licensing issues surrounding my DTrace work (http://bugs.python.org/issue13405) I am contacting Sun/Oracle guys. Checking documentation about the contributor license agreement, I encountered a wrong HTML link in http://www.python.org/about/help/ : * "Python Patch Guidelines" points to http://www.python.org/dev/patches/, that doesn't exist. Other links in that page seem OK. PS: The devguide doesn't say anything (AFAIK) about the contributor agreement. - -- Jesus Cea Avion _/_/ _/_/_/ _/_/_/ jcea at jcea.es - http://www.jcea.es/ _/_/ _/_/ _/_/ _/_/ _/_/ jabber / xmpp:jcea at jabber.org _/_/ _/_/ _/_/_/_/_/ . _/_/ _/_/ _/_/ _/_/ _/_/ "Things are not so easy" _/_/ _/_/ _/_/ _/_/ _/_/ _/_/ "My name is Dump, Core Dump" _/_/_/ _/_/_/ _/_/ _/_/ "El amor es poner tu felicidad en la felicidad de otro" - Leibniz -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.10 (GNU/Linux) Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/ iQCVAwUBTs7ayZlgi5GaxT1NAQLOfwQAoa1GFuQZKhbXD3FnmH3XUiylzTMBmXMh vB++AdDP8fcEBC/NYZ9j0DH+AGspXrPg4YVta09zJJ/1kHa2UxRVmtXM8centl3V Jkad+6lJw6YYjtXXgM4QExlzClsYNn1ByhYaRSiSa8g9dtsFq4YTlKzfeAXLPC50 DUju8DavMyo= =xOEe -----END PGP SIGNATURE----- From jcea at jcea.es Fri Nov 25 01:20:17 2011 From: jcea at jcea.es (Jesus Cea) Date: Fri, 25 Nov 2011 01:20:17 +0100 Subject: [Python-Dev] webmaster@python.org address not working Message-ID: <4ECEDF41.2030008@jcea.es> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 When mailing there, I get this error. Not sure where to report.
"""
Final-Recipient: rfc822; sdrees at sdrees.de
Original-Recipient: rfc822;webmaster at python.org
Action: failed
Status: 5.1.1
Remote-MTA: dns; stefan.zinzdrees.de
Diagnostic-Code: smtp; 550 5.1.1 : Recipient address rejected: User unknown in local recipient table
"""

- -- Jesus Cea Avion _/_/ _/_/_/ _/_/_/ jcea at jcea.es - http://www.jcea.es/ _/_/ _/_/ _/_/ _/_/ _/_/ jabber / xmpp:jcea at jabber.org _/_/ _/_/ _/_/_/_/_/ . _/_/ _/_/ _/_/ _/_/ _/_/ "Things are not so easy" _/_/ _/_/ _/_/ _/_/ _/_/ _/_/ "My name is Dump, Core Dump" _/_/_/ _/_/_/ _/_/ _/_/ "El amor es poner tu felicidad en la felicidad de otro" - Leibniz -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.10 (GNU/Linux) Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/ iQCVAwUBTs7fQZlgi5GaxT1NAQLxrQQAmph2w/KrLbwK34IVFKNKAn3P78uY19U1 yoUslB7J4u4IhqQHd5r/FD0v6q6W12t9H8UFNdKELc/zRnRWtE7wKI+3RAeBMAfe pTV6OY7kbGtUfDk9na8o6+oEQ7iZUWT1LbBtMpSusHBuif239RD3HMeaaJ6u/BFT TMmsu39qf2E= =ecRu -----END PGP SIGNATURE----- From solipsis at pitrou.net Fri Nov 25 01:23:19 2011 From: solipsis at pitrou.net (Antoine Pitrou) Date: Fri, 25 Nov 2011 01:23:19 +0100 Subject: [Python-Dev] file.readinto performance regression in Python 3.2 vs. 2.7? References: <20111124192933.393cf96f@pitrou.net> Message-ID: <20111125012319.52a69ebb@pitrou.net> On Thu, 24 Nov 2011 20:53:30 +0200 Eli Bendersky wrote:

> Sure. Updated the default branch just now and built:
>
> $1 -m timeit -s'import fileread_bytearray' 'fileread_bytearray.justread()'
> 1000 loops, best of 3: 1.14 msec per loop
> $1 -m timeit -s'import fileread_bytearray' 'fileread_bytearray.readandcopy()'
> 100 loops, best of 3: 2.78 msec per loop
> $1 -m timeit -s'import fileread_bytearray' 'fileread_bytearray.readinto()'
> 1000 loops, best of 3: 1.6 msec per loop
>
> Strange. Although here, like in python 2, the performance of readinto is
> close to justread and much faster than readandcopy, but justread itself is
> much slower than in 2.7 and 3.2!
This seems to be a side-effect of http://hg.python.org/cpython/rev/f8a697bc3ca8/ Now I'm not sure if these numbers matter a lot. 1.6ms for a 3.6MB file is still more than 2 GB/s. Regards Antoine. From tjreedy at udel.edu Fri Nov 25 01:31:12 2011 From: tjreedy at udel.edu (Terry Reedy) Date: Thu, 24 Nov 2011 19:31:12 -0500 Subject: [Python-Dev] file.readinto performance regression in Python 3.2 vs. 2.7? In-Reply-To: References: <20111124192933.393cf96f@pitrou.net> Message-ID: On 11/24/2011 5:02 PM, Matt Joiner wrote: > What if you broke up the read and built the final string object up. I > always assumed this is where the real gain was with read_into. If a pure read takes twice as long in 3.3 as in 3.2, that is a concern regardless of whether there is a better way. -- Terry Jan Reedy From anacrolix at gmail.com Fri Nov 25 01:49:19 2011 From: anacrolix at gmail.com (Matt Joiner) Date: Fri, 25 Nov 2011 11:49:19 +1100 Subject: [Python-Dev] file.readinto performance regression in Python 3.2 vs. 2.7? In-Reply-To: References: <20111124192933.393cf96f@pitrou.net> Message-ID: Eli, Example coming shortly, the differences are quite significant. On Fri, Nov 25, 2011 at 9:41 AM, Eli Bendersky wrote: > On Fri, Nov 25, 2011 at 00:02, Matt Joiner wrote: >> >> What if you broke up the read and built the final string object up. I >> always assumed this is where the real gain was with read_into. > > Matt, I'm not sure what you mean by this - can you suggest the code? > > Also, I'd be happy to know if anyone else reproduces this as well on other > machines/OSes. > > Eli > > From anacrolix at gmail.com Fri Nov 25 02:02:17 2011 From: anacrolix at gmail.com (Matt Joiner) Date: Fri, 25 Nov 2011 12:02:17 +1100 Subject: [Python-Dev] file.readinto performance regression in Python 3.2 vs. 2.7? In-Reply-To: References: <20111124192933.393cf96f@pitrou.net> Message-ID: It's my impression that the readinto method does not fully support the buffer interface I was expecting. 
I've never had cause to use it until now. I've created a question on SO that describes my confusion: http://stackoverflow.com/q/8263899/149482 Also I saw some comments on "top-posting" - am I guilty of this? Gmail defaults to putting my response above the previous email. On Fri, Nov 25, 2011 at 11:49 AM, Matt Joiner wrote: > Eli, > > Example coming shortly, the differences are quite significant. > > On Fri, Nov 25, 2011 at 9:41 AM, Eli Bendersky wrote: >> On Fri, Nov 25, 2011 at 00:02, Matt Joiner wrote: >>> >>> What if you broke up the read and built the final string object up. I >>> always assumed this is where the real gain was with read_into. >> >> Matt, I'm not sure what you mean by this - can you suggest the code? >> >> Also, I'd be happy to know if anyone else reproduces this as well on other >> machines/OSes. >> >> Eli >> >> > From solipsis at pitrou.net Fri Nov 25 02:07:00 2011 From: solipsis at pitrou.net (Antoine Pitrou) Date: Fri, 25 Nov 2011 02:07:00 +0100 Subject: [Python-Dev] file.readinto performance regression in Python 3.2 vs. 2.7? References: <20111124192933.393cf96f@pitrou.net> Message-ID: <20111125020700.4c38aab7@pitrou.net> On Fri, 25 Nov 2011 12:02:17 +1100 Matt Joiner wrote: > It's my impression that the readinto method does not fully support the > buffer interface I was expecting. I've never had cause to use it until > now. I've created a question on SO that describes my confusion: > > http://stackoverflow.com/q/8263899/149482 Just use a memoryview and slice it:

b = bytearray(...)
m = memoryview(b)
n = f.readinto(m[some_offset:])

> Also I saw some comments on "top-posting" am I guilty of this? Kind of :) Regards Antoine.
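[Antoine's slicing suggestion can be expanded into a self-contained sketch that also addresses Matt's "break up the read" idea: repeated readinto() calls through memoryview slices, so each chunk lands at the right offset with no intermediate bytes objects. The helper name and chunk size are illustrative, and io.BytesIO stands in for a real file.]

```python
import io

def read_into_buffer(f, size, chunk=64 * 1024):
    # Fill a bytearray of `size` bytes via repeated readinto() calls.
    # Slicing the memoryview makes each call write at offset `pos`
    # directly into the underlying bytearray -- no extra copies.
    b = bytearray(size)
    m = memoryview(b)
    pos = 0
    while pos < size:
        n = f.readinto(m[pos:pos + chunk])
        if not n:  # EOF before the buffer filled up
            break
        pos += n
    return b[:pos]

data = read_into_buffer(io.BytesIO(b"spam" * 25000), 100000)
```

This keeps the single-allocation property Eli was measuring while still reading in bounded chunks.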
From fuzzyman at voidspace.org.uk Fri Nov 25 03:00:35 2011 From: fuzzyman at voidspace.org.uk (Michael Foord) Date: Fri, 25 Nov 2011 02:00:35 +0000 Subject: [Python-Dev] webmaster@python.org address not working In-Reply-To: <4ECEDF41.2030008@jcea.es> References: <4ECEDF41.2030008@jcea.es> Message-ID: <4ECEF6C3.3060901@voidspace.org.uk> On 25/11/2011 00:20, Jesus Cea wrote: > -----BEGIN PGP SIGNED MESSAGE----- > Hash: SHA1 > > When mailing there, I get this error. Not sure where to report. The address works fine. It would be nice if someone fixed the annoying bounce however. :-) Michael > """ > Final-Recipient: rfc822; sdrees at sdrees.de > Original-Recipient: rfc822;webmaster at python.org > Action: failed > Status: 5.1.1 > Remote-MTA: dns; stefan.zinzdrees.de > Diagnostic-Code: smtp; 550 5.1.1: Recipient address > rejected: User unknown in local recipient table > """ > > - -- > Jesus Cea Avion _/_/ _/_/_/ _/_/_/ > jcea at jcea.es - http://www.jcea.es/ _/_/ _/_/ _/_/ _/_/ _/_/ > jabber / xmpp:jcea at jabber.org _/_/ _/_/ _/_/_/_/_/ > . 
. _/_/ _/_/ _/_/ _/_/ _/_/ > "Things are not so easy" _/_/ _/_/ _/_/ _/_/ _/_/ _/_/ > "My name is Dump, Core Dump" _/_/_/ _/_/_/ _/_/ _/_/ > "El amor es poner tu felicidad en la felicidad de otro" - Leibniz > -----BEGIN PGP SIGNATURE----- > Version: GnuPG v1.4.10 (GNU/Linux) > Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/ > > iQCVAwUBTs7fQZlgi5GaxT1NAQLxrQQAmph2w/KrLbwK34IVFKNKAn3P78uY19U1 > yoUslB7J4u4IhqQHd5r/FD0v6q6W12t9H8UFNdKELc/zRnRWtE7wKI+3RAeBMAfe > pTV6OY7kbGtUfDk9na8o6+oEQ7iZUWT1LbBtMpSusHBuif239RD3HMeaaJ6u/BFT > TMmsu39qf2E= > =ecRu > -----END PGP SIGNATURE----- > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/fuzzyman%40voidspace.org.uk > -- http://www.voidspace.org.uk/ May you do good and not evil May you find forgiveness for yourself and forgive others May you share freely, never taking more than you give. -- the sqlite blessing http://www.sqlite.org/different.html From jcea at jcea.es Fri Nov 25 03:39:16 2011 From: jcea at jcea.es (Jesus Cea) Date: Fri, 25 Nov 2011 03:39:16 +0100 Subject: [Python-Dev] Long term development external named branches and periodic merges from python In-Reply-To: <4ECE7A25.5000701@netwok.org> References: <4ECE741F.10303@jcea.es> <4ECE7A25.5000701@netwok.org> Message-ID: <4ECEFFD4.5030601@jcea.es> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On 24/11/11 18:08, Éric Araujo wrote: >> I have a question and I would rather have an answer instead of >> actually trying and getting myself in a messy situation. > Clones are cheap, trying is cheap! I would need to publish another repository online, and instruct the bug tracker to use it and create a patch, and hope for the best or risk polluting the tracker. Maybe I would be hitting a corner case and be lucky this time, but not next time. Better to ask people that know better, I guess. >> 5.
Development of the new feature is taking a long time, and >> python canonical version keeps moving forward. The clone+branch >> and the original python version are diverging. Eventually there >> are changes in python that the programmer would like in her >> version, so she does a "pull" and then a merge for the original >> python branch to her named branch. > I do this all the time. I work on a fix-nnnn branch, and once a > week for example I pull and merge the base branch. Sometimes there > are no conflicts except Misc/NEWS, sometimes I have to adapt my > code because of other people's changes before I can commit the > merge. That is good, because that means your patch is always able to be applied to the original branch tip, and that your changes work with current work in the mainline. That is what I want to do, but I need to know that it is safe to do so (from the "Create Patch" perspective). >> 6. What would be posted in the bug tracker when she does a new >> "Create Patch"?. Only her changes, her changes SINCE the merge, >> her changes plus merged changes or something else?. > The diff would be equivalent to 'hg diff -r base' and would contain > all the changes she did to add the bug fix or feature. Merging > only makes sure that the computed diff does not appear to touch > unrelated files, IOW that it applies cleanly. (Barring bugs in > Mercurial-Roundup integration, we have a few of these in the > metatracker.) So you are saying that "Create patch" will ONLY get the differences in the development branch and not the changes brought in from the merge?. A "hg diff -r base" -as you indicate- should show all changes in the branch since creation, including the merges, if I understand it correctly. I don't want to include the merges, although I want their effect in my own work (like changing patch offset). That is, is that merge safe for "Create Patch"?. Your answer seems to indicate "yes", but I rather prefer an explicit "yes" than an "implicit" yes :). Python Zen!
:). PS: Sorry if I am being blunt. My (lack of) social skills are legendary. - -- Jesus Cea Avion _/_/ _/_/_/ _/_/_/ jcea at jcea.es - http://www.jcea.es/ _/_/ _/_/ _/_/ _/_/ _/_/ jabber / xmpp:jcea at jabber.org _/_/ _/_/ _/_/_/_/_/ . _/_/ _/_/ _/_/ _/_/ _/_/ "Things are not so easy" _/_/ _/_/ _/_/ _/_/ _/_/ _/_/ "My name is Dump, Core Dump" _/_/_/ _/_/_/ _/_/ _/_/ "El amor es poner tu felicidad en la felicidad de otro" - Leibniz -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.10 (GNU/Linux) Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/ iQCVAwUBTs7/1Jlgi5GaxT1NAQJUDAQAhQi5e3utsVdOveO/3r1EDr/9BUTpB8Tb DxIe12HEt+KT33CJR+HGTt9OBqNGmVb4Q3h8lj3YIi7WdIXjc3CQ3+dO1NF1jTZO 0rt5EbEU99RAkgqOT0r3ziKy6MSSWhTuZlQy6pvcivEJet0GANiNqbdw6xFBETeZ a5m85Q793iU= =1Kg3 -----END PGP SIGNATURE----- From anacrolix at gmail.com Fri Nov 25 07:13:45 2011 From: anacrolix at gmail.com (Matt Joiner) Date: Fri, 25 Nov 2011 17:13:45 +1100 Subject: [Python-Dev] file.readinto performance regression in Python 3.2 vs. 2.7? In-Reply-To: <20111125020700.4c38aab7@pitrou.net> References: <20111124192933.393cf96f@pitrou.net> <20111125020700.4c38aab7@pitrou.net> Message-ID: On Fri, Nov 25, 2011 at 12:07 PM, Antoine Pitrou wrote: > On Fri, 25 Nov 2011 12:02:17 +1100 > Matt Joiner wrote: >> It's my impression that the readinto method does not fully support the >> buffer interface I was expecting. I've never had cause to use it until >> now. I've created a question on SO that describes my confusion: >> >> http://stackoverflow.com/q/8263899/149482 > > Just use a memoryview and slice it: > > b = bytearray(...) > m = memoryview(b) > n = f.readinto(m[some_offset:]) Cheers, this seems to be what I wanted. Unfortunately it doesn't perform noticeably better if I do this. Eli, the use pattern I was referring to is when you read in chunks, and and append to a running buffer. Presumably if you know in advance the size of the data, you can readinto directly to a region of a bytearray. 
There by avoiding having to allocate a temporary buffer for the read, and creating a new buffer containing the running buffer, plus the new. Strangely, I find that your readandcopy is faster at this, but not by much, than readinto. Here's the code, it's a bit explicit, but then so was the original: BUFSIZE = 0x10000 def justread(): # Just read a file's contents into a string/bytes object f = open(FILENAME, 'rb') s = b'' while True: b = f.read(BUFSIZE) if not b: break s += b def readandcopy(): # Read a file's contents and copy them into a bytearray. # An extra copy is done here. f = open(FILENAME, 'rb') s = bytearray() while True: b = f.read(BUFSIZE) if not b: break s += b def readinto(): # Read a file's contents directly into a bytearray, # hopefully employing its buffer interface f = open(FILENAME, 'rb') s = bytearray(os.path.getsize(FILENAME)) o = 0 while True: b = f.readinto(memoryview(s)[o:o+BUFSIZE]) if not b: break o += b And the timings: $ python3 -O -m timeit 'import fileread_bytearray' 'fileread_bytearray.justread()' 10 loops, best of 3: 298 msec per loop $ python3 -O -m timeit 'import fileread_bytearray' 'fileread_bytearray.readandcopy()' 100 loops, best of 3: 9.22 msec per loop $ python3 -O -m timeit 'import fileread_bytearray' 'fileread_bytearray.readinto()' 100 loops, best of 3: 9.31 msec per loop The file was 10MB. I expected readinto to perform much better than readandcopy. I expected readandcopy to perform slightly better than justread. This clearly isn't the case. > >> Also I saw some comments on "top-posting" am I guilty of this? If tehre's a magical option in gmail someone knows about, please tell. > > Kind of :) > > Regards > > Antoine. 
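One incidental cost in the readinto() variant benchmarked above: it constructs a fresh memoryview of the whole bytearray on every iteration. Since slicing a memoryview yields another view without copying, the view can be hoisted out of the loop — a variation worth measuring, not a claimed fix for the regression being discussed:

```python
import os

BUFSIZE = 0x10000

def readinto_once_view(filename):
    # Like readinto() above, but memoryview(s) is built once and only
    # sliced per iteration (slices of a memoryview are views, not copies).
    size = os.path.getsize(filename)
    s = bytearray(size)
    view = memoryview(s)
    o = 0
    with open(filename, "rb") as f:
        while o < size:
            n = f.readinto(view[o:o + BUFSIZE])
            if not n:
                break
            o += n
    return s
```

This keeps the per-iteration work down to one slice and one read call.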
> > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/anacrolix%40gmail.com > From eliben at gmail.com Fri Nov 25 07:38:48 2011 From: eliben at gmail.com (Eli Bendersky) Date: Fri, 25 Nov 2011 08:38:48 +0200 Subject: [Python-Dev] file.readinto performance regression in Python 3.2 vs. 2.7? In-Reply-To: <20111125012319.52a69ebb@pitrou.net> References: <20111124192933.393cf96f@pitrou.net> <20111125012319.52a69ebb@pitrou.net> Message-ID: > On Thu, 24 Nov 2011 20:53:30 +0200 > Eli Bendersky wrote: > > > > Sure. Updated the default branch just now and built: > > > > $1 -m timeit -s'import fileread_bytearray' > 'fileread_bytearray.justread()' > > 1000 loops, best of 3: 1.14 msec per loop > > $1 -m timeit -s'import fileread_bytearray' > > 'fileread_bytearray.readandcopy()' > > 100 loops, best of 3: 2.78 msec per loop > > $1 -m timeit -s'import fileread_bytearray' > 'fileread_bytearray.readinto()' > > 1000 loops, best of 3: 1.6 msec per loop > > > > Strange. Although here, like in python 2, the performance of readinto is > > close to justread and much faster than readandcopy, but justread itself > is > > much slower than in 2.7 and 3.2! > > This seems to be a side-effect of > http://hg.python.org/cpython/rev/f8a697bc3ca8/ > > Now I'm not sure if these numbers matter a lot. 1.6ms for a 3.6MB file > is still more than 2 GB/s. > Just to be clear, there were two separate issues raised here. 
One is the speed regression of readinto() from 2.7 to 3.2, and the other is the relative slowness of justread() in 3.3 Regarding the second, I'm not sure it's an issue because I tried a larger file (100MB and then also 300MB) and the speed of 3.3 is now on par with 3.2 and 2.7 However, the original question remains - on the 100MB file also, although in 2.7 readinto is 35% faster than readandcopy(), on 3.2 it's about the same speed (even a few % slower). That said, I now observe with Python 3.3 the same speed as with 2.7, including the readinto() speedup - so it appears that the readinto() regression has been solved in 3.3? Any clue about where it happened (i.e. which bug/changeset)? Eli -------------- next part -------------- An HTML attachment was scrubbed... URL: From eliben at gmail.com Fri Nov 25 07:41:47 2011 From: eliben at gmail.com (Eli Bendersky) Date: Fri, 25 Nov 2011 08:41:47 +0200 Subject: [Python-Dev] file.readinto performance regression in Python 3.2 vs. 2.7? In-Reply-To: References: <20111124192933.393cf96f@pitrou.net> <20111125020700.4c38aab7@pitrou.net> Message-ID: > > Eli, the use pattern I was referring to is when you read in chunks, > and and append to a running buffer. Presumably if you know in advance > the size of the data, you can readinto directly to a region of a > bytearray. There by avoiding having to allocate a temporary buffer for > the read, and creating a new buffer containing the running buffer, > plus the new. > > Strangely, I find that your readandcopy is faster at this, but not by > much, than readinto. Here's the code, it's a bit explicit, but then so > was the original: > > BUFSIZE = 0x10000 > > def justread(): > # Just read a file's contents into a string/bytes object > f = open(FILENAME, 'rb') > s = b'' > while True: > b = f.read(BUFSIZE) > if not b: > break > s += b > > def readandcopy(): > # Read a file's contents and copy them into a bytearray. > # An extra copy is done here. 
> f = open(FILENAME, 'rb') > s = bytearray() > while True: > b = f.read(BUFSIZE) > if not b: > break > s += b > > def readinto(): > # Read a file's contents directly into a bytearray, > # hopefully employing its buffer interface > f = open(FILENAME, 'rb') > s = bytearray(os.path.getsize(FILENAME)) > o = 0 > while True: > b = f.readinto(memoryview(s)[o:o+BUFSIZE]) > if not b: > break > o += b > > And the timings: > > $ python3 -O -m timeit 'import fileread_bytearray' > 'fileread_bytearray.justread()' > 10 loops, best of 3: 298 msec per loop > $ python3 -O -m timeit 'import fileread_bytearray' > 'fileread_bytearray.readandcopy()' > 100 loops, best of 3: 9.22 msec per loop > $ python3 -O -m timeit 'import fileread_bytearray' > 'fileread_bytearray.readinto()' > 100 loops, best of 3: 9.31 msec per loop > > The file was 10MB. I expected readinto to perform much better than > readandcopy. I expected readandcopy to perform slightly better than > justread. This clearly isn't the case. > > What is 'python3' on your machine? If it's 3.2, then this is consistent with my results. Try it with 3.3 and for a larger file (say ~100MB and up), you may see the same speed as on 2.7 Also, why do you think chunked reads are better here than slurping the whole file into the bytearray in one go? If you need it wholly in memory anyway, why not just issue a single read? Eli -------------- next part -------------- An HTML attachment was scrubbed... URL: From stephen at xemacs.org Fri Nov 25 08:38:28 2011 From: stephen at xemacs.org (Stephen J. Turnbull) Date: Fri, 25 Nov 2011 16:38:28 +0900 Subject: [Python-Dev] Long term development external named branches and periodic merges from python In-Reply-To: References: <4ECE741F.10303@jcea.es> <4ECE7A25.5000701@netwok.org> Message-ID: <87vcq8ps7f.fsf@uwakimon.sk.tsukuba.ac.jp> Nick Coghlan writes: > I'll stick with named branches until MQ becomes a builtin Hg > feature that better integrates with other tools. 
AFAIK MQ *is* considered to be a *stable, standard* part of Hg functionality that *happens* (for several reasons *not* including "it's not ready for Prime Time") to be packaged as an extension. If you want more/different functionality from it, you probably should file a feature request with the Mercurial developers.
From soltysh at wp.pl Fri Nov 25 09:18:38 2011 From: soltysh at wp.pl (Maciej Szulik) Date: Fri, 25 Nov 2011 09:18:38 +0100 Subject: [Python-Dev] 404 in (important) documentation in www.python.org and contributor agreement In-Reply-To: <4ECEDAC9.7040903@jcea.es> References: <4ECEDAC9.7040903@jcea.es> Message-ID: <4ecf4f5ef399d0.71104851@wp.pl> Dnia 25-11-2011 o godz. 1:01 Jesus Cea napisał(a): > > PS: The devguide doesn't say anything (AFAIK) about the contributor > agreement. There is info in the Contributing part of the devguide, follow How to Become a Core Developer link which points to http://docs.python.org/devguide/coredev.html where Contributor Agreement is mentioned. Regards, Maciej
From anacrolix at gmail.com Fri Nov 25 10:34:21 2011 From: anacrolix at gmail.com (Matt Joiner) Date: Fri, 25 Nov 2011 20:34:21 +1100 Subject: [Python-Dev] file.readinto performance regression in Python 3.2 vs. 2.7? In-Reply-To: References: <20111124192933.393cf96f@pitrou.net> <20111125020700.4c38aab7@pitrou.net> Message-ID: On Fri, Nov 25, 2011 at 5:41 PM, Eli Bendersky wrote: >> Eli, the use pattern I was referring to is when you read in chunks, >> and and append to a running buffer. Presumably if you know in advance >> the size of the data, you can readinto directly to a region of a >> bytearray. There by avoiding having to allocate a temporary buffer for >> the read, and creating a new buffer containing the running buffer, >> plus the new. >> >> Strangely, I find that your readandcopy is faster at this, but not by >> much, than readinto. Here's the code, it's a bit explicit, but then so >> was the original: >> >> BUFSIZE = 0x10000 >> >> def justread(): >> # Just read a file's contents into a string/bytes object >> f = open(FILENAME, 'rb') >> s = b'' >> while True: >> b = f.read(BUFSIZE) >> if not b: >> break >> s += b >> >> def readandcopy(): >> # Read a file's contents and copy them into a bytearray. >> # An extra copy is done here. >> f = open(FILENAME, 'rb') >> s = bytearray() >> while True: >> b = f.read(BUFSIZE) >> if not b: >> break >> s += b >> >> def readinto(): >> # Read a file's contents directly into a bytearray, >> # hopefully employing its buffer interface >> f = open(FILENAME, 'rb') >> s = bytearray(os.path.getsize(FILENAME)) >> o = 0 >> while True: >> b = f.readinto(memoryview(s)[o:o+BUFSIZE]) >> if not b: >> break >> o += b >> >> And the timings: >> >> $ python3 -O -m timeit 'import fileread_bytearray' >> 'fileread_bytearray.justread()' >> 10 loops, best of 3: 298 msec per loop >> $ python3 -O -m timeit 'import fileread_bytearray' >> 'fileread_bytearray.readandcopy()' >> 100 loops, best of 3: 9.22 msec per loop >> $ python3 -O -m timeit 'import fileread_bytearray' >> 'fileread_bytearray.readinto()' >> 100 loops, best of 3: 9.31 msec per loop >> >> The file was 10MB. I expected readinto to perform much better than >> readandcopy. I expected readandcopy to perform slightly better than >> justread. This clearly isn't the case. >> > > What is 'python3' on your machine? If it's 3.2, then this is consistent with > my results. Try it with 3.3 and for a larger file (say ~100MB and up), you > may see the same speed as on 2.7 It's Python 3.2. I tried it for larger files and got some interesting results.
readinto() for 10MB files, reading 10MB all at once: readinto/2.7 100 loops, best of 3: 8.6 msec per loop readinto/3.2 10 loops, best of 3: 29.6 msec per loop readinto/3.3 100 loops, best of 3: 19.5 msec per loop With 100KB chunks for the 10MB file (annotated with #): matt at stanley:~/Desktop$ for f in read bytearray_read readinto; do for v in 2.7 3.2 3.3; do echo -n "$f/$v "; "python$v" -m timeit -s 'import readinto' "readinto.$f()"; done; done read/2.7 100 loops, best of 3: 7.86 msec per loop # this is actually faster than the 10MB read read/3.2 10 loops, best of 3: 253 msec per loop # wtf? read/3.3 10 loops, best of 3: 747 msec per loop # wtf?? bytearray_read/2.7 100 loops, best of 3: 7.9 msec per loop bytearray_read/3.2 100 loops, best of 3: 7.48 msec per loop bytearray_read/3.3 100 loops, best of 3: 15.8 msec per loop # wtf? readinto/2.7 100 loops, best of 3: 8.93 msec per loop readinto/3.2 100 loops, best of 3: 10.3 msec per loop # suddenly 3.2 is performing well? readinto/3.3 10 loops, best of 3: 20.4 msec per loop Here's the code: http://pastebin.com/nUy3kWHQ > > Also, why do you think chunked reads are better here than slurping the whole > file into the bytearray in one go? If you need it wholly in memory anyway, > why not just issue a single read? Sometimes it's not available all at once, I do a lot of socket programming, so this case is of interest to me. As shown above, it's also faster for python2.7. readinto() should also be significantly faster for this case, tho it isn't. > > Eli > > From solipsis at pitrou.net Fri Nov 25 11:55:04 2011 From: solipsis at pitrou.net (Antoine Pitrou) Date: Fri, 25 Nov 2011 11:55:04 +0100 Subject: [Python-Dev] file.readinto performance regression in Python 3.2 vs. 2.7? 
References: <20111124192933.393cf96f@pitrou.net> <20111125012319.52a69ebb@pitrou.net> Message-ID: <20111125115504.65fcd400@pitrou.net> On Fri, 25 Nov 2011 08:38:48 +0200 Eli Bendersky wrote: > > Just to be clear, there were two separate issues raised here. One is the > speed regression of readinto() from 2.7 to 3.2, and the other is the > relative slowness of justread() in 3.3 > > Regarding the second, I'm not sure it's an issue because I tried a larger > file (100MB and then also 300MB) and the speed of 3.3 is now on par with > 3.2 and 2.7 > > However, the original question remains - on the 100MB file also, although > in 2.7 readinto is 35% faster than readandcopy(), on 3.2 it's about the > same speed (even a few % slower). That said, I now observe with Python 3.3 > the same speed as with 2.7, including the readinto() speedup - so it > appears that the readinto() regression has been solved in 3.3? Any clue > about where it happened (i.e. which bug/changeset)? It would probably be http://hg.python.org/cpython/rev/a1d77c6f4ec1/ Regards Antoine. From solipsis at pitrou.net Fri Nov 25 12:04:00 2011 From: solipsis at pitrou.net (Antoine Pitrou) Date: Fri, 25 Nov 2011 12:04:00 +0100 Subject: [Python-Dev] file.readinto performance regression in Python 3.2 vs. 2.7? References: <20111124192933.393cf96f@pitrou.net> <20111125020700.4c38aab7@pitrou.net> Message-ID: <20111125120400.53ce8ca1@pitrou.net> On Fri, 25 Nov 2011 20:34:21 +1100 Matt Joiner wrote: > > It's Python 3.2. I tried it for larger files and got some interesting results. 
> > readinto() for 10MB files, reading 10MB all at once: > > readinto/2.7 100 loops, best of 3: 8.6 msec per loop > readinto/3.2 10 loops, best of 3: 29.6 msec per loop > readinto/3.3 100 loops, best of 3: 19.5 msec per loop > > With 100KB chunks for the 10MB file (annotated with #): > > matt at stanley:~/Desktop$ for f in read bytearray_read readinto; do for > v in 2.7 3.2 3.3; do echo -n "$f/$v "; "python$v" -m timeit -s 'import > readinto' "readinto.$f()"; done; done > read/2.7 100 loops, best of 3: 7.86 msec per loop # this is actually > faster than the 10MB read > read/3.2 10 loops, best of 3: 253 msec per loop # wtf? > read/3.3 10 loops, best of 3: 747 msec per loop # wtf?? No "wtf" here, the read() loop is quadratic since you're building a new, larger, bytes object every iteration. Python 2 has a fragile optimization for concatenation of strings, which can avoid the quadratic behaviour on some systems (depends on realloc() being fast). > readinto/2.7 100 loops, best of 3: 8.93 msec per loop > readinto/3.2 100 loops, best of 3: 10.3 msec per loop # suddenly 3.2 > is performing well? > readinto/3.3 10 loops, best of 3: 20.4 msec per loop What if you allocate the bytearray outside of the timed function? Regards Antoine. From anacrolix at gmail.com Fri Nov 25 12:23:03 2011 From: anacrolix at gmail.com (Matt Joiner) Date: Fri, 25 Nov 2011 22:23:03 +1100 Subject: [Python-Dev] file.readinto performance regression in Python 3.2 vs. 2.7? In-Reply-To: <20111125115504.65fcd400@pitrou.net> References: <20111124192933.393cf96f@pitrou.net> <20111125012319.52a69ebb@pitrou.net> <20111125115504.65fcd400@pitrou.net> Message-ID: You can see in the tests on the largest buffer size tested, 8192, that the naive "read" actually outperforms readinto(). It's possibly by extrapolating into significantly larger buffer sizes that readinto() gets left behind. It's also reasonable to assume that this wasn't tested thoroughly. 
On Fri, Nov 25, 2011 at 9:55 PM, Antoine Pitrou wrote: > On Fri, 25 Nov 2011 08:38:48 +0200 > Eli Bendersky wrote: >> >> Just to be clear, there were two separate issues raised here. One is the >> speed regression of readinto() from 2.7 to 3.2, and the other is the >> relative slowness of justread() in 3.3 >> >> Regarding the second, I'm not sure it's an issue because I tried a larger >> file (100MB and then also 300MB) and the speed of 3.3 is now on par with >> 3.2 and 2.7 >> >> However, the original question remains - on the 100MB file also, although >> in 2.7 readinto is 35% faster than readandcopy(), on 3.2 it's about the >> same speed (even a few % slower). That said, I now observe with Python 3.3 >> the same speed as with 2.7, including the readinto() speedup - so it >> appears that the readinto() regression has been solved in 3.3? Any clue >> about where it happened (i.e. which bug/changeset)? > > It would probably be http://hg.python.org/cpython/rev/a1d77c6f4ec1/ > > Regards > > Antoine. > > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/anacrolix%40gmail.com > From anacrolix at gmail.com Fri Nov 25 12:37:49 2011 From: anacrolix at gmail.com (Matt Joiner) Date: Fri, 25 Nov 2011 22:37:49 +1100 Subject: [Python-Dev] file.readinto performance regression in Python 3.2 vs. 2.7? In-Reply-To: <20111125120400.53ce8ca1@pitrou.net> References: <20111124192933.393cf96f@pitrou.net> <20111125020700.4c38aab7@pitrou.net> <20111125120400.53ce8ca1@pitrou.net> Message-ID: On Fri, Nov 25, 2011 at 10:04 PM, Antoine Pitrou wrote: > On Fri, 25 Nov 2011 20:34:21 +1100 > Matt Joiner wrote: >> >> It's Python 3.2. I tried it for larger files and got some interesting results. 
>> >> readinto() for 10MB files, reading 10MB all at once: >> >> readinto/2.7 100 loops, best of 3: 8.6 msec per loop >> readinto/3.2 10 loops, best of 3: 29.6 msec per loop >> readinto/3.3 100 loops, best of 3: 19.5 msec per loop >> >> With 100KB chunks for the 10MB file (annotated with #): >> >> matt at stanley:~/Desktop$ for f in read bytearray_read readinto; do for >> v in 2.7 3.2 3.3; do echo -n "$f/$v "; "python$v" -m timeit -s 'import >> readinto' "readinto.$f()"; done; done >> read/2.7 100 loops, best of 3: 7.86 msec per loop # this is actually >> faster than the 10MB read >> read/3.2 10 loops, best of 3: 253 msec per loop # wtf? >> read/3.3 10 loops, best of 3: 747 msec per loop # wtf?? > > No "wtf" here, the read() loop is quadratic since you're building a > new, larger, bytes object every iteration. ?Python 2 has a fragile > optimization for concatenation of strings, which can avoid the > quadratic behaviour on some systems (depends on realloc() being fast). Is there any way to bring back that optimization? a 30 to 100x slow down on probably one of the most common operations... string contatenation, is very noticeable. In python3.3, this is representing a 0.7s stall building a 10MB string. Python 2.7 did this in 0.007s. > >> readinto/2.7 100 loops, best of 3: 8.93 msec per loop >> readinto/3.2 100 loops, best of 3: 10.3 msec per loop # suddenly 3.2 >> is performing well? >> readinto/3.3 10 loops, best of 3: 20.4 msec per loop > > What if you allocate the bytearray outside of the timed function? This change makes readinto() faster for 100K chunks than the other 2 methods and clears the differences between the versions. readinto/2.7 100 loops, best of 3: 6.54 msec per loop readinto/3.2 100 loops, best of 3: 7.64 msec per loop readinto/3.3 100 loops, best of 3: 7.39 msec per loop Updated test code: http://pastebin.com/8cEYG3BD > > Regards > > Antoine. 
> > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/anacrolix%40gmail.com > So as I think Eli suggested, the readinto() performance issue goes away with large enough reads, I'd put down the differences to some unrelated language changes. However the performance drop on read(): Python 3.2 is 30x slower than 2.7, and 3.3 is 100x slower than 2.7. From eliben at gmail.com Fri Nov 25 12:56:05 2011 From: eliben at gmail.com (Eli Bendersky) Date: Fri, 25 Nov 2011 13:56:05 +0200 Subject: [Python-Dev] file.readinto performance regression in Python 3.2 vs. 2.7? In-Reply-To: <20111125115504.65fcd400@pitrou.net> References: <20111124192933.393cf96f@pitrou.net> <20111125012319.52a69ebb@pitrou.net> <20111125115504.65fcd400@pitrou.net> Message-ID: > > However, the original question remains - on the 100MB file also, although > > in 2.7 readinto is 35% faster than readandcopy(), on 3.2 it's about the > > same speed (even a few % slower). That said, I now observe with Python > 3.3 > > the same speed as with 2.7, including the readinto() speedup - so it > > appears that the readinto() regression has been solved in 3.3? Any clue > > about where it happened (i.e. which bug/changeset)? > > It would probably be http://hg.python.org/cpython/rev/a1d77c6f4ec1/ > Great, thanks. This is an important change, definitely something to wait for in 3.3 Eli -------------- next part -------------- An HTML attachment was scrubbed... URL: From anacrolix at gmail.com Fri Nov 25 13:02:53 2011 From: anacrolix at gmail.com (Matt Joiner) Date: Fri, 25 Nov 2011 23:02:53 +1100 Subject: [Python-Dev] file.readinto performance regression in Python 3.2 vs. 2.7? 
In-Reply-To: References: <20111124192933.393cf96f@pitrou.net> <20111125012319.52a69ebb@pitrou.net> <20111125115504.65fcd400@pitrou.net> Message-ID: I was under the impression this is already in 3.3? On Nov 25, 2011 10:58 PM, "Eli Bendersky" wrote: > > >> > However, the original question remains - on the 100MB file also, although >> > in 2.7 readinto is 35% faster than readandcopy(), on 3.2 it's about the >> > same speed (even a few % slower). That said, I now observe with Python 3.3 >> > the same speed as with 2.7, including the readinto() speedup - so it >> > appears that the readinto() regression has been solved in 3.3? Any clue >> > about where it happened (i.e. which bug/changeset)? >> >> It would probably be http://hg.python.org/cpython/rev/a1d77c6f4ec1/ > > > Great, thanks. This is an important change, definitely something to wait for in 3.3 > Eli > > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/anacrolix%40gmail.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: From eliben at gmail.com Fri Nov 25 13:14:02 2011 From: eliben at gmail.com (Eli Bendersky) Date: Fri, 25 Nov 2011 14:14:02 +0200 Subject: [Python-Dev] file.readinto performance regression in Python 3.2 vs. 2.7? In-Reply-To: References: <20111124192933.393cf96f@pitrou.net> <20111125012319.52a69ebb@pitrou.net> <20111125115504.65fcd400@pitrou.net> Message-ID: On Fri, Nov 25, 2011 at 14:02, Matt Joiner wrote: > I was under the impression this is already in 3.3? > Sure, but 3.3 wasn't released yet. Eli P.S. Top-posting again ;-) -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From solipsis at pitrou.net Fri Nov 25 13:11:57 2011 From: solipsis at pitrou.net (Antoine Pitrou) Date: Fri, 25 Nov 2011 13:11:57 +0100 Subject: [Python-Dev] file.readinto performance regression in Python 3.2 vs. 2.7? In-Reply-To: References: <20111124192933.393cf96f@pitrou.net> <20111125020700.4c38aab7@pitrou.net> <20111125120400.53ce8ca1@pitrou.net> Message-ID: <20111125131157.343ff63b@pitrou.net> On Fri, 25 Nov 2011 22:37:49 +1100 Matt Joiner wrote: > On Fri, Nov 25, 2011 at 10:04 PM, Antoine Pitrou wrote: > > On Fri, 25 Nov 2011 20:34:21 +1100 > > Matt Joiner wrote: > >> > >> It's Python 3.2. I tried it for larger files and got some interesting results. > >> > >> readinto() for 10MB files, reading 10MB all at once: > >> > >> readinto/2.7 100 loops, best of 3: 8.6 msec per loop > >> readinto/3.2 10 loops, best of 3: 29.6 msec per loop > >> readinto/3.3 100 loops, best of 3: 19.5 msec per loop > >> > >> With 100KB chunks for the 10MB file (annotated with #): > >> > >> matt at stanley:~/Desktop$ for f in read bytearray_read readinto; do for > >> v in 2.7 3.2 3.3; do echo -n "$f/$v "; "python$v" -m timeit -s 'import > >> readinto' "readinto.$f()"; done; done > >> read/2.7 100 loops, best of 3: 7.86 msec per loop # this is actually > >> faster than the 10MB read > >> read/3.2 10 loops, best of 3: 253 msec per loop # wtf? > >> read/3.3 10 loops, best of 3: 747 msec per loop # wtf?? > > > > No "wtf" here, the read() loop is quadratic since you're building a > > new, larger, bytes object every iteration. ?Python 2 has a fragile > > optimization for concatenation of strings, which can avoid the > > quadratic behaviour on some systems (depends on realloc() being fast). > > Is there any way to bring back that optimization? a 30 to 100x slow > down on probably one of the most common operations... string > contatenation, is very noticeable. In python3.3, this is representing > a 0.7s stall building a 10MB string. Python 2.7 did this in 0.007s. 
Well, extending a bytearray() (as you saw yourself) is the proper solution in such cases. Note that you probably won't see a difference when concatenating very small strings. It would be interesting if you could run the same benchmarks on other OSes (Windows or OS X, for example). Regards Antoine. From p.f.moore at gmail.com Fri Nov 25 15:48:00 2011 From: p.f.moore at gmail.com (Paul Moore) Date: Fri, 25 Nov 2011 14:48:00 +0000 Subject: [Python-Dev] file.readinto performance regression in Python 3.2 vs. 2.7? In-Reply-To: References: <20111124192933.393cf96f@pitrou.net> <20111125020700.4c38aab7@pitrou.net> <20111125120400.53ce8ca1@pitrou.net> Message-ID: On 25 November 2011 11:37, Matt Joiner wrote: >> No "wtf" here, the read() loop is quadratic since you're building a >> new, larger, bytes object every iteration. ?Python 2 has a fragile >> optimization for concatenation of strings, which can avoid the >> quadratic behaviour on some systems (depends on realloc() being fast). > > Is there any way to bring back that optimization? a 30 to 100x slow > down on probably one of the most common operations... string > contatenation, is very noticeable. In python3.3, this is representing > a 0.7s stall building a 10MB string. Python 2.7 did this in 0.007s. It's a fundamental, but sadly not well-understood, consequence of having immutable strings. Concatenating immutable strings in a loop is quadratic. There are many ways of working around it (languages like C# and Java have string builder classes, I believe, and in Python you can use StringIO or build a list and join at the end) but that's as far as it goes. The optimisation mentioned was an attempt (by mutating an existing string when the runtime determined that it was safe to do so) to hide the consequences of this fact from end-users who didn't fully understand the issues. 
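The workarounds Paul mentions — accumulating parts in a list and joining once at the end, or writing into an in-memory buffer — look like this in outline (a minimal sketch; both are linear in the total output length, unlike repeated concatenation):

```python
from io import StringIO

# Workaround 1: collect pieces in a list, join once at the end.
parts = []
for i in range(5):
    parts.append(str(i))
joined = "".join(parts)

# Workaround 2: write pieces into an in-memory text buffer.
buf = StringIO()
for i in range(5):
    buf.write(str(i))
buffered = buf.getvalue()

assert joined == buffered == "01234"
```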
It was relatively effective, but like any such case (floating point is another common example) it did some level of harm at the same time as it helped (by obscuring the issue further). It would be nice to have the optimisation back if it's easy enough to do so, for quick-and-dirty code, but it is not a good idea to rely on it (and it's especially unwise to base benchmarks on it working :-)) Paul. From amauryfa at gmail.com Fri Nov 25 16:07:59 2011 From: amauryfa at gmail.com (Amaury Forgeot d'Arc) Date: Fri, 25 Nov 2011 16:07:59 +0100 Subject: [Python-Dev] file.readinto performance regression in Python 3.2 vs. 2.7? In-Reply-To: References: <20111124192933.393cf96f@pitrou.net> <20111125020700.4c38aab7@pitrou.net> <20111125120400.53ce8ca1@pitrou.net> Message-ID: 2011/11/25 Paul Moore > The optimisation mentioned was an attempt (by mutating an existing > string when the runtime determined that it was safe to do so) to hide > the consequences of this fact from end-users who didn't fully > understand the issues. It was relatively effective, but like any such > case (floating point is another common example) it did some level of > harm at the same time as it helped (by obscuring the issue further). > > It would be nice to have the optimisation back if it's easy enough to > do so, for quick-and-dirty code, but it is not a good idea to rely on > it (and it's especially unwise to base benchmarks on it working :-)) > Note that this string optimization hack is still present in Python 3, but it now acts on *unicode* strings, not bytes. -- Amaury Forgeot d'Arc -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From aahz at pythoncraft.com Fri Nov 25 16:40:12 2011 From: aahz at pythoncraft.com (Aahz) Date: Fri, 25 Nov 2011 07:40:12 -0800 Subject: [Python-Dev] webmaster@python.org address not working In-Reply-To: <4ECEDF41.2030008@jcea.es> References: <4ECEDF41.2030008@jcea.es> Message-ID: <20111125154011.GC7042@panix.com> On Fri, Nov 25, 2011, Jesus Cea wrote: > > When mailing there, I get this error. Not sure where to report. > > """ > Final-Recipient: rfc822; sdrees at sdrees.de > Original-Recipient: rfc822;webmaster at python.org > Action: failed > Status: 5.1.1 > Remote-MTA: dns; stefan.zinzdrees.de > Diagnostic-Code: smtp; 550 5.1.1 : Recipient address > rejected: User unknown in local recipient table > """ You reported it to the correct place, I pinged Stefan at the contact address listed by whois. Note that webmaster at python.org is a plain alias, so anyone whose e-mail isn't working will generate a bounce. -- Aahz (aahz at pythoncraft.com) <*> http://www.pythoncraft.com/ WiFi is the SCSI of the 21st Century -- there are fundamental technical reasons for sacrificing a goat. (with no apologies to John Woods) From p.f.moore at gmail.com Fri Nov 25 16:48:23 2011 From: p.f.moore at gmail.com (Paul Moore) Date: Fri, 25 Nov 2011 15:48:23 +0000 Subject: [Python-Dev] file.readinto performance regression in Python 3.2 vs. 2.7? In-Reply-To: References: <20111124192933.393cf96f@pitrou.net> <20111125020700.4c38aab7@pitrou.net> <20111125120400.53ce8ca1@pitrou.net> Message-ID: On 25 November 2011 15:07, Amaury Forgeot d'Arc wrote: > 2011/11/25 Paul Moore >> It would be nice to have the optimisation back if it's easy enough to >> do so, for quick-and-dirty code, but it is not a good idea to rely on >> it (and it's especially unwise to base benchmarks on it working :-)) > > Note that this string optimization hack is still present in Python 3, > but it now acts on *unicode* strings, not bytes. Ah, yes. That makes sense. 
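The asymmetry Amaury describes is easy to see with a pair of loops like the ones below: on CPython 3.x the str version can often be extended in place (when the interpreter sees the reference count is 1), while the bytes version always allocates a fresh object on each `+=`. The results are identical either way — only the running time differs — and the in-place behaviour is a CPython implementation detail, not a language guarantee:

```python
def build_str(n):
    # CPython may resize s in place when nothing else references it.
    s = ""
    for _ in range(n):
        s += "x"
    return s

def build_bytes(n):
    # No such hack for bytes: each += builds a new object, so this is quadratic.
    b = b""
    for _ in range(n):
        b += b"x"
    return b
```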
Paul From fuzzyman at voidspace.org.uk Fri Nov 25 16:50:25 2011 From: fuzzyman at voidspace.org.uk (Michael Foord) Date: Fri, 25 Nov 2011 15:50:25 +0000 Subject: [Python-Dev] file.readinto performance regression in Python 3.2 vs. 2.7? In-Reply-To: References: <20111125020700.4c38aab7@pitrou.net> <20111125120400.53ce8ca1@pitrou.net> Message-ID: <4ECFB941.7040206@voidspace.org.uk> On 25/11/2011 15:48, Paul Moore wrote: > On 25 November 2011 15:07, Amaury Forgeot d'Arc wrote: >> 2011/11/25 Paul Moore >>> It would be nice to have the optimisation back if it's easy enough to >>> do so, for quick-and-dirty code, but it is not a good idea to rely on >>> it (and it's especially unwise to base benchmarks on it working :-)) >> Note that this string optimization hack is still present in Python 3, >> but it now acts on *unicode* strings, not bytes. > Ah, yes. That makes sense. Although for concatenating immutable bytes presumably the same hack would be *possible*. Michael > Paul > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/fuzzyman%40voidspace.org.uk > -- http://www.voidspace.org.uk/ May you do good and not evil May you find forgiveness for yourself and forgive others May you share freely, never taking more than you give. -- the sqlite blessing http://www.sqlite.org/different.html From status at bugs.python.org Fri Nov 25 18:07:30 2011 From: status at bugs.python.org (Python tracker) Date: Fri, 25 Nov 2011 18:07:30 +0100 (CET) Subject: [Python-Dev] Summary of Python tracker Issues Message-ID: <20111125170730.4BA8A1CC5C@psf.upfronthosting.co.za> ACTIVITY SUMMARY (2011-11-18 - 2011-11-25) Python tracker at http://bugs.python.org/ To view or respond to any of the issues listed below, click on the issue. Do NOT respond to this message. 
Issues counts and deltas: open 3134 (+19) closed 22128 (+31) total 25262 (+50) Open issues with patches: 1328 Issues opened (41) ================== #2286: Stack overflow exception caused by test_marshal on Windows x64 http://bugs.python.org/issue2286 reopened by brian.curtin #13387: suggest assertIs(type(obj), cls) for exact type checking http://bugs.python.org/issue13387 reopened by eric.araujo #13433: String format documentation contains error regarding %g http://bugs.python.org/issue13433 opened by Christian.Iversen #13434: time.xmlrpc.com dead http://bugs.python.org/issue13434 opened by pitrou #13435: Copybutton does not hide tracebacks http://bugs.python.org/issue13435 opened by lehmannro #13436: compile() doesn't work on ImportFrom with level=None http://bugs.python.org/issue13436 opened by Janosch.Gr??f #13437: Provide links to the source code for every module in the docum http://bugs.python.org/issue13437 opened by Julian #13438: "Delete patch set" review action doesn't work http://bugs.python.org/issue13438 opened by Oleg.Plakhotnyuk #13439: turtle: Errors in docstrings of onkey and onkeypress http://bugs.python.org/issue13439 opened by smichr #13440: Explain the "status quo wins a stalemate" principle in the dev http://bugs.python.org/issue13440 opened by ncoghlan #13441: TestEnUSCollation.test_strxfrm() fails on Solaris http://bugs.python.org/issue13441 opened by haypo #13443: wrong links and examples in the functional HOWTO http://bugs.python.org/issue13443 opened by eli.bendersky #13444: closed stdout causes error on stderr when the interpreter unco http://bugs.python.org/issue13444 opened by Ronny.Pfannschmidt #13445: Enable linking the module pysqlite with Berkeley DB SQL instea http://bugs.python.org/issue13445 opened by Lauren.Foutz #13446: imaplib, fetch: improper behaviour on read-only selected mailb http://bugs.python.org/issue13446 opened by char.nikolaou #13447: Add tests for Tools/scripts/reindent.py http://bugs.python.org/issue13447 opened 
by eric.araujo #13448: PEP 3155 implementation http://bugs.python.org/issue13448 opened by pitrou #13449: sched - provide an "async" argument for run() method http://bugs.python.org/issue13449 opened by giampaolo.rodola #13450: add assertions to implement the intent in ''.format_map test http://bugs.python.org/issue13450 opened by akira #13451: sched.py: speedup cancel() method http://bugs.python.org/issue13451 opened by giampaolo.rodola #13452: PyUnicode_EncodeDecimal: reject error handlers different than http://bugs.python.org/issue13452 opened by haypo #13453: Tests and network timeouts http://bugs.python.org/issue13453 opened by haypo #13454: crash when deleting one pair from tee() http://bugs.python.org/issue13454 opened by PyryP #13455: Reorganize tracker docs in the devguide http://bugs.python.org/issue13455 opened by ezio.melotti #13456: Providing a custom HTTPResponse class to HTTPConnection http://bugs.python.org/issue13456 opened by r.david.murray #13461: Error on test_issue_1395_5 with Python 2.7 and VS2010 http://bugs.python.org/issue13461 opened by sable #13462: Improve code and tests for Mixin2to3 http://bugs.python.org/issue13462 opened by eric.araujo #13463: Fix parsing of package_data http://bugs.python.org/issue13463 opened by eric.araujo #13464: HTTPResponse is missing an implementation of readinto http://bugs.python.org/issue13464 opened by r.david.murray #13465: A Jython section in the dev guide would be great http://bugs.python.org/issue13465 opened by fwierzbicki #13466: new timezones http://bugs.python.org/issue13466 opened by Rioky #13467: Typo in doc for library/sysconfig http://bugs.python.org/issue13467 opened by naoki #13471: setting access time beyond Jan. 
2038 on remote share failes on http://bugs.python.org/issue13471 opened by Thorsten.Simons #13472: devguide doesn???t list all build dependencies http://bugs.python.org/issue13472 opened by eric.araujo #13473: Add tests for files byte-compiled by distutils[2] http://bugs.python.org/issue13473 opened by eric.araujo #13474: Mention of "-m" Flag Missing From Doc on Execution Model http://bugs.python.org/issue13474 opened by eric.snow #13475: Add '-p'/'--path0' command line option to override sys.path[0] http://bugs.python.org/issue13475 opened by ncoghlan #13476: Simple exclusion filter for unittest autodiscovery http://bugs.python.org/issue13476 opened by ncoghlan #13477: tarfile module should have a command line http://bugs.python.org/issue13477 opened by brandon-rhodes #13478: No documentation for timeit.default_timer http://bugs.python.org/issue13478 opened by sandro.tosi #13479: pickle too picky on re-defined classes http://bugs.python.org/issue13479 opened by kxroberto Most recent 15 issues with no replies (15) ========================================== #13478: No documentation for timeit.default_timer http://bugs.python.org/issue13478 #13476: Simple exclusion filter for unittest autodiscovery http://bugs.python.org/issue13476 #13474: Mention of "-m" Flag Missing From Doc on Execution Model http://bugs.python.org/issue13474 #13467: Typo in doc for library/sysconfig http://bugs.python.org/issue13467 #13464: HTTPResponse is missing an implementation of readinto http://bugs.python.org/issue13464 #13463: Fix parsing of package_data http://bugs.python.org/issue13463 #13456: Providing a custom HTTPResponse class to HTTPConnection http://bugs.python.org/issue13456 #13438: "Delete patch set" review action doesn't work http://bugs.python.org/issue13438 #13421: PyCFunction_* are not documented anywhere http://bugs.python.org/issue13421 #13413: time.daylight incorrect behavior in linux glibc http://bugs.python.org/issue13413 #13408: Rename packaging.resources back to 
datafiles http://bugs.python.org/issue13408 #13403: Option for XMLPRC Server to support HTTPS http://bugs.python.org/issue13403 #13397: Option for XMLRPC clients to automatically transform Fault exc http://bugs.python.org/issue13397 #13372: handle_close called twice in poll2 http://bugs.python.org/issue13372 #13369: timeout with exit code 0 while re-running failed tests http://bugs.python.org/issue13369 Most recent 15 issues waiting for review (15) ============================================= #13473: Add tests for files byte-compiled by distutils[2] http://bugs.python.org/issue13473 #13455: Reorganize tracker docs in the devguide http://bugs.python.org/issue13455 #13452: PyUnicode_EncodeDecimal: reject error handlers different than http://bugs.python.org/issue13452 #13451: sched.py: speedup cancel() method http://bugs.python.org/issue13451 #13450: add assertions to implement the intent in ''.format_map test http://bugs.python.org/issue13450 #13449: sched - provide an "async" argument for run() method http://bugs.python.org/issue13449 #13448: PEP 3155 implementation http://bugs.python.org/issue13448 #13446: imaplib, fetch: improper behaviour on read-only selected mailb http://bugs.python.org/issue13446 #13436: compile() doesn't work on ImportFrom with level=None http://bugs.python.org/issue13436 #13429: provide __file__ to extension init function http://bugs.python.org/issue13429 #13420: newer() function in dep_util.py discard changes in the same se http://bugs.python.org/issue13420 #13410: String formatting bug in interactive mode http://bugs.python.org/issue13410 #13405: Add DTrace probes http://bugs.python.org/issue13405 #13402: Document absoluteness of sys.executable http://bugs.python.org/issue13402 #13396: new method random.getrandbytes() http://bugs.python.org/issue13396 Top 10 most discussed issues (10) ================================= #13441: TestEnUSCollation.test_strxfrm() fails on Solaris http://bugs.python.org/issue13441 13 msgs #10318: "make 
altinstall" installs many files with incorrect shebangs http://bugs.python.org/issue10318 9 msgs #13429: provide __file__ to extension init function http://bugs.python.org/issue13429 9 msgs #13448: PEP 3155 implementation http://bugs.python.org/issue13448 9 msgs #12328: multiprocessing's overlapped PipeConnection on Windows http://bugs.python.org/issue12328 8 msgs #13455: Reorganize tracker docs in the devguide http://bugs.python.org/issue13455 8 msgs #9530: integer undefined behaviors http://bugs.python.org/issue9530 7 msgs #12776: argparse: type conversion function should be called only once http://bugs.python.org/issue12776 7 msgs #12890: cgitb displays
<p>
by pitrou #13425: http.client.HTTPMessage.getallmatchingheaders() always returns http://bugs.python.org/issue13425 closed by stachjankowski #13431: Pass context information into the extension module init functi http://bugs.python.org/issue13431 closed by scoder #13432: Encoding alias "unicode" http://bugs.python.org/issue13432 closed by georg.brandl #13442: Better support for pipe I/O encoding in subprocess http://bugs.python.org/issue13442 closed by ncoghlan #13457: Display module name as string in `ImportError` http://bugs.python.org/issue13457 closed by cool-RR #13458: _ssl memory leak in _get_peer_alt_names http://bugs.python.org/issue13458 closed by pitrou #13459: logger.propagate=True behavior clarification http://bugs.python.org/issue13459 closed by python-dev #13460: urllib methods should demand unicode, instead demand str http://bugs.python.org/issue13460 closed by r.david.murray #13468: Python 2.7.1 SegmentationFaults when given high recursion limi http://bugs.python.org/issue13468 closed by benjamin.peterson #13469: TimedRotatingFileHandler fails to handle intervals of several http://bugs.python.org/issue13469 closed by vinay.sajip #13470: A user may need ... when she has ... http://bugs.python.org/issue13470 closed by pitrou #13480: range exits loop without action when start is higher than end http://bugs.python.org/issue13480 closed by r.david.murray From brett at python.org Fri Nov 25 18:37:59 2011 From: brett at python.org (Brett Cannon) Date: Fri, 25 Nov 2011 12:37:59 -0500 Subject: [Python-Dev] PyPy 1.7 - widening the sweet spot In-Reply-To: References: <6F9490B9-DBEF-4CF6-89B7-26EA0C8A88E2@underboss.org> Message-ID: On Thu, Nov 24, 2011 at 07:46, Nick Coghlan wrote: > On Thu, Nov 24, 2011 at 10:20 PM, Maciej Fijalkowski > wrote: > > The problem is not with maintaining the modified directory. 
The > > problem was always things like changing interface between the C > > version and the Python version or introduction of new stuff that does > > not run on pypy because it relies on refcounting. I don't see how > > having a subrepo helps here. > > Indeed, the main thing that can help on this front is to get more > modules to the same state as heapq, io, datetime (and perhaps a few > others that have slipped my mind) where the CPython repo actually > contains both C and Python implementations and the test suite > exercises both to make sure their interfaces remain suitably > consistent (even though, during normal operation, CPython users will > only ever hit the C accelerated version). > > This not only helps other implementations (by keeping a Python version > of the module continuously up to date with any semantic changes), but > can help people that are porting CPython to new platforms: the C > extension modules are far more likely to break in that situation than > the pure Python equivalents, and a relatively slow fallback is often > going to be better than no fallback at all. (Note that ctypes based > pure Python modules *aren't* particularly useful for this purpose, > though - due to the libffi dependency, ctypes is one of the extension > modules most likely to break when porting). > And the other reason I plan to see this through before I die is to help distribute the maintenance burden. Why should multiple VMs fix bad assumptions made by CPython in their own siloed repos and then we hope the change gets pushed upstream to CPython when it could be fixed once in a single repo that everyone works off of? -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From solipsis at pitrou.net Fri Nov 25 18:37:46 2011 From: solipsis at pitrou.net (Antoine Pitrou) Date: Fri, 25 Nov 2011 18:37:46 +0100 Subject: [Python-Dev] PyPy 1.7 - widening the sweet spot References: <6F9490B9-DBEF-4CF6-89B7-26EA0C8A88E2@underboss.org> Message-ID: <20111125183746.45ab20b5@pitrou.net> On Fri, 25 Nov 2011 12:37:59 -0500 Brett Cannon wrote: > On Thu, Nov 24, 2011 at 07:46, Nick Coghlan wrote: > > > On Thu, Nov 24, 2011 at 10:20 PM, Maciej Fijalkowski > > wrote: > > > The problem is not with maintaining the modified directory. The > > > problem was always things like changing interface between the C > > > version and the Python version or introduction of new stuff that does > > > not run on pypy because it relies on refcounting. I don't see how > > > having a subrepo helps here. > > > > Indeed, the main thing that can help on this front is to get more > > modules to the same state as heapq, io, datetime (and perhaps a few > > others that have slipped my mind) where the CPython repo actually > > contains both C and Python implementations and the test suite > > exercises both to make sure their interfaces remain suitably > > consistent (even though, during normal operation, CPython users will > > only ever hit the C accelerated version). > > > > This not only helps other implementations (by keeping a Python version > > of the module continuously up to date with any semantic changes), but > > can help people that are porting CPython to new platforms: the C > > extension modules are far more likely to break in that situation than > > the pure Python equivalents, and a relatively slow fallback is often > > going to be better than no fallback at all. (Note that ctypes based > > pure Python modules *aren't* particularly useful for this purpose, > > though - due to the libffi dependency, ctypes is one of the extension > > modules most likely to break when porting). > > > > And the other reason I plan to see this through before I die Uh! 
Any bad news? :/ From amauryfa at gmail.com Fri Nov 25 19:21:48 2011 From: amauryfa at gmail.com (Amaury Forgeot d'Arc) Date: Fri, 25 Nov 2011 19:21:48 +0100 Subject: [Python-Dev] PyPy 1.7 - widening the sweet spot In-Reply-To: References: <6F9490B9-DBEF-4CF6-89B7-26EA0C8A88E2@underboss.org> Message-ID: 2011/11/25 Brett Cannon > > > On Thu, Nov 24, 2011 at 07:46, Nick Coghlan wrote: > >> On Thu, Nov 24, 2011 at 10:20 PM, Maciej Fijalkowski >> wrote: >> > The problem is not with maintaining the modified directory. The >> > problem was always things like changing interface between the C >> > version and the Python version or introduction of new stuff that does >> > not run on pypy because it relies on refcounting. I don't see how >> > having a subrepo helps here. >> >> Indeed, the main thing that can help on this front is to get more >> modules to the same state as heapq, io, datetime (and perhaps a few >> others that have slipped my mind) where the CPython repo actually >> contains both C and Python implementations and the test suite >> exercises both to make sure their interfaces remain suitably >> consistent (even though, during normal operation, CPython users will >> only ever hit the C accelerated version). >> >> This not only helps other implementations (by keeping a Python version >> of the module continuously up to date with any semantic changes), but >> can help people that are porting CPython to new platforms: the C >> extension modules are far more likely to break in that situation than >> the pure Python equivalents, and a relatively slow fallback is often >> going to be better than no fallback at all. (Note that ctypes based >> pure Python modules *aren't* particularly useful for this purpose, >> though - due to the libffi dependency, ctypes is one of the extension >> modules most likely to break when porting). >> > > And the other reason I plan to see this through before I die is to help > distribute the maintenance burden. 
Why should multiple VMs fix bad > assumptions made by CPython in their own siloed repos and then we hope the > change gets pushed upstream to CPython when it could be fixed once in a > single repo that everyone works off of? > PyPy copied the CPython stdlib in a directory named "2.7", which is never modified; instead, adaptations are made by copying the file into "modified-2.7", and fixed there. Both directories appear in sys.path This was done for this very reason: so that it's easy to identify the differences and suggest changes to push upstream. But this process was not very successful for several reasons: - The definition of "bad assumptions" used to be very strict. It's much much better nowadays, thanks to the ResourceWarning in 3.x for example (most changes in modified-2.7 are related to the garbage collector), and wider acceptance by the core developers of the "@impl_detail" decorators in tests. - 2.7 was already in maintenance mode, and such changes were not considered as bug fixes, so modified-2.7 never shrinks. It was a bit hard to find the motivation to fix only the 3.2 version of the stdlib, which you can not even test with PyPy! - Some modules in the stdlib rely on specific behaviors of the VM or extension modules that are not always easy to implement correctly in PyPy. The ctypes module is the most obvious example to me, but also the pickle/copy modules which were modified because of subtle differences around built-in methods (or was it the __builtins__ module?) And oh, I almost forgot distutils, which needs to parse some Makefile which of course does not exist in PyPy. - Differences between C extensions and pure Python modules are sometimes considered "undefined behaviour" and are rejected. See issue13274, this one has an happy ending, but I remember that the _pyio.py module chose to not fix some obscure reentrancy issues (which I completely agree with) -- Amaury Forgeot d'Arc -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From fuzzyman at voidspace.org.uk Fri Nov 25 23:14:04 2011 From: fuzzyman at voidspace.org.uk (Michael Foord) Date: Fri, 25 Nov 2011 22:14:04 +0000 Subject: [Python-Dev] PEP 380 In-Reply-To: References: Message-ID: On 24 Nov 2011, at 04:06, Nick Coghlan wrote: > On Thu, Nov 24, 2011 at 10:28 AM, Guido van Rossum wrote: >> Mea culpa for not keeping track, but what's the status of PEP 380? I >> really want this in Python 3.3! > > There are two relevant tracker issues (both with me for the moment). > > The main tracker issue for PEP 380 is here: http://bugs.python.org/issue11682 > > That's really just missing the doc updates - I haven't had a chance to > look at Zbyszek's latest offering on that front, but it shouldn't be > far off being complete (the *text* in his previous docs patch actually > seemed reasonable - I mainly objected to way it was organised). > > However, the PEP 380 test suite updates have a dependency on a new dis > module feature that provides an iterator over a structured description > of bytecode instructions: http://bugs.python.org/issue11816 Is it necessary to test parts of PEP 380 through bytecode structures rather than semantics? Those tests aren't going to be usable by other implementations. Michael > > I find Meador's suggestion to change the name of the new API to > something involving the word "instruction" appealing, so I plan to do > that, which will have a knock-on effect on the tests in the PEP 380 > branch. However, even once I get that done, Raymond specifically said > he wanted to review the dis module patch before I check it in, so I > don't plan to commit it until he gives the OK (either because he > reviewed it, or because he decides he's OK with it going in without > his review and he can review and potentially update it in Mercurial > any time before 3.3 is released). 
> > I currently plan to update my working branches for both of those on > the 3rd of December, so hopefully they'll be ready to go within the > next couple of weeks. > > Cheers, > Nick. > > -- > Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/fuzzyman%40voidspace.org.uk > -- http://www.voidspace.org.uk/ May you do good and not evil May you find forgiveness for yourself and forgive others May you share freely, never taking more than you give. -- the sqlite blessing http://www.sqlite.org/different.html From jcea at jcea.es Sat Nov 26 03:18:51 2011 From: jcea at jcea.es (Jesus Cea) Date: Sat, 26 Nov 2011 03:18:51 +0100 Subject: [Python-Dev] Merging 3.2 to 3.3 is messy because "Misc/NEWS" In-Reply-To: <9bf460d39f263e856f6ff5042f28dfc6@netwok.org> References: <4EB94F97.6020002@jcea.es> <9bf460d39f263e856f6ff5042f28dfc6@netwok.org> Message-ID: <4ED04C8B.8000502@jcea.es> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On 12/11/11 16:56, ?ric Araujo wrote: > Ezio and I chatted a bit about his on IRC and he may try to write > a Python parser for Misc/NEWS in order to write a fully automated > merge tool. Anything new in this front? :-) - -- Jesus Cea Avion _/_/ _/_/_/ _/_/_/ jcea at jcea.es - http://www.jcea.es/ _/_/ _/_/ _/_/ _/_/ _/_/ jabber / xmpp:jcea at jabber.org _/_/ _/_/ _/_/_/_/_/ . 
_/_/ _/_/ _/_/ _/_/ _/_/ "Things are not so easy" _/_/ _/_/ _/_/ _/_/ _/_/ _/_/ "My name is Dump, Core Dump" _/_/_/ _/_/_/ _/_/ _/_/ "El amor es poner tu felicidad en la felicidad de otro" - Leibniz -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.10 (GNU/Linux) Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/ iQCVAwUBTtBMi5lgi5GaxT1NAQLKsgP6At6qnzHknuTjq35mHfxVSOxJnMuZ8/vx 5ZXHcxCuPJud9GJz0+NEmDPImQAtRUZyV41ud9nQYIfhYE5rV4qBiK7KwMspg39o kclfRhMIPsQV3PkB4dDWy+gEkck+Q16pSzdtxbzKx7DpYk7lnFp/vsHQbNC5iqC9 pfmMny4L0s8= =NlDr -----END PGP SIGNATURE----- From ncoghlan at gmail.com Sat Nov 26 05:39:45 2011 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sat, 26 Nov 2011 14:39:45 +1000 Subject: [Python-Dev] PEP 380 In-Reply-To: References: Message-ID: On Sat, Nov 26, 2011 at 8:14 AM, Michael Foord wrote: > > On 24 Nov 2011, at 04:06, Nick Coghlan wrote: > >> On Thu, Nov 24, 2011 at 10:28 AM, Guido van Rossum wrote: >>> Mea culpa for not keeping track, but what's the status of PEP 380? I >>> really want this in Python 3.3! >> >> There are two relevant tracker issues (both with me for the moment). >> >> The main tracker issue for PEP 380 is here: http://bugs.python.org/issue11682 >> >> That's really just missing the doc updates - I haven't had a chance to >> look at Zbyszek's latest offering on that front, but it shouldn't be >> far off being complete (the *text* in his previous docs patch actually >> seemed reasonable - I mainly objected to way it was organised). >> >> However, the PEP 380 test suite updates have a dependency on a new dis >> module feature that provides an iterator over a structured description >> of bytecode instructions: http://bugs.python.org/issue11816 > > > Is it necessary to test parts of PEP 380 through bytecode structures rather than semantics? Those tests aren't going to be usable by other implementations. 
The affected tests aren't testing the PEP 380 semantics, they're specifically testing CPython's bytecode generation for yield from expressions and disassembly of same. Just because they aren't of any interest to other implementations doesn't mean *we* don't need them :) There are plenty of behavioural tests to go along with the bytecode specific ones, and those *will* be useful to other implementations. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From raymond.hettinger at gmail.com Sat Nov 26 06:14:59 2011 From: raymond.hettinger at gmail.com (Raymond Hettinger) Date: Fri, 25 Nov 2011 21:14:59 -0800 Subject: [Python-Dev] Merging 3.2 to 3.3 is messy because "Misc/NEWS" In-Reply-To: <4ED04C8B.8000502@jcea.es> References: <4EB94F97.6020002@jcea.es> <9bf460d39f263e856f6ff5042f28dfc6@netwok.org> <4ED04C8B.8000502@jcea.es> Message-ID: <52625A45-0613-43DE-9892-2EB6DFA635C2@gmail.com> On Nov 25, 2011, at 6:18 PM, Jesus Cea wrote: > On 12/11/11 16:56, Éric Araujo wrote: >> Ezio and I chatted a bit about this on IRC and he may try to write >> a Python parser for Misc/NEWS in order to write a fully automated >> merge tool. > > Anything new in this front? :-) To me, it would make more sense to split the file into a Misc/NEWS3.2 and Misc/NEWS3.3 much as we've done with whatsnew. That would make merging a piece of cake and would avoid adding a parser (and its idiosyncrasies) to the toolchain. Raymond
From ncoghlan at gmail.com Sat Nov 26 06:29:34 2011 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sat, 26 Nov 2011 15:29:34 +1000 Subject: [Python-Dev] Merging 3.2 to 3.3 is messy because "Misc/NEWS" In-Reply-To: <52625A45-0613-43DE-9892-2EB6DFA635C2@gmail.com> References: <4EB94F97.6020002@jcea.es> <9bf460d39f263e856f6ff5042f28dfc6@netwok.org> <4ED04C8B.8000502@jcea.es> <52625A45-0613-43DE-9892-2EB6DFA635C2@gmail.com> Message-ID: On Sat, Nov 26, 2011 at 3:14 PM, Raymond Hettinger wrote: > > On Nov 25, 2011, at 6:18 PM, Jesus Cea wrote: > > On 12/11/11 16:56, Éric Araujo wrote: > > Ezio and I chatted a bit about this on IRC and he may try to write > > a Python parser for Misc/NEWS in order to write a fully automated > > merge tool. > > Anything new in this front? :-) > > To me, it would make more sense to split the file into a Misc/NEWS3.2 and > Misc/NEWS3.3 much as we've done with whatsnew. That would make merging a > piece of cake and would avoid adding a parser (and its idiosyncrasies) to the > toolchain. +1 A simple-but-it-works approach to this problem sounds good to me. We'd still need to work out a few conventions about how changes that affect both versions get recorded (I still favour putting independent entries in both files), but simply eliminating the file name collision will also eliminate most of the merge conflicts. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From fijall at gmail.com Sat Nov 26 08:46:29 2011 From: fijall at gmail.com (Maciej Fijalkowski) Date: Sat, 26 Nov 2011 09:46:29 +0200 Subject: [Python-Dev] PEP 380 In-Reply-To: References: Message-ID: On Sat, Nov 26, 2011 at 6:39 AM, Nick Coghlan wrote: > On Sat, Nov 26, 2011 at 8:14 AM, Michael Foord > wrote: >> >> On 24 Nov 2011, at 04:06, Nick Coghlan wrote: >> >>> On Thu, Nov 24, 2011 at 10:28 AM, Guido van Rossum wrote: >>>> Mea culpa for not keeping track, but what's the status of PEP 380? I >>>> really want this in Python 3.3!
>>> >>> There are two relevant tracker issues (both with me for the moment). >>> >>> The main tracker issue for PEP 380 is here: http://bugs.python.org/issue11682 >>> >>> That's really just missing the doc updates - I haven't had a chance to >>> look at Zbyszek's latest offering on that front, but it shouldn't be >>> far off being complete (the *text* in his previous docs patch actually >>> seemed reasonable - I mainly objected to way it was organised). >>> >>> However, the PEP 380 test suite updates have a dependency on a new dis >>> module feature that provides an iterator over a structured description >>> of bytecode instructions: http://bugs.python.org/issue11816 >> >> >> Is it necessary to test parts of PEP 380 through bytecode structures rather than semantics? Those tests aren't going to be usable by other implementations. > > The affected tests aren't testing the PEP 380 semantics, they're > specifically testing CPython's bytecode generation for yield from > expressions and disassembly of same. Just because they aren't of any > interest to other implementations doesn't mean *we* don't need them :) > > There are plenty of behavioural tests to go along with the bytecode > specific ones, and those *will* be useful to other implementations. > > Cheers, > Nick. > I'm with nick on this one, seems like a very useful test, just remember to mark it as @impl_detail (or however the decorator is called). 
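The dis feature referenced above (issue 11816) eventually landed as dis.get_instructions() in Python 3.4: it yields one structured Instruction per opcode instead of a printed dump. A minimal sketch of the style of bytecode-level check it enables (the function under inspection is made up for illustration):

```python
import dis

def add_one(x):
    # Made-up function, purely to have some bytecode to inspect.
    return x + 1

# Each yielded Instruction is a named tuple with opname, argval,
# offset, etc., so a test can assert on structure instead of
# scraping the text output of dis.dis().
ops = list(dis.get_instructions(add_one))
assert any(inst.argval == 1 for inst in ops)  # the constant 1 is loaded
assert any(inst.opname.startswith("RETURN") for inst in ops)  # function returns
print([inst.opname for inst in ops])
```

Exact opcode names vary between CPython versions, which is precisely why such tests are implementation details and deserve the @impl_detail-style marking discussed here.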
Cheers, fijal From fuzzyman at voidspace.org.uk Sat Nov 26 14:46:12 2011 From: fuzzyman at voidspace.org.uk (Michael Foord) Date: Sat, 26 Nov 2011 13:46:12 +0000 Subject: [Python-Dev] PEP 380 In-Reply-To: References: Message-ID: <4ED0EDA4.6050309@voidspace.org.uk> On 26/11/2011 07:46, Maciej Fijalkowski wrote: > On Sat, Nov 26, 2011 at 6:39 AM, Nick Coghlan wrote: >> On Sat, Nov 26, 2011 at 8:14 AM, Michael Foord >> wrote: >>> On 24 Nov 2011, at 04:06, Nick Coghlan wrote: >>> >>>> On Thu, Nov 24, 2011 at 10:28 AM, Guido van Rossum wrote: >>>>> Mea culpa for not keeping track, but what's the status of PEP 380? I >>>>> really want this in Python 3.3! >>>> There are two relevant tracker issues (both with me for the moment). >>>> >>>> The main tracker issue for PEP 380 is here: http://bugs.python.org/issue11682 >>>> >>>> That's really just missing the doc updates - I haven't had a chance to >>>> look at Zbyszek's latest offering on that front, but it shouldn't be >>>> far off being complete (the *text* in his previous docs patch actually >>>> seemed reasonable - I mainly objected to way it was organised). >>>> >>>> However, the PEP 380 test suite updates have a dependency on a new dis >>>> module feature that provides an iterator over a structured description >>>> of bytecode instructions: http://bugs.python.org/issue11816 >>> >>> Is it necessary to test parts of PEP 380 through bytecode structures rather than semantics? Those tests aren't going to be usable by other implementations. >> The affected tests aren't testing the PEP 380 semantics, they're >> specifically testing CPython's bytecode generation for yield from >> expressions and disassembly of same. Just because they aren't of any >> interest to other implementations doesn't mean *we* don't need them :) >> >> There are plenty of behavioural tests to go along with the bytecode >> specific ones, and those *will* be useful to other implementations. >> >> Cheers, >> Nick. 
>> > I'm with nick on this one, seems like a very useful test, just > remember to mark it as @impl_detail (or however the decorator is > called). Fair enough. :-) If other tests are failing (the semantics are wrong) then having a test that shows you the semantics are screwed because the bytecode has been incorrectly generated will be a useful diagnostic tool. On the other hand it is hard to see that bytecode generation could be "wrong" without it affecting some test of semantics that should also fail - so as tests in their own right the bytecode tests *must* be superfluous (or there is some aspect of the semantics that is *only* tested through the bytecode and that seems bad, particularly for other implementations). All the best, Michael > Cheers, > fijal > -- http://www.voidspace.org.uk/ May you do good and not evil May you find forgiveness for yourself and forgive others May you share freely, never taking more than you give. -- the sqlite blessing http://www.sqlite.org/different.html From merwok at netwok.org Sat Nov 26 14:52:32 2011 From: merwok at netwok.org (=?UTF-8?B?w4lyaWMgQXJhdWpv?=) Date: Sat, 26 Nov 2011 14:52:32 +0100 Subject: [Python-Dev] PyPy 1.7 - widening the sweet spot In-Reply-To: References: <6F9490B9-DBEF-4CF6-89B7-26EA0C8A88E2@underboss.org> Message-ID: <4ED0EF20.1060203@netwok.org> Le 25/11/2011 19:21, Amaury Forgeot d'Arc a écrit : > And oh, I almost forgot distutils, which needs to parse some Makefile which > of course does not exist in PyPy. This is a bug (#10764) that I intend to fix for the next releases of 2.7 and 3.2. I also want to fix all modules that use sys.version[:3] to get 'X.Y', which is a CPython implementation detail. I find PyPy an excellent project, so you can send any bugs in distutils, sysconfig, site and friends my way! I also hope to make distutils2 compatible with PyPy before 2012.
Cheers From merwok at netwok.org Sat Nov 26 17:13:03 2011 From: merwok at netwok.org (=?UTF-8?B?w4lyaWMgQXJhdWpv?=) Date: Sat, 26 Nov 2011 17:13:03 +0100 Subject: [Python-Dev] Merging 3.2 to 3.3 is messy because "Misc/NEWS" In-Reply-To: <4ED04C8B.8000502@jcea.es> References: <4EB94F97.6020002@jcea.es> <9bf460d39f263e856f6ff5042f28dfc6@netwok.org> <4ED04C8B.8000502@jcea.es> Message-ID: <4ED1100F.4090701@netwok.org> Le 26/11/2011 03:18, Jesus Cea a écrit : > On 12/11/11 16:56, Éric Araujo wrote: >> Ezio and I chatted a bit about this on IRC and he may try to write >> a Python parser for Misc/NEWS in order to write a fully automated >> merge tool. > Anything new in this front? :-) Not from me. I don't have the roundtuits, and I find my hgddmt script good enough. Cheers From aahz at pythoncraft.com Sat Nov 26 17:14:35 2011 From: aahz at pythoncraft.com (Aahz) Date: Sat, 26 Nov 2011 08:14:35 -0800 Subject: [Python-Dev] 404 in (important) documentation in www.python.org and contributor agreement In-Reply-To: <4ECEDAC9.7040903@jcea.es> References: <4ECEDAC9.7040903@jcea.es> Message-ID: <20111126161435.GA24540@panix.com> On Fri, Nov 25, 2011, Jesus Cea wrote: > > Checking documentation about the contributor license agreement, I had > encountered a wrong HTML link in http://www.python.org/about/help/ : > > * "Python Patch Guidelines" points to > http://www.python.org/dev/patches/, that doesn't exist. Fixed > PS: The devguide doesn't say anything (AFAIK) about the contributor > agreement. The devguide seems to now be hosted on docs.python.org and AFAIK the web team doesn't deal with that. Someone from python-dev needs to lead. -- Aahz (aahz at pythoncraft.com) <*> http://www.pythoncraft.com/ WiFi is the SCSI of the 21st Century -- there are fundamental technical reasons for sacrificing a goat.
(with no apologies to John Woods) From merwok at netwok.org Sat Nov 26 17:16:44 2011 From: merwok at netwok.org (=?UTF-8?B?w4lyaWMgQXJhdWpv?=) Date: Sat, 26 Nov 2011 17:16:44 +0100 Subject: [Python-Dev] Merging 3.2 to 3.3 is messy because "Misc/NEWS" In-Reply-To: References: <4EB94F97.6020002@jcea.es> <9bf460d39f263e856f6ff5042f28dfc6@netwok.org> <4ED04C8B.8000502@jcea.es> <52625A45-0613-43DE-9892-2EB6DFA635C2@gmail.com> Message-ID: <4ED110EC.3080702@netwok.org> Le 26/11/2011 06:29, Nick Coghlan a écrit : > On Sat, Nov 26, 2011 at 3:14 PM, Raymond Hettinger wrote: >> To me, it would make more sense to split the file into a Misc/NEWS3.2 and >> Misc/NEWS3.3 much as we've done with whatsnew. That would make merging a >> piece of cake and would avoid adding a parser (and its idiosyncrasies) to the >> toolchain. > > +1 > > A simple-but-it-works approach to this problem sounds good to me. > > We'd still need to work out a few conventions about how changes that > affect both versions get recorded (I still favour putting independent > entries in both files), but simply eliminating the file name collision > will also eliminate most of the merge conflicts. Maybe I'm not seeing something, but adding an entry by hand does not sound much better than solving conflicts by hand. Another idea: If we had different sections for bug fixes and new features (with or without another level of core/lib/doc/tests separation), then there should be fewer (no?) conflicts. Regards From merwok at netwok.org Sat Nov 26 17:23:54 2011 From: merwok at netwok.org (=?UTF-8?B?w4lyaWMgQXJhdWpv?=) Date: Sat, 26 Nov 2011 17:23:54 +0100 Subject: [Python-Dev] Long term development external named branches and periodic merges from python In-Reply-To: References: <4ECE741F.10303@jcea.es> <4ECE7A25.5000701@netwok.org> Message-ID: <4ED1129A.5090702@netwok.org> Le 24/11/2011 22:46, Xavier Morel a écrit : > Wouldn't it be simpler to just use MQ and upload the patch(es) > from the series?
MQ is a very powerful and useful tool, but its learning curve is steeper than regular Mercurial, and it is not designed for long-term development. Rebasing patches is more fragile and less user-friendly than merging branches, and it's also easier to corrupt your MQ patch queue than your Mercurial repo. I like Mercurial merges and I don't like diffs of diffs, so I avoid MQ. > Would be easier to keep in sync with the development tip too. How so? With a regular clone you have to pull and merge regularly, with MQ you have to pull and rebase. Regards From merwok at netwok.org Sat Nov 26 17:30:22 2011 From: merwok at netwok.org (=?UTF-8?B?w4lyaWMgQXJhdWpv?=) Date: Sat, 26 Nov 2011 17:30:22 +0100 Subject: [Python-Dev] Long term development external named branches and periodic merges from python In-Reply-To: <4ECEFFD4.5030601@jcea.es> References: <4ECE741F.10303@jcea.es> <4ECE7A25.5000701@netwok.org> <4ECEFFD4.5030601@jcea.es> Message-ID: <4ED1141E.9050604@netwok.org> Le 25/11/2011 03:39, Jesus Cea a écrit : > On 24/11/11 18:08, Éric Araujo wrote: >>> I have a question and I would rather have an answer instead of >>> actually trying and getting myself in a messy situation. >> Clones are cheap, trying is cheap! > [snip valid reasons for not trying] My reply was tongue-in-cheek :) FYI, it's not considered pollution to use a tracker issue to test hooks or Mercurial integration (there's even one issue entirely devoted to such tests, but I can't find its number). >>> 5. Development of the new feature is taking a long time, and >>> python canonical version keeps moving forward. The clone+branch >>> and the original python version are diverging. Eventually there >>> are changes in python that the programmer would like in her >>> version, so she does a "pull" and then a merge for the original >>> python branch to her named branch. >> I do this all the time. I work on a fix-nnnn branch, and once a >> week for example I pull and merge the base branch.
Sometimes there >> are no conflicts except Misc/NEWS, sometimes I have to adapt my >> code because of other people's changes before I can commit the >> merge. > That is good, because that means your patch is always able to be > applied to the original branch tip, and that your changes work with > current work in the mainline. > > That is what I want to do, but I need to know that it is safe to do so > (from the "Create Patch" perspective). I don't understand "safe". >>> 6. What would be posted in the bug tracker when she does a new >>> "Create Patch"?. Only her changes, her changes SINCE the merge, >>> her changes plus merged changes or something else?. >> The diff would be equivalent to "hg diff -r base" and would contain >> all the changes she did to add the bug fix or feature. Merging >> only makes sure that the computed diff does not appear to touch >> unrelated files, IOW that it applies cleanly. (Barring bugs in >> Mercurial-Roundup integration, we have a few of these in the >> metatracker.) > So you are saying that "Create patch" will ONLY get the differences in > the development branch and not the changes brought in from the merge?. I don't really understand how you understood what I said :( The merge brings in changes from default; when you diff your branch against default later, it will not show the changes brought by the merge, but it will apply cleanly on top of default. Does this wording make sense? Regards From merwok at netwok.org Sat Nov 26 17:44:59 2011 From: merwok at netwok.org (=?UTF-8?B?w4lyaWMgQXJhdWpv?=) Date: Sat, 26 Nov 2011 17:44:59 +0100 Subject: [Python-Dev] Deprecation policy In-Reply-To: <4EA560E3.8060307@gmail.com> References: <4EA560E3.8060307@gmail.com> Message-ID: <4ED1178B.6030203@netwok.org> Hi, +1 to all Ezio said.
One specific remark: PendingDeprecationWarning could just become an alias of DeprecationWarning, but maybe there is code out there that relies on the distinction, and there is no real value in making it an alias (there is value in removing it altogether, but we can't do that, can we?). I don't see the need to deprecate PDW, except in documentation, and am -1 to the metaclass idea (no need). Cheers From merwok at netwok.org Sat Nov 26 17:53:02 2011 From: merwok at netwok.org (=?UTF-8?B?w4lyaWMgQXJhdWpv?=) Date: Sat, 26 Nov 2011 17:53:02 +0100 Subject: [Python-Dev] PEP 402: Simplified Package Layout and Partitioning In-Reply-To: <20110811183114.701DF3A406B@sparrow.telecommunity.com> References: <4E43E9A6.7020608@netwok.org> <20110811183114.701DF3A406B@sparrow.telecommunity.com> Message-ID: <4ED1196E.8090505@netwok.org> Hi, Going through my email backlog. > Le 11/08/2011 20:30, P.J. Eby a écrit : >> At 04:39 PM 8/11/2011 +0200, Éric Araujo wrote: >>>> (By the way, both of these additions to the import protocol (i.e. the >>>> dynamically-added ``__path__``, and dynamically-created modules) >>>> apply recursively to child packages, using the parent package's >>>> ``__path__`` in place of ``sys.path`` as a basis for generating a >>>> child ``__path__``. This means that self-contained and virtual >>>> packages can contain each other without limitation, with the caveat >>>> that if you put a virtual package inside a self-contained one, it's >>>> gonna have a really short ``__path__``!) >>> I don't understand the caveat or its implications. >> Since each package's __path__ is the same length or shorter than its >> parent's by default, then if you put a virtual package inside a >> self-contained one, it will be functionally speaking no different >> than a self-contained one, in that it will have only one path >> entry. So, it's not really useful to put a virtual package inside a >> self-contained one, even though you can do it.
(Apart from it >> letting you avoid a superfluous __init__ module, assuming it's indeed >> superfluous.) I still don't understand why this matters or what negative effects it could have on code, but I'm fine with not understanding. I'll trust that people writing or maintaining import-related tools will agree or complain about that item. >>> I'll just regret that it's not possible to provide a module docstring >>> to inform that this is a namespace package used for X and Y. >> It *is* possible - you'd just have to put it in a "zc.py" file. IOW, >> this PEP still allows "namespace-defining packages" to exist, as was >> requested by early commenters on PEP 382. It just doesn't *require* >> them to exist in order for the namespace contents to be importable. That's quite cool. I guess such a namespace-defining module (zc.py here) would be importable, right? Also, would it cause worse performance for other zc.* packages than if there were no zc.py? >>> This was probably said on import-sig, but here I go: yet another import >>> artifact in the sys module! I hope we get ImportEngine in 3.3 to clean >>> up all this. >> Well, I rather *like* having them there, personally, vs. having to >> learn yet another API, but oh well, whatever. Agreed with "whatever" :) I just like to grunt sometimes. >> AFAIK, ImportEngine isn't going to do away with the need for the >> global ones to live somewhere, Yep, but as Nick replied, at least we'll gain one structure to rule them all. >>> Let's imagine my application Spam has a namespace spam.ext for plugins. >>> To use a custom directory where plugins are stored, or a zip file with >>> plugins (I don't use eggs, so let me talk about zip files here), I'd >>> have to call sys.path.append *and* pkgutil.extend_virtual_paths? >> As written in the current proposal, yes.
There was some discussion >> on Python-Dev about having this happen automatically, and I proposed >> that it could be done by making virtual packages' __path__ attributes >> an iterable proxy object, rather than a list: That sounds a bit too complicated. What about just having pkgutil.extend_virtual_paths call sys.path.append? For maximum flexibility, extend_virtual_paths could have an argument to avoid calling sys.path.append. >>> Besides, putting data files in a Python package is held very poorly by >>> some (mostly people following the File Hierarchy Standard), >> ISTM that anybody who thinks that is being inconsistent in >> considering the Python code itself to not be a "data file" by that >> same criterion... especially since one of the more common uses for >> such "data" files are for e.g. HTML templates (which usually contain >> some sort of code) or GUI resources (which are pretty tightly bound >> to the code). A good example is documentation: Having a unique location (/usr/share/doc) for all installed software makes my life easier. Another example is JavaScript files used with HTML documents, such as jQuery: Debian recently split the jQuery file out of their Sphinx package, so that there is only one library installed that all packages can use and that can be updated and fixed once for all. (I'm simplifying; there can be multiple versions of libraries, but not multiple copies. I'll stop here; I'm not one of the authors of the Filesystem Hierarchy Standard, and I'll rant against package_data in distutils mailing lists :) >>> A pure virtual package having no source file, I think it should have no >>> __file__ at all. Antoine and someone else thought likewise (I can find the link if you want); do you consider it consensus enough to update the PEP?
Regards From benjamin at python.org Sat Nov 26 20:36:25 2011 From: benjamin at python.org (Benjamin Peterson) Date: Sat, 26 Nov 2011 13:36:25 -0600 Subject: [Python-Dev] Merging 3.2 to 3.3 is messy because "Misc/NEWS" In-Reply-To: <52625A45-0613-43DE-9892-2EB6DFA635C2@gmail.com> References: <4EB94F97.6020002@jcea.es> <9bf460d39f263e856f6ff5042f28dfc6@netwok.org> <4ED04C8B.8000502@jcea.es> <52625A45-0613-43DE-9892-2EB6DFA635C2@gmail.com> Message-ID: 2011/11/25 Raymond Hettinger : > To me, it would make more sense to split the file into a Misc/NEWS3.2 and > Misc/NEWS3.3 much as we've done with whatsnew. That would make merging a > piece of cake and would avoid adding a parser (and its idiosyncrasies) to the > toolchain. Would we not add fixes from 3.2, which were ported to 3.3, to the NEWS3.3 file then? -- Regards, Benjamin From zbyszek at in.waw.pl Sat Nov 26 18:54:13 2011 From: zbyszek at in.waw.pl (=?UTF-8?B?WmJpZ25pZXcgSsSZZHJ6ZWpld3NraS1Tem1law==?=) Date: Sat, 26 Nov 2011 18:54:13 +0100 Subject: [Python-Dev] ImportError: No module named multiarray (is back) In-Reply-To: <4ECD1D31.7080802@netwok.org> References: <4ECBFF19.8080100@in.waw.pl> <4ECD1D31.7080802@netwok.org> Message-ID: <4ED127C5.1060004@in.waw.pl> Hi, I apologize in advance for the length of this mail. sys.path ======== When a script or a module is executed by invoking python with proper arguments, sys.path is extended. When a path to a script is given, the directory containing the script is prepended. When '-m' or '-c' is used, $CWD is prepended. This is documented in http://docs.python.org/dev/using/cmdline.html, so far ok. sys.path and $PYTHONPATH are like $PATH -- if you can convince someone to put a directory under your control in any of them, you can execute code as that someone. Therefore, sys.path is dangerous and important. Unfortunately, sys.path manipulations are only described very briefly, and without any commentary, in the on-line documentation. The python(1) manpage doesn't even mention them.
The problem: each of the commands below is insecure: python /tmp/script.py (when script.py is safe by itself) ('/tmp' is added to sys.path, so an attacker can override any module imported in /tmp/script.py by writing to /tmp/module.py) cd /tmp && python -mtimeit -s 'import numpy' 'numpy.test()' (UNIX users are accustomed to being able to safely execute programs in any directory, e.g. ls, or gcc, or something. Here '' is added to sys.path, so it is not secure to run python in other-user-writable directories.) cd /tmp/ && python -c 'import numpy; print(numpy.version.version)' (The same as above, '' is added to sys.path.) cd /tmp && python (The same as above). IMHO, if this (long-lived) behaviour is necessary, it should at least be prominently documented. Also in the manpage. Prepending realpath(dirname(scriptname)) ======================================== Before adding a directory to sys.path as described above, Python actually runs os.path.realpath over it. This means that if the path to a script given on the commandline is actually a symlink, the directory containing the real file will be the one added. This behaviour is not really documented (the documentation only says "the directory containing that file is added to the start of sys.path"), but since the integrity of sys.path is so important, it should be, IMHO. Using realpath instead of the (expected) path specified by the user breaks imports of non-pure-python (mixed .py and .so) modules from modules executed as scripts on Debian. This is because Debian installs architecture-independent python files in /usr/share/pyshared, and symlinks those files into /usr/lib/pymodules/pythonX.Y/. The architecture-dependent .so and python-version-dependent .pyc files are installed in /usr/lib/pymodules/pythonX.Y/. When a script, e.g. /usr/lib/pymodules/pythonX.Y/script.py, is executed, the directory /usr/share/pyshared is prepended to sys.path. If the script tries to import a module which has architecture-dependent parts (e.g.
numpy) it first sees the incomplete module in /usr/share/pyshared and fails. This happens for example in parallel python (http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=620551) and recently when packaging CellProfiler for Debian. Again, if this is on purpose, it should be documented. PEP 395 (Qualified Names for Modules) ===================================== PEP 395 proposes another sys.path manipulation. When running a script, the directory tree will be walked upwards as long as there are __init__.py files, and then the first directory without one will be added. This is of course a fine idea, but it makes a scenario, which was previously safe, insecure. More precisely, when executing a script in a directory whose parent directory is writable by other users, the parent directory will be added to sys.path. So the (safe) operation of downloading an archive with a package, unzipping it in /tmp, changing into the created directory, checking that the script doesn't do anything bad, and running a script is now insecure if there is an __init__.py in the archive root. I guess that it would be useful to have an option to turn off those sys.path manipulations. Zbyszek From ncoghlan at gmail.com Sun Nov 27 01:28:33 2011 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sun, 27 Nov 2011 10:28:33 +1000 Subject: [Python-Dev] ImportError: No module named multiarray (is back) In-Reply-To: <4ED127C5.1060004@in.waw.pl> References: <4ECBFF19.8080100@in.waw.pl> <4ECD1D31.7080802@netwok.org> <4ED127C5.1060004@in.waw.pl> Message-ID: 2011/11/27 Zbigniew Jędrzejewski-Szmek : > I guess that it would be useful to have an option to turn off those sys.path > manipulations. Yeah, I recently proposed exactly that (a '--nopath0' option) in http://bugs.python.org/issue13475 (that issue also proposes a "-p/--path0" option to *override* the automatic initialisation of sys.path[0] with a different value).
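In the meantime, the hazard described above can at least be detected at runtime. A rough sketch of such a defensive check is below; the helper name and the POSIX permission test are illustrative, not an existing API:

```python
import os
import sys

def risky_path_entries(entries=None):
    """Return the given sys.path entries that other users may write to (POSIX).

    '' stands for the current working directory, mirroring how the
    interpreter treats it during import.
    """
    risky = []
    for entry in (sys.path if entries is None else entries):
        directory = entry or os.getcwd()
        try:
            mode = os.stat(directory).st_mode
        except OSError:
            continue  # dead sys.path entry; nothing to check
        if mode & 0o022:  # group- or world-writable
            risky.append(directory)
    return risky

# On a typical POSIX system /tmp is world-writable, so it gets flagged.
print(risky_path_entries(['/tmp']))
```

A check like this only inspects directory permission bits, so it is a heuristic; it would not catch e.g. ACL-based write access, but it illustrates why '' and /tmp-style entries on sys.path are the dangerous ones.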
I may still make this general question part of the proposals in PEP 395, though, since it's fairly closely related to many of the issues already discussed by that PEP and is something that will need to be thought out fairly well to make sure it achieves the objective of avoiding cross-user interference. There are limits to what we can do by default due to backwards compatibility concerns, but it should at least be possible to provide better tools to help manage the problem. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From petri at digip.org Sun Nov 27 20:33:29 2011 From: petri at digip.org (Petri Lehtinen) Date: Sun, 27 Nov 2011 21:33:29 +0200 Subject: [Python-Dev] Merging 3.2 to 3.3 is messy because "Misc/NEWS" In-Reply-To: <4ED04C8B.8000502@jcea.es> References: <4EB94F97.6020002@jcea.es> <9bf460d39f263e856f6ff5042f28dfc6@netwok.org> <4ED04C8B.8000502@jcea.es> Message-ID: <20111127193329.GA2219@ihaa> Jesus Cea wrote: > On 12/11/11 16:56, Éric Araujo wrote: > > Ezio and I chatted a bit about this on IRC and he may try to write > > a Python parser for Misc/NEWS in order to write a fully automated > > merge tool. > > Anything new in this front? :-) I don't see what's the problem really. The most common case is to have one conflicting file with one conflict. I'm completely fine with removing the conflict markers and possibly moving my own entry above the other entries.
Petri From jcea at jcea.es Mon Nov 28 05:21:18 2011 From: jcea at jcea.es (Jesus Cea) Date: Mon, 28 Nov 2011 05:21:18 +0100 Subject: [Python-Dev] Long term development external named branches and periodic merges from python In-Reply-To: <4ED1141E.9050604@netwok.org> References: <4ECE741F.10303@jcea.es> <4ECE7A25.5000701@netwok.org> <4ECEFFD4.5030601@jcea.es> <4ED1141E.9050604@netwok.org> Message-ID: <4ED30C3E.4090700@jcea.es> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On 26/11/11 17:30, Éric Araujo wrote: >> That is what I want to do, but I need to know that it is safe to >> do so (from the "Create Patch" perspective). > I don't understand "safe". "Safe", in this context, means "when clicking 'create patch' the created patch ONLY includes my code in the development branch, EVEN if the branch merged-in the original mainline branch several times". >>>> 6. What would be posted in the bug tracker when she does a >>>> new "Create Patch"?. Only her changes, her changes SINCE the >>>> merge, her changes plus merged changes or something else?. >>> The diff would be equivalent to "hg diff -r base" and would >>> contain all the changes she did to add the bug fix or feature. >>> Merging only makes sure that the computed diff does not appear >>> to touch unrelated files, IOW that it applies cleanly. >>> (Barring bugs in Mercurial-Roundup integration, we have a few >>> of these in the metatracker.) >> So you are saying that "Create patch" will ONLY get the >> differences in the development branch and not the changes brought >> in from the merge?. > I don't really understand how you understood what I said :( The > merge brings in changes from default; when you diff your branch > against default later, it will not show the changes brought by the > merge, but it will apply cleanly on top of default. But I am not doing that diff, it is the tracker who is doing that diff. I agree that the following procedure would work.
In fact it is the way I used to work, before publishing my repository and using "create patch" in the tracker: 1. Branch. 2. Develop in the branch. 3. Merge changes from mainline INTO the branch. 4. Jump to 2 as many times as needed. 5. When done: 5.1. Do a final merge from mainline to branch. 5.2. Do a DIFF from branch to mainline. After 5.2, the diff shows only the code I have patched in the branch. PERFECT. But I don't know if the tracker does that or not. Without the final merge, a diff between my branch and mainline tips will show my changes PLUS the "undoing" of any change in mainline that I didn't merge in my branch. Since "create patch" (in the tracker) doesn't compare against the tip of mainline (at least not in a trivial way), I guess it is comparing against the BASE of the branch. That is ok... as far as I don't merge changes from mainline to the branch. If so, when diffing the branch tip from the branch base it will show all changes in the branch, both my code and the code imported via merges. So, in this context, if the tracker "create patch" diffs from BASE, it is not "safe" to merge changes from mainline to the branch, because if so "create patch" would include code not related to my work. I could try actually merging and clicking "create patch", but if the result is unpleasant my repository would be in a state "not compatible" with the "create patch" tool in the tracker. I would rather avoid that, if somebody knows the answer. If nobody can tell, experimentation would be the only option, although any experimental result would be suspect because the hooks can be changed later or I could be hitting some strange corner case. Another approach, the one I have taken so far, is to avoid merging from mainline while developing in a branch, just in case. But I am now hitting a situation where there are changes in mainline that overlap my effort, so I am quite forced to merge that code in, instead of dealing with a hugely divergent code base in a month.
So I have avoided merging in the past and was happy, but now I need to merge (changes from mainline) and I am unsure about what is going to happen with the "create patch" option in the tracker. Does anybody know the Mercurial command used to implement "create patch"? - -- Jesus Cea Avion _/_/ _/_/_/ _/_/_/ jcea at jcea.es - http://www.jcea.es/ _/_/ _/_/ _/_/ _/_/ _/_/ jabber / xmpp:jcea at jabber.org _/_/ _/_/ _/_/_/_/_/ . _/_/ _/_/ _/_/ _/_/ _/_/ "Things are not so easy" _/_/ _/_/ _/_/ _/_/ _/_/ _/_/ "My name is Dump, Core Dump" _/_/_/ _/_/_/ _/_/ _/_/ "El amor es poner tu felicidad en la felicidad de otro" - Leibniz -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.10 (GNU/Linux) Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/ iQCVAwUBTtMMPplgi5GaxT1NAQKzhQP8DzAql1PAJkyEROsWl8CgPpW9ie8jNM1V +K5jLx/dCukzFXrZ2Ba1Tu5IFYFZxH7Wj4rg4sQ47zlKBi6gQELgtGV+bCYPAEt/ WQo7uGUCj+xLmBKXuQQlXrl1pNl9XhlufTNXIzW34o7SPKMEQy7N7uUxpxgwV8JX KoJoYAbiH88= =9lYm -----END PGP SIGNATURE----- From ncoghlan at gmail.com Mon Nov 28 06:06:56 2011 From: ncoghlan at gmail.com (Nick Coghlan) Date: Mon, 28 Nov 2011 15:06:56 +1000 Subject: [Python-Dev] Long term development external named branches and periodic merges from python In-Reply-To: <4ED30C3E.4090700@jcea.es> References: <4ECE741F.10303@jcea.es> <4ECE7A25.5000701@netwok.org> <4ECEFFD4.5030601@jcea.es> <4ED1141E.9050604@netwok.org> <4ED30C3E.4090700@jcea.es> Message-ID: On Mon, Nov 28, 2011 at 2:21 PM, Jesus Cea wrote: > Since "create patch" (in the tracker) doesn't compare against the tip > of mainline (at least not in a trivial way), I guess it is comparing > against the BASE of the branch. That is ok... as far as I don't merge > changes from mainline to the branch. If so, when diffing the branch > tip from the branch base it will show all changes in the branch, both > my code and the code imported via merges.
> > So, in this context, if the tracker "create patch" diff from > > BASE, it is not "safe" to merge changes from mainline to the > > branch, because if so "create patch" would include code not > > related to my work. No, "Create Patch" is smarter than that. What it does (or tries to do) is walk back through your branch history, trying to find the last point where you merged in a changeset that it recognises as coming from the main CPython repo. It then uses that revision of the CPython repo as the basis for the diff. So if you're just working on a feature branch, periodically pulling from hg.python.org/cpython and merging from default, then it should all work fine. Branches-of-branches (i.e. where you've merged from CPython via another named branch in your local repo) seems to confuse it though - I plan to change my workflow for those cases to merge each branch from the same version of default before merging from the other branch. > Anybody knows the mercurial command used to implement "create patch"?. It's not a single command - it's a short script MvL wrote that uses the Mercurial API to traverse the branch history and find an appropriate revision to diff against. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From raymond.hettinger at gmail.com Mon Nov 28 10:30:53 2011 From: raymond.hettinger at gmail.com (Raymond Hettinger) Date: Mon, 28 Nov 2011 01:30:53 -0800 Subject: [Python-Dev] Deprecation policy In-Reply-To: <4EA560E3.8060307@gmail.com> References: <4EA560E3.8060307@gmail.com> Message-ID: <9D996A31-5968-4485-B953-4961B8623DB8@gmail.com> On Oct 24, 2011, at 5:58 AM, Ezio Melotti wrote: > Hi, > our current deprecation policy is not so well defined (see e.g.
[0]), and it seems to me that it's something like: > 1) deprecate something and add a DeprecationWarning; > 2) forget about it after a while; > 3) wait a few versions until someone notices it; > 4) actually remove it; > > I suggest to follow the following process: > 1) deprecate something and add a DeprecationWarning; > 2) decide how long the deprecation should last; > 3) use the deprecated-remove[1] directive to document it; > 4) add a test that fails after the update so that we remember to remove it[2]; How about we agree that actually removing things is usually bad for users. It would be best if the core devs had a strong aversion to removal. Instead, it is best to mark APIs as obsolete with a recommendation to use something else instead. There is rarely a need to actually remove support for something in the standard library. That may serve a notion of tidiness or some such, but in reality it is a PITA for users, making it more difficult to upgrade Python versions and making it more difficult to use published recipes. Raymond -------------- next part -------------- An HTML attachment was scrubbed... URL: From catch-all at masklinn.net Mon Nov 28 10:53:24 2011 From: catch-all at masklinn.net (Xavier Morel) Date: Mon, 28 Nov 2011 10:53:24 +0100 Subject: [Python-Dev] Deprecation policy In-Reply-To: <9D996A31-5968-4485-B953-4961B8623DB8@gmail.com> References: <4EA560E3.8060307@gmail.com> <9D996A31-5968-4485-B953-4961B8623DB8@gmail.com> Message-ID: On 2011-11-28, at 10:30 , Raymond Hettinger wrote: > On Oct 24, 2011, at 5:58 AM, Ezio Melotti wrote: > How about we agree that actually removing things is usually bad for users. > It would be best if the core devs had a strong aversion to removal. > Instead, it is best to mark APIs as obsolete with a recommendation to use something else instead. > There is rarely a need to actually remove support for something in the standard library.
The problem with "deprecating and not removing" (and worse, only informally deprecating by leaving a note in the documentation) is that you end up with zombie APIs: there are tons of tutorials & such on the web talking about them, they're not maintained, nobody really cares about them (but users who found them via Google) and they're all around harmful. It's the current state of many JDK 1.0 and 1.1 APIs and it's dreadful, most of them are more than a decade out of date, sometimes retrofitted for new interfaces (but APIs using them usually are *not* fixed, keeping them in their state of partial death), sometimes still *taught*, all of that because they're only informally deprecated (at best, sometimes not even that as other APIs still depend on them). It's bad for (language) users because they use outdated and partially unmaintained (at least in that it's not improved) APIs and it's bad for (language) maintainers in that once in a while they still have to dive into those things and fix bugs cropping up without the better understanding they have from the old APIs or the cleaner codebase they got from it. Not being too eager to kill APIs is good, but giving rise to this kind of living-dead APIs is no better in my opinion, even more so since Python has lost one of the few tools it had to manage them (as DeprecationWarning was silenced by default). Both choices are harmful to users, but in the long run I do think zombie APIs are worse. 
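Xavier's aside about DeprecationWarning being silenced by default refers to the change in 2.7/3.2 where those warnings stopped being shown to end users. For anyone following along, turning them back on is a one-liner with the standard warnings machinery; the `old_api`/`new_api` names below are made up for illustration:

```python
import warnings

def old_api():
    """A made-up deprecated function, standing in for any zombie API."""
    # stacklevel=2 makes the warning point at the caller, not this line.
    warnings.warn("old_api() is deprecated; use new_api() instead",
                  DeprecationWarning, stacklevel=2)
    return 42

# Since 2.7/3.2, DeprecationWarning is ignored by default, so callers of
# old_api() see nothing.  Running the interpreter with -Wd, setting
# PYTHONWARNINGS=default, or doing this in code opts back in:
warnings.simplefilter("default", DeprecationWarning)

old_api()  # the deprecation message now goes to stderr
```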
From ncoghlan at gmail.com Mon Nov 28 13:06:48 2011 From: ncoghlan at gmail.com (Nick Coghlan) Date: Mon, 28 Nov 2011 22:06:48 +1000 Subject: [Python-Dev] Deprecation policy In-Reply-To: References: <4EA560E3.8060307@gmail.com> <9D996A31-5968-4485-B953-4961B8623DB8@gmail.com> Message-ID: On Mon, Nov 28, 2011 at 7:53 PM, Xavier Morel wrote: > Not being too eager to kill APIs is good, but giving rise to this kind of living-dead APIs is no better in my opinion, even more so since Python has lost one of the few tools it had to manage them (as DeprecationWarning was silenced by default). Both choices are harmful to users, but in the long run I do think zombie APIs are worse. But restricting ourselves to cleaning out such APIs every 10 years or so with a major version bump is also a potentially viable option. So long as the old APIs are fully tested and aren't actively *harmful* to creating reasonable code (e.g. optparse) then refraining from killing them before the (still hypothetical) 4.0 is reasonable. OTOH, genuinely problematic APIs that ideally wouldn't have survived even the 3.x transition (e.g. the APIs that the 3.x subprocess module inherited from the 2.x commands module that run completely counter to the design principles of the subprocess module) should probably still be considered for removal as soon as is reasonable after a superior alternative is made available. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com |
Brisbane, Australia From steve at pearwood.info Mon Nov 28 13:14:46 2011 From: steve at pearwood.info (Steven D'Aprano) Date: Mon, 28 Nov 2011 23:14:46 +1100 Subject: [Python-Dev] Deprecation policy In-Reply-To: References: <4EA560E3.8060307@gmail.com> <9D996A31-5968-4485-B953-4961B8623DB8@gmail.com> Message-ID: <4ED37B36.7080208@pearwood.info> Xavier Morel wrote: > Not being too eager to kill APIs is good, but giving rise to this kind of > living-dead APIs is no better in my opinion, even more so since Python has > lost one of the few tools it had to manage them (as DeprecationWarning was > silenced by default). Both choices are harmful to users, but in the long > run I do think zombie APIs are worse. I would much rather have my code relying on "zombie" APIs and keep working, than to have that code suddenly stop working when the zombie is removed. Working code should stay working. Unless the zombie is actively harmful, what's the big deal if there is a newer, better way of doing something? If it works, and if it's fast enough, why force people to "fix" it? It is a good thing that code or tutorials from Python 1.5 still (mostly) work, even when there are newer, better ways of doing something. I see a lot of newbies, and the frustration they suffer when they accidentally (carelessly) try following 2.x instructions in Python3, or vice versa, is great. It's bad enough (probably unavoidable) that this happens during a major transition like 2 to 3, without it also happening during minor releases. Unless there is a good reason to actively remove an API, it should stay as long as possible. "I don't like this and it should go" is not a good reason, nor is "but there's a better way you should use". When in doubt, please don't break people's code. 
-- Steven From exarkun at twistedmatrix.com Mon Nov 28 13:45:24 2011 From: exarkun at twistedmatrix.com (exarkun at twistedmatrix.com) Date: Mon, 28 Nov 2011 12:45:24 -0000 Subject: [Python-Dev] Deprecation policy In-Reply-To: <4ED37B36.7080208@pearwood.info> References: <4EA560E3.8060307@gmail.com> <9D996A31-5968-4485-B953-4961B8623DB8@gmail.com> <4ED37B36.7080208@pearwood.info> Message-ID: <20111128124524.2308.1242187804.divmod.xquotient.325@localhost.localdomain> On 12:14 pm, steve at pearwood.info wrote: >Xavier Morel wrote: >>Not being too eager to kill APIs is good, but giving rise to this kind >>of >>living-dead APIs is no better in my opinion, even more so since Python >>has >>lost one of the few tools it had to manage them (as DeprecationWarning >>was >>silenced by default). Both choices are harmful to users, but in the >>long >>run I do think zombie APIs are worse. > >I would much rather have my code relying on "zombie" APIs and keep >working, than to have that code suddenly stop working when the zombie >is removed. Working code should stay working. Unless the zombie is >actively harmful, what's the big deal if there is a newer, better way >of doing something? If it works, and if it's fast enough, why force >people to "fix" it? > >It is a good thing that code or tutorials from Python 1.5 still >(mostly) work, even when there are newer, better ways of doing >something. I see a lot of newbies, and the frustration they suffer when >they accidentally (carelessly) try following 2.x instructions in >Python3, or vice versa, is great. It's bad enough (probably >unavoidable) that this happens during a major transition like 2 to 3, >without it also happening during minor releases. > >Unless there is a good reason to actively remove an API, it should stay >as long as possible. "I don't like this and it should go" is not a good >reason, nor is "but there's a better way you should use". When in >doubt, please don't break people's code. 
+1 Jean-Paul From python-dev at masklinn.net Mon Nov 28 13:56:31 2011 From: python-dev at masklinn.net (Xavier Morel) Date: Mon, 28 Nov 2011 13:56:31 +0100 Subject: [Python-Dev] Deprecation policy In-Reply-To: References: <4EA560E3.8060307@gmail.com> <9D996A31-5968-4485-B953-4961B8623DB8@gmail.com> Message-ID: On 2011-11-28, at 13:06 , Nick Coghlan wrote: > On Mon, Nov 28, 2011 at 7:53 PM, Xavier Morel wrote: >> Not being too eager to kill APIs is good, but giving rise to this kind of living-dead APIs is no better in my opinion, even more so since Python has lost one of the few tools it had to manage them (as DeprecationWarning was silenced by default). Both choices are harmful to users, but in the long run I do think zombie APIs are worse. > > But restricting ourselves to cleaning out such APIs every 10 years or > so with a major version bump is also a potentially viable option. > > So long as the old APIs are fully tested and aren't actively *harmful* > to creating reasonable code (e.g. optparse) then refraining from > killing them before the (still hypothetical) 4.0 is reasonable. Sure, the original proposal leaves the deprecation timelines as TBD and I hope I did not give the impression of setting up a timeline (that was not the intention). Ezio's original proposal could simply be implemented by having the second step ("decide how long the deprecation should last") default to "the next major release", I don't think that goes against his proposal, and in case APIs are actively harmful (e.g. very hard to use correctly) the deprecation timeline can be accelerated specifically for that case. 
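The deprecated-remove directive Ezio mentions in step 3 of his proposal exists in CPython's doc toolchain as `deprecated-removed`, recording both the version that introduced the deprecation and the version slated for removal. A sketch of its use follows; the function names are placeholders, and the exact directive spelling should be checked against the extensions under Doc/tools in the CPython tree:

```rst
.. function:: frobnicate(thing)

   Frobnicate *thing* in place.

   .. deprecated-removed:: 3.3 3.5
      Use :func:`new_frobnicate` instead.
```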
From petri at digip.org Mon Nov 28 14:36:03 2011 From: petri at digip.org (Petri Lehtinen) Date: Mon, 28 Nov 2011 15:36:03 +0200 Subject: [Python-Dev] Deprecation policy In-Reply-To: <9D996A31-5968-4485-B953-4961B8623DB8@gmail.com> References: <4EA560E3.8060307@gmail.com> <9D996A31-5968-4485-B953-4961B8623DB8@gmail.com> Message-ID: <20111128133603.GD32511@p16> Raymond Hettinger wrote: > How about we agree that actually removing things is usually bad for users. > It will be best if the core devs had a strong aversion to removal. > Instead, it is best to mark APIs as obsolete with a recommendation to use > something else instead. > > There is rarely a need to actually remove support for something in > the standard library. > > That may serve a notion of tidyness or somesuch but in reality it is > a PITA for users making it more difficult to upgrade python versions > and making it more difficult to use published recipes. I'm strongly against breaking backwards compatibility between minor versions (e.g. 3.2 and 3.3). If something is removed in this manner, the transition period should at least be very, very long. To me, deprecating an API means "this code will not get new features and possibly not even (big) fixes". It's important for the long term health of a project to be able to deprecate and eventually remove code that is no longer maintained. So, I think we should have a clear and working deprecation policy, and Ezio's suggestion sounds good to me. There should be a clean way to state, in both code and documentation, that something is deprecated, do not use in new code. Furthermore, deprecated code should actually be removed when the time comes, be it Python 4.0 or something else.
Petri From barry at python.org Mon Nov 28 14:53:44 2011 From: barry at python.org (Barry Warsaw) Date: Mon, 28 Nov 2011 08:53:44 -0500 Subject: [Python-Dev] Deprecation policy In-Reply-To: <20111128133603.GD32511@p16> References: <4EA560E3.8060307@gmail.com> <9D996A31-5968-4485-B953-4961B8623DB8@gmail.com> <20111128133603.GD32511@p16> Message-ID: <20111128085344.012bf983@resist.wooz.org> On Nov 28, 2011, at 03:36 PM, Petri Lehtinen wrote: >Raymond Hettinger wrote: >> That may serve a notion of tidyness or somesuch but in reality it is >> a PITA for users making it more difficult to upgrade python versions >> and making it more difficult to use published recipes. > >I'm strongly against breaking backwards compatiblity between minor >versions (e.g. 3.2 and 3.3). If something is removed in this manner, >the transition period should at least be very, very long. +1 It's even been a pain when porting between Python 2.x and 3. You'll see some things that were carried forward into Python 3.0 and 3.1 but are now gone in 3.2. So if you port from 2.7 -> 3.2 for example, you'll find a few things missing (the intobject.h aliases come to mind). For those reasons I think we need to be conservative about removing stuff. Once the world is all on Python 3 we can think about removing code. Cheers, -Barry From fuzzyman at voidspace.org.uk Mon Nov 28 15:33:50 2011 From: fuzzyman at voidspace.org.uk (Michael Foord) Date: Mon, 28 Nov 2011 14:33:50 +0000 Subject: [Python-Dev] Deprecation policy In-Reply-To: <20111128133603.GD32511@p16> References: <4EA560E3.8060307@gmail.com> <9D996A31-5968-4485-B953-4961B8623DB8@gmail.com> <20111128133603.GD32511@p16> Message-ID: <4ED39BCE.5040000@voidspace.org.uk> On 28/11/2011 13:36, Petri Lehtinen wrote: > Raymond Hettinger wrote: >> How about we agree that actually removing things is usually bad for users. >> It will be best if the core devs had a strong aversion to removal. 
>> Instead, it is best to mark APIs as obsolete with a recommendation to use >> something else instead. >> >> There is rarely a need to actually remove support for something in >> the standard library. >> >> That may serve a notion of tidyness or somesuch but in reality it is >> a PITA for users making it more difficult to upgrade python versions >> and making it more difficult to use published recipes. > I'm strongly against breaking backwards compatiblity between minor > versions (e.g. 3.2 and 3.3). If something is removed in this manner, > the transition period should at least be very, very long. We tend to see 3.2 -> 3.3 as a "major version" increment, but that's just Python's terminology. Nonetheless, our usual deprecation policy has been a *minimum* of deprecated for two releases and removed in a third (if at all) - which is about five years from deprecation to removal given our normal release rate. The water is muddied by Python 3, where we may deprecate something in Python 3.1 and remove in 3.3 (hypothetically) - but users may go straight from Python 2.7 to 3.3 and skip the deprecation period altogether... So we should be extra conservative about removals in Python 3 (for the moment at least). > To me, deprecating an API means "this code will not get new features > and possibly not even (big) fixes". It's important for the long term > health of a project to be able to deprecate and eventually remove code > that is no longer maintained. The issue is that deprecated code can still be a maintenance burden. Keeping deprecated APIs around can require effort just to keep them working and may actively *prevent* other changes / improvements. All the best, Michael Foord > So, I think we should have a clear and working deprecation policy, and > Ezio's suggestion sounds good to me. There should be a clean way to > state, in both code and documentation, that something is deprecated, > do not use in new code. 
Furthermore, deprecated code should actually > be removed when the time comes, be it Python 4.0 or something else. > > Petri > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/fuzzyman%40voidspace.org.uk > -- http://www.voidspace.org.uk/ May you do good and not evil May you find forgiveness for yourself and forgive others May you share freely, never taking more than you give. -- the sqlite blessing http://www.sqlite.org/different.html From anacrolix at gmail.com Mon Nov 28 15:38:18 2011 From: anacrolix at gmail.com (Matt Joiner) Date: Tue, 29 Nov 2011 01:38:18 +1100 Subject: [Python-Dev] Deprecation policy In-Reply-To: <4ED37B36.7080208@pearwood.info> References: <4EA560E3.8060307@gmail.com> <9D996A31-5968-4485-B953-4961B8623DB8@gmail.com> <4ED37B36.7080208@pearwood.info> Message-ID: On Mon, Nov 28, 2011 at 11:14 PM, Steven D'Aprano wrote: > Xavier Morel wrote: > >> Not being too eager to kill APIs is good, but giving rise to this kind of >> living-dead APIs is no better in my opinion, even more so since Python has >> lost one of the few tools it had to manage them (as DeprecationWarning was >> silenced by default). Both choices are harmful to users, but in the long >> run I do think zombie APIs are worse. > > I would much rather have my code relying on "zombie" APIs and keep working, > than to have that code suddenly stop working when the zombie is removed. > Working code should stay working. Unless the zombie is actively harmful, > what's the big deal if there is a newer, better way of doing something? If > it works, and if it's fast enough, why force people to "fix" it? > > It is a good thing that code or tutorials from Python 1.5 still (mostly) > work, even when there are newer, better ways of doing something. 
I see a lot > of newbies, and the frustration they suffer when they accidentally > (carelessly) try following 2.x instructions in Python3, or vice versa, is > great. It's bad enough (probably unavoidable) that this happens during a > major transition like 2 to 3, without it also happening during minor > releases. > > Unless there is a good reason to actively remove an API, it should stay as > long as possible. "I don't like this and it should go" is not a good reason, > nor is "but there's a better way you should use". When in doubt, please > don't break people's code. This is a great argument. But people want to see new, bigger, better things in the standard library, and the #1 reason cited against this is "we already have too much". I think that's where the issue lies: either lots of cool new stuff is added and supported (we all want our favourite things in the standard lib for this reason), and/or the old stuff lingers... I'm sure a while ago there was mention of a "staging" area for inclusion in the standard library. This attracts interest, stabilization, and quality from potential modules for inclusion. Better yet, the existing standard library ownership is somehow detached from the CPython core, so that changes enabling easier customization to fit other implementations (Jython, PyPy, etc.) are possible. tl;dr old stuff blocks new hotness.
make room or separate standard library concerns from cpython > > > > -- > Steven > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > http://mail.python.org/mailman/options/python-dev/anacrolix%40gmail.com > From brett at python.org Mon Nov 28 16:11:57 2011 From: brett at python.org (Brett Cannon) Date: Mon, 28 Nov 2011 10:11:57 -0500 Subject: [Python-Dev] PyPy 1.7 - widening the sweet spot In-Reply-To: <20111125183746.45ab20b5@pitrou.net> References: <6F9490B9-DBEF-4CF6-89B7-26EA0C8A88E2@underboss.org> <20111125183746.45ab20b5@pitrou.net> Message-ID: On Fri, Nov 25, 2011 at 12:37, Antoine Pitrou wrote: > On Fri, 25 Nov 2011 12:37:59 -0500 > Brett Cannon wrote: > > On Thu, Nov 24, 2011 at 07:46, Nick Coghlan wrote: > > > > > On Thu, Nov 24, 2011 at 10:20 PM, Maciej Fijalkowski > > > > wrote: > > > > The problem is not with maintaining the modified directory. The > > > > problem was always things like changing interface between the C > > > > version and the Python version or introduction of new stuff that does > > > > not run on pypy because it relies on refcounting. I don't see how > > > > having a subrepo helps here. > > > > > > Indeed, the main thing that can help on this front is to get more > > > modules to the same state as heapq, io, datetime (and perhaps a few > > > others that have slipped my mind) where the CPython repo actually > > > contains both C and Python implementations and the test suite > > > exercises both to make sure their interfaces remain suitably > > > consistent (even though, during normal operation, CPython users will > > > only ever hit the C accelerated version). 
> > > > > > This not only helps other implementations (by keeping a Python version > > > of the module continuously up to date with any semantic changes), but > > > can help people that are porting CPython to new platforms: the C > > > extension modules are far more likely to break in that situation than > > > the pure Python equivalents, and a relatively slow fallback is often > > > going to be better than no fallback at all. (Note that ctypes based > > > pure Python modules *aren't* particularly useful for this purpose, > > > though - due to the libffi dependency, ctypes is one of the extension > > > modules most likely to break when porting). > > > > > > > And the other reason I plan to see this through before I die > > Uh! Any bad news? :/ Sorry, turn of phrase in English which didn't translate well. I just meant "when I get to it, which could quite possibly be a *long* time from now". This year has been absolutely insane for me personally (if people care, the details are shared on Google+ or you can just ask me), so I am just not promising anything for Python on a short timescale (although I'm still hoping the final details for bootstrapping importlib won't be difficult to work out so I can meet a personal deadline of PyCon). -------------- next part -------------- An HTML attachment was scrubbed... URL: From stephen at xemacs.org Mon Nov 28 16:19:50 2011 From: stephen at xemacs.org (Stephen J. Turnbull) Date: Tue, 29 Nov 2011 00:19:50 +0900 Subject: [Python-Dev] Deprecation policy In-Reply-To: References: <4EA560E3.8060307@gmail.com> <9D996A31-5968-4485-B953-4961B8623DB8@gmail.com> <4ED37B36.7080208@pearwood.info> Message-ID: <874nxop949.fsf@uwakimon.sk.tsukuba.ac.jp> Matt Joiner writes: > This is a great argument. But people want to see new, bigger better > things in the standard library, and the #1 reason cited against this > is "we already have too much". 
I think that's where the issue lies: > Either lots of cool nice stuff is added and supported (we all want our > favourite things in the standard lib for this reason), and or the old > stuff lingers... Deprecated features are pretty much irrelevant to the height of the bar for new features. The problem is that there are a limited number of folks doing long term maintenance of the standard library, and an essentially unlimited supply of one-off patches to add cool new features (not backed by a long term warranty of maintenance by the contributor). So deprecated features do add some burden of maintenance for the core developers, as Michael points out -- but removing *all* of them on short notice would not really make it possible to *add* features *in a maintainable way* any faster. > I'm sure a while ago there was mention of a "staging" area for > inclusion in the standard library. This attracts interest, > stabilization, and quality from potential modules for inclusion. But there's no particular reason to believe it will attract more contributors willing to do long-term maintenance, and *somebody* has to maintain the staging area. From solipsis at pitrou.net Mon Nov 28 16:46:23 2011 From: solipsis at pitrou.net (Antoine Pitrou) Date: Mon, 28 Nov 2011 16:46:23 +0100 Subject: [Python-Dev] New features References: <4EA560E3.8060307@gmail.com> <9D996A31-5968-4485-B953-4961B8623DB8@gmail.com> <4ED37B36.7080208@pearwood.info> <874nxop949.fsf@uwakimon.sk.tsukuba.ac.jp> Message-ID: <20111128164623.1fcc8835@pitrou.net> On Tue, 29 Nov 2011 00:19:50 +0900 "Stephen J. Turnbull" wrote: > > Deprecated features are pretty much irrelevant to the height of the > bar for new features. The problem is that there are a limited number > of folks doing long term maintenance of the standard library, and an > essentially unlimited supply of one-off patches to add cool new > features (not backed by a long term warranty of maintenance by the > contributor). 
Actually, we don't often get patches for new features. Many new features are implemented by core developers themselves. Regards Antoine. From stephen at xemacs.org Mon Nov 28 18:19:58 2011 From: stephen at xemacs.org (Stephen J. Turnbull) Date: Tue, 29 Nov 2011 02:19:58 +0900 Subject: [Python-Dev] New features In-Reply-To: <20111128164623.1fcc8835@pitrou.net> References: <4EA560E3.8060307@gmail.com> <9D996A31-5968-4485-B953-4961B8623DB8@gmail.com> <4ED37B36.7080208@pearwood.info> <874nxop949.fsf@uwakimon.sk.tsukuba.ac.jp> <20111128164623.1fcc8835@pitrou.net> Message-ID: <8739d8p3k1.fsf@uwakimon.sk.tsukuba.ac.jp> Antoine Pitrou writes: > Actually, we don't often get patches for new features. Many new > features are implemented by core developers themselves. Right. That's not inconsistent with what I wrote, as long as would-be feature submitters realize what the standards for an acceptable feature patch are. From solipsis at pitrou.net Mon Nov 28 18:37:24 2011 From: solipsis at pitrou.net (Antoine Pitrou) Date: Mon, 28 Nov 2011 18:37:24 +0100 Subject: [Python-Dev] Deprecation policy References: <4EA560E3.8060307@gmail.com> <9D996A31-5968-4485-B953-4961B8623DB8@gmail.com> Message-ID: <20111128183724.1a50a0ee@pitrou.net> Hi, On Mon, 28 Nov 2011 01:30:53 -0800 Raymond Hettinger wrote: > > On Oct 24, 2011, at 5:58 AM, Ezio Melotti wrote: > > > Hi, > > our current deprecation policy is not so well defined (see e.g. 
[0]), and it seems to me that it's something like: > > 1) deprecate something and add a DeprecationWarning; > > 2) forget about it after a while; > > 3) wait a few versions until someone notices it; > > 4) actually remove it; > > > > I suggest to follow the following process: > > 1) deprecate something and add a DeprecationWarning; > > 2) decide how long the deprecation should last; > > 3) use the deprecated-remove[1] directive to document it; > > 4) add a test that fails after the update so that we remember to remove it[2]; > > How about we agree that actually removing things is usually bad for users. > It will be best if the core devs had a strong aversion to removal. Well, it's not like we aren't already conservative in deprecating things. > Instead, it is best to mark APIs as obsolete with a recommendation to use something else instead. > There is rarely a need to actually remove support for something in the standard library. > That may serve a notion of tidyness or somesuch but in reality it is a PITA for users making it more difficult to upgrade python versions and making it more difficult to use published recipes. I agree with Xavier's answer that having recipes around which use outdated (and possibly inefficient/insecure/etc.) APIs is a nuisance. Also, deprecated-but-not-removed APIs come at a maintenance and support cost. Regards Antoine. 
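Ezio's step 4 above — a test that starts failing when the removal deadline arrives — can be sketched in plain unittest terms. Everything here is hypothetical (the function, the 4.0 deadline), chosen to match the thread's discussion of removal at a major version bump:

```python
import sys
import unittest
import warnings

def obsolete_helper():
    """Hypothetical API deprecated per step 1, slated for removal in 4.0."""
    warnings.warn("obsolete_helper() is deprecated and will be removed "
                  "in Python 4.0", DeprecationWarning, stacklevel=2)
    return "still here"

class TestObsoleteHelper(unittest.TestCase):
    def test_warns(self):
        # Step 1: the deprecated API must actually emit the warning.
        with warnings.catch_warnings(record=True) as caught:
            warnings.simplefilter("always")
            self.assertEqual(obsolete_helper(), "still here")
        self.assertTrue(any(issubclass(w.category, DeprecationWarning)
                            for w in caught))

    def test_removal_reminder(self):
        # Step 4: this assertion starts failing on the first release at or
        # past the removal deadline, so nobody has to remember by hand.
        self.assertLess(sys.version_info[:2], (4, 0),
                        "obsolete_helper() was scheduled for removal in "
                        "4.0: delete it, and then delete this test")
```

The reminder test is deliberately version-gated rather than date-gated, so it trips exactly when the promised release is being developed.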
From jcea at jcea.es Mon Nov 28 21:47:55 2011 From: jcea at jcea.es (Jesus Cea) Date: Mon, 28 Nov 2011 21:47:55 +0100 Subject: [Python-Dev] Long term development external named branches and periodic merges from python In-Reply-To: References: <4ECE741F.10303@jcea.es> <4ECE7A25.5000701@netwok.org> <4ECEFFD4.5030601@jcea.es> <4ED1141E.9050604@netwok.org> <4ED30C3E.4090700@jcea.es> Message-ID: <4ED3F37B.4030103@jcea.es> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On 28/11/11 06:06, Nick Coghlan wrote: >> So, in this context, if the tracker "create patch" diff from >> BASE, it is not "safe" to merge changes from mainline to the >> branch, because if so "create patch" would include code not >> related to my work. > > No, "Create Patch" is smarter than that. What it does (or tries to > do) is walk back through your branch history, trying to find the > last point where you merged in a changeset that it recognises as > coming from the main CPython repo. It then uses that revision of > the CPython repo as the basis for the diff. Oh, that sounds like the right way. Clever. > So if you're just working on a feature branch, periodically > pulling from hg.python.org/cpython and merging from default, then > it should all work fine. So, my original question is answered as "yes, you can merge in changes from mainline, and 'create patch' will work as it should". Good! Thanks! >> Anybody knows the mercurial command used to implement "create >> patch"?. > > It's not a single command - it's a short script MvL wrote that > uses the Mercurial API to traverse the branch history and find an > appropriate revision to diff against. Publishing it somewhere would be useful, I guess. This is a problem I have found in a few other projects. I can even see a modifier for "hg diff" in a future Mercurial version :-). Could it be implemented as a command-line command using "revsets"? Propose a new revset to the Mercurial devels?
- -- Jesus Cea Avion _/_/ _/_/_/ _/_/_/ jcea at jcea.es - http://www.jcea.es/ _/_/ _/_/ _/_/ _/_/ _/_/ jabber / xmpp:jcea at jabber.org _/_/ _/_/ _/_/_/_/_/ . _/_/ _/_/ _/_/ _/_/ _/_/ "Things are not so easy" _/_/ _/_/ _/_/ _/_/ _/_/ _/_/ "My name is Dump, Core Dump" _/_/_/ _/_/_/ _/_/ _/_/ "El amor es poner tu felicidad en la felicidad de otro" - Leibniz
From ncoghlan at gmail.com Mon Nov 28 22:00:28 2011 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 29 Nov 2011 07:00:28 +1000 Subject: [Python-Dev] Long term development external named branches and periodic merges from python In-Reply-To: <4ED3F37B.4030103@jcea.es> References: <4ECE741F.10303@jcea.es> <4ECE7A25.5000701@netwok.org> <4ECEFFD4.5030601@jcea.es> <4ED1141E.9050604@netwok.org> <4ED30C3E.4090700@jcea.es> <4ED3F37B.4030103@jcea.es> Message-ID: It's published as part of the tracker repo, although I'm not sure exactly where it lives. -- Nick Coghlan (via Gmail on Android, so likely to be more terse than usual) On Nov 29, 2011 6:50 AM, "Jesus Cea" wrote: > On 28/11/11 06:06, Nick Coghlan wrote: >> So, in this context, if the tracker "create patch" diff from >> BASE, it is not "safe" to merge changes from mainline to the >> branch, because if so "create patch" would include code not >> related to my work. > > No, "Create Patch" is smarter than that. What it does (or tries to > do) is walk back through your branch history, trying to find the > last point where you merged in a changeset that it recognises as > coming from the main CPython repo.
It then uses that revision of > > the CPython repo as the basis for the diff. > > Oh, that sounds quite the right way. Clever. > > > So if you're just working on a feature branch, periodically > > pulling from hg.python.org/cpython and merging from default, then > > it should all work fine. > > So, my original question is answered as "yes, you can merge in changes > from mainline, and 'create patch' will work as it should". > > Good!!. Thanks!!!. > > >> Anybody knows the mercurial command used to implement "create > >> patch"?. > > > > It's not a single command - it's a short script MvL wrote that > > uses the Mercurial API to traverse the branch history and find an > > appropriate revision to diff against. > > Publish out somewhere would be useful, I guess. This is a problem I > have found in a few other projects. I can see even a modifier for "hg > diff" for a future mercurial version :-). > > Could be implemented as a command line command using "revsets"?. > Propose a new revset to mercurial devels? > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > http://mail.python.org/mailman/options/python-dev/ncoghlan%40gmail.com
From petri at digip.org Tue Nov 29 13:46:06 2011 From: petri at digip.org (Petri Lehtinen) Date: Tue, 29 Nov 2011 14:46:06 +0200 Subject: [Python-Dev] Deprecation policy In-Reply-To: <4ED39BCE.5040000@voidspace.org.uk> References: <4EA560E3.8060307@gmail.com> <9D996A31-5968-4485-B953-4961B8623DB8@gmail.com> <20111128133603.GD32511@p16> <4ED39BCE.5040000@voidspace.org.uk> Message-ID: <20111129124606.GC21346@p16> Michael Foord wrote: > We tend to see 3.2 -> 3.3 as a "major version" increment, but that's > just Python's terminology. Even though (in the documentation) Python's version number components are called major, minor, micro, releaselevel and serial, in this order? So when the minor version component is increased it's a major version increment?
:) From petri at digip.org Tue Nov 29 13:58:39 2011 From: petri at digip.org (Petri Lehtinen) Date: Tue, 29 Nov 2011 14:58:39 +0200 Subject: [Python-Dev] Long term development external named branches and periodic merges from python In-Reply-To: References: <4ECE741F.10303@jcea.es> <4ECE7A25.5000701@netwok.org> <4ECEFFD4.5030601@jcea.es> <4ED1141E.9050604@netwok.org> <4ED30C3E.4090700@jcea.es> Message-ID: <20111129125839.GD21346@p16> Nick Coghlan wrote: > > So, in this context, if the tracker "create patch" diff from BASE, it > > is not "safe" to merge changes from mainline to the branch, because if > > so "create patch" would include code not related to my work. > > No, "Create Patch" is smarter than that. What it does (or tries to do) > is walk back through your branch history, trying to find the last > point where you merged in a changeset that it recognises as coming > from the main CPython repo. It then uses that revision of the CPython > repo as the basis for the diff. > > So if you're just working on a feature branch, periodically pulling > from hg.python.org/cpython and merging from default, then it should > all work fine. > > Branches-of-branches (i.e. where you've merged from CPython via > another named branch in your local repo) seems to confuse it though - > I plan to change my workflow for those cases to merge each branch from > the same version of default before merging from the other branch. The ancestor() revset could help for the confusion: http://stackoverflow.com/a/6744163/639276 In this case, the user would have to be able to tell the branch against which he wants the diff. 
Petri From solipsis at pitrou.net Tue Nov 29 13:59:48 2011 From: solipsis at pitrou.net (Antoine Pitrou) Date: Tue, 29 Nov 2011 13:59:48 +0100 Subject: [Python-Dev] Deprecation policy References: <4EA560E3.8060307@gmail.com> <9D996A31-5968-4485-B953-4961B8623DB8@gmail.com> <20111128133603.GD32511@p16> <4ED39BCE.5040000@voidspace.org.uk> <20111129124606.GC21346@p16> Message-ID: <20111129135948.23a2e5cc@pitrou.net> On Tue, 29 Nov 2011 14:46:06 +0200 Petri Lehtinen wrote: > Michael Foord wrote: > > We tend to see 3.2 -> 3.3 as a "major version" increment, but that's > > just Python's terminology. > > Even though (in the documentation) Python's version number components > are called major, minor, micro, releaselevel and serial, in this > order? So when the minor version component is increased it's a major > version increment? :) Well, that's why I think the version number components are not correctly named. I don't think any of the 2.x or 3.x releases can be called "minor" by any stretch of the word. A quick glance at http://docs.python.org/dev/whatsnew/index.html should be enough. Regards Antoine. From phd at phdru.name Tue Nov 29 13:53:58 2011 From: phd at phdru.name (Oleg Broytman) Date: Tue, 29 Nov 2011 16:53:58 +0400 Subject: [Python-Dev] Deprecation policy In-Reply-To: <20111129124606.GC21346@p16> References: <4EA560E3.8060307@gmail.com> <9D996A31-5968-4485-B953-4961B8623DB8@gmail.com> <20111128133603.GD32511@p16> <4ED39BCE.5040000@voidspace.org.uk> <20111129124606.GC21346@p16> Message-ID: <20111129125358.GA7839@iskra.aviel.ru> On Tue, Nov 29, 2011 at 02:46:06PM +0200, Petri Lehtinen wrote: > Michael Foord wrote: > > We tend to see 3.2 -> 3.3 as a "major version" increment, but that's > > just Python's terminology. > > Even though (in the documentation) Python's version number components > are called major, minor, micro, releaselevel and serial, in this > order? So when the minor version component is increased it's a major > version increment? 
:) When the major version component is increased it's a World Shattering Change, isn't it?! ;-) Oleg. -- Oleg Broytman http://phdru.name/ phd at phdru.name Programmers don't die, they just GOSUB without RETURN. From barry at python.org Tue Nov 29 16:13:06 2011 From: barry at python.org (Barry Warsaw) Date: Tue, 29 Nov 2011 10:13:06 -0500 Subject: [Python-Dev] Deprecation policy In-Reply-To: <20111129135948.23a2e5cc@pitrou.net> References: <4EA560E3.8060307@gmail.com> <9D996A31-5968-4485-B953-4961B8623DB8@gmail.com> <20111128133603.GD32511@p16> <4ED39BCE.5040000@voidspace.org.uk> <20111129124606.GC21346@p16> <20111129135948.23a2e5cc@pitrou.net> Message-ID: <20111129101306.08a931b9@resist.wooz.org> On Nov 29, 2011, at 01:59 PM, Antoine Pitrou wrote: >Well, that's why I think the version number components are not >correctly named. I don't think any of the 2.x or 3.x releases can be >called "minor" by any stretch of the word. A quick glance at >http://docs.python.org/dev/whatsnew/index.html should be enough. Agreed, but it's too late to change it. I look at it as the attributes of the namedtuple being evocative of the traditional names for the digit positions, not the assignment of those positions to Python's semantics. -Barry From g.brandl at gmx.net Tue Nov 29 19:28:40 2011 From: g.brandl at gmx.net (Georg Brandl) Date: Tue, 29 Nov 2011 19:28:40 +0100 Subject: [Python-Dev] Deprecation policy In-Reply-To: <20111129124606.GC21346@p16> References: <4EA560E3.8060307@gmail.com> <9D996A31-5968-4485-B953-4961B8623DB8@gmail.com> <20111128133603.GD32511@p16> <4ED39BCE.5040000@voidspace.org.uk> <20111129124606.GC21346@p16> Message-ID: Am 29.11.2011 13:46, schrieb Petri Lehtinen: > Michael Foord wrote: >> We tend to see 3.2 -> 3.3 as a "major version" increment, but that's >> just Python's terminology. > > Even though (in the documentation) Python's version number components > are called major, minor, micro, releaselevel and serial, in this > order? 
So when the minor version component is increased it's a major > version increment? :) Yes. Georg From nadeem.vawda at gmail.com Tue Nov 29 23:59:11 2011 From: nadeem.vawda at gmail.com (Nadeem Vawda) Date: Wed, 30 Nov 2011 00:59:11 +0200 Subject: [Python-Dev] LZMA support has landed Message-ID: Hey folks, I'm pleased to announce that as of changeset 74d182cf0187, the standard library now includes support for the LZMA compression algorithm (as well as the associated .xz and .lzma file formats). The new lzma module has a very similar API to the existing bz2 module; it should serve as a drop-in replacement for most use cases. If anyone has any feedback or any suggestions for improvement, please let me know. I'd like to ask the owners of (non-Windows) buildbots to install the XZ Utils development headers so that they can build the new module. For Debian-derived Linux distros, the relevant package is named "liblzma-dev"; on Fedora I believe the correct package is "xz-devel". Binaries for OS X are available from the project's homepage at . Finally, a big thanks to everyone who contributed feedback during this module's development! Cheers, Nadeem From solipsis at pitrou.net Wed Nov 30 00:07:00 2011 From: solipsis at pitrou.net (Antoine Pitrou) Date: Wed, 30 Nov 2011 00:07:00 +0100 Subject: [Python-Dev] cpython: Issue #6715: Add module for compression using the LZMA algorithm. References: Message-ID: <20111130000700.56e9c9bc@pitrou.net> On Tue, 29 Nov 2011 23:36:58 +0100 nadeem.vawda wrote: > http://hg.python.org/cpython/rev/74d182cf0187 > changeset: 73794:74d182cf0187 > user: Nadeem Vawda > date: Wed Nov 30 00:25:06 2011 +0200 > summary: > Issue #6715: Add module for compression using the LZMA algorithm. Congratulations, Nadeem! Regards Antoine. 
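[For readers who haven't tried the new module yet, the promised bz2 parity looks roughly like this — a quick sketch of the one-shot and incremental interfaces, not an excerpt from the documentation:]

```python
import lzma

data = b"python-dev archive " * 200

# One-shot API, mirroring bz2.compress() / bz2.decompress():
compressed = lzma.compress(data)
assert lzma.decompress(compressed) == data
assert len(compressed) < len(data)

# Incremental API, mirroring bz2.BZ2Compressor:
comp = lzma.LZMACompressor()
out = comp.compress(data[:1000]) + comp.compress(data[1000:]) + comp.flush()
assert lzma.decompress(out) == data
```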
From amauryfa at gmail.com Wed Nov 30 00:34:49 2011 From: amauryfa at gmail.com (Amaury Forgeot d'Arc) Date: Wed, 30 Nov 2011 00:34:49 +0100 Subject: [Python-Dev] LZMA support has landed In-Reply-To: References: Message-ID: 2011/11/29 Nadeem Vawda > I'm pleased to announce that as of changeset 74d182cf0187, the > standard library now includes support for the LZMA compression > algorithm Congratulations! > I'd like to ask the owners of (non-Windows) buildbots to install the > XZ Utils development headers so that they can build the new module. > And don't worry about Windows buildbots, they will automatically download the XZ prebuilt binaries from the usual place. (svn export http://svn.python.org/projects/external/xz-5.0.3) Next step: add support for tar.xz files (issue5689)... -- Amaury Forgeot d'Arc
From ncoghlan at gmail.com Wed Nov 30 00:45:19 2011 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 30 Nov 2011 09:45:19 +1000 Subject: [Python-Dev] Deprecation policy In-Reply-To: <20111129101306.08a931b9@resist.wooz.org> References: <4EA560E3.8060307@gmail.com> <9D996A31-5968-4485-B953-4961B8623DB8@gmail.com> <20111128133603.GD32511@p16> <4ED39BCE.5040000@voidspace.org.uk> <20111129124606.GC21346@p16> <20111129135948.23a2e5cc@pitrou.net> <20111129101306.08a931b9@resist.wooz.org> Message-ID: On Wed, Nov 30, 2011 at 1:13 AM, Barry Warsaw wrote: > On Nov 29, 2011, at 01:59 PM, Antoine Pitrou wrote: > >>Well, that's why I think the version number components are not >>correctly named. I don't think any of the 2.x or 3.x releases can be >>called "minor" by any stretch of the word. A quick glance at >>http://docs.python.org/dev/whatsnew/index.html should be enough. > > Agreed, but it's too late to change it. I look at it as the attributes of the > namedtuple being evocative of the traditional names for the digit positions, > not the assignment of those positions to Python's semantics.
Hmm, I wonder about that. Perhaps we could add a second set of names in parallel with the "major.minor.micro" names: "series.feature.maint". That would, after all, reflect what is actually said in practice: - release series: 2.x, 3.x (usually used in a form like "In the 3.x series, X is true. In 2.x, Y is true") - feature release: 2.7, 3.2, etc - maintenance release: 2.7.2, 3.2.1, etc I know I tend to call feature releases major releases and I'm far from alone in that. The discrepancy in relation to sys.version_info is confusing, but we can't make 'major' refer to a different field without breaking existing programs. But we *can* change: >>> sys.version_info sys.version_info(major=2, minor=7, micro=2, releaselevel='final', serial=0) to instead read: sys.version_info(series=2, feature=7, maint=2, releaselevel='final', serial=0) while allowing 'major' as an alias of 'series', 'minor' as an alias of 'feature' and 'micro' as an alias of 'maint'. Nothing breaks, and we'd have started down the path towards coherent terminology for the three fields in the version numbers (by accepting that 'major' has now become irredeemably ambiguous in the context of CPython releases). This idea of renaming all three fields has come up before, but I believe we got stuck on the question of what to call the first number (i.e. the one I'm calling the "series" here). Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia
From benjamin at python.org Wed Nov 30 01:00:45 2011 From: benjamin at python.org (Benjamin Peterson) Date: Tue, 29 Nov 2011 19:00:45 -0500 Subject: [Python-Dev] Deprecation policy In-Reply-To: References: <4EA560E3.8060307@gmail.com> <9D996A31-5968-4485-B953-4961B8623DB8@gmail.com> <20111128133603.GD32511@p16> <4ED39BCE.5040000@voidspace.org.uk> <20111129124606.GC21346@p16> <20111129135948.23a2e5cc@pitrou.net> <20111129101306.08a931b9@resist.wooz.org> Message-ID: 2011/11/29 Nick Coghlan : > On Wed, Nov 30, 2011 at 1:13 AM, Barry Warsaw wrote: >> On Nov 29, 2011, at 01:59 PM, Antoine Pitrou wrote: >> >>>Well, that's why I think the version number components are not >>>correctly named. I don't think any of the 2.x or 3.x releases can be >>>called "minor" by any stretch of the word. A quick glance at >>>http://docs.python.org/dev/whatsnew/index.html should be enough. >> >> Agreed, but it's too late to change it. I look at it as the attributes of the >> namedtuple being evocative of the traditional names for the digit positions, >> not the assignment of those positions to Python's semantics.
But we *can* change: > >>>> sys.version_info > sys.version_info(major=2, minor=7, micro=2, releaselevel='final', serial=0) > > to instead read: > > sys.version_info(series=2, feature=7, maint=2, releaselevel='final', serial=0) > > while allowing 'major' as an alias of 'series', 'minor' as an alias of > 'feature' and 'micro' as an alias of 'maint'. Nothing breaks, and we'd > have started down the path towards coherent terminology for the three > fields in the version numbers (by accepting that 'major' has now > become irredeemably ambiguous in the context of CPython releases). > > This idea of renaming all three fields has come up before, but I > believe we got stuck on the question of what to call the first number > (i.e. the one I'm calling the "series" here). Can we drop this now? Too much effort for very little benefit. We call releases what we call releases. -- Regards, Benjamin From anacrolix at gmail.com Wed Nov 30 06:26:27 2011 From: anacrolix at gmail.com (Matt Joiner) Date: Wed, 30 Nov 2011 16:26:27 +1100 Subject: [Python-Dev] Deprecation policy In-Reply-To: References: <4EA560E3.8060307@gmail.com> <9D996A31-5968-4485-B953-4961B8623DB8@gmail.com> <20111128133603.GD32511@p16> <4ED39BCE.5040000@voidspace.org.uk> <20111129124606.GC21346@p16> <20111129135948.23a2e5cc@pitrou.net> <20111129101306.08a931b9@resist.wooz.org> Message-ID: I like this article on it: http://semver.org/ The following snippets being relevant here: Minor version Y (x.Y.z | x > 0) MUST be incremented if new, backwards compatible functionality is introduced to the public API. It MUST be incremented if any public API functionality is marked as deprecated. Major version X (X.y.z | X > 0) MUST be incremented if any backwards incompatible changes are introduced to the public API. With the exception of actually dropping stuff (however this only occurs in terms of modules, which hardly count in special cases?), Python already conforms to this standard very well. 
On Wed, Nov 30, 2011 at 11:00 AM, Benjamin Peterson wrote: > Can we drop this now? Too much effort for very little benefit. We call > releases what we call releases. > > -- > Regards, > Benjamin
From anacrolix at gmail.com Wed Nov 30 06:28:54 2011 From: anacrolix at gmail.com (Matt Joiner) Date: Wed, 30 Nov 2011 16:28:54 +1100 Subject: [Python-Dev] LZMA support has landed In-Reply-To: References: Message-ID: Congrats, this is an excellent feature. On Wed, Nov 30, 2011 at 10:34 AM, Amaury Forgeot d'Arc wrote: > 2011/11/29 Nadeem Vawda >> I'm pleased to announce that as of changeset 74d182cf0187, the >> standard library now includes support for the LZMA compression >> algorithm > > Congratulations! > >> I'd like to ask the owners of (non-Windows) buildbots to install the >> XZ Utils development headers so that they can build the new module. > > And don't worry about Windows buildbots, they will automatically download > the XZ prebuilt binaries from the usual place. > (svn export http://svn.python.org/projects/external/xz-5.0.3) > > Next step: add support for tar.xz files (issue5689)...
> > -- > Amaury Forgeot d'Arc
From meadori at gmail.com Wed Nov 30 06:50:26 2011 From: meadori at gmail.com (Meador Inge) Date: Tue, 29 Nov 2011 23:50:26 -0600 Subject: [Python-Dev] LZMA support has landed In-Reply-To: References: Message-ID: On Tue, Nov 29, 2011 at 4:59 PM, Nadeem Vawda wrote: > "liblzma-dev"; on Fedora I believe the correct package is "xz-devel". "xz-devel" is right. I just verified a build of the new module on a fresh F16 system. -- Meador
From martin at v.loewis.de Wed Nov 30 09:01:55 2011 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Wed, 30 Nov 2011 09:01:55 +0100 Subject: [Python-Dev] Long term development external named branches and periodic merges from python In-Reply-To: <4ED3F37B.4030103@jcea.es> References: <4ECE741F.10303@jcea.es> <4ECE7A25.5000701@netwok.org> <4ECEFFD4.5030601@jcea.es> <4ED1141E.9050604@netwok.org> <4ED30C3E.4090700@jcea.es> <4ED3F37B.4030103@jcea.es> Message-ID: <4ED5E2F3.4050200@v.loewis.de> > Could be implemented as a command line command using "revsets"? > Propose a new revset to mercurial devels? It *is* implemented as a command line command using "revsets". The revset is max(ancestors(branch("%s")) - outgoing("%s")) where the first parameter is the branch that contains your changes, and the second one is the "path" of the repository you want to diff against. In English: find the most recent revision in the ancestry of your branch that is not an outgoing change wrt. the base repository.
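[For illustration, the same revset can be used directly on the command line — "mybranch" and the repository URL here are placeholders, and the tracker's actual script drives this through the Mercurial API rather than the shell:]

```
# Diff a feature branch against the newest CPython revision merged into it
hg diff -r "max(ancestors(branch('mybranch')) - outgoing('http://hg.python.org/cpython'))" -r mybranch
```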
ancestors(branch(yours)) gives all revisions preceding your branches' tip, which will be your own changes, plus all changes from the "default" branch that have been merged into your branch (including the changes from where you originally forked the branch). Subtracting outgoing removes all changes that are not yet in cpython, leaving only the changes in your ancestry that come from cpython. max() then finds the most recent such change, which will be the "default" parent of your last merge, or the branch point if you haven't merged after branching. HTH, Martin From anacrolix at gmail.com Wed Nov 30 15:31:14 2011 From: anacrolix at gmail.com (Matt Joiner) Date: Thu, 1 Dec 2011 01:31:14 +1100 Subject: [Python-Dev] STM and python Message-ID: Given GCC's announcement that Intel's STM will be an extension for C and C++ in GCC 4.7, what does this mean for Python, and the GIL? I've seen efforts made to make STM available as a context, and for use in user code. I've also read about the "old attempts way back" that attempted to use finer grain locking. The understandably failed due to the heavy costs involved in both the locking mechanisms used, and the overhead of a reference counting garbage collection system. However given advances in locking and garbage collection in the last decade, what attempts have been made recently to try these new ideas out? In particular, how unlikely is it that all the thread safe primitives, global contexts, and reference counting functions be made __transaction_atomic, and magical parallelism performance boosts ensue? I'm aware that C89, platforms without STM/GCC, and single threaded performance are concerns. Please ignore these for the sake of discussion about possibilities. 
http://gcc.gnu.org/wiki/TransactionalMemory http://linux.die.net/man/4/futex
From benjamin at python.org Wed Nov 30 16:25:20 2011 From: benjamin at python.org (Benjamin Peterson) Date: Wed, 30 Nov 2011 10:25:20 -0500 Subject: [Python-Dev] STM and python In-Reply-To: References: Message-ID: 2011/11/30 Matt Joiner : > Given GCC's announcement that Intel's STM will be an extension for C > and C++ in GCC 4.7, what does this mean for Python, and the GIL? > > I've seen efforts made to make STM available as a context, and for use > in user code. I've also read about the "old attempts way back" that > attempted to use finer grain locking. They understandably failed due to > the heavy costs involved in both the locking mechanisms used, and the > overhead of a reference counting garbage collection system. > > However given advances in locking and garbage collection in the last > decade, what attempts have been made recently to try these new ideas > out? In particular, how unlikely is it that all the thread safe > primitives, global contexts, and reference counting functions be made > __transaction_atomic, and magical parallelism performance boosts > ensue? Have you seen http://morepypy.blogspot.com/2011/08/we-need-software-transactional-memory.html ? -- Regards, Benjamin
From pje at telecommunity.com Wed Nov 30 16:28:32 2011 From: pje at telecommunity.com (PJ Eby) Date: Wed, 30 Nov 2011 10:28:32 -0500 Subject: [Python-Dev] PEP 402: Simplified Package Layout and Partitioning In-Reply-To: <4ED1196E.8090505@netwok.org> References: <4E43E9A6.7020608@netwok.org> <20110811183114.701DF3A406B@sparrow.telecommunity.com> <4ED1196E.8090505@netwok.org> Message-ID: On Sat, Nov 26, 2011 at 11:53 AM, Éric Araujo wrote: > > Le 11/08/2011 20:30, P.J. Eby a écrit : > >> At 04:39 PM 8/11/2011 +0200, Éric Araujo wrote: > >>> I'll just regret that it's not possible to provide a module docstring > >>> to inform that this is a namespace package used for X and Y.
> >> It *is* possible - you'd just have to put it in a "zc.py" file. IOW, > >> this PEP still allows "namespace-defining packages" to exist, as was > >> requested by early commenters on PEP 382. It just doesn't *require* > >> them to exist in order for the namespace contents to be importable. > > That's quite cool. I guess such a namespace-defining module (zc.py > here) would be importable, right? Yes. > Also, would it cause worse > performance for other zc.* packages than if there were no zc.py? > No. The first import of a subpackage sets up the __path__, and all subsequent imports use it. > >>> A pure virtual package having no source file, I think it should have no >>> __file__ at all. > > Antoine and someone else thought likewise (I can find the link if you > want); do you consider it consensus enough to update the PEP? > Sure. At this point, though, before doing any more work on the PEP I'd like to have some idea of whether there's any chance of it being accepted. At this point, there seems to be a lot of passive, "Usenet nod syndrome" type support for it, but little active support. It doesn't help at all that I'm not really in a position to provide an implementation, and the persons most likely to implement have been leaning somewhat towards 382, or wanting to modify 402 such that it uses .pyp directory extensions so that PEP 395 can be supported... And while 402 is an extension of an idea that Guido proposed a few years ago, he hasn't weighed in lately on whether he still likes that idea, let alone whether he likes where I've taken it. ;-)
From neologix at free.fr Wed Nov 30 17:45:07 2011 From: neologix at free.fr (=?ISO-8859-1?Q?Charles=2DFran=E7ois_Natali?=) Date: Wed, 30 Nov 2011 17:45:07 +0100 Subject: [Python-Dev] STM and python In-Reply-To: References: Message-ID: > However given advances in locking and garbage collection in the last > decade, what attempts have been made recently to try these new ideas > out? In particular, how unlikely is it that all the thread safe > primitives, global contexts, and reference counting functions be made > __transaction_atomic, and magical parallelism performance boosts > ensue? > I'd say that given that the current libitm implementation uses a single global lock, you're more likely to see a performance loss. TM is useful to synchronize non-trivial operations: an increment or decrement of a reference count is highly trivial (and expensive when performed atomically, as noted), and TM's never going to help if you put each refcount operation in its own transaction: see Armin's http://morepypy.blogspot.com/2011/08/we-need-software-transactional-memory.html for more realistic use cases.
From merwok at netwok.org Wed Nov 30 17:52:17 2011 From: merwok at netwok.org (=?UTF-8?B?w4lyaWMgQXJhdWpv?=) Date: Wed, 30 Nov 2011 17:52:17 +0100 Subject: [Python-Dev] PEP 402: Simplified Package Layout and Partitioning In-Reply-To: References: <4E43E9A6.7020608@netwok.org> <20110811183114.701DF3A406B@sparrow.telecommunity.com> <4ED1196E.8090505@netwok.org> Message-ID: <4ED65F41.2000400@netwok.org> Hi, Thanks for the replies. > At this point, though, before doing any more work on the PEP I'd > like to have some idea of whether there's any chance of it being accepted. > At this point, there seems to be a lot of passive, "Usenet nod syndrome" > type support for it, but little active support. If this helps, I am +1, and I'm sure other devs will chime in. I think the feature is useful, and I prefer 402's way to 382's pyp directories.
I do acknowledge that 402 poses problems to PEP 395 which 382 does not, and as I'm not in a position to help, my vote may count less. Cheers
From martin at v.loewis.de Wed Nov 30 18:33:32 2011 From: martin at v.loewis.de (=?UTF-8?B?Ik1hcnRpbiB2LiBMw7Z3aXMi?=) Date: Wed, 30 Nov 2011 18:33:32 +0100 Subject: [Python-Dev] PEP 402: Simplified Package Layout and Partitioning In-Reply-To: <4ED65F41.2000400@netwok.org> References: <4E43E9A6.7020608@netwok.org> <20110811183114.701DF3A406B@sparrow.telecommunity.com> <4ED1196E.8090505@netwok.org> <4ED65F41.2000400@netwok.org> Message-ID: <4ED668EC.3090700@v.loewis.de> > If this helps, I am +1, and I'm sure other devs will chime in. I think > the feature is useful, and I prefer 402's way to 382's pyp directories. If that's the obstacle to adopting PEP 382, it would be easy to revert the PEP back to having file markers to indicate package-ness. I insist on having markers of some kind, though (IIUC, this is also what PEP 395 requires). The main problem with file markers is that a) they must not overlap across portions of a package, and b) the actual file name and content is irrelevant. a) means that package authors have to come up with some name, and b) means that the name actually doesn't matter (but the file name extension would). UUIDs would work, as would the name of the portion/distribution. I think the specific choice of name will confuse people into interpreting things in the file name that aren't really intended. Regards, Martin
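[For context, the status quo both PEPs aim to replace already involves a marker of sorts: the boilerplate __init__.py that every portion of a pkgutil-style namespace package must ship. A self-contained sketch — the "acme" names are invented, standing in for real-world cases like the zc.* packages mentioned earlier; the portions are synthesized in a temp directory so the example is runnable:]

```python
import os
import sys
import tempfile
from textwrap import dedent

# Two separately-installed "distributions", each shipping a portion
# of the same 'acme' namespace package.
root = tempfile.mkdtemp()
for dist, submodule in [("dist1", "alpha"), ("dist2", "beta")]:
    pkg = os.path.join(root, dist, "acme")
    os.makedirs(pkg)
    with open(os.path.join(pkg, "__init__.py"), "w") as f:
        # The boilerplate every portion must carry under the pkgutil scheme:
        f.write(dedent("""\
            from pkgutil import extend_path
            __path__ = extend_path(__path__, __name__)
        """))
    with open(os.path.join(pkg, submodule + ".py"), "w") as f:
        f.write("name = %r\n" % submodule)
    sys.path.append(os.path.join(root, dist))

# Whichever portion is found first, extend_path() grafts the others
# onto acme.__path__, so both submodules are importable.
import acme.alpha
import acme.beta
```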