From ncoghlan at gmail.com Tue May 1 04:21:48 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 1 May 2012 12:21:48 +1000 Subject: [Python-Dev] [Python-checkins] cpython: Handle a possible race condition In-Reply-To: References: Message-ID: On Tue, May 1, 2012 at 10:35 AM, raymond.hettinger wrote: > http://hg.python.org/cpython/rev/b3aeaef6c315 > changeset: ? 76675:b3aeaef6c315 > user: ? ? ? ?Raymond Hettinger > date: ? ? ? ?Mon Apr 30 14:14:28 2012 -0700 > summary: > ?Handle a possible race condition > > files: > ?Lib/functools.py | ?6 ++++++ > ?1 files changed, 6 insertions(+), 0 deletions(-) > > > diff --git a/Lib/functools.py b/Lib/functools.py > --- a/Lib/functools.py > +++ b/Lib/functools.py > @@ -241,6 +241,12 @@ > ? ? ? ? ? ? ? ? ? ? ? ? return result > ? ? ? ? ? ? ? ? result = user_function(*args, **kwds) > ? ? ? ? ? ? ? ? with lock: > + ? ? ? ? ? ? ? ? ? ?if key in cache: > + ? ? ? ? ? ? ? ? ? ? ? ?# getting here means that this same key was added to the > + ? ? ? ? ? ? ? ? ? ? ? ?# cache while the lock was released. ?since the link > + ? ? ? ? ? ? ? ? ? ? ? ?# update is already done, we need only return the > + ? ? ? ? ? ? ? ? ? ? ? ?# computed result and update the count of misses. > + ? ? ? ? ? ? ? ? ? ? ? ?pass > ? ? ? ? ? ? ? ? ? ? if currsize < maxsize: > ? ? ? ? ? ? ? ? ? ? ? ? # put result in a new link at the front of the queue > ? ? ? ? ? ? ? ? ? ? ? ? last = root[PREV] To get the desired effect, I believe you also need s/if currsize/elif currsize/ Cheers, Nick. -- Nick Coghlan?? |?? ncoghlan at gmail.com?? |?? Brisbane, Australia From victor.stinner at gmail.com Tue May 1 10:35:56 2012 From: victor.stinner at gmail.com (Victor Stinner) Date: Tue, 1 May 2012 10:35:56 +0200 Subject: [Python-Dev] cpython: Issue #14428: Use the new time.perf_counter() and time.process_time() functions In-Reply-To: References: Message-ID: >> diff --git a/Lib/timeit.py b/Lib/timeit.py >> --- a/Lib/timeit.py >> +++ b/Lib/timeit.py >> @@ -15,8 +15,8 @@ >> ? ?-n/--number N: how many times to execute 'statement' (default: see below) >> ? ?-r/--repeat N: how many times to repeat the timer (default 3) >> ? ?-s/--setup S: statement to be executed once initially (default 'pass') >> - ?-t/--time: use time.time() (default on Unix) >> - ?-c/--clock: use time.clock() (default on Windows) >> + ?-t/--time: use time.time() >> + ?-c/--clock: use time.clock() > > Does it make sense to keep the options this way? ?IMO the distinction should be > to use either perf_counter() or process_time(), and the options could implement > this (-t -> perf_counter, -c -> process_time). You might need to use exactly the same clock to compare performance of Python 3.2 and 3.3. Adding an option to use time.process_time() is a good idea. Is anyone interested to implement it? Victor From g.brandl at gmx.net Tue May 1 11:59:48 2012 From: g.brandl at gmx.net (Georg Brandl) Date: Tue, 01 May 2012 11:59:48 +0200 Subject: [Python-Dev] cpython: Issue #14428: Use the new time.perf_counter() and time.process_time() functions In-Reply-To: References: Message-ID: On 01.05.2012 10:35, Victor Stinner wrote: >>> diff --git a/Lib/timeit.py b/Lib/timeit.py >>> --- a/Lib/timeit.py >>> +++ b/Lib/timeit.py >>> @@ -15,8 +15,8 @@ >>> -n/--number N: how many times to execute 'statement' (default: see below) >>> -r/--repeat N: how many times to repeat the timer (default 3) >>> -s/--setup S: statement to be executed once initially (default 'pass') >>> - -t/--time: use time.time() (default on Unix) >>> - -c/--clock: use time.clock() (default on Windows) >>> + -t/--time: use time.time() >>> + -c/--clock: use time.clock() >> >> Does it make sense to keep the options this way? IMO the distinction should be >> to use either perf_counter() or process_time(), and the options could implement >> this (-t -> perf_counter, -c -> process_time). > > You might need to use exactly the same clock to compare performance of > Python 3.2 and 3.3. > > Adding an option to use time.process_time() is a good idea. Is anyone > interested to implement it? I implemented it in d43a8aa9dbef. I also updated the docs in 552c207f65e4. Georg From solipsis at pitrou.net Tue May 1 12:37:51 2012 From: solipsis at pitrou.net (Antoine Pitrou) Date: Tue, 1 May 2012 12:37:51 +0200 Subject: [Python-Dev] cpython: Move make_key() out of the decorator body. Make keys that only need to be References: Message-ID: <20120501123751.15469102@pitrou.net> On Tue, 01 May 2012 07:32:36 +0200 raymond.hettinger wrote: > http://hg.python.org/cpython/rev/f981fe3b8bf7 > changeset: 76681:f981fe3b8bf7 > user: Raymond Hettinger > date: Mon Apr 30 22:32:16 2012 -0700 > summary: > Move make_key() out of the decorator body. Make keys that only need to be hashed once. How does it work? A new _CacheKey instance is created at each cache lookup anyway. Regards Antoine. From g.brandl at gmx.net Tue May 1 13:57:50 2012 From: g.brandl at gmx.net (Georg Brandl) Date: Tue, 01 May 2012 13:57:50 +0200 Subject: [Python-Dev] Open PEPs and large-scale changes for 3.3 Message-ID: With 3.3a3 tagged and the beta stage currently 2 months away, I would like to draw your attention to the following list of possible features for 3.3 as specified by PEP 398: Candidate PEPs: * PEP 362: Function Signature Object * PEP 395: Qualified Names for Modules * PEP 397: Python launcher for Windows * PEP 402: Simplified Package Layout (likely a new PEP derived from it) -- I assume PEP 420 is a candidate for that? * PEP 405: Python Virtual Environments * PEP 421: Adding sys.implementation * PEP 3143: Standard daemon process library * PEP 3144: IP Address manipulation library * PEP 3154: Pickle protocol version 4 Other planned large-scale changes: * Addition of the "regex" module * Email version 6 * A standard event-loop interface (PEP by Jim Fulton pending) * Breaking out standard library and docs in separate repos? Benjamin: I'd also like to know what will become of PEP 415. If anyone feels strongly about one of these items, please get ready to finalize and implement it well before June 23 (beta 1), or we have to discuss about adding another alpha. Also, if I missed any obvious candidate PEP or change, please let me know. cheers, Georg From eric at trueblade.com Tue May 1 14:11:15 2012 From: eric at trueblade.com (Eric V. Smith) Date: Tue, 01 May 2012 08:11:15 -0400 Subject: [Python-Dev] Open PEPs and large-scale changes for 3.3 In-Reply-To: References: Message-ID: <4F9FD2E3.4050701@trueblade.com> On 5/1/2012 7:57 AM, Georg Brandl wrote: > With 3.3a3 tagged and the beta stage currently 2 months away, I would like > to draw your attention to the following list of possible features for 3.3 > as specified by PEP 398: ... > Also, if I missed any obvious candidate PEP or change, please let me know. I'd like to include PEP 420, Implicit Namespace Packages. We discussed it at PyCon, and a sample implementation is available at features/pep-420. Barry Warsaw, Jason Coombs, and I are sprinting this Thursday to hopefully finish up tests and other loose ends. Then we'll ask that it be accepted. If accepted, we should be able to get it in before alpha 4. Eric. From eric at trueblade.com Tue May 1 14:24:24 2012 From: eric at trueblade.com (Eric V. Smith) Date: Tue, 01 May 2012 08:24:24 -0400 Subject: [Python-Dev] Open PEPs and large-scale changes for 3.3 In-Reply-To: <4F9FD2E3.4050701@trueblade.com> References: <4F9FD2E3.4050701@trueblade.com> Message-ID: <4F9FD5F8.20409@trueblade.com> On 5/1/2012 8:11 AM, Eric V. Smith wrote: > On 5/1/2012 7:57 AM, Georg Brandl wrote: >> With 3.3a3 tagged and the beta stage currently 2 months away, I would like >> to draw your attention to the following list of possible features for 3.3 >> as specified by PEP 398: > ... > >> Also, if I missed any obvious candidate PEP or change, please let me know. > > I'd like to include PEP 420, Implicit Namespace Packages. Oops, I missed your reference to PEP 402 and PEP 420. Sorry about that. It is indeed 420 that would replace 402. Eric. From ncoghlan at gmail.com Tue May 1 15:30:39 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 1 May 2012 23:30:39 +1000 Subject: [Python-Dev] Open PEPs and large-scale changes for 3.3 In-Reply-To: References: Message-ID: On Tue, May 1, 2012 at 9:57 PM, Georg Brandl wrote: > With 3.3a3 tagged and the beta stage currently 2 months away, I would like > to draw your attention to the following list of possible features for 3.3 > as specified by PEP 398: A few of those are on my plate, soo... > * PEP 395: Qualified Names for Modules I'm currently thinking I'll defer this to 3.4. With the importlib change and PEP 420, there's already going to be an awful lot of churn in that space for 3.3, plus I have other things that I consider more important that I want to get done first. > * PEP 405: Python Virtual Environments I pinged Carl and Vinay about the remaining open issues yesterday, and indicated I'd really like to have something I can pronounce on soon so we can get it into the fourth alpha on May 26. I'm hoping we'll see the next draft of the PEP soon, but the ball is back in their court for the moment. > * PEP 3144: IP Address manipulation library This is pretty close to approval. Peter's addressed all the substantive comments that were made regarding the draft API, and he's going to provide an update to the PEP shortly that should get it into a state where I can mark it as Approved. Integration of the library and tests shouldn't be too hard, but it would really help if a sphinx expert could take a look at my Stack Overflow question [1] about generating an initial version of the API reference docs. (I've been meaning to figure out the right mailing list to send sphinx questions to, but haven't got around to it yet). [1] http://stackoverflow.com/questions/10377576/emit-restructuredtext-from-sphinx-autodoc > * Breaking out standard library and docs in separate repos? Our current development infrastructure simply isn't set up to cope with this. With both 407 and 413 still open (and not likely to go anywhere any time soon), this simply isn't going to happen for 3.3. > Benjamin: I'd also like to know what will become of PEP 415. I emailed Guido and Benjamin about that one the other day. I'll be PEP czar, and the most likely outcome is that I'll approve the PEP as is and we'll create a separate tracker issue to discuss the exact behaviour of the traceback display functions when they're handed exceptions with __suppress_context__ set to False and __cause__ and __context__ are both non-None (Benjamin's patch preserves the status quo of only displaying __cause__ in that case, which I don't think is ideal, but also don't think is worth holding up PEP 415 over). I'm still waiting to hear back from Benjamin though. Cheers, Nick. -- Nick Coghlan?? |?? ncoghlan at gmail.com?? |?? Brisbane, Australia From eliben at gmail.com Tue May 1 15:34:05 2012 From: eliben at gmail.com (Eli Bendersky) Date: Tue, 1 May 2012 16:34:05 +0300 Subject: [Python-Dev] Open PEPs and large-scale changes for 3.3 In-Reply-To: References: Message-ID: >> * PEP 3144: IP Address manipulation library > > This is pretty close to approval. Peter's addressed all the > substantive comments that were made regarding the draft API, and he's > going to provide an update to the PEP shortly that should get it into > a state where I can mark it as Approved. Integration of the library > and tests shouldn't be too hard, but it would really help if a sphinx > expert could take a look at my Stack Overflow question [1] about > generating an initial version of the API reference docs. (I've been > meaning to figure out the right mailing list to send sphinx questions > to, but haven't got around to it yet). > > [1] http://stackoverflow.com/questions/10377576/emit-restructuredtext-from-sphinx-autodoc > Will this package go through the provisional state mandated by PEP 411 ? Eli From benjamin at python.org Tue May 1 15:38:41 2012 From: benjamin at python.org (Benjamin Peterson) Date: Tue, 1 May 2012 09:38:41 -0400 Subject: [Python-Dev] time.clock_info() field names In-Reply-To: References: Message-ID: I've now renamed "is_monotonic" to "monotonic" and "is_adjusted" to "adjusted". 2012/4/29 Benjamin Peterson : > Hi, > I see PEP 418 gives time.clock_info() two boolean fields named > "is_monotonic" and "is_adjusted". I think the "is_" is unnecessary and > a bit ugly, and they could just be renamed "monotonic" and "adjusted". > > Thoughts? > > -- > Regards, > Benjamin -- Regards, Benjamin From ncoghlan at gmail.com Tue May 1 15:43:22 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 1 May 2012 23:43:22 +1000 Subject: [Python-Dev] Open PEPs and large-scale changes for 3.3 In-Reply-To: References: Message-ID: On Tue, May 1, 2012 at 11:34 PM, Eli Bendersky wrote: >>> * PEP 3144: IP Address manipulation library >> >> This is pretty close to approval. Peter's addressed all the >> substantive comments that were made regarding the draft API, and he's >> going to provide an update to the PEP shortly that should get it into >> a state where I can mark it as Approved. Integration of the library >> and tests shouldn't be too hard, but it would really help if a sphinx >> expert could take a look at my Stack Overflow question [1] about >> generating an initial version of the API reference docs. (I've been >> meaning to figure out the right mailing list to send sphinx questions >> to, but haven't got around to it yet). >> >> [1] http://stackoverflow.com/questions/10377576/emit-restructuredtext-from-sphinx-autodoc >> > > Will this package go through the provisional state mandated by PEP 411 ? Yeah, it will. While the ipaddr heritage means we can be confident the underlying implementation is solid, there's no need to be hasty in locking down the cleaned up API. Clarifying that is one of the updates I've asked Peter to make to the PEP before I can accept it. Cheers, Nick. -- Nick Coghlan?? |?? ncoghlan at gmail.com?? |?? Brisbane, Australia From benjamin at python.org Tue May 1 15:43:38 2012 From: benjamin at python.org (Benjamin Peterson) Date: Tue, 1 May 2012 09:43:38 -0400 Subject: [Python-Dev] Open PEPs and large-scale changes for 3.3 In-Reply-To: References: Message-ID: 2012/5/1 Eli Bendersky : > Will this package go through the provisional state mandated by PEP 411 ? I don't see PEP 411 requiring any module to go through its process. -- Regards, Benjamin From ncoghlan at gmail.com Tue May 1 15:46:42 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 1 May 2012 23:46:42 +1000 Subject: [Python-Dev] Open PEPs and large-scale changes for 3.3 In-Reply-To: References: Message-ID: On Tue, May 1, 2012 at 11:43 PM, Benjamin Peterson wrote: > 2012/5/1 Eli Bendersky : >> Will this package go through the provisional state mandated by PEP 411 ? > > I don't see PEP 411 requiring any module to go through its process. Indeed, it's a decision to be made on a case-by-case basis when a module is up for inclusion. For example, the unittest.mock API isn't provisional, since it's already been well tested on PyPI. Cheers, Nick. -- Nick Coghlan?? |?? ncoghlan at gmail.com?? |?? Brisbane, Australia From yselivanov.ml at gmail.com Tue May 1 16:26:45 2012 From: yselivanov.ml at gmail.com (Yury Selivanov) Date: Tue, 1 May 2012 10:26:45 -0400 Subject: [Python-Dev] Open PEPs and large-scale changes for 3.3 In-Reply-To: References: Message-ID: <0377A8D3-E2AB-42B6-81C7-A060413F11A5@gmail.com> On 2012-05-01, at 7:57 AM, Georg Brandl wrote: > With 3.3a3 tagged and the beta stage currently 2 months away, I would like > to draw your attention to the following list of possible features for 3.3 > as specified by PEP 398: > > Candidate PEPs: > > * PEP 362: Function Signature Object Regarding PEP 362: there are some outstanding issues with the PEP, that should be resolved. I've outlined some in this email: http://mail.python.org/pipermail/python-dev/2012-March/117540.html If Brett is tied up with the importlib integration, I'd be glad to offer my help with adjustment of the PEP and reference implementation update. - Yury From brett at python.org Tue May 1 16:26:39 2012 From: brett at python.org (Brett Cannon) Date: Tue, 1 May 2012 10:26:39 -0400 Subject: [Python-Dev] Open PEPs and large-scale changes for 3.3 In-Reply-To: References: Message-ID: On Tue, May 1, 2012 at 07:57, Georg Brandl wrote: > With 3.3a3 tagged and the beta stage currently 2 months away, I would like > to draw your attention to the following list of possible features for 3.3 > as specified by PEP 398: > > Candidate PEPs: > > * PEP 362: Function Signature Object > This is mine and I can say that the chance of me getting to this in time is near zero. If someone wants to pick it up and try to finish up the work (which involves addressing Guido's comments on the PEP and seeing if the patch someone submitted is worth looking at) then I'm fine with that. Else this PEP will become a 3.4 addition. -Brett > * PEP 395: Qualified Names for Modules > * PEP 397: Python launcher for Windows > * PEP 402: Simplified Package Layout (likely a new PEP derived from it) -- > I assume PEP 420 is a candidate for that? > * PEP 405: Python Virtual Environments > * PEP 421: Adding sys.implementation > * PEP 3143: Standard daemon process library > * PEP 3144: IP Address manipulation library > * PEP 3154: Pickle protocol version 4 > > Other planned large-scale changes: > > * Addition of the "regex" module > * Email version 6 > * A standard event-loop interface (PEP by Jim Fulton pending) > * Breaking out standard library and docs in separate repos? > > Benjamin: I'd also like to know what will become of PEP 415. > > If anyone feels strongly about one of these items, please get ready to > finalize and implement it well before June 23 (beta 1), or we have to > discuss about adding another alpha. > > Also, if I missed any obvious candidate PEP or change, please let me know. > > cheers, > Georg > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > http://mail.python.org/mailman/options/python-dev/brett%40python.org > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rdmurray at bitdance.com Tue May 1 16:40:08 2012 From: rdmurray at bitdance.com (R. David Murray) Date: Tue, 01 May 2012 10:40:08 -0400 Subject: [Python-Dev] Email6 status (was Open PEPs and large-scale changes for 3.3) In-Reply-To: References: Message-ID: <20120501144009.720AE250147@webabinitio.net> On Tue, 01 May 2012 13:57:50 +0200, Georg Brandl wrote: > Other planned large-scale changes: > > * Addition of the "regex" module > * Email version 6 I guess it's time to talk about my plans for this one :) RIM/QNX is currently paying me to work on their stuff rather than email6, (but it does leave me with some time for email6). However, while QNX directly funded a big chunk of email6, as a consequence of their current priorities the whole of the email6 spec isn't going to be implemented for Python3.3. There is, however, a very useful big chunk of it that is pretty much done: the improved header parsing, header API, and header folding. I covered the primary improvements in my PyCon talk, for those who were there or have seen the video. Even that is not quite complete, but I'm currently planning to finish it before alpha 4. (There may be a couple of details that won't make it in until beta1.) At the PyCon sprints I finished the folding implementation. It's every bit as ugly as the old folding implementation that I simplified some time ago, but it gets a lot more corner cases right, and implements an important feature that the old folding algorithm got wrong more often than not: folding at "higher level syntactic breaks". So while I'd like to revisit that code and improve it, it *works*. So any further work on that can be bug-fix stage. Also at the sprints I started on a performance refactoring. It has been bothering me for a while that any program using the new code would have been doing a complete RFC5322 parse on every header in every message, even if it was processing a boatload of messages, only cared about the content of a few headers, and wanted to just pass the rest through. I was treating fixing that as a premature optimization, though I had some thoughts about how to do so. Well, to my great surprise, the most logical way of fixing it turned out to have two significant benefits: the code got simpler, and it provides a way to maintain pretty much 100% backward compatibility with Python3.2. I guess some optimizations aren't premature. The basic scheme (which I have almost completely implemented in the email6 feature repo at this point) is to continue to store the raw data from a parse in the Message just like we always have, and only do the full RFC5322 parse when either an application program asks for the header, or a generator needs to re-fold that header for some reason. By setting the policy controls appropriately and being aware of the consequences of looking at a header, an application could take advantage of the new header parsing for headers of interest with minimal performance impact compared to 3.2. Now, here's the tricky bit. The new API for headers has been out on PyPI for review for almost a year now, but hasn't seen what you would call widespread use. In particular, I haven't gotten any feedback about it. It seems to me that introducing this new API in 3.3 would be a perfect application of PEP 411...except that email is already a package in the standard library. This is where the backward-compatibility of my performance refactor comes in. The way this works is that the policy object, which has already been added to the 3.3 codebase and *has* gotten some review and feedback, controls what happens to the headers. The way the code in the 'nemail6' branch of /features/email6 currently works is that the policy used by default is named 'compat32'. (Actually it's compat5 right now in the repository, but I plan to change the name today.) That policy implements the exact same header handling that 3.2 currently uses (bugs and all). The new header handling is introduced by any *other* pre-defined policy an application may select. Thus, if code is not changed to use one of the new named policies, nothing changes and we have full backward compatibility. If a policy is specified, then the new header handling code (and the API it provides) is used. What I'm currently preparing is two patches. The first patch will refactor the policy code that was already committed so that the above scheme can be implemented, and so that compat32 is the default policy for 3.3. (This is the 'nemail6base' branch in /features/email6.) The second patch will use the policy hooks introduced by the first patch to add the new policies that use the new header parsing/folding code. My plan is that the first patch will go into 3.3 regardless (and should be ready for review/commit soon). What I'd like to do is have the second patch introduce the new policies as *provisional policies*. That is, in the spirit but not the letter of PEP 411, I'd like the new header API to be considered provisional and subject to improvement in 3.4 based on what we learn by having it actually out there in the field and getting tested. --David From barry at python.org Tue May 1 16:55:03 2012 From: barry at python.org (Barry Warsaw) Date: Tue, 1 May 2012 10:55:03 -0400 Subject: [Python-Dev] Email6 status (was Open PEPs and large-scale changes for 3.3) In-Reply-To: <20120501144009.720AE250147@webabinitio.net> References: <20120501144009.720AE250147@webabinitio.net> Message-ID: <20120501105503.49774ada@resist.wooz.org> On May 01, 2012, at 10:40 AM, R. David Murray wrote: >I guess it's time to talk about my plans for this one :) Thanks for the update RDM. I really wish I had more time to contribute to email6, but I'd still really like to see this land in 3.3 if possible. I suspect you're just not going to get much practical feedback on email6 until it's available in Python's stdlib. I don't know how many Python 3 email consuming applications there are out there. The one I'm intimately familiar with still can't port to Python 3 because of its dependencies. >What I'd like to do is have the second patch introduce the new policies >as *provisional policies*. That is, in the spirit but not the letter >of PEP 411, I'd like the new header API to be considered provisional >and subject to improvement in 3.4 based on what we learn by having it >actually out there in the field and getting tested. That seems reasonable to me. The documentation should be clear as to what's provisional and what's stable. With that, and based on your level of confidence, I'd be in favor of getting email6 into Python 3.3. Cheers, -Barry From barry at python.org Tue May 1 16:57:56 2012 From: barry at python.org (Barry Warsaw) Date: Tue, 1 May 2012 10:57:56 -0400 Subject: [Python-Dev] Open PEPs and large-scale changes for 3.3 In-Reply-To: References: Message-ID: <20120501105756.185cb333@resist.wooz.org> On May 01, 2012, at 11:30 PM, Nick Coghlan wrote: >> * Breaking out standard library and docs in separate repos? > >Our current development infrastructure simply isn't set up to cope >with this. With both 407 and 413 still open (and not likely to go >anywhere any time soon), this simply isn't going to happen for 3.3. I concur. -Barry From barry at python.org Tue May 1 17:00:12 2012 From: barry at python.org (Barry Warsaw) Date: Tue, 1 May 2012 11:00:12 -0400 Subject: [Python-Dev] Open PEPs and large-scale changes for 3.3 In-Reply-To: <4F9FD5F8.20409@trueblade.com> References: <4F9FD2E3.4050701@trueblade.com> <4F9FD5F8.20409@trueblade.com> Message-ID: <20120501110012.7373f7f2@resist.wooz.org> On May 01, 2012, at 08:24 AM, Eric V. Smith wrote: >Oops, I missed your reference to PEP 402 and PEP 420. Sorry about that. > >It is indeed 420 that would replace 402. And the older PEP 382. Once 420 is accepted, we should simply reject 382 and 402. At that point, I'll update them to point to 420. -Barry From eliben at gmail.com Tue May 1 17:12:52 2012 From: eliben at gmail.com (Eli Bendersky) Date: Tue, 1 May 2012 18:12:52 +0300 Subject: [Python-Dev] Open PEPs and large-scale changes for 3.3 In-Reply-To: References: Message-ID: On Tue, May 1, 2012 at 16:43, Benjamin Peterson wrote: > 2012/5/1 Eli Bendersky : >> Will this package go through the provisional state mandated by PEP 411 ? > > I don't see PEP 411 requiring any module to go through its process. > You're right, it doesn't require it. However, since Nick's summary above mentioned a "draft API", I thought this package can be a good candidate for a PEP-411-process. Without PEP 411, once a module gets into stdlib, its API is pretty much locked. If we are wary of such lock-in with the current state ipaddr's API is in, PEP 411 seems like a reasonable way to go. Eli From martin at v.loewis.de Tue May 1 17:48:23 2012 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Tue, 01 May 2012 17:48:23 +0200 Subject: [Python-Dev] Open PEPs and large-scale changes for 3.3 In-Reply-To: References: Message-ID: <4FA005C7.20302@v.loewis.de> > * PEP 397: Python launcher for Windows I hope to submit a rewrite of this PEP RSN. > Also, if I missed any obvious candidate PEP or change, please let me know. A big pending change is the switch to a new Visual Studio release. The challenge here is that we need to stop using the outdated VS 2008, but then, VS 2010 will soon be outdated as well, so it would be sad (IMO) if we switch from one outdated tool to the next. Therefore, I would really like to see Python 3.3 use VS 2012, except that this won't be released for a few more months (the release is likely along with the release for Windows 8, which likely happens "this summer"). So what specific VS release we use may depend on whether there will be another alpha release or not (but it may also be that another alpha release still won't buy enough time, so that we use VS 2008 for 2.7, VS 2010 for 3.3, and VS 2012 for 3.4). Regards, Martin P.S. There is (as of yet unconfirmed) rumor that VS 2012 won't support XP, which would clearly rule it out for Python 3.3, and likely also for 3.4. It also appears that VS 2012 might include the VS 2010 tool chain, which means that this tool chain won't be that outdated. P.P.S. this affects primarily the build files and the packaging, but then also affects distutils etc., and the buildbots - for the latter, switching the VS version likely means that all Windows buildbots will break, likely requiring several months for them to come back. P.P.P.S. People, please don't propose to drop VS in favor of gcc. That won't happen. From g.brandl at gmx.net Tue May 1 17:56:51 2012 From: g.brandl at gmx.net (Georg Brandl) Date: Tue, 01 May 2012 17:56:51 +0200 Subject: [Python-Dev] Open PEPs and large-scale changes for 3.3 In-Reply-To: <0377A8D3-E2AB-42B6-81C7-A060413F11A5@gmail.com> References: <0377A8D3-E2AB-42B6-81C7-A060413F11A5@gmail.com> Message-ID: On 01.05.2012 16:26, Yury Selivanov wrote: > On 2012-05-01, at 7:57 AM, Georg Brandl wrote: > >> With 3.3a3 tagged and the beta stage currently 2 months away, I would like >> to draw your attention to the following list of possible features for 3.3 >> as specified by PEP 398: >> >> Candidate PEPs: >> >> * PEP 362: Function Signature Object > > Regarding PEP 362: there are some outstanding issues with the PEP, that should > be resolved. I've outlined some in this email: > http://mail.python.org/pipermail/python-dev/2012-March/117540.html > > If Brett is tied up with the importlib integration, I'd be glad to offer my > help with adjustment of the PEP and reference implementation update. If you volunteer, and if Brett agrees to coordinate with you, that would be great. Georg From g.brandl at gmx.net Tue May 1 18:04:03 2012 From: g.brandl at gmx.net (Georg Brandl) Date: Tue, 01 May 2012 18:04:03 +0200 Subject: [Python-Dev] Open PEPs and large-scale changes for 3.3 In-Reply-To: <4FA005C7.20302@v.loewis.de> References: <4FA005C7.20302@v.loewis.de> Message-ID: On 01.05.2012 17:48, "Martin v. L?wis" wrote: >> * PEP 397: Python launcher for Windows > > I hope to submit a rewrite of this PEP RSN. Good to hear. >> Also, if I missed any obvious candidate PEP or change, please let me know. > > A big pending change is the switch to a new Visual Studio release. The > challenge here is that we need to stop using the outdated VS 2008, but > then, VS 2010 will soon be outdated as well, so it would be sad (IMO) > if we switch from one outdated tool to the next. > > Therefore, I would really like to see Python 3.3 use VS 2012, except > that this won't be released for a few more months (the release is likely > along with the release for Windows 8, which likely happens "this > summer"). > > So what specific VS release we use may depend on whether there will be > another alpha release or not (but it may also be that another alpha > release still won't buy enough time, so that we use VS 2008 for 2.7, > VS 2010 for 3.3, and VS 2012 for 3.4). Do you know when a more detailed schedule for VS 2012 will be available (and confirmation regarding XP support)? While I agree that it would be best to use the most up-to-date toolchain, we shouldn't defer the beta stage indefinitely if there is no concrete date set. > P.S. There is (as of yet unconfirmed) rumor that VS 2012 won't support > XP, which would clearly rule it out for Python 3.3, and likely also for > 3.4. It also appears that VS 2012 might include the VS 2010 tool chain, > which means that this tool chain won't be that outdated. > > P.P.S. this affects primarily the build files and the packaging, > but then also affects distutils etc., and the buildbots - for the > latter, switching the VS version likely means that all Windows buildbots > will break, likely requiring several months for them to come back. Which is definitely not something we want to do during beta stage. Georg From g.brandl at gmx.net Tue May 1 18:06:54 2012 From: g.brandl at gmx.net (Georg Brandl) Date: Tue, 01 May 2012 18:06:54 +0200 Subject: [Python-Dev] Open PEPs and large-scale changes for 3.3 In-Reply-To: References: Message-ID: On 01.05.2012 15:30, Nick Coghlan wrote: > On Tue, May 1, 2012 at 9:57 PM, Georg Brandl wrote: >> With 3.3a3 tagged and the beta stage currently 2 months away, I would like >> to draw your attention to the following list of possible features for 3.3 >> as specified by PEP 398: > > A few of those are on my plate, soo... > >> * PEP 395: Qualified Names for Modules > > I'm currently thinking I'll defer this to 3.4. With the importlib > change and PEP 420, there's already going to be an awful lot of churn > in that space for 3.3, plus I have other things that I consider more > important that I want to get done first. OK, I've moved this one to the "deferred" section for now. >> * PEP 405: Python Virtual Environments > > I pinged Carl and Vinay about the remaining open issues yesterday, and > indicated I'd really like to have something I can pronounce on soon so > we can get it into the fourth alpha on May 26. I'm hoping we'll see > the next draft of the PEP soon, but the ball is back in their court > for the moment. Yes, there also was an RFC on the distutils-sig. >> * PEP 3144: IP Address manipulation library > > This is pretty close to approval. Peter's addressed all the > substantive comments that were made regarding the draft API, and he's > going to provide an update to the PEP shortly that should get it into > a state where I can mark it as Approved. Integration of the library > and tests shouldn't be too hard, but it would really help if a sphinx > expert could take a look at my Stack Overflow question [1] about > generating an initial version of the API reference docs. (I've been > meaning to figure out the right mailing list to send sphinx questions > to, but haven't got around to it yet). > > [1] http://stackoverflow.com/questions/10377576/emit-restructuredtext-from-sphinx-autodoc I can create that initial .rst for you. It is quite trivial, but not supported by Sphinx without hacking the autodoc code a little. >> * Breaking out standard library and docs in separate repos? > > Our current development infrastructure simply isn't set up to cope > with this. With both 407 and 413 still open (and not likely to go > anywhere any time soon), this simply isn't going to happen for 3.3. Agreed, and moved to deferred. >> Benjamin: I'd also like to know what will become of PEP 415. > > I emailed Guido and Benjamin about that one the other day. I'll be PEP > czar, and the most likely outcome is that I'll approve the PEP as is > and we'll create a separate tracker issue to discuss the exact > behaviour of the traceback display functions when they're handed > exceptions with __suppress_context__ set to False and __cause__ and > __context__ are both non-None (Benjamin's patch preserves the status > quo of only displaying __cause__ in that case, which I don't think is > ideal, but also don't think is worth holding up PEP 415 over). I'm > still waiting to hear back from Benjamin though. I've added 420 to the pending list in any case. Georg From martin at v.loewis.de Tue May 1 18:54:49 2012 From: martin at v.loewis.de (martin at v.loewis.de) Date: Tue, 01 May 2012 18:54:49 +0200 Subject: [Python-Dev] Open PEPs and large-scale changes for 3.3 In-Reply-To: References: <4FA005C7.20302@v.loewis.de> Message-ID: <20120501185449.Horde.NLQDb6GZi1VPoBVZOsmBK8A@webmail.df.eu> > Do you know when a more detailed schedule for VS 2012 will be available > (and confirmation regarding XP support)? Unfortunately, Microsoft doesn't publish any release dates. It's ready when it's ready :-( I just search again, and it appears that some roadmap has leaked: http://www.zdnet.com/blog/microsoft/microsoft-roadmap-leaks-for-office-15-ie-10-and-more-key-products/12417 That says that a release is scheduled for "late 2012", which would put it after the Python 3.3 release (contrary to rumors I heard elsewhere). Regards, Martin From merwok at netwok.org Tue May 1 18:58:29 2012 From: merwok at netwok.org (=?UTF-8?B?w4lyaWMgQXJhdWpv?=) Date: Tue, 01 May 2012 12:58:29 -0400 Subject: [Python-Dev] Open PEPs and large-scale changes for 3.3 In-Reply-To: References: Message-ID: <4FA01635.1030801@netwok.org> Hi, Le 01/05/2012 09:30, Nick Coghlan a ?crit : >> * PEP 3144: IP Address manipulation library > This is pretty close to approval. Peter's addressed all the > substantive comments that were made regarding the draft API, and he's > going to provide an update to the PEP shortly that should get it into > a state where I can mark it as Approved. Integration of the library > and tests shouldn't be too hard, but it would really help if a sphinx > expert could take a look at my Stack Overflow question [1] about > generating an initial version of the API reference docs. (I've been > meaning to figure out the right mailing list to send sphinx questions > to, but haven't got around to it yet). IIUC sphinx-autogen (shipped with Sphinx) does that. Cheers From georg at python.org Tue May 1 21:43:13 2012 From: georg at python.org (Georg Brandl) Date: Tue, 01 May 2012 21:43:13 +0200 Subject: [Python-Dev] [RELEASED] Python 3.3.0 alpha 3 Message-ID: <4FA03CD1.7020605@python.org> On behalf of the Python development team, I'm happy to announce the third alpha release of Python 3.3.0. This is a preview release, and its use is not recommended in production settings. Python 3.3 includes a range of improvements of the 3.x series, as well as easier porting between 2.x and 3.x. Major new features and changes in the 3.3 release series are: * PEP 380, Syntax for Delegating to a Subgenerator ("yield from") * PEP 393, Flexible String Representation (doing away with the distinction between "wide" and "narrow" Unicode builds) * PEP 409, Suppressing Exception Context * PEP 3151, Reworking the OS and IO exception hierarchy * A C implementation of the "decimal" module, with up to 80x speedup for decimal-heavy applications * The import system (__import__) is based on importlib by default * The new "packaging" module, building upon the "distribute" and "distutils2" projects and deprecating "distutils" * The new "lzma" module with LZMA/XZ support * PEP 3155, Qualified name for classes and functions * PEP 414, explicit Unicode literals to help with porting * PEP 418, extended platform-independent clocks in the "time" module * The new "faulthandler" module that helps diagnosing crashes * A "collections.ChainMap" class for linking mappings to a single unit * Wrappers for many more POSIX functions in the "os" and "signal" modules, as well as other useful functions such as "sendfile()" * Hash randomization, introduced in earlier bugfix releases, is now switched on by default For a more extensive list of changes in 3.3.0, see http://docs.python.org/3.3/whatsnew/3.3.html (*) To download Python 3.3.0 visit: http://www.python.org/download/releases/3.3.0/ Please consider trying Python 3.3.0 with your code and reporting any bugs you may notice to: http://bugs.python.org/ Enjoy! (*) Please note that this document is usually finalized late in the release cycle and therefore may have stubs and missing entries at this point. -- Georg Brandl, Release Manager georg at python.org (on behalf of the entire python-dev team and 3.3's contributors) From brett at python.org Tue May 1 22:12:10 2012 From: brett at python.org (Brett Cannon) Date: Tue, 1 May 2012 16:12:10 -0400 Subject: [Python-Dev] Open PEPs and large-scale changes for 3.3 In-Reply-To: <0377A8D3-E2AB-42B6-81C7-A060413F11A5@gmail.com> References: <0377A8D3-E2AB-42B6-81C7-A060413F11A5@gmail.com> Message-ID: On Tue, May 1, 2012 at 10:26, Yury Selivanov wrote: > On 2012-05-01, at 7:57 AM, Georg Brandl wrote: > > > With 3.3a3 tagged and the beta stage currently 2 months away, I would > like > > to draw your attention to the following list of possible features for 3.3 > > as specified by PEP 398: > > > > Candidate PEPs: > > > > * PEP 362: Function Signature Object > > Regarding PEP 362: there are some outstanding issues with the PEP, that > should > be resolved. I've outlined some in this email: > http://mail.python.org/pipermail/python-dev/2012-March/117540.html > > If Brett is tied up with the importlib integration, Yes I am. =) > I'd be glad to offer my > help with adjustment of the PEP and reference implementation update. > That would be great! First thing is addressing Guido's concerns from http://mail.python.org/pipermail/python-dev/2012-March/117515.html and then handling any issues you found. Not sure if Larry was asking about this out of curiosity or because he too wanted to help. I think the overall trick is keeping the API simple so it's easy to use but exposes what one could reasonably need (e.g. I wouldn't try to keep the order of keyword-only arguments). -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben+python at benfinney.id.au Wed May 2 02:24:14 2012 From: ben+python at benfinney.id.au (Ben Finney) Date: Wed, 02 May 2012 10:24:14 +1000 Subject: [Python-Dev] Open PEPs and large-scale changes for 3.3 References: Message-ID: <87havz8m6p.fsf@benfinney.id.au> Georg Brandl writes: > list of possible features for 3.3 as specified by PEP 398: > > Candidate PEPs: [?] > * PEP 3143: Standard daemon process library Our porting work will not be done in time for Python 3.3. I will update this to target Python 3.4. -- \ ?The best mind-altering drug is truth.? ?Jane Wagner, via Lily | `\ Tomlin | _o__) | Ben Finney From senthil at uthcode.com Wed May 2 05:09:00 2012 From: senthil at uthcode.com (Senthil Kumaran) Date: Wed, 2 May 2012 11:09:00 +0800 Subject: [Python-Dev] Another buildslave - Ubuntu again Message-ID: Hello, I just got a Ubuntu Server running at my disposal, which could be connected 24/7 for at least next 3 months. I am not sure how helpful it would be to have another buildbot on Ubuntu, but i wanted to play with it for a while (as I have more comfort with Ubuntu than any other Unix flavor) before I could change it to cover as OS which is not already covered by the buildbots. As instructed here - http://wiki.python.org/moin/BuildBot could someone please help create a slavename/slavepasswd on dinsdale.python.org. Also, I think the instructions in the wiki could be improved. I was not able to su - buildbot after installing through package manager. I shall edit it once I have set it up and running. Thanks, Senthil From rosuav at gmail.com Wed May 2 05:13:15 2012 From: rosuav at gmail.com (Chris Angelico) Date: Wed, 2 May 2012 13:13:15 +1000 Subject: [Python-Dev] Another buildslave - Ubuntu again In-Reply-To: References: Message-ID: On Wed, May 2, 2012 at 1:09 PM, Senthil Kumaran wrote: > Also, I think the instructions in the wiki could be improved. I was > not able to su - buildbot after installing through package manager. I > shall edit it once I have set it up and running. The page does say: """... create a new user "buildbot" if it doesn't exist (your package manager might have done it for you)""", but it'd be nice if it could clarify which are known to do it and which are known not to, eg "(the Debian and Red Hat package managers will do this for you)". Or is that too much of a moving target to be worth trying to specify? ChrisA From ncoghlan at gmail.com Wed May 2 06:22:17 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 2 May 2012 14:22:17 +1000 Subject: [Python-Dev] Open PEPs and large-scale changes for 3.3 In-Reply-To: <4FA01635.1030801@netwok.org> References: <4FA01635.1030801@netwok.org> Message-ID: On Wed, May 2, 2012 at 2:58 AM, ?ric Araujo wrote: > Hi, > > Le 01/05/2012 09:30, Nick Coghlan a ?crit : > >>> * PEP 3144: IP Address manipulation library >> >> This is pretty close to approval. Peter's addressed all the >> substantive comments that were made regarding the draft API, and he's >> going to provide an update to the PEP shortly that should get it into >> a state where I can mark it as Approved. Integration of the library >> and tests shouldn't be too hard, but it would really help if a sphinx >> expert could take a look at my Stack Overflow question [1] about >> generating an initial version of the API reference docs. (I've been >> meaning to figure out the right mailing list to send sphinx questions >> to, but haven't got around to it yet). > > > IIUC sphinx-autogen (shipped with Sphinx) does that. As near as I can tell, autogen does the same thing "apidoc" does - inserts autodoc directives in the generated .rst files that loads the docstrings at build time. I don't want that - I want to load the docstrings at generation time in order to use them as a basis for the hand written docs. Instead, I'll just take Georg up on his offer to generate the initial file for us. Cheers, Nick. -- Nick Coghlan?? |?? ncoghlan at gmail.com?? |?? Brisbane, Australia From martin at v.loewis.de Wed May 2 07:49:49 2012 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Wed, 02 May 2012 07:49:49 +0200 Subject: [Python-Dev] Another buildslave - Ubuntu again In-Reply-To: References: Message-ID: <4FA0CAFD.5020203@v.loewis.de> On 02.05.2012 05:13, Chris Angelico wrote: > On Wed, May 2, 2012 at 1:09 PM, Senthil Kumaran wrote: >> Also, I think the instructions in the wiki could be improved. I was >> not able to su - buildbot after installing through package manager. I >> shall edit it once I have set it up and running. > > The page does say: """... create a new user "buildbot" if it doesn't > exist (your package manager might have done it for you)""", but it'd > be nice if it could clarify which are known to do it and which are > known not to, eg "(the Debian and Red Hat package managers will do > this for you)". Or is that too much of a moving target to be worth > trying to specify? I think a buildbot admin should be able to figure out what user the buildbot to run under himself; if that already is a challenge, it might be better if he don't run a build slave. Regards, Martin From martin at v.loewis.de Wed May 2 07:55:27 2012 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Wed, 02 May 2012 07:55:27 +0200 Subject: [Python-Dev] Another buildslave - Ubuntu again In-Reply-To: References: Message-ID: <4FA0CC4F.4070607@v.loewis.de> > I just got a Ubuntu Server running at my disposal, which could be > connected 24/7 for at least next 3 months. I am not sure how helpful > it would be to have another buildbot on Ubuntu, but i wanted to play > with it for a while (as I have more comfort with Ubuntu than any other > Unix flavor) before I could change it to cover as OS which is not > already covered by the buildbots. I'm not sure how useful it is to have a build slave which you can't commit to having for more than 3 months. So I'm -0 on adding this slave, but it is up to Antoine to decide. Regards, Martin From senthil at uthcode.com Wed May 2 08:07:09 2012 From: senthil at uthcode.com (Senthil Kumaran) Date: Wed, 2 May 2012 14:07:09 +0800 Subject: [Python-Dev] Another buildslave - Ubuntu again In-Reply-To: <4FA0CC4F.4070607@v.loewis.de> References: <4FA0CC4F.4070607@v.loewis.de> Message-ID: On Wed, May 2, 2012 at 1:55 PM, "Martin v. L?wis" wrote: > I'm not sure how useful it is to have a build slave which you can't > commit to having for more than 3 months. So I'm -0 on adding this > slave, but it is up to Antoine to decide. I am likely switch to places within 3 months, but I am hoping that having a 24/7 connected system could provide some experience for running a dedicated system in the longer run. Thanks, Senthil From larry at hastings.org Wed May 2 08:46:03 2012 From: larry at hastings.org (Larry Hastings) Date: Tue, 01 May 2012 23:46:03 -0700 Subject: [Python-Dev] Open PEPs and large-scale changes for 3.3 In-Reply-To: References: <0377A8D3-E2AB-42B6-81C7-A060413F11A5@gmail.com> Message-ID: <4FA0D82B.1070103@hastings.org> On 05/01/2012 01:12 PM, Brett Cannon wrote: > That would be great! First thing is addressing Guido's concerns from > http://mail.python.org/pipermail/python-dev/2012-March/117515.html and > then handling any issues you found. Not sure if Larry was asking about > this out of curiosity or because he too wanted to help. Asking, that is, off-list. So your observation was kinda out of left field for the casual observer ;-) I was asking because I was interested in helping, but I haven't looked into it too much, and I'm not sure how much of a priority it is. It's clear that Yury has spent way more time with the issue. If he'd* like my help I'll try to lend it but I bet he's got it under control. /arry * Assuming "Yury" is a he; apologies if my shot in the dark was a miss. From martin at v.loewis.de Wed May 2 09:23:44 2012 From: martin at v.loewis.de (=?UTF-8?B?Ik1hcnRpbiB2LiBMw7Z3aXMi?=) Date: Wed, 02 May 2012 09:23:44 +0200 Subject: [Python-Dev] Another buildslave - Ubuntu again In-Reply-To: References: <4FA0CC4F.4070607@v.loewis.de> Message-ID: <4FA0E100.4030301@v.loewis.de> On 02.05.2012 08:07, Senthil Kumaran wrote: > On Wed, May 2, 2012 at 1:55 PM, "Martin v. L?wis" wrote: >> I'm not sure how useful it is to have a build slave which you can't >> commit to having for more than 3 months. So I'm -0 on adding this >> slave, but it is up to Antoine to decide. > > I am likely switch to places within 3 months, but I am hoping that > having a 24/7 connected system could provide some experience for > running a dedicated system in the longer run. You are talking about experience that you gain, right? Some of the build slaves have been connected for many years by now, so "we" (the buildbot admins) already have plenty experience, which can be summarized as "Unix good, Windows bad". I suggest that you can still gain the experience when you are able to provide a longer-term slave. You are then still free to drop out of this at any time, so you don't really need to commit to supporting this for years - but knowing that it likely is only for 3 months might be too much effort for too little gain. If you want to learn more about buildbot, I suggest that you also setup a master on your system. You will have to find one of the hg pollers as a change source, or additionally setup a local clone with a post-receive hook which pulls cpython every five minutes or so through a cron job, and posts changes to the local master. Regards, Martin From larry at hastings.org Wed May 2 10:43:32 2012 From: larry at hastings.org (Larry Hastings) Date: Wed, 02 May 2012 01:43:32 -0700 Subject: [Python-Dev] Does trunk still support any compilers that *don't* allow declaring variables after code? Message-ID: <4FA0F3B4.5070707@hastings.org> Right now the CPython trunk religiously declares all variables at the tops of scopes, before any code, because this is all C89 permits. Back in the 90s all the C compilers took a page out of the C++ playbook and independently, but nearly without exception, extended the language to allow you declaring new variables after code statements. This became an official part of the language with C99 back in 1999. It's now 2012. As I step out of my flying car onto the moving walkway that will glide me noiselessly into my platform sky dome... I can't help but think that we're a bit hidebound, slavishly devoting ourselves to C89. CPython 3.3 drops support for VMS, OS/2, and even Windows 2000. I realize we can't jump to C99 because of A Certain Compiler. (Its name rhymes with Bike Row Soft Frizz You All See Muss Muss.) But even that compiler added this extension in the early 90s. Do we officially support any C compilers that *don't* permit "intermingled variable declarations and code"? Do we *unofficially* support any? And if we do, what do we gain? Just itching to pull some local macro hijinx, is all, //arry/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From mark at hotpy.org Wed May 2 11:55:25 2012 From: mark at hotpy.org (Mark Shannon) Date: Wed, 02 May 2012 10:55:25 +0100 Subject: [Python-Dev] [RELEASED] Python 3.3.0 alpha 3 In-Reply-To: <4FA03CD1.7020605@python.org> References: <4FA03CD1.7020605@python.org> Message-ID: <4FA1048D.3010000@hotpy.org> Georg Brandl wrote: > On behalf of the Python development team, I'm happy to announce the > third alpha release of Python 3.3.0. > > This is a preview release, and its use is not recommended in > production settings. > > Python 3.3 includes a range of improvements of the 3.x series, as well > as easier porting between 2.x and 3.x. Major new features and changes > in the 3.3 release series are: > > * PEP 380, Syntax for Delegating to a Subgenerator ("yield from") > * PEP 393, Flexible String Representation (doing away with the > distinction between "wide" and "narrow" Unicode builds) > * PEP 409, Suppressing Exception Context > * PEP 3151, Reworking the OS and IO exception hierarchy > * A C implementation of the "decimal" module, with up to 80x speedup > for decimal-heavy applications > * The import system (__import__) is based on importlib by default > * The new "packaging" module, building upon the "distribute" and > "distutils2" projects and deprecating "distutils" > * The new "lzma" module with LZMA/XZ support > * PEP 3155, Qualified name for classes and functions > * PEP 414, explicit Unicode literals to help with porting > * PEP 418, extended platform-independent clocks in the "time" module > * The new "faulthandler" module that helps diagnosing crashes > * A "collections.ChainMap" class for linking mappings to a single unit > * Wrappers for many more POSIX functions in the "os" and "signal" > modules, as well as other useful functions such as "sendfile()" > * Hash randomization, introduced in earlier bugfix releases, is now > switched on by default > Don't forget PEP 412 ;) Rather than a long list of PEPs would it be better to split it into two parts? 1. language & library changes. The details are important here, so that the PEPs should probably be fairly prominent. 2. Performance enhancements People want to know how much faster 3.3 is or how less memory it uses. Who cares which PEP does what (apart from the authors)? Or maybe three parts? New features. Behavioural changes (i.e. bug fixes) Performance enhancements Cheers, Mark. From solipsis at pitrou.net Wed May 2 11:56:56 2012 From: solipsis at pitrou.net (Antoine Pitrou) Date: Wed, 2 May 2012 11:56:56 +0200 Subject: [Python-Dev] Does trunk still support any compilers that *don't* allow declaring variables after code? References: <4FA0F3B4.5070707@hastings.org> Message-ID: <20120502115656.05773139@pitrou.net> On Wed, 02 May 2012 01:43:32 -0700 Larry Hastings wrote: > > I realize we can't jump to C99 because of A Certain Compiler. (Its name > rhymes with Bike Row Soft Frizz You All See Muss Muss.) But even that > compiler added this extension in the early 90s. > > Do we officially support any C compilers that *don't* permit > "intermingled variable declarations and code"? Do we *unofficially* > support any? And if we do, what do we gain? Well, there's this one called MSVC, which we support quite officially. Regards Antoine. From ncoghlan at gmail.com Wed May 2 13:01:29 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 2 May 2012 21:01:29 +1000 Subject: [Python-Dev] [RELEASED] Python 3.3.0 alpha 3 In-Reply-To: <4FA1048D.3010000@hotpy.org> References: <4FA03CD1.7020605@python.org> <4FA1048D.3010000@hotpy.org> Message-ID: On Wed, May 2, 2012 at 7:55 PM, Mark Shannon wrote: > Or maybe three parts? > New features. > Behavioural changes (i.e. bug fixes) > Performance enhancements The release PEPs are mainly there for *our* benefit, not end users. For end users, it's the What's New document that matters. For performance numbers, the goal is to eventually have speed.python.org providing regular results, but there's a fair bit of work still involved in bringing that online with meaningful 3.x figures. Cheers, Nick. -- Nick Coghlan?? |?? ncoghlan at gmail.com?? |?? Brisbane, Australia From mark at hotpy.org Wed May 2 13:19:18 2012 From: mark at hotpy.org (Mark Shannon) Date: Wed, 02 May 2012 12:19:18 +0100 Subject: [Python-Dev] [RELEASED] Python 3.3.0 alpha 3 In-Reply-To: References: <4FA03CD1.7020605@python.org> <4FA1048D.3010000@hotpy.org> Message-ID: <4FA11836.8020106@hotpy.org> Nick Coghlan wrote: > On Wed, May 2, 2012 at 7:55 PM, Mark Shannon wrote: >> Or maybe three parts? >> New features. >> Behavioural changes (i.e. bug fixes) >> Performance enhancements > > The release PEPs are mainly there for *our* benefit, not end users. > > For end users, it's the What's New document that matters. For The What's New document also starts with a long list of PEPs. This seems to be the standard format as What's New for 3.2 follows the same layout. Perhaps adding an overview or highlights at the start would be a good idea. > performance numbers, the goal is to eventually have speed.python.org > providing regular results, but there's a fair bit of work still > involved in bringing that online with meaningful 3.x figures. Like some meaningful benchmarks for 3.x; there are very few :( Cheers, Mark. From anacrolix at gmail.com Wed May 2 15:37:35 2012 From: anacrolix at gmail.com (Matt Joiner) Date: Wed, 2 May 2012 21:37:35 +0800 Subject: [Python-Dev] Does trunk still support any compilers that *don't* allow declaring variables after code? In-Reply-To: <20120502115656.05773139@pitrou.net> References: <4FA0F3B4.5070707@hastings.org> <20120502115656.05773139@pitrou.net> Message-ID: On May 2, 2012 6:00 PM, "Antoine Pitrou" wrote: > > On Wed, 02 May 2012 01:43:32 -0700 > Larry Hastings wrote: > > > > I realize we can't jump to C99 because of A Certain Compiler. (Its name > > rhymes with Bike Row Soft Frizz You All See Muss Muss.) But even that > > compiler added this extension in the early 90s. > > > > Do we officially support any C compilers that *don't* permit > > "intermingled variable declarations and code"? Do we *unofficially* > > support any? And if we do, what do we gain? > > Well, there's this one called MSVC, which we support quite officially. Not sure if comic genius or can't rhyme. > > Regards > > Antoine. > > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/anacrolix%40gmail.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefan_ml at behnel.de Wed May 2 15:56:40 2012 From: stefan_ml at behnel.de (Stefan Behnel) Date: Wed, 02 May 2012 15:56:40 +0200 Subject: [Python-Dev] Does trunk still support any compilers that *don't* allow declaring variables after code? In-Reply-To: References: <4FA0F3B4.5070707@hastings.org> <20120502115656.05773139@pitrou.net> Message-ID: Matt Joiner, 02.05.2012 15:37: > On May 2, 2012 6:00 PM, "Antoine Pitrou" wrote: >> On Wed, 02 May 2012 01:43:32 -0700 >> Larry Hastings wrote: >>> >>> I realize we can't jump to C99 because of A Certain Compiler. (Its name >>> rhymes with Bike Row Soft Frizz You All See Muss Muss.) But even that >>> compiler added this extension in the early 90s. >>> >>> Do we officially support any C compilers that *don't* permit >>> "intermingled variable declarations and code"? Do we *unofficially* >>> support any? And if we do, what do we gain? >> >> Well, there's this one called MSVC, which we support quite officially. > > Not sure if comic genius or can't rhyme. I'm not sure if MSVC and MSVC++ are the same thing, but I surely remember reports by MSVC users only a few years ago that Cython generated C code contained a declaration after an executed code at some point, and that failed to compile for them. So, assuming that MSVC++ "added this extension in the early 90s" and didn't remove it in the meantime, they must be two different things. Stefan From rdmurray at bitdance.com Wed May 2 16:12:01 2012 From: rdmurray at bitdance.com (R. David Murray) Date: Wed, 02 May 2012 10:12:01 -0400 Subject: [Python-Dev] Does trunk still support any compilers that *don't* allow declaring variables after code? In-Reply-To: References: <4FA0F3B4.5070707@hastings.org> <20120502115656.05773139@pitrou.net> Message-ID: <20120502141201.DD3DA250147@webabinitio.net> On Wed, 02 May 2012 21:37:35 +0800, Matt Joiner wrote: > On May 2, 2012 6:00 PM, "Antoine Pitrou" wrote: > > > > On Wed, 02 May 2012 01:43:32 -0700 > > Larry Hastings wrote: > > > > > > I realize we can't jump to C99 because of A Certain Compiler. (Its name > > > rhymes with Bike Row Soft Frizz You All See Muss Muss.) But even that > > > compiler added this extension in the early 90s. > > > > > > Do we officially support any C compilers that *don't* permit > > > "intermingled variable declarations and code"? Do we *unofficially* > > > support any? And if we do, what do we gain? > > > > Well, there's this one called MSVC, which we support quite officially. > > Not sure if comic genius or can't rhyme. I had trouble with that rhyme, and I (unlike Antoine) am a native English speaker. --David From curt at hagenlocher.org Wed May 2 16:13:24 2012 From: curt at hagenlocher.org (Curt Hagenlocher) Date: Wed, 2 May 2012 07:13:24 -0700 Subject: [Python-Dev] Does trunk still support any compilers that *don't* allow declaring variables after code? In-Reply-To: References: <4FA0F3B4.5070707@hastings.org> <20120502115656.05773139@pitrou.net> Message-ID: On Wed, May 2, 2012 at 6:56 AM, Stefan Behnel wrote: > I'm not sure if MSVC and MSVC++ are the same thing, but I surely remember > reports by MSVC users only a few years ago that Cython generated C code > contained a declaration after an executed code at some point, and that > failed to compile for them. So, assuming that MSVC++ "added this extension > in the early 90s" and didn't remove it in the meantime, they must be two > different things. I believe you need to tell MSVC that it's a C++ source file by using "/Tp" in order to make this work. And of course, there would be other ramifications for doing that. -Curt -------------- next part -------------- An HTML attachment was scrubbed... URL: From carl at oddbird.net Wed May 2 16:16:44 2012 From: carl at oddbird.net (Carl Meyer) Date: Wed, 02 May 2012 08:16:44 -0600 Subject: [Python-Dev] outdated info on download pages for older versions Message-ID: <4FA141CC.8020105@oddbird.net> Hi all, Are the download pages for older Python versions supposed to be kept up to date at all? I just noticed that the 2.4.6 download page (http://www.python.org/download/releases/2.4.6/) says things like "Python 2.4 is now in security-fix-only mode" (whereas in fact it no longer gets even security fixes), and "Python 2.6 is the latest release of Python." While checking to see if there was a SIG that would be more appropriate for this question, I also noticed that if one clicks on Community | Mailing Lists in the left sidebar of python.org, there's a "Special Interest Groups" link under "Mailing Lists" which is a 404 (not to mention redundant, as there's also one parallel to "Mailing Lists" that works). (Please do let me know if there is a more appropriate forum for website issues/questions). Carl From solipsis at pitrou.net Wed May 2 16:54:25 2012 From: solipsis at pitrou.net (Antoine Pitrou) Date: Wed, 2 May 2012 16:54:25 +0200 Subject: [Python-Dev] Another buildslave - Ubuntu again References: <4FA0CC4F.4070607@v.loewis.de> Message-ID: <20120502165425.06898f77@pitrou.net> On Wed, 2 May 2012 14:07:09 +0800 Senthil Kumaran wrote: > On Wed, May 2, 2012 at 1:55 PM, "Martin v. L?wis" wrote: > > I'm not sure how useful it is to have a build slave which you can't > > commit to having for more than 3 months. So I'm -0 on adding this > > slave, but it is up to Antoine to decide. > > I am likely switch to places within 3 months, but I am hoping that > having a 24/7 connected system could provide some experience for > running a dedicated system in the longer run. What are the characteristics of your machine? We already have several Linux x86/x86-64 buildbots... That said, we could also toy with other build options if someone has a request about that. Regards Antoine. From solipsis at pitrou.net Wed May 2 16:55:51 2012 From: solipsis at pitrou.net (Antoine Pitrou) Date: Wed, 2 May 2012 16:55:51 +0200 Subject: [Python-Dev] Another buildslave - Ubuntu again References: Message-ID: <20120502165551.4da586d5@pitrou.net> On Wed, 2 May 2012 13:13:15 +1000 Chris Angelico wrote: > On Wed, May 2, 2012 at 1:09 PM, Senthil Kumaran wrote: > > Also, I think the instructions in the wiki could be improved. I was > > not able to su - buildbot after installing through package manager. I > > shall edit it once I have set it up and running. > > The page does say: """... create a new user "buildbot" if it doesn't > exist (your package manager might have done it for you)""", but it'd > be nice if it could clarify which are known to do it and which are > known not to, eg "(the Debian and Red Hat package managers will do > this for you)". Or is that too much of a moving target to be worth > trying to specify? That page would probably like a good cleanup. I don't even think creating an user is required - it's just good practice, and you probably want that user to have as few privileges as possible. Regards Antoine. From tjreedy at udel.edu Wed May 2 17:55:02 2012 From: tjreedy at udel.edu (Terry Reedy) Date: Wed, 02 May 2012 11:55:02 -0400 Subject: [Python-Dev] outdated info on download pages for older versions In-Reply-To: <4FA141CC.8020105@oddbird.net> References: <4FA141CC.8020105@oddbird.net> Message-ID: On 5/2/2012 10:16 AM, Carl Meyer wrote: > Hi all, > > Are the download pages for older Python versions supposed to be kept up > to date at all? I just noticed that the 2.4.6 download page > (http://www.python.org/download/releases/2.4.6/) says things like > "Python 2.4 is now in security-fix-only mode" (whereas in fact it no > longer gets even security fixes), and "Python 2.6 is the latest release > of Python." > > While checking to see if there was a SIG that would be more appropriate > for this question, I also noticed that if one clicks on Community | > Mailing Lists in the left sidebar of python.org, there's a "Special > Interest Groups" link under "Mailing Lists" which is a 404 (not to > mention redundant, as there's also one parallel to "Mailing Lists" that > works). > > (Please do let me know if there is a more appropriate forum for website > issues/questions). I would send the above to webmaster at python.org (should be at the bottom of pages). We develop CPython but do not directly manage the website. -- Terry Jan Reedy From senthil at uthcode.com Wed May 2 18:25:05 2012 From: senthil at uthcode.com (Senthil Kumaran) Date: Thu, 3 May 2012 00:25:05 +0800 Subject: [Python-Dev] Another buildslave - Ubuntu again In-Reply-To: <20120502165425.06898f77@pitrou.net> References: <4FA0CC4F.4070607@v.loewis.de> <20120502165425.06898f77@pitrou.net> Message-ID: On Wed, May 2, 2012 at 10:54 PM, Antoine Pitrou wrote: > What are the characteristics of your machine? We already have several > Linux x86/x86-64 buildbots... That said, we could also toy with other > build options if someone has a request about that. It is not very unique. It is Intel x86 (32 bit) and 1 GB ram. It is running Ubuntu Server edition. Yeah if additional build options (or additional software configuration options) or some alternative coverage could be thought off with current config itself, I could do that. Thanks, Senthil From fuzzyman at voidspace.org.uk Wed May 2 18:33:42 2012 From: fuzzyman at voidspace.org.uk (Michael Foord) Date: Wed, 2 May 2012 17:33:42 +0100 Subject: [Python-Dev] outdated info on download pages for older versions In-Reply-To: References: <4FA141CC.8020105@oddbird.net> Message-ID: <08471171-7B42-4915-B966-7DA3FB3108C8@voidspace.org.uk> On 2 May 2012, at 16:55, Terry Reedy wrote: > On 5/2/2012 10:16 AM, Carl Meyer wrote: >> Hi all, >> >> Are the download pages for older Python versions supposed to be kept up >> to date at all? I just noticed that the 2.4.6 download page >> (http://www.python.org/download/releases/2.4.6/) says things like >> "Python 2.4 is now in security-fix-only mode" (whereas in fact it no >> longer gets even security fixes), and "Python 2.6 is the latest release >> of Python." >> >> While checking to see if there was a SIG that would be more appropriate >> for this question, I also noticed that if one clicks on Community | >> Mailing Lists in the left sidebar of python.org, there's a "Special >> Interest Groups" link under "Mailing Lists" which is a 404 (not to >> mention redundant, as there's also one parallel to "Mailing Lists" that >> works). >> >> (Please do let me know if there is a more appropriate forum for website >> issues/questions). > > I would send the above to webmaster at python.org (should be at the bottom of pages). We develop CPython but do not directly manage the website. Not true. The download pages are administered by the release managers not the web team. For the record, the best way of contacting the web team (such as it is) is the pydotorg-www mailing list. There are precious few people (even fewer than there are in the web team...) responding to emails on the webmaster alias. :-) Michael > > -- > Terry Jan Reedy > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/fuzzyman%40voidspace.org.uk > -- http://www.voidspace.org.uk/ May you do good and not evil May you find forgiveness for yourself and forgive others May you share freely, never taking more than you give. -- the sqlite blessing http://www.sqlite.org/different.html From solipsis at pitrou.net Wed May 2 18:46:17 2012 From: solipsis at pitrou.net (Antoine Pitrou) Date: Wed, 2 May 2012 18:46:17 +0200 Subject: [Python-Dev] Another buildslave - Ubuntu again In-Reply-To: References: <4FA0CC4F.4070607@v.loewis.de> <20120502165425.06898f77@pitrou.net> Message-ID: <20120502184617.33243626@pitrou.net> On Thu, 3 May 2012 00:25:05 +0800 Senthil Kumaran wrote: > On Wed, May 2, 2012 at 10:54 PM, Antoine Pitrou wrote: > > > What are the characteristics of your machine? We already have several > > Linux x86/x86-64 buildbots... That said, we could also toy with other > > build options if someone has a request about that. > > It is not very unique. It is Intel x86 (32 bit) and 1 GB ram. It is > running Ubuntu Server edition. Yeah if additional build options (or > additional software configuration options) or some alternative > coverage could be thought off with current config itself, I could do > that. Daily code coverage builds would be nice, but that's probably beyond what the current infrastructure can offer. It would be nice if someone wants to investigate that. Regards Antoine. From ezio.melotti at gmail.com Wed May 2 19:06:36 2012 From: ezio.melotti at gmail.com (Ezio Melotti) Date: Wed, 02 May 2012 20:06:36 +0300 Subject: [Python-Dev] outdated info on download pages for older versions In-Reply-To: <08471171-7B42-4915-B966-7DA3FB3108C8@voidspace.org.uk> References: <4FA141CC.8020105@oddbird.net> <08471171-7B42-4915-B966-7DA3FB3108C8@voidspace.org.uk> Message-ID: <4FA1699C.3090303@gmail.com> On 02/05/2012 19.33, Michael Foord wrote: > On 2 May 2012, at 16:55, Terry Reedy wrote: >> I would send the above to webmaster at python.org (should be at the bottom of pages). We develop CPython but do not directly manage the website. > Not true. The download pages are administered by the release managers not the web team. > > For the record, the best way of contacting the web team (such as it is) is the pydotorg-www mailing list. There are precious few people (even fewer than there are in the web team...) responding to emails on the webmaster alias. :-) > > Michael I'm pretty sure that several core devs are able (and possibly willing) to help out with the website, but AFAIU they have to request commit right for a separate repo where the website lives or report issues via mail. Is there any practical reason why the repo for the website is not on hg with all the other repos (cpython/devguide/peps/etc.) except that no one ported it yet? Best Regards, Ezio Melotti From brian at python.org Wed May 2 19:19:55 2012 From: brian at python.org (Brian Curtin) Date: Wed, 2 May 2012 12:19:55 -0500 Subject: [Python-Dev] outdated info on download pages for older versions In-Reply-To: <4FA1699C.3090303@gmail.com> References: <4FA141CC.8020105@oddbird.net> <08471171-7B42-4915-B966-7DA3FB3108C8@voidspace.org.uk> <4FA1699C.3090303@gmail.com> Message-ID: On Wed, May 2, 2012 at 12:06 PM, Ezio Melotti wrote: > On 02/05/2012 19.33, Michael Foord wrote: >> >> On 2 May 2012, at 16:55, Terry Reedy wrote: >>> >>> I would send the above to webmaster at python.org (should be at the bottom >>> of pages). We develop CPython but do not directly manage the website. >> >> Not true. The download pages are administered by the release managers not >> the web team. >> >> For the record, the best way of contacting the web team (such as it is) is >> the pydotorg-www mailing list. There are precious few people (even fewer >> than there are in the web team...) responding to emails on the webmaster >> alias. :-) >> >> Michael > > > I'm pretty sure that several core devs are able (and possibly willing) to > help out with the website, but AFAIU they have to request commit right for a > separate repo where the website lives or report issues via mail. ?Is there > any practical reason why the repo for the website is not on hg with all the > other repos (cpython/devguide/peps/etc.) except that no one ported it yet? I don't know if there's a practical reason, but given that the website will eventually be changing anyway, I think it's a waste of time to port it to hg. You'd also have to port the build chain to hg, since it rebuilds the site when svn is updated. Then by the time you're done, there's zero net gain and it all gets thrown away. From martin at v.loewis.de Wed May 2 20:33:33 2012 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Wed, 02 May 2012 20:33:33 +0200 Subject: [Python-Dev] outdated info on download pages for older versions In-Reply-To: References: <4FA141CC.8020105@oddbird.net> Message-ID: <4FA17DFD.7060509@v.loewis.de> On 02.05.2012 17:55, Terry Reedy wrote: > On 5/2/2012 10:16 AM, Carl Meyer wrote: >> Hi all, >> >> Are the download pages for older Python versions supposed to be kept up >> to date at all? I just noticed that the 2.4.6 download page >> (http://www.python.org/download/releases/2.4.6/) says things like >> "Python 2.4 is now in security-fix-only mode" (whereas in fact it no >> longer gets even security fixes), and "Python 2.6 is the latest release >> of Python." >> >> While checking to see if there was a SIG that would be more appropriate >> for this question, I also noticed that if one clicks on Community | >> Mailing Lists in the left sidebar of python.org, there's a "Special >> Interest Groups" link under "Mailing Lists" which is a 404 (not to >> mention redundant, as there's also one parallel to "Mailing Lists" that >> works). >> >> (Please do let me know if there is a more appropriate forum for website >> issues/questions). > > I would send the above to webmaster at python.org (should be at the bottom > of pages). Please don't (unless you want your message ignored). Regards, Martin From martin at v.loewis.de Wed May 2 20:36:17 2012 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Wed, 02 May 2012 20:36:17 +0200 Subject: [Python-Dev] outdated info on download pages for older versions In-Reply-To: <4FA141CC.8020105@oddbird.net> References: <4FA141CC.8020105@oddbird.net> Message-ID: <4FA17EA1.1090401@v.loewis.de> > Are the download pages for older Python versions supposed to be kept up > to date at all? I occasionally update them when I see issues with them. Your specific issue, I missed so far. If you would like to make this kind of update, please let me know. Regards, Martin From yselivanov.ml at gmail.com Wed May 2 23:14:55 2012 From: yselivanov.ml at gmail.com (Yury Selivanov) Date: Wed, 2 May 2012 17:14:55 -0400 Subject: [Python-Dev] Open PEPs and large-scale changes for 3.3 In-Reply-To: References: <0377A8D3-E2AB-42B6-81C7-A060413F11A5@gmail.com> Message-ID: <818889E3-D4A4-4450-BDA7-CFC093095FAC@gmail.com> On 2012-05-01, at 4:12 PM, Brett Cannon wrote: > > That would be great! First thing is addressing Guido's concerns from http://mail.python.org/pipermail/python-dev/2012-March/117515.html and then handling any issues you found. Not sure if Larry was asking about this out of curiosity or because he too wanted to help. Great! I'll start looking into this on the weekend. - Yury From yselivanov.ml at gmail.com Wed May 2 23:17:17 2012 From: yselivanov.ml at gmail.com (Yury Selivanov) Date: Wed, 2 May 2012 17:17:17 -0400 Subject: [Python-Dev] Open PEPs and large-scale changes for 3.3 In-Reply-To: <4FA0D82B.1070103@hastings.org> References: <0377A8D3-E2AB-42B6-81C7-A060413F11A5@gmail.com> <4FA0D82B.1070103@hastings.org> Message-ID: <2E751B67-6209-4DBA-8D67-6A2B5607288A@gmail.com> On 2012-05-02, at 2:46 AM, Larry Hastings wrote: > On 05/01/2012 01:12 PM, Brett Cannon wrote: >> That would be great! First thing is addressing Guido's concerns from http://mail.python.org/pipermail/python-dev/2012-March/117515.html and then handling any issues you found. Not sure if Larry was asking about this out of curiosity or because he too wanted to help. > > Asking, that is, off-list. So your observation was kinda out of left field for the casual observer ;-) > > I was asking because I was interested in helping, but I haven't looked into it too much, and I'm not sure how much of a priority it is. It's clear that Yury has spent way more time with the issue. If he'd* like my help I'll try to lend it but I bet he's got it under control. Let's work on this together. I'll revisit the PEP and Guido's comments, and will get back to you and Brett with my ideas. - Yury From ncoghlan at gmail.com Thu May 3 02:53:38 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 3 May 2012 10:53:38 +1000 Subject: [Python-Dev] [Python-checkins] cpython: Fix PyUnicode_Substring() for start >= length and start > end In-Reply-To: References: Message-ID: On Thu, May 3, 2012 at 10:33 AM, victor.stinner wrote: > + ? ?if (start >= length || end < start) { > + ? ? ? ?assert(end == length); > + ? ? ? ?return PyUnicode_New(0, 0); > + ? ?} That assert doesn't look right. Consider: "abc"[4:1] Unless I'm missing something, "end" will be 1, but "length" will be 3 Cheers, Nick. -- Nick Coghlan?? |?? ncoghlan at gmail.com?? |?? Brisbane, Australia From victor.stinner at gmail.com Thu May 3 03:38:33 2012 From: victor.stinner at gmail.com (Victor Stinner) Date: Thu, 3 May 2012 03:38:33 +0200 Subject: [Python-Dev] [Python-checkins] cpython: Fix PyUnicode_Substring() for start >= length and start > end In-Reply-To: References: Message-ID: >> + ? ?if (start >= length || end < start) { >> + ? ? ? ?assert(end == length); >> + ? ? ? ?return PyUnicode_New(0, 0); >> + ? ?} > > That assert doesn't look right. Oh, you're right. I added it for the first case: start>=length. But the assertion is really useless, I removed it. Thanks! Victor From vinay_sajip at yahoo.co.uk Thu May 3 14:50:31 2012 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Thu, 3 May 2012 12:50:31 +0000 (UTC) Subject: [Python-Dev] CRLF line endings Message-ID: To facilitate review of the PEP 405 reference implementation, I want to update my sandbox repository on hg.python.org with the relevant changes, so I can create a patch for Rietveld. I've added some files with CRLF line endings: Lib/venv/scripts/nt/Activate.ps1 Lib/venv/scripts/nt/Dectivate.ps1 Lib/venv/scripts/nt/activate.bat Although these are text files, the CRLF line endings are needed because otherwise, the files won't be presented correctly on Windows, e.g. in Notepad. I'd like to update the .hgeol file to add these entries, as otherwise the commit hook rejects them. Can anyone please let me know if they object? Otherwise I'll go ahead and add them to .hgeol in the next hour or so. Regards, Vinay Sajip From fuzzyman at voidspace.org.uk Thu May 3 16:23:44 2012 From: fuzzyman at voidspace.org.uk (Michael Foord) Date: Thu, 3 May 2012 15:23:44 +0100 Subject: [Python-Dev] outdated info on download pages for older versions In-Reply-To: <4FA1699C.3090303@gmail.com> References: <4FA141CC.8020105@oddbird.net> <08471171-7B42-4915-B966-7DA3FB3108C8@voidspace.org.uk> <4FA1699C.3090303@gmail.com> Message-ID: On 2 May 2012, at 18:06, Ezio Melotti wrote: > On 02/05/2012 19.33, Michael Foord wrote: >> On 2 May 2012, at 16:55, Terry Reedy wrote: >>> I would send the above to webmaster at python.org (should be at the bottom of pages). We develop CPython but do not directly manage the website. >> Not true. The download pages are administered by the release managers not the web team. >> >> For the record, the best way of contacting the web team (such as it is) is the pydotorg-www mailing list. There are precious few people (even fewer than there are in the web team...) responding to emails on the webmaster alias. :-) >> >> Michael > > I'm pretty sure that several core devs are able (and possibly willing) to help out with the website, but AFAIU they have to request commit right for a separate repo where the website lives or report issues via mail. Is there any practical reason why the repo for the website is not on hg with all the other repos (cpython/devguide/peps/etc.) except that no one ported it yet? Anyone willing to assist with website maintenance (even occasional typo fixes) should email their ssh keys to the pydotorg at python.org mailing list (no need to join) and send an intro to the pydotorg-www mailing list (preferable to join). Michael > > Best Regards, > Ezio Melotti > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/fuzzyman%40voidspace.org.uk > -- http://www.voidspace.org.uk/ May you do good and not evil May you find forgiveness for yourself and forgive others May you share freely, never taking more than you give. -- the sqlite blessing http://www.sqlite.org/different.html From rosuav at gmail.com Thu May 3 16:41:18 2012 From: rosuav at gmail.com (Chris Angelico) Date: Fri, 4 May 2012 00:41:18 +1000 Subject: [Python-Dev] CRLF line endings In-Reply-To: References: Message-ID: On Thu, May 3, 2012 at 10:50 PM, Vinay Sajip wrote: > Although these are text files, the CRLF line endings are needed because > otherwise, the files won't be presented correctly on Windows, e.g. in Notepad. Not all Windows editors choke on \n line endings; when I'm on Windows and run into one, I open it in Wordpad (or, if I have one, a dedicated programming editor like SciTE or the Open Watcom editor). AFAIK only Notepad (of standard Windows utilities) has trouble. Not sure if that makes a difference or not. Chris Angelico From vinay_sajip at yahoo.co.uk Thu May 3 17:28:02 2012 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Thu, 3 May 2012 15:28:02 +0000 (UTC) Subject: [Python-Dev] CRLF line endings References: Message-ID: Chris Angelico gmail.com> writes: > Not all Windows editors choke on \n line endings; when I'm on Windows > and run into one, I open it in Wordpad (or, if I have one, a dedicated > programming editor like SciTE or the Open Watcom editor). AFAIK only > Notepad (of standard Windows utilities) has trouble. > > Not sure if that makes a difference or not. It's only really an issue for new / inexperienced users, I agree. Since these files are installed only on Windows systems, there's no reason for them not to have the native line endings. Regards, Vinay Sajip From rosuav at gmail.com Thu May 3 17:30:25 2012 From: rosuav at gmail.com (Chris Angelico) Date: Fri, 4 May 2012 01:30:25 +1000 Subject: [Python-Dev] CRLF line endings In-Reply-To: References: Message-ID: On Fri, May 4, 2012 at 1:28 AM, Vinay Sajip wrote: > It's only really an issue for new / inexperienced users, I agree. Since these > files are installed only on Windows systems, there's no reason for them not to > have the native line endings. Then sure, doesn't make a lot of difference that it's only Notepad. Somebody needs to rewrite that ancient editor and give Windows a better default... ChrisA From tjreedy at udel.edu Thu May 3 18:13:31 2012 From: tjreedy at udel.edu (Terry Reedy) Date: Thu, 03 May 2012 12:13:31 -0400 Subject: [Python-Dev] [Python-checkins] cpython: Issue #14687: str%tuple now uses an optimistic "unicode writer" instead of an In-Reply-To: References: Message-ID: <4FA2AEAB.3060700@udel.edu> On 5/3/2012 7:16 AM, victor.stinner wrote: > http://hg.python.org/cpython/rev/f1db931b93d3 > changeset: 76730:f1db931b93d3 > user: Victor Stinner > date: Thu May 03 13:10:40 2012 +0200 > summary: > Issue #14687: str%tuple now uses an optimistic "unicode writer" instead of an > accumulator. Directly write characters into the output (don't use a temporary > list): resize and widen the string on demand. I am curious whether these optimizations for str % tuple get applied to equivalent str.format(*tuple) calls or if you plan to make them do so. It seems to me that there could be one internal function that does the concatenation, with lengthening and resizing, of literal and formatted substrings, for both interfaces. tjr From martin at v.loewis.de Thu May 3 23:00:39 2012 From: martin at v.loewis.de (martin at v.loewis.de) Date: Thu, 03 May 2012 23:00:39 +0200 Subject: [Python-Dev] CRLF line endings In-Reply-To: References: Message-ID: <20120503230039.Horde.7_geZlNNcXdPovH35U1kX5A@webmail.df.eu> Zitat von Chris Angelico : > On Thu, May 3, 2012 at 10:50 PM, Vinay Sajip wrote: >> Although these are text files, the CRLF line endings are needed because >> otherwise, the files won't be presented correctly on Windows, e.g. >> in Notepad. > > Not all Windows editors choke on \n line endings; when I'm on Windows > and run into one, I open it in Wordpad (or, if I have one, a dedicated > programming editor like SciTE or the Open Watcom editor). AFAIK only > Notepad (of standard Windows utilities) has trouble. > > Not sure if that makes a difference or not. I think that .bat files strictly *have* to have CRLF line endings. Not sure about PowerShell, though. In any case, having CRLF for these files sounds good to me. Regards, Martin From victor.stinner at gmail.com Fri May 4 00:12:06 2012 From: victor.stinner at gmail.com (Victor Stinner) Date: Fri, 4 May 2012 00:12:06 +0200 Subject: [Python-Dev] [Python-checkins] cpython: Issue #14687: str%tuple now uses an optimistic "unicode writer" instead of an In-Reply-To: <4FA2AEAB.3060700@udel.edu> References: <4FA2AEAB.3060700@udel.edu> Message-ID: >> http://hg.python.org/cpython/rev/f1db931b93d3 >> changeset: ? 76730:f1db931b93d3 >> user: ? ? ? ?Victor Stinner >> date: ? ? ? ?Thu May 03 13:10:40 2012 +0200 >> summary: >> ? Issue #14687: str%tuple now uses an optimistic "unicode writer" instead >> of an >> accumulator. Directly write characters into the output (don't use a >> temporary >> list): resize and widen the string on demand. > > I am curious whether these optimizations for str % tuple get applied to > equivalent str.format(*tuple) calls or if you plan to make them do so. It > seems to me that there could be one internal function that does the > concatenation, with lengthening and resizing, of literal and formatted > substrings, for both interfaces. I just wrote a patch for str.format(). http://bugs.python.org/issue14716 The speed up is between 0% and 27%. Victor From benjamin at python.org Fri May 4 00:14:08 2012 From: benjamin at python.org (Benjamin Peterson) Date: Thu, 3 May 2012 18:14:08 -0400 Subject: [Python-Dev] [Python-checkins] cpython: unicode_writer: add finish() method and assertions to write_str() method In-Reply-To: References: Message-ID: 2012/5/3 victor.stinner : > ?Py_LOCAL_INLINE(void) Do these have to be marked inline? -- Regards, Benjamin From victor.stinner at gmail.com Fri May 4 01:24:13 2012 From: victor.stinner at gmail.com (Victor Stinner) Date: Fri, 4 May 2012 01:24:13 +0200 Subject: [Python-Dev] [Python-checkins] cpython: unicode_writer: add finish() method and assertions to write_str() method In-Reply-To: References: Message-ID: >> ?Py_LOCAL_INLINE(void) > > Do these have to be marked inline? Functions used in loops, yes: the inline keyword *does* impact performances (5% slower). I removed the keyword for the other unicode_writer methods. Victor From victor.stinner at gmail.com Fri May 4 01:45:15 2012 From: victor.stinner at gmail.com (Victor Stinner) Date: Fri, 4 May 2012 01:45:15 +0200 Subject: [Python-Dev] Optimize Unicode strings in Python 3.3 Message-ID: Hi, Different people are working on improving performances of Unicode strings in Python 3.3. This Python version is very different from Python 3.2 because of the PEP 393, and it is still unclear to me what is the best way to create a new Unicode string. There are different approachs: * Use the legacy (Py_UNICODE) API, PyUnicode_READY() converts the result to the canonical form. CJK codecs are still using this API. * Use a Py_UCS4 buffer and then convert to the canonical form (ASCII, UCS1 or UCS2). Approach taken by io.StringIO. io.StringIO is not only used to write, but also to read and so a Py_UCS4 buffer is a good compromise. * PyAccu API: optimized version of chunks=[]; for ...: ... chunks.append(text); return ''.join(chunks). * Two steps: compute the length and maximum character of the output string, allocate the output string and then write characters. str%args was using it. * Optimistic approach. Start with a ASCII buffer, enlarge and widen (to UCS2 and then UCS4) the buffer when new characters are written. Approach used by the UTF-8 decoder and by str%args since today. The optimistic approach uses realloc() to resize the string. It is faster than the PyAccu approach (at least for short ASCII strings), maybe because it avoids the creating of temporary short strings. realloc() looks to be efficient on Linux and Windows (at least Seven). Various notes: * PyUnicode_READ() is slower than reading a Py_UNICODE array. * Some decoders unroll the main loop to process 4 or 8 bytes (32 or 64 bits CPU) at each step. I am interested if you know other tricks to optimize Unicode strings in Python, or if you are interested to work on this topic. There are open issues related to optimizing Unicode: #11313: Speed up default encode()/decode() #12807: Optimization/refactoring for {bytearray, bytes, unicode}.strip() #14419: Faster ascii decoding #14624: Faster utf-16 decoder #14625: Faster utf-32 decoder #14654: More fast utf-8 decoding #14716: Use unicode_writer API for str.format() Victor From v+python at g.nevcal.com Fri May 4 01:46:25 2012 From: v+python at g.nevcal.com (Glenn Linderman) Date: Thu, 03 May 2012 16:46:25 -0700 Subject: [Python-Dev] CRLF line endings In-Reply-To: <20120503230039.Horde.7_geZlNNcXdPovH35U1kX5A@webmail.df.eu> References: <20120503230039.Horde.7_geZlNNcXdPovH35U1kX5A@webmail.df.eu> Message-ID: <4FA318D1.7050508@g.nevcal.com> On 5/3/2012 2:00 PM, martin at v.loewis.de wrote: > I think that .bat files strictly *have* to have CRLF line endings. Nope. Both .bat and .cmd work fine with LF only in Win7 (and IIRC, in XP as well, but I just tested Win7) -------------- next part -------------- An HTML attachment was scrubbed... URL: From victor.stinner at gmail.com Fri May 4 01:47:36 2012 From: victor.stinner at gmail.com (Victor Stinner) Date: Fri, 4 May 2012 01:47:36 +0200 Subject: [Python-Dev] time.clock_info() field names In-Reply-To: <4f9de5a8.e89c320a.4321.2854@mx.google.com> References: <4f9de5a8.e89c320a.4321.2854@mx.google.com> Message-ID: > To me, "adjusted" and "is_adjusted" both imply that an adjustment > has already been made; "adjustable" only implies that it is possible. The documentation is: "True if the clock can be adjusted (e.g. by a NTP daemon), False otherwise." I prefer "adjustable", because no OS tell us if the clock has an ajustement or not... except Windows: see GetSystemTimeAdjustment(). http://msdn.microsoft.com/en-us/library/windows/desktop/ms724394%28v=vs.85%29.aspx I propose to rename is_adjusted (which is now called adjusted) to adjustable, and not use GetSystemTimeAdjustment() on Windows but hardcode the value to True for the system clock, False for other functions (GetTick, QueryPerformanceCounter, ...). Victor From cs at zip.com.au Fri May 4 02:12:37 2012 From: cs at zip.com.au (Cameron Simpson) Date: Fri, 4 May 2012 10:12:37 +1000 Subject: [Python-Dev] time.clock_info() field names In-Reply-To: References: Message-ID: <20120504001237.GA8209@cskk.homeip.net> On 04May2012 01:47, Victor Stinner wrote: | I prefer "adjustable", because no OS tell us if the clock has an | ajustement or not... except Windows: see GetSystemTimeAdjustment(). | http://msdn.microsoft.com/en-us/library/windows/desktop/ms724394%28v=vs.85%29.aspx | | I propose to rename is_adjusted (which is now called adjusted) to | adjustable, I'm -1 on that. To my mind "adjustable" suggests that the caller can adjust the clock, while "adjusted" suggests that the clock may be adjusted by a mechanism outside the caller's hands. That latter is the meaning in the context of the PEP. Cheers, -- Cameron Simpson DoD#743 http://www.cskk.ezoshosting.com/cs/ I'm not making any of this up you know. - Anna Russell From victor.stinner at gmail.com Fri May 4 02:21:44 2012 From: victor.stinner at gmail.com (Victor Stinner) Date: Fri, 4 May 2012 02:21:44 +0200 Subject: [Python-Dev] time.clock_info() field names In-Reply-To: <20120504001237.GA8209@cskk.homeip.net> References: <20120504001237.GA8209@cskk.homeip.net> Message-ID: > I'm -1 on that. To my mind "adjustable" suggests that the caller can > adjust the clock, while "adjusted" suggests that the clock may be adjusted > by a mechanism outside the caller's hands. That latter is the meaning > in the context of the PEP. Anyway, the implementation and/or the documentation is buggy and should be fixed (especially the Windows case). Victor From ncoghlan at gmail.com Fri May 4 02:37:42 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 4 May 2012 10:37:42 +1000 Subject: [Python-Dev] CRLF line endings In-Reply-To: <20120503230039.Horde.7_geZlNNcXdPovH35U1kX5A@webmail.df.eu> References: <20120503230039.Horde.7_geZlNNcXdPovH35U1kX5A@webmail.df.eu> Message-ID: On Fri, May 4, 2012 at 7:00 AM, wrote: > In any case, having CRLF for these files sounds good to me. Right. While Windows has been getting much better at coping with LF only line endings over the years, being able to explicitly flag files for CRLF endings is the entire reason we held out for the EOL extension before making the switch to Mercurial. Cheers, Nick. -- Nick Coghlan?? |?? ncoghlan at gmail.com?? |?? Brisbane, Australia From ncoghlan at gmail.com Fri May 4 02:44:54 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 4 May 2012 10:44:54 +1000 Subject: [Python-Dev] time.clock_info() field names In-Reply-To: <20120504001237.GA8209@cskk.homeip.net> References: <20120504001237.GA8209@cskk.homeip.net> Message-ID: On Fri, May 4, 2012 at 10:12 AM, Cameron Simpson wrote: > On 04May2012 01:47, Victor Stinner wrote: > | I prefer "adjustable", because no OS tell us if the clock has an > | ajustement or not... except Windows: see GetSystemTimeAdjustment(). > | http://msdn.microsoft.com/en-us/library/windows/desktop/ms724394%28v=vs.85%29.aspx > | > | I propose to rename is_adjusted (which is now called adjusted) to > | adjustable, > > I'm -1 on that. To my mind "adjustable" suggests that the caller can > adjust the clock, while "adjusted" suggests that the clock may be adjusted > by a mechanism outside the caller's hands. That latter is the meaning > in the context of the PEP. +1 The connotations of "adjusted" and "adjustable" are slightly different and, in this case, "adjusted" is a better fit. The fact that "adjusted" may be misinterpreted as "this clock has been adjusted in the past" (incorrectly leaving out the "and/or may be adjusted in the future" part) is still closer to the mark than the likely misinterpretation of "adjustable" as meaning "can be adjusted directly by the application" (which is simply false, unless the application starts tinkering with the relevant platform specific time configuration interfaces, which aren't exposed by the standard library). Regards, Nick. -- Nick Coghlan?? |?? ncoghlan at gmail.com?? |?? Brisbane, Australia From martin at v.loewis.de Fri May 4 02:52:46 2012 From: martin at v.loewis.de (martin at v.loewis.de) Date: Fri, 04 May 2012 02:52:46 +0200 Subject: [Python-Dev] Optimize Unicode strings in Python 3.3 In-Reply-To: References: Message-ID: <20120504025246.Horde.agNiONjz9kRPoyhetLvXSYA@webmail.df.eu> > Various notes: > * PyUnicode_READ() is slower than reading a Py_UNICODE array. > * Some decoders unroll the main loop to process 4 or 8 bytes (32 or > 64 bits CPU) at each step. > > I am interested if you know other tricks to optimize Unicode strings > in Python, or if you are interested to work on this topic. Beyond creation, the most frequent approach is to specialize loops for all three possible width, allowing the compiler to hard-code the element size. This brings it back in performance to the speed of accessing a Py_UNICODE array (or faster for 1-byte strings). A possible micro-optimization might be to use pointer arithmetic instead of indexing. However, I would expect that compilers will already convert a counting loop into pointer arithmetic if the index is only ever used for array access. A source of slow-down appears to be widening copy operations. I wonder whether microprocessors are able to do this faster than what the compiler generates out of a naive copying loop. Another potential area for further optimization is to better pass-through PyObject*. Some APIs still use char* or Py_UNICODE*, when the caller actually holds a PyObject*, and the callee ultimate recreates an object out of the pointers being passed. Some people (hi Larry) still think that using a rope representation for string concatenation might improve things, see #1569040. Regards, Martin From benjamin at python.org Fri May 4 07:07:04 2012 From: benjamin at python.org (Benjamin Peterson) Date: Fri, 4 May 2012 01:07:04 -0400 Subject: [Python-Dev] [Python-checkins] cpython: Issue #14127: Add ns= parameter to utime, futimes, and lutimes. In-Reply-To: References: Message-ID: 2012/5/3 larry.hastings : > diff --git a/Modules/posixmodule.c b/Modules/posixmodule.c > --- a/Modules/posixmodule.c > +++ b/Modules/posixmodule.c > @@ -3572,28 +3572,194 @@ > ?#endif /* HAVE_UNAME */ > > > +static int > +split_py_long_to_s_and_ns(PyObject *py_long, time_t *s, long *ns) > +{ > + ? ?int result = 0; > + ? ?PyObject *divmod; > + ? ?divmod = PyNumber_Divmod(py_long, billion); > + ? ?if (!divmod) > + ? ? ? ?goto exit; > + ? ?*s = _PyLong_AsTime_t(PyTuple_GET_ITEM(divmod, 0)); > + ? ?if ((*s == -1) && PyErr_Occurred()) > + ? ? ? ?goto exit; > + ? ?*ns = PyLong_AsLong(PyTuple_GET_ITEM(divmod, 1)); > + ? ?if ((*s == -1) && PyErr_Occurred()) > + ? ? ? ?goto exit; > + > + ? ?result = 1; > +exit: > + ? ?Py_XDECREF(divmod); > + ? ?return result; > +} > + > + > +typedef int (*parameter_converter_t)(PyObject *, void *); > + > +typedef struct { > + ? ?/* input only */ > + ? ?char path_format; > + ? ?parameter_converter_t converter; > + ? ?char *function_name; > + ? ?char *first_argument_name; > + ? ?PyObject *args; > + ? ?PyObject *kwargs; > + > + ? ?/* input/output */ > + ? ?PyObject **path; > + > + ? ?/* output only */ > + ? ?int now; > + ? ?time_t atime_s; > + ? ?long ? atime_ns; > + ? ?time_t mtime_s; > + ? ?long ? mtime_ns; > +} utime_arguments; > + > +#define DECLARE_UA(ua, fname) \ > + ? ?utime_arguments ua; \ > + ? ?memset(&ua, 0, sizeof(ua)); \ > + ? ?ua.function_name = fname; \ > + ? ?ua.args = args; \ > + ? ?ua.kwargs = kwargs; \ > + ? ?ua.first_argument_name = "path"; \ > + > +/* UA_TO_FILETIME doesn't declare atime and mtime for you */ > +#define UA_TO_FILETIME(ua, atime, mtime) \ > + ? ?time_t_to_FILE_TIME(ua.atime_s, ua.atime_ns, &atime); \ > + ? ?time_t_to_FILE_TIME(ua.mtime_s, ua.mtime_ns, &mtime) > + > +/* the rest of these macros declare the output variable for you */ > +#define UA_TO_TIMESPEC(ua, ts) \ > + ? ?struct timespec ts[2]; \ > + ? ?ts[0].tv_sec = ua.atime_s; \ > + ? ?ts[0].tv_nsec = ua.atime_ns; \ > + ? ?ts[1].tv_sec = ua.mtime_s; \ > + ? ?ts[1].tv_nsec = ua.mtime_ns > + > +#define UA_TO_TIMEVAL(ua, tv) \ > + ? ?struct timeval tv[2]; \ > + ? ?tv[0].tv_sec = ua.atime_s; \ > + ? ?tv[0].tv_usec = ua.atime_ns / 1000; \ > + ? ?tv[1].tv_sec = ua.mtime_s; \ > + ? ?tv[1].tv_usec = ua.mtime_ns / 1000 > + > +#define UA_TO_UTIMBUF(ua, u) \ > + ? ?struct utimbuf u; \ > + ? ?utimbuf.actime = ua.atime_s; \ > + ? ?utimbuf.modtime = ua.mtime_s > + > +#define UA_TO_TIME_T(ua, timet) \ > + ? ?time_t timet[2]; \ > + ? ?timet[0] = ua.atime_s; \ > + ? ?timet[1] = ua.mtime_s > + > + > +/* > + * utime_read_time_arguments() processes arguments for the utime > + * family of functions. > + * returns zero on failure. > + */ > +static int > +utime_read_time_arguments(utime_arguments *ua) > +{ > + ? ?PyObject *times = NULL; > + ? ?PyObject *ns = NULL; > + ? ?char format[24]; > + ? ?char *kwlist[4]; > + ? ?char **kw = kwlist; > + ? ?int return_value; > + > + ? ?*kw++ = ua->first_argument_name; > + ? ?*kw++ = "times"; > + ? ?*kw++ = "ns"; > + ? ?*kw = NULL; > + > + ? ?sprintf(format, "%c%s|O$O:%s", > + ? ? ? ? ? ?ua->path_format, > + ? ? ? ? ? ?ua->converter ? "&" : "", > + ? ? ? ? ? ?ua->function_name); > + > + ? ?if (ua->converter) > + ? ? ? ?return_value = PyArg_ParseTupleAndKeywords(ua->args, ua->kwargs, > + ? ? ? ? ? ?format, kwlist, ua->converter, ua->path, ×, &ns); > + ? ?else > + ? ? ? ?return_value = PyArg_ParseTupleAndKeywords(ua->args, ua->kwargs, > + ? ? ? ? ? ?format, kwlist, ua->path, ×, &ns); > + > + ? ?if (!return_value) > + ? ? ? ?return 0; > + > + ? ?if (times && ns) { > + ? ? ? ?PyErr_Format(PyExc_RuntimeError, Why not a ValueError or TypeError? > + ? ? ? ? ? ? ? ? ? ? "%s: you may specify either 'times'" > + ? ? ? ? ? ? ? ? ? ? " or 'ns' but not both", > + ? ? ? ? ? ? ? ? ? ? ua->function_name); > + ? ? ? ?return 0; > + ? ?} > + > + ? ?if (times && (times != Py_None)) { Conditions in parenthesis like this is not style. > + ? ? ? ?if (!PyTuple_CheckExact(times) || (PyTuple_Size(times) != 2)) { > + ? ? ? ? ? ?PyErr_Format(PyExc_TypeError, > + ? ? ? ? ? ? ? ? ? ? ? ? "%s: 'time' must be either" > + ? ? ? ? ? ? ? ? ? ? ? ? " a valid tuple of two ints or None", > + ? ? ? ? ? ? ? ? ? ? ? ? ua->function_name); > + ? ? ? ? ? ?return 0; > + ? ? ? ?} > + ? ? ? ?ua->now = 0; > + ? ? ? ?return (_PyTime_ObjectToTimespec(PyTuple_GET_ITEM(times, 0), > + ? ? ? ? ? ? ? ? ? ?&(ua->atime_s), &(ua->atime_ns)) != -1) > + ? ? ? ? ? ?&& (_PyTime_ObjectToTimespec(PyTuple_GET_ITEM(times, 1), Put && on previous line like Python. > + ? ? ? ? ? ? ? ? ? ?&(ua->mtime_s), &(ua->mtime_ns)) != -1); > + ? ?} > + > + ? ?if (ns) { > + ? ? ? ?if (!PyTuple_CheckExact(ns) || (PyTuple_Size(ns) != 2)) { > + ? ? ? ? ? ?PyErr_Format(PyExc_TypeError, > + ? ? ? ? ? ? ? ? ? ? ? ? "%s: 'ns' must be a valid tuple of two ints", > + ? ? ? ? ? ? ? ? ? ? ? ? ua->function_name); > + ? ? ? ? ? ?return 0; > + ? ? ? ?} > + ? ? ? ?ua->now = 0; > + ? ? ? ?return (split_py_long_to_s_and_ns(PyTuple_GET_ITEM(ns, 0), > + ? ? ? ? ? ? ? ? ? ?&(ua->atime_s), &(ua->atime_ns))) > + ? ? ? ? ? ?&& (split_py_long_to_s_and_ns(PyTuple_GET_ITEM(ns, 1), > + ? ? ? ? ? ? ? ? ? ?&(ua->mtime_s), &(ua->mtime_ns))); > + ? ?} > + > + ? ?/* either times=None, or neither times nor ns was specified. use "now". */ > + ? ?ua->now = 1; > + ? ?return 1; > +} -- Regards, Benjamin From senthil at uthcode.com Fri May 4 07:21:17 2012 From: senthil at uthcode.com (Senthil Kumaran) Date: Fri, 4 May 2012 13:21:17 +0800 Subject: [Python-Dev] Another buildslave - Ubuntu again In-Reply-To: <20120502184617.33243626@pitrou.net> References: <4FA0CC4F.4070607@v.loewis.de> <20120502165425.06898f77@pitrou.net> <20120502184617.33243626@pitrou.net> Message-ID: On Thu, May 3, 2012 at 12:46 AM, Antoine Pitrou wrote: > Daily code coverage builds would be nice, but that's probably beyond > what the current infrastructure can offer. It would be nice if someone > wants to investigate that. Code coverage buildbots would indeed be good. I could give a try on this. What kind of infra changes would be required? I presume, it is the server side that you are referring to. Thank you, Senthil From larry at hastings.org Fri May 4 08:04:16 2012 From: larry at hastings.org (Larry Hastings) Date: Thu, 03 May 2012 23:04:16 -0700 Subject: [Python-Dev] [Python-checkins] cpython: Issue #14127: Add ns= parameter to utime, futimes, and lutimes. In-Reply-To: References: Message-ID: <4FA37160.4000709@hastings.org> On 05/03/2012 10:07 PM, Benjamin Peterson wrote: >> + if (times&& ns) { >> + PyErr_Format(PyExc_RuntimeError, > Why not a ValueError or TypeError? Well it's certainly not a TypeError. The 3.2 documentation defines TypeError as: Raised when an operation or function is applied to an object of inappropriate type. The associated value is a string giving details about the type mismatch. If someone called os.utime with both times and ns, and the values of each would have been legal if they'd been passed in in isolation, what would be the type mismatch? ValueError seems like a stretch. The 3.2 documentation defines ValueError as Raised when a built-in operation or function receives an argument that has the right type but an inappropriate value, and the situation is not described by a more precise exception such as IndexError. To me this describes a specific class of errors where a single value is invalid in isolation, like an overly-long string for a path on Windows, or a negative integer for some integer value that must always be 0 or greater. The error with utime is a different sort of error; you are passing in two presumably legal values, but the function requires that you pass in at most one. The only way I can see ValueError as being the right choice is from the awkward perspective of "if you passed in times, then the only valid value for ns is None" (or vice-versa). Are there existing APIs that use ValueError for just this sort of situation? I dimly recall there being something like this but I can't recall it. Is using RuntimeError some sort of Pythonic faux pas? >> + if (times&& (times != Py_None)) { > Conditions in parenthesis like this is not style. Can you point me to where this is described in PEP 7? I can't find it. >> + return (_PyTime_ObjectToTimespec(PyTuple_GET_ITEM(times, 0), >> +&(ua->atime_s),&(ua->atime_ns)) != -1) >> +&& (_PyTime_ObjectToTimespec(PyTuple_GET_ITEM(times, 1), > Put&& on previous line like Python. Okay. Since I have questions regarding two of your three suggested changes, I'll hold off on making any changes until the dust settles a little. Finally, I appreciate the feedback, but... why post it to python-dev? You could have sent me private email, or posted to the issue (#14127), the latter of which would have enabled using rich chocolaty Rietveld. I've seen a bunch of comments on checkins posted here and it all leaves me scratching my head. //arry/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Fri May 4 08:36:37 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 4 May 2012 16:36:37 +1000 Subject: [Python-Dev] [Python-checkins] cpython: Issue #14127: Add ns= parameter to utime, futimes, and lutimes. In-Reply-To: <4FA37160.4000709@hastings.org> References: <4FA37160.4000709@hastings.org> Message-ID: On Fri, May 4, 2012 at 4:04 PM, Larry Hastings wrote: > Finally, I appreciate the feedback, but... why post it to python-dev?? You > could have sent me private email, or posted to the issue (#14127), the > latter of which would have enabled using rich chocolaty Rietveld.? I've seen > a bunch of comments on checkins posted here and it all leaves me scratching > my head. It's just the way post-checkin review is set up - the "Follow-up-to" header for the python-checkins mailing list is python-dev. Such comments are rare enough and the fact that they apply to already committed code is important enough that there hasn't been a major push to get the scheme changed to anything else (by contrast, the old process where comments went back to python-checkins *was* problematic, as they would get lost in the flow of actual checkin messages, hence the switch to the current system). Cheers, Nick. -- Nick Coghlan?? |?? ncoghlan at gmail.com?? |?? Brisbane, Australia From g.brandl at gmx.net Fri May 4 09:01:44 2012 From: g.brandl at gmx.net (Georg Brandl) Date: Fri, 04 May 2012 09:01:44 +0200 Subject: [Python-Dev] [Python-checkins] cpython: Issue #14127: Add ns= parameter to utime, futimes, and lutimes. In-Reply-To: <4FA37160.4000709@hastings.org> References: <4FA37160.4000709@hastings.org> Message-ID: On 05/04/2012 08:04 AM, Larry Hastings wrote: > > On 05/03/2012 10:07 PM, Benjamin Peterson wrote: >>> + if (times && ns) { >>> + PyErr_Format(PyExc_RuntimeError, >> Why not a ValueError or TypeError? > > Well it's certainly not a TypeError. The 3.2 documentation defines TypeError as: > > Raised when an operation or function is applied to an object of > inappropriate type. The associated value is a string giving details about > the type mismatch. > > If someone called os.utime with both times and ns, and the values of each would > have been legal if they'd been passed in in isolation, what would be the type > mismatch? What exception do you get otherwise when you call a function with inappropriate argument combinations? > Is using RuntimeError some sort of Pythonic faux pas? RuntimeError is not used very much in the stdlib, and if used, then for somewhat more dramatic errors. > Finally, I appreciate the feedback, but... why post it to python-dev? You could > have sent me private email, or posted to the issue (#14127), the latter of which > would have enabled using rich chocolaty Rietveld. I've seen a bunch of comments > on checkins posted here and it all leaves me scratching my head. It has been argued in the past that python-committers is a better place for the review comments, but it was declined as being "not public enough". I agree that python-checkins or private email *definitely* isn't public enough. Georg From martin at v.loewis.de Fri May 4 09:15:24 2012 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Fri, 04 May 2012 09:15:24 +0200 Subject: [Python-Dev] Another buildslave - Ubuntu again In-Reply-To: <20120502165551.4da586d5@pitrou.net> References: <20120502165551.4da586d5@pitrou.net> Message-ID: <4FA3820C.4040501@v.loewis.de> > That page would probably like a good cleanup. I don't even think > creating an user is required - it's just good practice, and you > probably want that user to have as few privileges as possible. That's indeed the motivation. Buildbot slave operators need to recognize that they are opening their machines to execution of arbitrary code, even though this could only be abused by committers. But suppose a committer loses the laptop, which has his SSH key on it, then anybody getting the key could commit malicious code, which then gets executed by all build slaves. Of course, it would be possible to find out whose key has been used (although *not* from the commit message), and revoke that, but the damage might already be done. Regards, Martin P.S. Another attack vector is through the master: if somebody hacks into the machine running the master, they can also compromise all slaves. Of course, we are trying to make it really hard to break into python.org. From martin at v.loewis.de Fri May 4 09:17:47 2012 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Fri, 04 May 2012 09:17:47 +0200 Subject: [Python-Dev] Another buildslave - Ubuntu again In-Reply-To: References: <4FA0CC4F.4070607@v.loewis.de> <20120502165425.06898f77@pitrou.net> <20120502184617.33243626@pitrou.net> Message-ID: <4FA3829B.8080906@v.loewis.de> On 04.05.2012 07:21, Senthil Kumaran wrote: > On Thu, May 3, 2012 at 12:46 AM, Antoine Pitrou wrote: >> Daily code coverage builds would be nice, but that's probably beyond >> what the current infrastructure can offer. It would be nice if someone >> wants to investigate that. > > Code coverage buildbots would indeed be good. I could give a try on > this. What kind of infra changes would be required? I presume, it is > the server side that you are referring to. I think the setup could be similar to the daily DMG builder. If the slave generates a set of HTML files (say), those can be uploaded just fine. However, we don't have any "make coverage" target currently in the makefile. So if you contribute that, we could then have it run daily. Regards, Martin From eric at trueblade.com Fri May 4 10:21:35 2012 From: eric at trueblade.com (Eric V. Smith) Date: Fri, 04 May 2012 04:21:35 -0400 Subject: [Python-Dev] [Python-checkins] cpython: avoid unitialized memory In-Reply-To: References: Message-ID: <4FA3918F.6000204@trueblade.com> On 5/4/2012 1:14 AM, benjamin.peterson wrote: > http://hg.python.org/cpython/rev/b0deafca6c02 > changeset: 76743:b0deafca6c02 > user: Benjamin Peterson > date: Fri May 04 01:14:03 2012 -0400 > summary: > avoid unitialized memory > > files: > Modules/posixmodule.c | 2 +- > 1 files changed, 1 insertions(+), 1 deletions(-) > > > diff --git a/Modules/posixmodule.c b/Modules/posixmodule.c > --- a/Modules/posixmodule.c > +++ b/Modules/posixmodule.c > @@ -3576,7 +3576,7 @@ > split_py_long_to_s_and_ns(PyObject *py_long, time_t *s, long *ns) > { > int result = 0; > - PyObject *divmod; > + PyObject *divmod = NULL; > divmod = PyNumber_Divmod(py_long, billion); How is that uninitialized if it's being set on the next line? From vinay_sajip at yahoo.co.uk Fri May 4 10:44:25 2012 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Fri, 4 May 2012 08:44:25 +0000 (UTC) Subject: [Python-Dev] Python program name Message-ID: IIUC, the program name of the Python executable is set to whatever argv[0] is. Is there a reason for this, rather than using one of the various OS-specific APIs [1] for getting the name of the running executable? The reason I ask is that in a virtual environment (venv), the exe's path is the only thing you have to go on, and if you don't have that, you can't find the pyvenv.cfg file and hence the base Python from which the venv was created. Of course argv[0] is normally set to the executable's path, but there's at least one test (in test_sys) where Python is spawned (via subprocess) with argv[0] set to "nonexistent". If run from a venv created from a source build, with no Python 3.3 installed, this test fails because the spawned Python can't locate the locale encoding, and bails. It works when run from a source build ("./python ...") because the getpath.c code to find a prefix looks in the directory implied by argv[0] (in the case of "nonexistent" => "", i.e. the current directory) for "Modules/Setup", and also works from a venv if created from an installed Python 3.3 (since the value of sys.prefix is used as a fallback check, and that value will contain that Python). However, when run from a venv created from a source build, with no Python 3.3 installed, the error occurs. A workaround might be one of these: 1. Use an OS-specific API rather than argv[0] to get the executable's path for the processing done by getpath.c in all cases, or 2. If the file named by argv[0] doesn't exist, then use the OS-specific API to find the executable's path, and try with that, or 3. If using the current logic, no prefix is found, then use the OS-specific API to to find the executable's path, and try with that. I would prefer to use option 2 and change getpath.c / getpathp.c accordingly. Does anyone here see problems with that approach? Regards, Vinay Sajip [1] http://stackoverflow.com/a/933996 From storchaka at gmail.com Fri May 4 11:00:52 2012 From: storchaka at gmail.com (Serhiy Storchaka) Date: Fri, 04 May 2012 12:00:52 +0300 Subject: [Python-Dev] Optimize Unicode strings in Python 3.3 In-Reply-To: References: Message-ID: 04.05.12 02:45, Victor Stinner ???????(??): > * Two steps: compute the length and maximum character of the output > string, allocate the output string and then write characters. str%args > was using it. > * Optimistic approach. Start with a ASCII buffer, enlarge and widen > (to UCS2 and then UCS4) the buffer when new characters are written. > Approach used by the UTF-8 decoder and by str%args since today. In real today UTF-8 decoder uses two-steps approach. Only after encountering an error it switches to optimistic approach. > The optimistic approach uses realloc() to resize the string. It is > faster than the PyAccu approach (at least for short ASCII strings), > maybe because it avoids the creating of temporary short strings. > realloc() looks to be efficient on Linux and Windows (at least Seven). IMHO, realloc() has no relationship to this. The case in the cost of managing of the list and creating of temporary strings. > Various notes: > * PyUnicode_READ() is slower than reading a Py_UNICODE array. And PyUnicode_WRITE() is slower than writing a Py_UNICODE/PyUCS* array. > * Some decoders unroll the main loop to process 4 or 8 bytes (32 or > 64 bits CPU) at each step. Note, this is not only CPU-, but OS-depending (LP64 vs LLP64). > I am interested if you know other tricks to optimize Unicode strings > in Python, or if you are interested to work on this topic. Optimized ASCII decoder (issue 14419) is not only reads 4 or 8 bytes at a time, but writes them all at a time. This is a very specific optimization. More general principle is replacing serial scanning and translating on an one-pass optimistic reading and writing. This improves the efficiency of the memory cache. I'm going to try it in UTF-8 decoder, it will allow to increase the speed of decoding ASCII-only strings up to speed of optimized ASCII decoder. From stefan at bytereef.org Fri May 4 11:39:51 2012 From: stefan at bytereef.org (Stefan Krah) Date: Fri, 4 May 2012 11:39:51 +0200 Subject: [Python-Dev] [Python-checkins] cpython: avoid unitialized memory In-Reply-To: References: Message-ID: <20120504093951.GA5987@sleipnir.bytereef.org> benjamin.peterson wrote: > summary: > avoid unitialized memory > > diff --git a/Modules/posixmodule.c b/Modules/posixmodule.c > --- a/Modules/posixmodule.c > +++ b/Modules/posixmodule.c > @@ -3576,7 +3576,7 @@ > split_py_long_to_s_and_ns(PyObject *py_long, time_t *s, long *ns) > { > int result = 0; > - PyObject *divmod; > + PyObject *divmod = NULL; > divmod = PyNumber_Divmod(py_long, billion); > if (!divmod) > goto exit; If I'm not mistaken, divmod was already unconditionally initialized by PyNumber_Divmod(). Stefan Krah From stefan at bytereef.org Fri May 4 11:33:49 2012 From: stefan at bytereef.org (Stefan Krah) Date: Fri, 4 May 2012 11:33:49 +0200 Subject: [Python-Dev] [Python-checkins] cpython: Issue #14127: Add ns= parameter to utime, futimes, and lutimes. In-Reply-To: <4FA37160.4000709@hastings.org> References: <4FA37160.4000709@hastings.org> Message-ID: <20120504093349.GA5783@sleipnir.bytereef.org> Larry Hastings wrote: > On 05/03/2012 10:07 PM, Benjamin Peterson wrote: > > + if (times && ns) { > + PyErr_Format(PyExc_RuntimeError, > > Why not a ValueError or TypeError? > > > Well it's certainly not a TypeError. The 3.2 documentation defines TypeError > as: > > Raised when an operation or function is applied to an object of > inappropriate type. The associated value is a string giving details about > the type mismatch. > > If someone called os.utime with both times and ns, and the values of each would > have been legal if they'd been passed in in isolation, what would be the type > mismatch? I had the same question a while ago, and IIRC Raymond said that the convention is to raise a TypeError if a combination of arguments cannot be handled by a function. In OCaml this would be quite natural: $ ocaml Objective Caml version 3.12.0 # type kwargs = TIMES | NS;; type kwargs = TIMES | NS let utime args = match args with | (_, TIMES) -> "Got times" | (_, NS) -> "Got NS";; val utime : 'a * kwargs -> string = # utime ("/etc/passwd", TIMES);; - : string = "Got times" # utime ("/etc/passwd", NS);; - : string = "Got NS" # utime ("/etc/passwd", TIMES, NS);; Error: This expression has type string * kwargs * kwargs but an expression was expected of type 'a * kwargs In Python it makes sense if (for the purpose of raising an error) one assumes that {"times":(0, 0)}, {"ns":(0, 0)} and {"times":(0, 0), "ns":(0, 0)} have different types. Stefan Krah From stefan at bytereef.org Fri May 4 11:45:35 2012 From: stefan at bytereef.org (Stefan Krah) Date: Fri, 4 May 2012 11:45:35 +0200 Subject: [Python-Dev] [Python-checkins] cpython: what is a invalid tuple? In-Reply-To: References: Message-ID: <20120504094535.GB5987@sleipnir.bytereef.org> benjamin.peterson wrote: > files: > Modules/posixmodule.c | 4 ++-- > 1 files changed, 2 insertions(+), 2 deletions(-) > > > diff --git a/Modules/posixmodule.c b/Modules/posixmodule.c > --- a/Modules/posixmodule.c > +++ b/Modules/posixmodule.c > @@ -3702,7 +3702,7 @@ > if (!PyTuple_CheckExact(times) || (PyTuple_Size(times) != 2)) { > PyErr_Format(PyExc_TypeError, > "%s: 'time' must be either" > - " a valid tuple of two ints or None", > + " a tuple of two ints or None", Unrelated to this commit, but 'time' should be 'times'. Stefan Krah From greg.ewing at canterbury.ac.nz Fri May 4 08:22:25 2012 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Fri, 04 May 2012 18:22:25 +1200 Subject: [Python-Dev] [Python-checkins] cpython: Issue #14127: Add ns= parameter to utime, futimes, and lutimes. In-Reply-To: <4FA37160.4000709@hastings.org> References: <4FA37160.4000709@hastings.org> Message-ID: <4FA375A1.6090206@canterbury.ac.nz> Larry Hastings wrote: > > On 05/03/2012 10:07 PM, Benjamin Peterson wrote: > >>>+ if (times && ns) { >>>+ PyErr_Format(PyExc_RuntimeError, >>> >>Why not a ValueError or TypeError? >> > > Well it's certainly not a TypeError. TypeError is not just for values of the wrong type, it's also used for passing the wrong number of arguments to a function and the like. So TypeError would be a reasonable choice here, I think. -- Greg From solipsis at pitrou.net Fri May 4 12:26:28 2012 From: solipsis at pitrou.net (Antoine Pitrou) Date: Fri, 4 May 2012 12:26:28 +0200 Subject: [Python-Dev] cpython: Issue #14127: Add ns= parameter to utime, futimes, and lutimes. References: Message-ID: <20120504122628.763b2f5a@pitrou.net> On Fri, 4 May 2012 01:07:04 -0400 Benjamin Peterson wrote: > > + ? ?if (times && (times != Py_None)) { > > Conditions in parenthesis like this is not style. If it's not style, then what is it? :-) From g.brandl at gmx.net Fri May 4 12:34:17 2012 From: g.brandl at gmx.net (Georg Brandl) Date: Fri, 04 May 2012 12:34:17 +0200 Subject: [Python-Dev] cpython: Issue #14127: Fix two bugs with the Windows implementation. In-Reply-To: References: Message-ID: On 05/04/2012 11:32 AM, larry.hastings wrote: > http://hg.python.org/cpython/rev/fc5d2f4291ac > changeset: 76747:fc5d2f4291ac > user: Larry Hastings > date: Fri May 04 02:31:57 2012 -0700 > summary: > Issue #14127: Fix two bugs with the Windows implementation. Would be nice to mention what these bugs were, otherwise the commit message is not very helpful when doing e.g. bisect or annotate. Georg From pierre.chanial at gmail.com Fri May 4 14:17:56 2012 From: pierre.chanial at gmail.com (Pierre Chanial) Date: Fri, 4 May 2012 14:17:56 +0200 Subject: [Python-Dev] PEP 377 : Allow __enter__() methods to skip the statement body : real world case Message-ID: Hello, PEP 377 has been rejected for lack of use cases. Here is one. I'm writing an MPI-based application and in some cases, when there is less work items than processes, I need to create a new communicator excluding the processes that have nothing to do. This new communicator should finally be freed by the processes that had work to do (and only by them). If there is more work than processes, the usual communicator should be used. A neat way to do that would be to write: with filter_comm(comm, nworkitems) as newcomm: ... do work with communicator newcomm... the body being executed only by the processes that have work to do. It looks better than: if comm.size < nworkitems: newcomm = get_new_communicator(comm, nworkitems) else: newcomm = comm if comm.rank < nworkitems: try: ... do work with communicator newcomm... finally: if comm.size < nworkitems: newcomm.Free() Especially since I have to use that quite often. Cheers, Pierre -------------- next part -------------- An HTML attachment was scrubbed... URL: From fuzzyman at voidspace.org.uk Fri May 4 14:29:14 2012 From: fuzzyman at voidspace.org.uk (Michael Foord) Date: Fri, 4 May 2012 13:29:14 +0100 Subject: [Python-Dev] Python program name In-Reply-To: References: Message-ID: <6FD4A972-7439-4212-ACEC-903D53CEEE1D@voidspace.org.uk> On 4 May 2012, at 09:44, Vinay Sajip wrote: > IIUC, the program name of the Python executable is set to whatever argv[0] is. > Is there a reason for this, rather than using one of the various OS-specific > APIs [1] for getting the name of the running executable? The reason I ask is > that in a virtual environment (venv), the exe's path is the only thing you have > to go on, and if you don't have that, you can't find the pyvenv.cfg file and > hence the base Python from which the venv was created. argv[0] is the *script* name, not the executable name - surely? The executable path is normally available in sys.executable. Michael > > Of course argv[0] is normally set to the executable's path, but there's at least > one test (in test_sys) where Python is spawned (via subprocess) with argv[0] set > to "nonexistent". If run from a venv created from a source build, with no Python > 3.3 installed, this test fails because the spawned Python can't locate the > locale encoding, and bails. > > It works when run from a source build ("./python ...") because the getpath.c > code to find a prefix looks in the directory implied by argv[0] (in the case of > "nonexistent" => "", i.e. the current directory) for "Modules/Setup", and also > works from a venv if created from an installed Python 3.3 (since the value of > sys.prefix is used as a fallback check, and that value will contain that > Python). However, when run from a venv created from a source build, with no > Python 3.3 installed, the error occurs. > > A workaround might be one of these: > > 1. Use an OS-specific API rather than argv[0] to get the executable's path for > the processing done by getpath.c in all cases, or > > 2. If the file named by argv[0] doesn't exist, then use the OS-specific API to > find the executable's path, and try with that, or > > 3. If using the current logic, no prefix is found, then use the OS-specific API > to to find the executable's path, and try with that. > > I would prefer to use option 2 and change getpath.c / getpathp.c accordingly. > Does anyone here see problems with that approach? > > Regards, > > Vinay Sajip > > [1] http://stackoverflow.com/a/933996 > > > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/fuzzyman%40voidspace.org.uk > -- http://www.voidspace.org.uk/ May you do good and not evil May you find forgiveness for yourself and forgive others May you share freely, never taking more than you give. -- the sqlite blessing http://www.sqlite.org/different.html From stephen at xemacs.org Fri May 4 14:33:15 2012 From: stephen at xemacs.org (Stephen J. Turnbull) Date: Fri, 04 May 2012 21:33:15 +0900 Subject: [Python-Dev] [Python-checkins] cpython: avoid unitialized memory In-Reply-To: <4FA3918F.6000204@trueblade.com> References: <4FA3918F.6000204@trueblade.com> Message-ID: <8762cc15ys.fsf@uwakimon.sk.tsukuba.ac.jp> Eric V. Smith writes: > > - PyObject *divmod; > > + PyObject *divmod = NULL; > > divmod = PyNumber_Divmod(py_long, billion); > > How is that uninitialized if it's being set on the next line? Maybe they finally developed a Sufficiently Stupid Compiler? From solipsis at pitrou.net Fri May 4 14:33:47 2012 From: solipsis at pitrou.net (Antoine Pitrou) Date: Fri, 4 May 2012 14:33:47 +0200 Subject: [Python-Dev] Python program name References: <6FD4A972-7439-4212-ACEC-903D53CEEE1D@voidspace.org.uk> Message-ID: <20120504143347.7b0be075@pitrou.net> On Fri, 4 May 2012 13:29:14 +0100 Michael Foord wrote: > > On 4 May 2012, at 09:44, Vinay Sajip wrote: > > > IIUC, the program name of the Python executable is set to whatever argv[0] is. > > Is there a reason for this, rather than using one of the various OS-specific > > APIs [1] for getting the name of the running executable? The reason I ask is > > that in a virtual environment (venv), the exe's path is the only thing you have > > to go on, and if you don't have that, you can't find the pyvenv.cfg file and > > hence the base Python from which the venv was created. > > > argv[0] is the *script* name, not the executable name - surely? > > The executable path is normally available in sys.executable. I think Vinay is talking about C argv, not sys.argv. Regards Antoine. From solipsis at pitrou.net Fri May 4 14:39:52 2012 From: solipsis at pitrou.net (Antoine Pitrou) Date: Fri, 4 May 2012 14:39:52 +0200 Subject: [Python-Dev] Python program name References: Message-ID: <20120504143952.73173b26@pitrou.net> On Fri, 4 May 2012 08:44:25 +0000 (UTC) Vinay Sajip wrote: > IIUC, the program name of the Python executable is set to whatever argv[0] is. > Is there a reason for this, rather than using one of the various OS-specific > APIs [1] for getting the name of the running executable? The reason I ask is > that in a virtual environment (venv), the exe's path is the only thing you have > to go on, and if you don't have that, you can't find the pyvenv.cfg file and > hence the base Python from which the venv was created. > > Of course argv[0] is normally set to the executable's path, but there's at least > one test (in test_sys) where Python is spawned (via subprocess) with argv[0] set > to "nonexistent". If run from a venv created from a source build, with no Python > 3.3 installed, this test fails because the spawned Python can't locate the > locale encoding, and bails. If that's the only failing test, we can simply skip it when run from a venv. A non-existent argv[0] is arguably a borderline case which you should only encounter when e.g. embedding Python. > I would prefer to use option 2 and change getpath.c / getpathp.c accordingly. > Does anyone here see problems with that approach? getpath.c is sufficiently byzantine that we don't want to complexify it too much, IMHO. Regards Antoine. From ncoghlan at gmail.com Fri May 4 15:34:05 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 4 May 2012 23:34:05 +1000 Subject: [Python-Dev] PEP 377 : Allow __enter__() methods to skip the statement body : real world case In-Reply-To: References: Message-ID: On Fri, May 4, 2012 at 10:17 PM, Pierre Chanial wrote: > Hello, > > PEP 377 has been rejected for lack of use cases. Here is one. > > I'm writing an MPI-based application and in some cases, when there is less > work items than processes, I need to create a new communicator excluding the > processes that have nothing to do. This new communicator should finally be > freed by the processes that had work to do (and only by them). If there is > more work than processes, the usual communicator should be used. > > A neat way to do that would be to write: > > with filter_comm(comm, nworkitems) as newcomm: > > ? ? ... do work with communicator newcomm... > > the body being executed only by the processes that have work to do. > > It looks better than: > > if comm.size < nworkitems: > newcomm = get_new_communicator(comm, nworkitems) > > else: > newcomm = comm > > if comm.rank < nworkitems: > try: > ... do work with communicator newcomm... > finally: > if comm.size < nworkitems: > newcomm.Free() > > Especially since I have to use that quite often. However, your original code is not substantially better than: with filter_comm(comm, nworkitems) as newcomm: if newcomm is not None: ... do work with communicator newcomm... Where filtercomm is a context manager that: - returns None from __enter__ if this process has no work to do - cleans up in __exit__ if a new communicator was allocated in __enter__ It isn't that there are no use cases for skipping the statement body: it's that the extra machinery needed to allow a context manager to do so implicitly is quite intrusive, and the control flow is substantially clearer to the reader of the code if the context manager is instead paired with an appropriate nested if statement. Cheers, Nick. -- Nick Coghlan?? |?? ncoghlan at gmail.com?? |?? Brisbane, Australia From benjamin at python.org Fri May 4 17:01:40 2012 From: benjamin at python.org (Benjamin Peterson) Date: Fri, 4 May 2012 11:01:40 -0400 Subject: [Python-Dev] [Python-checkins] cpython: Issue #14127: Add ns= parameter to utime, futimes, and lutimes. In-Reply-To: <4FA37160.4000709@hastings.org> References: <4FA37160.4000709@hastings.org> Message-ID: 2012/5/4 Larry Hastings : > + ? ?if (times && (times != Py_None)) { > > Conditions in parenthesis like this is not style. > > > Can you point me to where this is described in PEP 7?? I can't find it. It's not explicitly stated, but there is the following nice example: if (type->tp_dictoffset != 0 && base->tp_dictoffset == 0 && type->tp_dictoffset == b_size && (size_t)t_size == b_size + sizeof(PyObject *)) return 0; /* "Forgive" adding a __dict__ only */ There's also the consistency with surrounding code imperative. -- Regards, Benjamin From benjamin at python.org Fri May 4 17:06:16 2012 From: benjamin at python.org (Benjamin Peterson) Date: Fri, 4 May 2012 11:06:16 -0400 Subject: [Python-Dev] [Python-checkins] cpython: avoid unitialized memory In-Reply-To: <4FA3918F.6000204@trueblade.com> References: <4FA3918F.6000204@trueblade.com> Message-ID: 2012/5/4 Eric V. Smith : > On 5/4/2012 1:14 AM, benjamin.peterson wrote: >> http://hg.python.org/cpython/rev/b0deafca6c02 >> changeset: ? 76743:b0deafca6c02 >> user: ? ? ? ?Benjamin Peterson >> date: ? ? ? ?Fri May 04 01:14:03 2012 -0400 >> summary: >> ? avoid unitialized memory >> >> files: >> ? Modules/posixmodule.c | ?2 +- >> ? 1 files changed, 1 insertions(+), 1 deletions(-) >> >> >> diff --git a/Modules/posixmodule.c b/Modules/posixmodule.c >> --- a/Modules/posixmodule.c >> +++ b/Modules/posixmodule.c >> @@ -3576,7 +3576,7 @@ >> ?split_py_long_to_s_and_ns(PyObject *py_long, time_t *s, long *ns) >> ?{ >> ? ? ?int result = 0; >> - ? ?PyObject *divmod; >> + ? ?PyObject *divmod = NULL; >> ? ? ?divmod = PyNumber_Divmod(py_long, billion); > > How is that uninitialized if it's being set on the next line? It was a misreading on my part. -- Regards, Benjamin From dirkjan at ochtman.nl Fri May 4 17:08:07 2012 From: dirkjan at ochtman.nl (Dirkjan Ochtman) Date: Fri, 4 May 2012 17:08:07 +0200 Subject: [Python-Dev] Does trunk still support any compilers that *don't* allow declaring variables after code? In-Reply-To: <4FA0F3B4.5070707@hastings.org> References: <4FA0F3B4.5070707@hastings.org> Message-ID: On Wed, May 2, 2012 at 10:43 AM, Larry Hastings wrote: > Do we officially support any C compilers that *don't* permit "intermingled > variable declarations and code"?? Do we *unofficially* support any?? And if > we do, what do we gain? This might be of interest: http://herbsutter.com/2012/05/03/reader-qa-what-about-vc-and-c99/?nope Specifically, apparently MSVC 2010 supports variable declarations in the middle of a block in C. Also, since full C99 support won't be coming to MSVC, perhaps Python should move to compiling in C++ mode? Cheers, Dirkjan From brian at python.org Fri May 4 17:14:50 2012 From: brian at python.org (Brian Curtin) Date: Fri, 4 May 2012 10:14:50 -0500 Subject: [Python-Dev] Does trunk still support any compilers that *don't* allow declaring variables after code? In-Reply-To: References: <4FA0F3B4.5070707@hastings.org> Message-ID: On Fri, May 4, 2012 at 10:08 AM, Dirkjan Ochtman wrote: > On Wed, May 2, 2012 at 10:43 AM, Larry Hastings wrote: >> Do we officially support any C compilers that *don't* permit "intermingled >> variable declarations and code"?? Do we *unofficially* support any?? And if >> we do, what do we gain? > > This might be of interest: > > http://herbsutter.com/2012/05/03/reader-qa-what-about-vc-and-c99/?nope > > Specifically, apparently MSVC 2010 supports variable declarations in > the middle of a block in C. > > Also, since full C99 support won't be coming to MSVC, perhaps Python > should move to compiling in C++ mode? After seeing that same article yesterday and having the VS2010 port open, I tried this and it appears it won't work without significant code changes at least as far as I saw. I enabled /TP on the pythoncore project and got over 1363 errors. I don't have the time to figure it out right now, but I'll look more into it later. From vinay_sajip at yahoo.co.uk Fri May 4 17:19:44 2012 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Fri, 4 May 2012 15:19:44 +0000 (UTC) Subject: [Python-Dev] Python program name References: <20120504143952.73173b26@pitrou.net> Message-ID: Antoine Pitrou pitrou.net> writes: > If that's the only failing test, we can simply skip it when run from a > venv. A non-existent argv[0] is arguably a borderline case which you > should only encounter when e.g. embedding Python. Actually there are four module failures: test_sys, test_packaging, test_distutils and test_subprocess. I haven't looked into all of them yet, but many of the failure messages were "unable to get the locale encoding". > getpath.c is sufficiently byzantine that we don't want to complexify it > too much, IMHO. Right, but the change is unlikely to add significantly to complexity. It would be one static function e.g. named get_executable_path and one call to it, conditional on !isfile(argv[0]), in calculate_path. That would be in two places - Modules/getpath.c and PC/getpathp.c. I'll skip that test_sys test for now, and see where the other failures lead me to. Regards, Vinay Sajip From status at bugs.python.org Fri May 4 18:07:17 2012 From: status at bugs.python.org (Python tracker) Date: Fri, 4 May 2012 18:07:17 +0200 (CEST) Subject: [Python-Dev] Summary of Python tracker Issues Message-ID: <20120504160717.10B9F1C880@psf.upfronthosting.co.za> ACTIVITY SUMMARY (2012-04-27 - 2012-05-04) Python tracker at http://bugs.python.org/ To view or respond to any of the issues listed below, click on the issue. Do NOT respond to this message. Issues counts and deltas: open 3399 ( -6) closed 23101 (+45) total 26500 (+39) Open issues with patches: 1452 Issues opened (26) ================== #13183: pdb skips frames after hitting a breakpoint and running step http://bugs.python.org/issue13183 reopened by loewis #14428: Implementation of the PEP 418 http://bugs.python.org/issue14428 reopened by neologix #14684: zlib set dictionary support inflateSetDictionary http://bugs.python.org/issue14684 opened by Sam.Rushing #14689: make PYTHONWARNINGS variable work in libpython http://bugs.python.org/issue14689 opened by petere #14690: Use monotonic time for sched, trace and subprocess modules http://bugs.python.org/issue14690 opened by haypo #14692: json.joads parse_constant callback not working anymore http://bugs.python.org/issue14692 opened by Jakob.Simon-Gaarde #14693: hashlib fallback modules should be built even if openssl *is* http://bugs.python.org/issue14693 opened by dov #14695: Tools/parser/unparse.py is out of date. http://bugs.python.org/issue14695 opened by mark.dickinson #14697: parser module doesn't support set displays or set comprehensio http://bugs.python.org/issue14697 opened by mark.dickinson #14698: test_posix failures - getpwduid()/initgroups()/getgroups() http://bugs.python.org/issue14698 opened by neologix #14700: Integer overflow in classic string formatting http://bugs.python.org/issue14700 opened by storchaka #14701: parser module doesn't support 'raise ... from' http://bugs.python.org/issue14701 opened by mark.dickinson #14702: os.makedirs breaks under autofs directories http://bugs.python.org/issue14702 opened by amcnabb #14703: Update PEP metaprocesses to describe PEP czar role http://bugs.python.org/issue14703 opened by ncoghlan #14705: Add 'bool' format character to PyArg_ParseTuple* http://bugs.python.org/issue14705 opened by larry #14709: http.client fails sending read()able Object http://bugs.python.org/issue14709 opened by Tobias.Steinr??cken #14710: pkgutil.get_loader is broken http://bugs.python.org/issue14710 opened by Pavel.Aslanov #14711: Remove os.stat_float_times http://bugs.python.org/issue14711 opened by aronacher #14712: Integrate PEP 405 http://bugs.python.org/issue14712 opened by vinay.sajip #14713: PEP 414 installation hook fails with an AssertionError http://bugs.python.org/issue14713 opened by vinay.sajip #14714: PEp 414 tokenizing hook does not preserve tabs http://bugs.python.org/issue14714 opened by vinay.sajip #14715: test.support.DirsOnSysPath should be replaced by importlib.tes http://bugs.python.org/issue14715 opened by eric.smith #14716: Use unicode_writer API for str.format() http://bugs.python.org/issue14716 opened by haypo #14720: sqlite3 microseconds http://bugs.python.org/issue14720 opened by frankmillman #14721: httplib doesn't specify content-length header for POST request http://bugs.python.org/issue14721 opened by Arve.Knudsen #14722: Overflow in parsing 'float' parameters in PyArg_ParseTuple* http://bugs.python.org/issue14722 opened by storchaka Most recent 15 issues with no replies (15) ========================================== #14714: PEp 414 tokenizing hook does not preserve tabs http://bugs.python.org/issue14714 #14713: PEP 414 installation hook fails with an AssertionError http://bugs.python.org/issue14713 #14712: Integrate PEP 405 http://bugs.python.org/issue14712 #14709: http.client fails sending read()able Object http://bugs.python.org/issue14709 #14703: Update PEP metaprocesses to describe PEP czar role http://bugs.python.org/issue14703 #14695: Tools/parser/unparse.py is out of date. http://bugs.python.org/issue14695 #14689: make PYTHONWARNINGS variable work in libpython http://bugs.python.org/issue14689 #14680: pydoc with -w option does not work for a lot of help topics http://bugs.python.org/issue14680 #14679: Changes to html.parser break third-party code http://bugs.python.org/issue14679 #14674: Add link to RFC 4627 from json documentation http://bugs.python.org/issue14674 #14652: Better error messages for wsgiref validator failures http://bugs.python.org/issue14652 #14649: doctest.DocTestSuite error misleading when module has no docst http://bugs.python.org/issue14649 #14645: Generator does not translate linesep characters in certain cir http://bugs.python.org/issue14645 #14616: subprocess docs should mention pipes.quote/shlex.quote http://bugs.python.org/issue14616 #14584: Add gzip support to xmlrpc.server http://bugs.python.org/issue14584 Most recent 15 issues waiting for review (15) ============================================= #14722: Overflow in parsing 'float' parameters in PyArg_ParseTuple* http://bugs.python.org/issue14722 #14716: Use unicode_writer API for str.format() http://bugs.python.org/issue14716 #14712: Integrate PEP 405 http://bugs.python.org/issue14712 #14710: pkgutil.get_loader is broken http://bugs.python.org/issue14710 #14705: Add 'bool' format character to PyArg_ParseTuple* http://bugs.python.org/issue14705 #14701: parser module doesn't support 'raise ... from' http://bugs.python.org/issue14701 #14700: Integer overflow in classic string formatting http://bugs.python.org/issue14700 #14698: test_posix failures - getpwduid()/initgroups()/getgroups() http://bugs.python.org/issue14698 #14697: parser module doesn't support set displays or set comprehensio http://bugs.python.org/issue14697 #14695: Tools/parser/unparse.py is out of date. http://bugs.python.org/issue14695 #14693: hashlib fallback modules should be built even if openssl *is* http://bugs.python.org/issue14693 #14692: json.joads parse_constant callback not working anymore http://bugs.python.org/issue14692 #14690: Use monotonic time for sched, trace and subprocess modules http://bugs.python.org/issue14690 #14689: make PYTHONWARNINGS variable work in libpython http://bugs.python.org/issue14689 #14684: zlib set dictionary support inflateSetDictionary http://bugs.python.org/issue14684 Top 10 most discussed issues (10) ================================= #14127: add st_*time_ns fields to os.stat(), add ns keyword to os.*uti http://bugs.python.org/issue14127 20 msgs #14705: Add 'bool' format character to PyArg_ParseTuple* http://bugs.python.org/issue14705 16 msgs #13183: pdb skips frames after hitting a breakpoint and running step http://bugs.python.org/issue13183 13 msgs #14428: Implementation of the PEP 418 http://bugs.python.org/issue14428 13 msgs #14700: Integer overflow in classic string formatting http://bugs.python.org/issue14700 13 msgs #14304: Implement utf-8-bmp codec http://bugs.python.org/issue14304 11 msgs #14656: Add a macro for unreachable code http://bugs.python.org/issue14656 11 msgs #14693: hashlib fallback modules should be built even if openssl *is* http://bugs.python.org/issue14693 11 msgs #11352: Update cgi module doc http://bugs.python.org/issue11352 9 msgs #14662: shutil.move doesn't handle ENOTSUP raised by chflags on OS X http://bugs.python.org/issue14662 8 msgs Issues closed (41) ================== #6085: Logging in BaseHTTPServer.BaseHTTPRequestHandler causes lag http://bugs.python.org/issue6085 closed by orsenthil #7185: csv reader utf-8 BOM error http://bugs.python.org/issue7185 closed by r.david.murray #7707: multiprocess.Queue operations during import can lead to deadlo http://bugs.python.org/issue7707 closed by pitrou #9123: insecure os.urandom on VMS http://bugs.python.org/issue9123 closed by loewis #9154: Parser module doesn't understand function annotations. http://bugs.python.org/issue9154 closed by mark.dickinson #10433: Document unique behavior of 'getgroups' on OSX http://bugs.python.org/issue10433 closed by ned.deily #11839: argparse: unexpected behavior of default for FileType('w') http://bugs.python.org/issue11839 closed by Paolo.Elvati #14236: re: Docstring for \s and \S doesn???t mention Unicode http://bugs.python.org/issue14236 closed by ezio.melotti #14309: Deprecate time.clock() http://bugs.python.org/issue14309 closed by python-dev #14371: Add support for bzip2 compression to the zipfile module http://bugs.python.org/issue14371 closed by loewis #14387: Include\accu.h incompatible with Windows.h http://bugs.python.org/issue14387 closed by skrah #14427: urllib.request.Request get_header and header_items not documen http://bugs.python.org/issue14427 closed by orsenthil #14461: In re's positive lookbehind assertion documentation match() ca http://bugs.python.org/issue14461 closed by ezio.melotti #14519: In re's examples the example with scanf() contains wrong analo http://bugs.python.org/issue14519 closed by ezio.melotti #14521: math.copysign(1., float('nan')) returns -1. http://bugs.python.org/issue14521 closed by mark.dickinson #14558: Documentation for unittest.main does not describe some keyword http://bugs.python.org/issue14558 closed by ezio.melotti #14605: Make import machinery explicit http://bugs.python.org/issue14605 closed by brett.cannon #14610: configure script hangs on pthread verification and PTHREAD_SCO http://bugs.python.org/issue14610 closed by neologix #14618: remove modules_reloading from the interpreter state http://bugs.python.org/issue14618 closed by eric.snow #14642: Fix importlib.h build rule to not depend on hg http://bugs.python.org/issue14642 closed by loewis #14646: Require loaders set __loader__ and __package__ http://bugs.python.org/issue14646 closed by brett.cannon #14647: imp.reload() on a package leads to a segfault or a GC assertio http://bugs.python.org/issue14647 closed by brett.cannon #14666: test_sendall_interrupted hangs on FreeBSD with a zombi multipr http://bugs.python.org/issue14666 closed by pitrou #14669: test_multiprocessing failure on OS X Tiger http://bugs.python.org/issue14669 closed by neologix #14676: DeprecationWarning missing in default warning filters document http://bugs.python.org/issue14676 closed by sandro.tosi #14685: segfault in --without-threads build http://bugs.python.org/issue14685 closed by skrah #14686: SystemError in unicodeobject.c http://bugs.python.org/issue14686 closed by haypo #14687: Optimize str%tuple for the PEP 393 http://bugs.python.org/issue14687 closed by haypo #14688: Typos in sorting.rst http://bugs.python.org/issue14688 closed by rhettinger #14691: a code example not highlighted in http://docs.python.org/dev/l http://bugs.python.org/issue14691 closed by sandro.tosi #14694: Option to show leading zeros for bin/hex/oct http://bugs.python.org/issue14694 closed by mark.dickinson #14696: parser module doesn't support nonlocal statement http://bugs.python.org/issue14696 closed by mark.dickinson #14699: Calling a classmethod_descriptor directly raises a TypeError f http://bugs.python.org/issue14699 closed by python-dev #14704: NameError Issue in Multiprocessing http://bugs.python.org/issue14704 closed by mark.dickinson #14706: Inconsistent os.access os.X_OK on Solaris and AIX when running http://bugs.python.org/issue14706 closed by pitrou #14707: extend() puzzled me. http://bugs.python.org/issue14707 closed by loewis #14708: distutils's checking for MSVC compiler http://bugs.python.org/issue14708 closed by loewis #14717: In generator's .close() docstring there is one argument http://bugs.python.org/issue14717 closed by python-dev #14718: In the generator's try/finally statement a runtime error occur http://bugs.python.org/issue14718 closed by benjamin.peterson #14719: Lists: [[0]*N]*N != [[0 for _ in range(N)] for __ in range(N)] http://bugs.python.org/issue14719 closed by loewis #1303434: Please include pdb with windows distribution http://bugs.python.org/issue1303434 closed by loewis From martin at v.loewis.de Fri May 4 19:03:44 2012 From: martin at v.loewis.de (martin at v.loewis.de) Date: Fri, 04 May 2012 19:03:44 +0200 Subject: [Python-Dev] Does trunk still support any compilers that *don't* allow declaring variables after code? In-Reply-To: References: <4FA0F3B4.5070707@hastings.org> Message-ID: <20120504190344.Horde.CQYDWcL8999PpAvwYjF1O-A@webmail.df.eu> > I don't have the time to figure it out right now, but I'll look more > into it later. I recently did an analysis here: http://mail.python.org/pipermail/python-dev/2012-January/115375.html The motivation for C++ compilation is gone meanwhile, as VS now supports C in WinRT apps quite well. However, the conclusions still stand: dealing with static type objects will be tricky. Of course, I would also like to eliminate static type objects as much as possible. This then leaves the issue with the casts, which might be considered clutter. Regards, Martin From ericsnowcurrently at gmail.com Fri May 4 19:28:51 2012 From: ericsnowcurrently at gmail.com (Eric Snow) Date: Fri, 4 May 2012 11:28:51 -0600 Subject: [Python-Dev] Summary of Python tracker Issues In-Reply-To: <20120504160717.10B9F1C880@psf.upfronthosting.co.za> References: <20120504160717.10B9F1C880@psf.upfronthosting.co.za> Message-ID: On Fri, May 4, 2012 at 10:07 AM, Python tracker wrote: > > ACTIVITY SUMMARY (2012-04-27 - 2012-05-04) > Python tracker at http://bugs.python.org/ > > To view or respond to any of the issues listed below, click on the issue. > Do NOT respond to this message. > > Issues counts and deltas: > ?open ? ?3399 ( -6) > ?closed 23101 (+45) > ?total ?26500 (+39) Negative delta for open, FTW! From edcjones at comcast.net Fri May 4 20:07:28 2012 From: edcjones at comcast.net (Edward C. Jones) Date: Fri, 04 May 2012 14:07:28 -0400 Subject: [Python-Dev] Debian wheezy, amd64: make not finding files for bz2 and other packages Message-ID: <4FA41AE0.4020906@comcast.net> I use up-to-date Debian testing (wheezy), amd64 architecture. I have made a "clone" of the developmental version of Python 3.3. "make -s -j3" prints: ==== ... Python build finished, but the necessary bits to build these modules were not found: _bz2 _curses _curses_panel _dbm _gdbm _lzma _sqlite3 _ssl readline zlib To find the necessary bits, look in setup.py in detect_modules() for the module's name. Failed to build these modules: _crypt nis [101752 refs] ==== I looked into bz2. My system already contained the Debian packages libbz2-dev, libbz2-1.0, and bzip2. From the Debian website, I got the list of all the files in these three packages: ==== Filelist of package libbz2-dev in wheezy of architecture amd64 /usr/include/bzlib.h /usr/lib/x86_64-linux-gnu/libbz2.a /usr/lib/x86_64-linux-gnu/libbz2.so /usr/share/doc/libbz2-dev ==== Filelist of package libbz2-1.0 in wheezy of architecture amd64 /lib/x86_64-linux-gnu/libbz2.so.1 /lib/x86_64-linux-gnu/libbz2.so.1.0 /lib/x86_64-linux-gnu/libbz2.so.1.0.4 /usr/share/doc/libbz2-1.0/changelog.Debian.gz /usr/share/doc/libbz2-1.0/changelog.gz /usr/share/doc/libbz2-1.0/copyright ==== Filelist of package bzip2 in wheezy of architecture amd64 /bin/bunzip2 /bin/bzcat /bin/bzcmp /bin/bzdiff /bin/bzegrep /bin/bzexe /bin/bzfgrep /bin/bzgrep /bin/bzip2 /bin/bzip2recover /bin/bzless /bin/bzmore /usr/share/doc/bzip2/changelog.Debian.gz /usr/share/doc/bzip2/changelog.gz /usr/share/doc/bzip2/copyright /usr/share/man/man1/bunzip2.1.gz /usr/share/man/man1/bzcat.1.gz /usr/share/man/man1/bzcmp.1.gz /usr/share/man/man1/bzdiff.1.gz /usr/share/man/man1/bzegrep.1.gz /usr/share/man/man1/bzexe.1.gz /usr/share/man/man1/bzfgrep.1.gz /usr/share/man/man1/bzgrep.1.gz /usr/share/man/man1/bzip2.1.gz /usr/share/man/man1/bzip2recover.1.gz /usr/share/man/man1/bzless.1.gz /usr/share/man/man1/bzmore.1.gz ==== What is the problem? Does wheezy amd64 put files in unusual places? From solipsis at pitrou.net Fri May 4 20:22:22 2012 From: solipsis at pitrou.net (Antoine Pitrou) Date: Fri, 4 May 2012 20:22:22 +0200 Subject: [Python-Dev] Another buildslave - Ubuntu again In-Reply-To: References: <4FA0CC4F.4070607@v.loewis.de> <20120502165425.06898f77@pitrou.net> <20120502184617.33243626@pitrou.net> Message-ID: <20120504202222.31124de0@pitrou.net> On Fri, 4 May 2012 13:21:17 +0800 Senthil Kumaran wrote: > On Thu, May 3, 2012 at 12:46 AM, Antoine Pitrou wrote: > > Daily code coverage builds would be nice, but that's probably beyond > > what the current infrastructure can offer. It would be nice if someone > > wants to investigate that. > > Code coverage buildbots would indeed be good. I could give a try on > this. What kind of infra changes would be required? I presume, it is > the server side that you are referring to. It doesn't *need* to be a buildbot. Just have a cron script somewhere to run coverage on the test suite every day and published the results somewhere in a readable format. Regards Antoine. From phd at phdru.name Fri May 4 20:39:13 2012 From: phd at phdru.name (Oleg Broytman) Date: Fri, 4 May 2012 22:39:13 +0400 Subject: [Python-Dev] Debian wheezy, amd64: make not finding files for bz2 and other packages In-Reply-To: <4FA41AE0.4020906@comcast.net> References: <4FA41AE0.4020906@comcast.net> Message-ID: <20120504183913.GA22355@iskra.aviel.ru> On Fri, May 04, 2012 at 02:07:28PM -0400, "Edward C. Jones" wrote: > From the Debian website, I got the list of all the > files in these three packages: Don't know about amd64 arch, sorry. You can list content of a package from command line: dpkg [-L|--listfiles] libbz2-dev Oleg. -- Oleg Broytman http://phdru.name/ phd at phdru.name Programmers don't die, they just GOSUB without RETURN. From carl at oddbird.net Fri May 4 22:49:03 2012 From: carl at oddbird.net (Carl Meyer) Date: Fri, 04 May 2012 14:49:03 -0600 Subject: [Python-Dev] PEP 405 (pyvenv) and system Python upgrades Message-ID: <4FA440BF.50806@oddbird.net> Hi all, The recent virtualenv breakage in Python 2.6.8 and 2.7.3 reveals an issue that deserves to be explicitly addressed in PEP 405: what happens when the system Python underlying a venv gets an in-place bugfix upgrade. If the bugfix includes a simultaneous change to the interpreter and standard library such that the older interpreter will not work with the newer standard library, all venvs created from that Python installation will be broken until the new interpreter is copied into them. Choices for how to address this: 1) Document it and provide a tool for easily upgrading a venv in this situation. This may be adequate. In practice the situation is quite rare: 2.6.8/2.7.3 is the only actual example in the history of virtualenv that I'm aware of. The disadvantage is that if the problem does occur, the error will probably be quite confusing and seemingly unrelated to pyvenv. 2) In addition to the above, introduce a versioning marker in the standard library (is there one already?) and have some code somewhere (insert hand-waving here) check sys.version_info against the stdlib version, and fail fast with an unambiguous error if there is a mismatch. This makes the failure more explicit, but at the significant cost of making it more common: at every mismatch, not just in the apparently-rare case of a breaking change. 3) Symlink the interpreter rather than copying. I include this here for the sake of completeness, but it's already been rejected due to significant problems on older Windows' and OS X. 4) Adopt a policy of interpreter/stdlib cross-compatibility within a given X.Y version of Python. I don't expect this to be a popular choice, given the additional testing requirements it imposes, but it would certainly be the nicest option from the PEP 405 standpoint (and may also be complementary to proposals for splitting out the stdlib). In the 2.6.8/2.7.3 case, this would have been technically trivial to do, but the choice was made not to do it in order to force virtualenv users to adopt the security-fixed Python interpreter. Thoughts? Carl From rosuav at gmail.com Sat May 5 03:20:39 2012 From: rosuav at gmail.com (Chris Angelico) Date: Sat, 5 May 2012 11:20:39 +1000 Subject: [Python-Dev] Debian wheezy, amd64: make not finding files for bz2 and other packages In-Reply-To: <4FA41AE0.4020906@comcast.net> References: <4FA41AE0.4020906@comcast.net> Message-ID: On Sat, May 5, 2012 at 4:07 AM, Edward C. Jones wrote: > /usr/include/bzlib.h > /usr/lib/x86_64-linux-gnu/libbz2.a > /usr/lib/x86_64-linux-gnu/libbz2.so > /lib/x86_64-linux-gnu/libbz2.so.1 > /lib/x86_64-linux-gnu/libbz2.so.1.0 > /lib/x86_64-linux-gnu/libbz2.so.1.0.4 I have an Ubuntu Maverick 64-bit system, not identical but hopefully similar to your Debian. I have /usr/include/bzlib.h, but the others are all one directory level higher - /usr/lib/libbz2.a, /lib/libbz2.so.1.0.4, etc. Does your /etc/ld.so.conf.d mention the appropriate directories? ChrisA From tjreedy at udel.edu Sat May 5 05:48:48 2012 From: tjreedy at udel.edu (Terry Reedy) Date: Fri, 04 May 2012 23:48:48 -0400 Subject: [Python-Dev] PEP 405 (pyvenv) and system Python upgrades In-Reply-To: <4FA440BF.50806@oddbird.net> References: <4FA440BF.50806@oddbird.net> Message-ID: On 5/4/2012 4:49 PM, Carl Meyer wrote: > Hi all, > > The recent virtualenv breakage in Python 2.6.8 and 2.7.3 reveals an > issue that deserves to be explicitly addressed in PEP 405: what happens > when the system Python underlying a venv gets an in-place bugfix > upgrade. If the bugfix includes a simultaneous change to the interpreter > and standard library such that the older interpreter will not work with > the newer standard library, all venvs created from that Python > installation will be broken until the new interpreter is copied into them. CPython is developed, tested, packaged, distributed, and installed as one unit. It is intended to be run as one package. If something caches a copy of python.exe, it seems to me that it should check and update as needed. Could venv check the file date of the current python.exe versus that of the one cached, much like is done with .pyc compiled code caches? > Choices for how to address this: > 1) Document it and provide a tool for easily upgrading a venv in this > situation. Right. > 4) Adopt a policy of interpreter/stdlib cross-compatibility within a > given X.Y version of Python. I don't expect this to be a popular choice, What a droll sense of humor ;=). -- Terry Jan Reedy From v+python at g.nevcal.com Sat May 5 05:58:25 2012 From: v+python at g.nevcal.com (Glenn Linderman) Date: Fri, 04 May 2012 20:58:25 -0700 Subject: [Python-Dev] PEP 405 (pyvenv) and system Python upgrades In-Reply-To: References: <4FA440BF.50806@oddbird.net> Message-ID: <4FA4A561.2070108@g.nevcal.com> On 5/4/2012 8:48 PM, Terry Reedy wrote: > CPython is developed, tested, packaged, distributed, and installed as > one unit. It is intended to be run as one package. If something caches > a copy of python.exe, it seems to me that it should check and update > as needed. Could venv check the file date of the current python.exe > versus that of the one cached, much like is done with .pyc compiled > code caches? I almost wrote this response (using different words, but the same idea) but concluded that: 1) Python wouldn't run far without its standard library, so a venv check would have to be very early, and likely coded in C, and therefore probably has to be part of Python.exe 2) If it was not part of Python.exe, it would have to work similarly to the launcher, and there would be yet one more process sitting around waiting for Python to exit (on Windows, where there is no exec). So I concluded that probably Python.exe needs to make the check, but if it is aware it existing in venv, it might be able to put out a better message than "just" the mismatch between exe and lib; or at least the message should mention the possibility of an old venv cache. Glenn -------------- next part -------------- An HTML attachment was scrubbed... URL: From tjreedy at udel.edu Sat May 5 06:39:03 2012 From: tjreedy at udel.edu (Terry Reedy) Date: Sat, 05 May 2012 00:39:03 -0400 Subject: [Python-Dev] PEP 405 (pyvenv) and system Python upgrades In-Reply-To: <4FA4A561.2070108@g.nevcal.com> References: <4FA440BF.50806@oddbird.net> <4FA4A561.2070108@g.nevcal.com> Message-ID: On 5/4/2012 11:58 PM, Glenn Linderman wrote: > On 5/4/2012 8:48 PM, Terry Reedy wrote: >> CPython is developed, tested, packaged, distributed, and installed as >> one unit. It is intended to be run as one package. If something caches >> a copy of python.exe, it seems to me that it should check and update >> as needed. Could venv check the file date of the current python.exe >> versus that of the one cached, much like is done with .pyc compiled >> code caches? > > I almost wrote this response (using different words, but the same idea) > but concluded that: > > 1) Python wouldn't run far without its standard library, so a venv check > would have to be very early, and likely coded in C, and therefore > probably has to be part of Python.exe > > 2) If it was not part of Python.exe, it would have to work similarly to > the launcher, and there would be yet one more process sitting around > waiting for Python to exit (on Windows, where there is no exec). > > So I concluded that probably Python.exe needs to make the check, but if > it is aware it existing in venv, it might be able to put out a better > message than "just" the mismatch between exe and lib; or at least the > message should mention the possibility of an old venv cache. The gist of my response is that the venv 'tail' should way the python 'dog' as little as possbile. I also wonder how often such incompatibility occurs. Optionally changing the de facto semantics of CPython's built-in dict in bug-fix releases was a rather unique event. I am sure we would all be happy to never have to do such again. -- Terry Jan Reedy From v+python at g.nevcal.com Sat May 5 07:58:48 2012 From: v+python at g.nevcal.com (Glenn Linderman) Date: Fri, 04 May 2012 22:58:48 -0700 Subject: [Python-Dev] PEP 405 (pyvenv) and system Python upgrades In-Reply-To: References: <4FA440BF.50806@oddbird.net> <4FA4A561.2070108@g.nevcal.com> Message-ID: <4FA4C198.3020900@g.nevcal.com> On 5/4/2012 9:39 PM, Terry Reedy wrote: > The gist of my response is that the venv 'tail' should way the python > 'dog' as little as possbile. Yes, that was exactly my thought too. But I'm not sure the technology permits, with Windows not having exec. On the other hand, one might speculate about how venv, instead of copying Python.exe, might instead install the launcher in the place where python.exe is currently copied. The launcher does the "next best thing to exec". Plus it would save a wee bit of space, being smaller than python.exe. On platforms that have symlinks, they could be used instead. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Sat May 5 08:41:47 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sat, 5 May 2012 16:41:47 +1000 Subject: [Python-Dev] PEP 405 (pyvenv) and system Python upgrades In-Reply-To: <4FA440BF.50806@oddbird.net> References: <4FA440BF.50806@oddbird.net> Message-ID: On Sat, May 5, 2012 at 6:49 AM, Carl Meyer wrote: > 1) Document it and provide a tool for easily upgrading a venv in this > situation. This may be adequate. In practice the situation is quite rare: > 2.6.8/2.7.3 is the only actual example in the history of virtualenv that I'm > aware of. The disadvantage is that if the problem does occur, the error will > probably be quite confusing and seemingly unrelated to pyvenv. I think this is the way to go, for basically the same reasons that we did it this way this time: there's no good reason to pay an ongoing cost to further mitigate the risks associated with an already incredibly rare event. It would become part of the standard venv debugging toolkit: Q X.1: Does the problem only occur inside a particular virtual environment? Q X.2: If yes, did you recently upgrade the system Python to a new point release? Q X.3: If yes, did you run ? Q X.4: If no, do so and see if the problem goes away. Even if it still doesn't work, at least you've eliminated this particular error as a possible cause. Personally, I expect that "always update your virtual environment binaries after updating the system Python to a new point release" will itself become a recommended practice when using virtual environments. Cheers, Nick. -- Nick Coghlan?? |?? ncoghlan at gmail.com?? |?? Brisbane, Australia From vinay_sajip at yahoo.co.uk Sat May 5 10:38:28 2012 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Sat, 5 May 2012 08:38:28 +0000 (UTC) Subject: [Python-Dev] PEP 405 (pyvenv) and system Python upgrades References: <4FA440BF.50806@oddbird.net> Message-ID: Nick Coghlan gmail.com> writes: > Personally, I expect that "always update your virtual environment > binaries after updating the system Python to a new point release" will > itself become a recommended practice when using virtual environments. Of course, the venv update tool will need to only update environments which were set up with the particular version of Python which was updated. ISTM pyvenv.cfg will need to have a version=X.Y.Z line in it, which is added during venv creation. That information will be used by the tool to only update specific environments. Regards, Vinay Sajip From rosuav at gmail.com Sat May 5 10:52:33 2012 From: rosuav at gmail.com (Chris Angelico) Date: Sat, 5 May 2012 18:52:33 +1000 Subject: [Python-Dev] PEP 405 (pyvenv) and system Python upgrades In-Reply-To: <4FA440BF.50806@oddbird.net> References: <4FA440BF.50806@oddbird.net> Message-ID: On Sat, May 5, 2012 at 6:49 AM, Carl Meyer wrote: > 2) In addition to the above, introduce a versioning marker in the standard > library (is there one already?) and have some code somewhere (insert > hand-waving here) check sys.version_info against the stdlib version, and > fail fast with an unambiguous error if there is a mismatch. This makes the > failure more explicit, but at the significant cost of making it more common: > at every mismatch, not just in the apparently-rare case of a breaking > change. Variant: Could the versioning marker give a minimum and/or maximum? It'd then only cause the explicit failure in the actual case of a breaking change, and the rest of the time it could happily use any X.Y.* release. ChrisA From solipsis at pitrou.net Sat May 5 12:36:55 2012 From: solipsis at pitrou.net (Antoine Pitrou) Date: Sat, 5 May 2012 12:36:55 +0200 Subject: [Python-Dev] Debian wheezy, amd64: make not finding files for bz2 and other packages References: <4FA41AE0.4020906@comcast.net> Message-ID: <20120505123655.1e473d0e@pitrou.net> Hello, On Fri, 04 May 2012 14:07:28 -0400 "Edward C. Jones" wrote: > Filelist of package libbz2-dev in wheezy of architecture amd64 > > /usr/include/bzlib.h > /usr/lib/x86_64-linux-gnu/libbz2.a > /usr/lib/x86_64-linux-gnu/libbz2.so > /usr/share/doc/libbz2-dev setup.py probably doesn't search in the right paths for libbz2.so. I suggest you open a bug at http://bugs.python.org Thanks for your report, Antoine. From solipsis at pitrou.net Sat May 5 12:40:05 2012 From: solipsis at pitrou.net (Antoine Pitrou) Date: Sat, 5 May 2012 12:40:05 +0200 Subject: [Python-Dev] PEP 405 (pyvenv) and system Python upgrades References: <4FA440BF.50806@oddbird.net> Message-ID: <20120505124005.4401ef03@pitrou.net> Hi, On Fri, 04 May 2012 14:49:03 -0600 Carl Meyer wrote: > > 3) Symlink the interpreter rather than copying. I include this here for > the sake of completeness, but it's already been rejected due to > significant problems on older Windows' and OS X. Perhaps symlinking could be used at least on symlinks-friendly OSes? I expect older Windows to disappear one day :-) So the only left outlier would be OS X. Regards Antoine. From xdegaye at gmail.com Sat May 5 12:51:42 2012 From: xdegaye at gmail.com (Xavier de Gaye) Date: Sat, 5 May 2012 12:51:42 +0200 Subject: [Python-Dev] The step command of pdb is broken In-Reply-To: References: Message-ID: On Mon, Apr 30, 2012 at 12:31 PM, Xavier de Gaye wrote: > Issue http://bugs.python.org/issue13183 raises the point that the step > command of pdb is broken. This issue is 6 months old. A patch and test > case have been proposed. Other pdb commands are also broken for the same reason (no trace function setup in the targeted caller frame). A new http://bugs.python.org/issue14728 has been submitted with a proposed patch for these commands and the corresponding test cases. The patch removes a while loop from the fast path, and that should also provide an improvement of the performance of Pdb. Xavier From ncoghlan at gmail.com Sat May 5 15:07:13 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sat, 5 May 2012 23:07:13 +1000 Subject: [Python-Dev] PEP 1 updated to reflect current practices Message-ID: I just pushed an update to PEP 1 to give additional guidance to core developers that are directly updating a PEP in Mercurial, to account for the automatic generation of PEP 0 and to mention the "PEP czar" role. Updated PEP: http://www.python.org/dev/peps/pep-0001/ Changes: http://hg.python.org/peps/rev/bdbbd3ce97d9 Any additional feedback here (I'll leave the issue open for a while): http://bugs.python.org/issue14703 (although remember that the bar for this PEP is "useful and fairly accurate" rather than "perfect") Cheers, Nick. -- Nick Coghlan?? |?? ncoghlan at gmail.com?? |?? Brisbane, Australia From lists at cheimes.de Sat May 5 15:31:24 2012 From: lists at cheimes.de (Christian Heimes) Date: Sat, 05 May 2012 15:31:24 +0200 Subject: [Python-Dev] Debian wheezy, amd64: make not finding files for bz2 and other packages In-Reply-To: <20120505123655.1e473d0e@pitrou.net> References: <4FA41AE0.4020906@comcast.net> <20120505123655.1e473d0e@pitrou.net> Message-ID: Am 05.05.2012 12:36, schrieb Antoine Pitrou: > > Hello, > > On Fri, 04 May 2012 14:07:28 -0400 > "Edward C. Jones" wrote: >> Filelist of package libbz2-dev in wheezy of architecture amd64 >> >> /usr/include/bzlib.h >> /usr/lib/x86_64-linux-gnu/libbz2.a >> /usr/lib/x86_64-linux-gnu/libbz2.so >> /usr/share/doc/libbz2-dev > > setup.py probably doesn't search in the right paths for libbz2.so. I > suggest you open a bug at http://bugs.python.org The issue might be caused by Debian's new multiarch libraries. In recent versions of Debian (and Ubuntu), 64bit and 32bit libraries can coexist on the same system. What's the output of "dpkg-architecture -qDEB_HOST_MULTIARCH" on your system? It should print out "x86_64-linux-gnu". setup.py supports multiarch for some time, see PyBuildExt.add_multiarch_paths(). Christian From solipsis at pitrou.net Sat May 5 15:39:10 2012 From: solipsis at pitrou.net (Antoine Pitrou) Date: Sat, 5 May 2012 15:39:10 +0200 Subject: [Python-Dev] Debian wheezy, amd64: make not finding files for bz2 and other packages References: <4FA41AE0.4020906@comcast.net> <20120505123655.1e473d0e@pitrou.net> Message-ID: <20120505153910.3fcfe366@pitrou.net> On Sat, 05 May 2012 15:31:24 +0200 Christian Heimes wrote: > Am 05.05.2012 12:36, schrieb Antoine Pitrou: > > > > Hello, > > > > On Fri, 04 May 2012 14:07:28 -0400 > > "Edward C. Jones" wrote: > >> Filelist of package libbz2-dev in wheezy of architecture amd64 > >> > >> /usr/include/bzlib.h > >> /usr/lib/x86_64-linux-gnu/libbz2.a > >> /usr/lib/x86_64-linux-gnu/libbz2.so > >> /usr/share/doc/libbz2-dev > > > > setup.py probably doesn't search in the right paths for libbz2.so. I > > suggest you open a bug at http://bugs.python.org > > The issue might be caused by Debian's new multiarch libraries. In recent > versions of Debian (and Ubuntu), 64bit and 32bit libraries can coexist > on the same system. It probably is, but I thought Barry had tackled that in setup.py :-) Regards Antoine. From tshepang at gmail.com Sat May 5 15:43:34 2012 From: tshepang at gmail.com (Tshepang Lekhonkhobe) Date: Sat, 5 May 2012 15:43:34 +0200 Subject: [Python-Dev] Debian wheezy, amd64: make not finding files for bz2 and other packages In-Reply-To: <4FA41AE0.4020906@comcast.net> References: <4FA41AE0.4020906@comcast.net> Message-ID: This is likely because you don't have dpkg-dev installed. From tshepang at gmail.com Sat May 5 15:48:30 2012 From: tshepang at gmail.com (Tshepang Lekhonkhobe) Date: Sat, 5 May 2012 15:48:30 +0200 Subject: [Python-Dev] Debian wheezy, amd64: make not finding files for bz2 and other packages In-Reply-To: References: <4FA41AE0.4020906@comcast.net> Message-ID: On Sat, May 5, 2012 at 3:43 PM, Tshepang Lekhonkhobe wrote: > This is likely ?because you don't have dpkg-dev installed. http://bugs.python.org/issue13956 From lists at cheimes.de Sat May 5 16:04:40 2012 From: lists at cheimes.de (Christian Heimes) Date: Sat, 05 May 2012 16:04:40 +0200 Subject: [Python-Dev] Debian wheezy, amd64: make not finding files for bz2 and other packages In-Reply-To: <20120505153910.3fcfe366@pitrou.net> References: <4FA41AE0.4020906@comcast.net> <20120505123655.1e473d0e@pitrou.net> <20120505153910.3fcfe366@pitrou.net> Message-ID: Am 05.05.2012 15:39, schrieb Antoine Pitrou: > On Sat, 05 May 2012 15:31:24 +0200 > Christian Heimes wrote: >> Am 05.05.2012 12:36, schrieb Antoine Pitrou: >>> >>> Hello, >>> >>> On Fri, 04 May 2012 14:07:28 -0400 >>> "Edward C. Jones" wrote: >>>> Filelist of package libbz2-dev in wheezy of architecture amd64 >>>> >>>> /usr/include/bzlib.h >>>> /usr/lib/x86_64-linux-gnu/libbz2.a >>>> /usr/lib/x86_64-linux-gnu/libbz2.so >>>> /usr/share/doc/libbz2-dev >>> >>> setup.py probably doesn't search in the right paths for libbz2.so. I >>> suggest you open a bug at http://bugs.python.org >> >> The issue might be caused by Debian's new multiarch libraries. In recent >> versions of Debian (and Ubuntu), 64bit and 32bit libraries can coexist >> on the same system. > > It probably is, but I thought Barry had tackled that in setup.py :-) The fix needs the dpkg-architecture program. As Tshepang pointed out it may not be available on Edward's box. I always install build-essential on all development boxes as it includes GCC, make and dpkg-dev. Christian From solipsis at pitrou.net Sat May 5 16:13:11 2012 From: solipsis at pitrou.net (Antoine Pitrou) Date: Sat, 5 May 2012 16:13:11 +0200 Subject: [Python-Dev] Debian wheezy, amd64: make not finding files for bz2 and other packages References: <4FA41AE0.4020906@comcast.net> <20120505123655.1e473d0e@pitrou.net> <20120505153910.3fcfe366@pitrou.net> Message-ID: <20120505161311.207c891e@pitrou.net> On Sat, 05 May 2012 16:04:40 +0200 Christian Heimes wrote: > Am 05.05.2012 15:39, schrieb Antoine Pitrou: > > On Sat, 05 May 2012 15:31:24 +0200 > > Christian Heimes wrote: > >> Am 05.05.2012 12:36, schrieb Antoine Pitrou: > >>> > >>> Hello, > >>> > >>> On Fri, 04 May 2012 14:07:28 -0400 > >>> "Edward C. Jones" wrote: > >>>> Filelist of package libbz2-dev in wheezy of architecture amd64 > >>>> > >>>> /usr/include/bzlib.h > >>>> /usr/lib/x86_64-linux-gnu/libbz2.a > >>>> /usr/lib/x86_64-linux-gnu/libbz2.so > >>>> /usr/share/doc/libbz2-dev > >>> > >>> setup.py probably doesn't search in the right paths for libbz2.so. I > >>> suggest you open a bug at http://bugs.python.org > >> > >> The issue might be caused by Debian's new multiarch libraries. In recent > >> versions of Debian (and Ubuntu), 64bit and 32bit libraries can coexist > >> on the same system. > > > > It probably is, but I thought Barry had tackled that in setup.py :-) > > The fix needs the dpkg-architecture program. As Tshepang pointed out it > may not be available on Edward's box. I always install build-essential > on all development boxes as it includes GCC, make and dpkg-dev. Perhaps setup.py should detect that? It shouldn't be too hard to parse /etc/debian_version in order to know whether the system is multiarch-enabled. That would avoid confusing build failures. Regards Antoine. From lists at cheimes.de Sat May 5 17:23:24 2012 From: lists at cheimes.de (Christian Heimes) Date: Sat, 05 May 2012 17:23:24 +0200 Subject: [Python-Dev] Debian wheezy, amd64: make not finding files for bz2 and other packages In-Reply-To: <20120505161311.207c891e@pitrou.net> References: <4FA41AE0.4020906@comcast.net> <20120505123655.1e473d0e@pitrou.net> <20120505153910.3fcfe366@pitrou.net> <20120505161311.207c891e@pitrou.net> Message-ID: Am 05.05.2012 16:13, schrieb Antoine Pitrou: > Perhaps setup.py should detect that? It shouldn't be too hard to > parse /etc/debian_version in order to know whether the system is > multiarch-enabled. That would avoid confusing build failures. This sounds like a good idea. dpkg-architecture is available on older version of Debian and Ubuntu but doesn't support DEB_HOST_MULTIARCH (which is fine). We could parse the output of platform.dist() but it's easier to just search for the apt-get command: if not find_executable('apt-get'): # no Debian based distro return if not find_executable('dpkg-architecture'): print "Warning, Debian detected but no dpkg-architecture found. Please run 'sudo apt-get install build-essential'. return Christian From barry at python.org Sat May 5 18:28:15 2012 From: barry at python.org (Barry Warsaw) Date: Sat, 5 May 2012 12:28:15 -0400 Subject: [Python-Dev] Debian wheezy, amd64: make not finding files for bz2 and other packages In-Reply-To: References: <4FA41AE0.4020906@comcast.net> <20120505123655.1e473d0e@pitrou.net> <20120505153910.3fcfe366@pitrou.net> Message-ID: <20120505122815.45aca740@resist.wooz.org> On May 05, 2012, at 04:04 PM, Christian Heimes wrote: >The fix needs the dpkg-architecture program. As Tshepang pointed out it >may not be available on Edward's box. I always install build-essential >on all development boxes as it includes GCC, make and dpkg-dev. That's probably it. Certainly Python 2.7, 3.2, and 3.3 build just fine for me on Debian Wheezy and Ubuntu Precise. One other thing: you might want to `apt-get build-dep python3.2` to get all the build dependencies installed first, even if you're building Python from source. If you're building Python 3.3 from source, you'll also want to install liblzma-dev. Cheers, -Barry From barry at python.org Sat May 5 18:56:54 2012 From: barry at python.org (Barry Warsaw) Date: Sat, 5 May 2012 12:56:54 -0400 Subject: [Python-Dev] [Python-checkins] peps: Update PEP 1 to better reflect current practice In-Reply-To: References: Message-ID: <20120505125654.77d3b9ab@resist.wooz.org> Thanks for doing this update Nick. I have just a few comments. On May 05, 2012, at 02:57 PM, nick.coghlan wrote: >+Developers with commit privileges for the `PEP repository`_ may claim >+PEP numbers directly by creating and committing a new PEP. When doing so, >+the developer must handle the tasks that would normally be taken care of by >+the PEP editors (see `PEP Editor Responsibilities & Workflow`_). While I certainly don't mind (in fact, prefer) those with commit privileges to just go ahead and commit their PEP to the repo, I'd like for there to be *some* communication with the PEP editors first. E.g. sanity checks on the basic format or idea (was this discussed on python-ideas first?), or reservation of PEP numbers. When you do contact the PEP editors, please also specify whether you have commit privileges or not. It's too hard to remember or know who has those rights, and too much hassle to look them up. ;) OTOH, I'm also happy to adopt an EAFP style rather than LBYL, so that the PEP editors can re-assign numbers or whatever after the fact. We've done this in a few cases, and it's never been that much of a problem. Still, core developers needn't block (for too long) on the PEP editors. >+The final authority for PEP approval is the BDFL. However, whenever a new >+PEP is put forward, any core developer that believes they are suitably >+experienced to make the final decision on that PEP may offer to serve as >+the "PEP czar" for that PEP. If their self-nomination is accepted by the >+other core developers and the BDFL, then they will have the authority to >+approve (or reject) that PEP. This process happens most frequently with PEPs >+where the BDFL has granted in principle approval for *something* to be done, >+but there are details that need to be worked out before the PEP can be >+accepted. I'd reword this to something like the following: The final authority for the PEP approval is the BDFL. However, the BDFL may delegate the final approval authority to a "PEP czar" for that PEP. This happens most frequently with PEPs where the BDFL has granted approval in principle for *something* to be done, and in agreement with the general proposals of the PEP, but there are details that need to be worked out before the final PEP can be approved. When an `PEP-Czar` header must be added to the PEP to record this delegation. The format of this header is the same as the `Author` header. This leave out the whole self-nomination text, which I think isn't very relevant to the official addition of the czar role (sadly, no clever bacronym has come to mind, and BDFOP hasn't really taken off ;). >+* Run ``./genpepindex.py`` and ``./pep2html.py `` to ensure they >+ are generated without errors. If either triggers errors, then the web site >+ will not be updated to reflect the PEP changes. Or just run "make" on systems that have that handy convenience. :) Cheers, -Barry (Nick, if you agree with these changes, please just go ahead and make them.) From barry at python.org Sat May 5 18:59:53 2012 From: barry at python.org (Barry Warsaw) Date: Sat, 5 May 2012 12:59:53 -0400 Subject: [Python-Dev] [Python-checkins] peps: Update PEP 1 to better reflect current practice In-Reply-To: <20120505125654.77d3b9ab@resist.wooz.org> References: <20120505125654.77d3b9ab@resist.wooz.org> Message-ID: <20120505125953.1ef90d2f@resist.wooz.org> On May 05, 2012, at 12:56 PM, Barry Warsaw wrote: > before the final PEP can be approved. When an `PEP-Czar` header must be > added to the PEP to record this delegation. The format of this header is > the same as the `Author` header. s/When an/A/ -Barry From edcjones at comcast.net Sat May 5 23:59:51 2012 From: edcjones at comcast.net (Edward C. Jones) Date: Sat, 05 May 2012 17:59:51 -0400 Subject: [Python-Dev] Debian wheezy, amd64: make not finding files for bz2 and other packages Message-ID: <4FA5A2D7.4090306@comcast.net> dpkg-architecture -qDEB_HOST_MULTIARCH gives x86_64-linux-gnu Installing dpkg-dev fixed the problem. Now both 3.3a3 and a developmental "clone" work. There is already a Debian package for 3.3 alpha3. See http://packages.debian.org/source/experimental/python3.3 A large diff for Debian Python is available at this url. The following should be installed before compiling python3. This list may be incomplete. This list may include unnecessary packages. dpkg-dev sharutils libreadline6-dev libreadline5 libncursesw5-dev libncursesw5 zlib1g-dev zlib1g libbz2-dev bzip2 liblzma-dev liblzma5 libgdbm-dev libgdbm3 libdb5.3-dev libdb5.3 tk8.5-dev tk8.5 blt-dev blt libssl-dev libssl1.0.0 libexpat1-dev libexpat1 libbluetooth-dev libbluetooth3 locales libsqlite3-dev libsqlite3 libffi-dev libffi5 libgpm2 libgpm-dev libtinfo-dev libtinfo5 mime-support netbase gdb xvfb xauth python-sphinx (Implemented in python 2) From benjamin at python.org Sun May 6 02:04:29 2012 From: benjamin at python.org (Benjamin Peterson) Date: Sat, 5 May 2012 20:04:29 -0400 Subject: [Python-Dev] [Python-checkins] cpython: Issue #14705: Add 'p' format character to PyArg_ParseTuple* for bool support. In-Reply-To: References: Message-ID: 2012/5/5 larry.hastings : > http://hg.python.org/cpython/rev/bc6d28e726d8 > changeset: ? 76776:bc6d28e726d8 > user: ? ? ? ?Larry Hastings > date: ? ? ? ?Sat May 05 16:54:29 2012 -0700 > summary: > ?Issue #14705: Add 'p' format character to PyArg_ParseTuple* for bool support. > > files: > ?Doc/c-api/arg.rst ? ? ? ? | ? 9 +++++++ > ?Lib/test/test_getargs2.py | ?31 +++++++++++++++++++++++++++ > ?Modules/_testcapimodule.c | ?10 ++++++++ > ?Python/getargs.c ? ? ? ? ?| ?12 ++++++++++ > ?4 files changed, 62 insertions(+), 0 deletions(-) You forgot Misc/NEWS. -- Regards, Benjamin From benjamin at python.org Sun May 6 03:11:07 2012 From: benjamin at python.org (Benjamin Peterson) Date: Sat, 5 May 2012 21:11:07 -0400 Subject: [Python-Dev] [Python-checkins] cpython: Update Misc/NEWS for issues #14127 and #14705. (And, technically, #10148.) In-Reply-To: References: Message-ID: 2012/5/5 larry.hastings : > http://hg.python.org/cpython/rev/709850f1ec67 > changeset: ? 76777:709850f1ec67 > user: ? ? ? ?Larry Hastings > date: ? ? ? ?Sat May 05 17:39:09 2012 -0700 > summary: > ?Update Misc/NEWS for issues #14127 and #14705. ?(And, technically, #10148.) > > files: > ?Modules/posixmodule.c | ?372 +++++++++++++++++++++++++++-- Um? -- Regards, Benjamin From stefan at bytereef.org Sun May 6 03:17:17 2012 From: stefan at bytereef.org (Stefan Krah) Date: Sun, 6 May 2012 03:17:17 +0200 Subject: [Python-Dev] [Python-checkins] cpython: Update Misc/NEWS for issues #14127 and #14705. (And, technically, #10148.) In-Reply-To: References: Message-ID: <20120506011717.GA19943@sleipnir.bytereef.org> larry.hastings wrote: > Update Misc/NEWS for issues #14127 and #14705. (And, technically, #10148.) > > + * De-vararg'd PyArg_ParseTupleAndKeywords() This looks like an accidental commit. Is there an issue number for the varargs changes (just out of interest)? Stefan Krah From larry at hastings.org Sun May 6 03:42:47 2012 From: larry at hastings.org (Larry Hastings) Date: Sat, 05 May 2012 18:42:47 -0700 Subject: [Python-Dev] [Python-checkins] cpython: Update Misc/NEWS for issues #14127 and #14705. (And, technically, #10148.) In-Reply-To: <20120506011717.GA19943@sleipnir.bytereef.org> References: <20120506011717.GA19943@sleipnir.bytereef.org> Message-ID: <4FA5D717.6020207@hastings.org> On 05/05/2012 06:17 PM, Stefan Krah wrote: > larry.hastings wrote: >> Update Misc/NEWS for issues #14127 and #14705. (And, technically, #10148.) >> >> + * De-vararg'd PyArg_ParseTupleAndKeywords() > This looks like an accidental commit. Is there an issue number for the > varargs changes (just out of interest)? This was indeed an accidental commit, and OMG I'm so sorry about it. Thanks to Benjamin for swooping in and fixing it--I was in full-on panic mode for a few minutes there. I'll commit the proper MISC/News update when I calm down. The varargs thing is part of a proposed patch I'm working up for issue #14626. In case you look it over, keep in mind it was a bit hacked up just then so I could test the Windows path on my Linux box. //arry/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Sun May 6 07:08:52 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sun, 6 May 2012 15:08:52 +1000 Subject: [Python-Dev] [Python-checkins] peps: Update PEP 1 to better reflect current practice In-Reply-To: <20120505125654.77d3b9ab@resist.wooz.org> References: <20120505125654.77d3b9ab@resist.wooz.org> Message-ID: On Sun, May 6, 2012 at 2:56 AM, Barry Warsaw wrote: > Thanks for doing this update Nick. ?I have just a few comments. > > On May 05, 2012, at 02:57 PM, nick.coghlan wrote: > >>+Developers with commit privileges for the `PEP repository`_ may claim >>+PEP numbers directly by creating and committing a new PEP. When doing so, >>+the developer must handle the tasks that would normally be taken care of by >>+the PEP editors (see `PEP Editor Responsibilities & Workflow`_). > > While I certainly don't mind (in fact, prefer) those with commit privileges to > just go ahead and commit their PEP to the repo, I'd like for there to be > *some* communication with the PEP editors first. ?E.g. sanity checks on the > basic format or idea (was this discussed on python-ideas first?), or > reservation of PEP numbers. > > When you do contact the PEP editors, please also specify whether you have > commit privileges or not. ?It's too hard to remember or know who has those > rights, and too much hassle to look them up. ;) Good point, especially for committers that haven't done much PEP editing in the past. > OTOH, I'm also happy to adopt an EAFP style rather than LBYL, so that the PEP > editors can re-assign numbers or whatever after the fact. ?We've done this in > a few cases, and it's never been that much of a problem. > > Still, core developers needn't block (for too long) on the PEP editors. I'll see if I can figure out something - I may just put in text like "if you're at all unsure about what needs to be done, email the PEP editors anyway". >>+The final authority for PEP approval is the BDFL. However, whenever a new >>+PEP is put forward, any core developer that believes they are suitably >>+experienced to make the final decision on that PEP may offer to serve as >>+the "PEP czar" for that PEP. If their self-nomination is accepted by the >>+other core developers and the BDFL, then they will have the authority to >>+approve (or reject) that PEP. This process happens most frequently with PEPs >>+where the BDFL has granted in principle approval for *something* to be done, >>+but there are details that need to be worked out before the PEP can be >>+accepted. > > I'd reword this to something like the following: > > ? ?The final authority for the PEP approval is the BDFL. ?However, the BDFL > ? ?may delegate the final approval authority to a "PEP czar" for that PEP. > ? ?This happens most frequently with PEPs where the BDFL has granted approval > ? ?in principle for *something* to be done, and in agreement with the general > ? ?proposals of the PEP, but there are details that need to be worked out > ? ?before the final PEP can be approved. ?When an `PEP-Czar` header must be > ? ?added to the PEP to record this delegation. ?The format of this header is > ? ?the same as the `Author` header. > > This leave out the whole self-nomination text, which I think isn't very > relevant to the official addition of the czar role (sadly, no clever bacronym > has come to mind, and BDFOP hasn't really taken off ;). Including the self-nomination wording was deliberate - it summarises the gist of an off-list conversation between Victor, Guido and myself a while back. At the time, I thought the delegation had to come directly from Guido, but it turned out Guido was happy for people to volunteer for the role (or for PEP authors to suggest someone, which pretty much amounts to the same thing), with the acceptance of nominations covered by the same "rough consensus" rules as checkins (i.e. silence is taken as assent). That way Guido only has to get involved if he is personally interested, or none of the rest of us feel entitled to make the call. Since the way the czar gets appointed is important, I figured it was worth including. (The conversation was a while ago though, so hopefully Guido will chime in if I'm mischaracterising what he wrote at the time) Agreed we should have a new header field to record the BDFL delegate, but I think I'll go with BDFL-Delegate rather than PEP-Czar. Cheers, Nick. -- Nick Coghlan?? |?? ncoghlan at gmail.com?? |?? Brisbane, Australia From ncoghlan at gmail.com Sun May 6 08:45:32 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sun, 6 May 2012 16:45:32 +1000 Subject: [Python-Dev] Recording BDFL delegates for PEPs Message-ID: At Barry's suggestion (following my PEP 1 updates), I've also updated the PEP 0 generation machinery to handle an explicit "BDFL-Delegate" field. You can see an example here with PEP 3151: http://www.python.org/dev/peps/pep-3151/ I also updated the 3 PEPs that are on my plate (405, 415 and 3144). If there's anyone else that got tapped as a PEP czar, please update the corresponding PEPs in the repo. For the moment, I suggest leaving your email address out of this field. The email obfuscation is applied on a field-by-field basis, and the formatter for reStructuredText PEPs actually lives in the docutils upstream rather than being included directly in the PEPs repo the way the plaintext formatter is. Cheers, Nick. -- Nick Coghlan?? |?? ncoghlan at gmail.com?? |?? Brisbane, Australia From solipsis at pitrou.net Sun May 6 09:29:29 2012 From: solipsis at pitrou.net (Antoine Pitrou) Date: Sun, 6 May 2012 09:29:29 +0200 Subject: [Python-Dev] peps: Update PEP 1 to better reflect current practice References: <20120505125654.77d3b9ab@resist.wooz.org> Message-ID: <20120506092929.32a7b775@pitrou.net> On Sun, 6 May 2012 15:08:52 +1000 Nick Coghlan wrote: > > Agreed we should have a new header field to record the BDFL delegate, > but I think I'll go with BDFL-Delegate rather than PEP-Czar. +1 for overthrowing czars! Regards Antoine. From solipsis at pitrou.net Sun May 6 09:31:26 2012 From: solipsis at pitrou.net (Antoine Pitrou) Date: Sun, 6 May 2012 09:31:26 +0200 Subject: [Python-Dev] Recording BDFL delegates for PEPs References: Message-ID: <20120506093126.588214b8@pitrou.net> On Sun, 6 May 2012 16:45:32 +1000 Nick Coghlan wrote: > > For the moment, I suggest leaving your email address out of this > field. The email obfuscation is applied on a field-by-field basis, and > the formatter for reStructuredText PEPs actually lives in the docutils > upstream rather than being included directly in the PEPs repo the way > the plaintext formatter is. I have to ask - is email obfuscation still useful these days? I would have thought most people protect from spam by using spam filters, not by trying to conceal their addresses. Regards Antoine. From ncoghlan at gmail.com Sun May 6 09:39:02 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sun, 6 May 2012 17:39:02 +1000 Subject: [Python-Dev] peps: Update PEP 1 to better reflect current practice In-Reply-To: <20120506092929.32a7b775@pitrou.net> References: <20120505125654.77d3b9ab@resist.wooz.org> <20120506092929.32a7b775@pitrou.net> Message-ID: On Sun, May 6, 2012 at 5:29 PM, Antoine Pitrou wrote: > On Sun, 6 May 2012 15:08:52 +1000 > Nick Coghlan wrote: >> >> Agreed we should have a new header field to record the BDFL delegate, >> but I think I'll go with BDFL-Delegate rather than PEP-Czar. > > +1 for overthrowing czars! I expect PEP czar will stick as the nickname (that's why I still mention it in PEP 1), but I definitely prefer having something a bit more self explanatory as the official designation. Cheers, Nick. -- Nick Coghlan?? |?? ncoghlan at gmail.com?? |?? Brisbane, Australia From ncoghlan at gmail.com Sun May 6 09:42:13 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sun, 6 May 2012 17:42:13 +1000 Subject: [Python-Dev] Recording BDFL delegates for PEPs In-Reply-To: <20120506093126.588214b8@pitrou.net> References: <20120506093126.588214b8@pitrou.net> Message-ID: On Sun, May 6, 2012 at 5:31 PM, Antoine Pitrou wrote: > On Sun, 6 May 2012 16:45:32 +1000 > Nick Coghlan wrote: >> >> For the moment, I suggest leaving your email address out of this >> field. The email obfuscation is applied on a field-by-field basis, and >> the formatter for reStructuredText PEPs actually lives in the docutils >> upstream rather than being included directly in the PEPs repo the way >> the plaintext formatter is. > > I have to ask - is email obfuscation still useful these days? I would > have thought most people protect from spam by using spam filters, not > by trying to conceal their addresses. I think it's one of those things where people *like* seeing it, regardless of how effective it is in practice. The delegate field isn't actually parsed at all, so people are free to include their email address if they choose to - doing so won't be *necessary* until we get an actual name collision amongst the core developers. Cheers, Nick. -- Nick Coghlan?? |?? ncoghlan at gmail.com?? |?? Brisbane, Australia From solipsis at pitrou.net Sun May 6 18:07:13 2012 From: solipsis at pitrou.net (Antoine Pitrou) Date: Sun, 6 May 2012 18:07:13 +0200 Subject: [Python-Dev] cpython: Make AcquirerProxy.acquire() support timeout argument References: Message-ID: <20120506180713.2430f305@pitrou.net> On Sun, 06 May 2012 17:56:55 +0200 richard.oudkerk wrote: > http://hg.python.org/cpython/rev/b4a1d9287780 > changeset: 76800:b4a1d9287780 > user: Richard Oudkerk > date: Sun May 06 16:45:02 2012 +0100 > summary: > Make AcquirerProxy.acquire() support timeout argument Should it have a Misc/NEWS entry? (and a doc addition perhaps?) Regards Antoine. From roundup-admin at psf.upfronthosting.co.za Sun May 6 18:35:44 2012 From: roundup-admin at psf.upfronthosting.co.za (Python tracker) Date: Sun, 06 May 2012 16:35:44 +0000 Subject: [Python-Dev] Failed issue tracker submission Message-ID: <20120506163544.936791CB9A@psf.upfronthosting.co.za> The node specified by the designator in the subject of your message ("14965") does not exist. Subject was: "[issue14965]" Mail Gateway Help ================= Incoming messages are examined for multiple parts: . In a multipart/mixed message or part, each subpart is extracted and examined. The text/plain subparts are assembled to form the textual body of the message, to be stored in the file associated with a "msg" class node. Any parts of other types are each stored in separate files and given "file" class nodes that are linked to the "msg" node. . In a multipart/alternative message or part, we look for a text/plain subpart and ignore the other parts. . A message/rfc822 is treated similar tomultipart/mixed (except for special handling of the first text part) if unpack_rfc822 is set in the mailgw config section. Summary ------- The "summary" property on message nodes is taken from the first non-quoting section in the message body. The message body is divided into sections by blank lines. Sections where the second and all subsequent lines begin with a ">" or "|" character are considered "quoting sections". The first line of the first non-quoting section becomes the summary of the message. Addresses --------- All of the addresses in the To: and Cc: headers of the incoming message are looked up among the user nodes, and the corresponding users are placed in the "recipients" property on the new "msg" node. The address in the From: header similarly determines the "author" property of the new "msg" node. The default handling for addresses that don't have corresponding users is to create new users with no passwords and a username equal to the address. (The web interface does not permit logins for users with no passwords.) If we prefer to reject mail from outside sources, we can simply register an auditor on the "user" class that prevents the creation of user nodes with no passwords. Actions ------- The subject line of the incoming message is examined to determine whether the message is an attempt to create a new item or to discuss an existing item. A designator enclosed in square brackets is sought as the first thing on the subject line (after skipping any "Fwd:" or "Re:" prefixes). If an item designator (class name and id number) is found there, the newly created "msg" node is added to the "messages" property for that item, and any new "file" nodes are added to the "files" property for the item. If just an item class name is found there, we attempt to create a new item of that class with its "messages" property initialized to contain the new "msg" node and its "files" property initialized to contain any new "file" nodes. Triggers -------- Both cases may trigger detectors (in the first case we are calling the set() method to add the message to the item's spool; in the second case we are calling the create() method to create a new node). If an auditor raises an exception, the original message is bounced back to the sender with the explanatory message given in the exception. $Id: mailgw.py,v 1.196 2008-07-23 03:04:44 richard Exp $ -------------- next part -------------- Return-Path: X-Original-To: report at bugs.python.org Delivered-To: roundup+tracker at psf.upfronthosting.co.za Received: from mail.python.org (mail.python.org [82.94.164.166]) by psf.upfronthosting.co.za (Postfix) with ESMTPS id 5B02E1CB4A for ; Sun, 6 May 2012 18:35:44 +0200 (CEST) Received: from albatross.python.org (localhost [127.0.0.1]) by mail.python.org (Postfix) with ESMTP id 3Vlsym0qbgzM1L for ; Sun, 6 May 2012 18:35:44 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=python.org; s=200901; t=1336322144; bh=Mpmd9a48weoSfYj5dd0SkAaR6Nf9eu6lSG6CDGbSb1k=; h=Date:Message-Id:Content-Type:MIME-Version: Content-Transfer-Encoding:From:To:Subject; b=yESPEjvi+ZH3aflpJVezllCl9rV7jAt75IuTHe8dRUjq/yetx2Lp45W5s5kVVY2bH oKtFGyN1RrRN2NOtjhj+b4G5WnZ2rSxCJTtGJ4ERXaNMVCXRkvbJ/aa/cdIE+R1mC/ ZYUuEZTBWQ0moIFEX+DJAXZ/23ELCDE0Xl+ZIUdE= Received: from localhost (HELO mail.python.org) (127.0.0.1) by albatross.python.org with SMTP; 06 May 2012 18:35:44 +0200 Received: from dinsdale.python.org (svn.python.org [IPv6:2001:888:2000:d::a4]) (using TLSv1 with cipher AES256-SHA (256/256 bits)) (No client certificate requested) by mail.python.org (Postfix) with ESMTPS for ; Sun, 6 May 2012 18:35:44 +0200 (CEST) Received: from localhost ([127.0.0.1] helo=dinsdale.python.org ident=hg) by dinsdale.python.org with esmtp (Exim 4.72) (envelope-from ) id 1SR4Qx-0002Si-Tp for report at bugs.python.org; Sun, 06 May 2012 18:35:43 +0200 Date: Sun, 06 May 2012 18:35:43 +0200 Message-Id: Content-Type: text/plain; charset="utf8" MIME-Version: 1.0 Content-Transfer-Encoding: base64 From: python-dev at python.org To: report at bugs.python.org Subject: [issue14965] TmV3IGNoYW5nZXNldCBjODA1NzYzMDM4OTIgYnkgTWFyayBEaWNraW5zb24gaW4gYnJhbmNoICcz LjInOgpJc3N1ZSAjMTQ5NjU6IEZpeCBtaXNzaW5nIHN1cHBvcnQgZm9yIHN0YXJyZWQgYXNzaWdu bWVudHMgaW4gVG9vbHMvcGFyc2VyL3VucGFyc2UucHkuCmh0dHA6Ly9oZy5weXRob24ub3JnL2Nw eXRob24vcmV2L2M4MDU3NjMwMzg5MgoKCk5ldyBjaGFuZ2VzZXQgODllOTI4MDQ4OTAzIGJ5IE1h cmsgRGlja2luc29uIGluIGJyYW5jaCAnZGVmYXVsdCc6Cklzc3VlICMxNDk2NTogIEJyaW5nIFRv b2xzL3BhcnNlci91bnBhcnNlLnB5IHVwIHRvIGRhdGUgd2l0aCB0aGUgUHl0aG9uIDMuMy4gR3Jh bW1hci4KaHR0cDovL2hnLnB5dGhvbi5vcmcvY3B5dGhvbi9yZXYvODllOTI4MDQ4OTAzCg== From shibturn at gmail.com Sun May 6 19:58:10 2012 From: shibturn at gmail.com (shibturn) Date: Sun, 06 May 2012 18:58:10 +0100 Subject: [Python-Dev] cpython: Make AcquirerProxy.acquire() support timeout argument In-Reply-To: <20120506180713.2430f305@pitrou.net> References: <20120506180713.2430f305@pitrou.net> Message-ID: On 06/05/2012 5:07pm, Antoine Pitrou wrote: > On Sun, 06 May 2012 17:56:55 +0200 >> summary: >> Make AcquirerProxy.acquire() support timeout argument > > Should it have a Misc/NEWS entry? (and a doc addition perhaps?) Since proxies for locks/semaphores are supposed to work the same way as the proxied object from threading, one could argue that the lack of support in 3.2 was a bug. I notice now that multiprocessing.*.acquire() and threading.*.wait() treat negative timeouts as zero timeouts. On the other hand, threading.*.acquire() treat negative timeouts as infinite. Maybe these inconsistencies should be documented or eliminated? As currently implemented AcquirerProxy.acquire() treats negative timeouts as infinite. Cheers Richard From solipsis at pitrou.net Sun May 6 21:48:01 2012 From: solipsis at pitrou.net (Antoine Pitrou) Date: Sun, 6 May 2012 21:48:01 +0200 Subject: [Python-Dev] cpython: Make AcquirerProxy.acquire() support timeout argument References: <20120506180713.2430f305@pitrou.net> Message-ID: <20120506214801.712d84f8@pitrou.net> On Sun, 06 May 2012 18:58:10 +0100 shibturn wrote: > On 06/05/2012 5:07pm, Antoine Pitrou wrote: > > On Sun, 06 May 2012 17:56:55 +0200 > >> summary: > >> Make AcquirerProxy.acquire() support timeout argument > > > > Should it have a Misc/NEWS entry? (and a doc addition perhaps?) > > Since proxies for locks/semaphores are supposed to work the same way as > the proxied object from threading, one could argue that the lack of > support in 3.2 was a bug. Ok; if it's a bug it should have a NEWS entry, though. > I notice now that multiprocessing.*.acquire() and threading.*.wait() > treat negative timeouts as zero timeouts. On the other hand, > threading.*.acquire() treat negative timeouts as infinite. > > Maybe these inconsistencies should be documented or eliminated? I don't know. Ideally both would have raised ValueError on negative timeouts, but it's probably too late :-) cheers Antoine. From carl at oddbird.net Sun May 6 23:56:30 2012 From: carl at oddbird.net (Carl Meyer) Date: Sun, 06 May 2012 15:56:30 -0600 Subject: [Python-Dev] PEP 405 (pyvenv) and system Python upgrades In-Reply-To: References: <4FA440BF.50806@oddbird.net> Message-ID: <4FA6F38E.7060508@oddbird.net> On 05/05/2012 02:38 AM, Vinay Sajip wrote: > Nick Coghlan gmail.com> writes: > >> Personally, I expect that "always update your virtual environment >> binaries after updating the system Python to a new point release" will >> itself become a recommended practice when using virtual environments. > > Of course, the venv update tool will need to only update environments which were > set up with the particular version of Python which was updated. ISTM pyvenv.cfg > will need to have a version=X.Y.Z line in it, which is added during venv > creation. That information will be used by the tool to only update specific > environments. I don't think the added "version" key in pyvenv.cfg is needed; the "home" key provides enough information to know whether the virtualenv was created by the particular Python that was upgraded. The "version" key could in theory be useful to know whether a particular venv created by that Python has or has not yet been upgraded to match, but since the upgrade is trivial and idempotent I don't think that is important. Carl From carl at oddbird.net Mon May 7 00:07:32 2012 From: carl at oddbird.net (Carl Meyer) Date: Sun, 06 May 2012 16:07:32 -0600 Subject: [Python-Dev] PEP 405 (pyvenv) and system Python upgrades In-Reply-To: <20120505124005.4401ef03@pitrou.net> References: <4FA440BF.50806@oddbird.net> <20120505124005.4401ef03@pitrou.net> Message-ID: <4FA6F624.3000309@oddbird.net> On 05/05/2012 04:40 AM, Antoine Pitrou wrote: > On Fri, 04 May 2012 14:49:03 -0600 > Carl Meyer wrote: >> 3) Symlink the interpreter rather than copying. I include this here for >> the sake of completeness, but it's already been rejected due to >> significant problems on older Windows' and OS X. > > Perhaps symlinking could be used at least on symlinks-friendly OSes? > I expect older Windows to disappear one day :-) So the only left > outlier would be OS X. It certainly could - at one point the reference implementation did exactly this. I understand though that even on newer Windows' there are administrator-privilege issues with symlinks, and I don't know that there's any prospect of the OS X stub executable going away, so I think if we did this we should assume that we're accepting a more-or-less permanent cross-platform difference in the default behavior of venvs. Maybe that's ok; it would mean that for Linux users there'd be no need to run any venv-upgrade script at all when Python is updated, which is certainly a plus. At one point it was argued that we shouldn't symlink by default because users expect venvs to be isolated and not upgraded implicitly. I think this discussion reveals that that's a false argument, since the stdlib will be upgraded implicitly regardless, and that's just as likely to break something as an interpreter update (and more likely than upgrading them in sync). IOW, if you want real full isolation from a system Python, you build your own Python, you don't use pyvenv. Carl From carl at oddbird.net Mon May 7 00:08:27 2012 From: carl at oddbird.net (Carl Meyer) Date: Sun, 06 May 2012 16:08:27 -0600 Subject: [Python-Dev] PEP 405 (pyvenv) and system Python upgrades In-Reply-To: References: <4FA440BF.50806@oddbird.net> Message-ID: <4FA6F65B.7090303@oddbird.net> On 05/05/2012 12:41 AM, Nick Coghlan wrote: > On Sat, May 5, 2012 at 6:49 AM, Carl Meyer wrote: >> 1) Document it and provide a tool for easily upgrading a venv in this >> situation. This may be adequate. In practice the situation is quite rare: >> 2.6.8/2.7.3 is the only actual example in the history of virtualenv that I'm >> aware of. The disadvantage is that if the problem does occur, the error will >> probably be quite confusing and seemingly unrelated to pyvenv. > > I think this is the way to go, for basically the same reasons that we > did it this way this time: there's no good reason to pay an ongoing > cost to further mitigate the risks associated with an already > incredibly rare event. This seems to be the rough consensus. I'll update the PEP with a note about this, and we'll consider switching back to symlink-by-default on Linux. Carl From vinay_sajip at yahoo.co.uk Mon May 7 02:58:32 2012 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Mon, 7 May 2012 00:58:32 +0000 (UTC) Subject: [Python-Dev] PEP 405 (pyvenv) and system Python upgrades References: <4FA440BF.50806@oddbird.net> <20120505124005.4401ef03@pitrou.net> <4FA6F624.3000309@oddbird.net> Message-ID: Carl Meyer oddbird.net> writes: > them in sync). IOW, if you want real full isolation from a system > Python, you build your own Python, you don't use pyvenv. For the interpreter you can use your own Python, but you would still use pyvenv, as the venv is still useful for you to have an isolated set of library dependencies for a project. Regards, Vinay Sajip From barry at python.org Mon May 7 07:16:06 2012 From: barry at python.org (Barry Warsaw) Date: Sun, 6 May 2012 22:16:06 -0700 Subject: [Python-Dev] Recording BDFL delegates for PEPs In-Reply-To: <20120506093126.588214b8@pitrou.net> References: <20120506093126.588214b8@pitrou.net> Message-ID: <20120506221606.24295705@resist.wooz.org> On May 06, 2012, at 09:31 AM, Antoine Pitrou wrote: >I have to ask - is email obfuscation still useful these days? I think it's more important that Python developers (especially those submitting or pronouncing on PEPs) can be contacted by other Python developers. I *personally* don't care about my pdo address getting obfuscated, and would opt for email address inclusion. I can appreciate that others might feel differently. -Barry From barry at python.org Mon May 7 07:18:54 2012 From: barry at python.org (Barry Warsaw) Date: Sun, 6 May 2012 22:18:54 -0700 Subject: [Python-Dev] [Python-checkins] peps: Update PEP 1 to better reflect current practice In-Reply-To: References: <20120505125654.77d3b9ab@resist.wooz.org> Message-ID: <20120506221854.2c8a78b6@resist.wooz.org> On May 06, 2012, at 03:08 PM, Nick Coghlan wrote: >I'll see if I can figure out something - I may just put in text like >"if you're at all unsure about what needs to be done, email the PEP >editors anyway". The diff looks good, thanks. -Barry From guido at python.org Mon May 7 07:20:10 2012 From: guido at python.org (Guido van Rossum) Date: Sun, 6 May 2012 22:20:10 -0700 Subject: [Python-Dev] Recording BDFL delegates for PEPs In-Reply-To: <20120506221606.24295705@resist.wooz.org> References: <20120506093126.588214b8@pitrou.net> <20120506221606.24295705@resist.wooz.org> Message-ID: On Sunday, May 6, 2012, Barry Warsaw wrote: > On May 06, 2012, at 09:31 AM, Antoine Pitrou wrote: > > >I have to ask - is email obfuscation still useful these days? > > I think it's more important that Python developers (especially those > submitting or pronouncing on PEPs) can be contacted by other Python > developers. I *personally* don't care about my pdo address getting > obfuscated, and would opt for email address inclusion. I can appreciate > that > others might feel differently. +1 u -- --Guido van Rossum (python.org/~guido) -------------- next part -------------- An HTML attachment was scrubbed... URL: From stephen at xemacs.org Mon May 7 07:18:17 2012 From: stephen at xemacs.org (Stephen J. Turnbull) Date: Mon, 07 May 2012 14:18:17 +0900 Subject: [Python-Dev] Recording BDFL delegates for PEPs In-Reply-To: <20120506093126.588214b8@pitrou.net> References: <20120506093126.588214b8@pitrou.net> Message-ID: <87wr4osh5y.fsf@uwakimon.sk.tsukuba.ac.jp> Antoine Pitrou writes: > I have to ask - is email obfuscation still useful these days? It's hard to say. It's still a FAQ on Mailman lists, so people still believe it's useful. I don't think there's hard evidence either way (even guessing depends on the economics of the spamming business, and only the spammers really know that). > I would have thought most people protect from spam by using spam > filters, not by trying to conceal their addresses. Concealing addresses is most definitely a useful way to avoid spam. However, I would guess that the effective way to do it is to have a personal address that is never used anywhere that is easily trawled, not to try to obfuscate addresses that are visible on the web or in posts to open mailing lists or Usenet. From martin at v.loewis.de Mon May 7 08:27:43 2012 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Mon, 07 May 2012 08:27:43 +0200 Subject: [Python-Dev] Does trunk still support any compilers that *don't* allow declaring variables after code? In-Reply-To: References: <4FA0F3B4.5070707@hastings.org> <20120502115656.05773139@pitrou.net> Message-ID: <4FA76B5F.4080700@v.loewis.de> On 02.05.2012 15:37, Matt Joiner wrote: > > On May 2, 2012 6:00 PM, "Antoine Pitrou" > wrote: > > > > On Wed, 02 May 2012 01:43:32 -0700 > > Larry Hastings > wrote: > > > > > > I realize we can't jump to C99 because of A Certain Compiler. (Its > name > > > rhymes with Bike Row Soft Frizz You All See Muss Muss.) But even that > > > compiler added this extension in the early 90s. > > > > > > Do we officially support any C compilers that *don't* permit > > > "intermingled variable declarations and code"? Do we *unofficially* > > > support any? And if we do, what do we gain? > > > > Well, there's this one called MSVC, which we support quite officially. > > Not sure if comic genius or can't rhyme. This rhyming non-sense is surely above the English abilities of many of us foreigners. I had to read Larry's text five times (two times after you indicated that it indeed ought to rhyme - it finally worked when I read it aloud). So, folks: if you want to be understood, please keep the obfuscation of the English language to a fairly low level. Regards, Martin From martin at v.loewis.de Mon May 7 08:29:52 2012 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Mon, 07 May 2012 08:29:52 +0200 Subject: [Python-Dev] Does trunk still support any compilers that *don't* allow declaring variables after code? In-Reply-To: <4FA0F3B4.5070707@hastings.org> References: <4FA0F3B4.5070707@hastings.org> Message-ID: <4FA76BE0.3050704@v.loewis.de> > I realize we can't jump to C99 because of A Certain Compiler. (Its name > rhymes with Bike Row Soft Frizz You All See Muss Muss.) But even that > compiler added this extension in the early 90s. No, it didn't. The MSVC version that we currently use (VS 2008) still doesn't support it. Regards, Martin From martin at v.loewis.de Mon May 7 10:13:16 2012 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Mon, 07 May 2012 10:13:16 +0200 Subject: [Python-Dev] [RELEASED] Python 3.3.0 alpha 3 In-Reply-To: <4FA11836.8020106@hotpy.org> References: <4FA03CD1.7020605@python.org> <4FA1048D.3010000@hotpy.org> <4FA11836.8020106@hotpy.org> Message-ID: <4FA7841C.6010102@v.loewis.de> > The What's New document also starts with a long list of PEPs. > This seems to be the standard format as What's New for 3.2 follows the > same layout. > > Perhaps adding an overview or highlights at the start would be a good > idea. You seem to assume that Python users are not able to grasp long itemized lists including numbers. I think readers are very capable of filtering this kind of information. As for presenting highlights: the PEPs *are* the highlights of a new release. The numerous bug fixes and minor enhancements don't get listed at all. Regards, Martin From mark at hotpy.org Mon May 7 11:00:42 2012 From: mark at hotpy.org (Mark Shannon) Date: Mon, 07 May 2012 10:00:42 +0100 Subject: [Python-Dev] [RELEASED] Python 3.3.0 alpha 3 In-Reply-To: <4FA7841C.6010102@v.loewis.de> References: <4FA03CD1.7020605@python.org> <4FA1048D.3010000@hotpy.org> <4FA11836.8020106@hotpy.org> <4FA7841C.6010102@v.loewis.de> Message-ID: <4FA78F3A.6040506@hotpy.org> Martin v. L?wis wrote: >> The What's New document also starts with a long list of PEPs. >> This seems to be the standard format as What's New for 3.2 follows the >> same layout. >> >> Perhaps adding an overview or highlights at the start would be a good >> idea. > > You seem to assume that Python users are not able to grasp long itemized > lists including numbers. I think readers are very capable > of filtering this kind of information. Just because readers are capable of filtering a long list of PEPs in an arbitrary order does not mean that they should have to. Many readers will just skim the list, but would probably read a summary in full. > > As for presenting highlights: the PEPs *are* the highlights of a new > release. The numerous bug fixes and minor enhancements don't get listed > at all. But PEPs can have very different purposes. It would be useful to summarize the language changes (with links to the relevant PEPs) separately to library extensions and optimizations. If the reader is interested in new features, then information about optimisations are just clutter. And vice-versa. Cheers, Mark. From ncoghlan at gmail.com Mon May 7 11:08:32 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Mon, 7 May 2012 19:08:32 +1000 Subject: [Python-Dev] [RELEASED] Python 3.3.0 alpha 3 In-Reply-To: <4FA78F3A.6040506@hotpy.org> References: <4FA03CD1.7020605@python.org> <4FA1048D.3010000@hotpy.org> <4FA11836.8020106@hotpy.org> <4FA7841C.6010102@v.loewis.de> <4FA78F3A.6040506@hotpy.org> Message-ID: Any such summary prose will be written by the What's New author (Raymond Hettinger for the 3.x series). Such text definitely *won't* be written until after feature freeze (which occurs with the first beta, currently planned for late June). Until that time, the draft What's New is primarily rough notes written by everyone else for Raymond's benefit (and, of course, for the benefit of anyone checking out the alpha and beta releases). Cheers, Nick. -- Nick Coghlan?? |?? ncoghlan at gmail.com?? |?? Brisbane, Australia From g.brandl at gmx.net Mon May 7 11:15:00 2012 From: g.brandl at gmx.net (Georg Brandl) Date: Mon, 07 May 2012 11:15:00 +0200 Subject: [Python-Dev] [RELEASED] Python 3.3.0 alpha 3 In-Reply-To: <4FA78F3A.6040506@hotpy.org> References: <4FA03CD1.7020605@python.org> <4FA1048D.3010000@hotpy.org> <4FA11836.8020106@hotpy.org> <4FA7841C.6010102@v.loewis.de> <4FA78F3A.6040506@hotpy.org> Message-ID: On 05/07/2012 11:00 AM, Mark Shannon wrote: > Martin v. L?wis wrote: >>> The What's New document also starts with a long list of PEPs. >>> This seems to be the standard format as What's New for 3.2 follows the >>> same layout. >>> >>> Perhaps adding an overview or highlights at the start would be a good >>> idea. >> >> You seem to assume that Python users are not able to grasp long itemized >> lists including numbers. I think readers are very capable >> of filtering this kind of information. > > Just because readers are capable of filtering a long list of PEPs in an > arbitrary order does not mean that they should have to. > Many readers will just skim the list, but would probably read a summary > in full. > >> >> As for presenting highlights: the PEPs *are* the highlights of a new >> release. The numerous bug fixes and minor enhancements don't get listed >> at all. > > But PEPs can have very different purposes. > It would be useful to summarize the language changes (with links to the > relevant PEPs) separately to library extensions and optimizations. > > If the reader is interested in new features, then information about > optimisations are just clutter. And vice-versa. Sorry, I think that's tough luck then. The list isn't nearly long enough to warrant splitting up. The announcement should stay compact. And as Nick said, the "What's New" will be there for those who want a longer overview by topics. Georg From martin at v.loewis.de Mon May 7 11:52:00 2012 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Mon, 07 May 2012 11:52:00 +0200 Subject: [Python-Dev] PEP 405 (pyvenv) and system Python upgrades In-Reply-To: <4FA440BF.50806@oddbird.net> References: <4FA440BF.50806@oddbird.net> Message-ID: <4FA79B40.4000609@v.loewis.de> > 3) Symlink the interpreter rather than copying. I include this here for > the sake of completeness, but it's already been rejected due to > significant problems on older Windows' and OS X. That sounds the right solution to me. PEP 405 specifies that bin/python3 exists, but not that it is the actual Python interpreter binary that is normally used. For each target system, a solution should be defined that allows in-place updates of Python that also update all venvs automatically. For example, for Windows, it would be sufficient to just have the executable in bin/, as the update will only affect pythonXY.dll. That executable may be different from the regular python.exe, and it might be necessary that it locates its Python installation first. For Unix, symlinks sound fine. Not sure what the issue with OS X is. Regards, Martin From ncoghlan at gmail.com Mon May 7 13:10:49 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Mon, 7 May 2012 21:10:49 +1000 Subject: [Python-Dev] Adding types.build_class for 3.3 Message-ID: A while back I pointed out that there's no easy PEP 3115 compliant way to dynamically create a class (finding the right metaclass, calling __prepare__, etc). I initially proposed providing this as operator.build_class, and Daniel Urban created a patch that implements that API (http://bugs.python.org/issue14588). However, in starting to write the documentation for the new API, I realised that the operator module really isn't the right home for the functionality. Instead, I'm now thinking we should add a _types C extension module and expose the new function as types.build_class(). I don't want to add an entire new module just for this feature, and the types module seems like an appropriate home for it. Thoughts? Regards, Nick. -- Nick Coghlan?? |?? ncoghlan at gmail.com?? |?? Brisbane, Australia From ronaldoussoren at mac.com Mon May 7 12:26:59 2012 From: ronaldoussoren at mac.com (Ronald Oussoren) Date: Mon, 07 May 2012 12:26:59 +0200 Subject: [Python-Dev] PEP 405 (pyvenv) and system Python upgrades In-Reply-To: <4FA79B40.4000609@v.loewis.de> References: <4FA440BF.50806@oddbird.net> <4FA79B40.4000609@v.loewis.de> Message-ID: <86CF238D-BAB4-4670-81DD-F63B10223110@mac.com> On 7 May, 2012, at 11:52, Martin v. L?wis wrote: >> 3) Symlink the interpreter rather than copying. I include this here for >> the sake of completeness, but it's already been rejected due to >> significant problems on older Windows' and OS X. > > That sounds the right solution to me. PEP 405 specifies that bin/python3 > exists, but not that it is the actual Python interpreter binary that is > normally used. For each target system, a solution should be defined that > allows in-place updates of Python that also update all venvs automatically. > > For example, for Windows, it would be sufficient to just have the executable in bin/, as the update will only affect pythonXY.dll. > That executable may be different from the regular python.exe, and > it might be necessary that it locates its Python installation first. > For Unix, symlinks sound fine. Not sure what the issue with OS X is. The bin/python3 executable in a framework is a small stub that execv's the real interpreter that is stuffed in a Python.app bundle inside the Python framework. That's done to ensure that GUI code can work from the command-line, Apple's GUI framework refuse to work when the executable is not in an application bundle. Because of this trick pyvenv won't know which executable the user actually called and hence cannot find the pyvenv configuration file (which is next to the stub executable). Ronald -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 4788 bytes Desc: not available URL: From dickinsm at gmail.com Mon May 7 13:35:27 2012 From: dickinsm at gmail.com (Mark Dickinson) Date: Mon, 7 May 2012 12:35:27 +0100 Subject: [Python-Dev] [Python-checkins] cpython: Issue #14716: Change integer overflow check in unicode_writer_prepare() In-Reply-To: References: Message-ID: On Mon, May 7, 2012 at 12:08 PM, victor.stinner wrote: > http://hg.python.org/cpython/rev/ab500b297900 > changeset: ? 76821:ab500b297900 > user: ? ? ? ?Victor Stinner > date: ? ? ? ?Mon May 07 13:02:44 2012 +0200 > summary: > ?Issue #14716: Change integer overflow check in unicode_writer_prepare() > to compute the limit at compile time instead of runtime. Patch writen by Serhiy > Storchaka. > ? ? if (newlen > PyUnicode_GET_LENGTH(writer->buffer)) { > - ? ? ? ?/* overallocate 25% to limit the number of resize */ > - ? ? ? ?if (newlen <= (PY_SSIZE_T_MAX - newlen / 4)) > + ? ? ? ?/* Overallocate 25% to limit the number of resize. > + ? ? ? ? ? Check for integer overflow: > + ? ? ? ? ? (newlen + newlen / 4) <= PY_SSIZE_T_MAX */ > + ? ? ? ?if (newlen <= (PY_SSIZE_T_MAX - PY_SSIZE_T_MAX / 5)) > ? ? ? ? ? ? newlen += newlen / 4; Hmm. Very clever, but it's not obvious that that overflow check is mathematically sound. As it turns out, the maths works provided that PY_SSIZE_T_MAX isn't congruent to 4 modulo 5; since PY_SSIZE_T_MAX will almost always be one less than a power of 2 and powers of 2 are always congruent to 1, 2 or 4 modulo 5, we're safe. Is the gain from this kind of micro-optimization really worth the cost of replacing obviously correct code with code whose correctness needs several minutes of thought? Mark From g.brandl at gmx.net Mon May 7 13:54:37 2012 From: g.brandl at gmx.net (Georg Brandl) Date: Mon, 07 May 2012 13:54:37 +0200 Subject: [Python-Dev] Adding types.build_class for 3.3 In-Reply-To: References: Message-ID: On 05/07/2012 01:10 PM, Nick Coghlan wrote: > A while back I pointed out that there's no easy PEP 3115 compliant way > to dynamically create a class (finding the right metaclass, calling > __prepare__, etc). > > I initially proposed providing this as operator.build_class, and > Daniel Urban created a patch that implements that API > (http://bugs.python.org/issue14588). > > However, in starting to write the documentation for the new API, I > realised that the operator module really isn't the right home for the > functionality. > > Instead, I'm now thinking we should add a _types C extension module > and expose the new function as types.build_class(). I don't want to > add an entire new module just for this feature, and the types module > seems like an appropriate home for it. Yay for being able to get rid of the stupidities the types module goes through to get at its types (i.e. if we start having a C module, the whole contents can go there.) As for build_class: at the moment the types module really only has types, and to add build_class there is just about as weird as in operator IMO. cheers, Georg From benjamin at python.org Mon May 7 13:56:47 2012 From: benjamin at python.org (Benjamin Peterson) Date: Mon, 7 May 2012 07:56:47 -0400 Subject: [Python-Dev] Adding types.build_class for 3.3 In-Reply-To: References: Message-ID: 2012/5/7 Nick Coghlan : > A while back I pointed out that there's no easy PEP 3115 compliant way > to dynamically create a class (finding the right metaclass, calling > __prepare__, etc). > > I initially proposed providing this as operator.build_class, and > Daniel Urban created a patch that implements that API > (http://bugs.python.org/issue14588). > > However, in starting to write the documentation for the new API, I > realised that the operator module really isn't the right home for the > functionality. > > Instead, I'm now thinking we should add a _types C extension module > and expose the new function as types.build_class(). I don't want to > add an entire new module just for this feature, and the types module > seems like an appropriate home for it. Actually, there used to be a _types C module before we figured out that all the types could be extracted in Python. :) Maybe you could make it a static or class method of type? -- Regards, Benjamin From dickinsm at gmail.com Mon May 7 14:04:57 2012 From: dickinsm at gmail.com (Mark Dickinson) Date: Mon, 7 May 2012 13:04:57 +0100 Subject: [Python-Dev] [Python-checkins] cpython: Issue #14716: Change integer overflow check in unicode_writer_prepare() In-Reply-To: References: Message-ID: On Mon, May 7, 2012 at 12:35 PM, Mark Dickinson wrote: > will almost always be one less than a power of 2 and powers of 2 are > always congruent to 1, 2 or 4 modulo 5, we're safe. Bah. That should have read "1, 2, 3 or 4 modulo 5". From ncoghlan at gmail.com Mon May 7 14:15:58 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Mon, 7 May 2012 22:15:58 +1000 Subject: [Python-Dev] Adding types.build_class for 3.3 In-Reply-To: References: Message-ID: On Mon, May 7, 2012 at 9:54 PM, Georg Brandl wrote: > As for build_class: at the moment the types module really only has types, > and to add build_class there is just about as weird as in operator IMO. Oh no, types is definitely less weird - at least it's related to the type system, whereas the operator module is about operator syntax (attrgetter, itemgetter and index are at least related to the dot operator and subscripting syntax) Benjamin's suggestion of a class method on type may be a good one, though. Then the invocation (using all arguments) would be: mcl.build_class(name, bases, keywords, exec_body) Works for me, so unless someone else can see a problem I've missed, we'll go with that. Cheers, Nick. -- Nick Coghlan?? |?? ncoghlan at gmail.com?? |?? Brisbane, Australia From g.brandl at gmx.net Mon May 7 14:23:46 2012 From: g.brandl at gmx.net (Georg Brandl) Date: Mon, 07 May 2012 14:23:46 +0200 Subject: [Python-Dev] Adding types.build_class for 3.3 In-Reply-To: References: Message-ID: On 05/07/2012 02:15 PM, Nick Coghlan wrote: > On Mon, May 7, 2012 at 9:54 PM, Georg Brandl wrote: >> As for build_class: at the moment the types module really only has types, >> and to add build_class there is just about as weird as in operator IMO. > > Oh no, types is definitely less weird - at least it's related to the > type system, whereas the operator module is about operator syntax > (attrgetter, itemgetter and index are at least related to the dot > operator and subscripting syntax) > > Benjamin's suggestion of a class method on type may be a good one, > though. Then the invocation (using all arguments) would be: > > mcl.build_class(name, bases, keywords, exec_body) > > Works for me, so unless someone else can see a problem I've missed, > we'll go with that. Works for me. Georg From hrvoje.niksic at avl.com Mon May 7 15:42:11 2012 From: hrvoje.niksic at avl.com (Hrvoje Niksic) Date: Mon, 07 May 2012 15:42:11 +0200 Subject: [Python-Dev] Adding types.build_class for 3.3 In-Reply-To: References: Message-ID: <4FA7D133.5020303@avl.com> On 05/07/2012 02:15 PM, Nick Coghlan wrote: > Benjamin's suggestion of a class method on type may be a good one, > though. Then the invocation (using all arguments) would be: > > mcl.build_class(name, bases, keywords, exec_body) > > Works for me, so unless someone else can see a problem I've missed, > we'll go with that. Note that to call mcl.build_class, you have to find a metaclass that works for bases, which is the job of build_class. Putting it as a function in the operator module seems like a better solution. From carl at oddbird.net Mon May 7 17:25:41 2012 From: carl at oddbird.net (Carl Meyer) Date: Mon, 07 May 2012 09:25:41 -0600 Subject: [Python-Dev] PEP 405 (pyvenv) and system Python upgrades In-Reply-To: <86CF238D-BAB4-4670-81DD-F63B10223110@mac.com> References: <4FA440BF.50806@oddbird.net> <4FA79B40.4000609@v.loewis.de> <86CF238D-BAB4-4670-81DD-F63B10223110@mac.com> Message-ID: <4FA7E975.6010106@oddbird.net> On 05/07/2012 04:26 AM, Ronald Oussoren wrote: > On 7 May, 2012, at 11:52, Martin v. L?wis wrote: >>> 3) Symlink the interpreter rather than copying. I include this >>> here for the sake of completeness, but it's already been rejected >>> due to significant problems on older Windows' and OS X. >> >> That sounds the right solution to me. PEP 405 specifies that >> bin/python3 exists, but not that it is the actual Python >> interpreter binary that is normally used. For each target system, a >> solution should be defined that allows in-place updates of Python >> that also update all venvs automatically. >> >> For example, for Windows, it would be sufficient to just have the >> executable in bin/, as the update will only affect pythonXY.dll. >> That executable may be different from the regular python.exe, and >> it might be necessary that it locates its Python installation >> first. For Unix, symlinks sound fine. Not sure what the issue with >> OS X is. > > The bin/python3 executable in a framework is a small stub that > execv's the real interpreter that is stuffed in a Python.app bundle > inside the Python framework. That's done to ensure that GUI code can > work from the command-line, Apple's GUI framework refuse to work when > the executable is not in an application bundle. > > Because of this trick pyvenv won't know which executable the user > actually called and hence cannot find the pyvenv configuration file > (which is next to the stub executable). It occurs to me, belatedly, that this also means that upgrades should be a non-issue with OS X framework builds (presuming the upgraded actual-Python-binary gets placed in the same location, and the previously copied stub will still exec it without trouble), in which case we can symlink on OS X non-framework builds and copy on OS X framework builds and be happy. Carl From storchaka at gmail.com Mon May 7 17:48:36 2012 From: storchaka at gmail.com (Serhiy Storchaka) Date: Mon, 07 May 2012 18:48:36 +0300 Subject: [Python-Dev] [Python-checkins] cpython: Issue #14716: Change integer overflow check in unicode_writer_prepare() In-Reply-To: References: Message-ID: 07.05.12 14:35, Mark Dickinson ???????(??): > Hmm. Very clever, but it's not obvious that that overflow check is > mathematically sound. My fault. Overflow will be at PY_SSIZE_T_MAX congruent to 4 modulo 5 (which is impossible if PY_SSIZE_T_MAX is one less than a power of 2). Mathematically strict limit must be (PY_SSIZE_T_MAX - 1 - (PY_SSIZE_T_MAX - 4) / 5). From storchaka at gmail.com Mon May 7 18:33:57 2012 From: storchaka at gmail.com (Serhiy Storchaka) Date: Mon, 07 May 2012 19:33:57 +0300 Subject: [Python-Dev] [Python-checkins] cpython: Issue #14716: Change integer overflow check in unicode_writer_prepare() In-Reply-To: References: Message-ID: 07.05.12 18:48, Serhiy Storchaka ???????(??): > My fault. However, it's not my fault. I suggested `newlen < (PY_SSIZE_T_MAX - PY_SSIZE_T_MAX / 5)` and not `newlen <= (PY_SSIZE_T_MAX - PY_SSIZE_T_MAX / 5)`. In this case, there is no overflow. From carl at oddbird.net Mon May 7 18:35:04 2012 From: carl at oddbird.net (Carl Meyer) Date: Mon, 07 May 2012 10:35:04 -0600 Subject: [Python-Dev] PEP 405 (pyvenv) and system Python upgrades In-Reply-To: <4FA79B40.4000609@v.loewis.de> References: <4FA440BF.50806@oddbird.net> <4FA79B40.4000609@v.loewis.de> Message-ID: <4FA7F9B8.5070507@oddbird.net> On 05/07/2012 03:52 AM, "Martin v. L?wis" wrote: >> 3) Symlink the interpreter rather than copying. I include this here for >> the sake of completeness, but it's already been rejected due to >> significant problems on older Windows' and OS X. > > That sounds the right solution to me. PEP 405 specifies that bin/python3 > exists, but not that it is the actual Python interpreter binary that is > normally used. For each target system, a solution should be defined that > allows in-place updates of Python that also update all venvs automatically. I propose that for Windows, that solution is to have a new enough version of Windows and the necessary privileges, and use the --symlink option to the pyvenv script, or else to manually update venvs using pyvenv --upgrade. > For example, for Windows, it would be sufficient to just have the > executable in bin/, as the update will only affect pythonXY.dll. > That executable may be different from the regular python.exe, and > it might be necessary that it locates its Python installation first. This sounds to me like a level of complexity unwarranted by the severity of the problem, especially when considering the additional burden it imposes on alternative Python implementations. Carl From solipsis at pitrou.net Mon May 7 18:38:58 2012 From: solipsis at pitrou.net (Antoine Pitrou) Date: Mon, 7 May 2012 18:38:58 +0200 Subject: [Python-Dev] [Python-checkins] cpython: Issue #14716: Change integer overflow check in unicode_writer_prepare() References: Message-ID: <20120507183858.36eb2966@pitrou.net> On Mon, 7 May 2012 12:35:27 +0100 Mark Dickinson wrote: > > Hmm. Very clever, but it's not obvious that that overflow check is > mathematically sound. As it turns out, the maths works provided that > PY_SSIZE_T_MAX isn't congruent to 4 modulo 5; since PY_SSIZE_T_MAX > will almost always be one less than a power of 2 and powers of 2 are > always congruent to 1, 2 or 4 modulo 5, we're safe. > > Is the gain from this kind of micro-optimization really worth the cost > of replacing obviously correct code with code whose correctness needs > several minutes of thought? Agreed that the original code is good enough. Dividing by 4 is fast, and this particular line of code is followed by a memory reallocation. In general, "clever" micro-optimizations that don't produce significant performance improvements should be avoided, IMHO :-) Regards Antoine. From s.brunthaler at uci.edu Mon May 7 21:23:47 2012 From: s.brunthaler at uci.edu (stefan brunthaler) Date: Mon, 7 May 2012 12:23:47 -0700 Subject: [Python-Dev] Assigning copyright... In-Reply-To: <4F999CF7.6020304@v.loewis.de> References: <4F98EA9B.5060906@hotpy.org> <4F999CF7.6020304@v.loewis.de> Message-ID: Hello, > http://www.python.org/psf/contrib/ I took care of the formalities. I am not sure how to proceed further. Would python-dev want me to draft a PEP? Regards, --stefan PS: Personally, I am not a 100pct convinced that having a PEP is a good thing in this case, as it makes a perfectly transparent optimization "visible." AFAIR Sun opted to keep their instruction derivatives secret, i.e., the second edition of the JVM internals does not even mention them anymore. From solipsis at pitrou.net Mon May 7 21:49:43 2012 From: solipsis at pitrou.net (Antoine Pitrou) Date: Mon, 7 May 2012 21:49:43 +0200 Subject: [Python-Dev] Point of building without threads? Message-ID: <20120507214943.579045c2@pitrou.net> Hello, I guess a long time ago, threading support in operating systems wasn't very widespread, but these days all our supported platforms have it. Is it still useful for production purposes to configure --without-threads? Do people use this option for something else than curiosity of mind? Regards Antoine. From g.brandl at gmx.net Mon May 7 22:04:24 2012 From: g.brandl at gmx.net (Georg Brandl) Date: Mon, 07 May 2012 22:04:24 +0200 Subject: [Python-Dev] Assigning copyright... In-Reply-To: References: <4F98EA9B.5060906@hotpy.org> <4F999CF7.6020304@v.loewis.de> Message-ID: On 05/07/2012 09:23 PM, stefan brunthaler wrote: > Hello, > >> http://www.python.org/psf/contrib/ > > I took care of the formalities. > > I am not sure how to proceed further. Would python-dev want me to draft a PEP? > > Regards, > --stefan > > PS: Personally, I am not a 100pct convinced that having a PEP is a > good thing in this case, as it makes a perfectly transparent > optimization "visible." AFAIR Sun opted to keep their instruction > derivatives secret, i.e., the second edition of the JVM internals does > not even mention them anymore. I think you'll find that we don't keep a lot of things secret about CPython and its implementation. Although this is different when it comes to the community. The PSU has From edcjones at comcast.net Mon May 7 01:28:32 2012 From: edcjones at comcast.net (Edward C. Jones) Date: Sun, 06 May 2012 19:28:32 -0400 Subject: [Python-Dev] Python 3.3 cannot find BeautifulSoup but Python 3.2 can Message-ID: <4FA70920.80106@comcast.net> I use up-to-date Debian testing (wheezy), amd64 architecture. I compiled and installed Python 3.3.0 alpha 3 using "altinstall". Debian wheezy comes with python3.2 (and 2.6 and 2.7). I installed the Debian package python3-bs4 (BeautifulSoup). I also downloaded a "clone" developmental copy of 3.3. Python3.3a3 cannot find module bs4. Neither can the "clone". Python3.2 can find the module. Here is a session with the "clone": > ./python Python 3.3.0a3+ (default:10ccbb90a8e9, May 6 2012, 19:11:02) [GCC 4.6.3] on linux Type "help", "copyright", "credits" or "license" for more information. >>> import bs4 Traceback (most recent call last): File "", line 1, in File "", line 974, in _find_and_load ImportError: No module named 'bs4' [71413 refs] >>> What is the problem? From edcjones at comcast.net Mon May 7 22:42:50 2012 From: edcjones at comcast.net (Edward C. Jones) Date: Mon, 07 May 2012 16:42:50 -0400 Subject: [Python-Dev] Python 3.3 cannot import BeautifulSoup but Python 3.2 can Message-ID: <4FA833CA.80105@comcast.net> I use up-to-date Debian testing (wheezy), amd64 architecture. I compiled and installed Python 3.3.0 alpha 3 using "altinstall". Debian wheezy comes with python3.2 (and 2.6 and 2.7). I installed the Debian package python3-bs4 (BeautifulSoup4 for Python3). I also downloaded a "clone" developmental copy of 3.3. Python3.3a3 cannot find module bs4. Neither can the "clone". Python3.2 can find the module. Here is a session with the "clone": > ./python Python 3.3.0a3+ (default:10ccbb90a8e9, May 6 2012, 19:11:02) [GCC 4.6.3] on linux Type "help", "copyright", "credits" or "license" for more information. >>> import bs4 Traceback (most recent call last): File "", line 1, in File "", line 974, in _find_and_load ImportError: No module named 'bs4' [71413 refs] >>> What is the problem? From martin at v.loewis.de Mon May 7 22:51:21 2012 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Mon, 07 May 2012 22:51:21 +0200 Subject: [Python-Dev] PEP 405 (pyvenv) and system Python upgrades In-Reply-To: <86CF238D-BAB4-4670-81DD-F63B10223110@mac.com> References: <4FA440BF.50806@oddbird.net> <4FA79B40.4000609@v.loewis.de> <86CF238D-BAB4-4670-81DD-F63B10223110@mac.com> Message-ID: <4FA835C9.1000507@v.loewis.de> > The bin/python3 executable in a framework is a small stub that > execv's the real interpreter that is stuffed in a Python.app bundle > inside the Python framework. That's done to ensure that GUI code can > work from the command-line, Apple's GUI framework refuse to work when > the executable is not in an application bundle. > > Because of this trick pyvenv won't know which executable the user > actually called and hence cannot find the pyvenv configuration file > (which is next to the stub executable). I don't understand. The "executable that the user actually called": does that refer to a) the stub (which the user *actually* called) or b) the eventual binary (which is what gets *actually* run). If a), then I think argv[0] just needs to continue to refer to the stub, which is easy to achieve in execv. If b), I wonder why the code needs to know the location to the binary inside the bundle. But if this is needed to know, I suggest that some environment variable is passed from the stub to the actual binary (akin PYTHONHOME). How does the stub normally find out where the framework is located? Regards, Martin From martin at v.loewis.de Mon May 7 22:55:04 2012 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Mon, 07 May 2012 22:55:04 +0200 Subject: [Python-Dev] PEP 405 (pyvenv) and system Python upgrades In-Reply-To: <4FA7F9B8.5070507@oddbird.net> References: <4FA440BF.50806@oddbird.net> <4FA79B40.4000609@v.loewis.de> <4FA7F9B8.5070507@oddbird.net> Message-ID: <4FA836A8.9010305@v.loewis.de> On 07.05.2012 18:35, Carl Meyer wrote: > On 05/07/2012 03:52 AM, "Martin v. L?wis" wrote: >>> 3) Symlink the interpreter rather than copying. I include this here for >>> the sake of completeness, but it's already been rejected due to >>> significant problems on older Windows' and OS X. >> >> That sounds the right solution to me. PEP 405 specifies that bin/python3 >> exists, but not that it is the actual Python interpreter binary that is >> normally used. For each target system, a solution should be defined that >> allows in-place updates of Python that also update all venvs >> automatically. > > I propose that for Windows, that solution is to have a new enough > version of Windows and the necessary privileges, and use the --symlink > option to the pyvenv script, or else to manually update venvs using > pyvenv --upgrade. Sounds fine to me as well. >> For example, for Windows, it would be sufficient to just have the >> executable in bin/, as the update will only affect pythonXY.dll. >> That executable may be different from the regular python.exe, and >> it might be necessary that it locates its Python installation first. > > This sounds to me like a level of complexity unwarranted by the severity > of the problem, especially when considering the additional burden it > imposes on alternative Python implementations. OTOH, it *significantly* reduces the burden on Python end users, for whom creation of a venv under a privileged account is a significant hassle. This being free software, anybody needs to scratch her own itches, of course. Regards, Martin From martin at v.loewis.de Mon May 7 23:02:41 2012 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Mon, 07 May 2012 23:02:41 +0200 Subject: [Python-Dev] Assigning copyright... In-Reply-To: References: <4F98EA9B.5060906@hotpy.org> <4F999CF7.6020304@v.loewis.de> Message-ID: <4FA83871.3010908@v.loewis.de> On 07.05.2012 21:23, stefan brunthaler wrote: > Hello, > >> http://www.python.org/psf/contrib/ > > I took care of the formalities. > > I am not sure how to proceed further. Would python-dev want me to draft a PEP? Submit a patch to the bug tracker, against default's head. Regards, Martin From solipsis at pitrou.net Mon May 7 23:03:34 2012 From: solipsis at pitrou.net (Antoine Pitrou) Date: Mon, 7 May 2012 23:03:34 +0200 Subject: [Python-Dev] Python 3.3 cannot import BeautifulSoup but Python 3.2 can References: <4FA833CA.80105@comcast.net> Message-ID: <20120507230334.4ed76af9@pitrou.net> Hello, On Mon, 07 May 2012 16:42:50 -0400 "Edward C. Jones" wrote: > I use up-to-date Debian testing (wheezy), amd64 architecture. I compiled > and installed Python 3.3.0 alpha 3 using "altinstall". Debian wheezy comes > with python3.2 (and 2.6 and 2.7). I installed the Debian package > python3-bs4 (BeautifulSoup4 for Python3). I also downloaded a "clone" > developmental copy of 3.3. > > Python3.3a3 cannot find module bs4. Neither can the "clone". Python3.2 can > find the module. Here is a session with the "clone": python-dev is for development *of* Python. For general Python questions, you should ask on python-list: http://mail.python.org/mailman/listinfo/python-list (quick answer: you must install BeautifulSoup specifically for your compiled interpreter. Python does not share libraries accross different interpreter versions) Regards Antoine. From phd at phdru.name Mon May 7 23:12:04 2012 From: phd at phdru.name (Oleg Broytman) Date: Tue, 8 May 2012 01:12:04 +0400 Subject: [Python-Dev] Python 3.3 cannot import BeautifulSoup but Python 3.2 can In-Reply-To: <4FA833CA.80105@comcast.net> References: <4FA833CA.80105@comcast.net> Message-ID: <20120507211204.GA26949@iskra.aviel.ru> On Mon, May 07, 2012 at 04:42:50PM -0400, "Edward C. Jones" wrote: > I use up-to-date Debian testing (wheezy), amd64 architecture. I compiled > and installed Python 3.3.0 alpha 3 using "altinstall". Debian wheezy comes > with python3.2 (and 2.6 and 2.7). I installed the Debian package > python3-bs4 (BeautifulSoup4 for Python3). I also downloaded a "clone" > developmental copy of 3.3. > > Python3.3a3 cannot find module bs4. Could it be bs4 is installed in python3.2-specific path and hence it's not in python3.3 sys.path? Oleg. -- Oleg Broytman http://phdru.name/ phd at phdru.name Programmers don't die, they just GOSUB without RETURN. From vinay_sajip at yahoo.co.uk Mon May 7 23:25:41 2012 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Mon, 7 May 2012 21:25:41 +0000 (UTC) Subject: [Python-Dev] PEP 405 (pyvenv) and system Python upgrades References: <4FA440BF.50806@oddbird.net> <4FA79B40.4000609@v.loewis.de> <86CF238D-BAB4-4670-81DD-F63B10223110@mac.com> Message-ID: Ronald Oussoren mac.com> writes: > Because of this trick pyvenv won't know which executable the user actually > called and hence cannot find the pyvenv configuration file (which is next to > the stub executable). Ah, but the stub has been changed to set an environment variable, __PYTHONV_LAUNCHER__, which points to itself, before it execs the real Python. On OS X, Python code checks for this, rather than sys.executable, to determine the location of the pyvenv.cfg file. This seems to work for me (Ned Deily is looking into it more closely, I believe). Regards, Vinay Sajip From barry at python.org Mon May 7 23:42:43 2012 From: barry at python.org (Barry Warsaw) Date: Mon, 7 May 2012 14:42:43 -0700 Subject: [Python-Dev] Python 3.3 cannot import BeautifulSoup but Python 3.2 can In-Reply-To: <4FA833CA.80105@comcast.net> References: <4FA833CA.80105@comcast.net> Message-ID: <20120507144243.41ab409e@rivendell> On May 07, 2012, at 04:42 PM, Edward C. Jones wrote: >I use up-to-date Debian testing (wheezy), amd64 architecture. I compiled >and installed Python 3.3.0 alpha 3 using "altinstall". Debian wheezy comes >with python3.2 (and 2.6 and 2.7). I installed the Debian package >python3-bs4 (BeautifulSoup4 for Python3). I also downloaded a "clone" >developmental copy of 3.3. > >Python3.3a3 cannot find module bs4. Neither can the "clone". Python3.2 can >find the module. Here is a session with the "clone": Remember that Debian installs its system packages into dist-packages not site-packages. This is a Debian delta from upstream. http://wiki.debian.org/Python Cheers, -Barry From s.brunthaler at uci.edu Mon May 7 23:44:23 2012 From: s.brunthaler at uci.edu (stefan brunthaler) Date: Mon, 7 May 2012 14:44:23 -0700 Subject: [Python-Dev] Assigning copyright... In-Reply-To: References: <4F98EA9B.5060906@hotpy.org> <4F999CF7.6020304@v.loewis.de> Message-ID: > I think you'll find that we don't keep a lot of things secret about CPython > and its implementation. > Yeah, I agree that this is in principal a good thing and what makes CPython ideally suited for research. However, my optimizations make use of unused opcodes, which might be used in the future by actual CPython instructions (e.g., from my previous patch to the new one the YIELD_FROM instruction has been added.) I'd say the situation is similar to the threaded code/computed goto's issue. > Although this is different when it comes to the community. ?The PSU has > ? I am going to file a patch like Martin von Loewis suggested. Thanks, --stefan From victor.stinner at gmail.com Mon May 7 23:51:48 2012 From: victor.stinner at gmail.com (Victor Stinner) Date: Mon, 7 May 2012 23:51:48 +0200 Subject: [Python-Dev] [Python-checkins] cpython: Issue #14716: Change integer overflow check in unicode_writer_prepare() In-Reply-To: References: Message-ID: > However, it's not my fault. I suggested `newlen < (PY_SSIZE_T_MAX - > PY_SSIZE_T_MAX / 5)` and not `newlen <= (PY_SSIZE_T_MAX - PY_SSIZE_T_MAX / > 5)`. In this case, there is no overflow. Oh. I didn't understand why you replaced <= by <, and so I used <=. Anyway, I reverted the change for all reasons listed in this thread. Victor From p.f.moore at gmail.com Tue May 8 00:16:40 2012 From: p.f.moore at gmail.com (Paul Moore) Date: Mon, 7 May 2012 23:16:40 +0100 Subject: [Python-Dev] PEP 405 (pyvenv) and system Python upgrades In-Reply-To: <4FA836A8.9010305@v.loewis.de> References: <4FA440BF.50806@oddbird.net> <4FA79B40.4000609@v.loewis.de> <4FA7F9B8.5070507@oddbird.net> <4FA836A8.9010305@v.loewis.de> Message-ID: On 7 May 2012 21:55, "Martin v. L?wis" wrote: >> This sounds to me like a level of complexity unwarranted by the severity >> of the problem, especially when considering the additional burden it >> imposes on alternative Python implementations. > > > OTOH, it *significantly* reduces the burden on Python end users, for > whom creation of a venv under a privileged account is a significant > hassle. Personally, I would find a venv which required being run as an admin account to be essentially unusable on Windows (particularly Windows 7, where this means creating venvs in an "elevated" console window). Allowing for symlinks as an option is fine, I guess, but I'd be -1 on it being the default. Paul. From victor.stinner at gmail.com Tue May 8 00:25:43 2012 From: victor.stinner at gmail.com (Victor Stinner) Date: Tue, 8 May 2012 00:25:43 +0200 Subject: [Python-Dev] Point of building without threads? In-Reply-To: <20120507214943.579045c2@pitrou.net> References: <20120507214943.579045c2@pitrou.net> Message-ID: > I guess a long time ago, threading support in operating systems wasn't > very widespread, but these days all our supported platforms have it. > Is it still useful for production purposes to configure > --without-threads? Do people use this option for something else than > curiosity of mind? At work, I'm working on embedded systems (television set top boxes) with a Linux kernel with the GNU C library, and we do use threads! I'm not sure that Python runs on slower/smaller systems because they have other constrains like having very few memory, maybe no MMU and not using the glibc but ?libc for example. There is the "python-on-a-chip" project. It is written from scratch and is very different from CPython. I don't think that it uses threads. http://code.google.com/p/python-on-a-chip/ Victor From greg.ewing at canterbury.ac.nz Tue May 8 00:59:36 2012 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Tue, 08 May 2012 10:59:36 +1200 Subject: [Python-Dev] Adding types.build_class for 3.3 In-Reply-To: References: Message-ID: <4FA853D8.7080706@canterbury.ac.nz> Nick Coghlan wrote: > Instead, I'm now thinking we should add a _types C extension module > and expose the new function as types.build_class(). I don't want to > add an entire new module just for this feature, and the types module > seems like an appropriate home for it. Dunno. Currently the only thing the types module contains is types. A function would seem a bit out of place there. I don't think there's too much wrong with putting it in the operators module -- it's a function doing something that is otherwise expressed by special syntax. -- Greg From greg.ewing at canterbury.ac.nz Tue May 8 01:12:09 2012 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Tue, 08 May 2012 11:12:09 +1200 Subject: [Python-Dev] [Python-checkins] cpython: Issue #14716: Change integer overflow check in unicode_writer_prepare() In-Reply-To: References: Message-ID: <4FA856C9.9050206@canterbury.ac.nz> Mark Dickinson wrote: > Is the gain from this kind of micro-optimization really worth the cost > of replacing obviously correct code with code whose correctness needs > several minutes of thought? The original code isn't all that obviously correct to me either. I would need convincing that the arithmetic being used to check for overflow can't itself suffer from overflow. At least that much is obvious from the new version. -- Greg From ncoghlan at gmail.com Tue May 8 01:15:07 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 8 May 2012 09:15:07 +1000 Subject: [Python-Dev] Adding types.build_class for 3.3 In-Reply-To: <4FA853D8.7080706@canterbury.ac.nz> References: <4FA853D8.7080706@canterbury.ac.nz> Message-ID: For those suggesting the operator module is actually a good choice, there's no way to add this function without making major changes to the module description (go read it - I only realised the problem when I went to add the docs). It's a bad fit (*much* worse than types or a class method) -- Sent from my phone, thus the relative brevity :) On May 8, 2012 9:01 AM, "Greg Ewing" wrote: > Nick Coghlan wrote: > > Instead, I'm now thinking we should add a _types C extension module >> and expose the new function as types.build_class(). I don't want to >> add an entire new module just for this feature, and the types module >> seems like an appropriate home for it. >> > > Dunno. Currently the only thing the types module contains is > types. A function would seem a bit out of place there. > > I don't think there's too much wrong with putting it in the > operators module -- it's a function doing something that is > otherwise expressed by special syntax. > > -- > Greg > ______________________________**_________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/**mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/**mailman/options/python-dev/** > ncoghlan%40gmail.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ericsnowcurrently at gmail.com Tue May 8 02:57:24 2012 From: ericsnowcurrently at gmail.com (Eric Snow) Date: Mon, 7 May 2012 18:57:24 -0600 Subject: [Python-Dev] Adding types.build_class for 3.3 In-Reply-To: References: Message-ID: On Mon, May 7, 2012 at 6:15 AM, Nick Coghlan wrote: > On Mon, May 7, 2012 at 9:54 PM, Georg Brandl wrote: >> As for build_class: at the moment the types module really only has types, >> and to add build_class there is just about as weird as in operator IMO. > > Oh no, types is definitely less weird - at least it's related to the > type system, whereas the operator module is about operator syntax > (attrgetter, itemgetter and index are at least related to the dot > operator and subscripting syntax) > > Benjamin's suggestion of a class method on type may be a good one, > though. Then the invocation (using all arguments) would be: > > ?mcl.build_class(name, bases, keywords, exec_body) > > Works for me, so unless someone else can see a problem I've missed, > we'll go with that. +1 -eric From ncoghlan at gmail.com Tue May 8 03:59:08 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 8 May 2012 11:59:08 +1000 Subject: [Python-Dev] Adding types.build_class for 3.3 In-Reply-To: <4FA7D133.5020303@avl.com> References: <4FA7D133.5020303@avl.com> Message-ID: On Mon, May 7, 2012 at 11:42 PM, Hrvoje Niksic wrote: > On 05/07/2012 02:15 PM, Nick Coghlan wrote: >> >> Benjamin's suggestion of a class method on type may be a good one, >> though. Then the invocation (using all arguments) would be: >> >> ? mcl.build_class(name, bases, keywords, exec_body) >> >> Works for me, so unless someone else can see a problem I've missed, >> we'll go with that. > > > Note that to call mcl.build_class, you have to find a metaclass that works > for bases, which is the job of build_class. ?Putting it as a function in the > operator module seems like a better solution. No, the "mcl" in the call is just the designated metaclass - the *actual* metaclass of the resulting class definition may be something different. That's why this is a separate method from mcl.__new__. Cheers, Nick. -- Nick Coghlan?? |?? ncoghlan at gmail.com?? |?? Brisbane, Australia From josemonmaliakal at gmail.com Tue May 8 05:15:16 2012 From: josemonmaliakal at gmail.com (Josemon Maliakal) Date: Tue, 8 May 2012 08:45:16 +0530 Subject: [Python-Dev] Spread Python Message-ID: Python Freakz .........., Anyone of you interested to write a series of article regarding the great python language ?..The series must be including a brief of python history, applications , advantages, related topics and tutorials etc ...I hope it will be a great experience for you to share,learn and spread python for all.The article will be published in www.texplod.com. If any one interested please send me a private message -- *Regards* * * Cre at tivmindz www.texplod.com ra -------------- next part -------------- An HTML attachment was scrubbed... URL: From josemonmaliakal at gmail.com Tue May 8 05:19:51 2012 From: josemonmaliakal at gmail.com (Josemon Maliakal) Date: Tue, 8 May 2012 08:49:51 +0530 Subject: [Python-Dev] Spread Python In-Reply-To: References: Message-ID: Python Freakz .........., Anyone of you interested to write a series of article regarding the great python language ?..The series must be including a brief of python history, applications , advantages, related topics and tutorials etc ...I hope it will be a great experience for you to share,learn and spread python for all.The article will be published in www.texplod.com. If any one interested please send me a private message -- *Regards* * * Cre at tivmindz www.texplod.com ra -- *Regards* * * Cre at tivmindz www.texplod.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From dirkjan at ochtman.nl Tue May 8 08:49:14 2012 From: dirkjan at ochtman.nl (Dirkjan Ochtman) Date: Tue, 8 May 2012 08:49:14 +0200 Subject: [Python-Dev] Point of building without threads? In-Reply-To: <20120507214943.579045c2@pitrou.net> References: <20120507214943.579045c2@pitrou.net> Message-ID: On Mon, May 7, 2012 at 9:49 PM, Antoine Pitrou wrote: > I guess a long time ago, threading support in operating systems wasn't > very widespread, but these days all our supported platforms have it. > Is it still useful for production purposes to configure > --without-threads? Do people use this option for something else than > curiosity of mind? Gentoo (of course) allows users to build Python without threads; I'm not aware of anyone depending on that, but I sent out a quick question to gentoo-dev. Cheers, Dirkjan From kristjan at ccpgames.com Tue May 8 11:27:34 2012 From: kristjan at ccpgames.com (=?iso-8859-1?Q?Kristj=E1n_Valur_J=F3nsson?=) Date: Tue, 8 May 2012 09:27:34 +0000 Subject: [Python-Dev] Point of building without threads? In-Reply-To: <20120507214943.579045c2@pitrou.net> References: <20120507214943.579045c2@pitrou.net> Message-ID: > > I guess a long time ago, threading support in operating systems wasn't very > widespread, but these days all our supported platforms have it. > Is it still useful for production purposes to configure --without-threads? Do > people use this option for something else than curiosity of mind? For EVE Online, we started out not using threads but relying solely on tasklets. We only added thread supports perhaps five years ago. Other embedded projects _might_ be omitting thread support for a leaner interpreter, but I'm not sure the difference is that large. K From vinay_sajip at yahoo.co.uk Tue May 8 12:50:08 2012 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Tue, 8 May 2012 10:50:08 +0000 (UTC) Subject: [Python-Dev] PEP 405 (pyvenv) and system Python upgrades References: <4FA440BF.50806@oddbird.net> <4FA6F38E.7060508@oddbird.net> Message-ID: Carl Meyer oddbird.net> writes: > The "version" key could in theory be useful to know whether a particular > venv created by that Python has or has not yet been upgraded to match, > but since the upgrade is trivial and idempotent I don't think that is > important. Agreed it's not essential, but it also provides some useful information about the version (for a user, rather than the update script) without actually having to invoke the interpreter to check. Regards, Vinay Sajip From brett at python.org Tue May 8 17:13:19 2012 From: brett at python.org (Brett Cannon) Date: Tue, 8 May 2012 11:13:19 -0400 Subject: [Python-Dev] Python 3.3 cannot find BeautifulSoup but Python 3.2 can In-Reply-To: <4FA70920.80106@comcast.net> References: <4FA70920.80106@comcast.net> Message-ID: This really isn't the right mailing list to ask this kind of question (I know you got help last time with your Debian-specific problem, but that was because people got overly excited =). Python-dev is meant for discussing the development *of* Python, not using it or developing *with* it. I would try your question on comp.lang.python/python-list. On Sun, May 6, 2012 at 7:28 PM, Edward C. Jones wrote: > I use up-to-date Debian testing (wheezy), amd64 architecture. I compiled > and installed Python 3.3.0 alpha 3 using "altinstall". Debian wheezy comes > with python3.2 (and 2.6 and 2.7). I installed the Debian package > python3-bs4 (BeautifulSoup). I also downloaded a "clone" developmental > copy of 3.3. > > Python3.3a3 cannot find module bs4. Neither can the "clone". Python3.2 > can find > the module. Here is a session with the "clone": > > > ./python > Python 3.3.0a3+ (default:10ccbb90a8e9, May 6 2012, 19:11:02) > [GCC 4.6.3] on linux > Type "help", "copyright", "credits" or "license" for more information. > >>> import bs4 > Traceback (most recent call last): > File "", line 1, in > File "", line 974, in _find_and_load > ImportError: No module named 'bs4' > [71413 refs] > >>> > > What is the problem? > > > ______________________________**_________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/**mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/**mailman/options/python-dev/** > brett%40python.org > -------------- next part -------------- An HTML attachment was scrubbed... URL: From barry at python.org Tue May 8 17:26:08 2012 From: barry at python.org (Barry Warsaw) Date: Tue, 8 May 2012 08:26:08 -0700 Subject: [Python-Dev] Python 3.3 cannot find BeautifulSoup but Python 3.2 can In-Reply-To: References: <4FA70920.80106@comcast.net> Message-ID: <20120508082608.2b49e00a@resist.wooz.org> On May 08, 2012, at 11:13 AM, Brett Cannon wrote: >This really isn't the right mailing list to ask this kind of question (I >know you got help last time with your Debian-specific problem, but that was >because people got overly excited =). Python-dev is meant for discussing >the development *of* Python, not using it or developing *with* it. > >I would try your question on comp.lang.python/python-list. There are lots of good resources for Debian-specific Python issues (mailing lists, IRC, etc.). Start here: http://wiki.debian.org/Python Cheers, -Barry From carl at oddbird.net Tue May 8 18:14:44 2012 From: carl at oddbird.net (Carl Meyer) Date: Tue, 08 May 2012 10:14:44 -0600 Subject: [Python-Dev] PEP 405 (pyvenv) and system Python upgrades In-Reply-To: References: <4FA440BF.50806@oddbird.net> <4FA79B40.4000609@v.loewis.de> <4FA7F9B8.5070507@oddbird.net> <4FA836A8.9010305@v.loewis.de> Message-ID: <4FA94674.3030303@oddbird.net> Hi Paul, On 05/07/2012 04:16 PM, Paul Moore wrote: > On 7 May 2012 21:55, "Martin v. L?wis" wrote: >>> This sounds to me like a level of complexity unwarranted by the severity >>> of the problem, especially when considering the additional burden it >>> imposes on alternative Python implementations. >> >> >> OTOH, it *significantly* reduces the burden on Python end users, for >> whom creation of a venv under a privileged account is a significant >> hassle. > > Personally, I would find a venv which required being run as an admin > account to be essentially unusable on Windows (particularly Windows 7, > where this means creating venvs in an "elevated" console window). > > Allowing for symlinks as an option is fine, I guess, but I'd be -1 on > it being the default. I don't think anyone has proposed making symlinks the default on Windows. At this point the two options on Windows would be to use the --symlink option explicitly, or else to need to run "pyvenv --upgrade" on your envs if you upgrade the underlying Python in-place (and there's a breaking incompatibility between the new stdlib and the old interpreter, which there almost never will be if the past is any indication). I expect most users will opt for the latter option (equivalent to how current virtualenv works, except virtualenv doesn't have an --upgrade flag so you have to upgrade manually), but the former is also available if some prefer it. In any case, the situation will be no worse than it is with virtualenv today. Carl From albl500 at york.ac.uk Tue May 8 18:21:38 2012 From: albl500 at york.ac.uk (Alex Leach) Date: Tue, 08 May 2012 17:21:38 +0100 Subject: [Python-Dev] c/ElementTree XML serialisation Message-ID: Hi, I was just reading through the ElementTree source code, in order to figure out how I might override the serialisation on the text nodes of ('>' doesn't necessarily need escaping anyway, except as part of a ]]> sequence in text content.) -- And Clover mailto:and at doxdesk.com http://www.doxdesk.com/ gtalk:chat?jid=bobince at gmail.com From ncoghlan at gmail.com Wed May 9 10:21:24 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 9 May 2012 18:21:24 +1000 Subject: [Python-Dev] Adding types.build_class for 3.3 In-Reply-To: <4FAA2383.1030104@hotpy.org> References: <4FA7D133.5020303@avl.com> <4FAA2383.1030104@hotpy.org> Message-ID: On Wed, May 9, 2012 at 5:57 PM, Mark Shannon wrote: > As a consequence of this, making build_class either a class method or a > static method will cause a direct call to type.build_class() to fail as > neither class method nor static method are callable. We'll make sure it *behaves* like a static method, even if it's technically something else under the hood. Cheers, Nick. -- Nick Coghlan?? |?? ncoghlan at gmail.com?? |?? Brisbane, Australia From mark at hotpy.org Wed May 9 10:29:47 2012 From: mark at hotpy.org (Mark Shannon) Date: Wed, 09 May 2012 09:29:47 +0100 Subject: [Python-Dev] Adding types.build_class for 3.3 In-Reply-To: References: <4FA7D133.5020303@avl.com> <4FAA2383.1030104@hotpy.org> Message-ID: <4FAA2AFB.1010401@hotpy.org> Nick Coghlan wrote: > On Wed, May 9, 2012 at 5:57 PM, Mark Shannon wrote: >> As a consequence of this, making build_class either a class method or a >> static method will cause a direct call to type.build_class() to fail as >> neither class method nor static method are callable. > > We'll make sure it *behaves* like a static method, even if it's > technically something else under the hood. What I am saying is that you *don't* want it to behave like a static method, you want it to behave like a builtin-function. Cheers, Mark. From steve at pearwood.info Wed May 9 11:05:01 2012 From: steve at pearwood.info (Steven D'Aprano) Date: Wed, 9 May 2012 19:05:01 +1000 Subject: [Python-Dev] Adding types.build_class for 3.3 In-Reply-To: <4FAA2383.1030104@hotpy.org> References: <4FA7D133.5020303@avl.com> <4FAA2383.1030104@hotpy.org> Message-ID: <20120509090501.GC8882@ando> On Wed, May 09, 2012 at 08:57:55AM +0100, Mark Shannon wrote: > As a consequence of this, making build_class either a class method or a > static method will cause a direct call to type.build_class() to fail as > neither class method nor static method are callable. This might be a good reason to make them callable, especially staticmethod. I understand that at the language summit, this was considered a good idea: http://python.6.n6.nabble.com/Callable-non-descriptor-class-attributes-td1884829.html It certainly seems long overdue: confusion due to staticmethods not being callable go back a long time: http://stackoverflow.com/questions/3932948/ http://grokbase.com/t/python/python-list/11bhhtv95y/staticmethod-makes-my-brain-hurt http://mail.python.org/pipermail/python-list/2004-August/272593.html -- Steven From stefan at bytereef.org Wed May 9 11:26:29 2012 From: stefan at bytereef.org (Stefan Krah) Date: Wed, 9 May 2012 11:26:29 +0200 Subject: [Python-Dev] Point of building without threads? In-Reply-To: <20120508201330.4827ab8f@pitrou.net> References: <20120507214943.579045c2@pitrou.net> <20120508174032.GA13600@sleipnir.bytereef.org> <20120508201330.4827ab8f@pitrou.net> Message-ID: <20120509092629.GA24611@sleipnir.bytereef.org> Antoine Pitrou wrote: > > _decimal is about 12% faster without threads, because the expensive > > thread local context can be disabled. > > If you cached the last thread id along with the corresponding context, > perhaps it could speed things up in most scenarios? Nice. This reduces the speed difference to about 4%! Stefan Krah From martin at v.loewis.de Wed May 9 11:35:58 2012 From: martin at v.loewis.de (=?UTF-8?B?Ik1hcnRpbiB2LiBMw7Z3aXMi?=) Date: Wed, 09 May 2012 11:35:58 +0200 Subject: [Python-Dev] c/ElementTree XML serialisation In-Reply-To: References: Message-ID: <4FAA3A7E.7010601@v.loewis.de> > Is there a better way? Dear Alex, As Terry indicates: python-dev is a list for the development *of* Python, not the development *with* Python. Use the general python-list or the xml-sig list for this kind of question. Regards, Martin From martin at v.loewis.de Wed May 9 11:57:59 2012 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Wed, 09 May 2012 11:57:59 +0200 Subject: [Python-Dev] sys.implementation In-Reply-To: References: <20120426103150.4898a678@limelight.wooz.org> Message-ID: <4FAA3FA7.5070808@v.loewis.de> On 27.04.2012 09:34, Eric Snow wrote: > On Thu, Apr 26, 2012 at 8:31 AM, Barry Warsaw wrote: >> It's somewhat of a corner case, but I think a PEP couldn't hurt. The >> rationale section would be useful, at least. > > http://mail.python.org/pipermail/python-ideas/2012-April/014954.html Interesting proposal. I have a number of comments: - namespace vs. dictionary. Barry was using it in the form sys.implementation.version. I think this is how it should work, yet the PEP says that sys.implementation is a dictionary, which means that you would need to write sys.implementation['version'] I think the PEP should be silent on the type of sys.implementation, in particular, it should not mandate that it be a module (else "from sys.implementation import url" ought to work) [Update: it seems this is already reflected in the PEP. I wonder where the requirement for "a new type" comes from. I think making it a module should be conforming, even though probably discouraged for cpython, as it would make people think that they can rely on it being a module. I wish there was a builtin class class record: pass which can be used to create objects which have only attributes and no methods. Making it a type should also work: class implementation: name = "cpython" version = (3,3,0) in which case it would an instance of an existing type, namely, "type"] - under-specified attributes: "run-time environment" doesn't mean much to me - my first guess is that it is the set of environment variables, i.e. a dictionary identical to os.environ. I assume you mean something different ... gc_type is supposedly a string, but I cannot guess what possible values it may have. I also wonder why it's relevant. Regards, Martin From solipsis at pitrou.net Wed May 9 12:18:42 2012 From: solipsis at pitrou.net (Antoine Pitrou) Date: Wed, 9 May 2012 12:18:42 +0200 Subject: [Python-Dev] Point of building without threads? References: <20120507214943.579045c2@pitrou.net> <20120508174032.GA13600@sleipnir.bytereef.org> <20120508201330.4827ab8f@pitrou.net> <20120509092629.GA24611@sleipnir.bytereef.org> Message-ID: <20120509121842.5e96a6f2@pitrou.net> On Wed, 9 May 2012 11:26:29 +0200 Stefan Krah wrote: > Antoine Pitrou wrote: > > > _decimal is about 12% faster without threads, because the expensive > > > thread local context can be disabled. > > > > If you cached the last thread id along with the corresponding context, > > perhaps it could speed things up in most scenarios? > > Nice. This reduces the speed difference to about 4%! Note that you don't need the actual thread id, the Python thread state is sufficient: PyThreadState_GET should be a simply variable lookup in release builds. Regards Antoine. From albl500 at york.ac.uk Wed May 9 14:39:06 2012 From: albl500 at york.ac.uk (Alex Leach) Date: Wed, 09 May 2012 13:39:06 +0100 Subject: [Python-Dev] c/ElementTree XML serialisation In-Reply-To: <8AAEED99-D7BA-4CA1-A5E3-A446C2A82F3A@masklinn.net> References: <4806763.jdQveRlFAv@metabuntu> <2274079.NFJZeUYtyu@metabuntu> <8AAEED99-D7BA-4CA1-A5E3-A446C2A82F3A@masklinn.net> Message-ID: <1663979.4FilhUXTbd@metabuntu> On Wednesday 09 May 2012 08:02:09 Xavier Morel wrote: | Erm? you have them? What do you think `<` and `>` are? I was under the impression that those (let's call them) HTML representations of < and > don't get interpreted correctly by Javascript engines. I'll have to check that though.. | | As to writing a loop in javascript without < and >, == and != generally | work rather well, as does Array.prototype.forEach[0] Thanks for the tips! Cheers, Alex From albl500 at york.ac.uk Wed May 9 14:39:22 2012 From: albl500 at york.ac.uk (Alex Leach) Date: Wed, 09 May 2012 13:39:22 +0100 Subject: [Python-Dev] c/ElementTree XML serialisation In-Reply-To: <8AAEED99-D7BA-4CA1-A5E3-A446C2A82F3A@masklinn.net> References: <4806763.jdQveRlFAv@metabuntu> <2274079.NFJZeUYtyu@metabuntu> <8AAEED99-D7BA-4CA1-A5E3-A446C2A82F3A@masklinn.net> Message-ID: <6030384.0hQPcmLmJ3@metabuntu> On Wednesday 09 May 2012 08:02:09 Xavier Morel wrote: | Erm? you have them? What do you think `<` and `>` are? I was under the impression that those (let's call them) HTML representations of < and > don't get interpreted correctly by Javascript engines. I'll have to check that though.. | | As to writing a loop in javascript without < and >, == and != generally | work rather well, as does Array.prototype.forEach[0] Thanks for the tips! Cheers, Alex From albl500 at york.ac.uk Wed May 9 14:41:19 2012 From: albl500 at york.ac.uk (Alex Leach) Date: Wed, 09 May 2012 13:41:19 +0100 Subject: [Python-Dev] c/ElementTree XML serialisation In-Reply-To: <4FAA267A.8070400@doxdesk.com> References: <4806763.jdQveRlFAv@metabuntu> <2274079.NFJZeUYtyu@metabuntu> <4FAA267A.8070400@doxdesk.com> Message-ID: <2765620.51be3uRdfm@metabuntu> On Wednesday 09 May 2012 08:10:34 And Clover wrote: | On 2012-05-08 23:41, Alex Leach wrote: | > I still need< and> symbols. I have no idea how to write a loop in | > javascript without one. | Just &-escape them same as you do in any other element. 'style' and | 'script' do not require any special handling in XML. | | | | | | | | ('>' doesn't necessarily need escaping anyway, except as part of a ]]> | sequence in text content.) Cheers, I'll have to check that. I assumed < etc. didn't get executed properly in SVG files. Assuming you're correct (you python-devs always are!), it must have been another problem that was causing issues... From ericsnowcurrently at gmail.com Wed May 9 16:32:38 2012 From: ericsnowcurrently at gmail.com (Eric Snow) Date: Wed, 9 May 2012 08:32:38 -0600 Subject: [Python-Dev] Adding types.build_class for 3.3 In-Reply-To: <20120509090501.GC8882@ando> References: <4FA7D133.5020303@avl.com> <4FAA2383.1030104@hotpy.org> <20120509090501.GC8882@ando> Message-ID: On Wed, May 9, 2012 at 3:05 AM, Steven D'Aprano wrote: > I understand that at the language summit, this was > considered a good idea: > > http://python.6.n6.nabble.com/Callable-non-descriptor-class-attributes-td1884829.html -eric From brett at python.org Wed May 9 16:44:59 2012 From: brett at python.org (Brett Cannon) Date: Wed, 9 May 2012 10:44:59 -0400 Subject: [Python-Dev] sys.implementation In-Reply-To: <4FAA3FA7.5070808@v.loewis.de> References: <20120426103150.4898a678@limelight.wooz.org> <4FAA3FA7.5070808@v.loewis.de> Message-ID: On Wed, May 9, 2012 at 5:57 AM, "Martin v. L?wis" wrote: > On 27.04.2012 09:34, Eric Snow wrote: > >> On Thu, Apr 26, 2012 at 8:31 AM, Barry Warsaw wrote: >> >>> It's somewhat of a corner case, but I think a PEP couldn't hurt. The >>> rationale section would be useful, at least. >>> >> >> http://mail.python.org/**pipermail/python-ideas/2012-** >> April/014954.html >> > > Interesting proposal. I have a number of comments: > > - namespace vs. dictionary. Barry was using it in the form > sys.implementation.version. I think this is how it should work, > yet the PEP says that sys.implementation is a dictionary, which > means that you would need to write > sys.implementation['version'] > > I think the PEP should be silent on the type of sys.implementation, > in particular, it should not mandate that it be a module (else > "from sys.implementation import url" ought to work) > > [Update: it seems this is already reflected in the PEP. I wonder > where the requirement for "a new type" comes from. I think making > it a module should be conforming, even though probably discouraged > for cpython, as it would make people think that they can rely on > it being a module. That stems from people arguing over whether sys.implementation should be a dict or a tuple, and people going "it shouldn't be a sequence since it lacks a proper order", but then others saying "it shouldn't be a dict because it isn't meant to be mutated" (or something since I argued for the dict). So Eric (I suspect) went with what made sense to him. > I wish there was a builtin class > > class record: > pass > > which can be used to create objects which have only attributes > and no methods. I have heard this request now a bazillion times over the years. Why don't we have such an empty class sitting somewhere in the stdlib with a constructor classmethod to simply return new instances (and if you want to get really fancy, optional keyword arguments to update the instance with the keys/values passed in)? Is it simply because it's just two lines of Python that *everyone* has replicated at some point? -Brett > Making it a type should also work: > > class implementation: > name = "cpython" > version = (3,3,0) > > in which case it would an instance of an existing type, namely, > "type"] > > - under-specified attributes: "run-time environment" doesn't mean much > to me - my first guess is that it is the set of environment variables, > i.e. a dictionary identical to os.environ. I assume you mean something > different ... > gc_type is supposedly a string, but I cannot guess what possible > values it may have. I also wonder why it's relevant. > > Regards, > Martin > > ______________________________**_________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/**mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/**mailman/options/python-dev/** > brett%40python.org > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ericsnowcurrently at gmail.com Wed May 9 16:51:33 2012 From: ericsnowcurrently at gmail.com (Eric Snow) Date: Wed, 9 May 2012 08:51:33 -0600 Subject: [Python-Dev] sys.implementation In-Reply-To: <4FAA3FA7.5070808@v.loewis.de> References: <20120426103150.4898a678@limelight.wooz.org> <4FAA3FA7.5070808@v.loewis.de> Message-ID: On Wed, May 9, 2012 at 3:57 AM, "Martin v. L?wis" wrote: > Interesting proposal. I have a number of comments: Thanks for taking a look, Martin. > - namespace vs. dictionary. Barry was using it in the form > ?sys.implementation.version. I think this is how it should work, > ?yet the PEP says that sys.implementation is a dictionary, which > ?means that you would need to write > ?sys.implementation['version'] > > ?I think the PEP should be silent on the type of sys.implementation, > ?in particular, it should not mandate that it be a module (else > ?"from sys.implementation import url" ought to work) > > ?[Update: it seems this is already reflected in the PEP. I wonder > ? where the requirement for "a new type" comes from. I think making > ? it a module should be conforming, even though probably discouraged > ? for cpython, as it would make people think that they can rely on > ? it being a module. I wish there was a builtin class > > ? ? class record: > ? ? ? ?pass > > ? which can be used to create objects which have only attributes > ? and no methods. Making it a type should also work: > > ? ?class implementation: > ? ? ? name = "cpython" > ? ? ? version = (3,3,0) > > ?in which case it would an instance of an existing type, namely, > ?"type"] The type for sys.implementation has slowly shifted from the original proposal. At this point it's settled into where I think it will stay, a custom type. I've covered the choice of type in the rationale section. However, there may be merit in not being so specific about the type. I'll give that some thought. > - under-specified attributes: "run-time environment" doesn't mean much > ?to me - my first guess is that it is the set of environment variables, > ?i.e. a dictionary identical to os.environ. I assume you mean something > ?different ... > ?gc_type is supposedly a string, but I cannot guess what possible > ?values it may have. I also wonder why it's relevant. Sorry for the confusion. These are from the examples section for sys.implementation.metadata. I believe the current version of the PEP is more clear on the distinction. Thanks again for the feedback. -eric From solipsis at pitrou.net Wed May 9 16:50:39 2012 From: solipsis at pitrou.net (Antoine Pitrou) Date: Wed, 9 May 2012 16:50:39 +0200 Subject: [Python-Dev] sys.implementation References: <20120426103150.4898a678@limelight.wooz.org> <4FAA3FA7.5070808@v.loewis.de> Message-ID: <20120509165039.23c8bf56@pitrou.net> On Wed, 9 May 2012 10:44:59 -0400 Brett Cannon wrote: > > > I wish there was a builtin class > > > > class record: > > pass > > > > which can be used to create objects which have only attributes > > and no methods. > > > I have heard this request now a bazillion times over the years. Why don't > we have such an empty class sitting somewhere in the stdlib with a > constructor classmethod to simply return new instances (and if you want to > get really fancy, optional keyword arguments to update the instance with > the keys/values passed in)? Is it simply because it's just two lines of > Python that *everyone* has replicated at some point? In this case, it's because sys is a built-in module written in C, and importing Python code is a no-go. We have a similar problem with ABCs: io jumps through hoops to register its implementation classes with the I/O ABCs. Regards Antoine. From ericsnowcurrently at gmail.com Wed May 9 16:53:54 2012 From: ericsnowcurrently at gmail.com (Eric Snow) Date: Wed, 9 May 2012 08:53:54 -0600 Subject: [Python-Dev] sys.implementation In-Reply-To: References: <20120426103150.4898a678@limelight.wooz.org> <4FAA3FA7.5070808@v.loewis.de> Message-ID: On Wed, May 9, 2012 at 8:44 AM, Brett Cannon wrote: > On Wed, May 9, 2012 at 5:57 AM, "Martin v. L?wis" > wrote: >> ?[Update: it seems this is already reflected in the PEP. I wonder >> ? where the requirement for "a new type" comes from. I think making >> ? it a module should be conforming, even though probably discouraged >> ? for cpython, as it would make people think that they can rely on >> ? it being a module. > > > That stems from people arguing over whether sys.implementation should be a > dict or a tuple, and people going "it shouldn't be a sequence since it lacks > a proper order", but then others saying "it shouldn't be a dict because it > isn't meant to be mutated" (or something since I argued for the dict). So > Eric (I suspect) went with what made sense to him. Yep. -eric From ericsnowcurrently at gmail.com Wed May 9 17:07:03 2012 From: ericsnowcurrently at gmail.com (Eric Snow) Date: Wed, 9 May 2012 09:07:03 -0600 Subject: [Python-Dev] sys.implementation In-Reply-To: <20120509165039.23c8bf56@pitrou.net> References: <20120426103150.4898a678@limelight.wooz.org> <4FAA3FA7.5070808@v.loewis.de> <20120509165039.23c8bf56@pitrou.net> Message-ID: On Wed, May 9, 2012 at 8:50 AM, Antoine Pitrou wrote: > On Wed, 9 May 2012 10:44:59 -0400 > Brett Cannon wrote: >> >> > I wish there was a builtin class >> > >> > ? ? class record: >> > ? ? ? ?pass >> > >> > ? which can be used to create objects which have only attributes >> > ? and no methods. >> >> >> I have heard this request now a bazillion times over the years. Why don't >> we have such an empty class sitting somewhere in the stdlib with a >> constructor classmethod to simply return new instances (and if you want to >> get really fancy, optional keyword arguments to update the instance with >> the keys/values passed in)? Is it simply because it's just two lines of >> Python that *everyone* has replicated at some point? > > In this case, it's because sys is a built-in module written in C, and > importing Python code is a no-go. Something I've remotely considered is an approach like namedtuple takes: define a pure Python template, .format() it, and exec it. However, this is partly a reflection of my lack of familiarity with using the C-API. As well, the only place I've seen this done in the CPython code base is with namedtuple. Consequently, I was planning on taking the normal approach. Should the namedtuple-exec technique be avoided at the C level? -eric From vinay_sajip at yahoo.co.uk Wed May 9 17:08:47 2012 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Wed, 9 May 2012 15:08:47 +0000 (UTC) Subject: [Python-Dev] Rietveld integration problem? Message-ID: I recently added an issue http://bugs.python.org/issue14712 to track PEP 405 integration. The code is in my sandbox repo, and I've created a patch using the "Create Patch" button on the tracker. The diff has been created, but I don't seem to see a "review" link to Rietveld. The issue is on Rietveld but for some reason the patch set hasn't attached to it. An earlier version of the patch (which I've since unlinked) had the same problem - even after several days, the "review" link never appeared. It's done so automatically in the past, e.g. for issue http://bugs.python.org/issue1521950 Is it something I'm doing wrong, or is there a problem with the issue tracker/Rietveld integration? Regards, Vinay Sajip From brett at python.org Wed May 9 17:09:27 2012 From: brett at python.org (Brett Cannon) Date: Wed, 9 May 2012 11:09:27 -0400 Subject: [Python-Dev] sys.implementation In-Reply-To: <20120509165039.23c8bf56@pitrou.net> References: <20120426103150.4898a678@limelight.wooz.org> <4FAA3FA7.5070808@v.loewis.de> <20120509165039.23c8bf56@pitrou.net> Message-ID: On Wed, May 9, 2012 at 10:50 AM, Antoine Pitrou wrote: > On Wed, 9 May 2012 10:44:59 -0400 > Brett Cannon wrote: > > > > > I wish there was a builtin class > > > > > > class record: > > > pass > > > > > > which can be used to create objects which have only attributes > > > and no methods. > > > > > > I have heard this request now a bazillion times over the years. Why don't > > we have such an empty class sitting somewhere in the stdlib with a > > constructor classmethod to simply return new instances (and if you want > to > > get really fancy, optional keyword arguments to update the instance with > > the keys/values passed in)? Is it simply because it's just two lines of > > Python that *everyone* has replicated at some point? > > In this case, it's because sys is a built-in module written in C, and > importing Python code is a no-go. > Sure, but couldn't we define this "empty" class in C code so that you can use the C API with it as well and just provide a C function to get a new instance? -Brett > > We have a similar problem with ABCs: io jumps through hoops to register > its implementation classes with the I/O ABCs. > > Regards > > Antoine. > > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > http://mail.python.org/mailman/options/python-dev/brett%40python.org > -------------- next part -------------- An HTML attachment was scrubbed... URL: From barry at python.org Wed May 9 18:21:12 2012 From: barry at python.org (Barry Warsaw) Date: Wed, 9 May 2012 09:21:12 -0700 Subject: [Python-Dev] Adding types.build_class for 3.3 In-Reply-To: References: <4FA7D133.5020303@avl.com> Message-ID: <20120509092112.1c29dac8@resist> On May 09, 2012, at 05:20 PM, Nick Coghlan wrote: >Ah, good point. In that case, consider me convinced: static method it >is. It can join mro() as the second non-underscore method defined on >type(). +1 If I may dip into the bikeshed paint once more. I think it would be useful to establish a naming convention for alternative constructors implemented as {static,class}methods. I don't like `build_class()` much. Would you be opposed to `type.new()`? -Barry From barry at python.org Wed May 9 18:39:35 2012 From: barry at python.org (Barry Warsaw) Date: Wed, 9 May 2012 09:39:35 -0700 Subject: [Python-Dev] sys.implementation In-Reply-To: References: <4F9A1283.2010206@hastings.org> <20120508181402.1bed0686@resist> Message-ID: <20120509093935.168e3628@resist> On May 08, 2012, at 09:03 PM, Eric Snow wrote: >> ? This is defined as the version of the implementation, while >> ? sys.version_info is the version of the language. ?The semantics of >> ? sys.version_info have been sufficiently squishy in the past, as the XXX >> ? implies. ?This PEP shouldn't try to untangle that, so I think it be better >> ? to represent both values explicitly in sys.implementation. > >Definitely tangled. So, sys.implementation.version and >sys.implementation.lang_version? Also, my inclination is to not have >a sys.version equivalent in sys.implementation for now, in the >interest of keeping things as bare-bones as possible to start. I think it would be fine, if PEP 421 was clear about the semantics of sys.implementation.version and was silent about trying to disentangle the semantics of sys.version. IOW, the PEP can say that the semantics of sys.version are fuzzy, but not try to clear it up. Then it would be explicit (as it already is) that sys.implementation.version describes the version of the implementation, not the version of the language compliance. If the latter is useful later, then it can use the PEP 421 described process to propose a new sys.implementation value that describes a language compliance variable. >> ?* I mildly prefer sys.implementation.name to be lower cased. ?My intuition >> is ? that to be safe, most comparisons of the value will coerce to lower >> case, ? which is easy enough in Python, but perhaps a bit more of a pain in >> C. ?I ? don't feel really strongly about this though. ?(A counter argument >> is that ? the value might be printed, so a case-sensitive version would be >> better.) > >I'm not sure it makes a lot of difference. Since cache_tag will be >provided by the implementation, I don't have any strong use-cases that >would constrain the name itself. Still, my preference is for lower >case as well. I'll mull this one over. Cool. As I said, I'm on the fence about it too. :) >> ?* I've said before that I think the keys in sys.implementation should be >> ? locked down (i.e. not writable). > >I've been on and off about this. It's certainly not too hard to do, >it makes sense, and I don't see a lot of reason not to do it. I'll >give it a go. Maybe it doesn't matter. We're all adults here. I think there are two good choices. Either the PEP explicitly describes sys.implementation as immutable, or it is silent about it. IOW, I don't think the PEP should explicitly allow sys.implementation to be mutable. >>?I think sys.implementation.metadata >> ? should be the same type. > >This I wonder about. The more I think about it, the more it fits. >I'll give it a day and if that still holds I'll work it in. Cool. >Thanks for the feedback, Barry! Feels like the PEP's getting close. Indeed! Cheers, -Barry From martin at v.loewis.de Wed May 9 18:45:29 2012 From: martin at v.loewis.de (martin at v.loewis.de) Date: Wed, 09 May 2012 18:45:29 +0200 Subject: [Python-Dev] sys.implementation In-Reply-To: References: <20120426103150.4898a678@limelight.wooz.org> <4FAA3FA7.5070808@v.loewis.de> <20120509165039.23c8bf56@pitrou.net> Message-ID: <20120509184529.Horde.a7N_RNjz9kRPqp8p1bnnLBA@webmail.df.eu> > Sure, but couldn't we define this "empty" class in C code so that you can > use the C API with it as well and just provide a C function to get a new > instance? That would be easy. All you need is a dictoffset. Regards, Martin From barry at python.org Wed May 9 18:50:48 2012 From: barry at python.org (Barry Warsaw) Date: Wed, 9 May 2012 09:50:48 -0700 Subject: [Python-Dev] sys.implementation In-Reply-To: References: <20120426103150.4898a678@limelight.wooz.org> <4FAA3FA7.5070808@v.loewis.de> Message-ID: <20120509095048.49e61725@resist> On May 09, 2012, at 08:51 AM, Eric Snow wrote: >The type for sys.implementation has slowly shifted from the original >proposal. At this point it's settled into where I think it will stay, >a custom type. I've covered the choice of type in the rationale >section. However, there may be merit in not being so specific about >the type. I'll give that some thought. Right. See my previous follow up for what I think the PEP should say about the semantics of the type, without being so specific about the actual type. Cheers, -Barry From barry at python.org Wed May 9 18:53:11 2012 From: barry at python.org (Barry Warsaw) Date: Wed, 9 May 2012 09:53:11 -0700 Subject: [Python-Dev] sys.implementation In-Reply-To: References: <20120426103150.4898a678@limelight.wooz.org> <4FAA3FA7.5070808@v.loewis.de> <20120509165039.23c8bf56@pitrou.net> Message-ID: <20120509095311.3a2c25c2@resist> On May 09, 2012, at 11:09 AM, Brett Cannon wrote: >Sure, but couldn't we define this "empty" class in C code so that you can >use the C API with it as well and just provide a C function to get a new >instance? +1 ISTM to be a companion to collections.namedtuple. IWBNI this new type was also exposed in the collections module. -Barry From albl500 at alexleach.org.uk Wed May 9 14:31:54 2012 From: albl500 at alexleach.org.uk (Alex Leach) Date: Wed, 09 May 2012 13:31:54 +0100 Subject: [Python-Dev] c/ElementTree XML serialisation In-Reply-To: <4FAA267A.8070400@doxdesk.com> References: <4806763.jdQveRlFAv@metabuntu> <2274079.NFJZeUYtyu@metabuntu> <4FAA267A.8070400@doxdesk.com> Message-ID: <19958568.0TVJAH4hef@metabuntu> On Wednesday 09 May 2012 08:10:34 And Clover wrote: | On 2012-05-08 23:41, Alex Leach wrote: | > I still need< and> symbols. I have no idea how to write a loop in | > javascript without one. | Just &-escape them same as you do in any other element. 'style' and | 'script' do not require any special handling in XML. | | | | | | | | ('>' doesn't necessarily need escaping anyway, except as part of a ]]> | sequence in text content.) Cheers, I'll have to check that. I assumed < etc. didn't get executed properly in SVG files. Assuming you're correct (you python-devs always are!), it must have been another problem that was causing issues... From brett at python.org Wed May 9 21:18:12 2012 From: brett at python.org (Brett Cannon) Date: Wed, 9 May 2012 15:18:12 -0400 Subject: [Python-Dev] Adding types.build_class for 3.3 In-Reply-To: <20120509092112.1c29dac8@resist> References: <4FA7D133.5020303@avl.com> <20120509092112.1c29dac8@resist> Message-ID: On Wed, May 9, 2012 at 12:21 PM, Barry Warsaw wrote: > On May 09, 2012, at 05:20 PM, Nick Coghlan wrote: > > >Ah, good point. In that case, consider me convinced: static method it > >is. It can join mro() as the second non-underscore method defined on > >type(). > > +1 > > If I may dip into the bikeshed paint once more. I think it would be > useful to > establish a naming convention for alternative constructors implemented as > {static,class}methods. I don't like `build_class()` much. Would you be > opposed to `type.new()`? Depends on how far you want this new term to go since "new" is somewhat overloaded thanks to __new__(). I personally like create(). -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Thu May 10 00:14:55 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 10 May 2012 08:14:55 +1000 Subject: [Python-Dev] Adding types.build_class for 3.3 In-Reply-To: References: <4FA7D133.5020303@avl.com> <20120509092112.1c29dac8@resist> Message-ID: Given that the statement form is referred to as a "class definition", and this is the dynamic equivalent, I'm inclined to go with "type.define()". Dynamic type definition is more consistent with existing terminology than dynamic type creation. -- Sent from my phone, thus the relative brevity :) -------------- next part -------------- An HTML attachment was scrubbed... URL: From rdmurray at bitdance.com Thu May 10 01:44:01 2012 From: rdmurray at bitdance.com (R. David Murray) Date: Wed, 09 May 2012 19:44:01 -0400 Subject: [Python-Dev] Adding types.build_class for 3.3 In-Reply-To: References: <4FA7D133.5020303@avl.com> <20120509092112.1c29dac8@resist> Message-ID: <20120509234402.037352500D2@webabinitio.net> On Thu, 10 May 2012 08:14:55 +1000, Nick Coghlan wrote: > Given that the statement form is referred to as a "class definition", and > this is the dynamic equivalent, I'm inclined to go with "type.define()". > Dynamic type definition is more consistent with existing terminology than > dynamic type creation. Yeah, but that's the statement form. I think of the characters in the .py file as the definition. If I'm creating a class dynamically...I'm creating(*) it, not defining it. I don't think it's a big deal, though. Either word will work. --David (*) Actually, come to think of it, I probably refer to it as "constructing" the class, rather than creating or defining it. It's the type equivalent of constructing an instance, perhaps? From greg.ewing at canterbury.ac.nz Thu May 10 02:03:24 2012 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Thu, 10 May 2012 12:03:24 +1200 Subject: [Python-Dev] Adding types.build_class for 3.3 In-Reply-To: References: <4FA7D133.5020303@avl.com> Message-ID: <4FAB05CC.3000806@canterbury.ac.nz> Nick Coghlan wrote: > In that case, consider me convinced: static method it > is. -0.93. Static methods are generally unpythonic, IMO. Python is not Java -- we have modules. Something should only go in a class namespace if it somehow relates to that particular class, and other classes could might implement it differently. That's not the case with build_class(). -- Greg From ericsnowcurrently at gmail.com Thu May 10 02:16:35 2012 From: ericsnowcurrently at gmail.com (Eric Snow) Date: Wed, 9 May 2012 18:16:35 -0600 Subject: [Python-Dev] sys.implementation In-Reply-To: <20120509093935.168e3628@resist> References: <4F9A1283.2010206@hastings.org> <20120508181402.1bed0686@resist> <20120509093935.168e3628@resist> Message-ID: On Wed, May 9, 2012 at 10:39 AM, Barry Warsaw wrote: > On May 08, 2012, at 09:03 PM, Eric Snow wrote: >>Definitely tangled. ?So, sys.implementation.version and >>sys.implementation.lang_version? ?Also, my inclination is to not have >>a sys.version equivalent in sys.implementation for now, in the >>interest of keeping things as bare-bones as possible to start. > > I think it would be fine, if PEP 421 was clear about the semantics of > sys.implementation.version and was silent about trying to disentangle the > semantics of sys.version. ?IOW, the PEP can say that the semantics of > sys.version are fuzzy, but not try to clear it up. ?Then it would be explicit > (as it already is) that sys.implementation.version describes the version of > the implementation, not the version of the language compliance. > > If the latter is useful later, then it can use the PEP 421 described process > to propose a new sys.implementation value that describes a language compliance > variable. Whoops. I meant that I'm okay with having sys.implementation.version and sys.implementation.lang_version, both as analogs to sys.version_info. My inclination is to not include the analog to sys.version. However, with the way that you put it, I think you're right that we could put off the lang_version attribute for later. >>> ?* I've said before that I think the keys in sys.implementation should be >>> ? locked down (i.e. not writable). >> >>I've been on and off about this. ?It's certainly not too hard to do, >>it makes sense, and I don't see a lot of reason not to do it. ?I'll >>give it a go. > > Maybe it doesn't matter. ?We're all adults here. ?I think there are two good > choices. ?Either the PEP explicitly describes sys.implementation as immutable, > or it is silent about it. ?IOW, I don't think the PEP should explicitly allow > sys.implementation to be mutable. Agreed. -eric From larry at hastings.org Thu May 10 02:47:49 2012 From: larry at hastings.org (Larry Hastings) Date: Wed, 09 May 2012 17:47:49 -0700 Subject: [Python-Dev] sys.implementation In-Reply-To: <20120509095311.3a2c25c2@resist> References: <20120426103150.4898a678@limelight.wooz.org> <4FAA3FA7.5070808@v.loewis.de> <20120509165039.23c8bf56@pitrou.net> <20120509095311.3a2c25c2@resist> Message-ID: <4FAB1035.9040807@hastings.org> On 05/09/2012 09:53 AM, Barry Warsaw wrote: > On May 09, 2012, at 11:09 AM, Brett Cannon wrote: > >> Sure, but couldn't we define this "empty" class in C code so that you can >> use the C API with it as well and just provide a C function to get a new >> instance? > +1 > > ISTM to be a companion to collections.namedtuple. IWBNI this new type was > also exposed in the collections module. I like Alex Martelli's approach, which I recall was exactly this: class namespace: def __init__(**kwargs): self.__dict__ = kwargs That means all the initializers you pass in to the constructor get turned into members. //arry/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Thu May 10 03:33:14 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 10 May 2012 11:33:14 +1000 Subject: [Python-Dev] sys.implementation In-Reply-To: <20120509095311.3a2c25c2@resist> References: <20120426103150.4898a678@limelight.wooz.org> <4FAA3FA7.5070808@v.loewis.de> <20120509165039.23c8bf56@pitrou.net> <20120509095311.3a2c25c2@resist> Message-ID: On Thu, May 10, 2012 at 2:53 AM, Barry Warsaw wrote: > On May 09, 2012, at 11:09 AM, Brett Cannon wrote: > >>Sure, but couldn't we define this "empty" class in C code so that you can >>use the C API with it as well and just provide a C function to get a new >>instance? > > +1 > > ISTM to be a companion to collections.namedtuple. ?IWBNI this new type was > also exposed in the collections module. Please, no. No new just-like-a-namedtuple-except-you-can't-iterate-over-it type, and definitely not one exposed in the collections module. We've been over this before: collections.namedtuple *is* the standard library's answer for structured records. TOOWTDI, and the way we have already chosen includes iterability as one of its expected properties. People shouldn't be so quick to throw away ordered iterability - it makes a lot of things like generic display routines and serialisation *much* easier, and without incurring the runtime cost of multiple calls to sorted(). The original concern (that sys.implementation may differ in length across implementations) has been eliminated by moving all implementation specific values into sys.implementation.metadata. The top-level record now has a consistent length for any given language version. The fact that the length of the record may still change in *future* versions of Python can be handled through documentation - we can simply tell people "it's OK to iterate over the fields, and even to use tuple unpacking, but if you want to future proof your code, make sure to include the trailing ', *' to ignore any fields that get added in the future". To help focus the discussion, I am going to propose a specific (albeit still somewhat hypothetical) use case: a cross-implementation testing system that wants to be able to consistently capture data about the version of Python that was tested, *without* needing implementation specific code in the metadata capture step. That produces the following set of requirements: 1. sys.implementation should be immutable for a given execution of Python 2. repr(sys.implementation) should display all recorded details of the implementation 3. It should be possible to write a generic, future-proof, serialisation of sys.implementation that captures all recorded details collections.namedtuple meets all those requirements (_structseq doesn't meet the last one at this point, but more on that later) It also shows that we only need to place very minimal constraints on sys.implementation.metadata: the type of that structure can be entirely up to the implementation, with the only requirement being that repr(sys.implementation.metadata) should produce a string that accurately captures the stored information. The only cross-implementation operation that is supported on that field would be to take its representation. Now, because this is going to be in the sys module, for CPython, we would actually need to use _structseq rather than collections.namedtuple. To do so in a useful way, _structseq should get two new additions: - the "_fields" attribute - the "_asdict" method As an added bonus, sys.float_info and sys.hash_info would also gain the new operations. Regards, Nick. -- Nick Coghlan?? |?? ncoghlan at gmail.com?? |?? Brisbane, Australia From ncoghlan at gmail.com Thu May 10 03:45:50 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 10 May 2012 11:45:50 +1000 Subject: [Python-Dev] Adding types.build_class for 3.3 In-Reply-To: <4FAB05CC.3000806@canterbury.ac.nz> References: <4FA7D133.5020303@avl.com> <4FAB05CC.3000806@canterbury.ac.nz> Message-ID: On Thu, May 10, 2012 at 10:03 AM, Greg Ewing wrote: > Python is not Java -- we have modules. Something should > only go in a class namespace if it somehow relates to > that particular class, and other classes could might > implement it differently. That's not the case with > build_class(). Not true - you *will* get a type instance out of any sane call to type.define(). Technically, you could probably declare your metaclass such that you get a non-type object instead (just as you can with a class definition), but that means you're really just using an insanely convoluted way to make an ordinary function call. If you didn't want to invoke the full PEP 3115 find metaclass/prepare namespace/execute body/call metaclass dance, why would you be calling type.define instead of just calling the metaclass directly? Cheers, Nick. -- Nick Coghlan?? |?? ncoghlan at gmail.com?? |?? Brisbane, Australia From barry at python.org Thu May 10 04:26:41 2012 From: barry at python.org (Barry Warsaw) Date: Wed, 9 May 2012 19:26:41 -0700 Subject: [Python-Dev] sys.implementation In-Reply-To: <4FAB1035.9040807@hastings.org> References: <20120426103150.4898a678@limelight.wooz.org> <4FAA3FA7.5070808@v.loewis.de> <20120509165039.23c8bf56@pitrou.net> <20120509095311.3a2c25c2@resist> <4FAB1035.9040807@hastings.org> Message-ID: <20120509192641.3e4db529@rivendell> On May 09, 2012, at 05:47 PM, Larry Hastings wrote: >I like Alex Martelli's approach, which I recall was exactly this: > > class namespace: > def __init__(**kwargs): > self.__dict__ = kwargs > > >That means all the initializers you pass in to the constructor get turned >into members. Well, "__init__(self, **kws)", but yeah. :) -Barry From ericsnowcurrently at gmail.com Thu May 10 06:00:18 2012 From: ericsnowcurrently at gmail.com (Eric Snow) Date: Wed, 9 May 2012 22:00:18 -0600 Subject: [Python-Dev] sys.implementation In-Reply-To: References: <20120426103150.4898a678@limelight.wooz.org> <4FAA3FA7.5070808@v.loewis.de> <20120509165039.23c8bf56@pitrou.net> <20120509095311.3a2c25c2@resist> Message-ID: On Wed, May 9, 2012 at 7:33 PM, Nick Coghlan wrote: > Please, no. No new > just-like-a-namedtuple-except-you-can't-iterate-over-it type, and > definitely not one exposed in the collections module. > > We've been over this before: collections.namedtuple *is* the standard > library's answer for structured records. TOOWTDI, and the way we have > already chosen includes iterability as one of its expected properties. > > People shouldn't be so quick to throw away ordered iterability - it > makes a lot of things like generic display routines and serialisation > *much* easier, and without incurring the runtime cost of multiple > calls to sorted(). > > The original concern (that sys.implementation may differ in length > across implementations) has been eliminated by moving all > implementation specific values into sys.implementation.metadata. The > top-level record now has a consistent length for any given language > version. The fact that the length of the record may still change in > *future* versions of Python can be handled through documentation - we > can simply tell people "it's OK to iterate over the fields, and even > to use tuple unpacking, but if you want to future proof your code, > make sure to include the trailing ', *' to ignore any fields that get > added in the future". Good point. I'd forgotten about that new tuple unpacking syntax. FYI, a named tuple was my original choice. I'm going to sit on this a few days though. Who knows, we might be back to using a dict by then. Key points: * has dotted access * is immutable Both reflect the nature of sys.implementation as currently described (a fixed set of attributes on an dotted-access namespace). > To help focus the discussion, I am going to propose a specific (albeit > still somewhat hypothetical) use case: a cross-implementation testing > system that wants to be able to consistently capture data about the > version of Python that was tested, *without* needing implementation > specific code in the metadata capture step. > > That produces the following set of requirements: > > 1. sys.implementation should be immutable for a given execution of Python > 2. repr(sys.implementation) should display all recorded details of the > implementation > 3. It should be possible to write a generic, future-proof, > serialisation of sys.implementation that captures all recorded details > > collections.namedtuple meets all those requirements (_structseq > doesn't meet the last one at this point, but more on that later) > > It also shows that we only need to place very minimal constraints on > sys.implementation.metadata: the type of that structure can be > entirely up to the implementation, with the only requirement being > that repr(sys.implementation.metadata) should produce a string that > accurately captures the stored information. The only > cross-implementation operation that is supported on that field would > be to take its representation. Nice. > Now, because this is going to be in the sys module, for CPython, we > would actually need to use _structseq rather than > collections.namedtuple. To do so in a useful way, _structseq should > get two new additions: > - the "_fields" attribute > - the "_asdict" method Sounds good to me regardless of the PEP. -eric From ncoghlan at gmail.com Thu May 10 07:02:20 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 10 May 2012 15:02:20 +1000 Subject: [Python-Dev] Allow use of sphinx-autodoc in the standard library documentation? Message-ID: One of the requirements for acceptance of PEP 3144 if the provision of a reStructuredText API reference. The current plan for dealing with that is to use Spinx apidoc to create a skeleton, and then capture the rewritten ReST produced by autodoc. However, it occurs to me that the module reference could actually *use* autodoc, with additional prose added to supplement the docstrings, rather than completely replacing them. I'd initially dismissed this idea out of hand, but recently realised I didn't have any especially strong arguments against it (and there are all the usual "avoid double-keying data" arguments in favour). So, given the advantages of autodoc, is there a concrete reason why we can't use it for the documentation of *new* standard library modules? Regards, Nick. -- Nick Coghlan?? |?? ncoghlan at gmail.com?? |?? Brisbane, Australia From martin at v.loewis.de Thu May 10 07:34:14 2012 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Thu, 10 May 2012 07:34:14 +0200 Subject: [Python-Dev] sys.implementation In-Reply-To: References: <20120426103150.4898a678@limelight.wooz.org> <4FAA3FA7.5070808@v.loewis.de> <20120509165039.23c8bf56@pitrou.net> <20120509095311.3a2c25c2@resist> Message-ID: <4FAB5356.3050903@v.loewis.de> > We've been over this before: collections.namedtuple *is* the standard > library's answer for structured records. And I think it's a really ugly answer, and one that deserves a parallel that is not a tuple. If this is contentious, I'll write a PEP. Regards, Martin From ncoghlan at gmail.com Thu May 10 07:40:02 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 10 May 2012 15:40:02 +1000 Subject: [Python-Dev] sys.implementation In-Reply-To: <4FAB5356.3050903@v.loewis.de> References: <20120426103150.4898a678@limelight.wooz.org> <4FAA3FA7.5070808@v.loewis.de> <20120509165039.23c8bf56@pitrou.net> <20120509095311.3a2c25c2@resist> <4FAB5356.3050903@v.loewis.de> Message-ID: On Thu, May 10, 2012 at 3:34 PM, "Martin v. L?wis" wrote: >> We've been over this before: collections.namedtuple *is* the standard >> library's answer for structured records. > > > And I think it's a really ugly answer, and one that deserves a parallel > that is not a tuple. If this is contentious, I'll write a PEP. Yes, please. One of the original arguments that delayed the introduction of the collections module was the fear that it would lead to the introduction of tons of subtly different data types, making it substantially harder to choose the right data type for a given application. I see this proposal as the realisation of that fear. Unordered types can be a PITA for testing, for display and for generic serialisation, so I definitely want to see a PEP before we add a new one that basically has its sole reason for existence being "you can iterate over and index the field values in a namedtuple". Regards, Nick. -- Nick Coghlan?? |?? ncoghlan at gmail.com?? |?? Brisbane, Australia From mark at hotpy.org Thu May 10 10:11:02 2012 From: mark at hotpy.org (Mark Shannon) Date: Thu, 10 May 2012 09:11:02 +0100 Subject: [Python-Dev] Adding types.build_class for 3.3 In-Reply-To: References: <4FA7D133.5020303@avl.com> <4FAB05CC.3000806@canterbury.ac.nz> Message-ID: <4FAB7816.4060705@hotpy.org> Nick Coghlan wrote: > On Thu, May 10, 2012 at 10:03 AM, Greg Ewing > wrote: >> Python is not Java -- we have modules. Something should >> only go in a class namespace if it somehow relates to >> that particular class, and other classes could might >> implement it differently. That's not the case with >> build_class(). +1 > > Not true - you *will* get a type instance out of any sane call to > type.define(). Technically, you could probably declare your metaclass > such that you get a non-type object instead (just as you can with a > class definition), but that means you're really just using an insanely > convoluted way to make an ordinary function call. If you didn't want > to invoke the full PEP 3115 find metaclass/prepare namespace/execute > body/call metaclass dance, why would you be calling type.define > instead of just calling the metaclass directly? By attaching the 'define' object to type, then the descriptor protocol causes problems if 'define' is a desriptor since type is its own metaclass. If it were a builtin-function, then there would be no problem. A module-level builtin-function is more likely to be correct and seems to me to be more Pythonic. Not that I'm a good judge of Pythonicness :) Finally, could you remind me how the proposed type.define differs from builtins.__build_class__? I can't see any difference (apart from parameter ordering and the extra name parameter in builtins.__build_class__). Cheers, Mark. From ncoghlan at gmail.com Thu May 10 10:27:23 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 10 May 2012 18:27:23 +1000 Subject: [Python-Dev] Adding types.build_class for 3.3 In-Reply-To: <4FAB7816.4060705@hotpy.org> References: <4FA7D133.5020303@avl.com> <4FAB05CC.3000806@canterbury.ac.nz> <4FAB7816.4060705@hotpy.org> Message-ID: On Thu, May 10, 2012 at 6:11 PM, Mark Shannon wrote: > Finally, could you remind me how the proposed type.define differs from > builtins.__build_class__? > I can't see any difference (apart from parameter ordering and the extra name > parameter in builtins.__build_class__). It's the officially supported version of that API - the current version is solely a CPython implementation detail. The main change is moving exec_body to the end and making it optional, thus bringing the interface more in line with calling a metaclass directly. The name parameter is actually still there, I just forgot to include in the examples in the thread. You'll find there's no mention of __build_class__ in the language or library references, thus there's currently no official way to programmatically define a new type in a way that complies with PEP 3115. (This is explained in the tracker issue and the previous thread that proposed the name operator.build_class) I prefer type.define(), but if the descriptor protocol does cause problems (and making static methods callable doesn't fix them), then we'll move it somewhere else (probably types.define() with a new _types module). Cheers, Nick. -- Nick Coghlan?? |?? ncoghlan at gmail.com?? |?? Brisbane, Australia From solipsis at pitrou.net Thu May 10 10:57:49 2012 From: solipsis at pitrou.net (Antoine Pitrou) Date: Thu, 10 May 2012 10:57:49 +0200 Subject: [Python-Dev] sys.implementation References: <20120426103150.4898a678@limelight.wooz.org> <4FAA3FA7.5070808@v.loewis.de> <20120509165039.23c8bf56@pitrou.net> <20120509095311.3a2c25c2@resist> Message-ID: <20120510105749.7401f1d2@pitrou.net> On Thu, 10 May 2012 11:33:14 +1000 Nick Coghlan wrote: > > The original concern (that sys.implementation may differ in length > across implementations) has been eliminated by moving all > implementation specific values into sys.implementation.metadata. Uh. It's scary the kind of things people sometimes come up with :-) sys.implementation.metadata looks like a completely over-engineered concept. Please, let's just make sys.implementation a dict and stop bothering about ordering and iterability. Regards Antoine. From mark at hotpy.org Thu May 10 11:51:46 2012 From: mark at hotpy.org (Mark Shannon) Date: Thu, 10 May 2012 10:51:46 +0100 Subject: [Python-Dev] Adding types.build_class for 3.3 In-Reply-To: References: <4FA7D133.5020303@avl.com> <4FAB05CC.3000806@canterbury.ac.nz> <4FAB7816.4060705@hotpy.org> Message-ID: <4FAB8FB2.3000106@hotpy.org> Nick Coghlan wrote: > On Thu, May 10, 2012 at 6:11 PM, Mark Shannon wrote: >> Finally, could you remind me how the proposed type.define differs from >> builtins.__build_class__? >> I can't see any difference (apart from parameter ordering and the extra name >> parameter in builtins.__build_class__). > > It's the officially supported version of that API - the current > version is solely a CPython implementation detail. The main change is > moving exec_body to the end and making it optional, thus bringing the > interface more in line with calling a metaclass directly. The name > parameter is actually still there, I just forgot to include in the > examples in the thread. > > You'll find there's no mention of __build_class__ in the language or > library references, thus there's currently no official way to > programmatically define a new type in a way that complies with PEP > 3115. > > (This is explained in the tracker issue and the previous thread that > proposed the name operator.build_class) > > I prefer type.define(), but if the descriptor protocol does cause > problems (and making static methods callable doesn't fix them), then > we'll move it somewhere else (probably types.define() with a new > _types module). The problem with any non-overriding descriptor bound to type is that when accessed as type.define it acts as a descriptor, but when accessed from any other class, say int.define it acts as a non-overriding meta-descriptor; c.f. type.mro() vs int.mro() To avoid this problem, type.define needs to be an overriding descriptor such as a property (a PyGetSetDef in C). Alternatively, just make 'define' a non-descriptor. It would unusual (unique?) to have a builtin-function (rather than a method-descriptor) bound to a class, but I can't see any fundamental reason not to. Cheers, Mark. From ncoghlan at gmail.com Thu May 10 12:19:58 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 10 May 2012 20:19:58 +1000 Subject: [Python-Dev] Adding types.build_class for 3.3 In-Reply-To: <4FAB8FB2.3000106@hotpy.org> References: <4FA7D133.5020303@avl.com> <4FAB05CC.3000806@canterbury.ac.nz> <4FAB7816.4060705@hotpy.org> <4FAB8FB2.3000106@hotpy.org> Message-ID: On Thu, May 10, 2012 at 7:51 PM, Mark Shannon wrote: > To avoid this problem, type.define needs to be an overriding descriptor > such as a property (a PyGetSetDef in C). > Alternatively, just make 'define' a non-descriptor. > It would unusual (unique?) to have a builtin-function (rather than a > method-descriptor) bound to a class, but I can't see any fundamental reason > not to. Oh, I see what you mean now. I hadn't fully thought through the implications of the static method being accessible through all instances of type, and that really doesn't seem like a good outcome. Exposing it through the types module as an ordinary builtin function is starting to sound a lot more attractive at this point. Cheers, Nick. -- Nick Coghlan?? |?? ncoghlan at gmail.com?? |?? Brisbane, Australia From ncoghlan at gmail.com Thu May 10 12:21:17 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 10 May 2012 20:21:17 +1000 Subject: [Python-Dev] sys.implementation In-Reply-To: <20120510105749.7401f1d2@pitrou.net> References: <20120426103150.4898a678@limelight.wooz.org> <4FAA3FA7.5070808@v.loewis.de> <20120509165039.23c8bf56@pitrou.net> <20120509095311.3a2c25c2@resist> <20120510105749.7401f1d2@pitrou.net> Message-ID: On Thu, May 10, 2012 at 6:57 PM, Antoine Pitrou wrote: > On Thu, 10 May 2012 11:33:14 +1000 > Nick Coghlan wrote: >> >> The original concern (that sys.implementation may differ in length >> across implementations) has been eliminated by moving all >> implementation specific values into sys.implementation.metadata. > > Uh. It's scary the kind of things people sometimes come up with :-) > > sys.implementation.metadata looks like a completely over-engineered > concept. Please, let's just make sys.implementation a dict and stop > bothering about ordering and iterability. Aye. Add a rule that all implementation specific (i.e. not defined in the PEP) keys must be prefixed with an underscore and I'm sold. Cheers, Nick. -- Nick Coghlan?? |?? ncoghlan at gmail.com?? |?? Brisbane, Australia From greg.ewing at canterbury.ac.nz Thu May 10 13:26:39 2012 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Thu, 10 May 2012 23:26:39 +1200 Subject: [Python-Dev] Adding types.build_class for 3.3 In-Reply-To: References: <4FA7D133.5020303@avl.com> <4FAB05CC.3000806@canterbury.ac.nz> Message-ID: <4FABA5EF.4000509@canterbury.ac.nz> Nick Coghlan wrote: > On Thu, May 10, 2012 at 10:03 AM, Greg Ewing > wrote: > >>Something should >>only go in a class namespace if it somehow relates to >>that particular class, and other classes could might >>implement it differently. That's not the case with >>build_class(). > > Not true - you *will* get a type instance out of any sane call to > type.define(). You must have misunderstood me, because this doesn't relate to the point I was making at all. What I'm trying to say is that I don't see the justification for making build_class() a static method rather than a plain module-level function. To my way of thinking, static methods are very rarely justified in Python. The only argument so far in this case seems to be "we can't make up our minds where else to put it", which is rather lame. A stronger argument would be if there were cases where you wanted to define a subclass of type that implemented build_class differently. But how would it get called, if everyone who uses build_class invokes it using 'type.build_class()'? -- Greg From rdmurray at bitdance.com Thu May 10 15:10:20 2012 From: rdmurray at bitdance.com (R. David Murray) Date: Thu, 10 May 2012 09:10:20 -0400 Subject: [Python-Dev] Allow use of sphinx-autodoc in the standard library documentation? In-Reply-To: References: Message-ID: <20120510131021.017D32500D2@webabinitio.net> On Thu, 10 May 2012 15:02:20 +1000, Nick Coghlan wrote: > So, given the advantages of autodoc, is there a concrete reason why we > can't use it for the documentation of *new* standard library modules? Yes. Our reason is that docstrings should be relatively lightweight, and that the sphinx docs should be the more expansive version of the documentation. Yes, this creates a double-maintenance burden, and the two sometimes slip of of sync. But it is a long-standing rule and will doubtless require considerable bikeshedding if we want to change it :) --David From g.brandl at gmx.net Thu May 10 16:42:28 2012 From: g.brandl at gmx.net (Georg Brandl) Date: Thu, 10 May 2012 16:42:28 +0200 Subject: [Python-Dev] sys.implementation In-Reply-To: <20120510105749.7401f1d2@pitrou.net> References: <20120426103150.4898a678@limelight.wooz.org> <4FAA3FA7.5070808@v.loewis.de> <20120509165039.23c8bf56@pitrou.net> <20120509095311.3a2c25c2@resist> <20120510105749.7401f1d2@pitrou.net> Message-ID: On 10.05.2012 10:57, Antoine Pitrou wrote: > On Thu, 10 May 2012 11:33:14 +1000 > Nick Coghlan wrote: >> >> The original concern (that sys.implementation may differ in length >> across implementations) has been eliminated by moving all >> implementation specific values into sys.implementation.metadata. > > Uh. It's scary the kind of things people sometimes come up with :-) .oO( Namespaception ) > sys.implementation.metadata looks like a completely over-engineered > concept. Please, let's just make sys.implementation a dict and stop > bothering about ordering and iterability. Agreed. Georg From g.brandl at gmx.net Thu May 10 16:50:31 2012 From: g.brandl at gmx.net (Georg Brandl) Date: Thu, 10 May 2012 16:50:31 +0200 Subject: [Python-Dev] Allow use of sphinx-autodoc in the standard library documentation? In-Reply-To: References: Message-ID: On 10.05.2012 07:02, Nick Coghlan wrote: > One of the requirements for acceptance of PEP 3144 if the provision of > a reStructuredText API reference. > > The current plan for dealing with that is to use Spinx apidoc to > create a skeleton, and then capture the rewritten ReST produced by > autodoc. > > However, it occurs to me that the module reference could actually > *use* autodoc, with additional prose added to supplement the > docstrings, rather than completely replacing them. > > I'd initially dismissed this idea out of hand, but recently realised I > didn't have any especially strong arguments against it (and there are > all the usual "avoid double-keying data" arguments in favour). > > So, given the advantages of autodoc, is there a concrete reason why we > can't use it for the documentation of *new* standard library modules? The one reason that prevented me from ever proposing this is that to do this, you have to build the docs with exactly the Python you want the documentation for. This can create unpleasant dependencies for e.g. distributions, and also for developers who cannot build the docs without first building Python, which can be a hassle, especially under Windows. But of course we want people to build the docs before committing... The other issue is the extensiveness of the docstrings vs. separate docs. So far, the latter have always been more comprehensive than the docstrings, which works nicely for me (although crucial info is sometimes missing in the docstring). This difference can be kept, to a degree, even with autodoc, by putting additional content into the autodoc directive, but that renders one big autodoc advantage moot: having the documentation in one place only. Even worse, if someone changes the docstring, the addendum in the rst file may become wrong/obsolete/incomprehensible. cheers, Georg From tjreedy at udel.edu Thu May 10 17:31:49 2012 From: tjreedy at udel.edu (Terry Reedy) Date: Thu, 10 May 2012 11:31:49 -0400 Subject: [Python-Dev] sys.implementation In-Reply-To: References: <20120426103150.4898a678@limelight.wooz.org> <4FAA3FA7.5070808@v.loewis.de> <20120509165039.23c8bf56@pitrou.net> <20120509095311.3a2c25c2@resist> <20120510105749.7401f1d2@pitrou.net> Message-ID: On 5/10/2012 10:42 AM, Georg Brandl wrote: > On 10.05.2012 10:57, Antoine Pitrou wrote: >> On Thu, 10 May 2012 11:33:14 +1000 >> Nick Coghlan wrote: >>> >>> The original concern (that sys.implementation may differ in length >>> across implementations) has been eliminated by moving all >>> implementation specific values into sys.implementation.metadata. >> >> Uh. It's scary the kind of things people sometimes come up with :-) > > .oO( Namespaception ) > >> sys.implementation.metadata looks like a completely over-engineered >> concept. Please, let's just make sys.implementation a dict and stop >> bothering about ordering and iterability. Thank you for cutting through the knot. > Agreed. Ditto. Iterability is good and should be part of all python collections. People who want a sorted representation should just use sorted(d.items) as with other sortable mappings. Nick's idea of prefixing local implementation keys with '_' would nicely group them together on sorted displays. -- Terry Jan Reedy From scott+python-dev at scottdial.com Thu May 10 17:34:41 2012 From: scott+python-dev at scottdial.com (Scott Dial) Date: Thu, 10 May 2012 11:34:41 -0400 Subject: [Python-Dev] sys.implementation In-Reply-To: References: <20120426103150.4898a678@limelight.wooz.org> <4FAA3FA7.5070808@v.loewis.de> <20120509165039.23c8bf56@pitrou.net> <20120509095311.3a2c25c2@resist> <4FAB5356.3050903@v.loewis.de> Message-ID: <4FABE011.10306@scottdial.com> On 5/10/2012 1:40 AM, Nick Coghlan wrote: > Unordered types can be a PITA for testing, for display and for generic > serialisation, so I definitely want to see a PEP before we add a new > one that basically has its sole reason for existence being "you can > iterate over and index the field values in a namedtuple". > I could use those same arguments (testing, display, and generic serialization) as reasons /against/ using an ordered type (when it's not the intent of the author that it be ordered). That is: - Testing: This is an attractive nuisance because adding fields later can break the tests if the author of the type had no intent on the ordering being guaranteed (or the number of fields). - Display: If the author of the type didn't intend on the ordering being guaranteed, then the display could become nonsense when changing versions (e.g., upgrading a 3rd-party library). - Generic Serialization: Again, if the author didn't plan for that, then they could add additional fields or re-arrange them in a way that makes naive serialization give incorrect instances. The point is that the author of the type can't protect you from these mistakes if a namedtuple is used. The only tool the author of the type has at their disposal to warn you of your ways is documentation. If the type doesn't support iteration or indexing, then you are forced to do it right, because it's the only way that works. Furthermore, what is wrong with a repr that yields a dict-like string "record(a=1, b=2, c=3)" with regard to testing and display? -- Scott Dial scott at scottdial.com From barry at python.org Thu May 10 19:31:56 2012 From: barry at python.org (Barry Warsaw) Date: Thu, 10 May 2012 10:31:56 -0700 Subject: [Python-Dev] Adding types.build_class for 3.3 In-Reply-To: <20120509234402.037352500D2@webabinitio.net> References: <4FA7D133.5020303@avl.com> <20120509092112.1c29dac8@resist> <20120509234402.037352500D2@webabinitio.net> Message-ID: <20120510103156.70755479@resist> On May 09, 2012, at 07:44 PM, R. David Murray wrote: >On Thu, 10 May 2012 08:14:55 +1000, Nick Coghlan wrote: >> Given that the statement form is referred to as a "class definition", and >> this is the dynamic equivalent, I'm inclined to go with "type.define()". >> Dynamic type definition is more consistent with existing terminology than >> dynamic type creation. > >Yeah, but that's the statement form. I think of the characters in the >.py file as the definition. If I'm creating a class dynamically...I'm >creating(*) it, not defining it. That's exactly how I think about it too. >I don't think it's a big deal, though. Either word will work. > >--David > >(*) Actually, come to think of it, I probably refer to it as >"constructing" the class, rather than creating or defining it. >It's the type equivalent of constructing an instance, perhaps? If, as Nick proposes in a different message, it actually does make better sense to put this as a module-level function, then putting `class` in the name makes sense. types.{new,create,build,construct}_class() works for me, in roughly that order. -Barry From stefan at bytereef.org Thu May 10 20:23:08 2012 From: stefan at bytereef.org (Stefan Krah) Date: Thu, 10 May 2012 20:23:08 +0200 Subject: [Python-Dev] Point of building without threads? In-Reply-To: <20120509121842.5e96a6f2@pitrou.net> References: <20120507214943.579045c2@pitrou.net> <20120508174032.GA13600@sleipnir.bytereef.org> <20120508201330.4827ab8f@pitrou.net> <20120509092629.GA24611@sleipnir.bytereef.org> <20120509121842.5e96a6f2@pitrou.net> Message-ID: <20120510182308.GA7328@sleipnir.bytereef.org> Antoine Pitrou wrote: > On Wed, 9 May 2012 11:26:29 +0200 > Stefan Krah wrote: > > Antoine Pitrou wrote: > > > > _decimal is about 12% faster without threads, because the expensive > > > > thread local context can be disabled. > > > > > > If you cached the last thread id along with the corresponding context, > > > perhaps it could speed things up in most scenarios? > > > > Nice. This reduces the speed difference to about 4%! > > Note that you don't need the actual thread id, the Python thread state > is sufficient: PyThreadState_GET should be a simply variable lookup in > release builds. I've tried both ways now and the speed gain is roughly the same. Perhaps the interpreter as a whole is slightly faster --without-threads? That would explain the remaining speed difference of 4%. Stefan Krah From barry at python.org Thu May 10 20:30:26 2012 From: barry at python.org (Barry Warsaw) Date: Thu, 10 May 2012 11:30:26 -0700 Subject: [Python-Dev] sys.implementation In-Reply-To: <20120510105749.7401f1d2@pitrou.net> References: <20120426103150.4898a678@limelight.wooz.org> <4FAA3FA7.5070808@v.loewis.de> <20120509165039.23c8bf56@pitrou.net> <20120509095311.3a2c25c2@resist> <20120510105749.7401f1d2@pitrou.net> Message-ID: <20120510113026.4a0e45f9@resist> On May 10, 2012, at 10:57 AM, Antoine Pitrou wrote: >sys.implementation.metadata looks like a completely over-engineered >concept. Please, let's just make sys.implementation a dict and stop >bothering about ordering and iterability. I guess the question is whether immutability is useful to preserve in sys.implementation. I'm on the fence, but maybe "we're all consenting adults" and "simplest thing that will work" should rule the day. Using a straight up dict and underscores for non-PEP-defined values is certainly simple, and easy to implement and describe. -Barry From stefan at bytereef.org Thu May 10 20:43:11 2012 From: stefan at bytereef.org (Stefan Krah) Date: Thu, 10 May 2012 20:43:11 +0200 Subject: [Python-Dev] Point of building without threads? In-Reply-To: <20120510182308.GA7328@sleipnir.bytereef.org> References: <20120507214943.579045c2@pitrou.net> <20120508174032.GA13600@sleipnir.bytereef.org> <20120508201330.4827ab8f@pitrou.net> <20120509092629.GA24611@sleipnir.bytereef.org> <20120509121842.5e96a6f2@pitrou.net> <20120510182308.GA7328@sleipnir.bytereef.org> Message-ID: <20120510184311.GA7561@sleipnir.bytereef.org> Stefan Krah wrote: > > > Nice. This reduces the speed difference to about 4%! > > > > Note that you don't need the actual thread id, the Python thread state > > is sufficient: PyThreadState_GET should be a simply variable lookup in > > release builds. > > I've tried both ways now and the speed gain is roughly the same. > > Perhaps the interpreter as a whole is slightly faster --without-threads? > That would explain the remaining speed difference of 4%. Actually this seems to be the case: In the benchmark floats are also about 3% faster without threads. Stefan Krah From solipsis at pitrou.net Thu May 10 20:41:53 2012 From: solipsis at pitrou.net (Antoine Pitrou) Date: Thu, 10 May 2012 20:41:53 +0200 Subject: [Python-Dev] Point of building without threads? References: <20120507214943.579045c2@pitrou.net> <20120508174032.GA13600@sleipnir.bytereef.org> <20120508201330.4827ab8f@pitrou.net> <20120509092629.GA24611@sleipnir.bytereef.org> <20120509121842.5e96a6f2@pitrou.net> <20120510182308.GA7328@sleipnir.bytereef.org> Message-ID: <20120510204153.200c6071@pitrou.net> On Thu, 10 May 2012 20:23:08 +0200 Stefan Krah wrote: > Antoine Pitrou wrote: > > On Wed, 9 May 2012 11:26:29 +0200 > > Stefan Krah wrote: > > > Antoine Pitrou wrote: > > > > > _decimal is about 12% faster without threads, because the expensive > > > > > thread local context can be disabled. > > > > > > > > If you cached the last thread id along with the corresponding context, > > > > perhaps it could speed things up in most scenarios? > > > > > > Nice. This reduces the speed difference to about 4%! > > > > Note that you don't need the actual thread id, the Python thread state > > is sufficient: PyThreadState_GET should be a simply variable lookup in > > release builds. > > I've tried both ways now and the speed gain is roughly the same. > > Perhaps the interpreter as a whole is slightly faster --without-threads? > That would explain the remaining speed difference of 4%. It may be. Can you try other benchmarks? Regards Antoine. From steve at pearwood.info Fri May 11 00:39:56 2012 From: steve at pearwood.info (Steven D'Aprano) Date: Fri, 11 May 2012 08:39:56 +1000 Subject: [Python-Dev] sys.implementation In-Reply-To: References: <20120426103150.4898a678@limelight.wooz.org> <4FAA3FA7.5070808@v.loewis.de> <20120509165039.23c8bf56@pitrou.net> <20120509095311.3a2c25c2@resist> <20120510105749.7401f1d2@pitrou.net> Message-ID: <4FAC43BC.7000705@pearwood.info> Nick Coghlan wrote: > On Thu, May 10, 2012 at 6:57 PM, Antoine Pitrou wrote: >> On Thu, 10 May 2012 11:33:14 +1000 >> Nick Coghlan wrote: >>> The original concern (that sys.implementation may differ in length >>> across implementations) has been eliminated by moving all >>> implementation specific values into sys.implementation.metadata. >> Uh. It's scary the kind of things people sometimes come up with :-) >> >> sys.implementation.metadata looks like a completely over-engineered >> concept. Please, let's just make sys.implementation a dict and stop >> bothering about ordering and iterability. > > Aye. Add a rule that all implementation specific (i.e. not defined in > the PEP) keys must be prefixed with an underscore and I'm sold. So now we're adding a new convention to single underscore names? Single underscore names are implementation-specific details that you shouldn't use or rely on, except in sys.implementation, where they are an optional part of the public interface. There are public keys which all Pythons are expected to support. There are public keys which only some Pythons are expected to support. We may call them "implementation-specific", but that refers to the PYTHON implementation, not the implementation of sys.implementation. As far as sys.implementation is concerned, these keys are public but optional, not private. Hence labelling them with a single underscore overrides the convention that _single underscore names are private, for one that they are public but optional. I'm not so sure that this is a good idea. To bike-shed a moment, if we're going to stick to a dict, and you really think that it is important to have a naming convention to distinguish between optional keys and those common to all Pythons, perhaps a better convention would be to prefix the optional keys with a dot, or a dash. This introduces a new convention without clashing with an existing one. -- Steven From steve at pearwood.info Fri May 11 00:45:42 2012 From: steve at pearwood.info (Steven D'Aprano) Date: Fri, 11 May 2012 08:45:42 +1000 Subject: [Python-Dev] sys.implementation In-Reply-To: <4FABE011.10306@scottdial.com> References: <20120426103150.4898a678@limelight.wooz.org> <4FAA3FA7.5070808@v.loewis.de> <20120509165039.23c8bf56@pitrou.net> <20120509095311.3a2c25c2@resist> <4FAB5356.3050903@v.loewis.de> <4FABE011.10306@scottdial.com> Message-ID: <4FAC4516.8050602@pearwood.info> Scott Dial wrote: > On 5/10/2012 1:40 AM, Nick Coghlan wrote: >> Unordered types can be a PITA for testing, for display and for generic >> serialisation, so I definitely want to see a PEP before we add a new >> one that basically has its sole reason for existence being "you can >> iterate over and index the field values in a namedtuple". >> > > I could use those same arguments (testing, display, and generic > serialization) as reasons /against/ using an ordered type (when it's not > the intent of the author that it be ordered). That is: > > - Testing: This is an attractive nuisance because adding fields later > can break the tests if the author of the type had no intent on the > ordering being guaranteed (or the number of fields). As opposed to unordered types when you add a new field? I don't think so. When you add new fields, you can break tests *regardless* of whether the type is ordered or unordered. If you change the public interface to a type, you have to change any tests that rely on it. But unordered types have a tendency to break tests even when you don't add new fields (at least doctests), simply because their display can arbitrarily change. Given the choice between having to re-write tests once in a blue moon when there is a backwards-incompatible change to a type, and having tests randomly break every time I run them because the display is unpredictable, I know which one I prefer. +1 to Nick's request for a PEP. -- Steven From ncoghlan at gmail.com Fri May 11 01:08:20 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 11 May 2012 09:08:20 +1000 Subject: [Python-Dev] Allow use of sphinx-autodoc in the standard library documentation? In-Reply-To: References: Message-ID: Thanks, that's pretty much what I thought (although I hadn't considered the sys.path and version dependency) . I'll proceed with the original plan. Cheers, Nick. -- Sent from my phone, thus the relative brevity :) -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Fri May 11 01:14:16 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 11 May 2012 09:14:16 +1000 Subject: [Python-Dev] sys.implementation In-Reply-To: <4FAC43BC.7000705@pearwood.info> References: <20120426103150.4898a678@limelight.wooz.org> <4FAA3FA7.5070808@v.loewis.de> <20120509165039.23c8bf56@pitrou.net> <20120509095311.3a2c25c2@resist> <20120510105749.7401f1d2@pitrou.net> <4FAC43BC.7000705@pearwood.info> Message-ID: No, they're private keys for the benefit of the implementation authors. Still, it's already the case that underscore prefixed names are sometimes used just for namespace separation (e.g. collections.namedtuple) -- Sent from my phone, thus the relative brevity :) -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Fri May 11 02:56:00 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 11 May 2012 10:56:00 +1000 Subject: [Python-Dev] Adding types.build_class for 3.3 In-Reply-To: <20120510103156.70755479@resist> References: <4FA7D133.5020303@avl.com> <20120509092112.1c29dac8@resist> <20120509234402.037352500D2@webabinitio.net> <20120510103156.70755479@resist> Message-ID: On Fri, May 11, 2012 at 3:31 AM, Barry Warsaw wrote: > On May 09, 2012, at 07:44 PM, R. David Murray wrote: >>(*) Actually, come to think of it, I probably refer to it as >>"constructing" the class, rather than creating or defining it. >>It's the type equivalent of constructing an instance, perhaps? > > If, as Nick proposes in a different message, it actually does make better > sense to put this as a module-level function, then putting `class` in the name > makes sense. ?types.{new,create,build,construct}_class() works for me, in > roughly that order. Yeah, as a result of the discussion in this thread, and considering the parallel with "imp.new_module()", I'm going to update the tracker issue to propose the addition of "types.new_class()" as the dynamic API for the PEP 3115 metaclass protocol. The question now moves to the implementation strategy - whether we redirect to the C machinery as originally proposed (either via __build_class__ or a new _types module) or just reimplement the algorithm in pure Python. The latter is actually quite an appealing concept, since it becomes a cross-check on the native C version. Cheers, Nick. -- Nick Coghlan?? |?? ncoghlan at gmail.com?? |?? Brisbane, Australia From mark at hotpy.org Fri May 11 08:21:36 2012 From: mark at hotpy.org (Mark Shannon) Date: Fri, 11 May 2012 07:21:36 +0100 Subject: [Python-Dev] Adding types.build_class for 3.3 In-Reply-To: References: <4FA7D133.5020303@avl.com> <20120509092112.1c29dac8@resist> <20120509234402.037352500D2@webabinitio.net> <20120510103156.70755479@resist> Message-ID: <4FACAFF0.2070706@hotpy.org> Nick Coghlan wrote: > On Fri, May 11, 2012 at 3:31 AM, Barry Warsaw wrote: >> On May 09, 2012, at 07:44 PM, R. David Murray wrote: >>> (*) Actually, come to think of it, I probably refer to it as >>> "constructing" the class, rather than creating or defining it. >>> It's the type equivalent of constructing an instance, perhaps? >> If, as Nick proposes in a different message, it actually does make better >> sense to put this as a module-level function, then putting `class` in the name >> makes sense. types.{new,create,build,construct}_class() works for me, in >> roughly that order. > > Yeah, as a result of the discussion in this thread, and considering > the parallel with "imp.new_module()", I'm going to update the tracker > issue to propose the addition of "types.new_class()" as the dynamic > API for the PEP 3115 metaclass protocol. > > The question now moves to the implementation strategy - whether we > redirect to the C machinery as originally proposed (either via > __build_class__ or a new _types module) or just reimplement the > algorithm in pure Python. The latter is actually quite an appealing > concept, since it becomes a cross-check on the native C version. +1 to a pure Python version. Cheers, Mark From gcbirzan at gmail.com Fri May 11 12:33:57 2012 From: gcbirzan at gmail.com (=?UTF-8?Q?George=2DCristian_B=C3=AErzan?=) Date: Fri, 11 May 2012 13:33:57 +0300 Subject: [Python-Dev] Exception and ABCs / issue #12029 Message-ID: As per http://bugs.python.org/issue12029 , ABC registration cannot be used for exceptions. This was introduced in a commit that fixed a recursion limit problem back in 2008 (http://hg.python.org/cpython/rev/d6e86a96f9b3/#l8.10). This was later fixed in a different way and improved upon in the 2.x branch in http://hg.python.org/cpython/rev/7e86fa255fc2 and http://hg.python.org/cpython/rev/57de1ad15c54 respectively. Applying the fix from the 2.x branch for doesn't make any tests fail, and it fixes the problem described in the bug report. There are, however, two questions about this: * Is this a feature, or a bug? I would say that it's a bug, but even if it's not, it has to be documented, since one generally assumes that it will work. * Even so, is it worth fixing, considering the limited use cases for it? This slows exception type checking 3 times. I added a new test to pybench: before: TryRaiseExceptClass: 25ms 25ms 0.39us 0.216ms after: TryRaiseExceptException: 31ms 31ms 0.48us 0.214ms However, that doesn't tell the whole story, since there's overhead from raising the exception. In order to find out how much actually checking slows down the checking, I ran three timeits, with the following code: 1) try: raise ValueError() except NameError: pass except NameError: pass except ValueError: pass 2) try: raise ValueError() except NameError: pass except ValueError: pass 3) try: raise ValueError() except ValueError: pass Times are in ms: before after 1 528.69 825.38 2 473.73 653.39 3 416.29 496.80 avgdiff 56.23 164.29 The numbers don't change significantly for more exception tests. -- George-Cristian B?rzan From tjreedy at udel.edu Fri May 11 16:16:42 2012 From: tjreedy at udel.edu (Terry Reedy) Date: Fri, 11 May 2012 10:16:42 -0400 Subject: [Python-Dev] Adding types.build_class for 3.3 In-Reply-To: <4FACAFF0.2070706@hotpy.org> References: <4FA7D133.5020303@avl.com> <20120509092112.1c29dac8@resist> <20120509234402.037352500D2@webabinitio.net> <20120510103156.70755479@resist> <4FACAFF0.2070706@hotpy.org> Message-ID: On 5/11/2012 2:21 AM, Mark Shannon wrote: > Nick Coghlan wrote: >> The question now moves to the implementation strategy - whether we >> redirect to the C machinery as originally proposed (either via >> __build_class__ or a new _types module) or just reimplement the >> algorithm in pure Python. The latter is actually quite an appealing >> concept, since it becomes a cross-check on the native C version. I assume types.new_class would eventually call type(). This would make it available to any implementation with a conforming type(). > +1 to a pure Python version. Since new_class would be used rarely and not in inner loops, and (if I understand) should mostly contain branching logic rather than looping, speed hardly seems an issue. --- Terry Jan Reedy From rdmurray at bitdance.com Fri May 11 17:36:35 2012 From: rdmurray at bitdance.com (R. David Murray) Date: Fri, 11 May 2012 11:36:35 -0400 Subject: [Python-Dev] Adding types.build_class for 3.3 In-Reply-To: References: <4FA7D133.5020303@avl.com> <20120509092112.1c29dac8@resist> <20120509234402.037352500D2@webabinitio.net> <20120510103156.70755479@resist> <4FACAFF0.2070706@hotpy.org> Message-ID: <20120511153635.8B52D2500E0@webabinitio.net> On Fri, 11 May 2012 10:16:42 -0400, Terry Reedy wrote: > On 5/11/2012 2:21 AM, Mark Shannon wrote: > > +1 to a pure Python version. > > Since new_class would be used rarely and not in inner loops, and (if I > understand) should mostly contain branching logic rather than looping, > speed hardly seems an issue. Well, actually, the proposed new email policy is doing dynamic class construction for any header accessed by the application, which could potentially be every header in every message processed by an application if refold_source is set true. That's not quite an "inner loop", but it isn't an outer one either. That said, the header parsing logic that is also invoked by the process of returning a header under the new policy is going to outweigh the class construction overhead, I'm pretty sure. --David From status at bugs.python.org Fri May 11 18:07:17 2012 From: status at bugs.python.org (Python tracker) Date: Fri, 11 May 2012 18:07:17 +0200 (CEST) Subject: [Python-Dev] Summary of Python tracker Issues Message-ID: <20120511160717.0F1D41C8DF@psf.upfronthosting.co.za> ACTIVITY SUMMARY (2012-05-04 - 2012-05-11) Python tracker at http://bugs.python.org/ To view or respond to any of the issues listed below, click on the issue. Do NOT respond to this message. Issues counts and deltas: open 3418 (+19) closed 23143 (+42) total 26561 (+61) Open issues with patches: 1460 Issues opened (38) ================== #13815: tarfile.ExFileObject can't be wrapped using io.TextIOWrapper http://bugs.python.org/issue13815 reopened by r.david.murray #14724: kill imp.load_dynamic's third argument http://bugs.python.org/issue14724 opened by pitrou #14728: trace function not set, causing some Pdb commands to fail http://bugs.python.org/issue14728 opened by xdegaye #14730: Implementation of the PEP 419: Protecting cleanup statements f http://bugs.python.org/issue14730 opened by tailhook #14731: Enhance Policy framework in preparation for adding email6 poli http://bugs.python.org/issue14731 opened by r.david.murray #14732: PEP 3121 Refactoring applied to _csv module http://bugs.python.org/issue14732 opened by Robin.Schreiber #14733: Custom commands don't work http://bugs.python.org/issue14733 opened by LEW21 #14734: Use binascii.b2a_qp/a2b_qp in email package header handling? http://bugs.python.org/issue14734 opened by r.david.murray #14735: Version 3.2.3 IDLE CTRL-Z plus Carriage Return to end does not http://bugs.python.org/issue14735 opened by ewodrich #14739: Add PyArg_Parse format unit like O& but providing context http://bugs.python.org/issue14739 opened by larry #14742: test_tools very slow http://bugs.python.org/issue14742 opened by pitrou #14743: on terminating, Pdb debugs itself http://bugs.python.org/issue14743 opened by xdegaye #14744: Use _PyUnicodeWriter API in str.format() internals http://bugs.python.org/issue14744 opened by haypo #14747: Classifiers are missing from distutils2-generated metadata http://bugs.python.org/issue14747 opened by agronholm #14748: spwd.getspall() is returning LDAP (non local) users too http://bugs.python.org/issue14748 opened by halfie #14750: Tkinter application doesn't run from source build on Windows http://bugs.python.org/issue14750 opened by vinay.sajip #14751: Pdb does not stop at a breakpoint http://bugs.python.org/issue14751 opened by xdegaye #14755: Distutils2 doesn't have a Python 3 version on PyPI http://bugs.python.org/issue14755 opened by njwilson #14757: INCA: Inline Caching meets Quickening in Python 3.3 http://bugs.python.org/issue14757 opened by sbrunthaler #14758: SMTPServer of smptd does not support binding to an IPv6 addres http://bugs.python.org/issue14758 opened by vsergeev #14759: BSDDB license missing from liscense page in 2.7. http://bugs.python.org/issue14759 opened by Jeff.Laing #14766: Non-naive time comparison throws naive time error http://bugs.python.org/issue14766 opened by Chris.Bergstresser #14767: urllib.request.HTTPRedirectHandler raises HTTPError when Locat http://bugs.python.org/issue14767 opened by jspenguin #14769: Add test to automatically detect missing format units in skipi http://bugs.python.org/issue14769 opened by larry #14770: Minor documentation fixes http://bugs.python.org/issue14770 opened by michael.foord #14771: Occasional failure in test_ioctl http://bugs.python.org/issue14771 opened by pitrou #14772: Return destination values in some shutil functions http://bugs.python.org/issue14772 opened by brian.curtin #14773: fwalk breaks on dangling symlinks http://bugs.python.org/issue14773 opened by hynek #14774: _sysconfigdata.py doesn't support multiple build configuration http://bugs.python.org/issue14774 opened by dmalcolm #14775: Slow unpickling of certain dictionaries in python 2.7 vs pytho http://bugs.python.org/issue14775 opened by stw #14776: Add SystemTap static markers http://bugs.python.org/issue14776 opened by dmalcolm #14777: Tkinter clipboard_get() decodes characters incorrectly http://bugs.python.org/issue14777 opened by takluyver #14778: IrrationalVersionError should include the project name http://bugs.python.org/issue14778 opened by njwilson #14779: test_buffer fails on OS X universal 64-/32-bit builds http://bugs.python.org/issue14779 opened by ned.deily #14780: SSL should use OpenSSL-defined default certificate store if ca http://bugs.python.org/issue14780 opened by jfunk #14781: Default to year 1 in strptime if year 0 has been specified http://bugs.python.org/issue14781 opened by Matthias.Meyer #14782: Tabcompletion of classes with static methods and __call__ has http://bugs.python.org/issue14782 opened by wpettersson #14783: Update int() docstring from manual http://bugs.python.org/issue14783 opened by terry.reedy Most recent 15 issues with no replies (15) ========================================== #14783: Update int() docstring from manual http://bugs.python.org/issue14783 #14782: Tabcompletion of classes with static methods and __call__ has http://bugs.python.org/issue14782 #14773: fwalk breaks on dangling symlinks http://bugs.python.org/issue14773 #14771: Occasional failure in test_ioctl http://bugs.python.org/issue14771 #14751: Pdb does not stop at a breakpoint http://bugs.python.org/issue14751 #14747: Classifiers are missing from distutils2-generated metadata http://bugs.python.org/issue14747 #14734: Use binascii.b2a_qp/a2b_qp in email package header handling? http://bugs.python.org/issue14734 #14731: Enhance Policy framework in preparation for adding email6 poli http://bugs.python.org/issue14731 #14730: Implementation of the PEP 419: Protecting cleanup statements f http://bugs.python.org/issue14730 #14714: PEp 414 tokenizing hook does not preserve tabs http://bugs.python.org/issue14714 #14713: PEP 414 installation hook fails with an AssertionError http://bugs.python.org/issue14713 #14712: Integrate PEP 405 http://bugs.python.org/issue14712 #14709: http.client fails sending read()able Object http://bugs.python.org/issue14709 #14703: Update PEP metaprocesses to describe PEP czar role http://bugs.python.org/issue14703 #14689: make PYTHONWARNINGS variable work in libpython http://bugs.python.org/issue14689 Most recent 15 issues waiting for review (15) ============================================= #14780: SSL should use OpenSSL-defined default certificate store if ca http://bugs.python.org/issue14780 #14779: test_buffer fails on OS X universal 64-/32-bit builds http://bugs.python.org/issue14779 #14776: Add SystemTap static markers http://bugs.python.org/issue14776 #14773: fwalk breaks on dangling symlinks http://bugs.python.org/issue14773 #14772: Return destination values in some shutil functions http://bugs.python.org/issue14772 #14770: Minor documentation fixes http://bugs.python.org/issue14770 #14769: Add test to automatically detect missing format units in skipi http://bugs.python.org/issue14769 #14766: Non-naive time comparison throws naive time error http://bugs.python.org/issue14766 #14757: INCA: Inline Caching meets Quickening in Python 3.3 http://bugs.python.org/issue14757 #14751: Pdb does not stop at a breakpoint http://bugs.python.org/issue14751 #14750: Tkinter application doesn't run from source build on Windows http://bugs.python.org/issue14750 #14744: Use _PyUnicodeWriter API in str.format() internals http://bugs.python.org/issue14744 #14743: on terminating, Pdb debugs itself http://bugs.python.org/issue14743 #14735: Version 3.2.3 IDLE CTRL-Z plus Carriage Return to end does not http://bugs.python.org/issue14735 #14733: Custom commands don't work http://bugs.python.org/issue14733 Top 10 most discussed issues (10) ================================= #14657: Avoid two importlib copies http://bugs.python.org/issue14657 18 msgs #13815: tarfile.ExFileObject can't be wrapped using io.TextIOWrapper http://bugs.python.org/issue13815 13 msgs #9260: A finer grained import lock http://bugs.python.org/issue9260 10 msgs #14744: Use _PyUnicodeWriter API in str.format() internals http://bugs.python.org/issue14744 10 msgs #14759: BSDDB license missing from liscense page in 2.7. http://bugs.python.org/issue14759 9 msgs #14082: shutil doesn't copy extended attributes http://bugs.python.org/issue14082 7 msgs #14702: os.makedirs breaks under autofs directories http://bugs.python.org/issue14702 7 msgs #14766: Non-naive time comparison throws naive time error http://bugs.python.org/issue14766 7 msgs #14772: Return destination values in some shutil functions http://bugs.python.org/issue14772 7 msgs #14750: Tkinter application doesn't run from source build on Windows http://bugs.python.org/issue14750 6 msgs Issues closed (39) ================== #2377: Replace __import__ w/ importlib.__import__ http://bugs.python.org/issue2377 closed by brett.cannon #13989: gzip always returns byte strings, no text mode http://bugs.python.org/issue13989 closed by python-dev #14034: Add argparse howto http://bugs.python.org/issue14034 closed by ezio.melotti #14093: Mercurial version information not appearing in Windows builds http://bugs.python.org/issue14093 closed by python-dev #14157: time.strptime without a year fails on Feb 29 http://bugs.python.org/issue14157 closed by pitrou #14583: try/except import fails --without-threads http://bugs.python.org/issue14583 closed by pitrou #14654: Faster utf-8 decoding http://bugs.python.org/issue14654 closed by loewis #14662: shutil.move doesn't handle ENOTSUP raised by chflags on OS X http://bugs.python.org/issue14662 closed by ned.deily #14695: Tools/parser/unparse.py is out of date. http://bugs.python.org/issue14695 closed by mark.dickinson #14697: parser module doesn't support set displays or set comprehensio http://bugs.python.org/issue14697 closed by mark.dickinson #14700: Integer overflow in classic string formatting http://bugs.python.org/issue14700 closed by mark.dickinson #14701: parser module doesn't support 'raise ... from' http://bugs.python.org/issue14701 closed by mark.dickinson #14705: Add 'bool' format character to PyArg_ParseTuple* http://bugs.python.org/issue14705 closed by larry #14716: Use unicode_writer API for str.format() http://bugs.python.org/issue14716 closed by python-dev #14722: Overflow in parsing 'float' parameters in PyArg_ParseTuple* http://bugs.python.org/issue14722 closed by mark.dickinson #14723: Misleading error message for str.format() http://bugs.python.org/issue14723 closed by eric.smith #14725: test_multiprocessing failure under Windows http://bugs.python.org/issue14725 closed by sbt #14726: Lib/email/*.py use an EMPTYSTRING global instead of '' http://bugs.python.org/issue14726 closed by r.david.murray #14727: test_multiprocessing failure under Linux http://bugs.python.org/issue14727 closed by vinay.sajip #14729: test_faulthandler test is too specific to work on Windows http://bugs.python.org/issue14729 closed by python-dev #14736: Add {encode, decode}_filter_properties() functions to lzma mod http://bugs.python.org/issue14736 closed by nadeem.vawda #14737: subprocess.Popen pipes not working http://bugs.python.org/issue14737 closed by pitrou #14738: Amazingly faster UTF-8 decoding http://bugs.python.org/issue14738 closed by pitrou #14740: get_payload(n, True) returns None http://bugs.python.org/issue14740 closed by r.david.murray #14741: parser module doesn't support Ellipsis. http://bugs.python.org/issue14741 closed by mark.dickinson #14745: Misleading exception http://bugs.python.org/issue14745 closed by python-dev #14746: Remove redundant paragraphs from getargs.c skipitem() http://bugs.python.org/issue14746 closed by larry #14749: Add 'Z' to skipitem() in Python/getargs.c http://bugs.python.org/issue14749 closed by larry #14752: Memleak in typeobject add_methods() http://bugs.python.org/issue14752 closed by python-dev #14753: multiprocessing treats negative timeouts differently from befo http://bugs.python.org/issue14753 closed by sbt #14754: Emacs configuration to enforce PEP7 http://bugs.python.org/issue14754 closed by georg.brandl #14756: Empty Dict in Initializer is Shared Betwean Objects http://bugs.python.org/issue14756 closed by pitrou #14760: logging: make setLevel() return handler itself for chained con http://bugs.python.org/issue14760 closed by vinay.sajip #14761: Memleak in import.c load_source_module() http://bugs.python.org/issue14761 closed by pitrou #14762: ElementTree memory leak http://bugs.python.org/issue14762 closed by Giuseppe.Attardi #14763: string.split maxsplit documented incorrectly http://bugs.python.org/issue14763 closed by ezio.melotti #14764: importlib.test.benchmark broken http://bugs.python.org/issue14764 closed by brett.cannon #14765: the struct example should give consistent results across diffe http://bugs.python.org/issue14765 closed by meador.inge #14768: os.path.expanduser('~/a') doesn't works correctly when HOME is http://bugs.python.org/issue14768 closed by python-dev From guido at python.org Fri May 11 18:38:34 2012 From: guido at python.org (Guido van Rossum) Date: Fri, 11 May 2012 09:38:34 -0700 Subject: [Python-Dev] Exception and ABCs / issue #12029 In-Reply-To: References: Message-ID: Thanks for bringing this up. I've added my opinion to the tracker issue -- I think it's a bug and should be fixed. We should have a uniform way of checking for issubclass/isinstance. --Guido On Fri, May 11, 2012 at 3:33 AM, George-Cristian B?rzan wrote: > As per http://bugs.python.org/issue12029 , ABC registration cannot be > used for exceptions. This was introduced in a commit that fixed a > recursion limit problem back in 2008 > (http://hg.python.org/cpython/rev/d6e86a96f9b3/#l8.10). This was later > fixed in a different way and improved upon in the 2.x branch in > http://hg.python.org/cpython/rev/7e86fa255fc2 and > http://hg.python.org/cpython/rev/57de1ad15c54 respectively. > > Applying the fix from the 2.x branch for doesn't make any tests fail, > and it fixes the problem described in the bug report. There are, > however, two questions about this: > > * Is this a feature, or a bug? I would say that it's a bug, but even > if it's not, it has to be documented, since one generally assumes that > it will work. > * Even so, is it worth fixing, considering the limited use cases for > it? This slows exception type checking 3 times. I added a new test to > pybench: > > before: > ? ? ? TryRaiseExceptClass: ? ? 25ms ? ? 25ms ? ?0.39us ? ?0.216ms > after: > ? ? ? TryRaiseExceptException: ? ? 31ms ? ? 31ms ? ?0.48us ? ?0.214ms > > However, that doesn't tell the whole story, since there's overhead > from raising the exception. In order to find out how much actually > checking slows down the checking, I ran three timeits, with the > following code: > > 1) > try: raise ValueError() > except NameError: pass > except NameError: pass > except ValueError: pass > > 2) > try: raise ValueError() > except NameError: pass > except ValueError: pass > > 3) > try: raise ValueError() > except ValueError: pass > > Times are in ms: > ? ? ? before ? ? ?after > 1 ? ? ? 528.69 ? ? ?825.38 > 2 ? ? ? 473.73 ? ? ?653.39 > 3 ? ? ? 416.29 ? ? ?496.80 > avgdiff ?56.23 ? ? ?164.29 > > The numbers don't change significantly for more exception tests. > > -- > George-Cristian B?rzan > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/guido%40python.org -- --Guido van Rossum (python.org/~guido) From jdhardy at gmail.com Fri May 11 19:28:32 2012 From: jdhardy at gmail.com (Jeff Hardy) Date: Fri, 11 May 2012 10:28:32 -0700 Subject: [Python-Dev] sys.implementation In-Reply-To: <4FAC43BC.7000705@pearwood.info> References: <20120426103150.4898a678@limelight.wooz.org> <4FAA3FA7.5070808@v.loewis.de> <20120509165039.23c8bf56@pitrou.net> <20120509095311.3a2c25c2@resist> <20120510105749.7401f1d2@pitrou.net> <4FAC43BC.7000705@pearwood.info> Message-ID: On Thu, May 10, 2012 at 3:39 PM, Steven D'Aprano wrote: >> Aye. Add a rule that all implementation specific (i.e. not defined in >> the PEP) keys must be prefixed with an underscore and I'm sold. > > > So now we're adding a new convention to single underscore names? Single > underscore names are implementation-specific details that you shouldn't use > or rely on, except in sys.implementation, where they are an optional part of > the public interface. I've always seen _names as less implementation details and more 'here be dragons; tread carefully'. I don't think adding a different convention really changes that at all. The underscore ones would (mostly) be implementation-specific anyway. _clr_version is something only IronPython is going to have, for example. If more than one implementation has something it can be promoted to a non-underscore name, but I think that will be rare. Some of the suggested metadata (like vcs_revision and build date) could actually be required right out of the gate, and cache_tag should be optional. - Jeff From ncoghlan at gmail.com Sat May 12 02:20:54 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sat, 12 May 2012 10:20:54 +1000 Subject: [Python-Dev] sys.implementation In-Reply-To: References: <20120426103150.4898a678@limelight.wooz.org> <4FAA3FA7.5070808@v.loewis.de> <20120509165039.23c8bf56@pitrou.net> <20120509095311.3a2c25c2@resist> <20120510105749.7401f1d2@pitrou.net> <4FAC43BC.7000705@pearwood.info> Message-ID: The specific reason cache_tag is mandatory is so that importlib can rely on it. Setting it to None for a given implementation will automatically disable caching of bytecode files. -- Sent from my phone, thus the relative brevity :) -------------- next part -------------- An HTML attachment was scrubbed... URL: From ericsnowcurrently at gmail.com Sat May 12 04:40:46 2012 From: ericsnowcurrently at gmail.com (Eric Snow) Date: Fri, 11 May 2012 20:40:46 -0600 Subject: [Python-Dev] sys.implementation In-Reply-To: <20120510105749.7401f1d2@pitrou.net> References: <20120426103150.4898a678@limelight.wooz.org> <4FAA3FA7.5070808@v.loewis.de> <20120509165039.23c8bf56@pitrou.net> <20120509095311.3a2c25c2@resist> <20120510105749.7401f1d2@pitrou.net> Message-ID: On Thu, May 10, 2012 at 2:57 AM, Antoine Pitrou wrote: > sys.implementation.metadata looks like a completely over-engineered > concept. Please, let's just make sys.implementation a dict and stop > bothering about ordering and iterability. I'm fine with ditching "metadata". The PEP will say sys.implementation must have the required attributes and leave it at that. However, my preference is still for dotted access rather than a dict. The type doesn't really matter to me otherwise. Immutability isn't a big concern nor is sequence-ness. I'll tone the type discussion accordingly. If anyone has strong feelings for item-access over attribute-access, please elaborate. I'm just not seeing it as that important and would rather finish up the PEP as simply as possible. -eric From ncoghlan at gmail.com Sat May 12 14:04:01 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sat, 12 May 2012 22:04:01 +1000 Subject: [Python-Dev] sys.implementation In-Reply-To: References: <20120426103150.4898a678@limelight.wooz.org> <4FAA3FA7.5070808@v.loewis.de> <20120509165039.23c8bf56@pitrou.net> <20120509095311.3a2c25c2@resist> <20120510105749.7401f1d2@pitrou.net> Message-ID: On Sat, May 12, 2012 at 12:40 PM, Eric Snow wrote: > If anyone has strong feelings for item-access over attribute-access, > please elaborate. ?I'm just not seeing it as that important and would > rather finish up the PEP as simply as possible. I object to adding a new type to the stdlib just for this PEP. Since iterating over the keys is significantly more useful than iterating over the values, that suggests a dictionary as the most appropriate type. If someone *really* wants a quick way to get dotted access to the contents of dictionary: >>> data = dict(a=1, b=2, c=3) >>> ns = type('', (), data) >>> ns.a 1 >>> ns.b 2 >>> ns.c 3 Cheers, Nick. -- Nick Coghlan?? |?? ncoghlan at gmail.com?? |?? Brisbane, Australia From barry at python.org Sat May 12 16:35:23 2012 From: barry at python.org (Barry Warsaw) Date: Sat, 12 May 2012 07:35:23 -0700 Subject: [Python-Dev] sys.implementation References: <20120426103150.4898a678@limelight.wooz.org> <4FAA3FA7.5070808@v.loewis.de> <20120509165039.23c8bf56@pitrou.net> <20120509095311.3a2c25c2@resist> <20120510105749.7401f1d2@pitrou.net> Message-ID: <20120512073523.03ee688b@resist> On May 12, 2012, at 10:04 PM, Nick Coghlan wrote: >On Sat, May 12, 2012 at 12:40 PM, Eric Snow >wrote: > If anyone has strong feelings for item-access over attribute-access, >> please elaborate. ?I'm just not seeing it as that important and would > >rather finish up the PEP as simply as possible. > >I object to adding a new type to the stdlib just for this PEP. Since >iterating over the keys is significantly more useful than iterating over the >values, that suggests a dictionary as the most appropriate type. I'm okay with dropping immutability for sys.implementation, but I still think attribute access is a more useful model. You can easily support both getattr and getitem with a class instance, so I think that's the way to go. (FWIW, immutability would also be easy to support with an instance.) -Barry -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 836 bytes Desc: not available URL: From tseaver at palladion.com Sat May 12 18:51:11 2012 From: tseaver at palladion.com (Tres Seaver) Date: Sat, 12 May 2012 12:51:11 -0400 Subject: [Python-Dev] sys.implementation In-Reply-To: References: <20120426103150.4898a678@limelight.wooz.org> <4FAA3FA7.5070808@v.loewis.de> <20120509165039.23c8bf56@pitrou.net> <20120509095311.3a2c25c2@resist> <20120510105749.7401f1d2@pitrou.net> Message-ID: -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On 05/12/2012 08:04 AM, Nick Coghlan wrote: > On Sat, May 12, 2012 at 12:40 PM, Eric Snow > wrote: >> If anyone has strong feelings for item-access over >> attribute-access, please elaborate. I'm just not seeing it as that >> important and would rather finish up the PEP as simply as possible. > > I object to adding a new type to the stdlib just for this PEP. Since > iterating over the keys is significantly more useful than iterating > over the values, that suggests a dictionary as the most appropriate > type. Why would anyone want to iterate over either of them? Tres. - -- =================================================================== Tres Seaver +1 540-429-0999 tseaver at palladion.com Palladion Software "Excellence by Design" http://palladion.com -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.10 (GNU/Linux) Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/ iEYEARECAAYFAk+ulP8ACgkQ+gerLs4ltQ4OzACgwnmVgJzE+IdEdS0Ij1J357di bnoAni5nUCIDcZt7dwEOfLLPUZoJQYF9 =t05/ -----END PGP SIGNATURE----- From ericsnowcurrently at gmail.com Sat May 12 19:50:10 2012 From: ericsnowcurrently at gmail.com (Eric Snow) Date: Sat, 12 May 2012 11:50:10 -0600 Subject: [Python-Dev] sys.implementation In-Reply-To: References: <20120426103150.4898a678@limelight.wooz.org> <4FAA3FA7.5070808@v.loewis.de> <20120509165039.23c8bf56@pitrou.net> <20120509095311.3a2c25c2@resist> <20120510105749.7401f1d2@pitrou.net> Message-ID: On Sat, May 12, 2012 at 6:04 AM, Nick Coghlan wrote: > On Sat, May 12, 2012 at 12:40 PM, Eric Snow wrote: >> If anyone has strong feelings for item-access over attribute-access, >> please elaborate. ?I'm just not seeing it as that important and would >> rather finish up the PEP as simply as possible. > > I object to adding a new type to the stdlib just for this PEP. And I agree with you. :) The only constraint is that it be an object with attribute access. That could be a named tuple, a module, an uninstantiated class, or whatever. A new type is not needed. If it's iterable or not is irrelevant with regards to the PEP. For the implementation I'd like it to have a good repr too, but even that's not part of the proposal. I've got the latest version of the PEP up now. It pares down the type discussion and eliminates "metadata". I figure it's good enough for what we need, and I've put adequate(?) warning that people shouldn't mess with it (consenting adults, etc.). Let me know what you think. > Since > iterating over the keys is significantly more useful than iterating > over the values, that suggests a dictionary as the most appropriate > type. > > If someone *really* wants a quick way to get dotted access to the > contents of dictionary: > >>>> data = dict(a=1, b=2, c=3) >>>> ns = type('', (), data) >>>> ns.a > 1 >>>> ns.b > 2 >>>> ns.c > 3 That's pretty cool. As a counter example, given a normal (dict-based) object you can use vars() to turn it into a dict: >>> data = SomeClass(a=1, b=2, c=3) >>> ns = vars(data) >>> ns['a'] 1 >>> ns['b'] 2 >>> ns['c'] 3 I'll grant that it doesn't work for some objects (like named tuples), but for sys.implementation I don't think it matters either way. -eric From ericsnowcurrently at gmail.com Sat May 12 19:57:39 2012 From: ericsnowcurrently at gmail.com (Eric Snow) Date: Sat, 12 May 2012 11:57:39 -0600 Subject: [Python-Dev] sys.implementation In-Reply-To: <20120512073523.03ee688b@resist> References: <20120426103150.4898a678@limelight.wooz.org> <4FAA3FA7.5070808@v.loewis.de> <20120509165039.23c8bf56@pitrou.net> <20120509095311.3a2c25c2@resist> <20120510105749.7401f1d2@pitrou.net> <20120512073523.03ee688b@resist> Message-ID: On Sat, May 12, 2012 at 8:35 AM, Barry Warsaw wrote: > I'm okay with dropping immutability for sys.implementation, but I still think > attribute access is a more useful model. ?You can easily support both getattr > and getitem with a class instance, so I think that's the way to go. > > (FWIW, immutability would also be easy to support with an instance.) Agreed on both counts. The precedent in sys and elsewhere favors attribute access for a fixed namespace like sys.implementation. Also, item access (a la mappings) implies a more volatile namespace. -eric From ericsnowcurrently at gmail.com Sat May 12 20:02:14 2012 From: ericsnowcurrently at gmail.com (Eric Snow) Date: Sat, 12 May 2012 12:02:14 -0600 Subject: [Python-Dev] sys.implementation In-Reply-To: References: <20120426103150.4898a678@limelight.wooz.org> <4FAA3FA7.5070808@v.loewis.de> <20120509165039.23c8bf56@pitrou.net> <20120509095311.3a2c25c2@resist> <20120510105749.7401f1d2@pitrou.net> Message-ID: On Sat, May 12, 2012 at 10:51 AM, Tres Seaver wrote: > -----BEGIN PGP SIGNED MESSAGE----- > Hash: SHA1 > > On 05/12/2012 08:04 AM, Nick Coghlan wrote: >> On Sat, May 12, 2012 at 12:40 PM, Eric Snow >> wrote: >>> If anyone has strong feelings for item-access over >>> attribute-access, please elaborate. ?I'm just not seeing it as that >>> important and would rather finish up the PEP as simply as possible. >> >> I object to adding a new type to the stdlib just for this PEP. Since >> iterating over the keys is significantly more useful than iterating >> over the values, that suggests a dictionary as the most appropriate >> type. > > Why would anyone want to iterate over either of them? Nick gave a pretty good example [1]. I just don't think it's necessary for the PEP. -eric [1] http://mail.python.org/pipermail/python-dev/2012-May/119412.html From steve at pearwood.info Sat May 12 20:07:25 2012 From: steve at pearwood.info (Steven D'Aprano) Date: Sun, 13 May 2012 04:07:25 +1000 Subject: [Python-Dev] sys.implementation In-Reply-To: References: <20120426103150.4898a678@limelight.wooz.org> <4FAA3FA7.5070808@v.loewis.de> <20120509165039.23c8bf56@pitrou.net> <20120509095311.3a2c25c2@resist> <20120510105749.7401f1d2@pitrou.net> Message-ID: <4FAEA6DD.3060102@pearwood.info> Tres Seaver wrote: > -----BEGIN PGP SIGNED MESSAGE----- > Hash: SHA1 > > On 05/12/2012 08:04 AM, Nick Coghlan wrote: >> On Sat, May 12, 2012 at 12:40 PM, Eric Snow >> wrote: >>> If anyone has strong feelings for item-access over >>> attribute-access, please elaborate. I'm just not seeing it as that >>> important and would rather finish up the PEP as simply as possible. >> I object to adding a new type to the stdlib just for this PEP. Since >> iterating over the keys is significantly more useful than iterating >> over the values, that suggests a dictionary as the most appropriate >> type. > > Why would anyone want to iterate over either of them? 1) I don't know what keys exist, so I use introspection on sys.implementation by iterating over the keys and/or values. E.g. dir(sys.implementation), or list(sys.implementation.keys()). 2) I know what keys exist, but I want to pretty-print the list of key/value pairs without having to explicitly write them out by hand: print("spam", sys.implementation.spam) print("ham", sys.implementation.ham) print("cheese", sys.implementation.cheese) # and so on... -- Steven From v+python at g.nevcal.com Sat May 12 23:25:53 2012 From: v+python at g.nevcal.com (Glenn Linderman) Date: Sat, 12 May 2012 14:25:53 -0700 Subject: [Python-Dev] sys.implementation In-Reply-To: References: <20120426103150.4898a678@limelight.wooz.org> <4FAA3FA7.5070808@v.loewis.de> <20120509165039.23c8bf56@pitrou.net> <20120509095311.3a2c25c2@resist> <20120510105749.7401f1d2@pitrou.net> Message-ID: <4FAED561.5000103@g.nevcal.com> On 5/12/2012 10:50 AM, Eric Snow wrote: > given a normal (dict-based) > object you can use vars() to turn it into a dict: > >>>> >>> data = SomeClass(a=1, b=2, c=3) >>>> >>> ns = vars(data) >>>> >>> ns['a'] > 1 >>>> >>> ns['b'] > 2 >>>> >>> ns['c'] > 3 > > I'll grant that it doesn't work for some objects (like named tuples), Why not? Seems like it could, with a tweak to vars ... named tuples already have a method to return a dict. vars already has a special case to act like locals. So why not add a special case to allow vars to work on named tuples? -------------- next part -------------- An HTML attachment was scrubbed... URL: From brian at python.org Sun May 13 01:43:05 2012 From: brian at python.org (Brian Curtin) Date: Sat, 12 May 2012 18:43:05 -0500 Subject: [Python-Dev] Preparation for VS2010 - MSDN for Windows build slaves, core devs In-Reply-To: References: Message-ID: On Mon, Apr 2, 2012 at 9:12 PM, Brian Curtin wrote: > Hi all, > > If you are a running a build slave or otherwise have an MSDN account > for development work, please check that your MSDN subscription is > still in effect. If the subscription expired, please let me know in > private what your subscriber ID is along with the email address you > use for the account. > > Eventually we're switching to VS2010 so each slave will need to have > that version of the compiler installed. > > Thanks I heard back from our Microsoft contact that everyone who requested renewals should have begun processing around a week ago. Since it usually takes around a week, hopefully you've all received the renewal. If not, let me know and I'll get you taken care of. If build slave owners could let me know when their machine has VS2010 I'd appreciate it. I got the go-ahead to commit the port but want to wait until the build slaves are ready for it. From martin at v.loewis.de Sun May 13 09:30:30 2012 From: martin at v.loewis.de (martin at v.loewis.de) Date: Sun, 13 May 2012 09:30:30 +0200 Subject: [Python-Dev] Preparation for VS2010 - MSDN for Windows build slaves, core devs In-Reply-To: References: Message-ID: <20120513093030.Horde.DEyMZElCcOxPr2MW85k1deA@webmail.df.eu> > If build slave owners could let me know when their machine has VS2010 > I'd appreciate it. I got the go-ahead to commit the port but want to > wait until the build slaves are ready for it. Please don't wait, but let the build slaves break. This is getting urgent. Regards, Martin From martin at v.loewis.de Sun May 13 10:13:52 2012 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Sun, 13 May 2012 10:13:52 +0200 Subject: [Python-Dev] [Python-checkins] cpython: Issue #14779: Do not use get_config_var('SIZEOF_VOID_P') on OS X 64-/32-bit In-Reply-To: References: Message-ID: <4FAF6D40.3040108@v.loewis.de> > - self.sizeof_void_p = get_config_var('SIZEOF_VOID_P') > + self.sizeof_void_p = get_config_var('SIZEOF_VOID_P') \ > + if sys.platform != 'darwin' else None > if not self.sizeof_void_p: > - self.sizeof_void_p = 8 if architecture()[0] == '64bit' else 4 > + self.sizeof_void_p = 8 if sys.maxsize> 2**32 else 4 > Why not unconditionally use sys.maxsize? I'd also hard-code that sys.maxsize ought to be either 2**31-1 or 2**63-1. Regards, Martin From stefan at bytereef.org Sun May 13 11:48:48 2012 From: stefan at bytereef.org (Stefan Krah) Date: Sun, 13 May 2012 11:48:48 +0200 Subject: [Python-Dev] [Python-checkins] cpython: Issue #14779: Do not use get_config_var('SIZEOF_VOID_P') on OS X 64-/32-bit In-Reply-To: <4FAF6D40.3040108@v.loewis.de> References: <4FAF6D40.3040108@v.loewis.de> Message-ID: <20120513094848.GA27514@sleipnir.bytereef.org> "Martin v. L?wis" wrote: [http://bugs.python.org/issue14779] >> - self.sizeof_void_p = get_config_var('SIZEOF_VOID_P') >> + self.sizeof_void_p = get_config_var('SIZEOF_VOID_P') \ >> + if sys.platform != 'darwin' else None >> if not self.sizeof_void_p: >> - self.sizeof_void_p = 8 if architecture()[0] == '64bit' else 4 >> + self.sizeof_void_p = 8 if sys.maxsize> 2**32 else 4 >> > > Why not unconditionally use sys.maxsize? Because the tests need sizeof(void *). In an array with suboffsets void pointers are embedded at the start of the array. The C standard doesn't guarantee sizeof(void *) == sizeof(size_t). In fact, there are machines where sizeof(void *) > sizeof(size_t): http://comments.gmane.org/gmane.comp.programming.garbage-collection.boehmgc/651 http://www-01.ibm.com/support/docview.wss?uid=swg27019425 If you change pyconfig.h to 128 bit pointers while leaving sizeof(size_t) and sizeof(ssize_t) at 8, pyport.h by itself doesn't catch the mismatch. /* The size of `uintptr_t', as computed by sizeof. */ #define SIZEOF_UINTPTR_T 16 /* The size of `void *', as computed by sizeof. */ #define SIZEOF_VOID_P 16 However, now that I tried to compile Python with that pyconfig.h, longobject.c *does* catch it: Objects/longobject.c:943:5: error: #error "PyLong_FromVoidPtr: sizeof(PY_LONG_LONG) < sizeof(void*)" Objects/longobject.c:970:5: error: #error "PyLong_AsVoidPtr: sizeof(PY_LONG_LONG) < sizeof(void*)" If sizeof(void *) == sizeof(size_t) is the general assumption for compiling Python, I think the test should happen prominently in either pyport.h or Python.h. > I'd also hard-code that sys.maxsize ought to be either 2**31-1 or 2**63-1. I would have done exactly that, but the example in the docs that was quoted to me in the issue uses > 2**32: http://docs.python.org/dev/library/platform.html Stefan Krah From techtonik at gmail.com Sun May 13 13:02:50 2012 From: techtonik at gmail.com (anatoly techtonik) Date: Sun, 13 May 2012 14:02:50 +0300 Subject: [Python-Dev] WSGI paranoia with stdout/stderr Message-ID: There is fear and uncertainty in this pull request to PyPI - https://bitbucket.org/techtonik/pypi-techtonik/changeset/5396f8c60d49#comment-18915 - which is about that writing to stderr _might_ break things in WSGI applications. As a consequence logging to console will not be accepted in debug mode, which is disappointing, but not as disappointing as the absence of proper explanation. Martin couldn't provide any grounds for his fears, so I am asking fellow Python developers if anybody remember "if writing to stderr can break things in generic WSGI application" and reassure Martin that everything will be ok. -- anatoly t. From solipsis at pitrou.net Sun May 13 13:09:19 2012 From: solipsis at pitrou.net (Antoine Pitrou) Date: Sun, 13 May 2012 13:09:19 +0200 Subject: [Python-Dev] WSGI paranoia with stdout/stderr References: Message-ID: <20120513130919.247546cf@pitrou.net> On Sun, 13 May 2012 14:02:50 +0300 anatoly techtonik wrote: > There is fear and uncertainty in this pull request to PyPI - > https://bitbucket.org/techtonik/pypi-techtonik/changeset/5396f8c60d49#comment-18915 > - which is about that writing to stderr _might_ break things in WSGI > applications. > > As a consequence logging to console will not be accepted in debug > mode, which is disappointing, but not as disappointing as the absence > of proper explanation. Martin couldn't provide any grounds for his > fears, so I am asking fellow Python developers if anybody remember "if > writing to stderr can break things in generic WSGI application" and > reassure Martin that everything will be ok. According to this blog post, writing to stderr is fine (stdout is not): http://blog.dscpl.com.au/2009/04/wsgi-and-printing-to-standard-output.html Regards Antoine. From g.brandl at gmx.net Sun May 13 15:02:15 2012 From: g.brandl at gmx.net (Georg Brandl) Date: Sun, 13 May 2012 15:02:15 +0200 Subject: [Python-Dev] WSGI paranoia with stdout/stderr In-Reply-To: <20120513130919.247546cf@pitrou.net> References: <20120513130919.247546cf@pitrou.net> Message-ID: Am 13.05.2012 13:09, schrieb Antoine Pitrou: > On Sun, 13 May 2012 14:02:50 +0300 > anatoly techtonik wrote: >> There is fear and uncertainty in this pull request to PyPI - >> https://bitbucket.org/techtonik/pypi-techtonik/changeset/5396f8c60d49#comment-18915 >> - which is about that writing to stderr _might_ break things in WSGI >> applications. >> >> As a consequence logging to console will not be accepted in debug >> mode, which is disappointing, but not as disappointing as the absence >> of proper explanation. Martin couldn't provide any grounds for his >> fears, so I am asking fellow Python developers if anybody remember "if >> writing to stderr can break things in generic WSGI application" and >> reassure Martin that everything will be ok. > > According to this blog post, writing to stderr is fine (stdout is not): > http://blog.dscpl.com.au/2009/04/wsgi-and-printing-to-standard-output.html Whether yes or no, this topic doesn't belong to python-dev: it's either for python-list or the web-SIG. Georg From storchaka at gmail.com Sun May 13 16:28:15 2012 From: storchaka at gmail.com (Serhiy Storchaka) Date: Sun, 13 May 2012 17:28:15 +0300 Subject: [Python-Dev] void* <-> size_t In-Reply-To: <20120513094848.GA27514@sleipnir.bytereef.org> References: <4FAF6D40.3040108@v.loewis.de> <20120513094848.GA27514@sleipnir.bytereef.org> Message-ID: On 13.05.12 12:48, Stefan Krah wrote: > The C standard doesn't guarantee sizeof(void *) == sizeof(size_t). In > fact, there are machines where sizeof(void *)> sizeof(size_t): > > http://comments.gmane.org/gmane.comp.programming.garbage-collection.boehmgc/651 > http://www-01.ibm.com/support/docview.wss?uid=swg27019425 I noticed recently that the code is often used unsafe casting void* -> size_t and size_t -> void*. For example: const char *aligned_end = (const char *) ((size_t) end & ~LONG_PTR_MASK); I defer this issue until issues 14624 and 14624 will be resolved (same method is used in the suggested patches), but once it already mentioned, should be replaced size_t to Py_uintptr_t in all such castings? From brian at python.org Sun May 13 18:21:58 2012 From: brian at python.org (Brian Curtin) Date: Sun, 13 May 2012 11:21:58 -0500 Subject: [Python-Dev] Preparation for VS2010 - MSDN for Windows build slaves, core devs In-Reply-To: <20120513093030.Horde.DEyMZElCcOxPr2MW85k1deA@webmail.df.eu> References: <20120513093030.Horde.DEyMZElCcOxPr2MW85k1deA@webmail.df.eu> Message-ID: On Sun, May 13, 2012 at 2:30 AM, wrote: >> If build slave owners could let me know when their machine has VS2010 >> I'd appreciate it. I got the go-ahead to commit the port but want to >> wait until the build slaves are ready for it. > > > Please don't wait, but let the build slaves break. This is getting urgent. Pushed the port in http://hg.python.org/cpython/rev/38d7d944370e From ncoghlan at gmail.com Mon May 14 14:04:53 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Mon, 14 May 2012 22:04:53 +1000 Subject: [Python-Dev] Accepting PEP 415 (alternative implementation strategy for PEP 409's "raise exc from None" syntax) Message-ID: As the subject line says, as Guido's delegate, I'm accepting Benjamin's PEP 415 with the current reference implementation. This PEP changes the implementation of the new "raise exc from None" syntax to eliminate the use of Ellipsis as a "not set" sentinel value in favour of a separate "__suppress_context__" attribute on exceptions. This new flag defaults to False, but is implicitly set to True whenever a value is assigned to __cause__ (regardless of whether that happens via direct assignment , the new syntax or the C API). The question of how the builtin and standard library exception display routines should handle the cause where both __cause__ and __context__ are set and __suppress_context__ has been explicitly set to False will be decided independently of the PEP acceptance (see http://bugs.python.org/issue14805). Regards, Nick. -- Nick Coghlan?? |?? ncoghlan at gmail.com?? |?? Brisbane, Australia From barry at python.org Mon May 14 16:20:40 2012 From: barry at python.org (Barry Warsaw) Date: Mon, 14 May 2012 10:20:40 -0400 Subject: [Python-Dev] Accepting PEP 415 (alternative implementation strategy for PEP 409's "raise exc from None" syntax) In-Reply-To: References: Message-ID: <20120514102040.07531ee9@limelight.wooz.org> On May 14, 2012, at 10:04 PM, Nick Coghlan wrote: >As the subject line says, as Guido's delegate, I'm accepting >Benjamin's PEP 415 with the current reference implementation. I'm glad to see this PEP get accepted. I have just minor quibbles :). Can you or Benjamin improve the title of the PEP? It's already difficult enough to keep the mappings of PEP numbers to subjects in your head, even for the subset of PEPs you track. Having a PEP title that refers to *another* PEP number just makes things too confusing. How about: "Suppressing exception context via BaseException attribute" ? I also understand that PEP 415 is an elaboration of PEP 409, not a complete replacement, however it seems wrong that PEP 409 does not even reference PEP 415. Thus, while not a perfect solution, I suggest PEP 409 get a Superseded-By header that points to 415. 415 should get a Replaces header that points to 409. Then PEP 415 should get a section describing how the bulk of 409 is still valid, except for blah blah blah. (IOW, include the still valid parts of PEP 409 by reference.) Cheers, -Barry From solipsis at pitrou.net Mon May 14 18:50:59 2012 From: solipsis at pitrou.net (Antoine Pitrou) Date: Mon, 14 May 2012 18:50:59 +0200 Subject: [Python-Dev] cpython: Issue #14532: Add a secure_compare() helper to the hmac module, to mitigate References: Message-ID: <20120514185059.3fd69356@pitrou.net> On Sun, 13 May 2012 19:53:27 +0200 charles-francois.natali wrote: > > +This module also provides the following helper function: > + > +.. function:: secure_compare(a, b) [...] You need a versionadded tag. Regards Antoine. From ncoghlan at gmail.com Tue May 15 03:09:51 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 15 May 2012 11:09:51 +1000 Subject: [Python-Dev] Accepting PEP 415 (alternative implementation strategy for PEP 409's "raise exc from None" syntax) In-Reply-To: <20120514102040.07531ee9@limelight.wooz.org> References: <20120514102040.07531ee9@limelight.wooz.org> Message-ID: On Tue, May 15, 2012 at 12:20 AM, Barry Warsaw wrote: > On May 14, 2012, at 10:04 PM, Nick Coghlan wrote: > >>As the subject line says, as Guido's delegate, I'm accepting >>Benjamin's PEP 415 with the current reference implementation. > > I'm glad to see this PEP get accepted. ?I have just minor quibbles :). > > Can you or Benjamin improve the title of the PEP? ?It's already difficult > enough to keep the mappings of PEP numbers to subjects in your head, even for > the subset of PEPs you track. ?Having a PEP title that refers to *another* PEP > number just makes things too confusing. ?How about: > > "Suppressing exception context via BaseException attribute" ?? > > I also understand that PEP 415 is an elaboration of PEP 409, not a complete > replacement, however it seems wrong that PEP 409 does not even reference PEP > 415. > > Thus, while not a perfect solution, I suggest PEP 409 get a Superseded-By > header that points to 415. ?415 should get a Replaces header that points to > 409. ?Then PEP 415 should get a section describing how the bulk of 409 is > still valid, except for blah blah blah. ?(IOW, include the still valid parts > of PEP 409 by reference.) Helping others follow the bouncing ball in the historical record makes sense to me - I'll make these tweaks this evening. Cheers, Nick. -- Nick Coghlan?? |?? ncoghlan at gmail.com?? |?? Brisbane, Australia From benjamin at python.org Tue May 15 06:03:22 2012 From: benjamin at python.org (Benjamin Peterson) Date: Mon, 14 May 2012 21:03:22 -0700 Subject: [Python-Dev] Accepting PEP 415 (alternative implementation strategy for PEP 409's "raise exc from None" syntax) In-Reply-To: References: <20120514102040.07531ee9@limelight.wooz.org> Message-ID: 2012/5/14 Nick Coghlan : > On Tue, May 15, 2012 at 12:20 AM, Barry Warsaw wrote: >> On May 14, 2012, at 10:04 PM, Nick Coghlan wrote: >> >>>As the subject line says, as Guido's delegate, I'm accepting >>>Benjamin's PEP 415 with the current reference implementation. >> >> I'm glad to see this PEP get accepted. ?I have just minor quibbles :). >> >> Can you or Benjamin improve the title of the PEP? ?It's already difficult >> enough to keep the mappings of PEP numbers to subjects in your head, even for >> the subset of PEPs you track. ?Having a PEP title that refers to *another* PEP >> number just makes things too confusing. ?How about: >> >> "Suppressing exception context via BaseException attribute" ?? >> >> I also understand that PEP 415 is an elaboration of PEP 409, not a complete >> replacement, however it seems wrong that PEP 409 does not even reference PEP >> 415. >> >> Thus, while not a perfect solution, I suggest PEP 409 get a Superseded-By >> header that points to 415. ?415 should get a Replaces header that points to >> 409. ?Then PEP 415 should get a section describing how the bulk of 409 is >> still valid, except for blah blah blah. ?(IOW, include the still valid parts >> of PEP 409 by reference.) > > Helping others follow the bouncing ball in the historical record makes > sense to me - I'll make these tweaks this evening. +1 indeed. -- Regards, Benjamin From shooshx at gmail.com Tue May 15 13:19:54 2012 From: shooshx at gmail.com (Shy Shalom) Date: Tue, 15 May 2012 14:19:54 +0300 Subject: [Python-Dev] zipimport to read from a file object instead of just a path? Message-ID: In zipimport.c, function get_data(), the zip file is opened using fopen() and read with CLib functions. Did anyone ever consider making it possible to read the zipped data from a generic file object and not just using a string path? Using StringIO, This would allow a higher degree of python embedding in an application. I would be able to have a zip file in memory and make python read modules from it. -------------- next part -------------- An HTML attachment was scrubbed... URL: From zbyszek at in.waw.pl Tue May 15 13:43:38 2012 From: zbyszek at in.waw.pl (=?UTF-8?B?WmJpZ25pZXcgSsSZZHJ6ZWpld3NraS1Tem1law==?=) Date: Tue, 15 May 2012 13:43:38 +0200 Subject: [Python-Dev] Open PEPs and large-scale changes for 3.3 In-Reply-To: <87havz8m6p.fsf@benfinney.id.au> References: <87havz8m6p.fsf@benfinney.id.au> Message-ID: <4FB2416A.6060002@in.waw.pl> On 05/02/2012 02:24 AM, Ben Finney wrote: > Georg Brandl writes: > >> list of possible features for 3.3 as specified by PEP 398: >> >> Candidate PEPs: > [?] > >> * PEP 3143: Standard daemon process library I think that http://0pointer.de/public/systemd-man/daemon.html would a good addition to the 'see also' section. It contains a detailed listing of steps to be taked during daemonization. Zbyszek From ncoghlan at gmail.com Tue May 15 14:13:09 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 15 May 2012 22:13:09 +1000 Subject: [Python-Dev] Accepting PEP 3144 (the ipaddress library) Message-ID: Based on the current version of PEP 3144 and its reference implementation, I am formally accepting ipaddress into the standard library. I believe Peter has satisfactorily resolved the concerns previously raised with the proposed API, and if I missed anything... well, that's why we have alpha releases and the new provisional API status :) There's one point that could do with better documentation, which is the meaning of a "non-strict" Network address. In ipaddr.py, non-strict networks filled the role now filled by the separate Interface objects in the ipaddress module. In ipaddress, the "strict" flag instead just selects between raising a ValueError when passed a host address (the default) or simply coercing the host address to the appropriate network address. That behaviour strikes me as both reasonable and useful - the coercion aspect just needs to be mentioned in the documentation. The integration of the module into 3.3. will be tracked in http://bugs.python.org/issue14814 Peter will also need to be granted commit access in order to maintain the module. Regards, Nick. -- Nick Coghlan?? |?? ncoghlan at gmail.com?? |?? Brisbane, Australia From martin at v.loewis.de Tue May 15 17:14:49 2012 From: martin at v.loewis.de (martin at v.loewis.de) Date: Tue, 15 May 2012 17:14:49 +0200 Subject: [Python-Dev] zipimport to read from a file object instead of just a path? In-Reply-To: References: Message-ID: <20120515171449.Horde.xTlvO9jz9kRPsnLpOcvg4ZA@webmail.df.eu> Zitat von Shy Shalom : > In zipimport.c, function get_data(), the zip file is opened using fopen() > and read with CLib functions. > Did anyone ever consider making it possible to read the zipped data from a > generic file object and not just using a string path? It's already possible - just write another importer. For the builtin zipimport, this is not an option, since it seeds itself from the sys.path entry, which will be a file name. See PEP 302. Regards, Martin From ericsnowcurrently at gmail.com Tue May 15 18:26:35 2012 From: ericsnowcurrently at gmail.com (Eric Snow) Date: Tue, 15 May 2012 10:26:35 -0600 Subject: [Python-Dev] sys.implementation In-Reply-To: References: <20120426103150.4898a678@limelight.wooz.org> <4FAA3FA7.5070808@v.loewis.de> <20120509165039.23c8bf56@pitrou.net> <20120509095311.3a2c25c2@resist> <20120510105749.7401f1d2@pitrou.net> Message-ID: At this point I'm pretty comfortable with where PEP 421 is at. Before asking for pronouncement, I'd like to know if anyone has any outstanding concerns that should be addressed first. The only (relatively) substantial point of debate has been the type for sys.implementation. The PEP now limits the specification of the type to the minimum (Big-Endian vs. Little...er...attribute-access vs mapping). If anyone objects to the decision there to go with attribute-access, please make your case. >From my point of the view either one would be fine for what we need and attribute-access is more representative of the fixed namespace. Unless there is a really good reason to use a mapping, I'd like to stick with that. Thanks! -eric From guido at python.org Tue May 15 19:03:09 2012 From: guido at python.org (Guido van Rossum) Date: Tue, 15 May 2012 10:03:09 -0700 Subject: [Python-Dev] Accepting PEP 3144 (the ipaddress library) In-Reply-To: References: Message-ID: Congrats Nick and Peter! On Tue, May 15, 2012 at 5:13 AM, Nick Coghlan wrote: > Based on the current version of PEP 3144 and its reference > implementation, I am formally accepting ipaddress into the standard > library. > > I believe Peter has satisfactorily resolved the concerns previously > raised with the proposed API, and if I missed anything... well, that's > why we have alpha releases and the new provisional API status :) > > There's one point that could do with better documentation, which is > the meaning of a "non-strict" Network address. In ipaddr.py, > non-strict networks filled the role now filled by the separate > Interface objects in the ipaddress module. In ipaddress, the "strict" > flag instead just selects between raising a ValueError when passed a > host address (the default) or simply coercing the host address to the > appropriate network address. That behaviour strikes me as both > reasonable and useful - the coercion aspect just needs to be mentioned > in the documentation. > > The integration of the module into 3.3. will be tracked in > http://bugs.python.org/issue14814 > > Peter will also need to be granted commit access in order to maintain > the module. > > Regards, > Nick. > > -- > Nick Coghlan?? |?? ncoghlan at gmail.com?? |?? Brisbane, Australia > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/guido%40python.org -- --Guido van Rossum (python.org/~guido) From shooshx at gmail.com Tue May 15 19:21:32 2012 From: shooshx at gmail.com (Shy Shalom) Date: Tue, 15 May 2012 17:21:32 +0000 (UTC) Subject: [Python-Dev] zipimport to read from a file object instead of just a path? References: <20120515171449.Horde.xTlvO9jz9kRPsnLpOcvg4ZA@webmail.df.eu> Message-ID: > > It's already possible - just write another importer. For the builtin > zipimport, this is not an option, since it seeds itself from the sys.path > entry, which will be a file name. See PEP 302. > > Regards, > Martin > > Maybe it can be seeded by both a string path *or* a file object, the same way zipfile.ZipFile does. This way I'll be able to somehow override or re-initialize zipimport with my own StringIO - Shy From barry at python.org Tue May 15 20:31:20 2012 From: barry at python.org (Barry Warsaw) Date: Tue, 15 May 2012 14:31:20 -0400 Subject: [Python-Dev] sys.implementation In-Reply-To: References: <20120426103150.4898a678@limelight.wooz.org> <4FAA3FA7.5070808@v.loewis.de> <20120509165039.23c8bf56@pitrou.net> <20120509095311.3a2c25c2@resist> <20120510105749.7401f1d2@pitrou.net> Message-ID: <20120515143120.58fd7e07@limelight.wooz.org> On May 15, 2012, at 10:26 AM, Eric Snow wrote: >At this point I'm pretty comfortable with where PEP 421 is at. Before >asking for pronouncement, I'd like to know if anyone has any >outstanding concerns that should be addressed first. It looks great to me. If I were the PEP czar , I'd approve it. -Barry From tismer at stackless.com Tue May 15 22:13:04 2012 From: tismer at stackless.com (Christian Tismer) Date: Tue, 15 May 2012 22:13:04 +0200 Subject: [Python-Dev] dir() in inspect.py ? Message-ID: <4FB2B8D0.1010102@stackless.com> Hi, by chance I looked into the impl of inspect.getmembers today and was slightly shocked: def getmembers(object, predicate=None): """Return all members of an object as (name, value) pairs sorted by name. Optionally, only return members that satisfy a given predicate.""" results = [] for key in dir(object): According to http://docs.python.org/library/functions.html """ Note Because dir() is supplied primarily as a convenience for use at an interactive prompt, it tries to supply an interesting set of names more than it tries to supply a rigorously or consistently defined set of names, and its detailed behavior may change across releases. For example, metaclass attributes are not in the result list when the argument is a class. """ This is a bit inconsistent, and I think the standard lib should be the best example for clean code that is consistent with the docs. Is the usage of dir() correct in this context or is the doc right? It would be nice to add a sentence of clarification if the use of dir() is in fact the correct way to implement inspect. cheers - chris -- Christian Tismer :^) tismerysoft GmbH : Have a break! Take a ride on Python's Karl-Liebknecht-Str. 121 : *Starship* http://starship.python.net/ 14482 Potsdam : PGP key -> http://pgp.uni-mainz.de work +49 173 24 18 776 mobile +49 173 24 18 776 fax n.a. PGP 0x57F3BF04 9064 F4E1 D754 C2FF 1619 305B C09C 5A3B 57F3 BF04 whom do you want to sponsor today? http://www.stackless.com/ From d.s.seljebotn at astro.uio.no Wed May 16 09:44:10 2012 From: d.s.seljebotn at astro.uio.no (Dag Sverre Seljebotn) Date: Wed, 16 May 2012 09:44:10 +0200 Subject: [Python-Dev] C-level duck typing Message-ID: <4FB35ACA.7090908@astro.uio.no> Hi python-dev, these ideas/questions comes out of the Cython and NumPy developer lists. What we want is a way to communicate things on the C level about the extension type instances we pass around. The solution today is often to rely on PyObject_TypeCheck. For instance, hundreds of handcrafted C extensions rely on the internal structure of NumPy arrays, and Cython will check whether objects are instances of a Cython class or not. However, this creates one-to-many situations; only one implementor of an object API/ABI, but many consumers. What we would like is multiple implementors and multiple consumers of mutually agreed-upon standards. We essentially want more duck typing on the C level. A similar situation was PEP 3118. But there's many more such things one might want to communicate at the C level, many of which are very domain-specific and not suitable for a PEP at all. Also PEPs don't backport well to older versions of Python. What we *think* we would like (but we want other suggestions!) is an arbitrarily extensible type object, without tying this into the type hierarchy. Say you have typedef struct { unsigned long extension_id; void *data; } PyTypeObjectExtensionEntry; and then a type object can (somehow!) point to an array of these. The array is linearly scanned by consumers for IDs they recognize (most types would only have one or two entries). Cython could then get a reserved ID space to communicate whatever it wants, NumPy another one, and there could be "unofficial PEPs" where two or more projects get together to draft a spec for a particular type extension ID without having to bother python-dev about it. And, we want this to somehow work with existing Python; we still support users on Python 2.4. Options we've thought of so far: a) Use dicts and capsules to get information across. But performance-wise the dict lookup is not an option for what we want to use this for in Cython. b) Implement a metaclass which extends PyTypeObject in this way. However, that means a common runtime dependency for libraries that want to use this scheme, which is a big disadvantage to us. Today, Cython doesn't ship a runtime library but only creates standalone compileable C files, and there's no dependency from NumPy on Cython or the other way around. c) Hijack a free bit in tp_flags (22?) which we use to indicate that the PyTypeObject struct is immediately followed by a pointer to such an array. The final approach is drafted in more detail at http://wiki.cython.org/enhancements/cep1001 . To us that looks very attractive both for the speed and for the lack of runtime dependencies, and it seems like it should work in existing versions of Python. But do please feel free to tell us we are misguided. Hijacking a flag bit certainly feels dirty. Examples of how this would be used: - In Cython, we'd like to use this to annotate callable objects that happen to wrap a C function with their corresponding C function pointers. That way, callables that wrap a C function could be "unboxed", so that Cython could "cast" the Python object "scipy.special.gamma" to a function pointer at runtime and speed up the call with an order of magnitude. SciPy and Cython just needs to agree on a spec. - Lots of C extensions rely on using PyObject_TypeCheck (or even do an exact check) before calling the NumPy C API with PyArrayObject* arguments. This means that new features all have to go into NumPy; it is rather difficult to create new experimental array libraries. Extensible PyTypeObject would open up the way for other experimental array libraries; NumPy could make the standards, but others implement them (without getting NumPy as a runtime dependency, which is the consequence of subclassing). Of course, porting over the hundreds (thousands?) of extensions relying on the NumPy C API is a lot of work, but we can at least get started... Ideas? Dag Sverre Seljebotn From martin at v.loewis.de Wed May 16 09:52:02 2012 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Wed, 16 May 2012 09:52:02 +0200 Subject: [Python-Dev] [Python-checkins] devguide: Add VS2010 link and text, then restructure a few things In-Reply-To: References: Message-ID: <4FB35CA2.2090901@v.loewis.de> > +All versions previous to 3.3 use Microsoft Visual Studio 2008, available at > +https://www.microsoft.com/visualstudio/en-us/products/2008-editions/express. This isn't actually the case. 2.4 and 2.5 used Visual Studio 2003, 2.0 to 2.3 used VC6, 1.4 and 1.5 used Visual C++ 1.5; versions before that were available only from Mark Hammond. Regards, Martin From ncoghlan at gmail.com Wed May 16 10:11:08 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 16 May 2012 18:11:08 +1000 Subject: [Python-Dev] C-level duck typing In-Reply-To: <4FB35ACA.7090908@astro.uio.no> References: <4FB35ACA.7090908@astro.uio.no> Message-ID: Use PyObject_HasAttr, just as people use hasattr() for ducktyping in Python. If you want something more structured, use Abstract Base Classes, that's what they're for. Cheers, Nick. -- Nick Coghlan?? |?? ncoghlan at gmail.com?? |?? Brisbane, Australia From d.s.seljebotn at astro.uio.no Wed May 16 10:25:21 2012 From: d.s.seljebotn at astro.uio.no (Dag Sverre Seljebotn) Date: Wed, 16 May 2012 10:25:21 +0200 Subject: [Python-Dev] C-level duck typing In-Reply-To: References: <4FB35ACA.7090908@astro.uio.no> Message-ID: <4FB36471.2000804@astro.uio.no> On 05/16/2012 10:11 AM, Nick Coghlan wrote: > Use PyObject_HasAttr, just as people use hasattr() for ducktyping in Python. In the Cython wrap-function-pointers case we really want performance comparable to C, so we couldn't do the whole thing. But I guess we could intern some char* (somehow), pass that to tp_getattr, and then cast the returned PyObject* (which would not be a PyObject*) to a custom struct. As long as the interned strings can never be reached from Python that's almost safe. It's still slight abuse of tp_getattr. As I said, if we didn't worry about performance we'd just retrieve capsules through attributes. Dag > > If you want something more structured, use Abstract Base Classes, > that's what they're for. > > Cheers, > Nick. > From martin at v.loewis.de Wed May 16 10:36:03 2012 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Wed, 16 May 2012 10:36:03 +0200 Subject: [Python-Dev] C-level duck typing In-Reply-To: <4FB35ACA.7090908@astro.uio.no> References: <4FB35ACA.7090908@astro.uio.no> Message-ID: <4FB366F3.7010208@v.loewis.de> > And, we want this to somehow work with existing Python; we still > support users on Python 2.4. This makes the question out-of-scope for python-dev - we only discuss new versions of Python here. Old versions cannot be developed anymore (as they are released already). > typedef struct { > unsigned long extension_id; > void *data; > } PyTypeObjectExtensionEntry; > > and then a type object can (somehow!) point to an array of these. The > array is linearly scanned It's unclear to me why you think that a linear scan is faster than a dictionary lookup. The contrary will be the case - the dictionary lookup (PyObject_GetAttr) will be much faster. Just make sure to use interned strings, and to cache the interned strings. Regards, Martin From d.s.seljebotn at astro.uio.no Wed May 16 11:09:02 2012 From: d.s.seljebotn at astro.uio.no (Dag Sverre Seljebotn) Date: Wed, 16 May 2012 11:09:02 +0200 Subject: [Python-Dev] C-level duck typing In-Reply-To: <4FB366F3.7010208@v.loewis.de> References: <4FB35ACA.7090908@astro.uio.no> <4FB366F3.7010208@v.loewis.de> Message-ID: <4FB36EAE.90208@astro.uio.no> On 05/16/2012 10:36 AM, "Martin v. L?wis" wrote: > > And, we want this to somehow work with existing Python; we still > > support users on Python 2.4. > > This makes the question out-of-scope for python-dev - we only discuss > new versions of Python here. Old versions cannot be developed anymore > (as they are released already). Point taken. Sorry about that, and I appreciate your patience with me. I guess my idea was that if some mechanism was approved for future Python versions, we would feel easier about hacking around older Python versions. Of course, nothing is better than this not being a problem, as you seem to suggest. But: > >> typedef struct { >> unsigned long extension_id; >> void *data; >> } PyTypeObjectExtensionEntry; >> >> and then a type object can (somehow!) point to an array of these. The >> array is linearly scanned > > It's unclear to me why you think that a linear scan is faster than > a dictionary lookup. The contrary will be the case - the dictionary > lookup (PyObject_GetAttr) will be much faster. I've benchmarked using a PyObject* as a function-pointer-capsule using the above mechanism; that added about 2-3 nanoseconds of overhead on what would be a 5 nanosecond call in pure C. (There will only be 1 or 2 entries in that list...) Dict lookups are about 18 nanoseconds for me, using interned string objects (see below). Perhaps that can be reduced somewhat, but I highly doubt you'll get to 3-4 nanoseconds? Cython benchmark (which does translate do what you'd do in C): def hammer_dict(int n): cdef dict the_dict a = "hello" b = "there" the_dict = {a : a, b : a} for i in range(n): the_dict[b] Dag From stefan_ml at behnel.de Wed May 16 11:22:31 2012 From: stefan_ml at behnel.de (Stefan Behnel) Date: Wed, 16 May 2012 11:22:31 +0200 Subject: [Python-Dev] C-level duck typing In-Reply-To: <4FB366F3.7010208@v.loewis.de> References: <4FB35ACA.7090908@astro.uio.no> <4FB366F3.7010208@v.loewis.de> Message-ID: "Martin v. L?wis", 16.05.2012 10:36: >> And, we want this to somehow work with existing Python; we still >> support users on Python 2.4. > > This makes the question out-of-scope for python-dev - we only discuss > new versions of Python here. Old versions cannot be developed anymore > (as they are released already). Well, it's in scope because CPython would have to support this in a future version, or at least have to make sure it knows about it so that it can stay out of the way (depending on how the solution eventually ends up working). We're also very much interested in input from the CPython core developers regarding the design, because we think that this should become a general feature of the Python platform (potentially also for PyPy etc.). The fact that we need to support it in older CPython versions is also relevant, because the solution we choose shouldn't conflict with older versions. The fact that they are no longer developed actually helps, because it will prevent them from interfering in the future any more than they do now. >> typedef struct { >> unsigned long extension_id; >> void *data; >> } PyTypeObjectExtensionEntry; >> >> and then a type object can (somehow!) point to an array of these. The >> array is linearly scanned > > It's unclear to me why you think that a linear scan is faster than > a dictionary lookup. The contrary will be the case - the dictionary > lookup (PyObject_GetAttr) will be much faster. Agreed in general, but in this case, it's really not that easy. A C function call involves a certain overhead all by itself, so calling into the C-API multiple times may be substantially more costly than, say, calling through a function pointer once and then running over a returned C array comparing numbers. And definitely way more costly than running over an array that the type struct points to directly. We are not talking about hundreds of entries here, just a few. A linear scan in 64 bit steps over something like a hundred bytes in the L1 cache should hardly be measurable. This might sound like a premature micro optimisation, but these things can quickly add up, e.g. when running a user provided function over a large array. (And, yes, we'd try to do caching and all that...) Stefan From martin at v.loewis.de Wed May 16 11:50:04 2012 From: martin at v.loewis.de (=?UTF-8?B?Ik1hcnRpbiB2LiBMw7Z3aXMi?=) Date: Wed, 16 May 2012 11:50:04 +0200 Subject: [Python-Dev] C-level duck typing In-Reply-To: References: <4FB35ACA.7090908@astro.uio.no> <4FB366F3.7010208@v.loewis.de> Message-ID: <4FB3784C.9020906@v.loewis.de> > Agreed in general, but in this case, it's really not that easy. A C > function call involves a certain overhead all by itself, so calling into > the C-API multiple times may be substantially more costly than, say, > calling through a function pointer once and then running over a returned C > array comparing numbers. And definitely way more costly than running over > an array that the type struct points to directly. We are not talking about > hundreds of entries here, just a few. A linear scan in 64 bit steps over > something like a hundred bytes in the L1 cache should hardly be measurable. I give up, then. I fail to understand the problem. Apparently, you want to do something with the value you get from this lookup operation, but that something won't involve function calls (or else the function call overhead for the lookup wouldn't be relevant). I still think this is out of scope for python-dev. If this is something you want to be able to do for Python 2.4 as well, then you don't need any change to Python - you can do whatever you come up with for all Python versions, no need to (or point in) changing Python 3.4 (say). As this is apparently only relevant to speed fanatics, too, I suggest that you check how fast PyPI works for you. Supposedly, they have very efficient lookup procedures, supported by the JIT. If this doesn't work for some reason, I suggest that you'll have to trade speed for convenience: a compile-time fixed layout will beat any dynamic lookup any time. Just define a common base class, and have all interesting types inherit from it. Regards, Martin From mark at hotpy.org Wed May 16 12:28:40 2012 From: mark at hotpy.org (Mark Shannon) Date: Wed, 16 May 2012 11:28:40 +0100 Subject: [Python-Dev] C-level duck typing In-Reply-To: <4FB366F3.7010208@v.loewis.de> References: <4FB35ACA.7090908@astro.uio.no> <4FB366F3.7010208@v.loewis.de> Message-ID: <4FB38158.60800@hotpy.org> Martin v. L?wis wrote: > > And, we want this to somehow work with existing Python; we still > > support users on Python 2.4. > > This makes the question out-of-scope for python-dev - we only discuss > new versions of Python here. Old versions cannot be developed anymore > (as they are released already). > >> typedef struct { >> unsigned long extension_id; >> void *data; >> } PyTypeObjectExtensionEntry; >> >> and then a type object can (somehow!) point to an array of these. The >> array is linearly scanned > > It's unclear to me why you think that a linear scan is faster than > a dictionary lookup. The contrary will be the case - the dictionary > lookup (PyObject_GetAttr) will be much faster. PyObject_GetAttr does a lot more than just a dictionary lookup. Perhaps making _PyType_Lookup() public might provide what it is needed? > > Just make sure to use interned strings, and to cache the interned strings. > > Regards, > Martin > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > http://mail.python.org/mailman/options/python-dev/mark%40hotpy.org From d.s.seljebotn at astro.uio.no Wed May 16 12:48:19 2012 From: d.s.seljebotn at astro.uio.no (Dag Sverre Seljebotn) Date: Wed, 16 May 2012 12:48:19 +0200 Subject: [Python-Dev] C-level duck typing In-Reply-To: <4FB3784C.9020906@v.loewis.de> References: <4FB35ACA.7090908@astro.uio.no> <4FB366F3.7010208@v.loewis.de> <4FB3784C.9020906@v.loewis.de> Message-ID: <4FB385F3.7070209@astro.uio.no> On 05/16/2012 11:50 AM, "Martin v. L?wis" wrote: >> Agreed in general, but in this case, it's really not that easy. A C >> function call involves a certain overhead all by itself, so calling into >> the C-API multiple times may be substantially more costly than, say, >> calling through a function pointer once and then running over a >> returned C >> array comparing numbers. And definitely way more costly than running over >> an array that the type struct points to directly. We are not talking >> about >> hundreds of entries here, just a few. A linear scan in 64 bit steps over >> something like a hundred bytes in the L1 cache should hardly be >> measurable. > > I give up, then. I fail to understand the problem. Apparently, you want > to do something with the value you get from this lookup operation, but > that something won't involve function calls (or else the function call > overhead for the lookup wouldn't be relevant). In our specific case the value would be an offset added to the PyObject*, and there we would find a pointer to a C function (together with a 64-bit signature), and calling that C function (after checking the 64 bit signature) is our final objective. > I still think this is out of scope for python-dev. If this is something > you want to be able to do for Python 2.4 as well, then you don't need > any change to Python - you can do whatever you come up with for all > Python versions, no need to (or point in) changing Python 3.4 (say). We can go ahead and hijack tp_flags bit 22 to make things work in existing versions. But what if Python 3.8 then starts using that bit for something else? > As this is apparently only relevant to speed fanatics, too, I suggest > that you check how fast PyPI works for you. Supposedly, they have very > efficient lookup procedures, supported by the JIT. If this doesn't work > for some reason, I suggest that you'll have to trade speed for > convenience: a compile-time fixed layout will beat any dynamic lookup > any time. Just define a common base class, and have all interesting > types inherit from it. Did you mean PyPy? Me and Stefan are Cython developers, so that's kind of our angle... And I'm a Cython developer because it solves a practical need (in my case in scientific computation), not because I think it's that beautiful. PyPy won't work for me (let's not go down that road now...) Defining a common base class is what NumPy already does, and Cython would be forced to without this proposal. We just think it has significant disadvantages and were looking for something else. Dag From stefan_ml at behnel.de Wed May 16 13:13:42 2012 From: stefan_ml at behnel.de (Stefan Behnel) Date: Wed, 16 May 2012 13:13:42 +0200 Subject: [Python-Dev] C-level duck typing In-Reply-To: <4FB385F3.7070209@astro.uio.no> References: <4FB35ACA.7090908@astro.uio.no> <4FB366F3.7010208@v.loewis.de> <4FB3784C.9020906@v.loewis.de> <4FB385F3.7070209@astro.uio.no> Message-ID: Dag Sverre Seljebotn, 16.05.2012 12:48: > On 05/16/2012 11:50 AM, "Martin v. L?wis" wrote: >>> Agreed in general, but in this case, it's really not that easy. A C >>> function call involves a certain overhead all by itself, so calling into >>> the C-API multiple times may be substantially more costly than, say, >>> calling through a function pointer once and then running over a >>> returned C >>> array comparing numbers. And definitely way more costly than running over >>> an array that the type struct points to directly. We are not talking >>> about >>> hundreds of entries here, just a few. A linear scan in 64 bit steps over >>> something like a hundred bytes in the L1 cache should hardly be >>> measurable. >> >> I give up, then. I fail to understand the problem. Apparently, you want >> to do something with the value you get from this lookup operation, but >> that something won't involve function calls (or else the function call >> overhead for the lookup wouldn't be relevant). > > In our specific case the value would be an offset added to the PyObject*, > and there we would find a pointer to a C function (together with a 64-bit > signature), and calling that C function (after checking the 64 bit > signature) is our final objective. I think the use case hasn't been communicated all that clearly yet. Let's give it another try. Imagine we have two sides, one that provides a callable and the other side that wants to call it. Both sides are implemented in C, so the callee has a C signature and the caller has the arguments available as C data types. The signature may or may not match the argument types exactly (float vs. double, int vs. long, ...), because the caller and the callee know nothing about each other initially, they just happen to appear in the same program at runtime. All they know is that they could call each other through Python space, but that would require data conversion, tuple packing, calling, tuple unpacking, data unpacking, and then potentially the same thing on the way back. They want to avoid that overhead. Now, the caller needs to figure out if the callee has a compatible signature. The callee may provide more than one signature (i.e. more than one C call entry point), perhaps because it is implemented to deal with different input data types efficiently, or perhaps because it can efficiently convert them to its expected input. So, there is a signature on the caller side given by the argument types it holds, and a couple of signature on the callee side that can accept different C data input. Then the caller needs to find out which signatures there are and match them against what it can efficiently call. It may even be a JIT compiler that can generate an efficient call signature on the fly, given a suitable signature on callee side. An example for this is an algorithm that evaluates a user provided function on a large NumPy array. The caller knows what array type it is operating on, and the user provided function may be designed to efficiently operate on arrays of int, float and double entries. Does this use case make sense to everyone? The reason why we are discussing this on python-dev is that we are looking for a general way to expose these C level signatures within the Python ecosystem. And Dag's idea was to expose them as part of the type object, basically as an addition to the current Python level tp_call() slot. Stefan From stefan_ml at behnel.de Wed May 16 14:16:05 2012 From: stefan_ml at behnel.de (Stefan Behnel) Date: Wed, 16 May 2012 14:16:05 +0200 Subject: [Python-Dev] C-level duck typing In-Reply-To: References: <4FB35ACA.7090908@astro.uio.no> <4FB366F3.7010208@v.loewis.de> <4FB3784C.9020906@v.loewis.de> <4FB385F3.7070209@astro.uio.no> Message-ID: Stefan Behnel, 16.05.2012 13:13: > Dag Sverre Seljebotn, 16.05.2012 12:48: >> On 05/16/2012 11:50 AM, "Martin v. L?wis" wrote: >>>> Agreed in general, but in this case, it's really not that easy. A C >>>> function call involves a certain overhead all by itself, so calling into >>>> the C-API multiple times may be substantially more costly than, say, >>>> calling through a function pointer once and then running over a >>>> returned C >>>> array comparing numbers. And definitely way more costly than running over >>>> an array that the type struct points to directly. We are not talking >>>> about >>>> hundreds of entries here, just a few. A linear scan in 64 bit steps over >>>> something like a hundred bytes in the L1 cache should hardly be >>>> measurable. >>> >>> I give up, then. I fail to understand the problem. Apparently, you want >>> to do something with the value you get from this lookup operation, but >>> that something won't involve function calls (or else the function call >>> overhead for the lookup wouldn't be relevant). >> >> In our specific case the value would be an offset added to the PyObject*, >> and there we would find a pointer to a C function (together with a 64-bit >> signature), and calling that C function (after checking the 64 bit >> signature) is our final objective. > > I think the use case hasn't been communicated all that clearly yet. Let's > give it another try. > > Imagine we have two sides, one that provides a callable and the other side > that wants to call it. Both sides are implemented in C, so the callee has a > C signature and the caller has the arguments available as C data types. The > signature may or may not match the argument types exactly (float vs. > double, int vs. long, ...), because the caller and the callee know nothing > about each other initially, they just happen to appear in the same program > at runtime. All they know is that they could call each other through Python > space, but that would require data conversion, tuple packing, calling, > tuple unpacking, data unpacking, and then potentially the same thing on the > way back. They want to avoid that overhead. > > Now, the caller needs to figure out if the callee has a compatible > signature. The callee may provide more than one signature (i.e. more than > one C call entry point), perhaps because it is implemented to deal with > different input data types efficiently, or perhaps because it can > efficiently convert them to its expected input. So, there is a signature on > the caller side given by the argument types it holds, and a couple of > signature on the callee side that can accept different C data input. Then > the caller needs to find out which signatures there are and match them > against what it can efficiently call. It may even be a JIT compiler that > can generate an efficient call signature on the fly, given a suitable > signature on callee side. > > An example for this is an algorithm that evaluates a user provided function > on a large NumPy array. The caller knows what array type it is operating > on, and the user provided function may be designed to efficiently operate > on arrays of int, float and double entries. > > Does this use case make sense to everyone? > > The reason why we are discussing this on python-dev is that we are looking > for a general way to expose these C level signatures within the Python > ecosystem. And Dag's idea was to expose them as part of the type object, > basically as an addition to the current Python level tp_call() slot. ... and to finish the loop that I started here (sorry for being verbose): The proposal that Dag referenced describes a more generic way to make this kind of extension to type objects from user code. Basically, it allows implementers to say "my type object has capability X", in a C-ish kind of way. And the above C signature protocol would be one of those capabilities. Personally, I wouldn't mind making the specific signature extension a proposal instead of asking for a general extension mechanism for arbitrary capabilities (although that still sounds tempting). Stefan From solipsis at pitrou.net Wed May 16 14:44:56 2012 From: solipsis at pitrou.net (Antoine Pitrou) Date: Wed, 16 May 2012 14:44:56 +0200 Subject: [Python-Dev] 64-bit Windows buildbots needed Message-ID: <20120516144456.3c838db3@pitrou.net> Hello all, We still need 64-bit Windows buildbots to test for regressions. Otherwise we might let regressions slip through, since few people seem to run the test suite under Windows at home. Regards Antoine. From mark at hotpy.org Wed May 16 14:47:47 2012 From: mark at hotpy.org (Mark Shannon) Date: Wed, 16 May 2012 13:47:47 +0100 Subject: [Python-Dev] C-level duck typing In-Reply-To: References: <4FB35ACA.7090908@astro.uio.no> <4FB366F3.7010208@v.loewis.de> <4FB3784C.9020906@v.loewis.de> <4FB385F3.7070209@astro.uio.no> Message-ID: <4FB3A1F3.2050405@hotpy.org> Stefan Behnel wrote: > Dag Sverre Seljebotn, 16.05.2012 12:48: >> On 05/16/2012 11:50 AM, "Martin v. L?wis" wrote: >>>> Agreed in general, but in this case, it's really not that easy. A C >>>> function call involves a certain overhead all by itself, so calling into >>>> the C-API multiple times may be substantially more costly than, say, >>>> calling through a function pointer once and then running over a >>>> returned C >>>> array comparing numbers. And definitely way more costly than running over >>>> an array that the type struct points to directly. We are not talking >>>> about >>>> hundreds of entries here, just a few. A linear scan in 64 bit steps over >>>> something like a hundred bytes in the L1 cache should hardly be >>>> measurable. >>> I give up, then. I fail to understand the problem. Apparently, you want >>> to do something with the value you get from this lookup operation, but >>> that something won't involve function calls (or else the function call >>> overhead for the lookup wouldn't be relevant). >> In our specific case the value would be an offset added to the PyObject*, >> and there we would find a pointer to a C function (together with a 64-bit >> signature), and calling that C function (after checking the 64 bit >> signature) is our final objective. > > I think the use case hasn't been communicated all that clearly yet. Let's > give it another try. > > Imagine we have two sides, one that provides a callable and the other side > that wants to call it. Both sides are implemented in C, so the callee has a > C signature and the caller has the arguments available as C data types. The > signature may or may not match the argument types exactly (float vs. > double, int vs. long, ...), because the caller and the callee know nothing > about each other initially, they just happen to appear in the same program > at runtime. All they know is that they could call each other through Python > space, but that would require data conversion, tuple packing, calling, > tuple unpacking, data unpacking, and then potentially the same thing on the > way back. They want to avoid that overhead. > > Now, the caller needs to figure out if the callee has a compatible > signature. The callee may provide more than one signature (i.e. more than > one C call entry point), perhaps because it is implemented to deal with > different input data types efficiently, or perhaps because it can > efficiently convert them to its expected input. So, there is a signature on > the caller side given by the argument types it holds, and a couple of > signature on the callee side that can accept different C data input. Then > the caller needs to find out which signatures there are and match them > against what it can efficiently call. It may even be a JIT compiler that > can generate an efficient call signature on the fly, given a suitable > signature on callee side. > > An example for this is an algorithm that evaluates a user provided function > on a large NumPy array. The caller knows what array type it is operating > on, and the user provided function may be designed to efficiently operate > on arrays of int, float and double entries. Given that use case, can I suggest the following: Separate the discovery of the function from its use. By this I mean first lookup the function (outside of the loop) then use the function (inside the loop). It would then be possible to lookup the function pointer, using the standard API, PyObject_GetAttr (or maybe _PyType_Lookup). Then when it came to applying the function, the function pointer could be used directly. To do this would require an extra builtin-function-like object, which would wrap the C function pointer. Currently the builtin (C) function type only supports a very limited range of types for the underlying function pointer. For example, an extended builtin-function could support (among other types) the C function type double (*func)(double, double). The extended builtin-function would be a Python callable, but would allow C extensions such to NumPy to access the underlying C function directly. The builtin-function declaration would consist of a pointer to the underlying function pointer and a type declaration which states which types it accepts. The VM would be responsible for any unboxing/boxing required. E.g float.__add__ could be constructed from a very simple C function (that adds two doubles and returns a double) and a type declaration: (cdouble, cdouble)->cdouble. Allowable types would be intptr_t, double or PyObject* (and maybe char*) PyObject* types could be further qualified with their Python type. Not allowing char, short, unsigned etc may seem like its too restrictive, but it prevents an explosion of possible types. Allowing only 3 C-level types and no more than 3 parameters (plus return) means that all 121 (3**4+3**3+3**2+3**1+3**0) permutations can be handled without resorting to ctypes/ffi. Example usage: typedef double (*ddd_func)(double, double); ddd_func cfunc; PyObject *func = PyType_Lookup(the_type, the_attribute); if (Py_TYPE(func) == Py_ExtendedBuiltinFunction_Type && str_cmp(Py_ExtendedFunctionBuiltin_TypeOf(func), "d,d->d") == 0) cfunc = Py_ExtendedFunctionBuiltin_GetFunctionPtr(func); else goto feature_not_provided; for (;;) /* Loop using cfunc */ [snip] Cheers, Mark. From brian at python.org Wed May 16 15:45:42 2012 From: brian at python.org (Brian Curtin) Date: Wed, 16 May 2012 08:45:42 -0500 Subject: [Python-Dev] [Python-checkins] devguide: Add VS2010 link and text, then restructure a few things In-Reply-To: <4FB35CA2.2090901@v.loewis.de> References: <4FB35CA2.2090901@v.loewis.de> Message-ID: On Wed, May 16, 2012 at 2:52 AM, "Martin v. L?wis" wrote: >> +All versions previous to 3.3 use Microsoft Visual Studio 2008, available >> at >> >> +https://www.microsoft.com/visualstudio/en-us/products/2008-editions/express. > > > This isn't actually the case. 2.4 and 2.5 used Visual Studio 2003, 2.0 to > 2.3 used VC6, 1.4 and 1.5 used Visual C++ 1.5; versions before that > were available only from Mark Hammond. I know *all* previous versions didn't use 2008 -- just all other versions we still support and that people seem to be working on or using. Anyway, I changed it from "all" to "most" in 0ac1d3863208. From d.s.seljebotn at astro.uio.no Wed May 16 16:46:16 2012 From: d.s.seljebotn at astro.uio.no (Dag Sverre Seljebotn) Date: Wed, 16 May 2012 16:46:16 +0200 Subject: [Python-Dev] C-level duck typing In-Reply-To: References: <4FB35ACA.7090908@astro.uio.no> <4FB366F3.7010208@v.loewis.de> <4FB3784C.9020906@v.loewis.de> <4FB385F3.7070209@astro.uio.no> Message-ID: <4FB3BDB8.1000208@astro.uio.no> On 05/16/2012 02:16 PM, Stefan Behnel wrote: > Stefan Behnel, 16.05.2012 13:13: >> Dag Sverre Seljebotn, 16.05.2012 12:48: >>> On 05/16/2012 11:50 AM, "Martin v. L?wis" wrote: >>>>> Agreed in general, but in this case, it's really not that easy. A C >>>>> function call involves a certain overhead all by itself, so calling into >>>>> the C-API multiple times may be substantially more costly than, say, >>>>> calling through a function pointer once and then running over a >>>>> returned C >>>>> array comparing numbers. And definitely way more costly than running over >>>>> an array that the type struct points to directly. We are not talking >>>>> about >>>>> hundreds of entries here, just a few. A linear scan in 64 bit steps over >>>>> something like a hundred bytes in the L1 cache should hardly be >>>>> measurable. >>>> >>>> I give up, then. I fail to understand the problem. Apparently, you want >>>> to do something with the value you get from this lookup operation, but >>>> that something won't involve function calls (or else the function call >>>> overhead for the lookup wouldn't be relevant). >>> >>> In our specific case the value would be an offset added to the PyObject*, >>> and there we would find a pointer to a C function (together with a 64-bit >>> signature), and calling that C function (after checking the 64 bit >>> signature) is our final objective. >> >> I think the use case hasn't been communicated all that clearly yet. Let's >> give it another try. >> >> Imagine we have two sides, one that provides a callable and the other side >> that wants to call it. Both sides are implemented in C, so the callee has a >> C signature and the caller has the arguments available as C data types. The >> signature may or may not match the argument types exactly (float vs. >> double, int vs. long, ...), because the caller and the callee know nothing >> about each other initially, they just happen to appear in the same program >> at runtime. All they know is that they could call each other through Python >> space, but that would require data conversion, tuple packing, calling, >> tuple unpacking, data unpacking, and then potentially the same thing on the >> way back. They want to avoid that overhead. >> >> Now, the caller needs to figure out if the callee has a compatible >> signature. The callee may provide more than one signature (i.e. more than >> one C call entry point), perhaps because it is implemented to deal with >> different input data types efficiently, or perhaps because it can >> efficiently convert them to its expected input. So, there is a signature on >> the caller side given by the argument types it holds, and a couple of >> signature on the callee side that can accept different C data input. Then >> the caller needs to find out which signatures there are and match them >> against what it can efficiently call. It may even be a JIT compiler that >> can generate an efficient call signature on the fly, given a suitable >> signature on callee side. >> >> An example for this is an algorithm that evaluates a user provided function >> on a large NumPy array. The caller knows what array type it is operating >> on, and the user provided function may be designed to efficiently operate >> on arrays of int, float and double entries. >> >> Does this use case make sense to everyone? >> >> The reason why we are discussing this on python-dev is that we are looking >> for a general way to expose these C level signatures within the Python >> ecosystem. And Dag's idea was to expose them as part of the type object, >> basically as an addition to the current Python level tp_call() slot. > > ... and to finish the loop that I started here (sorry for being verbose): > > The proposal that Dag referenced describes a more generic way to make this > kind of extension to type objects from user code. Basically, it allows > implementers to say "my type object has capability X", in a C-ish kind of > way. And the above C signature protocol would be one of those capabilities. > > Personally, I wouldn't mind making the specific signature extension a > proposal instead of asking for a general extension mechanism for arbitrary > capabilities (although that still sounds tempting). Here's some reasons for the generic proposal: a) Avoid pre-mature PEP-ing. Look at PEP 3118 for instance; that would almost certainly had been better if there had been a few years of beta-testing in the wild among Cython and NumPy users. I think PEP-ing the "nativecall" proposal soon (even in the unlikely event that it would be accepted) is bound to give suboptimal results -- it needs to be tested in the wild on Cython and SciPy users for a few years first. (Still, we can't ask those to recompile their Python.) My proposal is then about allowing people to play with their own slots, and deploy that to users, without having to create a PEP for their specific usecase. b) There's more than the "nativecall" we'd use this for in Cython. Something like compiled abstract base classes/compiled multiple inheritance/Go-style interfaces for instance. Some of those things we'd like to use it for certainly will never be a PEP. c) Get NumPy users off their PyObject_TypeCheck habit, which IMO is damaging to the NumPy project (because you can't that easily play around with different array libraries and new ideas -- NumPy is the only array type you can ever have, because millions of code lines have been written using its C API. My proposal provides a way of moving that API over to accept any object implementing a NumPy-specified spec. We certainly don't want to have a 20 nanosecond speed regression on every single call they make to the NumPy C API, and you simply don't rewrite millions of code lines.). I think having millions of lines of "Python" code written in C, and not Python, and considering 20 nanoseconds as "much", is perhaps not the typical usecase on this list. Still, that's the world of scientific computing with Python. Python-the-interpreter is just the "shell" around the real stuff that all happens in C or Fortran. (Cython is not just about scientific computing, as I'm sure Stefan has told you all about. But in other situations I think there's less of a need of "cross-talk" between extensions without going through the Python API.) I guess I don't get "if something needs to be fast on the C level, then that one specific usecase should be in a PEP". And all we're asking for is really that one bit in tp_flags. Dag From d.s.seljebotn at astro.uio.no Wed May 16 16:59:16 2012 From: d.s.seljebotn at astro.uio.no (Dag Sverre Seljebotn) Date: Wed, 16 May 2012 16:59:16 +0200 Subject: [Python-Dev] C-level duck typing In-Reply-To: <4FB3A1F3.2050405@hotpy.org> References: <4FB35ACA.7090908@astro.uio.no> <4FB366F3.7010208@v.loewis.de> <4FB3784C.9020906@v.loewis.de> <4FB385F3.7070209@astro.uio.no> <4FB3A1F3.2050405@hotpy.org> Message-ID: <4FB3C0C4.3060506@astro.uio.no> On 05/16/2012 02:47 PM, Mark Shannon wrote: > Stefan Behnel wrote: >> Dag Sverre Seljebotn, 16.05.2012 12:48: >>> On 05/16/2012 11:50 AM, "Martin v. L?wis" wrote: >>>>> Agreed in general, but in this case, it's really not that easy. A C >>>>> function call involves a certain overhead all by itself, so calling >>>>> into >>>>> the C-API multiple times may be substantially more costly than, say, >>>>> calling through a function pointer once and then running over a >>>>> returned C >>>>> array comparing numbers. And definitely way more costly than >>>>> running over >>>>> an array that the type struct points to directly. We are not talking >>>>> about >>>>> hundreds of entries here, just a few. A linear scan in 64 bit steps >>>>> over >>>>> something like a hundred bytes in the L1 cache should hardly be >>>>> measurable. >>>> I give up, then. I fail to understand the problem. Apparently, you want >>>> to do something with the value you get from this lookup operation, but >>>> that something won't involve function calls (or else the function call >>>> overhead for the lookup wouldn't be relevant). >>> In our specific case the value would be an offset added to the >>> PyObject*, >>> and there we would find a pointer to a C function (together with a >>> 64-bit >>> signature), and calling that C function (after checking the 64 bit >>> signature) is our final objective. >> >> I think the use case hasn't been communicated all that clearly yet. Let's >> give it another try. >> >> Imagine we have two sides, one that provides a callable and the other >> side >> that wants to call it. Both sides are implemented in C, so the callee >> has a >> C signature and the caller has the arguments available as C data >> types. The >> signature may or may not match the argument types exactly (float vs. >> double, int vs. long, ...), because the caller and the callee know >> nothing >> about each other initially, they just happen to appear in the same >> program >> at runtime. All they know is that they could call each other through >> Python >> space, but that would require data conversion, tuple packing, calling, >> tuple unpacking, data unpacking, and then potentially the same thing >> on the >> way back. They want to avoid that overhead. >> >> Now, the caller needs to figure out if the callee has a compatible >> signature. The callee may provide more than one signature (i.e. more than >> one C call entry point), perhaps because it is implemented to deal with >> different input data types efficiently, or perhaps because it can >> efficiently convert them to its expected input. So, there is a >> signature on >> the caller side given by the argument types it holds, and a couple of >> signature on the callee side that can accept different C data input. Then >> the caller needs to find out which signatures there are and match them >> against what it can efficiently call. It may even be a JIT compiler that >> can generate an efficient call signature on the fly, given a suitable >> signature on callee side. > >> >> An example for this is an algorithm that evaluates a user provided >> function >> on a large NumPy array. The caller knows what array type it is operating >> on, and the user provided function may be designed to efficiently operate >> on arrays of int, float and double entries. > > Given that use case, can I suggest the following: > > Separate the discovery of the function from its use. > By this I mean first lookup the function (outside of the loop) > then use the function (inside the loop). We would obviously do that when we can. But Cython is a compiler/code translator, and we don't control usecases. You can easily make up usecases (= Cython code people write) where you can't easily separate the two. For instance, the Sage projects has hundreds of thousands of lines of object-oriented Cython code (NOT just array-oriented, but also graphs and trees and stuff), which is all based on Cython's own fast vtable dispatches a la C++. They might want to clean up their code and more generic callback objects some places. Other users currently pass around C pointers for callback functions, and we'd like to tell them "pass around these nicer Python callables instead, honestly, the penalty is only 2 ns per call". (*Regardless* of how you use them, like making sure you use them in a loop where we can statically pull out the function pointer acquisition. Saying "this is only non-sluggish if you do x, y, z puts users off.) I'm not asking you to consider the details of all that. Just to allow some kind of high-performance extensibility of PyTypeObject, so that we can *stop* bothering python-dev with specific requirements from our parallel universe of nearly-all-Cython-and-Fortran-and-C++ codebases :-) Dag From mark at hotpy.org Wed May 16 17:40:23 2012 From: mark at hotpy.org (Mark Shannon) Date: Wed, 16 May 2012 16:40:23 +0100 Subject: [Python-Dev] C-level duck typing In-Reply-To: <4FB3C0C4.3060506@astro.uio.no> References: <4FB35ACA.7090908@astro.uio.no> <4FB366F3.7010208@v.loewis.de> <4FB3784C.9020906@v.loewis.de> <4FB385F3.7070209@astro.uio.no> <4FB3A1F3.2050405@hotpy.org> <4FB3C0C4.3060506@astro.uio.no> Message-ID: <4FB3CA67.9080008@hotpy.org> Dag Sverre Seljebotn wrote: > On 05/16/2012 02:47 PM, Mark Shannon wrote: >> Stefan Behnel wrote: >>> Dag Sverre Seljebotn, 16.05.2012 12:48: >>>> On 05/16/2012 11:50 AM, "Martin v. L?wis" wrote: >>>>>> Agreed in general, but in this case, it's really not that easy. A C >>>>>> function call involves a certain overhead all by itself, so calling >>>>>> into >>>>>> the C-API multiple times may be substantially more costly than, say, >>>>>> calling through a function pointer once and then running over a >>>>>> returned C >>>>>> array comparing numbers. And definitely way more costly than >>>>>> running over >>>>>> an array that the type struct points to directly. We are not talking >>>>>> about >>>>>> hundreds of entries here, just a few. A linear scan in 64 bit steps >>>>>> over >>>>>> something like a hundred bytes in the L1 cache should hardly be >>>>>> measurable. >>>>> I give up, then. I fail to understand the problem. Apparently, you >>>>> want >>>>> to do something with the value you get from this lookup operation, but >>>>> that something won't involve function calls (or else the function call >>>>> overhead for the lookup wouldn't be relevant). >>>> In our specific case the value would be an offset added to the >>>> PyObject*, >>>> and there we would find a pointer to a C function (together with a >>>> 64-bit >>>> signature), and calling that C function (after checking the 64 bit >>>> signature) is our final objective. >>> >>> I think the use case hasn't been communicated all that clearly yet. >>> Let's >>> give it another try. >>> >>> Imagine we have two sides, one that provides a callable and the other >>> side >>> that wants to call it. Both sides are implemented in C, so the callee >>> has a >>> C signature and the caller has the arguments available as C data >>> types. The >>> signature may or may not match the argument types exactly (float vs. >>> double, int vs. long, ...), because the caller and the callee know >>> nothing >>> about each other initially, they just happen to appear in the same >>> program >>> at runtime. All they know is that they could call each other through >>> Python >>> space, but that would require data conversion, tuple packing, calling, >>> tuple unpacking, data unpacking, and then potentially the same thing >>> on the >>> way back. They want to avoid that overhead. >>> >>> Now, the caller needs to figure out if the callee has a compatible >>> signature. The callee may provide more than one signature (i.e. more >>> than >>> one C call entry point), perhaps because it is implemented to deal with >>> different input data types efficiently, or perhaps because it can >>> efficiently convert them to its expected input. So, there is a >>> signature on >>> the caller side given by the argument types it holds, and a couple of >>> signature on the callee side that can accept different C data input. >>> Then >>> the caller needs to find out which signatures there are and match them >>> against what it can efficiently call. It may even be a JIT compiler that >>> can generate an efficient call signature on the fly, given a suitable >>> signature on callee side. >> >>> >>> An example for this is an algorithm that evaluates a user provided >>> function >>> on a large NumPy array. The caller knows what array type it is operating >>> on, and the user provided function may be designed to efficiently >>> operate >>> on arrays of int, float and double entries. >> >> Given that use case, can I suggest the following: >> >> Separate the discovery of the function from its use. >> By this I mean first lookup the function (outside of the loop) >> then use the function (inside the loop). > > We would obviously do that when we can. But Cython is a compiler/code > translator, and we don't control usecases. You can easily make up > usecases (= Cython code people write) where you can't easily separate > the two. > > For instance, the Sage projects has hundreds of thousands of lines of > object-oriented Cython code (NOT just array-oriented, but also graphs > and trees and stuff), which is all based on Cython's own fast vtable > dispatches a la C++. They might want to clean up their code and more > generic callback objects some places. > > Other users currently pass around C pointers for callback functions, and > we'd like to tell them "pass around these nicer Python callables > instead, honestly, the penalty is only 2 ns per call". (*Regardless* of > how you use them, like making sure you use them in a loop where we can > statically pull out the function pointer acquisition. Saying "this is > only non-sluggish if you do x, y, z puts users off.) Why not pass around a PyCFunction object, instead of a C function pointer. It contains two fields: the function pointer and the object (self), which is exactly what you want. Of course, the PyCFunction object only allows a limited range of function types, which is why I am suggesting a variant which supports a wider range of C function pointer types. Is a single extra indirection in obj->func() rather than func(), really that inefficient? If you are passing around raw pointers, you have already given up on dynamic type checking. > > I'm not asking you to consider the details of all that. Just to allow > some kind of high-performance extensibility of PyTypeObject, so that we > can *stop* bothering python-dev with specific requirements from our > parallel universe of nearly-all-Cython-and-Fortran-and-C++ codebases :-) If I read it correctly, you have two problems you wish to solve: 1. A fast callable that can be passed around (see above) 2. Fast access to that callable from a type. The solution for 2. is the _PyType_Lookup() function. By the time you have fixed your proposed solution to properly handle subclassing I doubt it will be any quicker than _PyType_Lookup(). Cheers, Mark. From robertwb at gmail.com Wed May 16 18:17:08 2012 From: robertwb at gmail.com (Robert Bradshaw) Date: Wed, 16 May 2012 09:17:08 -0700 Subject: [Python-Dev] C-level duck typing In-Reply-To: <4FB3CA67.9080008@hotpy.org> References: <4FB35ACA.7090908@astro.uio.no> <4FB366F3.7010208@v.loewis.de> <4FB3784C.9020906@v.loewis.de> <4FB385F3.7070209@astro.uio.no> <4FB3A1F3.2050405@hotpy.org> <4FB3C0C4.3060506@astro.uio.no> <4FB3CA67.9080008@hotpy.org> Message-ID: On Wed, May 16, 2012 at 8:40 AM, Mark Shannon wrote: > Dag Sverre Seljebotn wrote: >> >> On 05/16/2012 02:47 PM, Mark Shannon wrote: >>> >>> Stefan Behnel wrote: >>>> >>>> Dag Sverre Seljebotn, 16.05.2012 12:48: >>>>> >>>>> On 05/16/2012 11:50 AM, "Martin v. L?wis" wrote: >>>>>>> >>>>>>> Agreed in general, but in this case, it's really not that easy. A C >>>>>>> function call involves a certain overhead all by itself, so calling >>>>>>> into >>>>>>> the C-API multiple times may be substantially more costly than, say, >>>>>>> calling through a function pointer once and then running over a >>>>>>> returned C >>>>>>> array comparing numbers. And definitely way more costly than >>>>>>> running over >>>>>>> an array that the type struct points to directly. We are not talking >>>>>>> about >>>>>>> hundreds of entries here, just a few. A linear scan in 64 bit steps >>>>>>> over >>>>>>> something like a hundred bytes in the L1 cache should hardly be >>>>>>> measurable. >>>>>> >>>>>> I give up, then. I fail to understand the problem. Apparently, you >>>>>> want >>>>>> to do something with the value you get from this lookup operation, but >>>>>> that something won't involve function calls (or else the function call >>>>>> overhead for the lookup wouldn't be relevant). >>>>> >>>>> In our specific case the value would be an offset added to the >>>>> PyObject*, >>>>> and there we would find a pointer to a C function (together with a >>>>> 64-bit >>>>> signature), and calling that C function (after checking the 64 bit >>>>> signature) is our final objective. >>>> >>>> >>>> I think the use case hasn't been communicated all that clearly yet. >>>> Let's >>>> give it another try. >>>> >>>> Imagine we have two sides, one that provides a callable and the other >>>> side >>>> that wants to call it. Both sides are implemented in C, so the callee >>>> has a >>>> C signature and the caller has the arguments available as C data >>>> types. The >>>> signature may or may not match the argument types exactly (float vs. >>>> double, int vs. long, ...), because the caller and the callee know >>>> nothing >>>> about each other initially, they just happen to appear in the same >>>> program >>>> at runtime. All they know is that they could call each other through >>>> Python >>>> space, but that would require data conversion, tuple packing, calling, >>>> tuple unpacking, data unpacking, and then potentially the same thing >>>> on the >>>> way back. They want to avoid that overhead. >>>> >>>> Now, the caller needs to figure out if the callee has a compatible >>>> signature. The callee may provide more than one signature (i.e. more >>>> than >>>> one C call entry point), perhaps because it is implemented to deal with >>>> different input data types efficiently, or perhaps because it can >>>> efficiently convert them to its expected input. So, there is a >>>> signature on >>>> the caller side given by the argument types it holds, and a couple of >>>> signature on the callee side that can accept different C data input. >>>> Then >>>> the caller needs to find out which signatures there are and match them >>>> against what it can efficiently call. It may even be a JIT compiler that >>>> can generate an efficient call signature on the fly, given a suitable >>>> signature on callee side. >>> >>> >>>> >>>> An example for this is an algorithm that evaluates a user provided >>>> function >>>> on a large NumPy array. The caller knows what array type it is operating >>>> on, and the user provided function may be designed to efficiently >>>> operate >>>> on arrays of int, float and double entries. >>> >>> >>> Given that use case, can I suggest the following: >>> >>> Separate the discovery of the function from its use. >>> By this I mean first lookup the function (outside of the loop) >>> then use the function (inside the loop). >> >> >> We would obviously do that when we can. But Cython is a compiler/code >> translator, and we don't control usecases. You can easily make up usecases >> (= Cython code people write) where you can't easily separate the two. >> >> For instance, the Sage projects has hundreds of thousands of lines of >> object-oriented Cython code (NOT just array-oriented, but also graphs and >> trees and stuff), which is all based on Cython's own fast vtable dispatches >> a la C++. They might want to clean up their code and more generic callback >> objects some places. >> >> Other users currently pass around C pointers for callback functions, and >> we'd like to tell them "pass around these nicer Python callables instead, >> honestly, the penalty is only 2 ns per call". (*Regardless* of how you use >> them, like making sure you use them in a loop where we can statically pull >> out the function pointer acquisition. Saying "this is only non-sluggish if >> you do x, y, z puts users off.) > > > Why not pass around a PyCFunction object, instead of a C function > pointer. It contains two fields: the function pointer and the object (self), > which is exactly what you want. > > Of course, the PyCFunction object only allows a limited range of > function types, which is why I am suggesting a variant which supports a > wider range of C function pointer types. > > Is a single extra indirection in obj->func() rather than func(), > really that inefficient? > If you are passing around raw pointers, you have already given up on > dynamic type checking. > > >> >> I'm not asking you to consider the details of all that. Just to allow some >> kind of high-performance extensibility of PyTypeObject, so that we can >> *stop* bothering python-dev with specific requirements from our parallel >> universe of nearly-all-Cython-and-Fortran-and-C++ codebases :-) > > > If I read it correctly, you have two problems you wish to solve: > 1. A fast callable that can be passed around (see above) > 2. Fast access to that callable from a type. > > The solution for 2. is the ?_PyType_Lookup() function. > By the time you have fixed your proposed solution to properly handle > subclassing I doubt it will be any quicker than _PyType_Lookup(). It is certainly (2) that we are most interested in solving here; (1) can be solved in a variety of ways. For this second point, we're looking for something that's faster than a dictionary lookup. (For example, common usecase is user-provided functions operating on C doubles which can be quite fast.) The PyTypeObject struct is in large part a list of methods that were deemed too common and time-critical to merit the dictionary lookup (and Python call) overhead. Unfortunately, it's not extensible. We figured it'd be useful to get any feedback from the large Python community on how best to add extensibility, in particular with an eye for being future-proof and possibly an official part of the standard for some future version of Python. - Robert From mark at hotpy.org Wed May 16 18:29:52 2012 From: mark at hotpy.org (Mark Shannon) Date: Wed, 16 May 2012 17:29:52 +0100 Subject: [Python-Dev] C-level duck typing In-Reply-To: References: <4FB35ACA.7090908@astro.uio.no> <4FB366F3.7010208@v.loewis.de> <4FB3784C.9020906@v.loewis.de> <4FB385F3.7070209@astro.uio.no> <4FB3A1F3.2050405@hotpy.org> <4FB3C0C4.3060506@astro.uio.no> <4FB3CA67.9080008@hotpy.org> Message-ID: <4FB3D600.9060708@hotpy.org> Robert Bradshaw wrote: > On Wed, May 16, 2012 at 8:40 AM, Mark Shannon wrote: >> Dag Sverre Seljebotn wrote: >>> On 05/16/2012 02:47 PM, Mark Shannon wrote: >>>> Stefan Behnel wrote: >>>>> Dag Sverre Seljebotn, 16.05.2012 12:48: >>>>>> On 05/16/2012 11:50 AM, "Martin v. L?wis" wrote: >>>>>>>> Agreed in general, but in this case, it's really not that easy. A C >>>>>>>> function call involves a certain overhead all by itself, so calling >>>>>>>> into >>>>>>>> the C-API multiple times may be substantially more costly than, say, >>>>>>>> calling through a function pointer once and then running over a >>>>>>>> returned C >>>>>>>> array comparing numbers. And definitely way more costly than >>>>>>>> running over >>>>>>>> an array that the type struct points to directly. We are not talking >>>>>>>> about >>>>>>>> hundreds of entries here, just a few. A linear scan in 64 bit steps >>>>>>>> over >>>>>>>> something like a hundred bytes in the L1 cache should hardly be >>>>>>>> measurable. >>>>>>> I give up, then. I fail to understand the problem. Apparently, you >>>>>>> want >>>>>>> to do something with the value you get from this lookup operation, but >>>>>>> that something won't involve function calls (or else the function call >>>>>>> overhead for the lookup wouldn't be relevant). >>>>>> In our specific case the value would be an offset added to the >>>>>> PyObject*, >>>>>> and there we would find a pointer to a C function (together with a >>>>>> 64-bit >>>>>> signature), and calling that C function (after checking the 64 bit >>>>>> signature) is our final objective. >>>>> >>>>> I think the use case hasn't been communicated all that clearly yet. >>>>> Let's >>>>> give it another try. >>>>> >>>>> Imagine we have two sides, one that provides a callable and the other >>>>> side >>>>> that wants to call it. Both sides are implemented in C, so the callee >>>>> has a >>>>> C signature and the caller has the arguments available as C data >>>>> types. The >>>>> signature may or may not match the argument types exactly (float vs. >>>>> double, int vs. long, ...), because the caller and the callee know >>>>> nothing >>>>> about each other initially, they just happen to appear in the same >>>>> program >>>>> at runtime. All they know is that they could call each other through >>>>> Python >>>>> space, but that would require data conversion, tuple packing, calling, >>>>> tuple unpacking, data unpacking, and then potentially the same thing >>>>> on the >>>>> way back. They want to avoid that overhead. >>>>> >>>>> Now, the caller needs to figure out if the callee has a compatible >>>>> signature. The callee may provide more than one signature (i.e. more >>>>> than >>>>> one C call entry point), perhaps because it is implemented to deal with >>>>> different input data types efficiently, or perhaps because it can >>>>> efficiently convert them to its expected input. So, there is a >>>>> signature on >>>>> the caller side given by the argument types it holds, and a couple of >>>>> signature on the callee side that can accept different C data input. >>>>> Then >>>>> the caller needs to find out which signatures there are and match them >>>>> against what it can efficiently call. It may even be a JIT compiler that >>>>> can generate an efficient call signature on the fly, given a suitable >>>>> signature on callee side. >>>> >>>>> An example for this is an algorithm that evaluates a user provided >>>>> function >>>>> on a large NumPy array. The caller knows what array type it is operating >>>>> on, and the user provided function may be designed to efficiently >>>>> operate >>>>> on arrays of int, float and double entries. >>>> >>>> Given that use case, can I suggest the following: >>>> >>>> Separate the discovery of the function from its use. >>>> By this I mean first lookup the function (outside of the loop) >>>> then use the function (inside the loop). >>> >>> We would obviously do that when we can. But Cython is a compiler/code >>> translator, and we don't control usecases. You can easily make up usecases >>> (= Cython code people write) where you can't easily separate the two. >>> >>> For instance, the Sage projects has hundreds of thousands of lines of >>> object-oriented Cython code (NOT just array-oriented, but also graphs and >>> trees and stuff), which is all based on Cython's own fast vtable dispatches >>> a la C++. They might want to clean up their code and more generic callback >>> objects some places. >>> >>> Other users currently pass around C pointers for callback functions, and >>> we'd like to tell them "pass around these nicer Python callables instead, >>> honestly, the penalty is only 2 ns per call". (*Regardless* of how you use >>> them, like making sure you use them in a loop where we can statically pull >>> out the function pointer acquisition. Saying "this is only non-sluggish if >>> you do x, y, z puts users off.) >> >> Why not pass around a PyCFunction object, instead of a C function >> pointer. It contains two fields: the function pointer and the object (self), >> which is exactly what you want. >> >> Of course, the PyCFunction object only allows a limited range of >> function types, which is why I am suggesting a variant which supports a >> wider range of C function pointer types. >> >> Is a single extra indirection in obj->func() rather than func(), >> really that inefficient? >> If you are passing around raw pointers, you have already given up on >> dynamic type checking. >> >> >>> I'm not asking you to consider the details of all that. Just to allow some >>> kind of high-performance extensibility of PyTypeObject, so that we can >>> *stop* bothering python-dev with specific requirements from our parallel >>> universe of nearly-all-Cython-and-Fortran-and-C++ codebases :-) >> >> If I read it correctly, you have two problems you wish to solve: >> 1. A fast callable that can be passed around (see above) >> 2. Fast access to that callable from a type. >> >> The solution for 2. is the _PyType_Lookup() function. >> By the time you have fixed your proposed solution to properly handle >> subclassing I doubt it will be any quicker than _PyType_Lookup(). > > It is certainly (2) that we are most interested in solving here; (1) > can be solved in a variety of ways. For this second point, we're > looking for something that's faster than a dictionary lookup. (For > example, common usecase is user-provided functions operating on C > doubles which can be quite fast.) _PyType_Lookup() is fast; it doesn't perform any dictionary lookups if the (type, attribute) pair is in the cache. > > The PyTypeObject struct is in large part a list of methods that were > deemed too common and time-critical to merit the dictionary lookup > (and Python call) overhead. Unfortunately, it's not extensible. We > figured it'd be useful to get any feedback from the large Python > community on how best to add extensibility, in particular with an eye > for being future-proof and possibly an official part of the standard > for some future version of Python. I don't see any problem with making _PyType_Lookup() public. But others might disagree. Cheers, Mark. From martin at v.loewis.de Wed May 16 20:29:26 2012 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Wed, 16 May 2012 20:29:26 +0200 Subject: [Python-Dev] C-level duck typing In-Reply-To: <4FB385F3.7070209@astro.uio.no> References: <4FB35ACA.7090908@astro.uio.no> <4FB366F3.7010208@v.loewis.de> <4FB3784C.9020906@v.loewis.de> <4FB385F3.7070209@astro.uio.no> Message-ID: <4FB3F206.70601@v.loewis.de> > In our specific case the value would be an offset added to the > PyObject*, and there we would find a pointer to a C function (together > with a 64-bit signature), and calling that C function (after checking > the 64 bit signature) is our final objective. And what the C function does really is faster than the lookup through a dictionary? I find that hard to believe. >> I still think this is out of scope for python-dev. If this is something >> you want to be able to do for Python 2.4 as well, then you don't need >> any change to Python - you can do whatever you come up with for all >> Python versions, no need to (or point in) changing Python 3.4 (say). > > We can go ahead and hijack tp_flags bit 22 to make things work in > existing versions. But what if Python 3.8 then starts using that bit for > something else? Use flag bit 23 in Python 3.8. You know at compile time what Python version you have. > >> As this is apparently only relevant to speed fanatics, too, I suggest >> that you check how fast PyPI works for you. > > Did you mean PyPy? Oops, yes - Freudian slip :-) Regards, Martin From martin at v.loewis.de Wed May 16 20:33:28 2012 From: martin at v.loewis.de (=?UTF-8?B?Ik1hcnRpbiB2LiBMw7Z3aXMi?=) Date: Wed, 16 May 2012 20:33:28 +0200 Subject: [Python-Dev] C-level duck typing In-Reply-To: References: <4FB35ACA.7090908@astro.uio.no> <4FB366F3.7010208@v.loewis.de> <4FB3784C.9020906@v.loewis.de> <4FB385F3.7070209@astro.uio.no> Message-ID: <4FB3F2F8.8060207@v.loewis.de> > Does this use case make sense to everyone? > > The reason why we are discussing this on python-dev is that we are looking > for a general way to expose these C level signatures within the Python > ecosystem. And Dag's idea was to expose them as part of the type object, > basically as an addition to the current Python level tp_call() slot. The use case makes sense, yet there is also a long-standing solution already to expose APIs and function pointers: the capsule objects. If you want to avoid dictionary lookups on the server side, implement tp_getattro, comparing addresses of interned strings. Regards, Martin From robertwb at gmail.com Wed May 16 22:24:42 2012 From: robertwb at gmail.com (Robert Bradshaw) Date: Wed, 16 May 2012 13:24:42 -0700 Subject: [Python-Dev] C-level duck typing In-Reply-To: <4FB3F2F8.8060207@v.loewis.de> References: <4FB35ACA.7090908@astro.uio.no> <4FB366F3.7010208@v.loewis.de> <4FB3784C.9020906@v.loewis.de> <4FB385F3.7070209@astro.uio.no> <4FB3F2F8.8060207@v.loewis.de> Message-ID: On Wed, May 16, 2012 at 11:33 AM, "Martin v. L?wis" wrote: >> Does this use case make sense to everyone? >> >> The reason why we are discussing this on python-dev is that we are looking >> for a general way to expose these C level signatures within the Python >> ecosystem. And Dag's idea was to expose them as part of the type object, >> basically as an addition to the current Python level tp_call() slot. > > The use case makes sense, yet there is also a long-standing solution already > to expose APIs and function pointers: the capsule objects. > > If you want to avoid dictionary lookups on the server side, implement > tp_getattro, comparing addresses of interned strings. Yes, that's an idea worth looking at. The point implementing tp_getattro to avoid dictionary lookups overhead is a good one, worth trying at least. One drawback is that this approach does require the GIL (as does _PyType_Lookup). Regarding the C function being faster than the dictionary lookup (or at least close enough that the lookup takes time), yes, this happens all the time. For example one might be solving differential equations and the "user input" is essentially a set of (usually simple) double f(double) and its derivatives. - Robert From d.s.seljebotn at astro.uio.no Wed May 16 22:59:51 2012 From: d.s.seljebotn at astro.uio.no (Dag Sverre Seljebotn) Date: Wed, 16 May 2012 22:59:51 +0200 Subject: [Python-Dev] C-level duck typing In-Reply-To: References: <4FB35ACA.7090908@astro.uio.no> <4FB366F3.7010208@v.loewis.de> <4FB3784C.9020906@v.loewis.de> <4FB385F3.7070209@astro.uio.no> <4FB3F2F8.8060207@v.loewis.de> Message-ID: <4FB41547.9080105@astro.uio.no> On 05/16/2012 10:24 PM, Robert Bradshaw wrote: > On Wed, May 16, 2012 at 11:33 AM, "Martin v. L?wis" wrote: >>> Does this use case make sense to everyone? >>> >>> The reason why we are discussing this on python-dev is that we are looking >>> for a general way to expose these C level signatures within the Python >>> ecosystem. And Dag's idea was to expose them as part of the type object, >>> basically as an addition to the current Python level tp_call() slot. >> >> The use case makes sense, yet there is also a long-standing solution already >> to expose APIs and function pointers: the capsule objects. >> >> If you want to avoid dictionary lookups on the server side, implement >> tp_getattro, comparing addresses of interned strings. > > Yes, that's an idea worth looking at. The point implementing > tp_getattro to avoid dictionary lookups overhead is a good one, worth > trying at least. One drawback is that this approach does require the > GIL (as does _PyType_Lookup). > > Regarding the C function being faster than the dictionary lookup (or > at least close enough that the lookup takes time), yes, this happens > all the time. For example one might be solving differential equations > and the "user input" is essentially a set of (usually simple) double > f(double) and its derivatives. To underline how this is performance critical to us, perhaps a full Cython example is useful. The following Cython code is a real world usecase. It is not too contrived in the essentials, although simplified a little bit. For instance undergrad engineering students could pick up Cython just to play with simple scalar functions like this. from numpy import sin # assume sin is a Python callable and that NumPy decides to support # our spec to also support getting a "double (*sinfuncptr)(double)". # Our mission: Avoid to have the user manually import "sin" from C, # but allow just using the NumPy object and still be fast. # define a function to integrate cpdef double f(double x): return sin(x * x) # guess on signature and use "fastcall"! # the integrator def integrate(func, double a, double b, int n): cdef double s = 0 cdef double dx = (b - a) / n for i in range(n): # This is also a fastcall, but can be cached so doesn't # matter... s += func(a + i * dx) return s * dx integrate(f, 0, 1, 1000000) There are two problems here: - The "sin" global can be reassigned (monkey-patched) between each call to "f", no way for "f" to know. Even "sin" could do the reassignment. So you'd need to check for reassignment to do caching... - The fastcall inside of "f" is separated from the loop in "integrate". And since "f" is often in another module, we can't rely on static full program analysis. These problems with monkey-patching disappear if the lookup is negligible. Some rough numbers: - The overhead with the tp_flags hack is a 2 ns overhead (something similar with a metaclass, the problems are more how to synchronize that metaclass across multiple 3rd party libraries) - Dict lookup 20 ns - The sin function is about 35 ns. And, "f" is probably only 2-3 ns, and there could very easily be multiple such functions, defined in different modules, in a chain, in order to build up a formula. Dag From greg.ewing at canterbury.ac.nz Thu May 17 02:03:49 2012 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Thu, 17 May 2012 12:03:49 +1200 Subject: [Python-Dev] C-level duck typing In-Reply-To: References: <4FB35ACA.7090908@astro.uio.no> <4FB366F3.7010208@v.loewis.de> <4FB3784C.9020906@v.loewis.de> <4FB385F3.7070209@astro.uio.no> Message-ID: <4FB44065.4010306@canterbury.ac.nz> Dag wrote: > I'm not asking you to consider the details of all that. > Just to allow some kind of high-performance extensibility of > PyTypeObject, so that we can *stop* bothering python-dev with > specific requirements from our parallel universe of > nearly-all-Cython-and-Fortran-and-C++ codebases :-) Maybe you should ask for a bit more than that. Tp_flags bits are a scarce resource, and they shouldn't be handed out lightly to anyone who asks for one. Eventually they're going to run out, and then something else will have to be done to make further extensions of the type object possible. So maybe it's worth thinking about making a general mechanism available for third parties to extend the type object without them all needing to have their own tp_flags bits and without needing to collude with each other to avoid conflicts. -- Greg From robertwb at gmail.com Thu May 17 02:17:11 2012 From: robertwb at gmail.com (Robert Bradshaw) Date: Wed, 16 May 2012 17:17:11 -0700 Subject: [Python-Dev] C-level duck typing In-Reply-To: <4FB44065.4010306@canterbury.ac.nz> References: <4FB35ACA.7090908@astro.uio.no> <4FB366F3.7010208@v.loewis.de> <4FB3784C.9020906@v.loewis.de> <4FB385F3.7070209@astro.uio.no> <4FB44065.4010306@canterbury.ac.nz> Message-ID: On Wed, May 16, 2012 at 5:03 PM, Greg Ewing wrote: > Dag wrote: > >> I'm not asking you to consider the details of all that. Just to allow some >> kind of high-performance extensibility of PyTypeObject, so that we can >> *stop* bothering python-dev with specific requirements from our parallel >> universe of nearly-all-Cython-and-Fortran-and-C++ codebases :-) > > > Maybe you should ask for a bit more than that. > > Tp_flags bits are a scarce resource, and they shouldn't > be handed out lightly to anyone who asks for one. Eventually > they're going to run out, and then something else will have to > be done to make further extensions of the type object possible. > > So maybe it's worth thinking about making a general mechanism > available for third parties to extend the type object without > them all needing to have their own tp_flags bits and without > needing to collude with each other to avoid conflicts. This is exactly what was proposed to start this thread (with minimal collusion to avoid conflicts, specifically partitioning up a global ID space). - Robert From brian at python.org Thu May 17 04:45:34 2012 From: brian at python.org (Brian Curtin) Date: Wed, 16 May 2012 21:45:34 -0500 Subject: [Python-Dev] 64-bit Windows buildbots needed In-Reply-To: <20120516144456.3c838db3@pitrou.net> References: <20120516144456.3c838db3@pitrou.net> Message-ID: On Wed, May 16, 2012 at 7:44 AM, Antoine Pitrou wrote: > > Hello all, > > We still need 64-bit Windows buildbots to test for regressions. > Otherwise we might let regressions slip through, since few people seem > to run the test suite under Windows at home. The machine that I used to run a Server 2008 x64 build slave is back to being a Windows machine after a dance with Ubuntu. I'm going to order more RAM and set it up to provide some 64-bit build slave, probably Windows 8. I will see about having it setup in the next few weeks. From greg.ewing at canterbury.ac.nz Thu May 17 05:00:01 2012 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Thu, 17 May 2012 15:00:01 +1200 Subject: [Python-Dev] C-level duck typing In-Reply-To: References: <4FB35ACA.7090908@astro.uio.no> <4FB366F3.7010208@v.loewis.de> <4FB3784C.9020906@v.loewis.de> <4FB385F3.7070209@astro.uio.no> <4FB44065.4010306@canterbury.ac.nz> Message-ID: <4FB469B1.3020804@canterbury.ac.nz> On 17/05/12 12:17, Robert Bradshaw wrote: > This is exactly what was proposed to start this thread (with minimal > collusion to avoid conflicts, specifically partitioning up a global ID > space). Yes, but I think this part of the mechanism needs to be spelled out in more detail, perhaps in the form of a draft PEP. Then there will be something concrete to discuss in python-dev. -- Greg From victor.stinner at gmail.com Thu May 17 10:10:30 2012 From: victor.stinner at gmail.com (Victor Stinner) Date: Thu, 17 May 2012 10:10:30 +0200 Subject: [Python-Dev] [Python-checkins] cpython: Issue #14624: UTF-16 decoding is now 3x to 4x faster on various inputs. In-Reply-To: References: Message-ID: > http://hg.python.org/cpython/rev/cdcc816dea85 > changeset: ? 76971:cdcc816dea85 > user: ? ? ? ?Antoine Pitrou > date: ? ? ? ?Tue May 15 23:48:04 2012 +0200 > summary: > ?Issue #14624: UTF-16 decoding is now 3x to 4x faster on various inputs. > Patch by Serhiy Storchaka. Such optimization should be mentioned in the What's New in Python 3.3 doc if Python 3.3 is now faster than Python 3.2. Same remark for the UTF-8 optimization. Victor From martin at v.loewis.de Thu May 17 10:27:32 2012 From: martin at v.loewis.de (martin at v.loewis.de) Date: Thu, 17 May 2012 10:27:32 +0200 Subject: [Python-Dev] C-level duck typing In-Reply-To: <4FB44065.4010306@canterbury.ac.nz> References: <4FB35ACA.7090908@astro.uio.no> <4FB366F3.7010208@v.loewis.de> <4FB3784C.9020906@v.loewis.de> <4FB385F3.7070209@astro.uio.no> <4FB44065.4010306@canterbury.ac.nz> Message-ID: <20120517102732.Horde.nrEiPcL8999PtLZ0jxkWeTA@webmail.df.eu> > So maybe it's worth thinking about making a general mechanism > available for third parties to extend the type object without > them all needing to have their own tp_flags bits and without > needing to collude with each other to avoid conflicts. That mechanism is already available. Subclass PyTypeType, and add whatever fields you want. Regards, Martin From mark at hotpy.org Thu May 17 12:38:39 2012 From: mark at hotpy.org (Mark Shannon) Date: Thu, 17 May 2012 11:38:39 +0100 Subject: [Python-Dev] C-level duck typing In-Reply-To: <4FB41547.9080105@astro.uio.no> References: <4FB35ACA.7090908@astro.uio.no> <4FB366F3.7010208@v.loewis.de> <4FB3784C.9020906@v.loewis.de> <4FB385F3.7070209@astro.uio.no> <4FB3F2F8.8060207@v.loewis.de> <4FB41547.9080105@astro.uio.no> Message-ID: <4FB4D52F.4010706@hotpy.org> Dag Sverre Seljebotn wrote: > On 05/16/2012 10:24 PM, Robert Bradshaw wrote: >> On Wed, May 16, 2012 at 11:33 AM, "Martin v. >> L?wis" wrote: >>>> Does this use case make sense to everyone? >>>> >>>> The reason why we are discussing this on python-dev is that we are >>>> looking >>>> for a general way to expose these C level signatures within the Python >>>> ecosystem. And Dag's idea was to expose them as part of the type >>>> object, >>>> basically as an addition to the current Python level tp_call() slot. >>> >>> The use case makes sense, yet there is also a long-standing solution >>> already >>> to expose APIs and function pointers: the capsule objects. >>> >>> If you want to avoid dictionary lookups on the server side, implement >>> tp_getattro, comparing addresses of interned strings. >> >> Yes, that's an idea worth looking at. The point implementing >> tp_getattro to avoid dictionary lookups overhead is a good one, worth >> trying at least. One drawback is that this approach does require the >> GIL (as does _PyType_Lookup). >> >> Regarding the C function being faster than the dictionary lookup (or >> at least close enough that the lookup takes time), yes, this happens >> all the time. For example one might be solving differential equations >> and the "user input" is essentially a set of (usually simple) double >> f(double) and its derivatives. > > To underline how this is performance critical to us, perhaps a full > Cython example is useful. > > The following Cython code is a real world usecase. It is not too > contrived in the essentials, although simplified a little bit. For > instance undergrad engineering students could pick up Cython just to > play with simple scalar functions like this. > > from numpy import sin > # assume sin is a Python callable and that NumPy decides to support > # our spec to also support getting a "double (*sinfuncptr)(double)". > > # Our mission: Avoid to have the user manually import "sin" from C, > # but allow just using the NumPy object and still be fast. > > # define a function to integrate > cpdef double f(double x): > return sin(x * x) # guess on signature and use "fastcall"! > > # the integrator > def integrate(func, double a, double b, int n): > cdef double s = 0 > cdef double dx = (b - a) / n > for i in range(n): > # This is also a fastcall, but can be cached so doesn't > # matter... > s += func(a + i * dx) > return s * dx > > integrate(f, 0, 1, 1000000) > > There are two problems here: > > - The "sin" global can be reassigned (monkey-patched) between each call > to "f", no way for "f" to know. Even "sin" could do the reassignment. So > you'd need to check for reassignment to do caching... Since Cython allows static typing why not just declare that func can treat sin as if it can't be monkeypatched? Moving the load of a global variable out of the loop does seem to be a rather obvious optimisation, if it were declared to be legal. > > - The fastcall inside of "f" is separated from the loop in "integrate". > And since "f" is often in another module, we can't rely on static full > program analysis. > > These problems with monkey-patching disappear if the lookup is negligible. > > Some rough numbers: > > - The overhead with the tp_flags hack is a 2 ns overhead (something > similar with a metaclass, the problems are more how to synchronize that > metaclass across multiple 3rd party libraries) Does your approach handle subtyping properly? > > - Dict lookup 20 ns Did you time _PyType_Lookup() ? > > - The sin function is about 35 ns. And, "f" is probably only 2-3 ns, > and there could very easily be multiple such functions, defined in > different modules, in a chain, in order to build up a formula. > Such micro timings are meaningless, because the working set often tends to fit in the hardware cache. A level 2 cache miss can takes 100s of cycles. Cheers, Mark. From stefan_ml at behnel.de Thu May 17 14:14:23 2012 From: stefan_ml at behnel.de (Stefan Behnel) Date: Thu, 17 May 2012 14:14:23 +0200 Subject: [Python-Dev] C-level duck typing In-Reply-To: <4FB4D52F.4010706@hotpy.org> References: <4FB35ACA.7090908@astro.uio.no> <4FB366F3.7010208@v.loewis.de> <4FB3784C.9020906@v.loewis.de> <4FB385F3.7070209@astro.uio.no> <4FB3F2F8.8060207@v.loewis.de> <4FB41547.9080105@astro.uio.no> <4FB4D52F.4010706@hotpy.org> Message-ID: Mark Shannon, 17.05.2012 12:38: > Dag Sverre Seljebotn wrote: >> On 05/16/2012 10:24 PM, Robert Bradshaw wrote: >>> On Wed, May 16, 2012 at 11:33 AM, "Martin v. L?wis" >>> wrote: >>>>> Does this use case make sense to everyone? >>>>> >>>>> The reason why we are discussing this on python-dev is that we are >>>>> looking >>>>> for a general way to expose these C level signatures within the Python >>>>> ecosystem. And Dag's idea was to expose them as part of the type object, >>>>> basically as an addition to the current Python level tp_call() slot. >>>> >>>> The use case makes sense, yet there is also a long-standing solution >>>> already >>>> to expose APIs and function pointers: the capsule objects. >>>> >>>> If you want to avoid dictionary lookups on the server side, implement >>>> tp_getattro, comparing addresses of interned strings. >>> >>> Yes, that's an idea worth looking at. The point implementing >>> tp_getattro to avoid dictionary lookups overhead is a good one, worth >>> trying at least. One drawback is that this approach does require the >>> GIL (as does _PyType_Lookup). >>> >>> Regarding the C function being faster than the dictionary lookup (or >>> at least close enough that the lookup takes time), yes, this happens >>> all the time. For example one might be solving differential equations >>> and the "user input" is essentially a set of (usually simple) double >>> f(double) and its derivatives. >> >> To underline how this is performance critical to us, perhaps a full >> Cython example is useful. >> >> The following Cython code is a real world usecase. It is not too >> contrived in the essentials, although simplified a little bit. For >> instance undergrad engineering students could pick up Cython just to play >> with simple scalar functions like this. >> >> from numpy import sin >> # assume sin is a Python callable and that NumPy decides to support >> # our spec to also support getting a "double (*sinfuncptr)(double)". >> >> # Our mission: Avoid to have the user manually import "sin" from C, >> # but allow just using the NumPy object and still be fast. >> >> # define a function to integrate >> cpdef double f(double x): >> return sin(x * x) # guess on signature and use "fastcall"! >> >> # the integrator >> def integrate(func, double a, double b, int n): >> cdef double s = 0 >> cdef double dx = (b - a) / n >> for i in range(n): >> # This is also a fastcall, but can be cached so doesn't >> # matter... >> s += func(a + i * dx) >> return s * dx >> >> integrate(f, 0, 1, 1000000) >> >> There are two problems here: >> >> - The "sin" global can be reassigned (monkey-patched) between each call >> to "f", no way for "f" to know. Even "sin" could do the reassignment. So >> you'd need to check for reassignment to do caching... > > Since Cython allows static typing why not just declare that func can treat > sin as if it can't be monkeypatched? You'd simply say cdef object sin # declare it as a C variable of type 'object' from numpy import sin That's also the one obvious way to do it in Cython. > Moving the load of a global variable out of the loop does seem to be a > rather obvious optimisation, if it were declared to be legal. My proposal was to simply extract any C function pointers at assignment time, i.e. at import time in the example above. Signature matching can then be done at the first call and the result can be cached as long as the object variable isn't changed. All of that is local to the module and can thus easily be controlled at code generation time. Stefan From ericsnowcurrently at gmail.com Thu May 17 15:19:10 2012 From: ericsnowcurrently at gmail.com (Eric Snow) Date: Thu, 17 May 2012 07:19:10 -0600 Subject: [Python-Dev] sys.implementation In-Reply-To: References: <20120426103150.4898a678@limelight.wooz.org> <4FAA3FA7.5070808@v.loewis.de> <20120509165039.23c8bf56@pitrou.net> <20120509095311.3a2c25c2@resist> <20120510105749.7401f1d2@pitrou.net> Message-ID: PEP 421 has reached a good place and I'd like to ask for pronouncement. Thanks! -eric From d.s.seljebotn at astro.uio.no Thu May 17 20:13:41 2012 From: d.s.seljebotn at astro.uio.no (Dag Sverre Seljebotn) Date: Thu, 17 May 2012 20:13:41 +0200 Subject: [Python-Dev] C-level duck typing In-Reply-To: <4FB4D52F.4010706@hotpy.org> References: <4FB35ACA.7090908@astro.uio.no> <4FB366F3.7010208@v.loewis.de> <4FB3784C.9020906@v.loewis.de> <4FB385F3.7070209@astro.uio.no> <4FB3F2F8.8060207@v.loewis.de> <4FB41547.9080105@astro.uio.no> <4FB4D52F.4010706@hotpy.org> Message-ID: <4FB53FD5.6000701@astro.uio.no> Mark Shannon wrote: >Dag Sverre Seljebotn wrote: >> from numpy import sin >> # assume sin is a Python callable and that NumPy decides to support >> # our spec to also support getting a "double (*sinfuncptr)(double)". >> >> # Our mission: Avoid to have the user manually import "sin" from C, >> # but allow just using the NumPy object and still be fast. >> >> # define a function to integrate >> cpdef double f(double x): >> return sin(x * x) # guess on signature and use "fastcall"! >> >> # the integrator >> def integrate(func, double a, double b, int n): >> cdef double s = 0 >> cdef double dx = (b - a) / n >> for i in range(n): >> # This is also a fastcall, but can be cached so doesn't >> # matter... >> s += func(a + i * dx) >> return s * dx >> >> integrate(f, 0, 1, 1000000) >> >> There are two problems here: >> >> - The "sin" global can be reassigned (monkey-patched) between each >call >> to "f", no way for "f" to know. Even "sin" could do the reassignment. >So >> you'd need to check for reassignment to do caching... > >Since Cython allows static typing why not just declare that func can >treat sin as if it can't be monkeypatched? If you want to manually declare stuff, you can always use a C function pointer too... >Moving the load of a global variable out of the loop does seem to be a >rather obvious optimisation, if it were declared to be legal. In case you didn't notice, there was no global variable loads inside the loop... You can keep chasing this, but there's *always* cases where they don't (and you need to save the situation by manual typing). Anyway: We should really discuss Cython on the Cython list. If my motivating example wasn't good enough for you there's really nothing I can do. >> Some rough numbers: >> >> - The overhead with the tp_flags hack is a 2 ns overhead (something >> similar with a metaclass, the problems are more how to synchronize >that >> metaclass across multiple 3rd party libraries) > >Does your approach handle subtyping properly? Not really. >> >> - Dict lookup 20 ns > >Did you time _PyType_Lookup() ? No, didn't get around to it yet (and thanks for pointing it out). (Though the GIL requirement is an issue too for Cython.) >> - The sin function is about 35 ns. And, "f" is probably only 2-3 ns, > >> and there could very easily be multiple such functions, defined in >> different modules, in a chain, in order to build up a formula. >> > >Such micro timings are meaningless, because the working set often tends > >to fit in the hardware cache. A level 2 cache miss can takes 100s of >cycles. I find this sort of response arrogant -- do you know the details of every usecase for a programming language under the sun? Many Cython users are scientists. And in scientific computing in particular you *really* have the whole range of problems and working sets. Honestly. In some codes you only really care about the speed of the disk controller. In other cases you can spend *many seconds* working almost only in L1 or perhaps L2 cache (for instance when integrating ordinary differential equations in a few variables, which is not entirely different in nature from the example I posted). (Then, those many seconds are replicated many million times for different parameters on a large cluster, and a 2x speedup translates directly into large amounts of saved money.) Also, with numerical codes you block up the problem so that loads to L2 are amortized over sufficient FLOPs (when you can). Every time Cython becomes able to do stuff more easily in this domain, people thank us that they didn't have to dig up Fortran but can stay closer to Python. Sorry for going off on a rant. I find that people will give well-meant advice about performance, but that advice is just generalizing from computer programs in entirely different domains (web apps?), and sweeping generalizations has a way of giving the wrong answer. Dag From d.s.seljebotn at astro.uio.no Thu May 17 20:34:24 2012 From: d.s.seljebotn at astro.uio.no (Dag Sverre Seljebotn) Date: Thu, 17 May 2012 20:34:24 +0200 Subject: [Python-Dev] C-level duck typing In-Reply-To: <4FB53FD5.6000701@astro.uio.no> References: <4FB35ACA.7090908@astro.uio.no> <4FB366F3.7010208@v.loewis.de> <4FB3784C.9020906@v.loewis.de> <4FB385F3.7070209@astro.uio.no> <4FB3F2F8.8060207@v.loewis.de> <4FB41547.9080105@astro.uio.no> <4FB4D52F.4010706@hotpy.org> <4FB53FD5.6000701@astro.uio.no> Message-ID: <4FB544B0.2060903@astro.uio.no> On 05/17/2012 08:13 PM, Dag Sverre Seljebotn wrote: > Mark Shannon wrote: >> Dag Sverre Seljebotn wrote: >>> from numpy import sin >>> # assume sin is a Python callable and that NumPy decides to support >>> # our spec to also support getting a "double (*sinfuncptr)(double)". >>> >>> # Our mission: Avoid to have the user manually import "sin" from C, >>> # but allow just using the NumPy object and still be fast. >>> >>> # define a function to integrate >>> cpdef double f(double x): >>> return sin(x * x) # guess on signature and use "fastcall"! >>> >>> # the integrator >>> def integrate(func, double a, double b, int n): >>> cdef double s = 0 >>> cdef double dx = (b - a) / n >>> for i in range(n): >>> # This is also a fastcall, but can be cached so doesn't >>> # matter... >>> s += func(a + i * dx) >>> return s * dx >>> >>> integrate(f, 0, 1, 1000000) >>> >>> There are two problems here: >>> >>> - The "sin" global can be reassigned (monkey-patched) between each >> call >>> to "f", no way for "f" to know. Even "sin" could do the reassignment. >> So >>> you'd need to check for reassignment to do caching... >> >> Since Cython allows static typing why not just declare that func can >> treat sin as if it can't be monkeypatched? > > If you want to manually declare stuff, you can always use a C function > pointer too... > >> Moving the load of a global variable out of the loop does seem to be a >> rather obvious optimisation, if it were declared to be legal. > > In case you didn't notice, there was no global variable loads inside the > loop... > > You can keep chasing this, but there's *always* cases where they don't > (and you need to save the situation by manual typing). > > Anyway: We should really discuss Cython on the Cython list. If my > motivating example wasn't good enough for you there's really nothing I > can do. > >>> Some rough numbers: >>> >>> - The overhead with the tp_flags hack is a 2 ns overhead (something >>> similar with a metaclass, the problems are more how to synchronize >> that >>> metaclass across multiple 3rd party libraries) >> >> Does your approach handle subtyping properly? > > Not really. > >>> >>> - Dict lookup 20 ns >> >> Did you time _PyType_Lookup() ? > > No, didn't get around to it yet (and thanks for pointing it out). > (Though the GIL requirement is an issue too for Cython.) > >>> - The sin function is about 35 ns. And, "f" is probably only 2-3 ns, >> >>> and there could very easily be multiple such functions, defined in >>> different modules, in a chain, in order to build up a formula. >>> >> >> Such micro timings are meaningless, because the working set often tends >> >> to fit in the hardware cache. A level 2 cache miss can takes 100s of >> cycles. I'm sorry; if my rant wasn't clear: Such micro-benchmarks do in fact mimic very closely what you'd do if you'd, say, integrate an ordinary differential equation. You *do* have a tight loop like that, just hammering on floating point numbers. Making that specific usecase more convenient was actually the original usecase that spawned this discussion on the NumPy list over a month ago... Dag > > I find this sort of response arrogant -- do you know the details of > every usecase for a programming language under the sun? > > Many Cython users are scientists. And in scientific computing in > particular you *really* have the whole range of problems and working > sets. Honestly. In some codes you only really care about the speed of > the disk controller. In other cases you can spend *many seconds* working > almost only in L1 or perhaps L2 cache (for instance when integrating > ordinary differential equations in a few variables, which is not > entirely different in nature from the example I posted). (Then, those > many seconds are replicated many million times for different parameters > on a large cluster, and a 2x speedup translates directly into large > amounts of saved money.) > > Also, with numerical codes you block up the problem so that loads to L2 > are amortized over sufficient FLOPs (when you can). > > Every time Cython becomes able to do stuff more easily in this domain, > people thank us that they didn't have to dig up Fortran but can stay > closer to Python. > > Sorry for going off on a rant. I find that people will give well-meant > advice about performance, but that advice is just generalizing from > computer programs in entirely different domains (web apps?), and > sweeping generalizations has a way of giving the wrong answer. > > Dag > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > http://mail.python.org/mailman/options/python-dev/d.s.seljebotn%40astro.uio.no > From rdmurray at bitdance.com Thu May 17 20:48:10 2012 From: rdmurray at bitdance.com (R. David Murray) Date: Thu, 17 May 2012 14:48:10 -0400 Subject: [Python-Dev] C-level duck typing In-Reply-To: <4FB53FD5.6000701@astro.uio.no> References: <4FB35ACA.7090908@astro.uio.no> <4FB366F3.7010208@v.loewis.de> <4FB3784C.9020906@v.loewis.de> <4FB385F3.7070209@astro.uio.no> <4FB3F2F8.8060207@v.loewis.de> <4FB41547.9080105@astro.uio.no> <4FB4D52F.4010706@hotpy.org> <4FB53FD5.6000701@astro.uio.no> Message-ID: <20120517184811.59630250632@webabinitio.net> On Thu, 17 May 2012 20:13:41 +0200, Dag Sverre Seljebotn wrote: > Every time Cython becomes able to do stuff more easily in this domain, > people thank us that they didn't have to dig up Fortran but can stay > closer to Python. > > Sorry for going off on a rant. I find that people will give well-meant > advice about performance, but that advice is just generalizing from > computer programs in entirely different domains (web apps?), and > sweeping generalizations has a way of giving the wrong answer. I don't have opinions on the specific topic under discussion, since I don't get involved in the C level stuff unless I have to, but I do have some small amount of background in scientific computing (many years ago). I just want to chime in to say that I think it benefits the whole Python community to to extend welcoming arms to the scientific Python community and see what we can do to help them (without, of course, compromising Python). I think it is safe to assume that they do have significant experience with real applications where timings at this level of detail do matter. The scientific computing community is pretty much by definition pushing the limits of what's possible. --David From d.s.seljebotn at astro.uio.no Thu May 17 22:23:00 2012 From: d.s.seljebotn at astro.uio.no (Dag Sverre Seljebotn) Date: Thu, 17 May 2012 22:23:00 +0200 Subject: [Python-Dev] C-level duck typing In-Reply-To: <4FB469B1.3020804@canterbury.ac.nz> References: <4FB35ACA.7090908@astro.uio.no> <4FB366F3.7010208@v.loewis.de> <4FB3784C.9020906@v.loewis.de> <4FB385F3.7070209@astro.uio.no> <4FB44065.4010306@canterbury.ac.nz> <4FB469B1.3020804@canterbury.ac.nz> Message-ID: <4FB55E24.3090006@astro.uio.no> On 05/17/2012 05:00 AM, Greg Ewing wrote: > On 17/05/12 12:17, Robert Bradshaw wrote: > >> This is exactly what was proposed to start this thread (with minimal >> collusion to avoid conflicts, specifically partitioning up a global ID >> space). > > Yes, but I think this part of the mechanism needs to be spelled out in > more detail, perhaps in the form of a draft PEP. Then there will be > something concrete to discuss in python-dev. > Well, we weren't 100% sure what is the best mechanism, so the point really was to solicit input, even if I got a bit argumentative along the way. Thanks to all of you! If we in the end decide that we would like a propose the PEP, does anyone feel the odds are anything but very, very slim? I don't think I've heard a single positive word about the proposal so far except from Cython devs, so I'm reluctant to spend my own and your time on a fleshing out a full PEP for that reason. In a PEP, the proposal would likely be an additional pointer to a table of "custom PyTypeObject extensions"; not a flag bit. The whole point would be to only do that once, and after that PyTypeObject would be infinitely extensible for custom purposes without collisions (even as a way of pre-testing PEPs about PyTypeObject in the wild before final approval!). Of course, a pointer more per type object is a bigger burden to push on others. The thing is, you *can* just use a subtype of PyType_Type for this purpose (or any purpose), it's just my opinion that it's not be best solution here; it means many different libraries need a common dependency for this reason alone (or dynamically handshake on a base class at runtime). You could just stick that base class in CPython, which would be OK I guess but not great (using the type hierarchy is quite intrusive in general; you didn't subclass PyType_Type to stick in tp_as_buffer either). Dag From ncoghlan at gmail.com Fri May 18 00:57:20 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 18 May 2012 08:57:20 +1000 Subject: [Python-Dev] C-level duck typing In-Reply-To: <4FB55E24.3090006@astro.uio.no> References: <4FB35ACA.7090908@astro.uio.no> <4FB366F3.7010208@v.loewis.de> <4FB3784C.9020906@v.loewis.de> <4FB385F3.7070209@astro.uio.no> <4FB44065.4010306@canterbury.ac.nz> <4FB469B1.3020804@canterbury.ac.nz> <4FB55E24.3090006@astro.uio.no> Message-ID: I think the main things we'd be looking for would be: - a clear explanation of why a new metaclass is considered too complex a solution - what the implications are for classes that have nothing to do with the SciPy/NumPy ecosystem - how subclassing would behave (both at the class and metaclass level) Yes, defining a new metaclass for fast signature exchange has its challenges - but it means that *our* concerns about maintaining consistent behaviour in the default object model and avoiding adverse effects on code that doesn't need the new behaviour are addressed automatically. Also, I'd consider a functioning reference implementation using a custom metaclass a requirement before we considered modifying type anyway, so I think that's the best thing to pursue next rather than a PEP. It also has the virtue of letting you choose which Python versions to target and iterating at a faster rate than CPython. Cheers, Nick. -- Sent from my phone, thus the relative brevity :) -------------- next part -------------- An HTML attachment was scrubbed... URL: From martin at v.loewis.de Fri May 18 02:49:07 2012 From: martin at v.loewis.de (martin at v.loewis.de) Date: Fri, 18 May 2012 02:49:07 +0200 Subject: [Python-Dev] C-level duck typing In-Reply-To: <4FB55E24.3090006@astro.uio.no> References: <4FB35ACA.7090908@astro.uio.no> <4FB366F3.7010208@v.loewis.de> <4FB3784C.9020906@v.loewis.de> <4FB385F3.7070209@astro.uio.no> <4FB44065.4010306@canterbury.ac.nz> <4FB469B1.3020804@canterbury.ac.nz> <4FB55E24.3090006@astro.uio.no> Message-ID: <20120518024907.Horde.c_unIruWis5PtZyDlGjW2_A@webmail.df.eu> > If we in the end decide that we would like a propose the PEP, does > anyone feel the odds are anything but very, very slim? I don't think > I've heard a single positive word about the proposal so far except > from Cython devs, so I'm reluctant to spend my own and your time on > a fleshing out a full PEP for that reason. Before you do that, it might be useful to publish a precise, reproducible, complete benchmark first, to support the performance figures you have been quoting. I'm skeptical by nature, so I don't believe any of the numbers you have given until I can reproduce them myself. More precisely, I fail to understand what they mean without seeing the source code that produced them (perhaps along with an indication what hardware, operating system, compiler version, and Python version were used to produce them). Regards, Martin From d.s.seljebotn at astro.uio.no Fri May 18 10:30:14 2012 From: d.s.seljebotn at astro.uio.no (Dag Sverre Seljebotn) Date: Fri, 18 May 2012 10:30:14 +0200 Subject: [Python-Dev] C-level duck typing In-Reply-To: References: <4FB35ACA.7090908@astro.uio.no> <4FB366F3.7010208@v.loewis.de> <4FB3784C.9020906@v.loewis.de> <4FB385F3.7070209@astro.uio.no> <4FB44065.4010306@canterbury.ac.nz> <4FB469B1.3020804@canterbury.ac.nz> <4FB55E24.3090006@astro.uio.no> Message-ID: <4FB60896.4030702@astro.uio.no> On 05/18/2012 12:57 AM, Nick Coghlan wrote: > I think the main things we'd be looking for would be: > - a clear explanation of why a new metaclass is considered too complex a > solution > - what the implications are for classes that have nothing to do with the > SciPy/NumPy ecosystem > - how subclassing would behave (both at the class and metaclass level) > > Yes, defining a new metaclass for fast signature exchange has its > challenges - but it means that *our* concerns about maintaining > consistent behaviour in the default object model and avoiding adverse > effects on code that doesn't need the new behaviour are addressed > automatically. > > Also, I'd consider a functioning reference implementation using a custom > metaclass a requirement before we considered modifying type anyway, so I > think that's the best thing to pursue next rather than a PEP. It also > has the virtue of letting you choose which Python versions to target and > iterating at a faster rate than CPython. This seems right on target. I could make a utility code C header for such a metaclass, and then the different libraries can all include it and handshake on which implementation becomes the real one through sys.modules during module initialization. That way an eventual PEP will only be a natural incremental step to make things more polished, whether that happens by making such a metaclass part of the standard library or by extending PyTypeObject. Thanks, Dag From ncoghlan at gmail.com Fri May 18 15:16:09 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 18 May 2012 23:16:09 +1000 Subject: [Python-Dev] [Python-checkins] cpython: Remove outdated statements about threading and imports. In-Reply-To: References: Message-ID: I know you fixed the deadlock problem, but the warnings about shutdown misbehaviour are still valid. -- Sent from my phone, thus the relative brevity :) On May 18, 2012 9:59 PM, "antoine.pitrou" wrote: > http://hg.python.org/cpython/rev/565734c9b66d > changeset: 77020:565734c9b66d > parent: 77018:364289cc7891 > user: Antoine Pitrou > date: Fri May 18 13:57:04 2012 +0200 > summary: > Remove outdated statements about threading and imports. > > files: > Doc/library/multiprocessing.rst | 4 +-- > Doc/library/threading.rst | 23 --------------------- > 2 files changed, 1 insertions(+), 26 deletions(-) > > > diff --git a/Doc/library/multiprocessing.rst > b/Doc/library/multiprocessing.rst > --- a/Doc/library/multiprocessing.rst > +++ b/Doc/library/multiprocessing.rst > @@ -120,9 +120,7 @@ > print(q.get()) # prints "[42, None, 'hello']" > p.join() > > - Queues are thread and process safe, but note that they must never > - be instantiated as a side effect of importing a module: this can lead > - to a deadlock! (see :ref:`threaded-imports`) > + Queues are thread and process safe. > > **Pipes** > > diff --git a/Doc/library/threading.rst b/Doc/library/threading.rst > --- a/Doc/library/threading.rst > +++ b/Doc/library/threading.rst > @@ -996,27 +996,3 @@ > Currently, :class:`Lock`, :class:`RLock`, :class:`Condition`, > :class:`Semaphore`, and :class:`BoundedSemaphore` objects may be used as > :keyword:`with` statement context managers. > - > - > -.. _threaded-imports: > - > -Importing in threaded code > --------------------------- > - > -While the import machinery is thread-safe, there are two key restrictions > on > -threaded imports due to inherent limitations in the way that > thread-safety is > -provided: > - > -* Firstly, other than in the main module, an import should not have the > - side effect of spawning a new thread and then waiting for that thread in > - any way. Failing to abide by this restriction can lead to a deadlock if > - the spawned thread directly or indirectly attempts to import a module. > -* Secondly, all import attempts must be completed before the interpreter > - starts shutting itself down. This can be most easily achieved by only > - performing imports from non-daemon threads created through the threading > - module. Daemon threads and threads created directly with the thread > - module will require some other form of synchronization to ensure they do > - not attempt imports after system shutdown has commenced. Failure to > - abide by this restriction will lead to intermittent exceptions and > - crashes during interpreter shutdown (as the late imports attempt to > - access machinery which is no longer in a valid state). > > -- > Repository URL: http://hg.python.org/cpython > > _______________________________________________ > Python-checkins mailing list > Python-checkins at python.org > http://mail.python.org/mailman/listinfo/python-checkins > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From solipsis at pitrou.net Fri May 18 15:29:21 2012 From: solipsis at pitrou.net (Antoine Pitrou) Date: Fri, 18 May 2012 15:29:21 +0200 Subject: [Python-Dev] cpython: Remove outdated statements about threading and imports. References: Message-ID: <20120518152921.4c24e5d1@pitrou.net> On Fri, 18 May 2012 23:16:09 +1000 Nick Coghlan wrote: > I know you fixed the deadlock problem, but the warnings about shutdown > misbehaviour are still valid. Do we have a reproducer? It should have been fixed by http://bugs.python.org/issue1856. Regards Antoine. From status at bugs.python.org Fri May 18 18:07:12 2012 From: status at bugs.python.org (Python tracker) Date: Fri, 18 May 2012 18:07:12 +0200 (CEST) Subject: [Python-Dev] Summary of Python tracker Issues Message-ID: <20120518160712.260111C853@psf.upfronthosting.co.za> ACTIVITY SUMMARY (2012-05-11 - 2012-05-18) Python tracker at http://bugs.python.org/ To view or respond to any of the issues listed below, click on the issue. Do NOT respond to this message. Issues counts and deltas: open 3432 (+14) closed 23196 (+53) total 26628 (+67) Open issues with patches: 1457 Issues opened (52) ================== #14784: Re-importing _warnings changes warnings.filters http://bugs.python.org/issue14784 opened by brett.cannon #14785: Add sys._debugmallocstats() http://bugs.python.org/issue14785 opened by dmalcolm #14787: pkgutil.walk_packages returns extra modules http://bugs.python.org/issue14787 opened by cjerdonek #14788: Pdb debugs itself after ^C and a breakpoint is set anywhere http://bugs.python.org/issue14788 opened by xdegaye #14789: after continue, Pdb stops at a line without a breakpoint http://bugs.python.org/issue14789 opened by xdegaye #14790: use packaging in setup.py http://bugs.python.org/issue14790 opened by pitrou #14791: setup.py only adds /prefix/lib, not /prefix/lib64 http://bugs.python.org/issue14791 opened by pitrou #14792: setting a bp on current function, Pdb stops at next line altho http://bugs.python.org/issue14792 opened by xdegaye #14794: slice.indices raises OverflowError http://bugs.python.org/issue14794 opened by Paul.Upchurch #14795: Pdb incorrectly handles a method breakpoint when module not im http://bugs.python.org/issue14795 opened by xdegaye #14796: Calendar module test coverage improved http://bugs.python.org/issue14796 opened by Oleg.Plakhotnyuk #14797: Deprecate imp.find_module()/load_module() http://bugs.python.org/issue14797 opened by brett.cannon #14798: pyclbr raises KeyError when the prefix of a dotted name is not http://bugs.python.org/issue14798 opened by xdegaye #14799: Tkinter ttk tests hang on linux http://bugs.python.org/issue14799 opened by asvetlov #14802: Python fails to compile with VC11 ARM configuration http://bugs.python.org/issue14802 opened by Minmin.Gong #14803: Add feature to allow code execution prior to __main__ invocati http://bugs.python.org/issue14803 opened by ncoghlan #14804: Wrong defaults args notation in docs http://bugs.python.org/issue14804 opened by hynek #14805: Support display of both __cause__ and __context__ http://bugs.python.org/issue14805 opened by ncoghlan #14807: Move tarfile.filemode() into stat module http://bugs.python.org/issue14807 opened by giampaolo.rodola #14808: Pdb does not stop at a breakpoint set on the line of a functio http://bugs.python.org/issue14808 opened by xdegaye #14810: Bug in tarfile http://bugs.python.org/issue14810 opened by hwm #14811: decoding_fgets() truncates long lines and fails with a SyntaxE http://bugs.python.org/issue14811 opened by v+python #14812: Change file associations to not be a default installer feature http://bugs.python.org/issue14812 opened by brian.curtin #14813: Can't build under VS2008 anymore http://bugs.python.org/issue14813 opened by pitrou #14814: Implement PEP 3144 (the ipaddress module) http://bugs.python.org/issue14814 opened by ncoghlan #14815: random_seed uses only 32-bits of hash on Win64 http://bugs.python.org/issue14815 opened by loewis #14817: pkgutil.extend_path has no tests http://bugs.python.org/issue14817 opened by eric.smith #14818: C implementation of ElementTree: Some functions should support http://bugs.python.org/issue14818 opened by cmn #14821: Ctypes extension module builds as _ctypes_test.pyd http://bugs.python.org/issue14821 opened by jason.coombs #14822: Build unusable when compiled for Win 64-bit release http://bugs.python.org/issue14822 opened by jason.coombs #14824: reprlib documentation references string module http://bugs.python.org/issue14824 opened by magcius #14826: urllib2.urlopen fails to load URL http://bugs.python.org/issue14826 opened by wichert #14830: pysetup fails on non-ascii filenames http://bugs.python.org/issue14830 opened by tarek #14831: make r argument on itertools.combinations() optional http://bugs.python.org/issue14831 opened by djc #14833: Copyright date in footer of /pypi says 2011 http://bugs.python.org/issue14833 opened by antlong #14834: A list of broken links on the python.org website http://bugs.python.org/issue14834 opened by antlong #14835: plistlib: output empty elements correctly http://bugs.python.org/issue14835 opened by ssm #14836: Add next(iter(o)) to set.pop, dict.popitem entries. http://bugs.python.org/issue14836 opened by terry.reedy #14837: Better SSL errors http://bugs.python.org/issue14837 opened by pitrou #14838: IDLE Will not load on reinstall http://bugs.python.org/issue14838 opened by BugReporter #14840: Tutorial: Add a bit on the difference between tuples and lists http://bugs.python.org/issue14840 opened by zach.ware #14841: os.get_terminal_size() should check stdin as a fallback http://bugs.python.org/issue14841 opened by Arfrever #14842: Link to time.time() in the docs of time.localtime() is wrong http://bugs.python.org/issue14842 opened by petri.lehtinen #14843: support define_macros / undef_macros in setup.cfg http://bugs.python.org/issue14843 opened by dholth #14844: netrc does not handle accentuated characters http://bugs.python.org/issue14844 opened by drzraf #14845: list() != [] http://bugs.python.org/issue14845 opened by Peter.Norvig #14846: Change in error when sys.path contains a nonexistent folder (i http://bugs.python.org/issue14846 opened by takluyver #14847: AttributeError: NoneType has no attribute 'utf_8_decode' http://bugs.python.org/issue14847 opened by jason.coombs #14848: os.rename should not be used http://bugs.python.org/issue14848 opened by nvetoshkin #14849: C implementation of ElementTree: Inheriting from Element break http://bugs.python.org/issue14849 opened by cmn #14850: The inconsistency of codecs.charmap_decode http://bugs.python.org/issue14850 opened by storchaka #1635217: Warn against using requires/provides/obsoletes in setup.py http://bugs.python.org/issue1635217 reopened by techtonik Most recent 15 issues with no replies (15) ========================================== #14850: The inconsistency of codecs.charmap_decode http://bugs.python.org/issue14850 #14849: C implementation of ElementTree: Inheriting from Element break http://bugs.python.org/issue14849 #14844: netrc does not handle accentuated characters http://bugs.python.org/issue14844 #14843: support define_macros / undef_macros in setup.cfg http://bugs.python.org/issue14843 #14842: Link to time.time() in the docs of time.localtime() is wrong http://bugs.python.org/issue14842 #14841: os.get_terminal_size() should check stdin as a fallback http://bugs.python.org/issue14841 #14837: Better SSL errors http://bugs.python.org/issue14837 #14835: plistlib: output empty elements correctly http://bugs.python.org/issue14835 #14833: Copyright date in footer of /pypi says 2011 http://bugs.python.org/issue14833 #14830: pysetup fails on non-ascii filenames http://bugs.python.org/issue14830 #14814: Implement PEP 3144 (the ipaddress module) http://bugs.python.org/issue14814 #14812: Change file associations to not be a default installer feature http://bugs.python.org/issue14812 #14808: Pdb does not stop at a breakpoint set on the line of a functio http://bugs.python.org/issue14808 #14805: Support display of both __cause__ and __context__ http://bugs.python.org/issue14805 #14795: Pdb incorrectly handles a method breakpoint when module not im http://bugs.python.org/issue14795 Most recent 15 issues waiting for review (15) ============================================= #14840: Tutorial: Add a bit on the difference between tuples and lists http://bugs.python.org/issue14840 #14837: Better SSL errors http://bugs.python.org/issue14837 #14836: Add next(iter(o)) to set.pop, dict.popitem entries. http://bugs.python.org/issue14836 #14835: plistlib: output empty elements correctly http://bugs.python.org/issue14835 #14824: reprlib documentation references string module http://bugs.python.org/issue14824 #14818: C implementation of ElementTree: Some functions should support http://bugs.python.org/issue14818 #14813: Can't build under VS2008 anymore http://bugs.python.org/issue14813 #14811: decoding_fgets() truncates long lines and fails with a SyntaxE http://bugs.python.org/issue14811 #14808: Pdb does not stop at a breakpoint set on the line of a functio http://bugs.python.org/issue14808 #14807: Move tarfile.filemode() into stat module http://bugs.python.org/issue14807 #14804: Wrong defaults args notation in docs http://bugs.python.org/issue14804 #14798: pyclbr raises KeyError when the prefix of a dotted name is not http://bugs.python.org/issue14798 #14796: Calendar module test coverage improved http://bugs.python.org/issue14796 #14795: Pdb incorrectly handles a method breakpoint when module not im http://bugs.python.org/issue14795 #14792: setting a bp on current function, Pdb stops at next line altho http://bugs.python.org/issue14792 Top 10 most discussed issues (10) ================================= #14813: Can't build under VS2008 anymore http://bugs.python.org/issue14813 26 msgs #13210: Support Visual Studio 2010 http://bugs.python.org/issue13210 18 msgs #14315: zipfile.ZipFile() unable to open zip File http://bugs.python.org/issue14315 13 msgs #8271: str.decode('utf8', 'replace') -- conformance with Unicode 5.2. http://bugs.python.org/issue8271 12 msgs #14780: urllib.request could use the default CA store http://bugs.python.org/issue14780 12 msgs #11959: smtpd cannot be used without affecting global state http://bugs.python.org/issue11959 11 msgs #14807: Move tarfile.filemode() into stat module http://bugs.python.org/issue14807 11 msgs #14811: decoding_fgets() truncates long lines and fails with a SyntaxE http://bugs.python.org/issue14811 11 msgs #12029: Catching virtual subclasses in except clauses http://bugs.python.org/issue12029 10 msgs #14674: Add link to RFC 4627 from json documentation http://bugs.python.org/issue14674 10 msgs Issues closed (49) ================== #5730: setdefault speedup http://bugs.python.org/issue5730 closed by pitrou #6302: Add decode_header_as_string method to email.utils http://bugs.python.org/issue6302 closed by r.david.murray #6544: Fix refleak in kqueue implementation http://bugs.python.org/issue6544 closed by pitrou #8098: PyImport_ImportModuleNoBlock() may solve problems but causes o http://bugs.python.org/issue8098 closed by pitrou #8330: Failures seen in test_gdb on buildbots http://bugs.python.org/issue8330 closed by dmalcolm #9120: Reduce pickle size for an empty set http://bugs.python.org/issue9120 closed by loewis #9251: Test for the import lock http://bugs.python.org/issue9251 closed by pitrou #9260: A finer grained import lock http://bugs.python.org/issue9260 closed by pitrou #11051: Improve Python 3.3 startup time http://bugs.python.org/issue11051 closed by pitrou #12541: Accepting Badly formed headers in urllib HTTPBasicAuth http://bugs.python.org/issue12541 closed by orsenthil #13031: small speed-up for tarfile.py when unzipping tarballs http://bugs.python.org/issue13031 closed by rosslagerwall #14082: shutil doesn't copy extended attributes http://bugs.python.org/issue14082 closed by pitrou #14245: float rounding examples in FAQ are outdated http://bugs.python.org/issue14245 closed by mark.dickinson #14366: Supporting lzma compression in zip files http://bugs.python.org/issue14366 closed by loewis #14405: Some "Other Resources" in the sidebar are hopelessly out of da http://bugs.python.org/issue14405 closed by ezio.melotti #14417: dict RuntimeError workaround http://bugs.python.org/issue14417 closed by pitrou #14419: Faster ascii decoding http://bugs.python.org/issue14419 closed by pitrou #14543: Upgrade OpenSSL on Windows to 0.9.8u http://bugs.python.org/issue14543 closed by loewis #14584: Add gzip support to xmlrpc.server http://bugs.python.org/issue14584 closed by rosslagerwall #14624: Faster utf-16 decoder http://bugs.python.org/issue14624 closed by pitrou #14682: Backport missing errnos to 2.7 http://bugs.python.org/issue14682 closed by hynek #14692: json.loads parse_constant callback not working anymore http://bugs.python.org/issue14692 closed by hynek #14702: os.makedirs breaks under autofs directories http://bugs.python.org/issue14702 closed by hynek #14732: PEP 3121 Refactoring applied to _csv module http://bugs.python.org/issue14732 closed by pitrou #14766: Non-naive time comparison throws naive time error http://bugs.python.org/issue14766 closed by r.david.murray #14770: Minor documentation fixes http://bugs.python.org/issue14770 closed by ezio.melotti #14773: fwalk breaks on dangling symlinks http://bugs.python.org/issue14773 closed by hynek #14777: Tkinter clipboard_get() decodes characters incorrectly http://bugs.python.org/issue14777 closed by ned.deily #14779: test_buffer fails on OS X universal 64-/32-bit builds http://bugs.python.org/issue14779 closed by skrah #14781: Default to year 1 in strptime if year 0 has been specified http://bugs.python.org/issue14781 closed by r.david.murray #14786: htmlparser with tag br http://bugs.python.org/issue14786 closed by ezio.melotti #14793: broken grammar in Built-in Types doc http://bugs.python.org/issue14793 closed by sandro.tosi #14800: stat.py constant comments + docstrings http://bugs.python.org/issue14800 closed by giampaolo.rodola #14801: ssize_t where size_t expected http://bugs.python.org/issue14801 closed by pitrou #14806: re.match does not match word '{' http://bugs.python.org/issue14806 closed by ezio.melotti #14809: Add HTTP status codes introduced by RFC 6585 http://bugs.python.org/issue14809 closed by hynek #14816: compilation failed on Ubuntu shared buildbot http://bugs.python.org/issue14816 closed by pitrou #14819: Add `assertIsSubclass` and `assertNotIsSubclass` to `unittest. http://bugs.python.org/issue14819 closed by ezio.melotti #14820: socket._decref_socketios and close http://bugs.python.org/issue14820 closed by giampaolo.rodola #14823: Simplify threading.Lock.acquire() description http://bugs.python.org/issue14823 closed by r.david.murray #14825: Interactive Shell vs Executed code http://bugs.python.org/issue14825 closed by mark.dickinson #14827: IDLE crash when typing ^ character on Mac OS X http://bugs.python.org/issue14827 closed by JPEC #14828: itertools.groupby not working as expected http://bugs.python.org/issue14828 closed by petri.lehtinen #14829: test_bisect failure under 64-bit Windows http://bugs.python.org/issue14829 closed by pitrou #14832: unittest's assertItemsEqual() method gives wrong order in erro http://bugs.python.org/issue14832 closed by r.david.murray #14839: xml.sax.make_parser() returns "No parsers found" http://bugs.python.org/issue14839 closed by Arfrever #1479611: speed up function calls http://bugs.python.org/issue1479611 closed by pitrou #504152: rfc822 long header continuation broken http://bugs.python.org/issue504152 closed by r.david.murray #1440472: email.Generator is not idempotent http://bugs.python.org/issue1440472 closed by r.david.murray From barry at python.org Fri May 18 20:24:18 2012 From: barry at python.org (Barry Warsaw) Date: Fri, 18 May 2012 14:24:18 -0400 Subject: [Python-Dev] docs.python.org pointing to Python 3 by default? Message-ID: <20120518142418.7609fe21@limelight.wooz.org> At what point should we cut over docs.python.org to point to the Python 3 documentation by default? Wouldn't this be an easy bit to flip in order to promote Python 3 more better? Cheers, -Barry -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 836 bytes Desc: not available URL: From brian.curtin at gmail.com Fri May 18 20:30:34 2012 From: brian.curtin at gmail.com (Brian Curtin) Date: Fri, 18 May 2012 13:30:34 -0500 Subject: [Python-Dev] docs.python.org pointing to Python 3 by default? In-Reply-To: <20120518142418.7609fe21@limelight.wooz.org> References: <20120518142418.7609fe21@limelight.wooz.org> Message-ID: On May 18, 2012 1:26 PM, "Barry Warsaw" wrote: > > At what point should we cut over docs.python.org to point to the Python 3 > documentation by default? Today sounds good to me. -------------- next part -------------- An HTML attachment was scrubbed... URL: From benjamin at python.org Fri May 18 20:36:30 2012 From: benjamin at python.org (Benjamin Peterson) Date: Fri, 18 May 2012 11:36:30 -0700 Subject: [Python-Dev] docs.python.org pointing to Python 3 by default? In-Reply-To: <20120518142418.7609fe21@limelight.wooz.org> References: <20120518142418.7609fe21@limelight.wooz.org> Message-ID: 2012/5/18 Barry Warsaw : > At what point should we cut over docs.python.org to point to the Python 3 > documentation by default? ?Wouldn't this be an easy bit to flip in order to > promote Python 3 more better? Perhaps on the occasion on the release on Python 3.3? -- Regards, Benjamin From hs at ox.cx Fri May 18 20:39:46 2012 From: hs at ox.cx (Hynek Schlawack) Date: Fri, 18 May 2012 20:39:46 +0200 Subject: [Python-Dev] docs.python.org pointing to Python 3 by default? In-Reply-To: <20120518142418.7609fe21@limelight.wooz.org> References: <20120518142418.7609fe21@limelight.wooz.org> Message-ID: <4FB69772.2080406@ox.cx> Hi, > At what point should we cut over docs.python.org to point to the > Python 3 documentation by default? Wouldn't this be an easy bit to > flip in order to promote Python 3 more better? I?d vote for the release of 3.3 instead of a surprise change in the middle of nowhere. Cheers, Hynek From barry at python.org Fri May 18 21:05:10 2012 From: barry at python.org (Barry Warsaw) Date: Fri, 18 May 2012 15:05:10 -0400 Subject: [Python-Dev] docs.python.org pointing to Python 3 by default? In-Reply-To: References: <20120518142418.7609fe21@limelight.wooz.org> Message-ID: <20120518150510.3acc23d4@resist.wooz.org> On May 18, 2012, at 11:36 AM, Benjamin Peterson wrote: >2012/5/18 Barry Warsaw : >> At what point should we cut over docs.python.org to point to the Python 3 >> documentation by default? ?Wouldn't this be an easy bit to flip in order to >> promote Python 3 more better? > >Perhaps on the occasion on the release on Python 3.3? Of course, I'm with Brian, JFDI. :) But coordinating with the 3.3 release would also be nice advertisement. -Barry -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 836 bytes Desc: not available URL: From tjreedy at udel.edu Fri May 18 23:15:55 2012 From: tjreedy at udel.edu (Terry Reedy) Date: Fri, 18 May 2012 17:15:55 -0400 Subject: [Python-Dev] docs.python.org pointing to Python 3 by default? In-Reply-To: <4FB69772.2080406@ox.cx> References: <20120518142418.7609fe21@limelight.wooz.org> <4FB69772.2080406@ox.cx> Message-ID: On 5/18/2012 2:39 PM, Hynek Schlawack wrote: > Hi, > >> At what point should we cut over docs.python.org to point to the >> Python 3 documentation by default? Wouldn't this be an easy bit to >> flip in order to promote Python 3 more better? > > I?d vote for the release of 3.3 instead of a surprise change in the > middle of nowhere. I would have done it with 3.2 and thought that was once agreed on. The last 3.2.3 would also have been a good time, but today might seem odd, so I would am willing to wait for 3.3 as long as it is not somehow forgotten about ;-). -- Terry Jan Reedy From ncoghlan at gmail.com Sat May 19 02:26:15 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sat, 19 May 2012 10:26:15 +1000 Subject: [Python-Dev] cpython: Remove outdated statements about threading and imports. In-Reply-To: <20120518152921.4c24e5d1@pitrou.net> References: <20120518152921.4c24e5d1@pitrou.net> Message-ID: On May 18, 2012 11:34 PM, "Antoine Pitrou" wrote: > > On Fri, 18 May 2012 23:16:09 +1000 > Nick Coghlan wrote: > > > I know you fixed the deadlock problem, but the warnings about shutdown > > misbehaviour are still valid. > > Do we have a reproducer? It should have been fixed by > http://bugs.python.org/issue1856. > No, I'd simply missed that change when it was made (or had forgotten about it). Cool. Cheers, Nick. -- Sent from my phone, thus the relative brevity :) -------------- next part -------------- An HTML attachment was scrubbed... URL: From glyph at twistedmatrix.com Sat May 19 20:43:13 2012 From: glyph at twistedmatrix.com (Glyph) Date: Sat, 19 May 2012 14:43:13 -0400 Subject: [Python-Dev] docs.python.org pointing to Python 3 by default? In-Reply-To: <20120518142418.7609fe21@limelight.wooz.org> References: <20120518142418.7609fe21@limelight.wooz.org> Message-ID: <5C8E31E3-94E6-4B8E-AF97-85DCEC747E56@twistedmatrix.com> On May 18, 2012, at 2:24 PM, Barry Warsaw wrote: > At what point should we cut over docs.python.org to point to the Python 3 > documentation by default? Wouldn't this be an easy bit to flip in order to > promote Python 3 more better? I would like to suggest a less all-or-nothing approach. Just redirecting to Python 3 docs is going to create a lot of support headaches for people trying to help others learn Python. Right now, e.g. directly renders a page. I suggest that this be changed to a redirect to . The fact that people can bookmark the "default" version of a document is kind of a bug. The front page, could then be changed into a "are you looking for documentation for Python 2 or Python 3?" page, with nice big click targets for each (an initial suggestion: half the page each, split down the middle, but the web design isn't really the important thing for me). If you want to promote python 3 then putting "most recent version" links (for example, see ) across the top of all the old versions would be pretty visible. -glyph From martin at v.loewis.de Sat May 19 22:48:50 2012 From: martin at v.loewis.de (martin at v.loewis.de) Date: Sat, 19 May 2012 22:48:50 +0200 Subject: [Python-Dev] docs.python.org pointing to Python 3 by default? In-Reply-To: <5C8E31E3-94E6-4B8E-AF97-85DCEC747E56@twistedmatrix.com> References: <20120518142418.7609fe21@limelight.wooz.org> <5C8E31E3-94E6-4B8E-AF97-85DCEC747E56@twistedmatrix.com> Message-ID: <20120519224850.Horde.wwYQN0lCcOxPuAcySCBiWUA@webmail.df.eu> > I would like to suggest a less all-or-nothing approach. Just > redirecting to Python 3 docs is going to create a lot of support > headaches for people trying to help others learn Python. I don't think this will be that bad. Most Python 3 documentation pages apply to Python 2 as well. There may be features documented that don't exist in Python 2, but it was always the case that users of older Python versions had to watch for the versionadded/versionchanged notices. IMO, it would be good if each individual page had an "other versions" section on left-hand block, or on the top along with the "previous | next" links. As for the amount of cross-linking, I suggest the following, assuming 2.7 and 3.3 are the current releases: 1. 2.7 links to 2.6 and 3.3 2. 3.3 links to 3.2 and 2.7 3. all older versions link to "newest", i.e. 3.3. I understand that this would require a custom mapping in some cases. It would be best if Sphinx could already consider such a mapping when generating links. Failing that, we can also do the custom mapping in the web server (i.e. with redirects). Regards, Martin From rosuav at gmail.com Sun May 20 01:38:26 2012 From: rosuav at gmail.com (Chris Angelico) Date: Sun, 20 May 2012 09:38:26 +1000 Subject: [Python-Dev] docs.python.org pointing to Python 3 by default? In-Reply-To: <5C8E31E3-94E6-4B8E-AF97-85DCEC747E56@twistedmatrix.com> References: <20120518142418.7609fe21@limelight.wooz.org> <5C8E31E3-94E6-4B8E-AF97-85DCEC747E56@twistedmatrix.com> Message-ID: On Sun, May 20, 2012 at 4:43 AM, Glyph wrote: > Right now, e.g. directly renders a page. ?I suggest that this be changed to a redirect to . ?The fact that people can bookmark the "default" version of a document is kind of a bug. I'm -1 on that; unless there's a strong reason to avoid it, bookmarking the "default" version seems like the right thing to me. (One example of a strong reason would be if all Python modules were numbered sequentially in alphabetical order, meaning that adding a new module changes the URLs of existing modules' pages.) Compare the PostgreSQL documentation: if you do a web search for 'postgres nextval', you'll find the documentation for Postgres's sequence functions (which is correct), but chances are it'll be the old docs - version 8.1 most likely. If there's no weighting toward one in particular, I'd say that returning information for the latest version is the most logical default. Obviously there's more docs difference between Python 2 and Python 3 than between Postgres 8.1 and Postgres 9.1, but the most accessible version of a page should not IMHO distinguish between Python minor versions. ChrisA From nadeem.vawda at gmail.com Sun May 20 02:39:49 2012 From: nadeem.vawda at gmail.com (Nadeem Vawda) Date: Sat, 19 May 2012 17:39:49 -0700 Subject: [Python-Dev] [Python-checkins] cpython: Clean up the PCBuild project files, removing redundant settings and In-Reply-To: References: Message-ID: On Sat, May 19, 2012 at 2:11 PM, kristjan.jonsson < python-checkins at python.org> wrote: > +Visual Studio 2010 uses version 10 of the C runtime (MSVCRT9). The > executables > Shouldn't that be MSVCRT10? Nadeem -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Sun May 20 10:38:10 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sun, 20 May 2012 18:38:10 +1000 Subject: [Python-Dev] Language reference updated for metaclasses Message-ID: When writing the docs for types.new_class(), I discovered that the description of the class creation process in the language reference was not only hard to follow, it was actually *incorrect* when it came to describing the algorithm for determining the correct metaclass. I rewrote the offending section of the language reference to both describe the correct algorithm, and hopefully also to be easier to read. Once people have had a chance to review the changes in the 3.3 docs, I'll backport the update to 3.2. Previous docs: http://docs.python.org/py3k/reference/datamodel.html#customizing-class-creation Updated docs: http://docs.python.org/dev/reference/datamodel.html#customizing-class-creation Cheers, Nick. -- Nick Coghlan?? |?? ncoghlan at gmail.com?? |?? Brisbane, Australia From ncoghlan at gmail.com Sun May 20 10:51:27 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sun, 20 May 2012 18:51:27 +1000 Subject: [Python-Dev] PEP 3135 (new super()) __class__ references broken in 3.3 Message-ID: PEP 3135 defines the new zero-argument form of super() as implicitly equivalent to super(__class__, ), and up until 3.2 has behaved accordingly: if you accessed __class__ from inside a method, you would receive a reference to the lexically containing class. In 3.3, that currently doesn't work: you get NameError instead (http://bugs.python.org/issue14857) While the 3.2 behaviour wasn't documented in the language reference, it's *definitely* documented in PEP 3135 (and my recent updates to the 3.3 version of the metaclass docs were written accordingly - that's how I discovered the problem) The error in the alpha releases appears to be a consequence of the attempt to fix a problem where the special treatment of __class__ meant that you couldn't properly set the __class__ attribute of the class itself in the class body (see http://bugs.python.org/issue12370). The fact that patch went in without causing a test failure means that this aspect of PEP 3135 has no explicit tests - it was only tested indirectly through the zero-argument super() construct. What I plan to do: 1. Revert the previous fix for #12370 2. Add tests for direct access to __class__ from methods 3. Create a *new* fix for #12370 that only affects the class scope, not the method bodies (this will be harder than the previous fix which affected the resolution of __class__ *everywhere* in the class body). Cheers, Nick. -- Nick Coghlan?? |?? ncoghlan at gmail.com?? |?? Brisbane, Australia From ncoghlan at gmail.com Sun May 20 10:53:08 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sun, 20 May 2012 18:53:08 +1000 Subject: [Python-Dev] PEP 3135 (new super()) __class__ references broken in 3.3 In-Reply-To: References: Message-ID: On Sun, May 20, 2012 at 6:51 PM, Nick Coghlan wrote: > What I plan to do: > 1. Revert the previous fix for #12370 > 2. Add tests for direct access to __class__ from methods > 3. Create a *new* fix for #12370 that only affects the class scope, > not the method bodies (this will be harder than the previous fix which > affected the resolution of __class__ *everywhere* in the class body). Correction - I only plan to *reopen* #12370. I agree it's a legitimate problem with the PEP 3135 implementation, but at least it's not a regression for something that previously worked in 3.2. Cheers, Nick. -- Nick Coghlan?? |?? ncoghlan at gmail.com?? |?? Brisbane, Australia From urban.dani+py at gmail.com Sun May 20 10:56:46 2012 From: urban.dani+py at gmail.com (Daniel Urban) Date: Sun, 20 May 2012 10:56:46 +0200 Subject: [Python-Dev] Language reference updated for metaclasses In-Reply-To: References: Message-ID: I think there is a small mistake in section "3.3.3.4. Creating the class object": "After the class object is created, any class decorators included in the *function* definition are invoked ..." That probaly should be "class definition". Daniel From solipsis at pitrou.net Sun May 20 12:09:00 2012 From: solipsis at pitrou.net (Antoine Pitrou) Date: Sun, 20 May 2012 12:09:00 +0200 Subject: [Python-Dev] cpython: Describe the default hash correctly, and mark a couple of CPython References: Message-ID: <20120520120900.6dcedc23@pitrou.net> On Sun, 20 May 2012 10:31:01 +0200 nick.coghlan wrote: > + > + .. impl-detail:: > + > + CPython uses ``hash(id(x))`` as the default hash for class instances. This isn't true: >>> class C: pass ... >>> c = C() >>> hash(c) 619973 >>> id(c) 9919568 >>> hash(id(c)) 9919568 id(...) always has the lower bits clear, so it was decided to shift it to the right by a number of bits. Regards Antoine. From ncoghlan at gmail.com Sun May 20 12:58:02 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sun, 20 May 2012 20:58:02 +1000 Subject: [Python-Dev] cpython: Describe the default hash correctly, and mark a couple of CPython In-Reply-To: <20120520120900.6dcedc23@pitrou.net> References: <20120520120900.6dcedc23@pitrou.net> Message-ID: On Sun, May 20, 2012 at 8:09 PM, Antoine Pitrou wrote: > On Sun, 20 May 2012 10:31:01 +0200 > nick.coghlan wrote: >> + >> + ? .. impl-detail:: >> + >> + ? ? ?CPython uses ``hash(id(x))`` as the default hash for class instances. > > This isn't true: > >>>> class C: pass > ... >>>> c = C() >>>> hash(c) > 619973 >>>> id(c) > 9919568 >>>> hash(id(c)) > 9919568 > id(...) always has the lower bits clear, so it was decided to shift it > to the right by a number of bits. Ah, you're right - I misread my own experiment. Regardless, the hash(c) == id(c) that *was* there was also wrong. I'll just drop the implementation detail entirely and leave the new wording on its own. Cheers, Nick. -- Nick Coghlan?? |?? ncoghlan at gmail.com?? |?? Brisbane, Australia From cf.natali at gmail.com Sun May 20 13:04:13 2012 From: cf.natali at gmail.com (=?ISO-8859-1?Q?Charles=2DFran=E7ois_Natali?=) Date: Sun, 20 May 2012 13:04:13 +0200 Subject: [Python-Dev] cpython: Describe the default hash correctly, and mark a couple of CPython In-Reply-To: <20120520120900.6dcedc23@pitrou.net> References: <20120520120900.6dcedc23@pitrou.net> Message-ID: Is documenting such implementation details really a good idea? Apart from preventing further evolutions/improvements/fixes (like the recent hash randomization), I don't see any benefit in exposing such details. FWIW, I clearly remember Josh Bloch warning against this type of documentation in one of its presentations (and in his excellent "Effective Java"). Cheers, cf From ncoghlan at gmail.com Sun May 20 13:20:05 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sun, 20 May 2012 21:20:05 +1000 Subject: [Python-Dev] cpython: Describe the default hash correctly, and mark a couple of CPython In-Reply-To: References: <20120520120900.6dcedc23@pitrou.net> Message-ID: On Sun, May 20, 2012 at 9:04 PM, Charles-Fran?ois Natali wrote: > Is documenting such implementation details really a good idea? > Apart from preventing further evolutions/improvements/fixes (like the > recent hash randomization), I don't see any benefit in exposing such > details. > FWIW, I clearly remember Josh Bloch warning against this type of > documentation in one of its presentations (and in his excellent > "Effective Java"). We've been weeding a lot of them out over time (e.g. by deleting them rather than updating them when they change). However, keeping them can be useful for a couple of reasons: - sometimes we're explicitly OK with people relying on certain CPython behaviour (or genuinely want to help them understand that behaviour) - sometimes it's useful as an additional hint to authors of other implementations Mostly (as in this case) they're just due to the past blurriness of the distinction between Python-the-language and CPython-the-reference-implementation, though. Cheers, Nick. -- Nick Coghlan?? |?? ncoghlan at gmail.com?? |?? Brisbane, Australia From hs at ox.cx Sun May 20 13:58:38 2012 From: hs at ox.cx (Hynek Schlawack) Date: Sun, 20 May 2012 13:58:38 +0200 Subject: [Python-Dev] Backward compatibility of shutil.rmtree Message-ID: <4FB8DC6E.406@ox.cx> Hi, as our shutil.rmtree() is vulnerable to symlink attacks (see ) I?ve implemented a safe version using os.fwalk() and os.unlinkat() for Python 3.3. Now we face a problem I?d like a broad opinion on: rmtree has a callback hook called `onerror` that that gets called with amongst others the function that caused the error (see ). Two of them differ in the new version: os.fwalk() is used instead of os.listdir() and os.unlinkat() instead of os.remove(). The safe version is used transparently if available, so this could potentially break code. Also it would mean that rmtree would behave differently on Linux & OS X for example. I?ve been thinking to "fake" the function names, as they map pretty good anyway. I.e. call onerror with os.listdir if os.fwalk failed and with os.remove instead of os.unlinkat. That could also make sense if some kind soul writes a safe rmtree for Windows or OS X so the function works the same across all platforms. It's a bit ugly though, a cleaner way would be to start using well defined symbols, but that would break code for sure. Opinions? Cheers, Hynek From solipsis at pitrou.net Sun May 20 15:03:21 2012 From: solipsis at pitrou.net (Antoine Pitrou) Date: Sun, 20 May 2012 15:03:21 +0200 Subject: [Python-Dev] PEP 3135 (new super()) __class__ references broken in 3.3 References: Message-ID: <20120520150321.7a8ea144@pitrou.net> On Sun, 20 May 2012 18:51:27 +1000 Nick Coghlan wrote: > PEP 3135 defines the new zero-argument form of super() as implicitly > equivalent to super(__class__, ), and up until 3.2 has > behaved accordingly: if you accessed __class__ from inside a method, > you would receive a reference to the lexically containing class. > > In 3.3, that currently doesn't work: you get NameError instead > (http://bugs.python.org/issue14857) > > While the 3.2 behaviour wasn't documented in the language reference, > it's *definitely* documented in PEP 3135 (and my recent updates to the > 3.3 version of the metaclass docs were written accordingly - that's > how I discovered the problem) The question is, do we want to support it? What's the use case? Regards Antoine. From ncoghlan at gmail.com Sun May 20 15:56:06 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sun, 20 May 2012 23:56:06 +1000 Subject: [Python-Dev] PEP 3135 (new super()) __class__ references broken in 3.3 In-Reply-To: <20120520150321.7a8ea144@pitrou.net> References: <20120520150321.7a8ea144@pitrou.net> Message-ID: On Sun, May 20, 2012 at 11:03 PM, Antoine Pitrou wrote: > On Sun, 20 May 2012 18:51:27 +1000 > Nick Coghlan wrote: >> PEP 3135 defines the new zero-argument form of super() as implicitly >> equivalent to super(__class__, ), and up until 3.2 has >> behaved accordingly: if you accessed __class__ from inside a method, >> you would receive a reference to the lexically containing class. >> >> In 3.3, that currently doesn't work: you get NameError instead >> (http://bugs.python.org/issue14857) >> >> While the 3.2 behaviour wasn't documented in the language reference, >> it's *definitely* documented in PEP 3135 (and my recent updates to the >> 3.3 version of the metaclass docs were written accordingly - that's >> how I discovered the problem) > > The question is, do we want to support it? What's the use case? Being able to deconstruct zero-argument super into something simpler (i.e. an implicit closure reference) makes it a *lot* more understandable, and lets people create their own variations on that theme rather than having it be completely opaque black magic (as it is now in 3.3). If __class__ had been covered by the test suite, then #12370 would never have been fixed the way it was. However, while it isn't mentioned in the language reference (well, not until I added a mention of it yesterday), PEP 3135 itself *was* updated to say "Every function will have a cell named __class__ that contains the class object that the function is defined in". The special variable is named as part of the specification section of the PEP. This contrasts with PEP 3115 and the __build_class__ builtin, where the latter isn't mentioned in the PEP at all - it's just a CPython implementation artifact. So this isn't a matter of "What's the use case for accessing __class__ directly?" - it's a matter of "We broke backwards compatibility for a documented (albeit only in the originating PEP) feature and the test suite didn't pick it up". Now, it isn't just a matter of reverting the old patch, because we need to bump the magic number for the bytecode again. But the fix for #12370 *is* broken, because it didn't just affect the __class__ references at class scope - it also changed them all at method scope. Cheers, Nick. -- Nick Coghlan?? |?? ncoghlan at gmail.com?? |?? Brisbane, Australia From tjreedy at udel.edu Sun May 20 17:28:40 2012 From: tjreedy at udel.edu (Terry Reedy) Date: Sun, 20 May 2012 11:28:40 -0400 Subject: [Python-Dev] [Python-checkins] cpython: Describe the default hash correctly, and mark a couple of CPython In-Reply-To: References: Message-ID: <4FB90DA8.2090607@udel.edu> On 5/20/2012 4:31 AM, nick.coghlan wrote: > + and ``x.__hash__()`` returns an appropriate value such that ``x == y`` > + implies both that ``x is y`` and ``hash(x) == hash(y)``. I don't understand what you were trying to say with x == y implies x is y but I know you know that that is not true ;=0. From martin at v.loewis.de Sun May 20 19:49:20 2012 From: martin at v.loewis.de (=?UTF-8?B?Ik1hcnRpbiB2LiBMw7Z3aXMi?=) Date: Sun, 20 May 2012 19:49:20 +0200 Subject: [Python-Dev] Backward compatibility of shutil.rmtree In-Reply-To: <4FB8DC6E.406@ox.cx> References: <4FB8DC6E.406@ox.cx> Message-ID: <4FB92EA0.2010203@v.loewis.de> > Two of them differ in the new version: os.fwalk() is used instead of > os.listdir() and os.unlinkat() instead of os.remove(). It would be os.flistdir instead of os.listdir, not os.fwalk, right? The way this interface is defined, it's IMO best to do "precise" reporting, i.e. pass the exact function that caused the failure. I'd weaken the documentation to just specify that the error-causing function is reported, indicating that the exact set of functions may depend on the operating system and change across Python versions. Regards, Martin From tjreedy at udel.edu Sun May 20 19:18:29 2012 From: tjreedy at udel.edu (Terry Reedy) Date: Sun, 20 May 2012 13:18:29 -0400 Subject: [Python-Dev] [Python-checkins] cpython: Issue #14814: addition of the ipaddress module (stage 1 - code and tests) In-Reply-To: References: Message-ID: <4FB92765.803@udel.edu> On 5/20/2012 7:02 AM, nick.coghlan wrote: > +def ip_address(address, version=None): > + """Take an IP string/int and return an object of the correct type. > + > + Args: > + address: A string or integer, the IP address. Either IPv4 or > + IPv6 addresses may be supplied; integers less than 2**32 will > + be considered to be IPv4 by default. > + version: An Integer, 4 or 6. If set, don't try to automatically integer, not Integer > + determine what the IP address type is. important for things > + like ip_address(1), which could be IPv4, '192.0.2.1', or IPv6, > + '2001:db8::1'. I read this as saying that a version other than 4 or 6 should be an error, and not ignored as if not set. If version is set incorrectly, it is still set. I certainly would expect an error to be an error. > + > + Returns: > + An IPv4Address or IPv6Address object. > + > + Raises: > + ValueError: if the string passed isn't either a v4 or a v6 > + address. Should say "if the *address*...", and I suggest adding "or if the version is not None, 4, or 6." > + > + """ > + if version: if version is not None: ?? Do you really want to silently ignore *every* null value, like '' or []? > + if version == 4: > + return IPv4Address(address) > + elif version == 6: > + return IPv6Address(address) else: raise ValueError() ?? ... > +def ip_network(address, version=None, strict=True): > + """Take an IP string/int and return an object of the correct type. > + > + Args: > + address: A string or integer, the IP network. Either IPv4 or > + IPv6 networks may be supplied; integers less than 2**32 will > + be considered to be IPv4 by default. > + version: An Integer, if set, don't try to automatically > + determine what the IP address type is. important for things > + like ip_network(1), which could be IPv4, '192.0.2.1/32', or IPv6, > + '2001:db8::1/128'. Same comments > +def ip_interface(address, version=None): > + """Take an IP string/int and return an object of the correct type. > + > + Args: > + address: A string or integer, the IP address. Either IPv4 or > + IPv6 addresses may be supplied; integers less than 2**32 will > + be considered to be IPv4 by default. > + version: An Integer, if set, don't try to automatically > + determine what the IP address type is. important for things > + like ip_network(1), which could be IPv4, '192.0.2.1/32', or IPv6, > + '2001:db8::1/128'. ditto > + Returns: > + An IPv4Network or IPv6Network object. Interface, not Network > +def v4_int_to_packed(address): > + """The binary representation of this address. Since integers are typically implemented as strings of binary bits, a 'binary representation' could mean a string of 0s and 1s. > + > + Args: > + address: An integer representation of an IPv4 IP address. > + > + Returns: > + The binary representation of this address. The integer address packed as 4 bytes in network (big-endian) order. > + Raises: > + ValueError: If the integer is too large to be an IPv4 IP > + address. And if the address is too small (negative)? "If the integer is negative or ..." ? > + """ > + if address> _BaseV4._ALL_ONES: or address < 0? > + raise ValueError('Address too large for IPv4') > + return struct.pack('!I', address) It is true that struct will raise struct.error: argument out of range for negative addresses, but it will also also do the same for too large addresses. So either let it propagate or catch it in both cases. For the latter (assuming the max is the max 4 byte int): try: return struct.pack('!I', address) except struct.error: raise ValueError("Address negative or too large for IPv4") > + > +def v6_int_to_packed(address): > + """The binary representation of this address. Similar comments, except packed into 16 bytes > + Args: > + address: An integer representation of an IPv4 IP address. > + > + Returns: > + The binary representation of this address. > + """ > + return struct.pack('!QQ', address>> 64, address& (2**64 - 1)) Why no range check? Here you are letting struct.error propagate. > + > +def _find_address_range(addresses): > + """Find a sequence of addresses. An 'address' can in various places be a string, int, bytes, IPv4Address, or IPv6Address. For neophyte users, I think you should be clear each time you use 'address'. From the code, I conclude it here means the latter two. > + > + Args: > + addresses: a list of IPv4 or IPv6 addresses. a list of IPv#Address objects. > +def _get_prefix_length(number1, number2, bits): > + """Get the number of leading bits that are same for two numbers. > + > + Args: > + number1: an integer. > + number2: another integer. > + bits: the maximum number of bits to compare. > + > + Returns: > + The number of leading bits that are the same for two numbers. > + > + """ > + for i in range(bits): > + if number1>> i == number2>> i: This non-PEP8 spacing is awful to read. The double space after the tighter binding operator is actively deceptive. Please use if number1 >> i == number2 >> i: > + if (number>> i) % 2: ditto > +def summarize_address_range(first, last): > + Args: > + first: the first IPv4Address or IPv6Address in the range. > + last: the last IPv4Address or IPv6Address in the range. > + > + Returns: > + An iterator of the summarized IPv(4|6) network objects. Very clear as to types. > + while first_int<= last_int: PEP8: while first_int <= last_int: is *really* much easier to read. > + while nbits>= 0: ditto, etcetera through rest of file. > + while mask: > + if ip_int& 1 == 1: Instead of no space and then 2 spaces, use uniform 1 space around & if ip_int & 1 == 1: > + ip_int>>= 1 To me, augmented assignments need a space before and after even more than plain assignments, especially with underscored names. ip_int >>= 1 I am guessing that Peter dislikes putting ' ' before '<' and '>' and perhaps '&', but it makes code harder to read. Putting an extra space after is even worse. This is as far as I read. Some of the style changes could be done with global search and selective replace. --- Terry Jan Reedy From solipsis at pitrou.net Sun May 20 20:29:41 2012 From: solipsis at pitrou.net (Antoine Pitrou) Date: Sun, 20 May 2012 20:29:41 +0200 Subject: [Python-Dev] [Python-checkins] cpython: Issue #14814: addition of the ipaddress module (stage 1 - code and tests) References: <4FB92765.803@udel.edu> Message-ID: <20120520202941.374e89c4@pitrou.net> On Sun, 20 May 2012 13:18:29 -0400 Terry Reedy wrote: > > + > > + """ > > + if version: > > if version is not None: ?? > Do you really want to silently ignore *every* null value, like '' or []? The goal is probably to have "midnight" mean "auto-detect the address family" ;-) cheers Antoine. From rdmurray at bitdance.com Sun May 20 21:02:52 2012 From: rdmurray at bitdance.com (R. David Murray) Date: Sun, 20 May 2012 15:02:52 -0400 Subject: [Python-Dev] Email6 status In-Reply-To: <20120501105503.49774ada@resist.wooz.org> References: <20120501144009.720AE250147@webabinitio.net> <20120501105503.49774ada@resist.wooz.org> Message-ID: <20120520190253.9B6C22500E9@webabinitio.net> On Tue, 01 May 2012 10:55:03 -0400, Barry Warsaw wrote: > On May 01, 2012, at 10:40 AM, R. David Murray wrote: > >I guess it's time to talk about my plans for this one :) > > Thanks for the update RDM. I really wish I had more time to contribute to > email6, but I'd still really like to see this land in 3.3 if possible. > > I suspect you're just not going to get much practical feedback on email6 until > it's available in Python's stdlib. I don't know how many Python 3 email > consuming applications there are out there. The one I'm intimately familiar > with still can't port to Python 3 because of its dependencies. My thought exactly. > >What I'd like to do is have the second patch introduce the new policies > >as *provisional policies*. That is, in the spirit but not the letter > >of PEP 411, I'd like the new header API to be considered provisional > >and subject to improvement in 3.4 based on what we learn by having it > >actually out there in the field and getting tested. > > That seems reasonable to me. The documentation should be clear as to what's > provisional and what's stable. With that, and based on your level of > confidence, I'd be in favor of getting email6 into Python 3.3. OK, both patches are now up on the tracker. The first patch, as mentioned, does some internal refactoring that makes the policy framework cleaner and adds hooks and a 'compat32' policy implementation such that the current Python 3.2 behavior is preserved by default. That's issue 14731: http://bugs.python.org/issue14731 The second patch adds a policy implementation (marked as provisional) that adds the new header parsing and folding. As of this patch only 'Date' type and 'Address' type headers are parsed as anything other than Unstructured, but that's already worlds better than the compat32 policy. That's issue 12586: http://bugs.python.org/issue12586 I would appreciate reviews of both patches, even cursory ones. This split up should make them as easy to review as such big patches can be: the goal of 14731 is 100% backward compatibility, so a review can focus on making sure that the tests match the Python 3.2 tests (with some additions for bugs fixed). 125867 then adds a bunch of new code that can be evaluated on its own merits. Absent objection from patch reviewers, my plan is to apply these patches before the next alpha (which is scheduled for May 26th, ie: next weekend). --David From benjamin at python.org Sun May 20 22:28:43 2012 From: benjamin at python.org (Benjamin Peterson) Date: Sun, 20 May 2012 13:28:43 -0700 Subject: [Python-Dev] PEP 3135 (new super()) __class__ references broken in 3.3 In-Reply-To: References: Message-ID: 2012/5/20 Nick Coghlan : > PEP 3135 defines the new zero-argument form of super() as implicitly > equivalent to super(__class__, ), and up until 3.2 has > behaved accordingly: if you accessed __class__ from inside a method, > you would receive a reference to the lexically containing class. I don't understand why PEP 3135 cares how it's implemented. It's silly enough that you can get the class by "using" super (even just referencing the name). Thus that you can get __class__ reeks of more an implementation detail than a feature to me. -- Regards, Benjamin From ironfroggy at gmail.com Sun May 20 22:49:02 2012 From: ironfroggy at gmail.com (Calvin Spealman) Date: Sun, 20 May 2012 16:49:02 -0400 Subject: [Python-Dev] PEP 3135 (new super()) __class__ references broken in 3.3 In-Reply-To: References: Message-ID: On Sun, May 20, 2012 at 4:28 PM, Benjamin Peterson wrote: > 2012/5/20 Nick Coghlan : >> PEP 3135 defines the new zero-argument form of super() as implicitly >> equivalent to super(__class__, ), and up until 3.2 has >> behaved accordingly: if you accessed __class__ from inside a method, >> you would receive a reference to the lexically containing class. > > I don't understand why PEP 3135 cares how it's implemented. It's silly > enough that you can get the class by "using" super (even just > referencing the name). Thus that you can get __class__ reeks of more > an implementation detail than a feature to me. It made sense at the time to discuss the issues together. It was often wanted to reference the "current class" and super was simply the most common reason for this and, as was the point of the PEP in the first place, given an even more direct shortcut. I never would have considered __class__ a simple implementation detail. > -- > Regards, > Benjamin > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/ironfroggy%40gmail.com -- Read my blog! I depend on your acceptance of my opinion! I am interesting! http://techblog.ironfroggy.com/ Follow me if you're into that sort of thing: http://www.twitter.com/ironfroggy From benjamin at python.org Sun May 20 22:58:51 2012 From: benjamin at python.org (Benjamin Peterson) Date: Sun, 20 May 2012 13:58:51 -0700 Subject: [Python-Dev] PEP 3135 (new super()) __class__ references broken in 3.3 In-Reply-To: References: Message-ID: 2012/5/20 Calvin Spealman : > On Sun, May 20, 2012 at 4:28 PM, Benjamin Peterson wrote: >> 2012/5/20 Nick Coghlan : >>> PEP 3135 defines the new zero-argument form of super() as implicitly >>> equivalent to super(__class__, ), and up until 3.2 has >>> behaved accordingly: if you accessed __class__ from inside a method, >>> you would receive a reference to the lexically containing class. >> >> I don't understand why PEP 3135 cares how it's implemented. It's silly >> enough that you can get the class by "using" super (even just >> referencing the name). Thus that you can get __class__ reeks of more >> an implementation detail than a feature to me. > > It made sense at the time to discuss the issues together. It was often wanted > to reference the "current class" and super was simply the most common reason > for this and, as was the point of the PEP in the first place, given an even more > direct shortcut. Well, then, back to the old way it is. -- Regards, Benjamin From hs at ox.cx Sun May 20 23:20:50 2012 From: hs at ox.cx (Hynek Schlawack) Date: Sun, 20 May 2012 23:20:50 +0200 Subject: [Python-Dev] Backward compatibility of shutil.rmtree In-Reply-To: <4FB92EA0.2010203@v.loewis.de> References: <4FB8DC6E.406@ox.cx> <4FB92EA0.2010203@v.loewis.de> Message-ID: <4FB96032.1080709@ox.cx> >> Two of them differ in the new version: os.fwalk() is used instead of >> os.listdir() and os.unlinkat() instead of os.remove(). > It would be os.flistdir instead of os.listdir, not os.fwalk, right? It?s actually os.fwalk. It has been implemented by Charles-Fran?ois as a dependency of the ticket because it seemed generally useful ? therefore I used it for the implementation. (There has been also been the idea to re-implement the default rmdir with os.walk to make them more similar; but that's a different story.) > The way this interface is defined, it's IMO best to do "precise" > reporting, i.e. pass the exact function that caused the failure. > I'd weaken the documentation to just specify that the error-causing > function is reported, indicating that the exact set of functions > may depend on the operating system and change across Python versions. So you suggest to not mention all the possible functions at all? That seems useful to me, as the list will (hopefully) grow anyway and nailing it down is getting less useful with every new implementation. Regards, Hynek From martin at v.loewis.de Sun May 20 23:46:22 2012 From: martin at v.loewis.de (martin at v.loewis.de) Date: Sun, 20 May 2012 23:46:22 +0200 Subject: [Python-Dev] Backward compatibility of shutil.rmtree In-Reply-To: <4FB96032.1080709@ox.cx> References: <4FB8DC6E.406@ox.cx> <4FB92EA0.2010203@v.loewis.de> <4FB96032.1080709@ox.cx> Message-ID: <20120520234622.Horde.Ma73KsL8999PuWYub5O3JPA@webmail.df.eu> Zitat von Hynek Schlawack : >>> Two of them differ in the new version: os.fwalk() is used instead of >>> os.listdir() and os.unlinkat() instead of os.remove(). >> It would be os.flistdir instead of os.listdir, not os.fwalk, right? > > It?s actually os.fwalk. It has been implemented by Charles-Fran?ois as a > dependency of the ticket because it seemed generally useful ? therefore > I used it for the implementation. I think that's a mistake then, because of the limited error reporting. With os.fwalk, you don't know exactly what it is that failed, but it may be useful to know. So I propose to duplicate the walking in rmtree. I also wonder how exactly in your implementation directory handles get closed, and how that correlates to attempts at removing the directories. > (There has been also been the idea to re-implement the default rmdir > with os.walk to make them more similar; but that's a different story.) -1 on that, for the reasons above. > So you suggest to not mention all the possible functions at all? That > seems useful to me, as the list will (hopefully) grow anyway and nailing > it down is getting less useful with every new implementation. Exactly. Users would have to look at the code, but that will make them aware that the code may change. For that reason, also, using fwalk is a bad idea, since they then will need to trace their code reading into fwalk. Regards, Martin From hs at ox.cx Mon May 21 00:17:40 2012 From: hs at ox.cx (Hynek Schlawack) Date: Mon, 21 May 2012 00:17:40 +0200 Subject: [Python-Dev] Backward compatibility of shutil.rmtree In-Reply-To: <20120520234622.Horde.Ma73KsL8999PuWYub5O3JPA@webmail.df.eu> References: <4FB8DC6E.406@ox.cx> <4FB92EA0.2010203@v.loewis.de> <4FB96032.1080709@ox.cx> <20120520234622.Horde.Ma73KsL8999PuWYub5O3JPA@webmail.df.eu> Message-ID: <4FB96D84.1080704@ox.cx> Am 20.05.12 23:46, schrieb martin at v.loewis.de: >>>> Two of them differ in the new version: os.fwalk() is used instead of >>>> os.listdir() and os.unlinkat() instead of os.remove(). >>> It would be os.flistdir instead of os.listdir, not os.fwalk, right? >> It?s actually os.fwalk. It has been implemented by Charles-Fran?ois as a >> dependency of the ticket because it seemed generally useful ? therefore >> I used it for the implementation. > I think that's a mistake then, because of the limited error reporting. > With os.fwalk, you don't know exactly what it is that failed, but it > may be useful to know. Well, as fwalk does only directory traversing, it means that something went wrong while doing so. The exception should be more helpful at this point, no? > So I propose to duplicate the walking in rmtree. I'm -1 on that one; the information gain doesn?t seem that big to me and doing fwalk right isn't trivial (see ). It?s easy to do a copy?n?paste now but the trade-off of having to maintain both for a bit more of information from a high level function doesn?t seem worth to me. > I also wonder how exactly in your implementation directory handles > get closed, and how that correlates to attempts at removing the > directories. Directory handles get closed inside of fwalk (try/finally) ? but I think it?s easier if you take a quick look yourself before I explain things to you you didn?t want to know. :) Regards, Hynek From raymond.hettinger at gmail.com Mon May 21 01:27:06 2012 From: raymond.hettinger at gmail.com (Raymond Hettinger) Date: Sun, 20 May 2012 16:27:06 -0700 Subject: [Python-Dev] docs.python.org pointing to Python 3 by default? In-Reply-To: <20120518142418.7609fe21@limelight.wooz.org> References: <20120518142418.7609fe21@limelight.wooz.org> Message-ID: <5EE84AF9-42EC-4E02-8B3D-6A9B815D680D@gmail.com> On May 18, 2012, at 11:24 AM, Barry Warsaw wrote: > At what point should we cut over docs.python.org to point to the Python 3 > documentation by default? Wouldn't this be an easy bit to flip in order to > promote Python 3 more better? My experience teaching and consulting suggests that this would be a bad move. People are using Python2.7 and are going to docs.python.org for information. This would only disrupt their experience. It wouldn't "promote" anything, it would just make accessing the documentation more awkward for the large majority of users who are still on Python 2. When there is more uptake of Python 3, it would be reasonable move. If it is done now, it will just create confusion and provide no benefit. Raymond -------------- next part -------------- An HTML attachment was scrubbed... URL: From meadori at gmail.com Mon May 21 01:55:03 2012 From: meadori at gmail.com (Meador Inge) Date: Sun, 20 May 2012 18:55:03 -0500 Subject: [Python-Dev] dir() in inspect.py ? In-Reply-To: <4FB2B8D0.1010102@stackless.com> References: <4FB2B8D0.1010102@stackless.com> Message-ID: On Tue, May 15, 2012 at 3:13 PM, Christian Tismer wrote: > Is the usage of dir() correct in this context or is the doc right? > It would be nice to add a sentence of clarification if the use of > dir() is in fact the correct way to implement inspect. There is already a note in the inspect.getmembers documentation (http://docs.python.org/library/inspect.html#inspect.getmembers): """ Note getmembers() does not return metaclass attributes when the argument is a class (this behavior is inherited from the dir() function). """ In any case, open a tracker issue if you think the documentation needs to be improved or that there might be a bug. -- Meador From guido at python.org Mon May 21 03:23:09 2012 From: guido at python.org (Guido van Rossum) Date: Sun, 20 May 2012 18:23:09 -0700 Subject: [Python-Dev] docs.python.org pointing to Python 3 by default? In-Reply-To: <5EE84AF9-42EC-4E02-8B3D-6A9B815D680D@gmail.com> References: <20120518142418.7609fe21@limelight.wooz.org> <5EE84AF9-42EC-4E02-8B3D-6A9B815D680D@gmail.com> Message-ID: I suggest that we add a separate (virtual) subdomain, e.g. docs3.python.org. On Sun, May 20, 2012 at 4:27 PM, Raymond Hettinger wrote: > > On May 18, 2012, at 11:24 AM, Barry Warsaw wrote: > > At what point should we cut over?docs.python.org?to point to the Python 3 > documentation by default? ?Wouldn't this be an easy bit to flip in order to > promote Python 3 more better? > > > My experience teaching and consulting suggests that this would be a bad > move. > People are using Python2.7 and are going to docs.python.org for information. > This would only disrupt their experience. > > It wouldn't "promote" anything, it would just make accessing the > documentation > more awkward for the large majority of users who are still on Python 2. > > When there is more uptake of Python 3, it would be reasonable move. > If it is done now, it will just create confusion and provide no benefit. > > > Raymond > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > http://mail.python.org/mailman/options/python-dev/guido%40python.org > -- --Guido van Rossum (python.org/~guido) From guido at python.org Mon May 21 03:33:03 2012 From: guido at python.org (Guido van Rossum) Date: Sun, 20 May 2012 18:33:03 -0700 Subject: [Python-Dev] PEP 420 - dynamic path computation is missing rationale Message-ID: I have just reviewed PEP 420 (namespace packages) and sent Eric my detailed feedback; most of it is minor or requesting for examples and I'm sure he'll fix it to my satisfaction. Generally speaking the PEP is a beacon if clarity. But I stumbled about one feature that bothers me in its specification and through its lack of rationale. This is the section on Dynamic Path Computation: (http://www.python.org/dev/peps/pep-0420/#dynamic-path-computation). The specification bothers me because it requires in-place modification of sys.path. Does this mean sys.path is no longer a plain list? I'm sure it's going to break things left and right (or at least things will be violating this requirement left and right); there has never been a similar requirement (unlike, e.g., sys.modules, which is relatively well-known for being cached in a C-level global variable). Worse, this apparently affects __path__ variables of namespace packages as well, which are now specified as an unspecified read-only iterable. (I can only guess that there is a connection between these two features -- the PEP doesn't mention one.) Again, I would be much happier with just a list. While I can imagine there being a use case for recomputing the various paths, I am much less sure that it is worth attempting to specify that this will happen *automatically* when sys.path is modified in a certain way. I'd be much happier if these constraints were struck and the recomputation had to be requested explicitly by calling some new function in sys. >From my POV, this is the only show-stopper for acceptance of PEP 420. (That is, either a rock-solid rationale should be supplied, or the constraints should be removed.) -- --Guido van Rossum (python.org/~guido) From guido at python.org Mon May 21 03:36:30 2012 From: guido at python.org (Guido van Rossum) Date: Sun, 20 May 2012 18:36:30 -0700 Subject: [Python-Dev] dir() in inspect.py ? In-Reply-To: References: <4FB2B8D0.1010102@stackless.com> Message-ID: On Sun, May 20, 2012 at 4:55 PM, Meador Inge wrote: > On Tue, May 15, 2012 at 3:13 PM, Christian Tismer wrote: > >> Is the usage of dir() correct in this context or is the doc right? >> It would be nice to add a sentence of clarification if the use of >> dir() is in fact the correct way to implement inspect. > > There is already a note in the inspect.getmembers documentation > (http://docs.python.org/library/inspect.html#inspect.getmembers): > > """ > Note > > getmembers() does not return metaclass attributes when the argument is > a class (this behavior is inherited from the dir() function). > """ > > In any case, open a tracker issue if you think the documentation needs > to be improved or that there might be a bug. > > -- Meador I have to agree with Christian that inspect.py is full of hacks and heuristics that would be fine in a module that's part of a user's app or even in a library, but stand out as brittle or outright unreliable in a stdlib module. Basically, you can't trust that inspect.py will work. I've seen various occasions (sorry, can't remember details) where some function in it outright crashed when given a slightly unusual (but not unreasonable) argument. It might be a nice project for a new contributor to improve this situation. -- --Guido van Rossum (python.org/~guido) From ncoghlan at gmail.com Mon May 21 04:50:11 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Mon, 21 May 2012 12:50:11 +1000 Subject: [Python-Dev] [Python-checkins] cpython: Describe the default hash correctly, and mark a couple of CPython In-Reply-To: <4FB90DA8.2090607@udel.edu> References: <4FB90DA8.2090607@udel.edu> Message-ID: It's true for the default comparison definition for user defined classes, which is what that paragraph describes. -- Sent from my phone, thus the relative brevity :) On May 21, 2012 2:32 AM, "Terry Reedy" wrote: > On 5/20/2012 4:31 AM, nick.coghlan wrote: > > + and ``x.__hash__()`` returns an appropriate value such that ``x == y`` >> + implies both that ``x is y`` and ``hash(x) == hash(y)``. >> > > I don't understand what you were trying to say with > x == y implies x is y > but I know you know that that is not true ;=0. > > ______________________________**_________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/**mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/**mailman/options/python-dev/** > ncoghlan%40gmail.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Mon May 21 06:28:06 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Mon, 21 May 2012 14:28:06 +1000 Subject: [Python-Dev] docs.python.org pointing to Python 3 by default? In-Reply-To: References: <20120518142418.7609fe21@limelight.wooz.org> <5EE84AF9-42EC-4E02-8B3D-6A9B815D680D@gmail.com> Message-ID: On Mon, May 21, 2012 at 11:23 AM, Guido van Rossum wrote: > I suggest that we add a separate (virtual) subdomain, e.g. docs3.python.org. Rather than a new subdomain, I'd prefer to see a discreet "documentation version" CSS widget, similar to that used in the Django docs (see https://docs.djangoproject.com/en/1.4/) that indicated the current displayed version and provided quick links to the 2.7 docs, the stable 3.x docs and the development docs. The versionadded/versionchanged notes in the 3.x series are not adequate for 2.x development, as everything up to and including 3.0 is taken as a given - the notes are used solely for changes within the 3.x series. I know plenty of people are keen to push the migration to Python 3 forward as quickly as possible, but this is *definitely* a case of "make haste slowly". We need to tread carefully or we're going to give existing users an even stronger feeling that we simply don't care about the impact the Python 3 migration is having (or is going to have) on them. *We* know that we care, but there's still plenty of folks out there that don't realise how deeply rooted the problems are in Python 2's text model and why the Python 3 backwards compatibility break was needed to fix them. They don't get to see the debates that happen on this list - they only get to see the end results of our decisions. Switching the default docs.python.org version to the 3.x series is a move that needs to be advertised *well* in advance as a courtesy to our users, so that those that need to specifically reference 2.7 have plenty of time to update their links. Back when Python 3 was first released, we set a target for the migration period of around 5 years. Since the io performance problems in 3.0 meant that 3.1 was the first real production ready release of 3.x, that makes June 2014 the target date for when we would like the following things to be true: - all major third party libraries and frameworks support Python 3 (or there are Python 3 forks or functional replacements) - Python 3 is the default choice for most new Python projects - most Python instruction uses Python 3, with Python 2 differences described for those that need to work with legacy code - (less likely, but possible) user-focused distros such as Ubuntu and Fedora have changed their "python" symlink to refer to Python 3 That's still 2 years away, and should line up fairly nicely with the release of Python 3.4 (assuming the current release cadence is maintained for at least one more version). Key web and networking frameworks such as Django [1], Pyramid [2] and Twisted [3] should also be well supported on 3.x by that point. In the meantime, I propose the following steps be taken in order to prepare for the eventual migration: - change the current unqualified URLs into redirects to the corresponding direct 2.7 URLs - add a "latest" subpath that is equivalent to the current "py3k" subpath - add a Django-inspired version switching widget to the CSS & HTML for the 2.7, 3.2 and trunk docs that offers the following options: 2.7, 3.2, latest (3.2), dev (3.3). Cheers, Nick. [1] https://www.djangoproject.com/weblog/2012/mar/13/py3k/ [2] http://docs.pylonsproject.org/projects/pyramid/en/1.3-branch/whatsnew-1.3.html [3] http://twistedmatrix.com/trac/milestone/Python-3.x -- Nick Coghlan?? |?? ncoghlan at gmail.com?? |?? Brisbane, Australia From martin at v.loewis.de Mon May 21 07:42:43 2012 From: martin at v.loewis.de (martin at v.loewis.de) Date: Mon, 21 May 2012 07:42:43 +0200 Subject: [Python-Dev] docs.python.org pointing to Python 3 by default? In-Reply-To: References: <20120518142418.7609fe21@limelight.wooz.org> <5EE84AF9-42EC-4E02-8B3D-6A9B815D680D@gmail.com> Message-ID: <20120521074243.Horde.ni14edjz9kRPudXTEVCEZLA@webmail.df.eu> > I know plenty of people are keen to push the migration to Python 3 > forward as quickly as possible, but this is *definitely* a case of > "make haste slowly". We need to tread carefully or we're going to give > existing users an even stronger feeling that we simply don't care > about the impact the Python 3 migration is having (or is going to > have) on them. I don't think users will have *that* feeling. I got comments that users were puzzled that we kept continuing development on 2.x when 3.x was released, so users do recognize that the migration to 3.x is not abrupt. > *We* know that we care, but there's still plenty of > folks out there that don't realise how deeply rooted the problems are > in Python 2's text model and why the Python 3 backwards compatibility > break was needed to fix them. I don't think users care much about philosophical or abstract engineering differences between the versions when thinking about porting. I'd expect that most of them agree, in the abstract, that they will have to port to Python 3 eventually. Some, of course, wish to stay with Python 2 forever, and wish that this Python 3 madness is simply abandoned. That they don't port is often caused by missing dependencies. If all dependencies are met, it's caused by simple lack of time and energy. > Back when Python 3 was first released, we set a target for the > migration period of around 5 years. Maybe you set this target for yourself. I set "Python 3.2/3.3" as a target. I think Guido set an even earlier target initially. Regards, Martin From tjreedy at udel.edu Mon May 21 07:47:50 2012 From: tjreedy at udel.edu (Terry Reedy) Date: Mon, 21 May 2012 01:47:50 -0400 Subject: [Python-Dev] docs.python.org pointing to Python 3 by default? In-Reply-To: References: <20120518142418.7609fe21@limelight.wooz.org> <5EE84AF9-42EC-4E02-8B3D-6A9B815D680D@gmail.com> Message-ID: On 5/21/2012 12:28 AM, Nick Coghlan wrote: > On Mon, May 21, 2012 at 11:23 AM, Guido van Rossum wrote: >> I suggest that we add a separate (virtual) subdomain, e.g. docs3.python.org. I was about to post the exact same idea. docs.python.org/py3k is a bit obscure and buried and makes Python 3.x look a bit like a second-class citizen on the site. It has previously been our policy that each new production-ready release takes 'pride of place' at docs.python.org. Not doing so even with 3.3, *and doing nothing else*, could be taken as implying that we lack full confidence in the release. On the other hand, I am sympathetic to Raymond's and Nick's points that switching might seem too much 'in their faces' for Py 2 users, especially those who do not have or use an offline help file as their everyday reference. I want Python 3 to get equal billing, but not to generate reaction against it. I also suggest docs2.python.org as the permanent home for latest python 2 docs for as long as it seems sensible (probably a decade at least). Make that operable now and suggest on the front page of docs.python.org that py2 users switch before 3.4. > Rather than a new subdomain, I'd prefer to see a discreet > "documentation version" CSS widget, similar to that used in the Django > docs (see https://docs.djangoproject.com/en/1.4/) that indicated the > current displayed version and provided quick links to the 2.7 docs, > the stable 3.x docs and the development docs. Each page of our docs say "Python 3.3.0a3 Documentation", or the equivalent, at the top. So we already have that covered. The drop-down version selection box on the django page seems to only apply to searches. Merely selecting a different version does not trigger anything. What might be useful is to have the 'Other versions' links on the left margin of *every* page, not just the front page, but have them link to the corresponding page of the other docs (if there is one, and non-trivial I expect). For someone trying to write combined 2/3 code, or merely to learn the other version, I would think it useful to be able to jump to the corresponding page for the other version. -- Terry Jan Reedy From taschini at ieee.org Mon May 21 07:52:05 2012 From: taschini at ieee.org (Stefano Taschini) Date: Mon, 21 May 2012 07:52:05 +0200 Subject: [Python-Dev] dir() in inspect.py ? In-Reply-To: References: <4FB2B8D0.1010102@stackless.com> Message-ID: On 21 May 2012 03:36, Guido van Rossum wrote: > [...] > > I have to agree with Christian that inspect.py is full of hacks and > heuristics that would be fine in a module that's part of a user's app > or even in a library, but stand out as brittle or outright unreliable > in a stdlib module. Basically, you can't trust that inspect.py will > work. I've seen various occasions (sorry, can't remember details) > where some function in it outright crashed when given a slightly > unusual (but not unreasonable) argument. It might be a nice project > for a new contributor to improve this situation. > [...] > An example that crashes is >>> def f(l, (x, y)): ... sup = max(u*x + v*y for u, v in l) ... return ((u, v) for u, v in l if u*x + v*y == sup) >>> inspect.getargspec(f) See http://bugs.python.org/issue14611 . I did submit a patch, a few weeks ago. Stefano -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Mon May 21 08:28:49 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Mon, 21 May 2012 16:28:49 +1000 Subject: [Python-Dev] docs.python.org pointing to Python 3 by default? In-Reply-To: References: <20120518142418.7609fe21@limelight.wooz.org> <5EE84AF9-42EC-4E02-8B3D-6A9B815D680D@gmail.com> Message-ID: On Mon, May 21, 2012 at 3:47 PM, Terry Reedy wrote: > On 5/21/2012 12:28 AM, Nick Coghlan wrote: >> >> On Mon, May 21, 2012 at 11:23 AM, Guido van Rossum >> ?wrote: >>> >>> I suggest that we add a separate (virtual) subdomain, e.g. >>> docs3.python.org. > > > I was about to post the exact same idea. Please, no - proliferating subdomains can quickly get confusing and hard to remember. It makes sense up to a point (e.g. separating out the docs from everything else on python.org), but having multiple docs subdomains is completely unnecessary when we already have directory based versioning. Namespaces are a great idea, let's do more of those :) > docs.python.org/py3k is a bit obscure and buried and makes Python 3.x look a > bit like a second-class citizen on the site. It has previously been our > policy that each new production-ready release takes 'pride of place' at > docs.python.org. Not doing so even with 3.3, *and doing nothing else*, could > be taken as implying that we lack full confidence in the release. Having "http://docs.python.org/latest" refer to Python 3.x would remove the "second class citizen" status, as well as providing a clear indication right in the URL that docs.python.org contains more content than just the latest version of the docs. The unqualified URLs could then become redirects to "latest" after a suitable migration period with a notification and a link to the 2.7 version specific docs on each page. For example, at the release of 3.3, each page of the default docs on the website could be updated with a note like the following: "The default documentation pages will be switching to the Python 3 series in February 2012, 6 months after the release of Python 3.3. The permanent link for the 2.7 version of this page is: " > On the other hand, I am sympathetic to Raymond's and Nick's points that > switching might seem too much 'in their faces' for Py 2 users, especially > those who do not have or use an offline help file as their everyday > reference. I want Python 3 to get equal billing, but not to generate > reaction against it. Right, and switching the default docs without a suitable notice period would be a great way to generate confusion. Migrating to a "latest" URL has no such negative impact: - the new URLs become available immediately for those that want to use them - the old URLs can be converted to 301 redirects after a suitable warning period > I also suggest docs2.python.org as the permanent home for latest python 2 > docs for as long as it seems sensible (probably a decade at least). Make > that operable now and suggest on the front page of docs.python.org that py2 > users switch before 3.4. I think "http://docs.python.org/2.7" is fine as the long term home for the final version of the Python 2 documentation (it also has the virtue of already existing). >> Rather than a new subdomain, I'd prefer to see a discreet >> "documentation version" CSS widget, similar to that used in the Django >> docs (see https://docs.djangoproject.com/en/1.4/) that indicated the >> current displayed version and provided quick links to the 2.7 docs, >> the stable 3.x docs and the development docs. > > Each page of our docs say "Python 3.3.0a3 Documentation", or the equivalent, > at the top. So we already have that covered. The drop-down version selection > box on the django page seems to only apply to searches. Merely selecting a > different version does not trigger anything. > > What might be useful is to have the 'Other versions' links on the left > margin of *every* page, not just the front page, but have them link to the > corresponding page of the other docs (if there is one, and non-trivial I > expect). For someone trying to write combined 2/3 code, or merely to learn > the other version, I would think it useful to be able to jump to the > corresponding page for the other version. That's what the Django widget does. I'm not talking about their search form - I'm talking about the floating CSS box that appears in the bottom right of each page and stays there as you scroll down. If you click on it, the list of available documentation versions appears, with direct links to the corresponding page in the other versions. It has several attractive features: - always present, even when you scroll down on a long page - unobtrusive when you don't need it (only displays current version by default, have to click it to get the list of all versions) - direct links to the corresponding page in other versions Cheers, Nick. -- Nick Coghlan?? |?? ncoghlan at gmail.com?? |?? Brisbane, Australia From benjamin at python.org Mon May 21 08:32:57 2012 From: benjamin at python.org (Benjamin Peterson) Date: Sun, 20 May 2012 23:32:57 -0700 Subject: [Python-Dev] docs.python.org pointing to Python 3 by default? In-Reply-To: References: <20120518142418.7609fe21@limelight.wooz.org> <5EE84AF9-42EC-4E02-8B3D-6A9B815D680D@gmail.com> Message-ID: 2012/5/20 Nick Coghlan : > On Mon, May 21, 2012 at 3:47 PM, Terry Reedy wrote: >> On 5/21/2012 12:28 AM, Nick Coghlan wrote: >>> >>> On Mon, May 21, 2012 at 11:23 AM, Guido van Rossum >>> ?wrote: >>>> >>>> I suggest that we add a separate (virtual) subdomain, e.g. >>>> docs3.python.org. >> >> >> I was about to post the exact same idea. > > Please, no - proliferating subdomains can quickly get confusing and > hard to remember. It makes sense up to a point (e.g. separating out > the docs from everything else on python.org), but having multiple docs > subdomains is completely unnecessary when we already have directory > based versioning. > > Namespaces are a great idea, let's do more of those :) A subdomain isn't a namespace? -- Regards, Benjamin From tjreedy at udel.edu Mon May 21 08:37:39 2012 From: tjreedy at udel.edu (Terry Reedy) Date: Mon, 21 May 2012 02:37:39 -0400 Subject: [Python-Dev] docs.python.org pointing to Python 3 by default? In-Reply-To: References: <20120518142418.7609fe21@limelight.wooz.org> <5EE84AF9-42EC-4E02-8B3D-6A9B815D680D@gmail.com> Message-ID: On 5/21/2012 2:28 AM, Nick Coghlan wrote: > On Mon, May 21, 2012 at 3:47 PM, Terry Reedy wrote: >> What might be useful is to have the 'Other versions' links on the left >> margin of *every* page, not just the front page, but have them link to the >> corresponding page of the other docs (if there is one, and non-trivial I >> expect). For someone trying to write combined 2/3 code, or merely to learn >> the other version, I would think it useful to be able to jump to the >> corresponding page for the other version. > > That's what the Django widget does. I'm not talking about their search > form - I'm talking about the floating CSS box that appears in the > bottom right of each page and stays there as you scroll down. If you > click on it, the list of available documentation versions appears, > with direct links to the corresponding page in the other versions. > > It has several attractive features: > - always present, even when you scroll down on a long page > - unobtrusive when you don't need it (only displays current version by > default, have to click it to get the list of all versions) > - direct links to the corresponding page in other versions I see it now. Very nice. I hope our doc people can duplicate it. -- Terry Jan Reedy From ncoghlan at gmail.com Mon May 21 09:24:26 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Mon, 21 May 2012 17:24:26 +1000 Subject: [Python-Dev] docs.python.org pointing to Python 3 by default? In-Reply-To: References: <20120518142418.7609fe21@limelight.wooz.org> <5EE84AF9-42EC-4E02-8B3D-6A9B815D680D@gmail.com> Message-ID: On Mon, May 21, 2012 at 4:32 PM, Benjamin Peterson wrote: > 2012/5/20 Nick Coghlan : >> On Mon, May 21, 2012 at 3:47 PM, Terry Reedy wrote: >>> On 5/21/2012 12:28 AM, Nick Coghlan wrote: >>>> >>>> On Mon, May 21, 2012 at 11:23 AM, Guido van Rossum >>>> ?wrote: >>>>> >>>>> I suggest that we add a separate (virtual) subdomain, e.g. >>>>> docs3.python.org. >>> >>> >>> I was about to post the exact same idea. >> >> Please, no - proliferating subdomains can quickly get confusing and >> hard to remember. It makes sense up to a point (e.g. separating out >> the docs from everything else on python.org), but having multiple docs >> subdomains is completely unnecessary when we already have directory >> based versioning. >> >> Namespaces are a great idea, let's do more of those :) > > A subdomain isn't a namespace? A subdomain is only a namespace if you use it as one. The following would be using docs.python.org as a namespace (and is what I think we should move towards): docs.python.org/latest docs.python.org/dev docs.python.org/3.2 docs.python.org/3.1 docs.python.org/2.7 docs.python.org/2.6 etc... The following is *not* using it as a namespace: docs.python.org # 2.7 docs3.python.org # 3.2 Cheers, Nick. -- Nick Coghlan?? |?? ncoghlan at gmail.com?? |?? Brisbane, Australia From hs at ox.cx Mon May 21 09:35:33 2012 From: hs at ox.cx (Hynek Schlawack) Date: Mon, 21 May 2012 09:35:33 +0200 Subject: [Python-Dev] docs.python.org pointing to Python 3 by default? In-Reply-To: References: <20120518142418.7609fe21@limelight.wooz.org> <5EE84AF9-42EC-4E02-8B3D-6A9B815D680D@gmail.com> Message-ID: <4FB9F045.5060603@ox.cx> >>> Namespaces are a great idea, let's do more of those :) >> A subdomain isn't a namespace? > A subdomain is only a namespace if you use it as one. The following > would be using docs.python.org as a namespace (and is what I think we > should move towards): > > docs.python.org/latest > docs.python.org/dev > docs.python.org/3.2 > docs.python.org/3.1 > docs.python.org/2.7 > docs.python.org/2.6 > etc... Bikesheddingly, I?d prefer ?stable? over ?latest?. That would also better convey the point that 3 is ready for production. Otherwise +1; I find the current hybrid structure suboptimal. Also -1 on docs3, that would suggest that it?s still something special and 2 (= docs) is the real deal. Regards, Hynek From turnbull at sk.tsukuba.ac.jp Mon May 21 10:05:39 2012 From: turnbull at sk.tsukuba.ac.jp (Stephen J. Turnbull) Date: Mon, 21 May 2012 17:05:39 +0900 Subject: [Python-Dev] docs.python.org pointing to Python 3 by default? In-Reply-To: References: <20120518142418.7609fe21@limelight.wooz.org> <5EE84AF9-42EC-4E02-8B3D-6A9B815D680D@gmail.com> Message-ID: <87mx52ot64.fsf@uwakimon.sk.tsukuba.ac.jp> Nick Coghlan writes: > > A subdomain isn't a namespace? > > A subdomain is only a namespace if you use it as one. The following > would be using docs.python.org as a namespace (and is what I think we > should move towards): +1 > The following is *not* using it as a namespace: > > docs.python.org # 2.7 > docs3.python.org # 3.2 No, but it *is* using "python.org" as a namespace. I personally think this is ugly and hard to use, but I'm hard-pressed to explain why. :-( I hope you can do better (the above isn't going to convince anybody who currently holds the opposite opinion). From eric at trueblade.com Mon May 21 10:00:32 2012 From: eric at trueblade.com (Eric V. Smith) Date: Mon, 21 May 2012 04:00:32 -0400 Subject: [Python-Dev] PEP 420 - dynamic path computation is missing rationale In-Reply-To: References: Message-ID: <4FB9F620.1060602@trueblade.com> On 5/20/2012 9:33 PM, Guido van Rossum wrote: > Generally speaking the PEP is a beacon if clarity. But I stumbled > about one feature that bothers me in its specification and through its > lack of rationale. This is the section on Dynamic Path Computation: > (http://www.python.org/dev/peps/pep-0420/#dynamic-path-computation). > The specification bothers me because it requires in-place modification > of sys.path. Does this mean sys.path is no longer a plain list? I'm > sure it's going to break things left and right (or at least things > will be violating this requirement left and right); there has never > been a similar requirement (unlike, e.g., sys.modules, which is > relatively well-known for being cached in a C-level global variable). > Worse, this apparently affects __path__ variables of namespace > packages as well, which are now specified as an unspecified read-only > iterable. (I can only guess that there is a connection between these > two features -- the PEP doesn't mention one.) Again, I would be much > happier with just a list. sys.path would still be a plain list. It's the namespace package's __path__ that would be a special object. Every time __path__ is accessed it checks to see if it's parent path has been modified. The parent path for top level modules is sys.path. The __path__ object detects modification by keeping a local copy of the parent, plus a reference to the parent, and compares them. > While I can imagine there being a use case for recomputing the various > paths, I am much less sure that it is worth attempting to specify that > this will happen *automatically* when sys.path is modified in a > certain way. I'd be much happier if these constraints were struck and > the recomputation had to be requested explicitly by calling some new > function in sys. > >>From my POV, this is the only show-stopper for acceptance of PEP 420. > (That is, either a rock-solid rationale should be supplied, or the > constraints should be removed.) I don't have a preference on whether the feature stays or goes, so I'll let PJE give the use case. I've copied him here in case he doesn't read python-dev. Now that I think about it some more, the motivation is probably to ease the migration from setuptools, which does provide this feature. Eric. From p.f.moore at gmail.com Mon May 21 10:27:15 2012 From: p.f.moore at gmail.com (Paul Moore) Date: Mon, 21 May 2012 09:27:15 +0100 Subject: [Python-Dev] docs.python.org pointing to Python 3 by default? In-Reply-To: <4FB9F045.5060603@ox.cx> References: <20120518142418.7609fe21@limelight.wooz.org> <5EE84AF9-42EC-4E02-8B3D-6A9B815D680D@gmail.com> <4FB9F045.5060603@ox.cx> Message-ID: On 21 May 2012 08:35, Hynek Schlawack wrote: > Also -1 on docs3, that would suggest that it?s still something special > and 2 (= docs) is the real deal. Good point. Paul. From steve at pearwood.info Mon May 21 11:00:45 2012 From: steve at pearwood.info (Steven D'Aprano) Date: Mon, 21 May 2012 19:00:45 +1000 Subject: [Python-Dev] docs.python.org pointing to Python 3 by default? In-Reply-To: References: <20120518142418.7609fe21@limelight.wooz.org> <5EE84AF9-42EC-4E02-8B3D-6A9B815D680D@gmail.com> Message-ID: <20120521090044.GA32207@ando> On Mon, May 21, 2012 at 01:47:50AM -0400, Terry Reedy wrote: > What might be useful is to have the 'Other versions' links on the left > margin of *every* page, not just the front page, but have them link to > the corresponding page of the other docs (if there is one, and > non-trivial I expect). For someone trying to write combined 2/3 code, or > merely to learn the other version, I would think it useful to be able to > jump to the corresponding page for the other version. +1 -- Steven From lukasz at langa.pl Mon May 21 11:09:17 2012 From: lukasz at langa.pl (=?iso-8859-2?Q?=A3ukasz_Langa?=) Date: Mon, 21 May 2012 11:09:17 +0200 Subject: [Python-Dev] docs.python.org pointing to Python 3 by default? In-Reply-To: References: <20120518142418.7609fe21@limelight.wooz.org> <5EE84AF9-42EC-4E02-8B3D-6A9B815D680D@gmail.com> Message-ID: <19802197-4CB9-440B-8632-C5FBD0E96EDB@langa.pl> Wiadomo?? napisana przez Nick Coghlan w dniu 21 maj 2012, o godz. 09:24: > The following > would be using docs.python.org as a namespace (and is what I think we > should move towards): > > docs.python.org/latest > docs.python.org/dev > docs.python.org/3.2 > docs.python.org/3.1 > docs.python.org/2.7 > docs.python.org/2.6 Love it. +1 I also like the Django-like "Documentation version" bubble. Makes navigating between versions simple regardless where you got the original link from. Blog posts and search engines often keep links to outdated versions. -- Best regards, ?ukasz Langa Senior Systems Architecture Engineer IT Infrastructure Department Grupa Allegro Sp. z o.o. http://lukasz.langa.pl/ +48 791 080 144 From solipsis at pitrou.net Mon May 21 11:07:15 2012 From: solipsis at pitrou.net (Antoine Pitrou) Date: Mon, 21 May 2012 11:07:15 +0200 Subject: [Python-Dev] docs.python.org pointing to Python 3 by default? References: <20120518142418.7609fe21@limelight.wooz.org> <5EE84AF9-42EC-4E02-8B3D-6A9B815D680D@gmail.com> Message-ID: <20120521110715.0c1eb179@pitrou.net> On Mon, 21 May 2012 14:28:06 +1000 Nick Coghlan wrote: > On Mon, May 21, 2012 at 11:23 AM, Guido van Rossum wrote: > > I suggest that we add a separate (virtual) subdomain, e.g. docs3.python.org. > > Rather than a new subdomain, I'd prefer to see a discreet > "documentation version" CSS widget, similar to that used in the Django > docs (see https://docs.djangoproject.com/en/1.4/) that indicated the > current displayed version and provided quick links to the 2.7 docs, > the stable 3.x docs and the development docs. +1. There will be some subtleties: for example, the 2.x docs for urllib2 will have to link to the 3.x docs for urllib.request. Regards Antoine. From g.brandl at gmx.net Mon May 21 11:41:29 2012 From: g.brandl at gmx.net (Georg Brandl) Date: Mon, 21 May 2012 11:41:29 +0200 Subject: [Python-Dev] docs.python.org pointing to Python 3 by default? In-Reply-To: References: <20120518142418.7609fe21@limelight.wooz.org> <5EE84AF9-42EC-4E02-8B3D-6A9B815D680D@gmail.com> Message-ID: On 05/21/2012 03:23 AM, Guido van Rossum wrote: > I suggest that we add a separate (virtual) subdomain, e.g. docs3.python.org. Here are the time machine keys: this subdomain has existed for a few years now :) Georg From tismer at stackless.com Mon May 21 12:14:09 2012 From: tismer at stackless.com (Christian Tismer) Date: Mon, 21 May 2012 12:14:09 +0200 Subject: [Python-Dev] dir() in inspect.py ? In-Reply-To: References: <4FB2B8D0.1010102@stackless.com> Message-ID: <4FBA1571.8030903@stackless.com> On 21.05.12 07:52, Stefano Taschini wrote: > On 21 May 2012 03:36, Guido van Rossum > wrote: > > [...] > > I have to agree with Christian that inspect.py is full of hacks and > heuristics that would be fine in a module that's part of a user's app > or even in a library, but stand out as brittle or outright unreliable > in a stdlib module. Basically, you can't trust that inspect.py will > work. I've seen various occasions (sorry, can't remember details) > where some function in it outright crashed when given a slightly > unusual (but not unreasonable) argument. It might be a nice project > for a new contributor to improve this situation. > [...] > > > An example that crashes is > > >>> def f(l, (x, y)): > ... sup = max(u*x + v*y for u, v in l) > ... return ((u, v) for u, v in l if u*x + v*y == sup) > >>> inspect.getargspec(f) > > See http://bugs.python.org/issue14611 . I did submit a patch, a few > weeks ago. Nice finding, not related to dir() usage but very useful to know. I looked over test_inspect.py a bit, which is quite big and has many tests, although very little references to reported bugs, and this opcode combination was obviously missing in the test cases. Did not find your patch yet (no time), but hope you added an extra testcase with explicit reference to the bug reported. inspect is very nice and useful in many cases, but sometimes not. Instead of using things like currentframe() I have a look and write my own version because the convenience is too little compared to an extra import and dependency. And although currentframe() is mentioned in test_inspect, I cannot find any direct testcase for it that really calls this function. Admittedly a trivial case, but it is one reason, besides dissed dir() usage, that makes me think of 'suspect' ;-) Instead, I'd love to use inspect as the basis to write reliable, portable code, because its abstraction hides implementation details nicely. I think we have reached when things like sys._getframe() are declared as deprecated. """ This is no longer recommended to use. Use inspect.currentframe instead """ cheers - chris -- Christian Tismer :^) tismerysoft GmbH : Have a break! Take a ride on Python's Karl-Liebknecht-Str. 121 : *Starship* http://starship.python.net/ 14482 Potsdam : PGP key -> http://pgp.uni-mainz.de work +49 173 24 18 776 mobile +49 173 24 18 776 fax n.a. PGP 0x57F3BF04 9064 F4E1 D754 C2FF 1619 305B C09C 5A3B 57F3 BF04 whom do you want to sponsor today? http://www.stackless.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From storchaka at gmail.com Mon May 21 12:26:16 2012 From: storchaka at gmail.com (Serhiy Storchaka) Date: Mon, 21 May 2012 13:26:16 +0300 Subject: [Python-Dev] docs.python.org pointing to Python 3 by default? In-Reply-To: References: <20120518142418.7609fe21@limelight.wooz.org> Message-ID: On 18.05.12 21:30, Brian Curtin wrote: > On May 18, 2012 1:26 PM, "Barry Warsaw" > wrote: > > At what point should we cut over docs.python.org > to point to the Python 3 > > documentation by default? > > Today sounds good to me. Yesterday. ;-) Issue14469. From g.brandl at gmx.net Mon May 21 13:42:28 2012 From: g.brandl at gmx.net (Georg Brandl) Date: Mon, 21 May 2012 13:42:28 +0200 Subject: [Python-Dev] docs.python.org pointing to Python 3 by default? In-Reply-To: <19802197-4CB9-440B-8632-C5FBD0E96EDB@langa.pl> References: <20120518142418.7609fe21@limelight.wooz.org> <5EE84AF9-42EC-4E02-8B3D-6A9B815D680D@gmail.com> <19802197-4CB9-440B-8632-C5FBD0E96EDB@langa.pl> Message-ID: On 05/21/2012 11:09 AM, ?ukasz Langa wrote: > Wiadomo?? napisana przez Nick Coghlan w dniu 21 maj 2012, o godz. 09:24: > >> The following >> would be using docs.python.org as a namespace (and is what I think we >> should move towards): >> >> docs.python.org/latest >> docs.python.org/dev >> docs.python.org/3.2 >> docs.python.org/3.1 >> docs.python.org/2.7 >> docs.python.org/2.6 > > Love it. +1 Apart from the "latest" one, all these URLs already work. Of course, /2.7 is redirected to /, and /3.3 to /dev, etc. If required, the direction of these redirects can be changed, so that e.g. / goes to /2.7. What about: * Canonical: docs.python.org/2/ docs.python.org/3/ for latest versions of 2.x and 3.x docs.python.org/2.7/ etc. for latest minor versions docs.python.org/dev/ for latest dev version. * Redirected: docs.python.org/ --> either /2/ or /3/ or a "disambiguation page" docs.python.org/py3k/ -> /3/ There is also /release/X.Y.Z for individual released versions, which I don't want to change. I also like Martin's idea of offering more links between individual pages, not only the front-pages. Georg From rdmurray at bitdance.com Mon May 21 14:14:51 2012 From: rdmurray at bitdance.com (R. David Murray) Date: Mon, 21 May 2012 08:14:51 -0400 Subject: [Python-Dev] docs.python.org pointing to Python 3 by default? In-Reply-To: References: <20120518142418.7609fe21@limelight.wooz.org> <5EE84AF9-42EC-4E02-8B3D-6A9B815D680D@gmail.com> Message-ID: <20120521121451.C6E922500D5@webabinitio.net> On Mon, 21 May 2012 11:41:29 +0200, Georg Brandl wrote: > On 05/21/2012 03:23 AM, Guido van Rossum wrote: > > I suggest that we add a separate (virtual) subdomain, e.g. docs3.python.org. > > Here are the time machine keys: this subdomain has existed for a few years now :) The fact that none of us knew about it may say something about its effectiveness, though. As long as it does exist, there ought to be a parallel docs2.python.org. --David From ncoghlan at gmail.com Mon May 21 14:18:03 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Mon, 21 May 2012 22:18:03 +1000 Subject: [Python-Dev] docs.python.org pointing to Python 3 by default? In-Reply-To: References: <20120518142418.7609fe21@limelight.wooz.org> <5EE84AF9-42EC-4E02-8B3D-6A9B815D680D@gmail.com> <19802197-4CB9-440B-8632-C5FBD0E96EDB@langa.pl> Message-ID: On Mon, May 21, 2012 at 9:42 PM, Georg Brandl wrote: > Apart from the "latest" one, all these URLs already work. Yeah, I was just extending the scheme I already knew existed :) > Of course, /2.7 is redirected to /, and /3.3 to /dev, etc. > If required, the direction of these redirects can be changed, so > that e.g. / goes to /2.7. > > What about: > > * Canonical: > > docs.python.org/2/ > docs.python.org/3/ > > for latest versions of 2.x and 3.x > > docs.python.org/2.7/ etc. > > for latest minor versions > > docs.python.org/dev/ > > for latest dev version. > > * Redirected: > > docs.python.org/ ?--> ?either /2/ or /3/ or a "disambiguation page" > docs.python.org/py3k/ -> /3/ Works for me. It also means we're covered if Guido ever finds a reason to create Python 4000 :) > I also like Martin's idea of offering more links between individual pages, not > only the front-pages. Definite +1 on that. I personally like Django's version selector (for reasons stated elsewhere in the thread), but anything that makes it easier to hop between versions would be good. Cheers, Nick. -- Nick Coghlan?? |?? ncoghlan at gmail.com?? |?? Brisbane, Australia From guido at python.org Mon May 21 15:44:58 2012 From: guido at python.org (Guido van Rossum) Date: Mon, 21 May 2012 06:44:58 -0700 Subject: [Python-Dev] docs.python.org pointing to Python 3 by default? In-Reply-To: References: <20120518142418.7609fe21@limelight.wooz.org> <5EE84AF9-42EC-4E02-8B3D-6A9B815D680D@gmail.com> Message-ID: On Sun, May 20, 2012 at 10:47 PM, Terry Reedy wrote: > On 5/21/2012 12:28 AM, Nick Coghlan wrote: >> >> On Mon, May 21, 2012 at 11:23 AM, Guido van Rossum >> ?wrote: >>> >>> I suggest that we add a separate (virtual) subdomain, e.g. >>> docs3.python.org. > > > I was about to post the exact same idea. > > docs.python.org/py3k is a bit obscure and buried and makes Python 3.x look a > bit like a second-class citizen on the site. It has previously been our > policy that each new production-ready release takes 'pride of place' at > docs.python.org. Not doing so even with 3.3, *and doing nothing else*, could > be taken as implying that we lack full confidence in the release. > > On the other hand, I am sympathetic to Raymond's and Nick's points that > switching might seem too much 'in their faces' for Py 2 users, especially > those who do not have or use an offline help file as their everyday > reference. I want Python 3 to get equal billing, but not to generate > reaction against it. > > I also suggest docs2.python.org as the permanent home for latest python 2 > docs for as long as it seems sensible (probably a decade at least). Make > that operable now and suggest on the front page of docs.python.org that py2 > users switch before 3.4. > > >> Rather than a new subdomain, I'd prefer to see a discreet >> "documentation version" CSS widget, similar to that used in the Django >> docs (see https://docs.djangoproject.com/en/1.4/) that indicated the >> current displayed version and provided quick links to the 2.7 docs, >> the stable 3.x docs and the development docs. > > > Each page of our docs say "Python 3.3.0a3 Documentation", or the equivalent, > at the top. So we already have that covered. The drop-down version selection > box on the django page seems to only apply to searches. Merely selecting a > different version does not trigger anything. > > What might be useful is to have the 'Other versions' links on the left > margin of *every* page, not just the front page, but have them link to the > corresponding page of the other docs (if there is one, and non-trivial I > expect). For someone trying to write combined 2/3 code, or merely to learn > the other version, I would think it useful to be able to jump to the > corresponding page for the other version. Right. I don't think new subdomains and the improvements that Nick suggests are incompatible. docs2 and docs3 can just redirect to to docs/. It's just that docs2 and docs3 make it easier to type or link to the (super-major) version you care about. (docs2 should be an alias for 2.7; docs3 for the latest released 3.x version.) -- --Guido van Rossum (python.org/~guido) From guido at python.org Mon May 21 15:55:08 2012 From: guido at python.org (Guido van Rossum) Date: Mon, 21 May 2012 06:55:08 -0700 Subject: [Python-Dev] PEP 420 - dynamic path computation is missing rationale In-Reply-To: <4FB9F620.1060602@trueblade.com> References: <4FB9F620.1060602@trueblade.com> Message-ID: On Mon, May 21, 2012 at 1:00 AM, Eric V. Smith wrote: > On 5/20/2012 9:33 PM, Guido van Rossum wrote: >> Generally speaking the PEP is a beacon if clarity. But I stumbled >> about one feature that bothers me in its specification and through its >> lack of rationale. This is the section on Dynamic Path Computation: >> (http://www.python.org/dev/peps/pep-0420/#dynamic-path-computation). >> The specification bothers me because it requires in-place modification >> of sys.path. Does this mean sys.path is no longer a plain list? I'm >> sure it's going to break things left and right (or at least things >> will be violating this requirement left and right); there has never >> been a similar requirement (unlike, e.g., sys.modules, which is >> relatively well-known for being cached in a C-level global variable). >> Worse, this apparently affects __path__ variables of namespace >> packages as well, which are now specified as an unspecified read-only >> iterable. (I can only guess that there is a connection between these >> two features -- the PEP doesn't mention one.) Again, I would be much >> happier with just a list. > > sys.path would still be a plain list. It's the namespace package's > __path__ that would be a special object. Every time __path__ is accessed > it checks to see if it's parent path has been modified. The parent path > for top level modules is sys.path. The __path__ object detects > modification by keeping a local copy of the parent, plus a reference to > the parent, and compares them. Ah, I see. But I disagree that this is a reasonable constraint on sys.path. The magic __path__ object of a toplevel namespace module should know it is a toplevel module, and explicitly refetch sys.path rather than just keeping around a copy. This leaves the magic __path__ objects for namespace modules, which I could live with, as long as their repr was not the same as a list, and assuming a good rationale is given. Although I'd still prefer plain lists here as well; I'd like to be able to manually construct a namespace package and force its directories to be a specific set of directories that I happen to know about, regardless of whether they are related to sys.path or not. And I'd like to know that my setup in that case would not be disturbed by changes to sys.path. >> While I can imagine there being a use case for recomputing the various >> paths, I am much less sure that it is worth attempting to specify that >> this will happen *automatically* when sys.path is modified in a >> certain way. I'd be much happier if these constraints were struck and >> the recomputation had to be requested explicitly by calling some new >> function in sys. >> >>>From my POV, this is the only show-stopper for acceptance of PEP 420. >> (That is, either a rock-solid rationale should be supplied, or the >> constraints should be removed.) > > I don't have a preference on whether the feature stays or goes, so I'll > let PJE give the use case. I've copied him here in case he doesn't read > python-dev. > > Now that I think about it some more, the motivation is probably to ease > the migration from setuptools, which does provide this feature. I'd like to hear more about this from Philip -- is that feature actually widely used? What would a package have to do if the feature didn't exist? I'd really much rather not have this feature, which reeks of too much magic to me. (An area where Philip and I often disagree. :-) -- --Guido van Rossum (python.org/~guido) From Kristofer.Wempa at sig.com Mon May 21 16:49:23 2012 From: Kristofer.Wempa at sig.com (Wempa, Kristofer) Date: Mon, 21 May 2012 14:49:23 +0000 Subject: [Python-Dev] ossaudiodev and linuxaudiodev not built on SLES11SP2 due to change in sys.platform Message-ID: <7178542C389AAE4B9A26C656124E966F12BBB0@xchmbbal505.ds.susq.com> I am currently working on porting our Linux tool chains to SuSe Enterprise Linux 11 service pack 2 (SLES11SP2). During this process, we noticed an issue due to a change in the sys.platform string. Previously, the string was "linux2" and it has now changed to "linux3". This seems to be due to a change in the kernel version. This causes the ossaudiodev and linuxaudiodev modules to be omitted from the build. I found the relevant code in setup.py: if platform == 'linux2': # Linux-specific modules exts.append( Extension('linuxaudiodev', ['linuxaudiodev.c']) ) else: missing.append('linuxaudiodev') if platform in ('linux2', 'freebsd4', 'freebsd5', 'freebsd6', 'freebsd7', 'freebsd8'): exts.append( Extension('ossaudiodev', ['ossaudiodev.c']) ) else: missing.append('ossaudiodev') Since neither of these account for "linux3", they are both omitted from the build. I can simply modify the code to include "linux3" so that these modules are built on the new platform. However, I wanted to check and see whether they are specifically being omitted for a reason or if the setup file just wasn't ported and tested against the new platform. Any help would be appreciated. Thanks. Kris ________________________________ IMPORTANT: The information contained in this email and/or its attachments is confidential. If you are not the intended recipient, please notify the sender immediately by reply and immediately delete this message and all its attachments. Any review, use, reproduction, disclosure or dissemination of this message or any attachment by an unintended recipient is strictly prohibited. Neither this message nor any attachment is intended as or should be construed as an offer, solicitation or recommendation to buy or sell any security or other financial instrument. Neither the sender, his or her employer nor any of their respective affiliates makes any warranties as to the completeness or accuracy of any of the information contained herein or that this message or any of its attachments is free of viruses. -------------- next part -------------- An HTML attachment was scrubbed... URL: From Kristofer.Wempa at sig.com Mon May 21 16:53:06 2012 From: Kristofer.Wempa at sig.com (Wempa, Kristofer) Date: Mon, 21 May 2012 14:53:06 +0000 Subject: [Python-Dev] ossaudiodev and linuxaudiodev not built on SLES11SP2 due to change in sys.platform In-Reply-To: <7178542C389AAE4B9A26C656124E966F12BBB0@xchmbbal505.ds.susq.com> References: <7178542C389AAE4B9A26C656124E966F12BBB0@xchmbbal505.ds.susq.com> Message-ID: <7178542C389AAE4B9A26C656124E966F12BBE9@xchmbbal505.ds.susq.com> Sorry, I forgot to mention that this is for Python 2.6.8 and 2.7.3. From: python-dev-bounces+kristofer.wempa=sig.com at python.org [mailto:python-dev-bounces+kristofer.wempa=sig.com at python.org] On Behalf Of Wempa, Kristofer Sent: Monday, May 21, 2012 10:49 AM To: python-dev at python.org Subject: [Python-Dev] ossaudiodev and linuxaudiodev not built on SLES11SP2 due to change in sys.platform I am currently working on porting our Linux tool chains to SuSe Enterprise Linux 11 service pack 2 (SLES11SP2). During this process, we noticed an issue due to a change in the sys.platform string. Previously, the string was "linux2" and it has now changed to "linux3". This seems to be due to a change in the kernel version. This causes the ossaudiodev and linuxaudiodev modules to be omitted from the build. I found the relevant code in setup.py: if platform == 'linux2': # Linux-specific modules exts.append( Extension('linuxaudiodev', ['linuxaudiodev.c']) ) else: missing.append('linuxaudiodev') if platform in ('linux2', 'freebsd4', 'freebsd5', 'freebsd6', 'freebsd7', 'freebsd8'): exts.append( Extension('ossaudiodev', ['ossaudiodev.c']) ) else: missing.append('ossaudiodev') Since neither of these account for "linux3", they are both omitted from the build. I can simply modify the code to include "linux3" so that these modules are built on the new platform. However, I wanted to check and see whether they are specifically being omitted for a reason or if the setup file just wasn't ported and tested against the new platform. Any help would be appreciated. Thanks. Kris ________________________________ IMPORTANT: The information contained in this email and/or its attachments is confidential. If you are not the intended recipient, please notify the sender immediately by reply and immediately delete this message and all its attachments. Any review, use, reproduction, disclosure or dissemination of this message or any attachment by an unintended recipient is strictly prohibited. Neither this message nor any attachment is intended as or should be construed as an offer, solicitation or recommendation to buy or sell any security or other financial instrument. Neither the sender, his or her employer nor any of their respective affiliates makes any warranties as to the completeness or accuracy of any of the information contained herein or that this message or any of its attachments is free of viruses. ________________________________ IMPORTANT: The information contained in this email and/or its attachments is confidential. If you are not the intended recipient, please notify the sender immediately by reply and immediately delete this message and all its attachments. Any review, use, reproduction, disclosure or dissemination of this message or any attachment by an unintended recipient is strictly prohibited. Neither this message nor any attachment is intended as or should be construed as an offer, solicitation or recommendation to buy or sell any security or other financial instrument. Neither the sender, his or her employer nor any of their respective affiliates makes any warranties as to the completeness or accuracy of any of the information contained herein or that this message or any of its attachments is free of viruses. -------------- next part -------------- An HTML attachment was scrubbed... URL: From lists at cheimes.de Mon May 21 17:16:34 2012 From: lists at cheimes.de (Christian Heimes) Date: Mon, 21 May 2012 17:16:34 +0200 Subject: [Python-Dev] ossaudiodev and linuxaudiodev not built on SLES11SP2 due to change in sys.platform In-Reply-To: <7178542C389AAE4B9A26C656124E966F12BBE9@xchmbbal505.ds.susq.com> References: <7178542C389AAE4B9A26C656124E966F12BBB0@xchmbbal505.ds.susq.com> <7178542C389AAE4B9A26C656124E966F12BBE9@xchmbbal505.ds.susq.com> Message-ID: <4FBA5C52.2010203@cheimes.de> Am 21.05.2012 16:53, schrieb Wempa, Kristofer: > Sorry, I forgot to mention that this is for Python 2.6.8 and 2.7.3. You must be mistaken, Python 2.7.3 has a fix for the issue. configure checks for linux*: linux*) MACHDEP="linux2";; On the other hand Python 2.6.8 has no such fix and thus sets linux3 on Kernel 3.0 and newer. You have fix it yourself as described in my blog http://lipyrary.blogspot.de/2011/09/python-and-linux-kernel-30-sysplatform.html Christian From dmalcolm at redhat.com Mon May 21 17:19:56 2012 From: dmalcolm at redhat.com (David Malcolm) Date: Mon, 21 May 2012 11:19:56 -0400 Subject: [Python-Dev] docs.python.org pointing to Python 3 by default? In-Reply-To: <20120518142418.7609fe21@limelight.wooz.org> References: <20120518142418.7609fe21@limelight.wooz.org> Message-ID: <1337613597.1758.76.camel@surprise> On Fri, 2012-05-18 at 14:24 -0400, Barry Warsaw wrote: > At what point should we cut over docs.python.org to point to the Python 3 > documentation by default? Wouldn't this be an easy bit to flip in order to > promote Python 3 more better? If we do, perhaps we should revisit http://bugs.python.org/issue10446 http://hg.python.org/cpython/rev/b41404a3f7d4/ changed pydoc in the py3k branch to direct people to http://docs.python.org/X.Y/library/ rather than to http://docs.python.org/library/ This was applied to the 3.2 and 3.1 branches, but hasn't been backported to any of the 2.* - so if docs.python.org starts defaulting to python 3, it makes sense to backport that change to 2.* Hope this is helpful Dave From g.brandl at gmx.net Mon May 21 17:50:17 2012 From: g.brandl at gmx.net (Georg Brandl) Date: Mon, 21 May 2012 17:50:17 +0200 Subject: [Python-Dev] docs.python.org pointing to Python 3 by default? In-Reply-To: <20120521121451.C6E922500D5@webabinitio.net> References: <20120518142418.7609fe21@limelight.wooz.org> <5EE84AF9-42EC-4E02-8B3D-6A9B815D680D@gmail.com> <20120521121451.C6E922500D5@webabinitio.net> Message-ID: On 05/21/2012 02:14 PM, R. David Murray wrote: > On Mon, 21 May 2012 11:41:29 +0200, Georg Brandl wrote: >> On 05/21/2012 03:23 AM, Guido van Rossum wrote: >> > I suggest that we add a separate (virtual) subdomain, e.g. docs3.python.org. >> >> Here are the time machine keys: this subdomain has existed for a few years now :) > > The fact that none of us knew about it may say something about its > effectiveness, though. Sure. I was never fond of it, but there was a discussion probably similar to this one, and it was agreed to add that subdomain. Georg From barry at python.org Mon May 21 17:51:57 2012 From: barry at python.org (Barry Warsaw) Date: Mon, 21 May 2012 11:51:57 -0400 Subject: [Python-Dev] docs.python.org pointing to Python 3 by default? In-Reply-To: <5EE84AF9-42EC-4E02-8B3D-6A9B815D680D@gmail.com> References: <20120518142418.7609fe21@limelight.wooz.org> <5EE84AF9-42EC-4E02-8B3D-6A9B815D680D@gmail.com> Message-ID: <20120521115157.01d7e7b8@resist.wooz.org> On May 20, 2012, at 04:27 PM, Raymond Hettinger wrote: >When there is more uptake of Python 3, it would be reasonable move. How do we measure this, and what's the milestone for enough uptake to make the switch? Cheers, -Barry -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 836 bytes Desc: not available URL: From tjreedy at udel.edu Mon May 21 17:56:02 2012 From: tjreedy at udel.edu (Terry Reedy) Date: Mon, 21 May 2012 11:56:02 -0400 Subject: [Python-Dev] docs.python.org pointing to Python 3 by default? In-Reply-To: <4FB9F045.5060603@ox.cx> References: <20120518142418.7609fe21@limelight.wooz.org> <5EE84AF9-42EC-4E02-8B3D-6A9B815D680D@gmail.com> <4FB9F045.5060603@ox.cx> Message-ID: On 5/21/2012 3:35 AM, Hynek Schlawack wrote: > Also -1 on docs3, that would suggest that it?s still something special > and 2 (= docs) is the real deal. Guido and I are proposing docs2 and docs3 each pointing to the latest docs for each series. That puts them on equal status. docs.python.org, besides being a namespace for specific version docs (/x.y, minus Nick's /latest) would be transitioned away from being a synonym for docs2. It could become a *neutral* index page listing docs2 and docs3 for the 'latest' production version of each series and then each subdirectory. -- Terry Jan Reedy From tjreedy at udel.edu Mon May 21 18:03:31 2012 From: tjreedy at udel.edu (Terry Reedy) Date: Mon, 21 May 2012 12:03:31 -0400 Subject: [Python-Dev] docs.python.org pointing to Python 3 by default? In-Reply-To: References: <20120518142418.7609fe21@limelight.wooz.org> <5EE84AF9-42EC-4E02-8B3D-6A9B815D680D@gmail.com> Message-ID: On 5/21/2012 3:24 AM, Nick Coghlan wrote: > docs.python.org/latest > docs.python.org/dev > docs.python.org/3.2 > docs.python.org/3.1 > docs.python.org/2.7 > docs.python.org/2.6 > etc... This looks great except for 'latest', which is ambiguous and awkward. Like Guido, I would have docs2 and docs3 link to the latest of each series. This gives both series equal billing. docs itself could then become a *neutral* index page. In retrospect, I wish we had done this a year ago. This design would continue to work if and when we need docs4.python.org. -- Terry Jan Reedy From solipsis at pitrou.net Mon May 21 18:07:36 2012 From: solipsis at pitrou.net (Antoine Pitrou) Date: Mon, 21 May 2012 18:07:36 +0200 Subject: [Python-Dev] docs.python.org pointing to Python 3 by default? References: <20120518142418.7609fe21@limelight.wooz.org> <5EE84AF9-42EC-4E02-8B3D-6A9B815D680D@gmail.com> Message-ID: <20120521180736.6ec70661@pitrou.net> On Mon, 21 May 2012 12:03:31 -0400 Terry Reedy wrote: > On 5/21/2012 3:24 AM, Nick Coghlan wrote: > > > docs.python.org/latest > > docs.python.org/dev > > docs.python.org/3.2 > > docs.python.org/3.1 > > docs.python.org/2.7 > > docs.python.org/2.6 > > etc... > > This looks great except for 'latest', which is ambiguous and awkward. > > Like Guido, I would have docs2 and docs3 link to the latest of each > series. This gives both series equal billing. docs itself could then > become a *neutral* index page. In retrospect, I wish we had done this a > year ago. I don't like docs2/docs3. First, they are clumsy to type and look awkward. Second, it's not the right level of segregation; if you wanted separate domains you'd really want docs.python2.org and docs.python3.org. So, in the end, I think the current scheme is ok and we only need to add a "/stable" pointing to latest 3.x. Regards Antoine. From barry at python.org Mon May 21 18:14:54 2012 From: barry at python.org (Barry Warsaw) Date: Mon, 21 May 2012 12:14:54 -0400 Subject: [Python-Dev] docs.python.org pointing to Python 3 by default? In-Reply-To: References: <20120518142418.7609fe21@limelight.wooz.org> <5EE84AF9-42EC-4E02-8B3D-6A9B815D680D@gmail.com> Message-ID: <20120521121454.6ac6882c@resist.wooz.org> On May 21, 2012, at 02:28 PM, Nick Coghlan wrote: >Rather than a new subdomain, I'd prefer to see a discreet >"documentation version" CSS widget, similar to that used in the Django >docs (see https://docs.djangoproject.com/en/1.4/) that indicated the >current displayed version and provided quick links to the 2.7 docs, >the stable 3.x docs and the development docs. I'd be all for this, as long as I can still write chrome/firefox/genericbrowser shortcuts to give me the latest Python 2 or Python 3 library page. >I know plenty of people are keen to push the migration to Python 3 >forward as quickly as possible, but this is *definitely* a case of >"make haste slowly". We need to tread carefully or we're going to give >existing users an even stronger feeling that we simply don't care >about the impact the Python 3 migration is having (or is going to >have) on them. *We* know that we care, but there's still plenty of >folks out there that don't realise how deeply rooted the problems are >in Python 2's text model and why the Python 3 backwards compatibility >break was needed to fix them. They don't get to see the debates that >happen on this list - they only get to see the end results of our >decisions. Switching the default docs.python.org version to the 3.x >series is a move that needs to be advertised *well* in advance as a >courtesy to our users, so that those that need to specifically >reference 2.7 have plenty of time to update their links. Right. I'm just keen on continuing to make progress. I really do think we're not far from a tipping point on Python 3, and I want to keep nudging us over the edge. Roller coasters are scary *and* fun. :) >Back when Python 3 was first released, we set a target for the >migration period of around 5 years. Since the io performance problems >in 3.0 meant that 3.1 was the first real production ready release of >3.x, that makes June 2014 the target date for when we would like the >following things to be true: If history is repeated, my guess is that will put us a few months into Python 3.5 development. I think Python 3.3 is shaping up to be a fantastic release, and once it's out we should start thinking about what we want to accomplish in Python 3.4 to achieve the goal of Python 3 dominance. >- all major third party libraries and frameworks support Python 3 (or >there are Python 3 forks or functional replacements) There's already great ongoing work on this. It could use more help of course. I've mentioned Ubuntu's efforts here before, but this is really more about the greater Python universe, and getting Python 3 on the radar of more and more projects. >- Python 3 is the default choice for most new Python projects When I talk to folks starting new Python projects, I always push for it to begin in Python 3. Of course, the state of their dependencies is always a consideration, but this is becoming more feasible for more projects every day. >- most Python instruction uses Python 3, with Python 2 differences >described for those that need to work with legacy code >- (less likely, but possible) user-focused distros such as Ubuntu and >Fedora have changed their "python" symlink to refer to Python 3 I doubt Debian/Ubuntu will ever switch /usr/bin/python though PEP 394 will probably have the final word. >That's still 2 years away, and should line up fairly nicely with the >release of Python 3.4 (assuming the current release cadence is >maintained for at least one more version). Key web and networking >frameworks such as Django [1], Pyramid [2] and Twisted [3] should also >be well supported on 3.x by that point. Rough estimate, assuming 18 month cadences and an on-time release of 3.3, puts 3.4 final in February of 2014. >>> final33 = datetime.datetime(day=25, month=8, year=2012) >>> final34 = final33 + datetime.timedelta(days=18 * 30) >>> final34.isoformat() '2014-02-16T00:00:00' Cheers, -Barry From rdmurray at bitdance.com Mon May 21 18:15:38 2012 From: rdmurray at bitdance.com (R. David Murray) Date: Mon, 21 May 2012 12:15:38 -0400 Subject: [Python-Dev] docs.python.org pointing to Python 3 by default? In-Reply-To: <1337613597.1758.76.camel@surprise> References: <20120518142418.7609fe21@limelight.wooz.org> <1337613597.1758.76.camel@surprise> Message-ID: <20120521161539.46C352500E0@webabinitio.net> On Mon, 21 May 2012 11:19:56 -0400, David Malcolm wrote: > On Fri, 2012-05-18 at 14:24 -0400, Barry Warsaw wrote: > > At what point should we cut over docs.python.org to point to the Python 3 > > documentation by default? Wouldn't this be an easy bit to flip in order to > > promote Python 3 more better? > > If we do, perhaps we should revisit http://bugs.python.org/issue10446 > > http://hg.python.org/cpython/rev/b41404a3f7d4/ changed pydoc in the py3k > branch to direct people to http://docs.python.org/X.Y/library/ rather > than to http://docs.python.org/library/ > > This was applied to the 3.2 and 3.1 branches, but hasn't been backported > to any of the 2.* - so if docs.python.org starts defaulting to python 3, > it makes sense to backport that change to 2.* Note that I did apply the fix for 14434 to 2.7. So yes, I think 10446 should be applied to 2.7 as well. --David From tjreedy at udel.edu Mon May 21 18:18:16 2012 From: tjreedy at udel.edu (Terry Reedy) Date: Mon, 21 May 2012 12:18:16 -0400 Subject: [Python-Dev] docs.python.org pointing to Python 3 by default? In-Reply-To: References: <20120518142418.7609fe21@limelight.wooz.org> <5EE84AF9-42EC-4E02-8B3D-6A9B815D680D@gmail.com> <19802197-4CB9-440B-8632-C5FBD0E96EDB@langa.pl> Message-ID: On 5/21/2012 7:42 AM, Georg Brandl wrote: > On 05/21/2012 11:09 AM, ?ukasz Langa wrote: >> Wiadomo?? napisana przez Nick Coghlan w dniu 21 maj 2012, o godz. 09:24: >> >>> The following >>> would be using docs.python.org as a namespace (and is what I think we >>> should move towards): >>> >>> docs.python.org/latest >>> docs.python.org/dev >>> docs.python.org/3.2 >>> docs.python.org/3.1 >>> docs.python.org/2.7 >>> docs.python.org/2.6 >> >> Love it. +1 > > Apart from the "latest" one, all these URLs already work. > > Of course, /2.7 is redirected to /, and /3.3 to /dev, etc. > If required, the direction of these redirects can be changed, so > that e.g. / goes to /2.7. > > What about: > > * Canonical: > > docs.python.org/2/ > docs.python.org/3/ If you prefer these to docs2, docs3, OK with me. Whatever we do, we should encourage book/blog writers to use the canonical 'latest' links that will not go out of date. So there should definitely be one for each, with the same format. The exact format is less important. > for latest versions of 2.x and 3.x > > docs.python.org/2.7/ etc. > > for latest minor versions > > docs.python.org/dev/ > > for latest dev version. > > * Redirected: > > docs.python.org/ --> either /2/ or /3/ or a "disambiguation page" While I am a strong partisan of Py 3, I do not want Py 2 users to feel 'pushed', so I vote for a neutral index or 'disambiguation' page. What I would do is set up the canonical pages now. Next, add a notice to the top of docs.python.org that it will become a neutral index page with the release of 3.3, so 'please change bookmarks to the new, permanent page for Py 2', whatever it is. > docs.python.org/py3k/ -> /3/ > There is also /release/X.Y.Z for individual released versions, which > I don't want to change. I would leave those alone too. -- Terry Jan Reedy From hs at ox.cx Mon May 21 18:25:06 2012 From: hs at ox.cx (Hynek Schlawack) Date: Mon, 21 May 2012 18:25:06 +0200 Subject: [Python-Dev] docs.python.org pointing to Python 3 by default? In-Reply-To: References: <20120518142418.7609fe21@limelight.wooz.org> <5EE84AF9-42EC-4E02-8B3D-6A9B815D680D@gmail.com> <4FB9F045.5060603@ox.cx> Message-ID: <4FBA6C62.9070106@ox.cx> >> Also -1 on docs3, that would suggest that it?s still something special >> and 2 (= docs) is the real deal. > Guido and I are proposing docs2 and docs3 each pointing to the latest > docs for each series. That puts them on equal status. > docs.python.org, besides being a namespace for specific version docs > (/x.y, minus Nick's /latest) would be transitioned away from being a > synonym for docs2. It could become a *neutral* index page listing docs2 > and docs3 for the 'latest' production version of each series and then > each subdirectory. I find docs2/3 ugly as it reminds me of load balancing (like www1.python.org) and it also doesn?t really make sense to me. I have no problem to have these DNS records and redirect them to docs.python.org/2 or /3 but I wouldn?t like them to be the canonical URIs. From g.brandl at gmx.net Mon May 21 19:45:14 2012 From: g.brandl at gmx.net (Georg Brandl) Date: Mon, 21 May 2012 19:45:14 +0200 Subject: [Python-Dev] cpython: Close #13585: add contextlib.ExitStack to replace the ill-fated In-Reply-To: References: Message-ID: Am 21.05.2012 14:54, schrieb nick.coghlan: > http://hg.python.org/cpython/rev/8ef66c73b1e1 > changeset: 77095:8ef66c73b1e1 > user: Nick Coghlan > date: Mon May 21 22:54:43 2012 +1000 > summary: > Close #13585: add contextlib.ExitStack to replace the ill-fated contextlib.nested API > > files: > Doc/library/contextlib.rst | 279 +++++++++++++++++++++++- > Doc/whatsnew/3.3.rst | 15 + > Lib/contextlib.py | 126 ++++++++++- > Lib/test/test_contextlib.py | 123 ++++++++++ > Misc/NEWS | 2 + > 5 files changed, 539 insertions(+), 6 deletions(-) > > > diff --git a/Doc/library/contextlib.rst b/Doc/library/contextlib.rst > --- a/Doc/library/contextlib.rst > +++ b/Doc/library/contextlib.rst > @@ -12,8 +12,11 @@ > statement. For more information see also :ref:`typecontextmanager` and > :ref:`context-managers`. > > -Functions provided: > > +Utilities > +--------- > + > +Functions and classes provided: > > .. decorator:: contextmanager > > @@ -168,6 +171,280 @@ > .. versionadded:: 3.2 > > > +.. class:: ExitStack() > + > + A context manager that is designed to make it easy to programmatically > + combine other context managers and cleanup functions, especially those > + that are optional or otherwise driven by input data. > + > + For example, a set of files may easily be handled in a single with > + statement as follows:: > + > + with ExitStack() as stack: > + files = [stack.enter_context(open(fname)) for fname in filenames] > + # All opened files will automatically be closed at the end of > + # the with statement, even if attempts to open files later > + # in the list throw an exception > + > + Each instance maintains a stack of registered callbacks that are called in > + reverse order when the instance is closed (either explicitly or implicitly > + at the end of a ``with`` statement). Note that callbacks are *not* invoked > + implicitly when the context stack instance is garbage collected. > + > + This stack model is used so that context managers that acquire their > + resources in their ``__init__`` method (such as file objects) can be > + handled correctly. > + > + Since registered callbacks are invoked in the reverse order of > + registration, this ends up behaving as if multiple nested ``with`` > + statements had been used with the registered set of callbacks. This even > + extends to exception handling - if an inner callback suppresses or replaces > + an exception, then outer callbacks will be passed arguments based on that > + updated state. > + > + This is a relatively low level API that takes care of the details of > + correctly unwinding the stack of exit callbacks. It provides a suitable > + foundation for higher level context managers that manipulate the exit > + stack in application specific ways. > + > + .. method:: enter_context(cm) > + > + Enters a new context manager and adds its :meth:`__exit__` method to > + the callback stack. The return value is the result of the context > + manager's own :meth:`__enter__` method. > + > + These context managers may suppress exceptions just as they normally > + would if used directly as part of a ``with`` statement. > + > + .. method:: push(exit) > + > + Adds a context manager's :meth:`__exit__` method to the callback stack. > + > + As ``__enter__`` is *not* invoked, this method can be used to cover > + part of an :meth:`__enter__` implementation with a context manager's own > + :meth:`__exit__` method. > + > + If passed an object that is not a context manager, this method assumes > + it is a callback with the same signature as a context manager's > + :meth:`__exit__` method and adds it directly to the callback stack. > + > + By returning true values, these callbacks can suppress exceptions the > + same way context manager :meth:`__exit__` methods can. > + > + The passed in object is returned from the function, allowing this > + method to be used is a function decorator. > + > + .. method:: callback(callback, *args, **kwds) > + > + Accepts an arbitrary callback function and arguments and adds it to > + the callback stack. > + > + Unlike the other methods, callbacks added this way cannot suppress > + exceptions (as they are never passed the exception details). > + > + The passed in callback is returned from the function, allowing this > + method to be used is a function decorator. > + > + .. method:: pop_all() > + > + Transfers the callback stack to a fresh :class:`ExitStack` instance > + and returns it. No callbacks are invoked by this operation - instead, > + they will now be invoked when the new stack is closed (either > + explicitly or implicitly). > + > + For example, a group of files can be opened as an "all or nothing" > + operation as follows:: > + > + with ExitStack() as stack: > + files = [stack.enter_context(open(fname)) for fname in filenames] > + close_files = stack.pop_all().close > + # If opening any file fails, all previously opened files will be > + # closed automatically. If all files are opened successfully, > + # they will remain open even after the with statement ends. > + # close_files() can then be invoked explicitly to close them all > + > + .. method:: close() > + > + Immediately unwinds the callback stack, invoking callbacks in the > + reverse order of registration. For any context managers and exit > + callbacks registered, the arguments passed in will indicate that no > + exception occurred. > + > + .. versionadded:: 3.3 I'd prefer this versionadded a little more towards the top of the class documentation, e.g. before the first method. Otherwise it might get overlooked, or taken as belonging to the close() method (the indentation suggests otherwise, but that might not be enough cue for a quick read). Georg From g.brandl at gmx.net Mon May 21 19:46:59 2012 From: g.brandl at gmx.net (Georg Brandl) Date: Mon, 21 May 2012 19:46:59 +0200 Subject: [Python-Dev] cpython (3.2): #14766: Add correct algorithm for when a 'time' object is naive. In-Reply-To: References: Message-ID: Am 15.05.2012 04:33, schrieb r.david.murray: > diff --git a/Doc/library/datetime.rst b/Doc/library/datetime.rst > --- a/Doc/library/datetime.rst > +++ b/Doc/library/datetime.rst > @@ -15,16 +15,23 @@ > formatting and manipulation. For related > functionality, see also the :mod:`time` and :mod:`calendar` modules. > > -There are two kinds of date and time objects: "naive" and "aware". This > -distinction refers to whether the object has any notion of time zone, daylight > -saving time, or other kind of algorithmic or political time adjustment. Whether > -a naive :class:`.datetime` object represents Coordinated Universal Time (UTC), > +There are two kinds of date and time objects: "naive" and "aware". > + > +An aware object has sufficient knowledge of applicable algorithmic and > +political time adjustments, such as time zone and daylight saving time > +information, to locate itself relative to other aware objects. An aware object > +is used to represent a specific moment in time that is not open to > +interpretation [#]_. > @@ -1806,3 +1816,7 @@ > When the ``%z`` directive is provided to the :meth:`strptime` method, an > aware :class:`.datetime` object will be produced. The ``tzinfo`` of the > result will be set to a :class:`timezone` instance. > + > +.. rubric:: Footnotes > + > +.. [#] If, that is, we ignore the effects of Relativity I like that :) Georg From merwok at netwok.org Mon May 21 19:58:44 2012 From: merwok at netwok.org (=?UTF-8?B?w4lyaWMgQXJhdWpv?=) Date: Mon, 21 May 2012 13:58:44 -0400 Subject: [Python-Dev] docs.python.org pointing to Python 3 by default? In-Reply-To: References: <20120518142418.7609fe21@limelight.wooz.org> <5EE84AF9-42EC-4E02-8B3D-6A9B815D680D@gmail.com> <19802197-4CB9-440B-8632-C5FBD0E96EDB@langa.pl> Message-ID: <4FBA8254.2090605@netwok.org> Le 21/05/2012 07:42, Georg Brandl a ?crit : > What about: > > * Canonical: > > docs.python.org/2/ > docs.python.org/3/ > > for latest versions of 2.x and 3.x > > docs.python.org/2.7/ etc. > > for latest minor versions > > docs.python.org/dev/ > > for latest dev version. +1. I?d be +1 to adding /stable but both 2.7 and 3.2 are stable at this time. > * Redirected: > > docs.python.org/ --> either /2/ or /3/ or a "disambiguation page" Either sounds good, I?m in favor of redirecting to /2 for a few years still to preserve existing links and avoid the need to click on each page. > docs.python.org/py3k/ -> /3/ +1, the py3k name is not obvious for everyone. > There is also /release/X.Y.Z for individual released versions, which > I don't want to change. The URIs should not change, but it seems a bit bad to me that for example the 2.7.1 docs don?t link to the latest 2.7 page and mention 2.6 as stable version > I also like Martin's idea of offering more links between individual > pages, not only the front-pages. +1 On a related note, we may want to find a way to make the version more prominent in the pages; I?ve seen beginners install Python 3 and use the Python 2 docs and fail at the first print 'Hello, world!' example. That?s why I support always having the version numbers in the URIs. Cheers From tjreedy at udel.edu Mon May 21 20:04:26 2012 From: tjreedy at udel.edu (Terry Reedy) Date: Mon, 21 May 2012 14:04:26 -0400 Subject: [Python-Dev] docs.python.org pointing to Python 3 by default? In-Reply-To: References: <20120518142418.7609fe21@limelight.wooz.org> <5EE84AF9-42EC-4E02-8B3D-6A9B815D680D@gmail.com> <20120521121451.C6E922500D5@webabinitio.net> Message-ID: On 5/21/2012 11:50 AM, Georg Brandl wrote: > On 05/21/2012 02:14 PM, R. David Murray wrote: >> On Mon, 21 May 2012 11:41:29 +0200, Georg Brandl wrote: >>> On 05/21/2012 03:23 AM, Guido van Rossum wrote: >>>> I suggest that we add a separate (virtual) subdomain, e.g. docs3.python.org. >>> Here are the time machine keys: this subdomain has existed for a few years now :) >> >> The fact that none of us knew about it may say something about its >> effectiveness, though. > > Sure. I was never fond of it, but there was a discussion probably similar > to this one, and it was agreed to add that subdomain. Since there is no link to it from docs.python.org, of course it it difficult to find 8-). Such a link is part of the otherwise redundant proposal. -- Terry Jan Reedy From pje at telecommunity.com Mon May 21 20:08:16 2012 From: pje at telecommunity.com (PJ Eby) Date: Mon, 21 May 2012 14:08:16 -0400 Subject: [Python-Dev] PEP 420 - dynamic path computation is missing rationale In-Reply-To: References: <4FB9F620.1060602@trueblade.com> Message-ID: On Mon, May 21, 2012 at 9:55 AM, Guido van Rossum wrote: > Ah, I see. But I disagree that this is a reasonable constraint on > sys.path. The magic __path__ object of a toplevel namespace module > should know it is a toplevel module, and explicitly refetch sys.path > rather than just keeping around a copy. > That's fine by me - the class could actually be defined to take a module name and attribute (e.g. 'sys', 'path' or 'foo', '__path__'), and then there'd be no need to special case anything: it would behave exactly the same way for subpackages and top-level packages. > This leaves the magic __path__ objects for namespace modules, which I > could live with, as long as their repr was not the same as a list, and > assuming a good rationale is given. Although I'd still prefer plain > lists here as well; I'd like to be able to manually construct a > namespace package and force its directories to be a specific set of > directories that I happen to know about, regardless of whether they > are related to sys.path or not. And I'd like to know that my setup in > that case would not be disturbed by changes to sys.path. > To do that, you just assign to __path__, the same as now, ala __path__ = pkgutil.extend_path(). The auto-updating is in the initially-assigned __path__ object, not the module object or some sort of generalized magic. I'd like to hear more about this from Philip -- is that feature > actually widely used? Well, it's built into setuptools, so yes. ;-) It gets used any time a dynamically specified dependency is used that might contain a namespace package. This means, for example, that every setup script out there using "setup.py test", every project using certain paste.deploy features... it's really difficult to spell out the scope of things that are using this, in the context of setuptools and distribute, because there are an immense number of ways to indirectly rely on it. This doesn't mean that the feature can't continue to be implemented inside setuptools' dynamic dependency system, but the code to do it in setuptools is MUCH more complicated than the PEP 420 code, and doesn't work if you manually add something to sys.path without asking setuptools to do it. It's also somewhat timing-sensitive, depending on when and whether you import 'site' and pkg_resources, and whether you are mixing eggs and non-eggs in your namespace packages. In short, the implementation is a huge mess that the PEP 420 approach would vastly simplify. But... that wasn't the original reason why I proposed it. The original reason was simply that it makes namespace packages act more like the equivalents do in other languages. While being able to override __path__ can be considered a feature of Python, its being static by default is NOT a feature, in the same way that *requiring* an __init__.py is not really a feature. The principle of least surprise says (at least IMO) that if you add a directory to sys.path, you should be able to import stuff from it. That whether it works depends on whether or not you already imported part of a namespace package earlier is both surprising and confusing. (More on this below.) > What would a package have to do if the feature didn't exist? Continue to depend on setuptools to do it for them, or use some hypothetical update API... but that's not really the right question. ;-) The right question is, what happens to package *users* if the feature didn't exist? And the answer to that question is, "you must call this hypothetical update API *every time* you change sys.path, because otherwise your imports might break, depending on whether or not some other package imported something from a namespace before you changed sys.path". And of course, you also need to make sure that any third-party code you use does this too, if it adds something to sys.path for you. And if you're writing cross-Python-version code, you need to check to make sure whether the API is actually available. And if you're someone helping Python newbies, you need to add this to your list of debugging questions for import-related problems. And remember: if you forget to do this, it might not break now. It'll break later, when you add that other plugin or update that random module that dynamically decides to import something that just happens to be in a namespace package, so be prepared for it to break your application in the field, when an end-user is using it with a collection of plugins that you haven't tested together, or in the same import sequence... The people using setuptools won't have these problems, but *new* Python users will, as people begin using a PEP 420 that lacks this feature. The key scope question, I think, is: "How often do programs change sys.path at runtime, and what have they imported up to that point?" (Because for the other part of the scope, I think it's a fairly safe bet that namespace packages are going to become even *more* popular than they are now, once PEP 420 is in place.) But the key API/usability question is: "What's the One Obvious Way to add/change what's importable?" And I believe the answer to that question is, "change sys.path", not "change sys.path, and then import some other module to call another API to say, 'yes, I really *meant* to update sys.path, thank you very much.'" (Especially since NOT requiring that extra API isn't going to break any existing code.) > I'd really much rather not have this feature, which > reeks of too much magic to me. (An area where Philip and I often > disagree. :-) > My take on it is that it only SEEMS like magic, because we're used to static __path__. But other languages don't have per-package __path__ in the first place, so there's nothing to "automatically update", and so it's not magic at all that other subpackages/modules can be found when the system path changes! So, under the PEP 420 approach, it's *static* __path__ that's really the weird special case, and should be considered so. (After all, __path__ is and was primarily an implementation optimization and compatibility hack, rather than a user-facing "feature" of the import system.) For example, when *would* you want to explicitly spell out a namespace package __path__, and restrict it from seeing sys.path changes? I've not seen *anybody* ask for this feature in the context of setuptools; it's only ever been bug reports about when the more complicated implementation fails to detect an update. So, to wrap up: * The primary rationale for the feature is that "least surprise" for a new user to Python is that adding to sys.path should allow importing a portion of a namespace, whether or not you've already imported some other thing in that namespace. Symmetry with other languages and with other Python features (e.g. changing the working directory in an interactive interpreter) suggests it, and the removal of a similar timing dependency from PEP 402 (preventing direct import of a namespace-only package unless you imported a subpackage first) suggests that the same type of timing dependency should be removed here, too. (Note, for example, that I may not know that importing baz.spam indirectly causes some part of foo.wiz to be imported, and that if I then add another directory to sys.path containing a foo.* portion, my code will *no longer work* when I try to import foo.ham. This is much more "magical" behavior, in least-surprise terms!) * The constraints on sys.path and package __path__ objects can and should be removed, by making the dynamic path objects refer to a module and attribute, instead of directly referencing parent __path__ objects. Code that currently manipulates __path__ will not break, because such code will not be using PEP 420 namespace packages anyway (and so, __path__ will be a list. (Even so, the most common __path__ manipulation idiom is "__path__ = pkgutil.extend_path(...)" anyway!) * Namespace packages are a widely used feature of setuptools, and AFAIK nobody has *ever* asked to stop dynamic additions to namespace __path__, but a wide assortment of things people do with setuptools rely on dynamic additions under the hood. Providing the feature in PEP 420 gives a migration path away from setuptools, at least for this one feature. (Specifically, it does away with the need to use declare_namespace(), and the need to do all sys.path manipulation via setuptools' requirements API.) * Self-contained (__init__.py packages) and fixed __path__ lists can and should be considered the "magic" or "special case" parts of importing in Python 3, even though we're accustomed to them being central import concepts in Python 2. Modules and namespace packages can and should be the default case from an instructional POV, and sys.path updating should reflect this. (That is, future tutorials should introduce modules, then namespace packages, and finally self-contained packages with __init__ and __path__, because the *idea* of a namespace package doesn't depend on __path__ existing in the first place; it's essentially only a historical accident that self-contained packages were implemented in Python first.) -------------- next part -------------- An HTML attachment was scrubbed... URL: From g.brandl at gmx.net Mon May 21 20:14:36 2012 From: g.brandl at gmx.net (Georg Brandl) Date: Mon, 21 May 2012 20:14:36 +0200 Subject: [Python-Dev] cpython: Close #14588: added a PEP 3115 compliant dynamic type creation mechanism In-Reply-To: References: Message-ID: Am 19.05.2012 18:34, schrieb nick.coghlan: > diff --git a/Doc/library/types.rst b/Doc/library/types.rst > --- a/Doc/library/types.rst > +++ b/Doc/library/types.rst > @@ -1,5 +1,5 @@ > -:mod:`types` --- Names for built-in types > -========================================= > +:mod:`types` --- Dynamic type creation and names for built-in types > +=================================================================== > > .. module:: types > :synopsis: Names for built-in types. > @@ -8,20 +8,69 @@ > > -------------- > > -This module defines names for some object types that are used by the standard > +This module defines utility function to assist in dynamic creation of > +new types. > + > +It also defines names for some object types that are used by the standard > Python interpreter, but not exposed as builtins like :class:`int` or > -:class:`str` are. Also, it does not include some of the types that arise > -transparently during processing such as the ``listiterator`` type. > +:class:`str` are. > > -Typical use is for :func:`isinstance` or :func:`issubclass` checks. > > -The module defines the following names: > +Dynamic Type Creation > +--------------------- > + > +.. function:: new_class(name, bases=(), kwds=None, exec_body=None) > + > + Creates a class object dynamically using the appropriate metaclass. > + > + The arguments are the components that make up a class definition: the > + class name, the base classes (in order), the keyword arguments (such as > + ``metaclass``) and the callback function to populate the class namespace. > + > + The *exec_body* callback should accept the class namespace as its sole > + argument and update the namespace directly with the class contents. > + > +.. function:: prepare_class(name, bases=(), kwds=None) > + > + Calculates the appropriate metaclass and creates the class namespace. > + > + The arguments are the components that make up a class definition: the > + class name, the base classes (in order) and the keyword arguments (such as > + ``metaclass``). > + > + The return value is a 3-tuple: ``metaclass, namespace, kwds`` > + > + *metaclass* is the appropriate metaclass > + *namespace* is the prepared class namespace > + *kwds* is an updated copy of the passed in *kwds* argument with any > + ``'metaclass'`` entry removed. If no *kwds* argument is passed in, this > + will be an empty dict. > + > + > +.. seealso:: > + > + :pep:`3115` - Metaclasses in Python 3000 > + Introduced the ``__prepare__`` namespace hook > + > + Should have versionadded. Georg From g.brandl at gmx.net Mon May 21 20:26:24 2012 From: g.brandl at gmx.net (Georg Brandl) Date: Mon, 21 May 2012 20:26:24 +0200 Subject: [Python-Dev] cpython (3.2): Issue12541 - Add UserWarning for unquoted realms In-Reply-To: References: Message-ID: Am 15.05.2012 18:08, schrieb senthil.kumaran: > diff --git a/Lib/urllib/request.py b/Lib/urllib/request.py > --- a/Lib/urllib/request.py > +++ b/Lib/urllib/request.py > @@ -95,6 +95,7 @@ > import sys > import time > import collections > +import warnings > > from urllib.error import URLError, HTTPError, ContentTooShortError > from urllib.parse import ( > @@ -827,6 +828,9 @@ > mo = AbstractBasicAuthHandler.rx.search(authreq) > if mo: > scheme, quote, realm = mo.groups() > + if quote not in ["'", '"']: > + warnings.warn("Basic Auth Realm was unquoted", > + UserWarning, 2) > if scheme.lower() == 'basic': > response = self.retry_http_basic_auth(host, req, realm) > if response and response.code != 401: This looks suspect. Do we issue UserWarnings/any warnings anywhere else in the network-related libs when servers don't implement protocols correctly? I'm afraid of spurious warnings generated that will bug users unnecessarily. If the warning is left in, the message should probably include the offending realm string. Georg From barry at python.org Mon May 21 23:24:09 2012 From: barry at python.org (Barry Warsaw) Date: Mon, 21 May 2012 17:24:09 -0400 Subject: [Python-Dev] Volunteering to be PEP czar for PEP 421, sys.implementation Message-ID: <20120521172409.2010be8f@resist> I've mentioned this in private to a few folks, with generally positive feedback. I am formally volunteering to be PEP czar for PEP 421, sys.implementation. If there are no objections in the next few days, I'll make it official. Cheers, -Barry -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 836 bytes Desc: not available URL: From ncoghlan at gmail.com Tue May 22 01:25:52 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 22 May 2012 09:25:52 +1000 Subject: [Python-Dev] PEP 420 - dynamic path computation is missing rationale In-Reply-To: References: <4FB9F620.1060602@trueblade.com> Message-ID: As a simple example to back up PJE's explanation, consider: 1. encodings becomes a namespace package 2. It sometimes gets imported during interpreter startup to initialise the standard io streams 3. An application modifies sys.path after startup and wants to contribute additional encodings Searching the entire parent path for new portions on every import would be needlessly slow. Not recognising new portions would be needlessly confusing for users. In our simple case above, the application would fail if the io initialisation accessed the encodings package, but work if it did not (e.g. when all streams are utf-8). PEP 420 splits the difference via an automatically invalidated cache: when you iterate over a namespace package __path__ object, it rescans the parent path for new portions *if and only if* the contents of the parent path have changed since the previous scan. Cheers, Nick. -- Sent from my phone, thus the relative brevity :) On May 22, 2012 4:10 AM, "PJ Eby" wrote: > On Mon, May 21, 2012 at 9:55 AM, Guido van Rossum wrote: > >> Ah, I see. But I disagree that this is a reasonable constraint on >> sys.path. The magic __path__ object of a toplevel namespace module >> should know it is a toplevel module, and explicitly refetch sys.path >> rather than just keeping around a copy. >> > > That's fine by me - the class could actually be defined to take a module > name and attribute (e.g. 'sys', 'path' or 'foo', '__path__'), and then > there'd be no need to special case anything: it would behave exactly the > same way for subpackages and top-level packages. > > >> This leaves the magic __path__ objects for namespace modules, which I >> could live with, as long as their repr was not the same as a list, and >> assuming a good rationale is given. Although I'd still prefer plain >> lists here as well; I'd like to be able to manually construct a >> namespace package and force its directories to be a specific set of >> directories that I happen to know about, regardless of whether they >> are related to sys.path or not. And I'd like to know that my setup in >> that case would not be disturbed by changes to sys.path. >> > > To do that, you just assign to __path__, the same as now, ala __path__ = > pkgutil.extend_path(). The auto-updating is in the initially-assigned > __path__ object, not the module object or some sort of generalized magic. > > > I'd like to hear more about this from Philip -- is that feature >> actually widely used? > > > Well, it's built into setuptools, so yes. ;-) It gets used any time a > dynamically specified dependency is used that might contain a namespace > package. This means, for example, that every setup script out there using > "setup.py test", every project using certain paste.deploy features... it's > really difficult to spell out the scope of things that are using this, in > the context of setuptools and distribute, because there are an immense > number of ways to indirectly rely on it. > > This doesn't mean that the feature can't continue to be implemented inside > setuptools' dynamic dependency system, but the code to do it in setuptools > is MUCH more complicated than the PEP 420 code, and doesn't work if you > manually add something to sys.path without asking setuptools to do it. > It's also somewhat timing-sensitive, depending on when and whether you > import 'site' and pkg_resources, and whether you are mixing eggs and > non-eggs in your namespace packages. > > In short, the implementation is a huge mess that the PEP 420 approach > would vastly simplify. > > But... that wasn't the original reason why I proposed it. The original > reason was simply that it makes namespace packages act more like the > equivalents do in other languages. While being able to override __path__ > can be considered a feature of Python, its being static by default is NOT a > feature, in the same way that *requiring* an __init__.py is not really a > feature. > > The principle of least surprise says (at least IMO) that if you add a > directory to sys.path, you should be able to import stuff from it. That > whether it works depends on whether or not you already imported part of a > namespace package earlier is both surprising and confusing. (More on this > below.) > > > >> What would a package have to do if the feature didn't exist? > > > Continue to depend on setuptools to do it for them, or use some > hypothetical update API... but that's not really the right question. ;-) > > The right question is, what happens to package *users* if the feature > didn't exist? > > And the answer to that question is, "you must call this hypothetical > update API *every time* you change sys.path, because otherwise your imports > might break, depending on whether or not some other package imported > something from a namespace before you changed sys.path". > > And of course, you also need to make sure that any third-party code you > use does this too, if it adds something to sys.path for you. > > And if you're writing cross-Python-version code, you need to check to make > sure whether the API is actually available. > > And if you're someone helping Python newbies, you need to add this to your > list of debugging questions for import-related problems. > > And remember: if you forget to do this, it might not break now. It'll > break later, when you add that other plugin or update that random module > that dynamically decides to import something that just happens to be in a > namespace package, so be prepared for it to break your application in the > field, when an end-user is using it with a collection of plugins that you > haven't tested together, or in the same import sequence... > > The people using setuptools won't have these problems, but *new* Python > users will, as people begin using a PEP 420 that lacks this feature. > > The key scope question, I think, is: "How often do programs change > sys.path at runtime, and what have they imported up to that point?" > (Because for the other part of the scope, I think it's a fairly safe bet > that namespace packages are going to become even *more* popular than they > are now, once PEP 420 is in place.) > > But the key API/usability question is: "What's the One Obvious Way to > add/change what's importable?" > > And I believe the answer to that question is, "change sys.path", not > "change sys.path, and then import some other module to call another API to > say, 'yes, I really *meant* to update sys.path, thank you very much.'" > > (Especially since NOT requiring that extra API isn't going to break any > existing code.) > > > >> I'd really much rather not have this feature, which >> reeks of too much magic to me. (An area where Philip and I often >> disagree. :-) >> > > My take on it is that it only SEEMS like magic, because we're used to > static __path__. But other languages don't have per-package __path__ in > the first place, so there's nothing to "automatically update", and so it's > not magic at all that other subpackages/modules can be found when the > system path changes! > > So, under the PEP 420 approach, it's *static* __path__ that's really the > weird special case, and should be considered so. (After all, __path__ is > and was primarily an implementation optimization and compatibility hack, > rather than a user-facing "feature" of the import system.) > > For example, when *would* you want to explicitly spell out a namespace > package __path__, and restrict it from seeing sys.path changes? I've not > seen *anybody* ask for this feature in the context of setuptools; it's only > ever been bug reports about when the more complicated implementation fails > to detect an update. > > So, to wrap up: > > * The primary rationale for the feature is that "least surprise" for a new > user to Python is that adding to sys.path should allow importing a portion > of a namespace, whether or not you've already imported some other thing in > that namespace. Symmetry with other languages and with other Python > features (e.g. changing the working directory in an interactive > interpreter) suggests it, and the removal of a similar timing dependency > from PEP 402 (preventing direct import of a namespace-only package unless > you imported a subpackage first) suggests that the same type of timing > dependency should be removed here, too. (Note, for example, that I may not > know that importing baz.spam indirectly causes some part of foo.wiz to be > imported, and that if I then add another directory to sys.path containing a > foo.* portion, my code will *no longer work* when I try to import foo.ham. > This is much more "magical" behavior, in least-surprise terms!) > > * The constraints on sys.path and package __path__ objects can and should > be removed, by making the dynamic path objects refer to a module and > attribute, instead of directly referencing parent __path__ objects. Code > that currently manipulates __path__ will not break, because such code will > not be using PEP 420 namespace packages anyway (and so, __path__ will be a > list. (Even so, the most common __path__ manipulation idiom is "__path__ = > pkgutil.extend_path(...)" anyway!) > > * Namespace packages are a widely used feature of setuptools, and AFAIK > nobody has *ever* asked to stop dynamic additions to namespace __path__, > but a wide assortment of things people do with setuptools rely on dynamic > additions under the hood. Providing the feature in PEP 420 gives a > migration path away from setuptools, at least for this one feature. > (Specifically, it does away with the need to use declare_namespace(), and > the need to do all sys.path manipulation via setuptools' requirements API.) > > * Self-contained (__init__.py packages) and fixed __path__ lists can and > should be considered the "magic" or "special case" parts of importing in > Python 3, even though we're accustomed to them being central import > concepts in Python 2. Modules and namespace packages can and should be the > default case from an instructional POV, and sys.path updating should > reflect this. (That is, future tutorials should introduce modules, then > namespace packages, and finally self-contained packages with __init__ and > __path__, because the *idea* of a namespace package doesn't depend on > __path__ existing in the first place; it's essentially only a historical > accident that self-contained packages were implemented in Python first.) > > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > http://mail.python.org/mailman/options/python-dev/ncoghlan%40gmail.com > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Tue May 22 01:32:55 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 22 May 2012 09:32:55 +1000 Subject: [Python-Dev] PEP 420 - dynamic path computation is missing rationale In-Reply-To: References: <4FB9F620.1060602@trueblade.com> Message-ID: I agree the parent path should be retrieved by name rather than a direct reference when checking the cache validity, though. -- Sent from my phone, thus the relative brevity :) -------------- next part -------------- An HTML attachment was scrubbed... URL: From eric at trueblade.com Tue May 22 02:32:10 2012 From: eric at trueblade.com (Eric V. Smith) Date: Mon, 21 May 2012 20:32:10 -0400 Subject: [Python-Dev] PEP 420 - dynamic path computation is missing rationale In-Reply-To: References: <4FB9F620.1060602@trueblade.com> Message-ID: <4FBADE8A.2060407@trueblade.com> On 5/21/2012 2:08 PM, PJ Eby wrote: > On Mon, May 21, 2012 at 9:55 AM, Guido van Rossum > wrote: > > Ah, I see. But I disagree that this is a reasonable constraint on > sys.path. The magic __path__ object of a toplevel namespace module > should know it is a toplevel module, and explicitly refetch sys.path > rather than just keeping around a copy. > > > That's fine by me - the class could actually be defined to take a module > name and attribute (e.g. 'sys', 'path' or 'foo', '__path__'), and then > there'd be no need to special case anything: it would behave exactly the > same way for subpackages and top-level packages. Any reason to make this the string "sys" or "foo", and not the module itself? Can the module be replaced in sys.modules? Mostly I'm just curious. But regardless, I'm okay with keeping these both as strings and looking it up in sys.modules and then by attribute. Eric. From stefan at sofa-rockers.org Tue May 22 09:08:25 2012 From: stefan at sofa-rockers.org (Stefan Scherfke) Date: Tue, 22 May 2012 09:08:25 +0200 Subject: [Python-Dev] docs.python.org pointing to Python 3 by default? In-Reply-To: <4FBA8254.2090605@netwok.org> References: <20120518142418.7609fe21@limelight.wooz.org> <5EE84AF9-42EC-4E02-8B3D-6A9B815D680D@gmail.com> <19802197-4CB9-440B-8632-C5FBD0E96EDB@langa.pl> <4FBA8254.2090605@netwok.org> Message-ID: <8276EBA1-AF2E-4CDA-96D7-D6FC84A549DA@sofa-rockers.org> Am 2012-05-21 um 19:58 schrieb ?ric Araujo: > Le 21/05/2012 07:42, Georg Brandl a ?crit : >> What about: >> >> * Canonical: >> >> docs.python.org/2/ >> docs.python.org/3/ >> >> for latest versions of 2.x and 3.x >> >> docs.python.org/2.7/ etc. >> >> for latest minor versions >> >> docs.python.org/dev/ >> >> for latest dev version. > +1. > > I?d be +1 to adding /stable but both 2.7 and 3.2 are stable at this time. > >> * Redirected: >> >> docs.python.org/ --> either /2/ or /3/ or a "disambiguation page" > Either sounds good, I?m in favor of redirecting to /2 for a few years > still to preserve existing links and avoid the need to click on each page. > >> docs.python.org/py3k/ -> /3/ > +1, the py3k name is not obvious for everyone. > >> There is also /release/X.Y.Z for individual released versions, which >> I don't want to change. > The URIs should not change, but it seems a bit bad to me that for > example the 2.7.1 docs don?t link to the latest 2.7 page and mention 2.6 > as stable version > >> I also like Martin's idea of offering more links between individual >> pages, not only the front-pages. > +1 > > On a related note, we may want to find a way to make the version more > prominent in the pages; I?ve seen beginners install Python 3 and use the > Python 2 docs and fail at the first print 'Hello, world!' example. > That?s why I support always having the version numbers in the URIs. > > Cheers I think this URL scheme looks most clean: docs.python.org/ --> Points to recommended version(2 for now, 3 later) docs.python.org/2/ --> Points to latest stable 2.x docs.python.org/2.7/ docs.python.org/2.6/ ... docs.python.org/3/ --> Points to latest stable 3.x docs.python.org/3.2/ ... docs.python.org/dev/ --> Points to dev version (e.g., 3.3) Using something like docs.python.org/stable/ in books might not make sense if the book is about Python 3 and /stable/ points to Python 4 a few years later. Imho, adding additional sub-domains also wouldn?t improve anything, but would add more clutter and confusion (what if somebody types "docs3.python.org/2/ ?) A prominent CCS-box showing the current version and offering Links to other main versions would make it perfect (e.g. 2, 3 and dev for all versions, 3.x sub-releases only, if you are under docs.python.org/3/... and for 2.x accordingly). Cheers, Stefan From ncoghlan at gmail.com Tue May 22 14:08:50 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 22 May 2012 22:08:50 +1000 Subject: [Python-Dev] [Python-checkins] cpython (2.7): #14804: Remove [] around optional arguments with default values In-Reply-To: References: Message-ID: On Tue, May 22, 2012 at 6:34 PM, hynek.schlawack wrote: > http://hg.python.org/cpython/rev/a36666c52115 > changeset: ? 77102:a36666c52115 > branch: ? ? ?2.7 > parent: ? ? ?77099:c13066f752a8 > user: ? ? ? ?Hynek Schlawack > date: ? ? ? ?Tue May 22 10:27:40 2012 +0200 > summary: > ?#14804: Remove [] around optional arguments with default values > > Mostly just mechanical removal of []. In some rare cases I've pulled the > default value up into the argument list. Be a little careful with this - "[]" is the right notation when the function doesn't support keyword arguments. At least one of the updated signatures is incorrect: > diff --git a/Doc/library/itertools.rst b/Doc/library/itertools.rst > --- a/Doc/library/itertools.rst > +++ b/Doc/library/itertools.rst > @@ -627,7 +627,7 @@ > ? ? ? ? ? ? ? ? ? break > > > -.. function:: tee(iterable[, n=2]) > +.. function:: tee(iterable, n=2) >>> itertools.tee([], n=2) Traceback (most recent call last): File "", line 1, in TypeError: tee() takes no keyword arguments Since calling "tee(itr, n=2)" doesn't add really any clarity over "tee(itr, 2)", it's unlikely this function will ever gain keyword argument support (since supporting keyword arguments *is* slower than supporting only positional arguments for functions written in C. The change is probably valid for the pure Python modules, and the builtins looked right, but be wary of any extension modules in the list. Cheers, Nick. -- Nick Coghlan?? |?? ncoghlan at gmail.com?? |?? Brisbane, Australia From hs at ox.cx Tue May 22 14:38:01 2012 From: hs at ox.cx (Hynek Schlawack) Date: Tue, 22 May 2012 14:38:01 +0200 Subject: [Python-Dev] [Python-checkins] cpython (2.7): #14804: Remove [] around optional arguments with default values In-Reply-To: References: Message-ID: <4FBB88A9.3040409@ox.cx> Hi Nick, >> Mostly just mechanical removal of []. In some rare cases I've pulled the >> default value up into the argument list. > Be a little careful with this - "[]" is the right notation when the > function doesn't support keyword arguments. At least one of the > updated signatures is incorrect: Ah dang, thanks for pointing this out! I went at least five times through all changes but there had to be one thing I missed. :( Same in dl.open() & ossaudiodev.oss_audio_device.setparameters(). I will go through them all once more and fix it at the latest tomorrow. Regards, Hynek From eric at trueblade.com Tue May 22 16:51:22 2012 From: eric at trueblade.com (Eric V. Smith) Date: Tue, 22 May 2012 10:51:22 -0400 Subject: [Python-Dev] PEP 420 - dynamic path computation is missing rationale In-Reply-To: References: <4FB9F620.1060602@trueblade.com> Message-ID: <4FBBA7EA.4060706@trueblade.com> On 05/21/2012 07:25 PM, Nick Coghlan wrote: > As a simple example to back up PJE's explanation, consider: > 1. encodings becomes a namespace package > 2. It sometimes gets imported during interpreter startup to initialise > the standard io streams > 3. An application modifies sys.path after startup and wants to > contribute additional encodings > > Searching the entire parent path for new portions on every import would > be needlessly slow. > > Not recognising new portions would be needlessly confusing for users. In > our simple case above, the application would fail if the io > initialisation accessed the encodings package, but work if it did not > (e.g. when all streams are utf-8). > > PEP 420 splits the difference via an automatically invalidated cache: > when you iterate over a namespace package __path__ object, it rescans > the parent path for new portions *if and only if* the contents of the > parent path have changed since the previous scan. That seems like a pretty convincing example to me. Personally I'm +1 on putting dynamic computation into the PEP, at least for top-level namespace packages, and probably for all namespace packages. The code is not very large or complicated, and with the proposed removal of the restriction that sys.path cannot be replaced, I think it behaves well. But Guido can decide against it without hurting my feelings. Eric. P.S.: Here's the current code in the pep-420 branch. This code still has the restriction that sys.path (or parent_path in general) can't be replaced. I'll fix that if we decide to keep the feature. class _NamespacePath: def __init__(self, name, path, parent_path, path_finder): self._name = name self._path = path self._parent_path = parent_path self._last_parent_path = tuple(parent_path) self._path_finder = path_finder def _recalculate(self): # If _parent_path has changed, recalculate _path parent_path = tuple(self._parent_path) # Make a copy if parent_path != self._last_parent_path: loader, new_path = self._path_finder(self._name, parent_path) # Note that no changes are made if a loader is returned, but we # do remember the new parent path if loader is None: self._path = new_path self._last_parent_path = parent_path # Save the copy return self._path def __iter__(self): return iter(self._recalculate()) def __len__(self): return len(self._recalculate()) def __repr__(self): return "_NamespacePath" + repr((self._path, self._parent_path)) def __contains__(self, item): return item in self._recalculate() From ncoghlan at gmail.com Tue May 22 17:39:32 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 23 May 2012 01:39:32 +1000 Subject: [Python-Dev] PEP 420 - dynamic path computation is missing rationale In-Reply-To: <4FBBA7EA.4060706@trueblade.com> References: <4FB9F620.1060602@trueblade.com> <4FBBA7EA.4060706@trueblade.com> Message-ID: On Wed, May 23, 2012 at 12:51 AM, Eric V. Smith wrote: > That seems like a pretty convincing example to me. > > Personally I'm +1 on putting dynamic computation into the PEP, at least > for top-level namespace packages, and probably for all namespace packages. Same here, but Guido's right that the rationale (and example) should be clearer in the PEP itself if the feature is to be retained. > P.S.: Here's the current code in the pep-420 branch. This code still has > the restriction that sys.path (or parent_path in general) can't be > replaced. I'll fix that if we decide to keep the feature. I wonder if it would be worth exposing an importlib.LazyRef API to make it generally easy to avoid this kind of early binding problem? class LazyRef: # similar API to weakref.weakref def __init__(self, modname, attr=None): self.modname = modname self.attr = attr def __call__(self): mod = sys.modules[self.modname] attr = self.attr if attr is None: return mod return getattr(mod, attr) Then _NamespacePath could just be defined as taking a callable that returns the parent path: class _NamespacePath: ? ?def __init__(self, name, path, parent_path, path_finder): ? ? ? ?self._name = name ? ? ? ?self._path = path ? ? ? ?self._parent_path = parent_path ? ? ? ?self._last_parent_path = tuple(parent_path) ? ? ? ?self._path_finder = path_finder ? ?def _recalculate(self): ? ? ? ?# If _parent_path has changed, recalculate _path ? ? ? ?parent_path = tuple(self._parent_path()) ? ? # Retrieve and make a copy ? ? ? ?if parent_path != self._last_parent_path: ? ? ? ? ? ?loader, new_path = self._path_finder(self._name, parent_path) ? ? ? ? ? ?# Note that no changes are made if a loader is returned, but we ? ? ? ? ? ?# ?do remember the new parent path ? ? ? ? ? ?if loader is None: ? ? ? ? ? ? ? ?self._path = new_path ? ? ? ? ? ?self._last_parent_path = parent_path ? # Save the copy ? ? ? ?return self._path Even if the LazyRef idea isn't used, I still like the idea of passing a callable in to _NamespacePath for the parent path rather than hardcoding the "module name + attribute name" approach. Cheers, Nick. -- Nick Coghlan?? |?? ncoghlan at gmail.com?? |?? Brisbane, Australia From ncoghlan at gmail.com Tue May 22 17:41:46 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 23 May 2012 01:41:46 +1000 Subject: [Python-Dev] PEP 420 - dynamic path computation is missing rationale In-Reply-To: References: <4FB9F620.1060602@trueblade.com> <4FBBA7EA.4060706@trueblade.com> Message-ID: On Wed, May 23, 2012 at 1:39 AM, Nick Coghlan wrote: > ?? ?def _recalculate(self): > ?? ? ? ?# If _parent_path has changed, recalculate _path > ?? ? ? ?parent_path = tuple(self._parent_path()) ? ? # Retrieve and make a copy > ?? ? ? ?if parent_path != self._last_parent_path: > ?? ? ? ? ? ?loader, new_path = self._path_finder(self._name, parent_path) > ?? ? ? ? ? ?# Note that no changes are made if a loader is returned, but we > ?? ? ? ? ? ?# ?do remember the new parent path > ?? ? ? ? ? ?if loader is None: > ?? ? ? ? ? ? ? ?self._path = new_path > ?? ? ? ? ? ?self._last_parent_path = parent_path ? # Save the copy > ?? ? ? ?return self._path Oops, I also meant to say that it's probably worth at least issuing ImportWarning if a new portion with an __init__.py gets added - it's going to block all future dynamic updates of that namespace package. Cheers, Nick. -- Nick Coghlan?? |?? ncoghlan at gmail.com?? |?? Brisbane, Australia From eric at trueblade.com Tue May 22 18:31:19 2012 From: eric at trueblade.com (Eric V. Smith) Date: Tue, 22 May 2012 12:31:19 -0400 Subject: [Python-Dev] PEP 420 - dynamic path computation is missing rationale In-Reply-To: References: <4FB9F620.1060602@trueblade.com> <4FBBA7EA.4060706@trueblade.com> Message-ID: <4FBBBF57.9050909@trueblade.com> On 05/22/2012 11:39 AM, Nick Coghlan wrote: > On Wed, May 23, 2012 at 12:51 AM, Eric V. Smith wrote: >> That seems like a pretty convincing example to me. >> >> Personally I'm +1 on putting dynamic computation into the PEP, at least >> for top-level namespace packages, and probably for all namespace packages. > > Same here, but Guido's right that the rationale (and example) should > be clearer in the PEP itself if the feature is to be retained. Completely agreed. I'll work on it. > Oops, I also meant to say that it's probably worth at least issuing > ImportWarning if a new portion with an __init__.py gets added - it's > going to block all future dynamic updates of that namespace package. Right. That's on my list of things to clean up. It actually won't block updates during this run of Python, though: once a namespace package, always a namespace package. But if, on another run, that entry is on sys.path, then yes, it will block all namespace package portions. Eric. From pje at telecommunity.com Tue May 22 19:47:25 2012 From: pje at telecommunity.com (PJ Eby) Date: Tue, 22 May 2012 13:47:25 -0400 Subject: [Python-Dev] PEP 420 - dynamic path computation is missing rationale In-Reply-To: <4FBADE8A.2060407@trueblade.com> References: <4FB9F620.1060602@trueblade.com> <4FBADE8A.2060407@trueblade.com> Message-ID: On Mon, May 21, 2012 at 8:32 PM, Eric V. Smith wrote: > Any reason to make this the string "sys" or "foo", and not the module > itself? Can the module be replaced in sys.modules? Mostly I'm just curious. > Probably not, but it occurred to me that storing references to modules introduces a reference cycle that wasn't there when we were pointing to parent path objects instead. It basically would make child packages point to their parents, as well as the other way around. -------------- next part -------------- An HTML attachment was scrubbed... URL: From guido at python.org Tue May 22 20:37:02 2012 From: guido at python.org (Guido van Rossum) Date: Tue, 22 May 2012 11:37:02 -0700 Subject: [Python-Dev] PEP 420 - dynamic path computation is missing rationale In-Reply-To: References: <4FB9F620.1060602@trueblade.com> <4FBADE8A.2060407@trueblade.com> Message-ID: Okay, I've been convinced that keeping the dynamic path feature is a good idea. I am really looking forward to seeing the rationale added to the PEP -- that's pretty much the last thing on my list that made me hesitate. I'll leave the details of exactly how the parent path is referenced up to the implementation team (several good points were made), as long as the restriction that sys.path must be modified in place is lifted. -- --Guido van Rossum (python.org/~guido) From sandro.tosi at gmail.com Tue May 22 21:59:20 2012 From: sandro.tosi at gmail.com (Sandro Tosi) Date: Tue, 22 May 2012 21:59:20 +0200 Subject: [Python-Dev] [Python-checkins] cpython: Issue #14814: addition of the ipaddress module (stage 1 - code and tests) In-Reply-To: <4FB92765.803@udel.edu> References: <4FB92765.803@udel.edu> Message-ID: Thanks Terry for the review! I've attached a patch to issue14814 addressing your points; but.. On Sun, May 20, 2012 at 7:18 PM, Terry Reedy wrote: >> +def _get_prefix_length(number1, number2, bits): >> + ? ?"""Get the number of leading bits that are same for two numbers. >> + >> + ? ?Args: >> + ? ? ? ?number1: an integer. >> + ? ? ? ?number2: another integer. >> + ? ? ? ?bits: the maximum number of bits to compare. >> + >> + ? ?Returns: >> + ? ? ? ?The number of leading bits that are the same for two numbers. >> + >> + ? ?""" >> + ? ?for i in range(bits): >> + ? ? ? ?if number1>> ?i == number2>> ?i: > > > This non-PEP8 spacing is awful to read. The double space after the tighter > binding operator is actively deceptive. Please use > > ? ? ? ?if number1 >> i == number2 >> i: I don't see this (and all the other) spacing issue you mentioned. Is it possible that your mail client had played some "funny" tricks? >> + ? ?Args: >> + ? ? ? ?first: the first IPv4Address or IPv6Address in the range. >> + ? ? ? ?last: the last IPv4Address or IPv6Address in the range. >> + >> + ? ?Returns: >> + ? ? ? ?An iterator of the summarized IPv(4|6) network objects. > > Very clear as to types. I don't think I get exactly what you mean here. Cheers, -- Sandro Tosi (aka morph, morpheus, matrixhasu) My website: http://matrixhasu.altervista.org/ Me at Debian: http://wiki.debian.org/SandroTosi From pje at telecommunity.com Tue May 22 22:43:39 2012 From: pje at telecommunity.com (PJ Eby) Date: Tue, 22 May 2012 16:43:39 -0400 Subject: [Python-Dev] PEP 420 - dynamic path computation is missing rationale In-Reply-To: <4FBBBF57.9050909@trueblade.com> References: <4FB9F620.1060602@trueblade.com> <4FBBA7EA.4060706@trueblade.com> <4FBBBF57.9050909@trueblade.com> Message-ID: On Tue, May 22, 2012 at 12:31 PM, Eric V. Smith wrote: > On 05/22/2012 11:39 AM, Nick Coghlan wrote: > > Oops, I also meant to say that it's probably worth at least issuing > > ImportWarning if a new portion with an __init__.py gets added - it's > > going to block all future dynamic updates of that namespace package. > > Right. That's on my list of things to clean up. It actually won't block > updates during this run of Python, though: once a namespace package, > always a namespace package. But if, on another run, that entry is on > sys.path, then yes, it will block all namespace package portions. > This discussion has gotten me thinking: should we expose a pkgutil.declare_namespace() API to allow such an __init__.py to turn itself back into a namespace? (Per our previous discussion on transitioning existing namespace packages.) It wouldn't need to do all the other stuff that the setuptools version does, it would just be a way to transition away from setuptools. What it would do is: 1. Recursively invoke itself for parent packages 2. Create the module object if it doesn't already exist 3. Set the module __path__ to a _NamespacePath instance. def declare_namespace(package_name): parent, dot, tail = package_name.rpartition('.') attr = '__path__' if dot: declare_namespace(parent) else: parent, attr = 'sys', 'path' with importlockcontext: module = sys.modules.get(package_name) if module is None: module = XXX new module here module.__path__ = _NamespacePath(...stuff involving 'parent' and 'attr') It may be that this should complain under certain circumstances, or use the '__path__ = something' idiom, but the above approach would be (basically) API compatible with the standard usage of declare_namespace. Obviously, this'll only be useful for people who are porting code going forward, but even if a different API is chosen, there still ought to be a way for people to do it. Namespace packages are one of a handful of features that are still basically setuptools-only at this point (i.e. not yet provided by packaging/distutils2), but if it's the only setuptools-only feature a project is using, they'd be able to drop their dependency as of 3.3. (Next up, I guess we'll need an entry-points PEP, but that'll be another discussion. ;-) ) -------------- next part -------------- An HTML attachment was scrubbed... URL: From tjreedy at udel.edu Tue May 22 22:43:43 2012 From: tjreedy at udel.edu (Terry Reedy) Date: Tue, 22 May 2012 16:43:43 -0400 Subject: [Python-Dev] [Python-checkins] cpython: Issue #14814: addition of the ipaddress module (stage 1 - code and tests) In-Reply-To: References: <4FB92765.803@udel.edu> Message-ID: On 5/22/2012 3:59 PM, Sandro Tosi wrote: > Thanks Terry for the review! I've attached a patch to issue14814 > addressing your points; but.. > > On Sun, May 20, 2012 at 7:18 PM, Terry Reedy wrote: >>> +def _get_prefix_length(number1, number2, bits): >>> + """Get the number of leading bits that are same for two numbers. >>> + >>> + Args: >>> + number1: an integer. >>> + number2: another integer. >>> + bits: the maximum number of bits to compare. >>> + >>> + Returns: >>> + The number of leading bits that are the same for two numbers. >>> + >>> + """ >>> + for i in range(bits): >>> + if number1>> i == number2>> i: >> >> >> This non-PEP8 spacing is awful to read. The double space after the tighter >> binding operator is actively deceptive. Please use >> >> if number1>> i == number2>> i: > > I don't see this (and all the other) spacing issue you mentioned. Is > it possible that your mail client had played some "funny" tricks? Well, *something* between there and here seems to have. I retrieved the patch in FF browser and that line looks fine. It also looks fine when I cut and pasted it into a test message from a web mail account to my udel account, viewed with same mail client. Sorry for the noise. Glad that you do not need to 'fix' anything of this sort. >>> + Args: >>> + first: the first IPv4Address or IPv6Address in the range. >>> + last: the last IPv4Address or IPv6Address in the range. >>> + >>> + Returns: >>> + An iterator of the summarized IPv(4|6) network objects. >> >> Very clear as to types. > > I don't think I get exactly what you mean here. This docstring clearly says what the input type is instead of the more vague 'address'. Also, the output is pretty clearly an iterable of IPv#Address objects. I meant to contrast this as a good example compared to some of the previous docstrings. -- Terry Jan Reedy From barry at python.org Wed May 23 00:05:32 2012 From: barry at python.org (Barry Warsaw) Date: Tue, 22 May 2012 18:05:32 -0400 Subject: [Python-Dev] PEP 420 - dynamic path computation is missing rationale In-Reply-To: References: <4FB9F620.1060602@trueblade.com> <4FBBA7EA.4060706@trueblade.com> <4FBBBF57.9050909@trueblade.com> Message-ID: <20120522180532.6c0ab53c@resist> Minor nit. On May 22, 2012, at 04:43 PM, PJ Eby wrote: >def declare_namespace(package_name): > parent, dot, tail = package_name.rpartition('.') > attr = '__path__' > if dot: > declare_namespace(parent) > else: > parent, attr = 'sys', 'path' > with importlockcontext: > module = sys.modules.get(package_name) Best to use a marker object here instead of checking for None, since the latter is a valid value for an existing entry in sys.modules. > if module is None: > module = XXX new module here > module.__path__ = _NamespacePath(...stuff involving 'parent' and >'attr') Cheers, -Barry From eric at trueblade.com Wed May 23 02:40:24 2012 From: eric at trueblade.com (Eric V. Smith) Date: Tue, 22 May 2012 20:40:24 -0400 Subject: [Python-Dev] PEP 420 - dynamic path computation is missing rationale In-Reply-To: References: <4FB9F620.1060602@trueblade.com> <4FBADE8A.2060407@trueblade.com> Message-ID: <4FBC31F8.7070203@trueblade.com> On 5/22/2012 2:37 PM, Guido van Rossum wrote: > Okay, I've been convinced that keeping the dynamic path feature is a > good idea. I am really looking forward to seeing the rationale added > to the PEP -- that's pretty much the last thing on my list that made > me hesitate. I'll leave the details of exactly how the parent path is > referenced up to the implementation team (several good points were > made), as long as the restriction that sys.path must be modified in > place is lifted. I've updated the PEP. Let me know how it looks. I have not updated the implementation yet. I'm not exactly sure how I'm going to convert from a path list of unknown origin to ('sys', 'path') or ('foo', '__path__'). I'll look at it later tonight to see if it's possible. I'm hoping it doesn't require major surgery to importlib._bootstrap. I still owe PEP updates for finder/loader examples and nested namespace package examples. But I think that's all that's needed. Eric. From ncoghlan at gmail.com Wed May 23 02:42:22 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 23 May 2012 10:42:22 +1000 Subject: [Python-Dev] [Python-checkins] peps: Added dynamic path computation rationale, specification, and discussion. In-Reply-To: References: Message-ID: On Wed, May 23, 2012 at 10:35 AM, eric.smith wrote: > + ?4. An attempt is made to import an ``encodings`` portion that is > + ? ? found on a path added in step 3. I'd phrase this as something like "import an encoding from an ``encodings`` portion". You don't really import namespace package portions directly - you import the modules and packages they contain. Cheers, Nick. -- Nick Coghlan?? |?? ncoghlan at gmail.com?? |?? Brisbane, Australia From pje at telecommunity.com Wed May 23 03:49:16 2012 From: pje at telecommunity.com (PJ Eby) Date: Tue, 22 May 2012 21:49:16 -0400 Subject: [Python-Dev] PEP 420 - dynamic path computation is missing rationale In-Reply-To: <4FBC31F8.7070203@trueblade.com> References: <4FB9F620.1060602@trueblade.com> <4FBADE8A.2060407@trueblade.com> <4FBC31F8.7070203@trueblade.com> Message-ID: On Tue, May 22, 2012 at 8:40 PM, Eric V. Smith wrote: > On 5/22/2012 2:37 PM, Guido van Rossum wrote: > > Okay, I've been convinced that keeping the dynamic path feature is a > > good idea. I am really looking forward to seeing the rationale added > > to the PEP -- that's pretty much the last thing on my list that made > > me hesitate. I'll leave the details of exactly how the parent path is > > referenced up to the implementation team (several good points were > > made), as long as the restriction that sys.path must be modified in > > place is lifted. > > I've updated the PEP. Let me know how it looks. > My name is misspelled in it, but otherwise it looks fine. ;-) I have not updated the implementation yet. I'm not exactly sure how I'm > going to convert from a path list of unknown origin to ('sys', 'path') > or ('foo', '__path__'). I'll look at it later tonight to see if it's > possible. I'm hoping it doesn't require major surgery to > importlib._bootstrap. > It shouldn't - all you should need is to use getattr(sys.modules[self.modname], self.attr) instead of referencing a parent path object directly. (The more interesting thing is what to do if the parent module goes away, due to somebody deleting the module out of sys.modules. The simplest thing to do would probably be to just keep using the cached value in that case.) Ah crap, I just thought of something - what happens if you reload() a namespace package? Probably nothing, but should we specify what sort of nothing? ;-) -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Wed May 23 03:58:28 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 23 May 2012 11:58:28 +1000 Subject: [Python-Dev] PEP 420 - dynamic path computation is missing rationale In-Reply-To: <4FBC31F8.7070203@trueblade.com> References: <4FB9F620.1060602@trueblade.com> <4FBADE8A.2060407@trueblade.com> <4FBC31F8.7070203@trueblade.com> Message-ID: On Wed, May 23, 2012 at 10:40 AM, Eric V. Smith wrote: > On 5/22/2012 2:37 PM, Guido van Rossum wrote: >> Okay, I've been convinced that keeping the dynamic path feature is a >> good idea. I am really looking forward to seeing the rationale added >> to the PEP -- that's pretty much the last thing on my list that made >> me hesitate. I'll leave the details of exactly how the parent path is >> referenced up to the implementation team (several good points were >> made), as long as the restriction that sys.path must be modified in >> place is lifted. > > I've updated the PEP. Let me know how it looks. > > I have not updated the implementation yet. I'm not exactly sure how I'm > going to convert from a path list of unknown origin to ('sys', 'path') > or ('foo', '__path__'). I'll look at it later tonight to see if it's > possible. I'm hoping it doesn't require major surgery to > importlib._bootstrap. If you wanted to do this without changing the sys.meta_path hook API, you'd have to pass an object to find_module() that did the dynamic lookup of the value in obj.__iter__. Something like: class _LazyPath: def __init__(self, modname, attribute): self.modname = modname self.attribute = attribute def __iter__(self): return iter(getattr(sys.module[self.modname], self.attribute)) A potentially cleaner alternative to consider is tweaking the find_loader API spec so that it gets used at the meta path level as well as at the path hooks level and is handed a *callable* that dynamically retrieves the path rather than a direct reference to the path itself. The full signature of find_loader would then become: def find_loader(fullname, get_path=None): # fullname as for find_module # When get_path is None, it means the finder is being called as a path hook and # should use the specific path entry passed to __init__ # In this case, namespace package portions are returned as (None, portions) # Otherwise, the finder is being called as a meta_path hook and get_path() will return the relevant path # Any namespace packages are then returned as (loader, portions) There are two major consequences of this latter approach: - the PEP 302 find_module API would now be a purely legacy interface for both the meta_path and path_hooks, used only if find_loader is not defined - it becomes trivial to tell whether a particular name references a package or not *without* needing to load it first: find_loader() returns a non-empty iterable for the list of portions That second consequence is rather appealing: it means you'd be able to implement an almost complete walk of a package hierarchy *without* having to import anything (although you would miss old-style namespace packages and any other packages that alter their own __path__ in __init__, so you may still want to load packages to make sure you found everything. You could definitively answer the "is this a package or not?" question without running any code, though). The first consequence is also appealing, since the find_module() name is more than a little misleading. The "find_module" name strongly suggests that the method is expected to return a module object, and that's just wrong - you actually find a loader, then you use that to load the module. Cheers, Nick. -- Nick Coghlan?? |?? ncoghlan at gmail.com?? |?? Brisbane, Australia From pje at telecommunity.com Wed May 23 05:58:32 2012 From: pje at telecommunity.com (PJ Eby) Date: Tue, 22 May 2012 23:58:32 -0400 Subject: [Python-Dev] PEP 420 - dynamic path computation is missing rationale In-Reply-To: References: <4FB9F620.1060602@trueblade.com> <4FBADE8A.2060407@trueblade.com> <4FBC31F8.7070203@trueblade.com> Message-ID: On Tue, May 22, 2012 at 9:58 PM, Nick Coghlan wrote: > If you wanted to do this without changing the sys.meta_path hook API, > you'd have to pass an object to find_module() that did the dynamic > lookup of the value in obj.__iter__. Something like: > > class _LazyPath: > def __init__(self, modname, attribute): > self.modname = modname > self.attribute = attribute > def __iter__(self): > return iter(getattr(sys.module[self.modname], self.attribute)) > > A potentially cleaner alternative to consider is tweaking the > find_loader API spec so that it gets used at the meta path level as > well as at the path hooks level and is handed a *callable* that > dynamically retrieves the path rather than a direct reference to the > path itself. > > The full signature of find_loader would then become: > > def find_loader(fullname, get_path=None): > # fullname as for find_module > # When get_path is None, it means the finder is being called > as a path hook and > # should use the specific path entry passed to __init__ > # In this case, namespace package portions are returned as > (None, portions) > # Otherwise, the finder is being called as a meta_path hook > and get_path() will return the relevant path > # Any namespace packages are then returned as (loader, portions) > > There are two major consequences of this latter approach: > - the PEP 302 find_module API would now be a purely legacy interface > for both the meta_path and path_hooks, used only if find_loader is not > defined > - it becomes trivial to tell whether a particular name references a > package or not *without* needing to load it first: find_loader() > returns a non-empty iterable for the list of portions > > That second consequence is rather appealing: it means you'd be able to > implement an almost complete walk of a package hierarchy *without* > having to import anything (although you would miss old-style namespace > packages and any other packages that alter their own __path__ in > __init__, so you may still want to load packages to make sure you > found everything. You could definitively answer the "is this a package > or not?" question without running any code, though). > > The first consequence is also appealing, since the find_module() name > is more than a little misleading. The "find_module" name strongly > suggests that the method is expected to return a module object, and > that's just wrong - you actually find a loader, then you use that to > load the module. > While I see no problem with cleaning up the interface, I'm kind of lost as to the point of making a get_path callable, vs. just using the iterable interface you sketched. Python has iterables, so why add a call to get the iterable, when iter() or a straight "for" loop will do effectively the same thing? -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Wed May 23 06:27:18 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 23 May 2012 14:27:18 +1000 Subject: [Python-Dev] PEP 420 - dynamic path computation is missing rationale In-Reply-To: References: <4FB9F620.1060602@trueblade.com> <4FBADE8A.2060407@trueblade.com> <4FBC31F8.7070203@trueblade.com> Message-ID: On Wed, May 23, 2012 at 1:58 PM, PJ Eby wrote: > While I see no problem with cleaning up the interface, I'm kind of lost as > to the point of making a get_path callable, vs. just using the iterable > interface you sketched.? Python has iterables, so why add a call to get the > iterable, when iter() or a straight "for" loop will do effectively the same > thing? Yeah, I'm not sure what I was thinking either, since just documenting the interface and providing LazyPath as a public API somewhere in importlib should suffice. Meta path hooks are already going to need to tolerate being handed arbitrary iterables, since that's exactly what namespace package path objects are going to be. While I still like the idea of killing off find_module() completely rather than leaving it in at the meta_path level, there's no reason that needs to be done as part of PEP 420 itself. Instead, it can be done later if anyone comes up with a concrete use case for access the path details without loading packages and modules. Cheers, Nick. -- Nick Coghlan?? |?? ncoghlan at gmail.com?? |?? Brisbane, Australia From eric at trueblade.com Wed May 23 14:31:46 2012 From: eric at trueblade.com (Eric V. Smith) Date: Wed, 23 May 2012 08:31:46 -0400 Subject: [Python-Dev] PEP 420 - dynamic path computation is missing rationale In-Reply-To: References: <4FB9F620.1060602@trueblade.com> <4FBADE8A.2060407@trueblade.com> <4FBC31F8.7070203@trueblade.com> Message-ID: <4FBCD8B2.9040904@trueblade.com> On 05/22/2012 09:49 PM, PJ Eby wrote: > On Tue, May 22, 2012 at 8:40 PM, Eric V. Smith > wrote: > > On 5/22/2012 2:37 PM, Guido van Rossum wrote: > > Okay, I've been convinced that keeping the dynamic path feature is a > > good idea. I am really looking forward to seeing the rationale added > > to the PEP -- that's pretty much the last thing on my list that made > > me hesitate. I'll leave the details of exactly how the parent path is > > referenced up to the implementation team (several good points were > > made), as long as the restriction that sys.path must be modified in > > place is lifted. > > I've updated the PEP. Let me know how it looks. > > > My name is misspelled in it, but otherwise it looks fine. ;-) Oops, sorry. Fixed (I think). > I have not updated the implementation yet. I'm not exactly sure how I'm > going to convert from a path list of unknown origin to ('sys', 'path') > or ('foo', '__path__'). I'll look at it later tonight to see if it's > possible. I'm hoping it doesn't require major surgery to > importlib._bootstrap. > > > It shouldn't - all you should need is to use > getattr(sys.modules[self.modname], self.attr) instead of referencing a > parent path object directly. The problem isn't the lookup, it's coming up with self.modname and self.attr. As it currently stands, PathFinder.find_module is given the parent path, not the module name and attribute name used to look up the parent path using sys.modules and getattr. Eric. From ncoghlan at gmail.com Wed May 23 15:02:15 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 23 May 2012 23:02:15 +1000 Subject: [Python-Dev] PEP 420 - dynamic path computation is missing rationale In-Reply-To: <4FBCD8B2.9040904@trueblade.com> References: <4FB9F620.1060602@trueblade.com> <4FBADE8A.2060407@trueblade.com> <4FBC31F8.7070203@trueblade.com> <4FBCD8B2.9040904@trueblade.com> Message-ID: On Wed, May 23, 2012 at 10:31 PM, Eric V. Smith wrote: > On 05/22/2012 09:49 PM, PJ Eby wrote: >> It shouldn't - all you should need is to use >> getattr(sys.modules[self.modname], self.attr) instead of referencing a >> parent path object directly. > > The problem isn't the lookup, it's coming up with self.modname and > self.attr. As it currently stands, PathFinder.find_module is given the > parent path, not the module name and attribute name used to look up the > parent path using sys.modules and getattr. Right, that's what PJE and I were discussing. Instead of passing in the path object directly, you can instead pass an object that *lazily* retrieves the path object in its __iter__ method: class LazyIterable: """On iteration, retrieves a reference to a named iterable and returns an iterator over that iterable""" def __init__(self, modname, attribute): self.modname = modname self.attribute = attribute def __iter__(self): mod = import_module(self.modname) # Will almost always get a hit directly in sys.modules return iter(getattr(mod, self.attribute) Where importlib currently passes None or sys.path as the path argument to find_module(), instead pass "LazyIterable('sys', 'path')" and where it currently passes package.__path__, instead pass "LazyIterable(package.__name__, '__path__')". The existing for loop iteration and tuple() calls should then take care of the lazy lookup automatically. That way, the only code that needs to know the values of modname and attribute is the code that already has access to those values. Cheers, Nick. -- Nick Coghlan?? |?? ncoghlan at gmail.com?? |?? Brisbane, Australia From eric at trueblade.com Wed May 23 15:10:42 2012 From: eric at trueblade.com (Eric V. Smith) Date: Wed, 23 May 2012 09:10:42 -0400 Subject: [Python-Dev] PEP 420 - dynamic path computation is missing rationale In-Reply-To: References: <4FB9F620.1060602@trueblade.com> <4FBADE8A.2060407@trueblade.com> <4FBC31F8.7070203@trueblade.com> <4FBCD8B2.9040904@trueblade.com> Message-ID: <4FBCE1D2.7040406@trueblade.com> On 05/23/2012 09:02 AM, Nick Coghlan wrote: > On Wed, May 23, 2012 at 10:31 PM, Eric V. Smith wrote: >> On 05/22/2012 09:49 PM, PJ Eby wrote: >>> It shouldn't - all you should need is to use >>> getattr(sys.modules[self.modname], self.attr) instead of referencing a >>> parent path object directly. >> >> The problem isn't the lookup, it's coming up with self.modname and >> self.attr. As it currently stands, PathFinder.find_module is given the >> parent path, not the module name and attribute name used to look up the >> parent path using sys.modules and getattr. > > Right, that's what PJE and I were discussing. Instead of passing in > the path object directly, you can instead pass an object that *lazily* > retrieves the path object in its __iter__ method: Hey, one message at a time! I'm just reading those now. I'd like to hear Brett's comments on this approach. Eric. From pje at telecommunity.com Wed May 23 17:13:01 2012 From: pje at telecommunity.com (PJ Eby) Date: Wed, 23 May 2012 11:13:01 -0400 Subject: [Python-Dev] PEP 420 - dynamic path computation is missing rationale In-Reply-To: References: <4FB9F620.1060602@trueblade.com> <4FBADE8A.2060407@trueblade.com> <4FBC31F8.7070203@trueblade.com> <4FBCD8B2.9040904@trueblade.com> Message-ID: On May 23, 2012 9:02 AM, "Nick Coghlan" wrote: > > On Wed, May 23, 2012 at 10:31 PM, Eric V. Smith wrote: > > On 05/22/2012 09:49 PM, PJ Eby wrote: > >> It shouldn't - all you should need is to use > >> getattr(sys.modules[self.modname], self.attr) instead of referencing a > >> parent path object directly. > > > > The problem isn't the lookup, it's coming up with self.modname and > > self.attr. As it currently stands, PathFinder.find_module is given the > > parent path, not the module name and attribute name used to look up the > > parent path using sys.modules and getattr. > > Right, that's what PJE and I were discussing. Instead of passing in > the path object directly, you can instead pass an object that *lazily* > retrieves the path object in its __iter__ method: > > class LazyIterable: > """On iteration, retrieves a reference to a named iterable and > returns an iterator over that iterable""" > def __init__(self, modname, attribute): > self.modname = modname > self.attribute = attribute > def __iter__(self): > mod = import_module(self.modname) # Will almost always get > a hit directly in sys.modules > return iter(getattr(mod, self.attribute) > > Where importlib currently passes None or sys.path as the path argument > to find_module(), instead pass "LazyIterable('sys', 'path')" and where > it currently passes package.__path__, instead pass > "LazyIterable(package.__name__, '__path__')". > > The existing for loop iteration and tuple() calls should then take > care of the lazy lookup automatically. > > That way, the only code that needs to know the values of modname and > attribute is the code that already has access to those values. Perhaps calling it a ModulePath instead of a LazyIterable would be better? Also, this is technically a change from PEP 302, which says the actual sys.path or __path__ are passed to find_module(). I'm not sure whether any find_module() code ever written actually *cares* about this, though. (Especially if, as I believe I understand in this context, we're only talking about meta-importers.) -------------- next part -------------- An HTML attachment was scrubbed... URL: From s.brunthaler at uci.edu Wed May 23 20:40:37 2012 From: s.brunthaler at uci.edu (stefan brunthaler) Date: Wed, 23 May 2012 11:40:37 -0700 Subject: [Python-Dev] Benchmark performance... Message-ID: Hi, as Antoine pointed out in the corresponding issue (http://bugs.python.org/issue14757#msg160870), measuring/assessing real-world performance of my patch would be interesting. I mentioned that I am not aware of any relevant Python 3 program/application to report numbers for (but guess that the speedups should persist.) Since nobody came up with an answer yet, I figured it would be a good idea to ask everybody on python-dev for suggestions... Regards, --stefan From brett at python.org Wed May 23 21:02:25 2012 From: brett at python.org (Brett Cannon) Date: Wed, 23 May 2012 15:02:25 -0400 Subject: [Python-Dev] PEP 420 - dynamic path computation is missing rationale In-Reply-To: <4FBCE1D2.7040406@trueblade.com> References: <4FB9F620.1060602@trueblade.com> <4FBADE8A.2060407@trueblade.com> <4FBC31F8.7070203@trueblade.com> <4FBCD8B2.9040904@trueblade.com> <4FBCE1D2.7040406@trueblade.com> Message-ID: On Wed, May 23, 2012 at 9:10 AM, Eric V. Smith wrote: > On 05/23/2012 09:02 AM, Nick Coghlan wrote: > > On Wed, May 23, 2012 at 10:31 PM, Eric V. Smith > wrote: > >> On 05/22/2012 09:49 PM, PJ Eby wrote: > >>> It shouldn't - all you should need is to use > >>> getattr(sys.modules[self.modname], self.attr) instead of referencing a > >>> parent path object directly. > >> > >> The problem isn't the lookup, it's coming up with self.modname and > >> self.attr. As it currently stands, PathFinder.find_module is given the > >> parent path, not the module name and attribute name used to look up the > >> parent path using sys.modules and getattr. > > > > Right, that's what PJE and I were discussing. Instead of passing in > > the path object directly, you can instead pass an object that *lazily* > > retrieves the path object in its __iter__ method: > > Hey, one message at a time! I'm just reading those now. > > I'd like to hear Brett's comments on this approach. If I understand the proposal correctly, this would be a change in NamespaceLoader in how it sets __path__ and in no way affect any other code since __import__() just grabs the object on __path__ and passes as an argument to the meta path finders which just iterate over the object, so I have no objections to it. -Brett -------------- next part -------------- An HTML attachment was scrubbed... URL: From pje at telecommunity.com Wed May 23 21:35:28 2012 From: pje at telecommunity.com (PJ Eby) Date: Wed, 23 May 2012 15:35:28 -0400 Subject: [Python-Dev] PEP 420 - dynamic path computation is missing rationale In-Reply-To: References: <4FB9F620.1060602@trueblade.com> <4FBADE8A.2060407@trueblade.com> <4FBC31F8.7070203@trueblade.com> <4FBCD8B2.9040904@trueblade.com> <4FBCE1D2.7040406@trueblade.com> Message-ID: On Wed, May 23, 2012 at 3:02 PM, Brett Cannon wrote: > If I understand the proposal correctly, this would be a change in > NamespaceLoader in how it sets __path__ and in no way affect any other code > since __import__() just grabs the object on __path__ and passes as an > argument to the meta path finders which just iterate over the object, so I > have no objections to it. That's not *quite* the proposal (but almost). The change would also mean that __import__() instead passes a ModulePath (aka Nick's LazyIterable) instance to the meta path finders, which just iterate over it. But other than that, yes. -------------- next part -------------- An HTML attachment was scrubbed... URL: From brett at python.org Wed May 23 21:56:07 2012 From: brett at python.org (Brett Cannon) Date: Wed, 23 May 2012 15:56:07 -0400 Subject: [Python-Dev] PEP 420 - dynamic path computation is missing rationale In-Reply-To: References: <4FB9F620.1060602@trueblade.com> <4FBADE8A.2060407@trueblade.com> <4FBC31F8.7070203@trueblade.com> <4FBCD8B2.9040904@trueblade.com> <4FBCE1D2.7040406@trueblade.com> Message-ID: On Wed, May 23, 2012 at 3:35 PM, PJ Eby wrote: > On Wed, May 23, 2012 at 3:02 PM, Brett Cannon wrote: > >> If I understand the proposal correctly, this would be a change in >> NamespaceLoader in how it sets __path__ and in no way affect any other code >> since __import__() just grabs the object on __path__ and passes as an >> argument to the meta path finders which just iterate over the object, so I >> have no objections to it. > > > That's not *quite* the proposal (but almost). The change would also mean > that __import__() instead passes a ModulePath (aka Nick's LazyIterable) > instance to the meta path finders, which just iterate over it. But other > than that, yes. > And why does __import__() need to construct that? I thought NamespaceLoader was going to be making these "magical" __path__ objects that detected changes and thus update themselves as necessary and just stick them on the object. Why specifically does __import__() need to play a role? -------------- next part -------------- An HTML attachment was scrubbed... URL: From sandro.tosi at gmail.com Wed May 23 21:56:40 2012 From: sandro.tosi at gmail.com (Sandro Tosi) Date: Wed, 23 May 2012 21:56:40 +0200 Subject: [Python-Dev] [Python-checkins] cpython: Issue #14814: addition of the ipaddress module (stage 1 - code and tests) In-Reply-To: References: <4FB92765.803@udel.edu> Message-ID: On Tue, May 22, 2012 at 10:43 PM, Terry Reedy wrote: > On 5/22/2012 3:59 PM, Sandro Tosi wrote: >> On Sun, May 20, 2012 at 7:18 PM, Terry Reedy ?wrote >>>> + ? ?Args: >>>> + ? ? ? ?first: the first IPv4Address or IPv6Address in the range. >>>> + ? ? ? ?last: the last IPv4Address or IPv6Address in the range. >>>> + >>>> + ? ?Returns: >>>> + ? ? ? ?An iterator of the summarized IPv(4|6) network objects. >>> >>> >>> Very clear as to types. >> >> >> I don't think I get exactly what you mean here. > > > This docstring clearly says what the input type is instead of the more vague > 'address'. Also, the output is pretty clearly an iterable of IPv#Address > objects. I meant to contrast this as a good example compared to some of the > previous docstrings. Ah now I see, thanks for fixing my understanding ;) Cheers, -- Sandro Tosi (aka morph, morpheus, matrixhasu) My website: http://matrixhasu.altervista.org/ Me at Debian: http://wiki.debian.org/SandroTosi From eric at trueblade.com Wed May 23 23:29:41 2012 From: eric at trueblade.com (Eric V. Smith) Date: Wed, 23 May 2012 17:29:41 -0400 Subject: [Python-Dev] PEP 420 - dynamic path computation is missing rationale In-Reply-To: References: <4FB9F620.1060602@trueblade.com> <4FBADE8A.2060407@trueblade.com> <4FBC31F8.7070203@trueblade.com> <4FBCD8B2.9040904@trueblade.com> <4FBCE1D2.7040406@trueblade.com> Message-ID: <4FBD56C5.7000605@trueblade.com> On 05/23/2012 03:56 PM, Brett Cannon wrote: > > > On Wed, May 23, 2012 at 3:35 PM, PJ Eby > wrote: > > On Wed, May 23, 2012 at 3:02 PM, Brett Cannon > wrote: > > If I understand the proposal correctly, this would be a change > in NamespaceLoader in how it sets __path__ and in no way affect > any other code since __import__() just grabs the object on > __path__ and passes as an argument to the meta path finders > which just iterate over the object, so I have no objections to it. > > > That's not *quite* the proposal (but almost). The change would also > mean that __import__() instead passes a ModulePath (aka Nick's > LazyIterable) instance to the meta path finders, which just iterate > over it. But other than that, yes. > > > And why does __import__() need to construct that? I thought > NamespaceLoader was going to be making these "magical" __path__ objects > that detected changes and thus update themselves as necessary and just > stick them on the object. Why specifically does __import__() need to > play a role? Assume that we're talking about importing either a top-level namespace package named 'parent' and a nested namespace package parent.child. The problem is that NamespaceLoader is just passed the parent path (typically sys.path, but if a sub-package then parent.__path__). The concern is that if the parent path object is replaced: sys.path = sys.path + ['new-dir'] or parent.__path__ = ['new-dir'] then the NamespaceLoader instance can no longer detect changes to parent_path. So the proposed solution is for NamespaceLoader to be told the name of the parent module ('sys' or 'parent') and the attribute name to use to find the path ('path' or '__path__'). Here's another suggestion: instead of modifying the finder/loader code to pass these names through, assume that we can always find (module_name, attribute_name) with this code: def find_parent_path_names(module): parent, dot, me = module.__name__.rpartition('.') if dot == '': return 'sys', 'path' return parent, '__path__' >>> import glob >>> find_parent_path_names(glob) ('sys', 'path') >>> import unittest.test.test_case >>> find_parent_path_names(unittest.test.test_case) ('unittest.test', '__path__') I guess it's a little more fragile than passing in these names to NamespaceLoader, but it requires less code to change. I think I'll whip this up in the pep-420 branch. Eric. From prem1pre at gmail.com Thu May 24 01:00:11 2012 From: prem1pre at gmail.com (PremAnand Lakshmanan) Date: Wed, 23 May 2012 19:00:11 -0400 Subject: [Python-Dev] Python db2 installation error Message-ID: I want to install python db2 package for Python but Im unable to install it. I have installed the easy_install and Im able to successfully import the easy_install. My easy_install location :c:/python27/lib/site-packages/ My db2 egg location c:/python27/ibm_db-1.0.5-py2.7-win32.egg How would my installation command look like in the shell, I tried this command and it gives me invalid error, C:\Python27\Scripts>easy_install c:/python27/lib/site-packages/ibm_db-1.0.5-py2. 7-win32.egg error: Not a URL, existing file, or requirement spec: 'c:/python27/lib/site-pack ages/ibm_db-1.0.5-py2.7-win32.egg' -- Prem 408-393-2545 -------------- next part -------------- An HTML attachment was scrubbed... URL: From tjreedy at udel.edu Thu May 24 01:45:50 2012 From: tjreedy at udel.edu (Terry Reedy) Date: Wed, 23 May 2012 19:45:50 -0400 Subject: [Python-Dev] Python db2 installation error In-Reply-To: References: Message-ID: On 5/23/2012 7:00 PM, PremAnand Lakshmanan wrote: > I want to install python db2 package for Python but Im unable to install it. pydev list is for development of future python releases. Ask questions about using existing python releases on python-list or the gmane mirror. -- Terry Jan Reedy From eric at trueblade.com Thu May 24 02:24:01 2012 From: eric at trueblade.com (Eric V. Smith) Date: Wed, 23 May 2012 20:24:01 -0400 Subject: [Python-Dev] PEP 420 - dynamic path computation is missing rationale In-Reply-To: <4FBD56C5.7000605@trueblade.com> References: <4FB9F620.1060602@trueblade.com> <4FBADE8A.2060407@trueblade.com> <4FBC31F8.7070203@trueblade.com> <4FBCD8B2.9040904@trueblade.com> <4FBCE1D2.7040406@trueblade.com> <4FBD56C5.7000605@trueblade.com> Message-ID: <4FBD7FA1.7010107@trueblade.com> > Here's another suggestion: instead of modifying the finder/loader code > to pass these names through, assume that we can always find > (module_name, attribute_name) with this code: > > def find_parent_path_names(module): > parent, dot, me = module.__name__.rpartition('.') > if dot == '': > return 'sys', 'path' > return parent, '__path__' > >>>> import glob >>>> find_parent_path_names(glob) > ('sys', 'path') >>>> import unittest.test.test_case >>>> find_parent_path_names(unittest.test.test_case) > ('unittest.test', '__path__') > > I guess it's a little more fragile than passing in these names to > NamespaceLoader, but it requires less code to change. > > I think I'll whip this up in the pep-420 branch. I tried this approach and it works fine. The only caveat is that it assumes that the parent path can always be computed as described above, independent of what's passed in to PathFinder.load_module(). I think that's reasonable, since load_module() itself hard-codes sys.path if the supplied path is missing. I've checked this in to the pep-420 branch. I prefer this approach over Nick's because it doesn't require any changes to any existing interfaces. The changes are contained to the namespace package code and don't affect other code in importlib. Assuming this approach is acceptable, I'm done with the PEP except for adding some examples. And I'm done with the implementation except for adding tests and a few small tweaks. Eric. From pje at telecommunity.com Thu May 24 02:58:53 2012 From: pje at telecommunity.com (PJ Eby) Date: Wed, 23 May 2012 20:58:53 -0400 Subject: [Python-Dev] PEP 420 - dynamic path computation is missing rationale In-Reply-To: <4FBD7FA1.7010107@trueblade.com> References: <4FB9F620.1060602@trueblade.com> <4FBADE8A.2060407@trueblade.com> <4FBC31F8.7070203@trueblade.com> <4FBCD8B2.9040904@trueblade.com> <4FBCE1D2.7040406@trueblade.com> <4FBD56C5.7000605@trueblade.com> <4FBD7FA1.7010107@trueblade.com> Message-ID: On Wed, May 23, 2012 at 8:24 PM, Eric V. Smith wrote: > I tried this approach and it works fine. The only caveat is that it > assumes that the parent path can always be computed as described above, > independent of what's passed in to PathFinder.load_module(). I think > that's reasonable, since load_module() itself hard-codes sys.path if the > supplied path is missing. > Technically, PEP 302 says that finders aren't allowed to assume their parent packages are imported: """ However, the find_module() method isn't necessarily always called during an actual import: meta tools that analyze import dependencies (such as freeze, Installer or py2exe) don't actually load modules, so a finder shouldn't *depend* on the parent package being available in sys.modules.""" OTOH, that's finders, and I think we're dealing with loaders here. Splitting hairs, perhaps, but at least it's in a good cause. ;-) I've checked this in to the pep-420 branch. I prefer this approach over > Nick's because it doesn't require any changes to any existing > interfaces. The changes are contained to the namespace package code and > don't affect other code in importlib. > > Assuming this approach is acceptable, I'm done with the PEP except for > adding some examples. > > And I'm done with the implementation except for adding tests and a few > small tweaks. > Yay! -------------- next part -------------- An HTML attachment was scrubbed... URL: From eric at trueblade.com Thu May 24 03:02:04 2012 From: eric at trueblade.com (Eric V. Smith) Date: Wed, 23 May 2012 21:02:04 -0400 Subject: [Python-Dev] PEP 420 - dynamic path computation is missing rationale In-Reply-To: References: <4FBADE8A.2060407@trueblade.com> <4FBC31F8.7070203@trueblade.com> <4FBCD8B2.9040904@trueblade.com> <4FBCE1D2.7040406@trueblade.com> <4FBD56C5.7000605@trueblade.com> <4FBD7FA1.7010107@trueblade.com> Message-ID: <4FBD888C.7090909@trueblade.com> On 5/23/2012 8:58 PM, PJ Eby wrote: > On Wed, May 23, 2012 at 8:24 PM, Eric V. Smith > wrote: > > I tried this approach and it works fine. The only caveat is that it > assumes that the parent path can always be computed as described above, > independent of what's passed in to PathFinder.load_module(). I think > that's reasonable, since load_module() itself hard-codes sys.path if the > supplied path is missing. > > > Technically, PEP 302 says that finders aren't allowed to assume their > parent packages are imported: > > """ However, the find_module() method isn't necessarily always called > during an actual import: meta tools that analyze import dependencies > (such as freeze, Installer or py2exe) don't actually load modules, so a > finder shouldn't /depend/ on the parent package being available in > sys.modules.""" > > OTOH, that's finders, and I think we're dealing with loaders here. > Splitting hairs, perhaps, but at least it's in a good cause. ;-) I guess I could store the passed-in parent path, and use that if it can't be found through sys.modules. I'm not sure I can conjure up code to test this. From pje at telecommunity.com Thu May 24 04:58:53 2012 From: pje at telecommunity.com (PJ Eby) Date: Wed, 23 May 2012 22:58:53 -0400 Subject: [Python-Dev] PEP 420 - dynamic path computation is missing rationale In-Reply-To: <4FBD888C.7090909@trueblade.com> References: <4FBADE8A.2060407@trueblade.com> <4FBC31F8.7070203@trueblade.com> <4FBCD8B2.9040904@trueblade.com> <4FBCE1D2.7040406@trueblade.com> <4FBD56C5.7000605@trueblade.com> <4FBD7FA1.7010107@trueblade.com> <4FBD888C.7090909@trueblade.com> Message-ID: On Wed, May 23, 2012 at 9:02 PM, Eric V. Smith wrote: > On 5/23/2012 8:58 PM, PJ Eby wrote: > > On Wed, May 23, 2012 at 8:24 PM, Eric V. Smith > > wrote: > > > > I tried this approach and it works fine. The only caveat is that it > > assumes that the parent path can always be computed as described > above, > > independent of what's passed in to PathFinder.load_module(). I think > > that's reasonable, since load_module() itself hard-codes sys.path if > the > > supplied path is missing. > > > > > > Technically, PEP 302 says that finders aren't allowed to assume their > > parent packages are imported: > > > > """ However, the find_module() method isn't necessarily always called > > during an actual import: meta tools that analyze import dependencies > > (such as freeze, Installer or py2exe) don't actually load modules, so a > > finder shouldn't /depend/ on the parent package being available in > > sys.modules.""" > > > > OTOH, that's finders, and I think we're dealing with loaders here. > > Splitting hairs, perhaps, but at least it's in a good cause. ;-) > > I guess I could store the passed-in parent path, and use that if it > can't be found through sys.modules. > > I'm not sure I can conjure up code to test this. > I actually was suggesting that we change PEP 302, if it became an issue. ;-) -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Thu May 24 05:49:08 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 24 May 2012 13:49:08 +1000 Subject: [Python-Dev] PEP 420 - dynamic path computation is missing rationale In-Reply-To: <4FBD888C.7090909@trueblade.com> References: <4FBADE8A.2060407@trueblade.com> <4FBC31F8.7070203@trueblade.com> <4FBCD8B2.9040904@trueblade.com> <4FBCE1D2.7040406@trueblade.com> <4FBD56C5.7000605@trueblade.com> <4FBD7FA1.7010107@trueblade.com> <4FBD888C.7090909@trueblade.com> Message-ID: On Thu, May 24, 2012 at 11:02 AM, Eric V. Smith wrote: > On 5/23/2012 8:58 PM, PJ Eby wrote: >> OTOH, that's finders, and I think we're dealing with loaders here. >> Splitting hairs, perhaps, but at least it's in a good cause. ?;-) > > I guess I could store the passed-in parent path, and use that if it > can't be found through sys.modules. > > I'm not sure I can conjure up code to test this. I don't think there's a need to change anything from your current strategy, but we should be clear in the docs: 1. Finders should *not* assume their parent packages have been loaded (and should not load them implicitly) 2. Loaders *can* assume their parent packages have already been loaded and are present in sys.modules (and can complain if they're not there) Cheers, Nick. -- Nick Coghlan?? |?? ncoghlan at gmail.com?? |?? Brisbane, Australia From g.brandl at gmx.net Thu May 24 08:10:14 2012 From: g.brandl at gmx.net (Georg Brandl) Date: Thu, 24 May 2012 08:10:14 +0200 Subject: [Python-Dev] Python db2 installation error In-Reply-To: References: Message-ID: Am 24.05.2012 01:45, schrieb Terry Reedy: > On 5/23/2012 7:00 PM, PremAnand Lakshmanan wrote: >> I want to install python db2 package for Python but Im unable to install it. > > pydev list is for development of future python releases. Ask questions > about using existing python releases on python-list or the gmane mirror. No please? Georg From sturla at molden.no Thu May 24 14:03:00 2012 From: sturla at molden.no (Sturla Molden) Date: Thu, 24 May 2012 14:03:00 +0200 Subject: [Python-Dev] possible bug in distutils (Mingw32CCompiler)? Message-ID: <4FBE2374.6020008@molden.no> Mingw32CCompiler in cygwincompiler.py emits the symbol -mno-cygwin. This is used to make Cygwin's gcc behave as mingw. As of gcc 4.6 it is not recognized by the mingw gcc compiler itself, and causes as crash. It should be removed because it is never needed for mingw (in any version), only for cross-compilation to mingw from other gcc versions. Instead, those who use CygwinCCompiler or Linux GCC to "cross-compile" to plain Win32 can set -mno-cygwin manually. It also means -mcygwin should be removed from the output of CygwinCCompiler. I think... Sturla From ncoghlan at gmail.com Thu May 24 14:47:14 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 24 May 2012 22:47:14 +1000 Subject: [Python-Dev] [Python-checkins] peps: Added examples. In-Reply-To: References: Message-ID: On Thu, May 24, 2012 at 8:10 PM, eric.smith wrote: > + ? Lib/test/namspace_pkgs Typo: s/namspace/namespace/ > +Here we add the parent directories to ``sys.path``, and show that the > +portions are correctly found:: > + > + ? ?>>> import sys > + ? ?>>> sys.path += ['Lib/test/namespace_pkgs/parent1/parent', 'Lib/test/namespace_pkgs/parent2/parent'] The trailing "/parent" shouldn't be there on either of these paths. The comments that refer back to these also need the same adjustment. > + ? Lib/test/namspace_pkgs Same typo as above. > + ? ?# add the first two parent paths to sys.path > + ? ?>>> import sys > + ? ?>>> sys.path += ['Lib/test/namespace_pkgs/parent1/parent', 'Lib/test/namespace_pkgs/parent2/parent'] Again, need to lose the last directory from these paths and the comments that refer to them. > + ? ?# now add parent3 to the parent's __path__: > + ? ?>>> parent.__path__.append('Lib/test/namespace_pkgs/parent3/parent') This modification is incorrect, it should be: sys.path.append('Lib/test/namespace_pkgs/parent3') and both parent.__path__ and parent.child.__path__ should pick up their extra portions on the next import attempt. Also, I suggest renaming "parent1", "parent2" and "parent3" to "project1", "project2" and "project3". Cheers, Nick. -- Nick Coghlan?? |?? ncoghlan at gmail.com?? |?? Brisbane, Australia From brian at python.org Thu May 24 15:45:30 2012 From: brian at python.org (Brian Curtin) Date: Thu, 24 May 2012 08:45:30 -0500 Subject: [Python-Dev] possible bug in distutils (Mingw32CCompiler)? In-Reply-To: <4FBE2374.6020008@molden.no> References: <4FBE2374.6020008@molden.no> Message-ID: On Thu, May 24, 2012 at 7:03 AM, Sturla Molden wrote: > > Mingw32CCompiler in cygwincompiler.py emits the symbol -mno-cygwin. > > This is used to make Cygwin's gcc behave as mingw. As of gcc 4.6 it is not > recognized by the mingw gcc compiler itself, and causes as crash. It should > be removed because it is never needed for mingw (in any version), only for > cross-compilation to mingw from other gcc versions. > > Instead, those who use CygwinCCompiler or Linux GCC to "cross-compile" to > plain Win32 can set -mno-cygwin manually. It also means -mcygwin should be > removed from the output of CygwinCCompiler. > > I think... Please report bugs to http://bugs.python.org so they don't get lost in email. The relevant people will be notified or assigned if a bug is entered. From rdmurray at bitdance.com Thu May 24 16:11:19 2012 From: rdmurray at bitdance.com (R. David Murray) Date: Thu, 24 May 2012 10:11:19 -0400 Subject: [Python-Dev] possible bug in distutils (Mingw32CCompiler)? In-Reply-To: References: <4FBE2374.6020008@molden.no> Message-ID: <20120524141120.42D6325008C@webabinitio.net> On Thu, 24 May 2012 08:45:30 -0500, Brian Curtin wrote: > On Thu, May 24, 2012 at 7:03 AM, Sturla Molden wrote: > > > > Mingw32CCompiler in cygwincompiler.py emits the symbol -mno-cygwin. > > > > This is used to make Cygwin's gcc behave as mingw. As of gcc 4.6 it is not > > recognized by the mingw gcc compiler itself, and causes as crash. It should > > be removed because it is never needed for mingw (in any version), only for > > cross-compilation to mingw from other gcc versions. > > > > Instead, those who use CygwinCCompiler or Linux GCC to "cross-compile" to > > plain Win32 can set -mno-cygwin manually. It also means -mcygwin should be > > removed from the output of CygwinCCompiler. > > > > I think... > > Please report bugs to http://bugs.python.org so they don't get lost in > email. The relevant people will be notified or assigned if a bug is > entered. It was already reported by someone else: http://bugs.python.org/issue12641 --David From guido at python.org Thu May 24 20:33:08 2012 From: guido at python.org (Guido van Rossum) Date: Thu, 24 May 2012 11:33:08 -0700 Subject: [Python-Dev] PEP 420 - dynamic path computation is missing rationale In-Reply-To: References: <4FBADE8A.2060407@trueblade.com> <4FBC31F8.7070203@trueblade.com> <4FBCD8B2.9040904@trueblade.com> <4FBCE1D2.7040406@trueblade.com> <4FBD56C5.7000605@trueblade.com> <4FBD7FA1.7010107@trueblade.com> <4FBD888C.7090909@trueblade.com> Message-ID: I've reviewed the updates to the PEP and have accepted it. Congrats all! I know the implementation is lagging behind a bit, that's not a problem. Just get it into the next 3.3 alpha release! -- --Guido van Rossum (python.org/~guido) From eric at trueblade.com Thu May 24 20:42:18 2012 From: eric at trueblade.com (Eric V. Smith) Date: Thu, 24 May 2012 14:42:18 -0400 Subject: [Python-Dev] PEP 420 - dynamic path computation is missing rationale In-Reply-To: References: <4FBC31F8.7070203@trueblade.com> <4FBCD8B2.9040904@trueblade.com> <4FBCE1D2.7040406@trueblade.com> <4FBD56C5.7000605@trueblade.com> <4FBD7FA1.7010107@trueblade.com> <4FBD888C.7090909@trueblade.com> Message-ID: <4FBE810A.7040807@trueblade.com> On 5/24/2012 2:33 PM, Guido van Rossum wrote: > I've reviewed the updates to the PEP and have accepted it. Congrats all! Thanks to the many people who helped: Martin, Barry, Guido, Jason, Nick, PJE, and others. I'm sure I've offended someone by leaving them out, and I apologize in advance. But special thanks to Brett. Without his work on importlib, this never would have happened (as Barry, Jason, and I demonstrated on a two or three occasions)! > I know the implementation is lagging behind a bit, that's not a > problem. Just get it into the next 3.3 alpha release! It's only missing a few small things. I'll get it committed in the next day or so. Eric. From daniel at heroku.com Thu May 24 21:11:58 2012 From: daniel at heroku.com (Daniel Farina) Date: Thu, 24 May 2012 12:11:58 -0700 Subject: [Python-Dev] An infinite loop in dictobject.c Message-ID: Hello all. I seem to be encountering somewhat rare an infinite loop in hash table probing while importing _socket, as triggered by init_socket.c in Python 2.6, as seen/patched shipped with Ubuntu 10.04 LTS. The problem only reproduces on 32 bit machines, on both -O2 and -O0 builds (which is how I have managed to retrieve the detailed stack traces below). To cut to the chase, the bottom of the stack trace invariably looks like this, in particular the "key" (and therefore "hash") value is always the same: #0 0x08088637 in lookdict_string (mp=0xa042714, key='SO_RCVTIMEO', hash=612808203) at ../Objects/dictobject.c:421 #1 0x080886cd in insertdict (mp=0xa042714, key='SO_RCVTIMEO', hash=612808203, value=20) at ../Objects/dictobject.c:450 #2 0x08088cac in PyDict_SetItem (op=, key= 'SO_RCVTIMEO', value=20) at ../Objects/dictobject.c:701 #3 0x0808b8d4 in PyDict_SetItemString (v= {'AF_INET6': 10, 'SocketType': , 'getaddrinfo': , 'TIPC_MEDIUM_IMPORTANCE': 1, 'htonl': , 'AF_UNSPEC': 0, 'TIPC_DEST_DROPPABLE': 129, 'TIPC_ADDR_ID': 3, 'PF_PACKET': 17, 'AF_WANPIPE': 25, 'PACKET_OTHERHOST': 3, 'AF_AX25': 3, 'PACKET_BROADCAST': 1, 'PACKET_FASTROUTE': 6, 'TIPC_NODE_SCOPE': 3, 'inet_pton': , 'AF_ATMPVC': 8, 'NETLINK_IP6_FW': 13, 'NETLINK_ROUTE': 0, 'TIPC_PUBLISHED': 1, 'TIPC_WITHDRAWN': 2, 'AF_ECONET': 19, 'AF_LLC': 26, '__name__': '_socket', 'AF_NETROM': 6, 'SOCK_RDM': 4, 'AF_IRDA': 23, 'htons': , 'SOCK_RAW': 3, 'inet_ntoa': , 'AF_NETBEUI': 13, 'AF_NETLINK': 16, 'TIPC_WAIT_FOREVER': -1, 'AF_UNIX': 1, 'TIPC_SUB_PORTS': 1, 'HCI_TIME_STAMP': 3, 'gethostbyname_ex': , 'SO_RCVBUF': 8, 'AF_APPLETALK': 5, 'SOCK_SEQPACKET': 5, 'AF_DECnet': 12, 'PACKET_OUTGOING': 4, 'SO_SNDLOWAT': 19, 'TIPC_SRC_DROPPABLE':...(truncated), key=0x81ac5fb "SO_RCVTIMEO", item=20) at ../Objects/dictobject.c:2301 #4 0x080f6c98 in PyModule_AddObject (m=, name= 0x81ac5fb "SO_RCVTIMEO", o=20) at ../Python/modsupport.c:615 #5 0x080f6d0b in PyModule_AddIntConstant (m=, name=0x81ac5fb "SO_RCVTIMEO", value=20) at ../Python/modsupport.c:627 #6 0x081321fd in init_socket () at ../Modules/socketmodule.c:4708 Here, we never escape from lookdict_string. The key is not in the dictionary, but at this stage Python is trying to figure out that is the case, and cannot seem to exit because of the lack of a dummy entry. Furthermore, every single reproduced case has a dictionary with a suspicious looking violation of an invariant that I believe is communicated by the source of dictobject.c, with emphasis on the values of ma_fill, ma_used, and ma_mask, which never deviate in any reproduced case. It seems like no hash table should ever get this full, per the comments in the source: $3 = {ob_refcnt = 1, ob_type = 0x81c3aa0, ma_fill = 128, ma_used = 128, ma_mask = 127, ma_table = 0xa06b4a8, ma_lookup = 0x8088564 , ma_smalltable = {{me_hash = 0, me_key = 0x0, me_value = 0x0}, {me_hash = 1023053529, me_key = '__name__', me_value = '_socket'}, {me_hash = 1679430097, me_key = 'gethostbyname', me_value = }, {me_hash = 0, me_key = 0x0, me_value = 0x0}, {me_hash = 779452068, me_key = 'gethostbyname_ex', me_value = }, {me_hash = -322108099, me_key = '__doc__', me_value = None}, {me_hash = -1649837379, me_key = 'gethostbyaddr', me_value = }, { me_hash = 1811348911, me_key = '__package__', me_value = None}}} The Python program that is running afoul this bug is using gevent, but the stack traces suggest that all gevent is doing at the time this crashes is importing "socket", and this is done at the very, very beginning of program execution. Finally, what's especially strange is that I had gone a very long time running this exact version of Python, libraries, and application quite frequently: it suddenly started cropping up a little while ago (maybe a few weeks). It could have been just coincidence, but if there are code paths in init_socket.c that may somehow be sensitive to the network somehow, this could have been related. I also have a limited suspicion that particularly unlucky OOM (these systems are configured in a way where malloc and friends will return NULL, i.e. no overcommit on Linux) could be related. Any guiding words, known bugs, or suspicions? -- fdr From mark at hotpy.org Thu May 24 21:59:37 2012 From: mark at hotpy.org (Mark Shannon) Date: Thu, 24 May 2012 20:59:37 +0100 Subject: [Python-Dev] An infinite loop in dictobject.c In-Reply-To: References: Message-ID: <4FBE9329.60408@hotpy.org> Daniel Farina wrote: > Hello all. I seem to be encountering somewhat rare an infinite loop > in hash table probing while importing _socket, as triggered by > init_socket.c in Python 2.6, as seen/patched shipped with Ubuntu 10.04 > LTS. The problem only reproduces on 32 bit machines, on both -O2 and > -O0 builds (which is how I have managed to retrieve the detailed stack Please submit a report to the tracker for this. (Add me to the nosy list if you can) Cheers, Mark. From ncoghlan at gmail.com Thu May 24 22:07:37 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 25 May 2012 06:07:37 +1000 Subject: [Python-Dev] An infinite loop in dictobject.c In-Reply-To: References: Message-ID: If it only started happening recently, suspicion would naturally fall on the hash randomisation security fix (as I assume a new version of Python would have been pushed for 10.04 with that update) Cheers, Nick. -- Sent from my phone, thus the relative brevity :) -------------- next part -------------- An HTML attachment was scrubbed... URL: From ericsnowcurrently at gmail.com Thu May 24 22:11:29 2012 From: ericsnowcurrently at gmail.com (Eric Snow) Date: Thu, 24 May 2012 14:11:29 -0600 Subject: [Python-Dev] PEP 420 - dynamic path computation is missing rationale In-Reply-To: References: <4FBADE8A.2060407@trueblade.com> <4FBC31F8.7070203@trueblade.com> <4FBCD8B2.9040904@trueblade.com> <4FBCE1D2.7040406@trueblade.com> <4FBD56C5.7000605@trueblade.com> <4FBD7FA1.7010107@trueblade.com> <4FBD888C.7090909@trueblade.com> Message-ID: On Thu, May 24, 2012 at 12:33 PM, Guido van Rossum wrote: > I've reviewed the updates to the PEP and have accepted it. Congrats all! Congrats! -eric From ncoghlan at gmail.com Thu May 24 22:13:33 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 25 May 2012 06:13:33 +1000 Subject: [Python-Dev] [Python-checkins] peps: Added examples. In-Reply-To: <4FBE3626.1030605@trueblade.com> References: <4FBE3626.1030605@trueblade.com> Message-ID: On May 24, 2012 11:29 PM, "Eric V. Smith" wrote: > > Possibly I am being too tricky here by modifying parent.__path__, and I > should just modify sys.path again, as you suggest. But I was trying to > show that modifying parent.__path__ will also work. Modifying namespace package __path__ attributes directly seems like a good way to accidentally break the auto-updating. We probably don't want to encourage that. Cheers, Nick. -- Sent from my phone, thus the relative brevity :) -------------- next part -------------- An HTML attachment was scrubbed... URL: From daniel at heroku.com Thu May 24 22:15:43 2012 From: daniel at heroku.com (Daniel Farina) Date: Thu, 24 May 2012 13:15:43 -0700 Subject: [Python-Dev] An infinite loop in dictobject.c In-Reply-To: References: Message-ID: On Thu, May 24, 2012 at 1:07 PM, Nick Coghlan wrote: > If it only started happening recently, suspicion would naturally fall on the > hash randomisation security fix (as I assume a new version of Python would > have been pushed for 10.04 with that update) I do not think so; I do not see in in the backpatches made to 2.6.5 (http://packages.ubuntu.com/lucid/python2.6), unless they are particularly slick. -- fdr From solipsis at pitrou.net Thu May 24 22:15:52 2012 From: solipsis at pitrou.net (Antoine Pitrou) Date: Thu, 24 May 2012 22:15:52 +0200 Subject: [Python-Dev] An infinite loop in dictobject.c References: Message-ID: <20120524221552.36b211e1@pitrou.net> On Thu, 24 May 2012 12:11:58 -0700 Daniel Farina wrote: > > Finally, what's especially strange is that I had gone a very long time > running this exact version of Python, libraries, and application quite > frequently: it suddenly started cropping up a little while ago (maybe > a few weeks). Do you mean it's a hand-compiled Python? Are you sure you didn't recompile it / update to a later version recently? If you didn't change anything, this may be something unrelated to Python, such as a hardware problem. You are right that ma_fill == ma_used should, AFAIK, never happen. Perhaps you could add conditional debug statements when that condition happens, to know where it comes from. Furthermore, if this is a hand-compiled Python, you could reconfigure it --with-pydebug, so as to enable more assertions in the interpreter core (this will make it quite a bit slower too :-)). Regards Antoine. From daniel at heroku.com Thu May 24 22:20:34 2012 From: daniel at heroku.com (Daniel Farina) Date: Thu, 24 May 2012 13:20:34 -0700 Subject: [Python-Dev] An infinite loop in dictobject.c In-Reply-To: <4FBE9329.60408@hotpy.org> References: <4FBE9329.60408@hotpy.org> Message-ID: On Thu, May 24, 2012 at 12:59 PM, Mark Shannon wrote: > Please submit a report to the tracker for this. > (Add me to the nosy list if you can) http://bugs.python.org/issue14903 However, I cannot add you to the nosy list, as you do not show up in the search. -- fdr From barry at python.org Thu May 24 21:56:18 2012 From: barry at python.org (Barry Warsaw) Date: Thu, 24 May 2012 15:56:18 -0400 Subject: [Python-Dev] An infinite loop in dictobject.c In-Reply-To: References: Message-ID: <20120524155618.368fbe32@resist.wooz.org> On May 25, 2012, at 06:07 AM, Nick Coghlan wrote: >If it only started happening recently, suspicion would naturally fall on >the hash randomisation security fix (as I assume a new version of Python >would have been pushed for 10.04 with that update) I do not think the hash randomization patch has been pushed to Python 2.6 in Lucid 10.04.4 yet, which still has Python 2.6.5 (plus patches, but not that one). -Barry From daniel at heroku.com Thu May 24 22:23:59 2012 From: daniel at heroku.com (Daniel Farina) Date: Thu, 24 May 2012 13:23:59 -0700 Subject: [Python-Dev] An infinite loop in dictobject.c In-Reply-To: <20120524221552.36b211e1@pitrou.net> References: <20120524221552.36b211e1@pitrou.net> Message-ID: On Thu, May 24, 2012 at 1:15 PM, Antoine Pitrou wrote: > On Thu, 24 May 2012 12:11:58 -0700 > Daniel Farina wrote: >> >> Finally, what's especially strange is that I had gone a very long time >> running this exact version of Python, libraries, and application quite >> frequently: it suddenly started cropping up a little while ago (maybe >> a few weeks). > > Do you mean it's a hand-compiled Python? Are you sure you didn't > recompile it / update to a later version recently? Quite sure. It was vanilla Ubuntu, and then I side-graded it to vanilla ubuntu at -O0. > If you didn't change anything, this may be something unrelated to > Python, such as a hardware problem. Occurs on a smattering of a large number of systems more or less at random. Seems unlikely. > You are right that ma_fill == ma_used should, AFAIK, never happen. > Perhaps you could add conditional debug statements when that condition > happens, to know where it comes from. > > Furthermore, if this is a hand-compiled Python, you could reconfigure > it --with-pydebug, so as to enable more assertions in the interpreter > core (this will make it quite a bit slower too :-)). Yes, this is my next step, although I am going to do a bit more whacking of the interpreter as to pause rather than crash when it encounters this problem. -- fdr From tjreedy at udel.edu Thu May 24 23:27:11 2012 From: tjreedy at udel.edu (Terry Reedy) Date: Thu, 24 May 2012 17:27:11 -0400 Subject: [Python-Dev] An infinite loop in dictobject.c In-Reply-To: References: <4FBE9329.60408@hotpy.org> Message-ID: On 5/24/2012 4:20 PM, Daniel Farina wrote: > On Thu, May 24, 2012 at 12:59 PM, Mark Shannon wrote: >> Please submit a report to the tracker for this. >> (Add me to the nosy list if you can) > > http://bugs.python.org/issue14903 > > However, I cannot add you to the nosy list, as you do not show up in the search. The nosy list box search only shows people with commit privileges. Others you have to find them in the User List accessible from the sidebar, but that may be admin only for all I know. Anyway, Mark should have said 'as Mark.Shannon'. I have added him on the issue. -- Terry Jan Reedy From ncoghlan at gmail.com Thu May 24 23:28:12 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 25 May 2012 07:28:12 +1000 Subject: [Python-Dev] An infinite loop in dictobject.c In-Reply-To: References: <20120524221552.36b211e1@pitrou.net> Message-ID: On Fri, May 25, 2012 at 6:23 AM, Daniel Farina wrote: >> Furthermore, if this is a hand-compiled Python, you could reconfigure >> it --with-pydebug, so as to enable more assertions in the interpreter >> core (this will make it quite a bit slower too :-)). > > Yes, this is my next step, although I am going to do a bit more > whacking of the interpreter as to pause rather than crash when it > encounters this problem. You may also want to give Victor's faulthandler module a try: http://pypi.python.org/pypi/faulthandler/ Cheers, Nick. -- Nick Coghlan?? |?? ncoghlan at gmail.com?? |?? Brisbane, Australia From tjreedy at udel.edu Fri May 25 00:21:33 2012 From: tjreedy at udel.edu (Terry Reedy) Date: Thu, 24 May 2012 18:21:33 -0400 Subject: [Python-Dev] VS 11 Express is Metro only. Message-ID: The free Visual Studio 11 Express for Windows 8 (still in beta) will produce both 32 and 64 bit binaries and allow multiple languages but will only produce Metro apps. For desktop apps, either the paid Visual Studio versions or the free 2010 Express releases are required. https://www.microsoft.com/visualstudio/11/en-us/products/express bottom of page. Will this inhibit someday moving to Visual Studio 11 Professional or would VS2010 Express or VC++2010 Express still work for hacking on Python or making extensions that would work with any VS11-produced binary? -- Terry Jan Reedy From brian at python.org Fri May 25 00:26:50 2012 From: brian at python.org (Brian Curtin) Date: Thu, 24 May 2012 17:26:50 -0500 Subject: [Python-Dev] VS 11 Express is Metro only. In-Reply-To: References: Message-ID: On Thu, May 24, 2012 at 5:21 PM, Terry Reedy wrote: > The free Visual Studio 11 Express for Windows 8 (still in beta) will produce > both 32 and 64 bit binaries and allow multiple languages but will only > produce Metro apps. For desktop apps, either the paid Visual Studio versions > or the free 2010 Express releases are required. > https://www.microsoft.com/visualstudio/11/en-us/products/express > bottom of page. > > Will this inhibit someday moving to Visual Studio 11 Professional or would > VS2010 Express or VC++2010 Express still work for hacking on Python or > making extensions that would work with any VS11-produced binary? I don't know. Maybe? Windows 8 and VS11 are still not released so who knows what will happen. From martin at v.loewis.de Fri May 25 00:36:47 2012 From: martin at v.loewis.de (martin at v.loewis.de) Date: Fri, 25 May 2012 00:36:47 +0200 Subject: [Python-Dev] VS 11 Express is Metro only. In-Reply-To: References: Message-ID: <20120525003647.Horde.IQhrGKGZi1VPvrf-vtuimuA@webmail.df.eu> > The free Visual Studio 11 Express for Windows 8 (still in beta) will > produce both 32 and 64 bit binaries and allow multiple languages but > will only produce Metro apps. For desktop apps, either the paid > Visual Studio versions or the free 2010 Express releases are required. > https://www.microsoft.com/visualstudio/11/en-us/products/express > bottom of page. > > Will this inhibit someday moving to Visual Studio 11 Professional or > would VS2010 Express or VC++2010 Express still work for hacking on > Python or making extensions that would work with any VS11-produced > binary? I think it's too early to guess what the final release of Visual Studio 11 Express will or will not include. Regards, Martin From ncoghlan at gmail.com Fri May 25 10:44:14 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 25 May 2012 18:44:14 +1000 Subject: [Python-Dev] Accepting PEP 405 (Python Virtual Environments) Message-ID: As the latest round of updates that Carl and Vinay pushed to the PEPs repo have addressed my few remaining questions, I am accepting PEP 405 for inclusion in Python 3.3. Thanks to all involved in working out the spec for what to model directly on virtualenv, and areas where cleaner solutions could be found given the power to tweak the behaviour of the core interpreter and the standard library. Cheers, Nick. -- Nick Coghlan?? |?? ncoghlan at gmail.com?? |?? Brisbane, Australia From jsbueno at python.org.br Fri May 25 13:43:32 2012 From: jsbueno at python.org.br (Joao S. O. Bueno) Date: Fri, 25 May 2012 08:43:32 -0300 Subject: [Python-Dev] VS 11 Express is Metro only. In-Reply-To: <20120525003647.Horde.IQhrGKGZi1VPvrf-vtuimuA@webmail.df.eu> References: <20120525003647.Horde.IQhrGKGZi1VPvrf-vtuimuA@webmail.df.eu> Message-ID: On 24 May 2012 19:36, wrote: >> The free Visual Studio 11 Express for Windows 8 (still in beta) will >> produce both 32 and 64 bit binaries and allow multiple languages but will >> only produce Metro apps. For desktop apps, either the paid Visual Studio >> versions or the free 2010 Express releases are required. >> https://www.microsoft.com/visualstudio/11/en-us/products/express >> bottom of page. >> >> Will this inhibit someday moving to Visual Studio 11 Professional or would >> VS2010 Express or VC++2010 Express still work for hacking on Python or >> making extensions that would work with any VS11-produced binary? > > > I think it's too early to guess what the final release of Visual Studio > 11 Express will or will not include. It is better documented here, and seems something to start thinking about: http://arstechnica.com/information-technology/2012/05/no-cost-desktop-software-development-is-dead-on-windows-8/ > > Regards, > Martin > > > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > http://mail.python.org/mailman/options/python-dev/jsbueno%40python.org.br From martin at v.loewis.de Fri May 25 14:06:22 2012 From: martin at v.loewis.de (martin at v.loewis.de) Date: Fri, 25 May 2012 14:06:22 +0200 Subject: [Python-Dev] VS 11 Express is Metro only. In-Reply-To: References: <20120525003647.Horde.IQhrGKGZi1VPvrf-vtuimuA@webmail.df.eu> Message-ID: <20120525140622.Horde.U3HcfruWis5Pv3W_5yzihNA@webmail.df.eu> > It is better documented here, and seems something to start thinking about: > > http://arstechnica.com/information-technology/2012/05/no-cost-desktop-software-development-is-dead-on-windows-8/ This isn't actually better documentation, since it talks about the future, without being "official" (i.e. it's Peter Bright's opinion, not a Microsoft announcement). I hereby predict that Microsoft will revert this decision, and that VS Express 11 will be able to build CPython. Regards, Martin From curt at hagenlocher.org Fri May 25 14:17:19 2012 From: curt at hagenlocher.org (Curt Hagenlocher) Date: Fri, 25 May 2012 05:17:19 -0700 Subject: [Python-Dev] VS 11 Express is Metro only. In-Reply-To: <20120525140622.Horde.U3HcfruWis5Pv3W_5yzihNA@webmail.df.eu> References: <20120525003647.Horde.IQhrGKGZi1VPvrf-vtuimuA@webmail.df.eu> <20120525140622.Horde.U3HcfruWis5Pv3W_5yzihNA@webmail.df.eu> Message-ID: On Fri, May 25, 2012 at 5:06 AM, wrote: > It is better documented here, and seems something to start thinking about: >> >> http://arstechnica.com/**information-technology/2012/** >> 05/no-cost-desktop-software-**development-is-dead-on-**windows-8/ >> > > This isn't actually better documentation, since it talks about the future, > without being "official" (i.e. it's Peter Bright's opinion, not a Microsoft > announcement). > > I hereby predict that Microsoft will revert this decision, and that VS > Express > 11 will be able to build CPython. > But will it be able to target Windows XP? http://connect.microsoft.com/VisualStudio/feedback/details/690617/bug-apps-created-with-crt-and-mfc-vnext-11-cannot-be-used-on-windows-xp-sp3 (Disclaimer: I work at Microsoft, but I know nothing about either of these topics.) -Curt -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Fri May 25 15:36:35 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 25 May 2012 23:36:35 +1000 Subject: [Python-Dev] VS 11 Express is Metro only. In-Reply-To: References: <20120525003647.Horde.IQhrGKGZi1VPvrf-vtuimuA@webmail.df.eu> <20120525140622.Horde.U3HcfruWis5Pv3W_5yzihNA@webmail.df.eu> Message-ID: On Fri, May 25, 2012 at 10:17 PM, Curt Hagenlocher wrote: > But will it be able to target Windows XP? > > http://connect.microsoft.com/VisualStudio/feedback/details/690617/bug-apps-created-with-crt-and-mfc-vnext-11-cannot-be-used-on-windows-xp-sp3 The key things to remember at this point: 1. There's every chance Microsoft will reverse this decision, for all the reasons they introduced the Express editions of Visual Studio in the first place (e.g. to stop haemorrhaging hobbyist developers to other ecosystems where development tools aren't a profit centre). The collective "WTF?!" from third parties at their current approach (eloquently expressed by Peter Bright over at Ars) is going to be hard for even the most passionate Metro advocates to ignore. 2. It's going to be at least 18 months before CPython's Windows build is likely to migrate to VS2011, and if there's still no desktop app support in the Express edition at that time, that will be a strong argument against migrating. Cheers, Nick. -- Nick Coghlan?? |?? ncoghlan at gmail.com?? |?? Brisbane, Australia From barry at python.org Fri May 25 16:41:30 2012 From: barry at python.org (Barry Warsaw) Date: Fri, 25 May 2012 10:41:30 -0400 Subject: [Python-Dev] [Python-checkins] cpython: issue 14660: Implement PEP 420, namespace packages. In-Reply-To: References: Message-ID: <20120525104130.19b416fb@limelight.wooz.org> On May 25, 2012, at 10:31 AM, Brett Cannon wrote: >Is documentation coming in a separate commit? Yes. I've been reworking the import machinery documentation; it's a work-in-progress on the pep-420 feature clone ('importdocs' branch). I made some good progress and then got side-tracked, but I'm planning on getting back to it soon. -Barry From status at bugs.python.org Fri May 25 18:07:06 2012 From: status at bugs.python.org (Python tracker) Date: Fri, 25 May 2012 18:07:06 +0200 (CEST) Subject: [Python-Dev] Summary of Python tracker Issues Message-ID: <20120525160706.7468D1CA87@psf.upfronthosting.co.za> ACTIVITY SUMMARY (2012-05-18 - 2012-05-25) Python tracker at http://bugs.python.org/ To view or respond to any of the issues listed below, click on the issue. Do NOT respond to this message. Issues counts and deltas: open 3440 ( +8) closed 23254 (+58) total 26694 (+66) Open issues with patches: 1455 Issues opened (44) ================== #11804: expat parser not xml 1.1 (breaks xmlrpclib) http://bugs.python.org/issue11804 reopened by xrg #14852: json and ElementTree parsers misbehave on streams containing m http://bugs.python.org/issue14852 opened by Frederick.Ross #14853: test_file.py depends on sys.stdin being unseekable http://bugs.python.org/issue14853 opened by gregory.p.smith #14854: faulthandler: fatal error with "SystemError: null argument to http://bugs.python.org/issue14854 opened by zbysz #14855: IPv6 support for logging.handlers http://bugs.python.org/issue14855 opened by cblp #14856: argparse: creating an already defined subparsers does not rais http://bugs.python.org/issue14856 opened by eacb #14857: Direct access to lexically scoped __class__ is broken in 3.3 http://bugs.python.org/issue14857 opened by ncoghlan #14858: 'pysetup create' off-by-one when choosing classification matur http://bugs.python.org/issue14858 opened by todddeluca #14869: imaplib erronously quotes atoms such as flags http://bugs.python.org/issue14869 opened by brightbyte #14870: Descriptions of os.utime() and os.utimensat() use wrong notati http://bugs.python.org/issue14870 opened by hynek #14871: Rewrite the command line parsers and actions system used in di http://bugs.python.org/issue14871 opened by eric.araujo #14873: Windows devguide: clarification for build errors due to missin http://bugs.python.org/issue14873 opened by valhallasw #14874: Faster charmap decoding http://bugs.python.org/issue14874 opened by storchaka #14876: IDLE highlighting theme does not preview with user-selected fo http://bugs.python.org/issue14876 opened by andrew.m #14877: No option to run bdist_wininst against newer msvc versions on http://bugs.python.org/issue14877 opened by Aaron.Staley #14878: send statement from PEP342 is poorly documented. http://bugs.python.org/issue14878 opened by Stephen.Lacy #14879: invalid docs for subprocess exceptions with shell=True http://bugs.python.org/issue14879 opened by techtonik #14880: csv.reader and .writer use wrong kwargs notation in 2.7 docs http://bugs.python.org/issue14880 opened by hynek #14881: multiprocessing.dummy craches when self._parent._children does http://bugs.python.org/issue14881 opened by Itay.Brandes #14882: Link/Compile Error on Sun Sparc Solaris10 with gcc3.4.3----pyt http://bugs.python.org/issue14882 opened by seeker77 #14886: json C vs pure-python implementation difference http://bugs.python.org/issue14886 opened by mmarkk #14892: 'import readline' hangs when launching with '&' on BSD and OS http://bugs.python.org/issue14892 opened by olivier-mattelaer #14893: Tutorial: Add function annotation example to function tutorial http://bugs.python.org/issue14893 opened by zach.ware #14894: distutils.LooseVersion fails to compare number and a word http://bugs.python.org/issue14894 opened by Natalia #14895: test_warnings.py EnvironmentVariableTests is a bad test http://bugs.python.org/issue14895 opened by tebeka #14897: struct.pack raises unexpected error message http://bugs.python.org/issue14897 opened by mesheb82 #14899: Naming conventions and guidelines for packages and namespace p http://bugs.python.org/issue14899 opened by benoitbryon #14900: cProfile does not take its result headers as sort arguments http://bugs.python.org/issue14900 opened by ArneBab #14901: Python Windows FAQ is Very Outdated http://bugs.python.org/issue14901 opened by michael.driscoll #14902: test_logging failed http://bugs.python.org/issue14902 opened by cblp #14903: dictobject infinite loop on 2.6.5 on 32-bit x86 http://bugs.python.org/issue14903 opened by Daniel.Farina #14904: test_unicode_repr_oflw (in test_bigmem) crashes http://bugs.python.org/issue14904 opened by pitrou #14905: zipimport.c needs to support namespace packages when no 'direc http://bugs.python.org/issue14905 opened by eric.smith #14906: rotatingHandler WindowsError http://bugs.python.org/issue14906 opened by jacuro #14907: SSL module cannot handle unicode filenames http://bugs.python.org/issue14907 opened by ms4py #14908: datetime.datetime should have a timestamp() method http://bugs.python.org/issue14908 opened by djc #14909: Fix incorrect use of *Realloc() and *Resize() http://bugs.python.org/issue14909 opened by kristjan.jonsson #14910: argparse: disable abbreviation http://bugs.python.org/issue14910 opened by jens.jaehrig #14911: generator.throw() documentation inaccurate http://bugs.python.org/issue14911 opened by kristjan.jonsson #14912: Pdb does not stop at a breakpoint after a restart command and http://bugs.python.org/issue14912 opened by xdegaye #14913: tokenize the source to manage Pdb breakpoints http://bugs.python.org/issue14913 opened by xdegaye #14914: pysetup installed distribute despite dry run option being spec http://bugs.python.org/issue14914 opened by ncoghlan #14915: pysetup may leave a package in a half-installed state http://bugs.python.org/issue14915 opened by ncoghlan #14916: PyRun_InteractiveLoop fails to run interactively when using a http://bugs.python.org/issue14916 opened by Kevin.Barry Most recent 15 issues with no replies (15) ========================================== #14916: PyRun_InteractiveLoop fails to run interactively when using a http://bugs.python.org/issue14916 #14915: pysetup may leave a package in a half-installed state http://bugs.python.org/issue14915 #14914: pysetup installed distribute despite dry run option being spec http://bugs.python.org/issue14914 #14913: tokenize the source to manage Pdb breakpoints http://bugs.python.org/issue14913 #14911: generator.throw() documentation inaccurate http://bugs.python.org/issue14911 #14910: argparse: disable abbreviation http://bugs.python.org/issue14910 #14909: Fix incorrect use of *Realloc() and *Resize() http://bugs.python.org/issue14909 #14906: rotatingHandler WindowsError http://bugs.python.org/issue14906 #14904: test_unicode_repr_oflw (in test_bigmem) crashes http://bugs.python.org/issue14904 #14900: cProfile does not take its result headers as sort arguments http://bugs.python.org/issue14900 #14874: Faster charmap decoding http://bugs.python.org/issue14874 #14871: Rewrite the command line parsers and actions system used in di http://bugs.python.org/issue14871 #14858: 'pysetup create' off-by-one when choosing classification matur http://bugs.python.org/issue14858 #14853: test_file.py depends on sys.stdin being unseekable http://bugs.python.org/issue14853 #14852: json and ElementTree parsers misbehave on streams containing m http://bugs.python.org/issue14852 Most recent 15 issues waiting for review (15) ============================================= #14913: tokenize the source to manage Pdb breakpoints http://bugs.python.org/issue14913 #14909: Fix incorrect use of *Realloc() and *Resize() http://bugs.python.org/issue14909 #14900: cProfile does not take its result headers as sort arguments http://bugs.python.org/issue14900 #14899: Naming conventions and guidelines for packages and namespace p http://bugs.python.org/issue14899 #14895: test_warnings.py EnvironmentVariableTests is a bad test http://bugs.python.org/issue14895 #14893: Tutorial: Add function annotation example to function tutorial http://bugs.python.org/issue14893 #14876: IDLE highlighting theme does not preview with user-selected fo http://bugs.python.org/issue14876 #14874: Faster charmap decoding http://bugs.python.org/issue14874 #14873: Windows devguide: clarification for build errors due to missin http://bugs.python.org/issue14873 #14856: argparse: creating an already defined subparsers does not rais http://bugs.python.org/issue14856 #14855: IPv6 support for logging.handlers http://bugs.python.org/issue14855 #14854: faulthandler: fatal error with "SystemError: null argument to http://bugs.python.org/issue14854 #14843: support define_macros / undef_macros in setup.cfg http://bugs.python.org/issue14843 #14840: Tutorial: Add a bit on the difference between tuples and lists http://bugs.python.org/issue14840 #14837: Better SSL errors http://bugs.python.org/issue14837 Top 10 most discussed issues (10) ================================= #14814: Implement PEP 3144 (the ipaddress module) http://bugs.python.org/issue14814 15 msgs #14775: Dict untracking can result in quadratic dict build-up http://bugs.python.org/issue14775 12 msgs #14855: IPv6 support for logging.handlers http://bugs.python.org/issue14855 11 msgs #11804: expat parser not xml 1.1 (breaks xmlrpclib) http://bugs.python.org/issue11804 7 msgs #12014: str.format parses replacement field incorrectly http://bugs.python.org/issue12014 7 msgs #14744: Use _PyUnicodeWriter API in str.format() internals http://bugs.python.org/issue14744 7 msgs #14854: faulthandler: fatal error with "SystemError: null argument to http://bugs.python.org/issue14854 7 msgs #14886: json C vs pure-python implementation difference http://bugs.python.org/issue14886 6 msgs #14894: distutils.LooseVersion fails to compare number and a word http://bugs.python.org/issue14894 6 msgs #1191964: asynchronous Subprocess http://bugs.python.org/issue1191964 6 msgs Issues closed (54) ================== #4033: python search path - .pth recursion http://bugs.python.org/issue4033 closed by brett.cannon #9374: urlparse should parse query and fragment for arbitrary schemes http://bugs.python.org/issue9374 closed by orsenthil #9400: multiprocessing.pool.AsyncResult.get() messes up exceptions http://bugs.python.org/issue9400 closed by sbt #11647: function decorated with a context manager can only be invoked http://bugs.python.org/issue11647 closed by ncoghlan #12098: Child process running as debug on Windows http://bugs.python.org/issue12098 closed by sbt #13152: textwrap: support custom tabsize http://bugs.python.org/issue13152 closed by hynek #13208: Problems with urllib on windows http://bugs.python.org/issue13208 closed by holdenweb #13210: Support Visual Studio 2010 http://bugs.python.org/issue13210 closed by loewis #13445: Enable linking the module pysqlite with Berkeley DB SQL instea http://bugs.python.org/issue13445 closed by petri.lehtinen #13585: Add contextlib.ExitStack http://bugs.python.org/issue13585 closed by python-dev #13682: Documentation of os.fdopen() refers to non-existing bufsize ar http://bugs.python.org/issue13682 closed by petri.lehtinen #14072: urlparse on tel: URI-s misses the scheme in some cases http://bugs.python.org/issue14072 closed by ezio.melotti #14075: argparse: unused method? http://bugs.python.org/issue14075 closed by petri.lehtinen #14136: Simplify PEP 409 command line test and move it to test_cmd_lin http://bugs.python.org/issue14136 closed by python-dev #14426: date format problem in Cookie/http.cookies http://bugs.python.org/issue14426 closed by orsenthil #14472: .gitignore is outdated http://bugs.python.org/issue14472 closed by petri.lehtinen #14494: __future__.py and its documentation claim absolute imports bec http://bugs.python.org/issue14494 closed by petri.lehtinen #14572: 2.7.3: sqlite module does not build on centos 5 and Mac OS X 1 http://bugs.python.org/issue14572 closed by ned.deily #14588: PEP 3115 compliant dynamic class creation http://bugs.python.org/issue14588 closed by python-dev #14660: Implement PEP 420: Implicit Namespace Packages http://bugs.python.org/issue14660 closed by eric.smith #14721: httplib doesn't specify content-length header for POST request http://bugs.python.org/issue14721 closed by orsenthil #14798: pyclbr raises KeyError when the prefix of a dotted name is not http://bugs.python.org/issue14798 closed by petri.lehtinen #14804: Wrong defaults args notation in docs http://bugs.python.org/issue14804 closed by hynek #14821: _ctypes and other modules not built with msbuild on vs2010 sol http://bugs.python.org/issue14821 closed by jason.coombs #14822: Build unusable when compiled for Win 64-bit release http://bugs.python.org/issue14822 closed by jason.coombs #14831: make r argument on itertools.combinations() optional http://bugs.python.org/issue14831 closed by terry.reedy #14833: Copyright date in footer of /pypi says 2011 http://bugs.python.org/issue14833 closed by terry.reedy #14836: Add next(iter(o)) to set.pop, dict.popitem entries. http://bugs.python.org/issue14836 closed by rhettinger #14838: IDLE Will not load on reinstall http://bugs.python.org/issue14838 closed by loewis #14842: Link to function time() in the docs point to the time module http://bugs.python.org/issue14842 closed by python-dev #14849: C implementation of ElementTree: Inheriting from Element break http://bugs.python.org/issue14849 closed by eli.bendersky #14851: Python-2.6.8 install fails due to missing files http://bugs.python.org/issue14851 closed by ned.deily #14859: Patch to make IDLE window rise to top in OS X on launch http://bugs.python.org/issue14859 closed by ned.deily #14860: devguide: Clarify how to run cpython test suite - esp. on 2.7 http://bugs.python.org/issue14860 closed by ezio.melotti #14861: Make ./python -m test work to run test suite in Python 2.7 http://bugs.python.org/issue14861 closed by loewis #14862: os.__all__ is missing some names http://bugs.python.org/issue14862 closed by petri.lehtinen #14863: Update docs of os.fdopen() http://bugs.python.org/issue14863 closed by petri.lehtinen #14864: Mention logging.disable(logging.NOTSET) to reset the command i http://bugs.python.org/issue14864 closed by python-dev #14865: #doctest: directives removed from doctest chapter examples http://bugs.python.org/issue14865 closed by terry.reedy #14866: 2.x,3.x iOS static build: Fatal Python error: exceptions boots http://bugs.python.org/issue14866 closed by amaury.forgeotdarc #14867: chm link missing from 2.7.3 download page http://bugs.python.org/issue14867 closed by ned.deily #14868: Allow log calls to return True for code optimization. http://bugs.python.org/issue14868 closed by Llu??s #14872: subprocess is not safe from deadlocks http://bugs.python.org/issue14872 closed by rosslagerwall #14875: Unusual way of doing 'Inf' in json library http://bugs.python.org/issue14875 closed by ezio.melotti #14883: html documentation does not show comments in code blocks http://bugs.python.org/issue14883 closed by ezio.melotti #14884: Windows Build instruction typo http://bugs.python.org/issue14884 closed by eli.bendersky #14885: shutil tests, test_copy2_xattr and test_copyxattr, fail http://bugs.python.org/issue14885 closed by hynek #14887: pysetup: unfriendly error message for unknown commands http://bugs.python.org/issue14887 closed by eric.araujo #14888: _md5 module crashes on large data http://bugs.python.org/issue14888 closed by pitrou #14889: PyBytes_FromObject(bytes_object) creates a new object http://bugs.python.org/issue14889 closed by larry #14890: typo in difflib http://bugs.python.org/issue14890 closed by ninsen #14891: An error in bindings of closures http://bugs.python.org/issue14891 closed by amaury.forgeotdarc #14896: plistlib handling of real datatype http://bugs.python.org/issue14896 closed by ned.deily #14898: Dict collision on boolean and integer values http://bugs.python.org/issue14898 closed by mark.dickinson From g.brandl at gmx.net Fri May 25 18:46:19 2012 From: g.brandl at gmx.net (Georg Brandl) Date: Fri, 25 May 2012 18:46:19 +0200 Subject: [Python-Dev] Accepting PEP 405 (Python Virtual Environments) In-Reply-To: References: Message-ID: Am 25.05.2012 10:44, schrieb Nick Coghlan: > As the latest round of updates that Carl and Vinay pushed to the PEPs > repo have addressed my few remaining questions, I am accepting PEP 405 > for inclusion in Python 3.3. > > Thanks to all involved in working out the spec for what to model > directly on virtualenv, and areas where cleaner solutions could be > found given the power to tweak the behaviour of the core interpreter > and the standard library. Great! Please remember that the next 3.3 alpha is scheduled for this weekend, so please let me know in which timescale you plan to implement this PEP. If you want to commit it before this alpha, I can shift it by a few days, but not a whole week since I'm on vacation for one week from June 2nd. Georg From g.brandl at gmx.net Fri May 25 18:57:57 2012 From: g.brandl at gmx.net (Georg Brandl) Date: Fri, 25 May 2012 18:57:57 +0200 Subject: [Python-Dev] cpython: simplify and rewrite the zipimport part of 702009f3c0b1 a bit In-Reply-To: References: Message-ID: Am 25.05.2012 07:54, schrieb benjamin.peterson: > http://hg.python.org/cpython/rev/a47d32a28662 > changeset: 77129:a47d32a28662 > user: Benjamin Peterson > date: Thu May 24 22:54:15 2012 -0700 > summary: > simplify and rewrite the zipimport part of 702009f3c0b1 a bit > > files: > Modules/zipimport.c | 92 ++++++++++++++------------------ > 1 files changed, 41 insertions(+), 51 deletions(-) > > > diff --git a/Modules/zipimport.c b/Modules/zipimport.c > --- a/Modules/zipimport.c > +++ b/Modules/zipimport.c > @@ -319,13 +319,20 @@ > return MI_NOT_FOUND; > } > > +typedef enum { > + fl_error, > + fl_not_found, > + fl_module_found, > + fl_ns_found > +} find_loader_result; This is probably minor, but wouldn't it make more sense to have those constants uppercased? At least that's the general style we have in the codebase for enum values. (There's one exception, but also recently committed, in posixmodule.c for the utime_result enum. Maybe that could also be fixed.) Georg From solipsis at pitrou.net Fri May 25 19:14:31 2012 From: solipsis at pitrou.net (Antoine Pitrou) Date: Fri, 25 May 2012 19:14:31 +0200 Subject: [Python-Dev] cpython: simplify and rewrite the zipimport part of 702009f3c0b1 a bit References: Message-ID: <20120525191431.0623368c@pitrou.net> On Fri, 25 May 2012 18:57:57 +0200 Georg Brandl wrote: > Am 25.05.2012 07:54, schrieb benjamin.peterson: > > http://hg.python.org/cpython/rev/a47d32a28662 > > changeset: 77129:a47d32a28662 > > user: Benjamin Peterson > > date: Thu May 24 22:54:15 2012 -0700 > > summary: > > simplify and rewrite the zipimport part of 702009f3c0b1 a bit > > > > files: > > Modules/zipimport.c | 92 ++++++++++++++------------------ > > 1 files changed, 41 insertions(+), 51 deletions(-) > > > > > > diff --git a/Modules/zipimport.c b/Modules/zipimport.c > > --- a/Modules/zipimport.c > > +++ b/Modules/zipimport.c > > @@ -319,13 +319,20 @@ > > return MI_NOT_FOUND; > > } > > > > +typedef enum { > > + fl_error, > > + fl_not_found, > > + fl_module_found, > > + fl_ns_found > > +} find_loader_result; > > This is probably minor, but wouldn't it make more sense to have those > constants uppercased? At least that's the general style we have in > the codebase for enum values. +1, this surprised me too. Regards Antoine. From ethan at stoneleaf.us Fri May 25 19:21:39 2012 From: ethan at stoneleaf.us (Ethan Furman) Date: Fri, 25 May 2012 10:21:39 -0700 Subject: [Python-Dev] doc change for weakref Message-ID: <4FBFBFA3.10001@stoneleaf.us> I'd like to make a slight doc change for weakref to state (more or less): weakrefs are not invalidated when the strong refs are gone, but rather when garbage collection reclaims the object Should this be accurate for all implementations, or should it be more along the lines of: weakrefs may be invalidated as soon as the strong refs are gone, but may last until garbage collection reclaims the object ~Ethan~ From benjamin at python.org Fri May 25 19:45:15 2012 From: benjamin at python.org (Benjamin Peterson) Date: Fri, 25 May 2012 10:45:15 -0700 Subject: [Python-Dev] doc change for weakref In-Reply-To: <4FBFBFA3.10001@stoneleaf.us> References: <4FBFBFA3.10001@stoneleaf.us> Message-ID: 2012/5/25 Ethan Furman : > I'd like to make a slight doc change for weakref to state (more or less): > > ? weakrefs are not invalidated when the strong refs > ? are gone, but rather when garbage collection > ? reclaims the object I think this is fine. -- Regards, Benjamin From solipsis at pitrou.net Fri May 25 19:46:27 2012 From: solipsis at pitrou.net (Antoine Pitrou) Date: Fri, 25 May 2012 19:46:27 +0200 Subject: [Python-Dev] doc change for weakref References: <4FBFBFA3.10001@stoneleaf.us> Message-ID: <20120525194627.7de0879b@pitrou.net> On Fri, 25 May 2012 10:21:39 -0700 Ethan Furman wrote: > I'd like to make a slight doc change for weakref to state (more or less): > > weakrefs are not invalidated when the strong refs > are gone, but rather when garbage collection > reclaims the object > > Should this be accurate for all implementations, or should it be more > along the lines of: > > weakrefs may be invalidated as soon as the strong refs > are gone, but may last until garbage collection reclaims > the object How about: weakrefs are invalidated when the object is destroyed, either as a product of all the strong references being gone or the object being reclaimed by the :term:`cyclic garbage collector `. Regards Antoine. From martin at v.loewis.de Fri May 25 19:57:51 2012 From: martin at v.loewis.de (martin at v.loewis.de) Date: Fri, 25 May 2012 19:57:51 +0200 Subject: [Python-Dev] Rietveld update Message-ID: <20120525195751.Horde.h5spV6GZi1VPv8gfJLFUMFA@webmail.df.eu> As some have probably noticed: I updated the Rietveld version that we use to the current code base. There have been a few incompatible changes (schema, GAE API) which I hope I resolved. If you find new problems, please report them to the meta tracker. Regards, Martin From barry at python.org Fri May 25 20:16:18 2012 From: barry at python.org (Barry Warsaw) Date: Fri, 25 May 2012 14:16:18 -0400 Subject: [Python-Dev] Volunteering to be PEP czar for PEP 421, sys.implementation References: <20120521172409.2010be8f@resist> Message-ID: <20120525141618.2c576f87@resist.wooz.org> On May 21, 2012, at 05:24 PM, Barry Warsaw wrote: >I've mentioned this in private to a few folks, with generally positive >feedback. > >I am formally volunteering to be PEP czar for PEP 421, sys.implementation. If >there are no objections in the next few days, I'll make it official. I received no objections, so I've claimed BDFL-Delegate on this PEP. I am doing a little bit more research, but I'm nearly ready to pronounce on this. Cheers, -Barry -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 836 bytes Desc: not available URL: From barry at python.org Fri May 25 20:23:58 2012 From: barry at python.org (Barry Warsaw) Date: Fri, 25 May 2012 14:23:58 -0400 Subject: [Python-Dev] sys.implementation In-Reply-To: References: <20120426103150.4898a678@limelight.wooz.org> <4FAA3FA7.5070808@v.loewis.de> <20120509165039.23c8bf56@pitrou.net> <20120509095311.3a2c25c2@resist> <20120510105749.7401f1d2@pitrou.net> Message-ID: <20120525142358.66034f42@resist.wooz.org> On May 17, 2012, at 07:19 AM, Eric Snow wrote: >PEP 421 has reached a good place and I'd like to ask for pronouncement. As the newly self-appointed PEP 421 czar, I hereby accept this PEP. Eric, you've done a masterful job at balancing and addressing the input from the Python development community, and the PEP looks great. I have not yet reviewed the patches in issue 14673, but the last message on that item indicates the patch is not yet up-to-date with the PEP description. Please update the issue, and if possible, let's get it landed before Sunday's alpha 4. Congratulations. -Barry http://bugs.python.org/issue14673 -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 836 bytes Desc: not available URL: From vinay_sajip at yahoo.co.uk Fri May 25 20:29:57 2012 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Fri, 25 May 2012 18:29:57 +0000 (UTC) Subject: [Python-Dev] Accepting PEP 405 (Python Virtual Environments) References: Message-ID: Georg Brandl gmx.net> writes: > Great! Please remember that the next 3.3 alpha is scheduled for this > weekend, so please let me know in which timescale you plan to implement > this PEP. If you want to commit it before this alpha, I can shift it > by a few days, but not a whole week since I'm on vacation for one week > from June 2nd. I believe it is ready to integrate now. I aim to do it tomorrow (26 May) a.m. UTC, so that it can make the next alpha. Regards, Vinay Sajip From ericsnowcurrently at gmail.com Fri May 25 21:55:40 2012 From: ericsnowcurrently at gmail.com (Eric Snow) Date: Fri, 25 May 2012 13:55:40 -0600 Subject: [Python-Dev] sys.implementation In-Reply-To: <20120525142358.66034f42@resist.wooz.org> References: <20120426103150.4898a678@limelight.wooz.org> <4FAA3FA7.5070808@v.loewis.de> <20120509165039.23c8bf56@pitrou.net> <20120509095311.3a2c25c2@resist> <20120510105749.7401f1d2@pitrou.net> <20120525142358.66034f42@resist.wooz.org> Message-ID: On Fri, May 25, 2012 at 12:23 PM, Barry Warsaw wrote: > On May 17, 2012, at 07:19 AM, Eric Snow wrote: > >>PEP 421 has reached a good place and I'd like to ask for pronouncement. > > As the newly self-appointed PEP 421 czar, I hereby accept this PEP. > > Eric, you've done a masterful job at balancing and addressing the input from > the Python development community, and the PEP looks great. ?I have not yet > reviewed the patches in issue 14673, but the last message on that item > indicates the patch is not yet up-to-date with the PEP description. Thanks Barry, and everyone who chimed in! This PEP is a whole lot better because of the feedback of this community. I'm hopeful that it will be a good vehicle to make life easier for the other implementations. > Please update the issue, and if possible, let's get it landed before Sunday's > alpha 4. I'll do that tonight. -eric > > Congratulations. > -Barry > > http://bugs.python.org/issue14673 From timothy.c.delaney at gmail.com Sat May 26 00:01:45 2012 From: timothy.c.delaney at gmail.com (Tim Delaney) Date: Sat, 26 May 2012 08:01:45 +1000 Subject: [Python-Dev] doc change for weakref In-Reply-To: <20120525194627.7de0879b@pitrou.net> References: <4FBFBFA3.10001@stoneleaf.us> <20120525194627.7de0879b@pitrou.net> Message-ID: On 26 May 2012 03:46, Antoine Pitrou wrote: > On Fri, 25 May 2012 10:21:39 -0700 > Ethan Furman wrote: > > I'd like to make a slight doc change for weakref to state (more or less): > > > > weakrefs are not invalidated when the strong refs > > are gone, but rather when garbage collection > > reclaims the object > > > > Should this be accurate for all implementations, or should it be more > > along the lines of: > > > > weakrefs may be invalidated as soon as the strong refs > > are gone, but may last until garbage collection reclaims > > the object > > How about: weakrefs are invalidated when the object is destroyed, > either as a product of all the strong references being gone or the > object being reclaimed by the :term:`cyclic garbage collector > `. > I think this could be misleading - it could be read as weakrefs are gone as soon as all strong refs are gone if there are no cycles. It's CPython-specific. IIRC this was exactly Ethan's issue on python-list - he'd made the assumption that weakrefs went away as soon as all strong refs were gone, which broke on other Python implementations (and would have also broken if he'd had cycles). How about: weakrefs are invalidated only when the object is destroyed, which is dependent on the garbage collection method implemented. That then prevents an implementation from invalidating weakrefs before GC - however, since the object would then be completely unreachable (except by C code) I'm not sure it matters. Tim Delaney -------------- next part -------------- An HTML attachment was scrubbed... URL: From steve at pearwood.info Sat May 26 03:55:49 2012 From: steve at pearwood.info (Steven D'Aprano) Date: Sat, 26 May 2012 11:55:49 +1000 Subject: [Python-Dev] doc change for weakref In-Reply-To: <4FBFBFA3.10001@stoneleaf.us> References: <4FBFBFA3.10001@stoneleaf.us> Message-ID: <4FC03825.1030600@pearwood.info> Ethan Furman wrote: > I'd like to make a slight doc change for weakref to state (more or less): What specific part of the docs are you planning to change? My guess is that you want to change this start of the third paragraph: http://docs.python.org/py3k/library/weakref.html [quote] A weak reference to an object is not enough to keep the object alive: when the only remaining references to a referent are weak references, garbage collection is free to destroy the referent and reuse its memory for something else. [end quote] I don't think that should be changed. It makes no promises except that weak refs won't keep an object alive. Everything else is an implementation detail, as it should be. > weakrefs are not invalidated when the strong refs > are gone, but rather when garbage collection > reclaims the object I think you're making a distinction here that we should not make. Reference counting *is* a garbage collector (even if gc-bigots like to sneer at ref counting as "not a real gc"), and implementations with such a ref counting gc will not always distinguish the two states "strong refs are gone" and "object is reclaimed". I don't believe that we need to make promises about the exact timing of when weak refs will be invalidated. > Should this be accurate for all implementations, or should it be more > along the lines of: > > weakrefs may be invalidated as soon as the strong refs > are gone, but may last until garbage collection reclaims > the object This is better than the previous suggestion, since it says "may" rather than implies a "will". -- Steven From vinay_sajip at yahoo.co.uk Sat May 26 04:53:02 2012 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Sat, 26 May 2012 02:53:02 +0000 (UTC) Subject: [Python-Dev] Accepting PEP 405 (Python Virtual Environments) References: Message-ID: Georg Brandl gmx.net> writes: > Great! Please remember that the next 3.3 alpha is scheduled for this > weekend, so please let me know in which timescale you plan to implement > this PEP. If you want to commit it before this alpha, I can shift it > by a few days, but not a whole week since I'm on vacation for one week > from June 2nd. It's now implemented in the default branch :-) Regards, Vinay Sajip From g.brandl at gmx.net Sat May 26 09:14:07 2012 From: g.brandl at gmx.net (Georg Brandl) Date: Sat, 26 May 2012 09:14:07 +0200 Subject: [Python-Dev] cpython: #12586: add provisional email policy with new header parsing and folding. In-Reply-To: References: Message-ID: Am 26.05.2012 00:44, schrieb r.david.murray: > http://hg.python.org/cpython/rev/0189b9d2d6bc > changeset: 77148:0189b9d2d6bc > user: R David Murray > date: Fri May 25 18:42:14 2012 -0400 > summary: > #12586: add provisional email policy with new header parsing and folding. > > When the new policies are used (and only when the new policies are explicitly > used) headers turn into objects that have attributes based on their parsed > values, and can be set using objects that encapsulate the values, as well as > set directly from unicode strings. The folding algorithm then takes care of > encoding unicode where needed, and folding according to the highest level > syntactic objects. > > With this patch only date and time headers are parsed as anything other than > unstructured, but that is all the helper methods in the existing API handle. > I do plan to add more parsers, and complete the set specified in the RFC > before the package becomes stable. > > files: > Doc/library/email.policy.rst | 323 + > Lib/email/_encoded_words.py | 211 + > Lib/email/_header_value_parser.py | 2145 ++++++++ > Lib/email/_headerregistry.py | 456 + > Lib/email/_policybase.py | 12 +- > Lib/email/errors.py | 43 +- > Lib/email/generator.py | 11 +- > Lib/email/policy.py | 173 +- > Lib/email/utils.py | 7 + > Lib/test/test_email/__init__.py | 6 + > Lib/test/test_email/test__encoded_words.py | 187 + > Lib/test/test_email/test__header_value_parser.py | 2466 ++++++++++ > Lib/test/test_email/test__headerregistry.py | 717 ++ > Lib/test/test_email/test_generator.py | 170 +- > Lib/test/test_email/test_pickleable.py | 57 + > Lib/test/test_email/test_policy.py | 126 +- > 16 files changed, 6994 insertions(+), 116 deletions(-) > > > diff --git a/Doc/library/email.policy.rst b/Doc/library/email.policy.rst > --- a/Doc/library/email.policy.rst > +++ b/Doc/library/email.policy.rst > @@ -306,3 +306,326 @@ > ``7bit``, non-ascii binary data is CTE encoded using the ``unknown-8bit`` > charset. Otherwise the original source header is used, with its existing > line breaks and and any (RFC invalid) binary data it may contain. > + > + > +.. note:: > + > + The remainder of the classes documented below are included in the standard > + library on a :term:`provisional basis `. Backwards > + incompatible changes (up to and including removal of the feature) may occur > + if deemed necessary by the core developers. > + > + > +.. class:: EmailPolicy(**kw) > + > + This concrete :class:`Policy` provides behavior that is intended to be fully > + compliant with the current email RFCs. These include (but are not limited > + to) :rfc:`5322`, :rfc:`2047`, and the current MIME RFCs. > + > + This policy adds new header parsing and folding algorithms. Instead of > + simple strings, headers are custom objects with custom attributes depending > + on the type of the field. The parsing and folding algorithm fully implement > + :rfc:`2047` and :rfc:`5322`. > + > + In addition to the settable attributes listed above that apply to all > + policies, this policy adds the following additional attributes: > + > + .. attribute:: refold_source > + > + If the value for a header in the ``Message`` object originated from a > + :mod:`~email.parser` (as opposed to being set by a program), this > + attribute indicates whether or not a generator should refold that value > + when transforming the message back into stream form. The possible values > + are: > + > + ======== =============================================================== > + ``none`` all source values use original folding > + > + ``long`` source values that have any line that is longer than > + ``max_line_length`` will be refolded > + > + ``all`` all values are refolded. > + ======== =============================================================== > + > + The default is ``long``. > + > + .. attribute:: header_factory > + > + A callable that takes two arguments, ``name`` and ``value``, where > + ``name`` is a header field name and ``value`` is an unfolded header field > + value, and returns a string-like object that represents that header. A > + default ``header_factory`` is provided that understands some of the > + :RFC:`5322` header field types. (Currently address fields and date > + fields have special treatment, while all other fields are treated as > + unstructured. This list will be completed before the extension is marked > + stable.) > + > + The class provides the following concrete implementations of the abstract > + methods of :class:`Policy`: > + > + .. method:: header_source_parse(sourcelines) > + > + The implementation of this method is the same as that for the > + :class:`Compat32` policy. > + > + .. method:: header_store_parse(name, value) > + > + The name is returned unchanged. If the input value has a ``name`` > + attribute and it matches *name* ignoring case, the value is returned > + unchanged. Otherwise the *name* and *value* are passed to > + ``header_factory``, and the resulting custom header object is returned as > + the value. In this case a ``ValueError`` is raised if the input value > + contains CR or LF characters. > + > + .. method:: header_fetch_parse(name, value) > + > + If the value has a ``name`` attribute, it is returned to unmodified. > + Otherwise the *name*, and the *value* with any CR or LF characters > + removed, are passed to the ``header_factory``, and the resulting custom > + header object is returned. Any surrogateescaped bytes get turned into > + the unicode unknown-character glyph. > + > + .. method:: fold(name, value) > + > + Header folding is controlled by the :attr:`refold_source` policy setting. > + A value is considered to be a 'source value' if and only if it does not > + have a ``name`` attribute (having a ``name`` attribute means it is a > + header object of some sort). If a source value needs to be refolded > + according to the policy, it is converted into a custom header object by > + passing the *name* and the *value* with any CR and LF characters removed > + to the ``header_factory``. Folding of a custom header object is done by > + calling its ``fold`` method with the current policy. > + > + Source values are split into lines using :meth:`~str.splitlines`. If > + the value is not to be refolded, the lines are rejoined using the > + ``linesep`` from the policy and returned. The exception is lines > + containing non-ascii binary data. In that case the value is refolded > + regardless of the ``refold_source`` setting, which causes the binary data > + to be CTE encoded using the ``unknown-8bit`` charset. > + > + .. method:: fold_binary(name, value) > + > + The same as :meth:`fold` if :attr:`cte_type` is ``7bit``, except that > + the returned value is bytes. > + > + If :attr:`cte_type` is ``8bit``, non-ASCII binary data is converted back > + into bytes. Headers with binary data are not refolded, regardless of the > + ``refold_header`` setting, since there is no way to know whether the > + binary data consists of single byte characters or multibyte characters. > + > +The following instances of :class:`EmailPolicy` provide defaults suitable for > +specific application domains. Note that in the future the behavior of these > +instances (in particular the ``HTTP` instance) may be adjusted to conform even > +more closely to the RFCs relevant to their domains. > + > +.. data:: default > + > + An instance of ``EmailPolicy`` with all defaults unchanged. This policy > + uses the standard Python ``\n`` line endings rather than the RFC-correct > + ``\r\n``. > + > +.. data:: SMTP > + > + Suitable for serializing messages in conformance with the email RFCs. > + Like ``default``, but with ``linesep`` set to ``\r\n``, which is RFC > + compliant. > + > +.. data:: HTTP > + > + Suitable for serializing headers with for use in HTTP traffic. Like > + ``SMTP`` except that ``max_line_length`` is set to ``None`` (unlimited). > + > +.. data:: strict > + > + Convenience instance. The same as ``default`` except that > + ``raise_on_defect`` is set to ``True``. This allows any policy to be made > + strict by writing:: > + > + somepolicy + policy.strict > + > +With all of these :class:`EmailPolicies <.EmailPolicy>`, the effective API of > +the email package is changed from the Python 3.2 API in the following ways: > + > + * Setting a header on a :class:`~email.message.Message` results in that > + header being parsed and a custom header object created. > + > + * Fetching a header value from a :class:`~email.message.Message` results > + in that header being parsed and a custom header object created and > + returned. > + > + * Any custom header object, or any header that is refolded due to the > + policy settings, is folded using an algorithm that fully implements the > + RFC folding algorithms, including knowing where encoded words are required > + and allowed. > + > +From the application view, this means that any header obtained through the > +:class:`~email.message.Message` is a custom header object with custom > +attributes, whose string value is the fully decoded unicode value of the > +header. Likewise, a header may be assigned a new value, or a new header > +created, using a unicode string, and the policy will take care of converting > +the unicode string into the correct RFC encoded form. > + > +The custom header objects and their attributes are described below. All custom > +header objects are string subclasses, and their string value is the fully > +decoded value of the header field (the part of the field after the ``:``) > + > + > +.. class:: BaseHeader > + > + This is the base class for all custom header objects. It provides the > + following attributes: > + > + .. attribute:: name > + > + The header field name (the portion of the field before the ':'). > + > + .. attribute:: defects > + > + A possibly empty list of :class:`~email.errors.MessageDefect` objects > + that record any RFC violations found while parsing the header field. > + > + .. method:: fold(*, policy) > + > + Return a string containing :attr:`~email.policy.Policy.linesep` > + characters as required to correctly fold the header according > + to *policy*. A :attr:`~email.policy.Policy.cte_type` of > + ``8bit`` will be treated as if it were ``7bit``, since strings > + may not contain binary data. > + > + > +.. class:: UnstructuredHeader > + > + The class used for any header that does not have a more specific > + type. (The :mailheader:`Subject` header is an example of an > + unstructured header.) It does not have any additional attributes. > + > + > +.. class:: DateHeader > + > + The value of this type of header is a single date and time value. The > + primary example of this type of header is the :mailheader:`Date` header. > + > + .. attribute:: datetime > + > + A :class:`~datetime.datetime` encoding the date and time from the > + header value. > + > + The ``datetime`` will be a naive ``datetime`` if the value either does > + not have a specified timezone (which would be a violation of the RFC) or > + if the timezone is specified as ``-0000``. This timezone value indicates > + that the date and time is to be considered to be in UTC, but with no > + indication of the local timezone in which it was generated. (This > + contrasts to ``+0000``, which indicates a date and time that really is in > + the UTC ``0000`` timezone.) > + > + If the header value contains a valid timezone that is not ``-0000``, the > + ``datetime`` will be an aware ``datetime`` having a > + :class:`~datetime.tzinfo` set to the :class:`~datetime.timezone` > + indicated by the header value. > + > + A ``datetime`` may also be assigned to a :mailheader:`Date` type header. > + The resulting string value will use a timezone of ``-0000`` if the > + ``datetime`` is naive, and the appropriate UTC offset if the ``datetime`` is > + aware. > + > + > +.. class:: AddressHeader > + > + This class is used for all headers that can contain addresses, whether they > + are supposed to be singleton addresses or a list. > + > + .. attribute:: addresses > + > + A list of :class:`.Address` objects listing all of the addresses that > + could be parsed out of the field value. > + > + .. attribute:: groups > + > + A list of :class:`.Group` objects. Every address in :attr:`.addresses` > + appears in one of the group objects in the tuple. Addresses that are not > + syntactically part of a group are represented by ``Group`` objects whose > + ``name`` is ``None``. > + > + In addition to addresses in string form, any combination of > + :class:`.Address` and :class:`.Group` objects, singly or in a list, may be > + assigned to an address header. > + > + > +.. class:: Address(display_name='', username='', domain='', addr_spec=None): > + > + The class used to represent an email address. The general form of an > + address is:: > + > + [display_name] > + > + or:: > + > + username at domain > + > + where each part must conform to specific syntax rules spelled out in > + :rfc:`5322`. > + > + As a convenience *addr_spec* can be specified instead of *username* and > + *domain*, in which case *username* and *domain* will be parsed from the > + *addr_spec*. An *addr_spec* must be a properly RFC quoted string; if it is > + not ``Address`` will raise an error. Unicode characters are allowed and > + will be property encoded when serialized. However, per the RFCs, unicode is > + *not* allowed in the username portion of the address. > + > + .. attribute:: display_name > + > + The display name portion of the address, if any, with all quoting > + removed. If the address does not have a display name, this attribute > + will be an empty string. > + > + .. attribute:: username > + > + The ``username`` portion of the address, with all quoting removed. > + > + .. attribute:: domain > + > + The ``domain`` portion of the address. > + > + .. attribute:: addr_spec > + > + The ``username at domain`` portion of the address, correctly quoted > + for use as a bare address (the second form shown above). This > + attribute is not mutable. > + > + .. method:: __str__() > + > + The ``str`` value of the object is the address quoted according to > + :rfc:`5322` rules, but with no Content Transfer Encoding of any non-ASCII > + characters. > + > + > +.. class:: Group(display_name=None, addresses=None) > + > + The class used to represent an address group. The general form of an > + address group is:: > + > + display_name: [address-list]; > + > + As a convenience for processing lists of addresses that consist of a mixture > + of groups and single addresses, a ``Group`` may also be used to represent > + single addresses that are not part of a group by setting *display_name* to > + ``None`` and providing a list of the single address as *addresses*. > + > + .. attribute:: display_name > + > + The ``display_name`` of the group. If it is ``None`` and there is > + exactly one ``Address`` in ``addresses``, then the ``Group`` represents a > + single address that is not in a group. > + > + .. attribute:: addresses > + > + A possibly empty tuple of :class:`.Address` objects representing the > + addresses in the group. > + > + .. method:: __str__() > + > + The ``str`` value of a ``Group`` is formatted according to :rfc:`5322`, > + but with no Content Transfer Encoding of any non-ASCII characters. If > + ``display_name`` is none and there is a single ``Address`` in the > + ``addresses` list, the ``str`` value will be the same as the ``str`` of > + that single ``Address``. There's a lot of new stuff here: should have a versionadded? (Or do we need new markup for "provisional" stuff?) Georg From solipsis at pitrou.net Sat May 26 10:10:02 2012 From: solipsis at pitrou.net (Antoine Pitrou) Date: Sat, 26 May 2012 10:10:02 +0200 Subject: [Python-Dev] cpython: Implemented PEP 405 (Python virtual environments). References: Message-ID: <20120526101002.27a2854c@pitrou.net> On Sat, 26 May 2012 04:48:49 +0200 vinay.sajip wrote: > +_sys_home = getattr(sys, '_home', None) > +if _sys_home and os.name == 'nt' and _sys_home.lower().endswith('pcbuild'): > + _sys_home = os.path.dirname(_sys_home) What about pcbuild/amd64? Does this work on 64-bit builds? > +_sys_home = getattr(sys, '_home', None) > +if _sys_home and os.name == 'nt' and _sys_home.lower().endswith('pcbuild'): > + _sys_home = os.path.dirname(_sys_home) Same question here. > +#!/usr/bin/env python I don't think there should be a shebang line in a test file. > +# > +# Copyright 2011 by Vinay Sajip. All Rights Reserved. > +# > +# Permission to use, copy, modify, and distribute this software and its > +# documentation for any purpose and without fee is hereby granted, > +# provided that the above copyright notice appear in all copies and that > +# both that copyright notice and this permission notice appear in > +# supporting documentation, and that the name of Vinay Sajip > +# not be used in advertising or publicity pertaining to distribution > +# of the software without specific, written prior permission. > +# VINAY SAJIP DISCLAIMS ALL WARRANTIES WITH REGARD TO THIS SOFTWARE, INCLUDING > +# ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL > +# VINAY SAJIP BE LIABLE FOR ANY SPECIAL, INDIRECT OR CONSEQUENTIAL DAMAGES OR > +# ANY DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER > +# IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT > +# OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. Why the copyright boilerplate? > +# Use with a Python executable built from the Python fork at > +# > +# https://bitbucket.org/vinay.sajip/pythonv/ as follows: This needs to be updated. > +# You'll need an Internet connection (needed to download distribute_setup.py). > +# > +# The script will change to the environment's binary directory and run > +# > +# ./python distribute_setup.py [...] > +# This class will not be included in Python core; it's here for now to Well... either the comment should be fixed or the class removed. > + # XXX This option will be removed. > + # XXX This will be changed to EnvBuilder Same here. > diff --git a/Lib/venv/scripts/nt/pysetup3.exe b/Lib/venv/scripts/nt/pysetup3.exe > new file mode 100644 > index 0000000000000000000000000000000000000000..3f3c09ebc8e55f4ac3379041753cb34daef71892 > GIT binary patch What's this file and how was it compiled? Regards Antoine. From larry at hastings.org Sat May 26 11:07:18 2012 From: larry at hastings.org (Larry Hastings) Date: Sat, 26 May 2012 02:07:18 -0700 Subject: [Python-Dev] cpython: simplify and rewrite the zipimport part of 702009f3c0b1 a bit In-Reply-To: <20120525191431.0623368c@pitrou.net> References: <20120525191431.0623368c@pitrou.net> Message-ID: <4FC09D46.8030407@hastings.org> On 05/25/2012 10:14 AM, Antoine Pitrou wrote: > On Fri, 25 May 2012 18:57:57 +0200 > Georg Brandl wrote: >> This is probably minor, but wouldn't it make more sense to have those >> constants uppercased? At least that's the general style we have in >> the codebase for enum values. > +1, this surprised me too. FWIW I contributed the utime enum with the lowercase values. I don't uppercase enum values as a rule. Uppercasing preprocessor macros is a good idea because they're not safe. There are loads of ways they can produce unexpected behavior. So if something funny is going on, and the code involves some preprocessor slight-of-hand, those identifiers pop out at you and you know to double-check them. But enum values are as safe as houses. I think of them as equivalent to const ints, which I also don't uppercase. There's no need to draw attention to them. There's nothing in PEP 7 either way about enum nomenclature. But Benjamin has already uppercased these (and some other) enums, so I suppose the community has spoken. //arry/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Sat May 26 14:40:37 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sat, 26 May 2012 22:40:37 +1000 Subject: [Python-Dev] [Python-checkins] peps: PEP 421 is implemented. In-Reply-To: References: Message-ID: On Sat, May 26, 2012 at 5:14 PM, georg.brandl wrote: > http://hg.python.org/peps/rev/cba34504163d > changeset: ? 4441:cba34504163d > user: ? ? ? ?Georg Brandl > date: ? ? ? ?Sat May 26 09:15:01 2012 +0200 > summary: > ?PEP 421 is implemented. Did you mean to move 405 instead? 421 is accepted, but not implemented yet. Cheers, Nick. -- Nick Coghlan?? |?? ncoghlan at gmail.com?? |?? Brisbane, Australia From anfojp at gmail.com Sat May 26 14:42:23 2012 From: anfojp at gmail.com (Mr.T Beppu) Date: Sat, 26 May 2012 21:42:23 +0900 Subject: [Python-Dev] How to build a browser in Paython. cannot import webkit & object. Message-ID: I think that I will make a browser in Official Python (not MacPorts Python). What should I do in order to install Webkit for Official Python (not MacPorts Python) ? from tokyo Japan. -------------- next part -------------- An HTML attachment was scrubbed... URL: From phd at phdru.name Sat May 26 15:08:49 2012 From: phd at phdru.name (Oleg Broytman) Date: Sat, 26 May 2012 17:08:49 +0400 Subject: [Python-Dev] How to build a browser in Paython. cannot import webkit & object. In-Reply-To: References: Message-ID: <20120526130849.GA15398@iskra.aviel.ru> Hello. We are sorry but we cannot help you. This mailing list is to work on developing Python (adding new features to Python itself and fixing bugs); if you're having problems learning, understanding or using Python, please find another forum. Probably python-list/comp.lang.python mailing list/news group is the best place; there are Python developers who participate in it; you may get a faster, and probably more complete, answer there. See http://www.python.org/community/ for other lists/news groups/fora. Thank you for understanding. On Sat, May 26, 2012 at 09:42:23PM +0900, "Mr.T Beppu" wrote: > I think that I will make a browser in Official Python (not MacPorts > Python). > What should I do in order to install Webkit for Official Python (not > MacPorts Python) ? Oleg. -- Oleg Broytman http://phdru.name/ phd at phdru.name Programmers don't die, they just GOSUB without RETURN. From rdmurray at bitdance.com Sat May 26 15:37:28 2012 From: rdmurray at bitdance.com (R. David Murray) Date: Sat, 26 May 2012 09:37:28 -0400 Subject: [Python-Dev] cpython: #12586: add provisional email policy with new header parsing and folding. In-Reply-To: References: Message-ID: <20120526133729.25E4425008C@webabinitio.net> On Sat, 26 May 2012 09:14:07 +0200, Georg Brandl wrote: > Am 26.05.2012 00:44, schrieb r.david.murray: > > http://hg.python.org/cpython/rev/0189b9d2d6bc > > changeset: 77148:0189b9d2d6bc > > user: R David Murray > > date: Fri May 25 18:42:14 2012 -0400 > > summary: > > #12586: add provisional email policy with new header parsing and folding. [...] > > > > diff --git a/Doc/library/email.policy.rst b/Doc/library/email.policy.rst > > --- a/Doc/library/email.policy.rst > > +++ b/Doc/library/email.policy.rst > > @@ -306,3 +306,326 @@ > > ``7bit``, non-ascii binary data is CTE encoded using the ``unknown-8bit`` > > charset. Otherwise the original source header is used, with its existing > > line breaks and and any (RFC invalid) binary data it may contain. [...] > > There's a lot of new stuff here: should have a versionadded? (Or do we need new > markup for "provisional" stuff?) The entire policy module is new in 3.3 and has a versionadded at the top. New markup for provisional would be cool, though. I think eventually some of these docs will get factored out of policy, but that probably won't happen until it is no longer provisional. At that point I'll be doing a massive doc reorganization to deprecate many of the old APIs. Another option here is to consider 'policy' itself as the provisional package...except that to use it requires hooks in the other packages (the policy= keyword arguments). And I'm pretty satisfied with the API of the policy module itself, so I don't think it needs to be considered provisional. --David From brett at python.org Sat May 26 20:27:21 2012 From: brett at python.org (Brett Cannon) Date: Sat, 26 May 2012 14:27:21 -0400 Subject: [Python-Dev] [Python-checkins] cpython: issue 14660: Implement PEP 420, namespace packages. In-Reply-To: <20120525104130.19b416fb@limelight.wooz.org> References: <20120525104130.19b416fb@limelight.wooz.org> Message-ID: On Fri, May 25, 2012 at 10:41 AM, Barry Warsaw wrote: > On May 25, 2012, at 10:31 AM, Brett Cannon wrote: > > >Is documentation coming in a separate commit? > > Yes. I've been reworking the import machinery documentation; it's a > work-in-progress on the pep-420 feature clone ('importdocs' branch). I > made > some good progress and then got side-tracked, but I'm planning on getting > back > to it soon. OK, great! Something to keep in the back of your head, Barry, is the naming of importlib.find_loader(). Since its return value is not the same as what the PEP introduces it might stand for a name change (it's new in Python 3.3 so it can be whatever makes sense). Also just noticed that there is no update to importlib.abc.Finder for find_loader(), which I assume is because of the hasattr() check in PathFinder. That's fine, but it would be good to update the docs for ABC so people know that is an optional interface they can implement. -------------- next part -------------- An HTML attachment was scrubbed... URL: From solipsis at pitrou.net Sat May 26 21:28:51 2012 From: solipsis at pitrou.net (Antoine Pitrou) Date: Sat, 26 May 2012 21:28:51 +0200 Subject: [Python-Dev] Proposal for better SSL errors Message-ID: <20120526212851.7aefd334@pitrou.net> Hello, In http://bugs.python.org/issue14837 I have attached a proof-of-concept patch to improve the exceptions raised by the ssl module when OpenSSL signals an error. The current situation is quite dismal, since you get a sometimes cryptic error message with no viable opportunities for programmatic introspection: >>> ctx = ssl.SSLContext(ssl.PROTOCOL_TLSv1) >>> ctx.verify_mode = ssl.CERT_REQUIRED >>> sock = socket.create_connection(("svn.python.org", 443)) >>> sock = ctx.wrap_socket(sock) Traceback (most recent call last): [...] ssl.SSLError: [Errno 1] _ssl.c:420: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed SSLError instances only have a "errno" attribute which doesn't actually contain a meaningful value. With the posted patch, the above error becomes: >>> ctx = ssl.SSLContext(ssl.PROTOCOL_TLSv1) >>> ctx.verify_mode = ssl.CERT_REQUIRED >>> sock = socket.create_connection(("svn.python.org", 443)) >>> sock = ctx.wrap_socket(sock) Traceback (most recent call last): [...] ssl.SSLError: [Errno 5] [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:494) [88296 refs] Not only does the error string contain more valuable information (the mnemonics "SSL" and "CERTIFICATE_VERIFY_FAILED" indicate, respectively, in which subpart of OpenSSL and which precise error occurred), but they are also introspectable: >>> e = sys.last_value >>> e.library 'SSL' >>> e.reason 'CERTIFICATE_VERIFY_FAILED' (these mnemonics correspond to OpenSSL's own #define'd numeric codes. I find it more Pythonic to expose the mnemonics than the numbers, though. Of course, the numbers <-> mnemnonics mappings can be separately exposed) You'll note there is still a "Errno 5" in that error message; I don't really know what to do with it. Hard-wiring the errno attribute to something like None *might* break existing software, although that would be unlikely since the current errno value is quite meaningless and confusing (it has nothing to do with POSIX errnos). To clarify a bit my request, I am asking for feedback on the principle more than on the implementation right now. Regards Antoine. From solipsis at pitrou.net Sat May 26 21:43:11 2012 From: solipsis at pitrou.net (Antoine Pitrou) Date: Sat, 26 May 2012 21:43:11 +0200 Subject: [Python-Dev] cpython: Addressed some buildbot errors and comments on the checkin by Antoine on References: Message-ID: <20120526214311.7e71010b@pitrou.net> On Sat, 26 May 2012 21:39:36 +0200 vinay.sajip wrote: > return False > _sys_home = getattr(sys, '_home', None) > -if _sys_home and os.name == 'nt' and _sys_home.lower().endswith('pcbuild'): > +if _sys_home and os.name == 'nt' and \ > + _sys_home.lower().endswith(('pcbuild', 'pcbuild\\amd64')): > _sys_home = os.path.dirname(_sys_home) Ok, but is one os.path.dirname() call enough in the AMD64 case? It looks like you'd want to walk up two directories rather than one (but I might misunderstand). Regards Antoine. From tjreedy at udel.edu Sat May 26 23:44:08 2012 From: tjreedy at udel.edu (Terry Reedy) Date: Sat, 26 May 2012 17:44:08 -0400 Subject: [Python-Dev] Proposal for better SSL errors In-Reply-To: <20120526212851.7aefd334@pitrou.net> References: <20120526212851.7aefd334@pitrou.net> Message-ID: On 5/26/2012 3:28 PM, Antoine Pitrou wrote: > > Hello, > > In http://bugs.python.org/issue14837 I have attached a proof-of-concept > patch to improve the exceptions raised by the ssl module when OpenSSL > signals an error. The current situation is quite dismal, since you get > a sometimes cryptic error message with no viable opportunities for > programmatic introspection: > >>>> ctx = ssl.SSLContext(ssl.PROTOCOL_TLSv1) >>>> ctx.verify_mode = ssl.CERT_REQUIRED >>>> sock = socket.create_connection(("svn.python.org", 443)) >>>> sock = ctx.wrap_socket(sock) > Traceback (most recent call last): > [...] > ssl.SSLError: [Errno 1] _ssl.c:420: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed I agree that this is not easy to read ;-) > SSLError instances only have a "errno" attribute which doesn't actually > contain a meaningful value. > > With the posted patch, the above error becomes: > >>>> ctx = ssl.SSLContext(ssl.PROTOCOL_TLSv1) >>>> ctx.verify_mode = ssl.CERT_REQUIRED >>>> sock = socket.create_connection(("svn.python.org", 443)) >>>> sock = ctx.wrap_socket(sock) > Traceback (most recent call last): > [...] > ssl.SSLError: [Errno 5] [SSL: CERTIFICATE_VERIFY_FAILED] certificate > verify failed (_ssl.c:494) [88296 refs] Repeating the same reason in upper and lower case is unhelpful noise. Here is my suggested human-readable message. ssl.SSLError: in ssl sublibrary, certificate verify failed > Not only does the error string contain more valuable information (the > mnemonics "SSL" and "CERTIFICATE_VERIFY_FAILED" indicate, respectively, > in which subpart of OpenSSL and which precise error occurred), but they > are also introspectable: > >>>> e = sys.last_value >>>> e.library > 'SSL' Not being up on ssl sublibraries, I would tend to think that means the main ssl library that gets imported. If that is wrong, .sublibrary would be clearer to me, but knowledgable users may be fine with it as it is. >>>> e.reason > 'CERTIFICATE_VERIFY_FAILED' > > (these mnemonics correspond to OpenSSL's own #define'd numeric codes. I > find it more Pythonic to expose the mnemonics than the numbers, though. > Of course, the numbers<-> mnemnonics mappings can be separately > exposed) Python is not a 'minimize characters written' language and library. Inside an exception branch, if e.reason == 'CERTIFICATE_VERIFY_FAILED': is really clear, more so than any abbreviation. > You'll note there is still a "Errno 5" in that error message; I don't > really know what to do with it. Hard-wiring the errno attribute to > something like None *might* break existing software, although that > would be unlikely since the current errno value is quite meaningless > and confusing (it has nothing to do with POSIX errnos). Given what you have written, I think the aim should be to get rid of it. If you want to be conservative and not just delete it now, give SSLError a __getattr__(self,name) method that looks for name == 'errno' and when so, issues a DeprecationWarning "SSLError.errno is meaningless and will be removed in the future. It is currently fixed at 0." before returning 0. > To clarify a bit my request, I am asking for feedback on the principle > more than on the implementation right now. My view: better exception data is good. The exception class is useful both to people and programs. The exception message is mainly for people in tracebacks for uncaught exceptions. Other attributes are mainly for programs that catch the exception and need more information than just the class. Exceptions, like SSLErrors, reporting external conditions that a program can respond to, are prime candidates for such attributes. +1 to this enhancement. -- Terry Jan Reedy From solipsis at pitrou.net Sun May 27 01:23:48 2012 From: solipsis at pitrou.net (Antoine Pitrou) Date: Sun, 27 May 2012 01:23:48 +0200 Subject: [Python-Dev] Proposal for better SSL errors References: <20120526212851.7aefd334@pitrou.net> Message-ID: <20120527012348.0fcae455@pitrou.net> On Sat, 26 May 2012 17:44:08 -0400 Terry Reedy wrote: > > Traceback (most recent call last): > > [...] > > ssl.SSLError: [Errno 5] [SSL: CERTIFICATE_VERIFY_FAILED] certificate > > verify failed (_ssl.c:494) [88296 refs] > > Repeating the same reason in upper and lower case is unhelpful noise. > Here is my suggested human-readable message. > > ssl.SSLError: in ssl sublibrary, certificate verify failed I should have made clearer that "certificate verify failed" is a string returned by OpenSSL. In this case it's exactly similar to the mnemonic, which might not always be the case (I honestly have not done any research on this point). Also, the mnemonic is useful to know which reason to test for, when seen in a traceback. (here I'd draw a parallel with POSIX errnos, where it has been a common request for years for OSError's str() to display the errno mnemonic, such as e.g. ENOENT, rather than its number) > >>>> e = sys.last_value > >>>> e.library > > 'SSL' > > Not being up on ssl sublibraries, I would tend to think that means the > main ssl library that gets imported. If that is wrong, .sublibrary would > be clearer to me, but knowledgable users may be fine with it as it is. Well, it's called "library" in OpenSSL-speak, so I kept that name, but I am not particularly attached to it, so "sublibrary" could work too. As for what it means, "SSL" refers to the implementation of the SSL network protocol (or TLS), while other OpenSSL "libraries" cater with e.g. certificate management ("X509"), parsing of certificate files ("PEM"), etc. > > You'll note there is still a "Errno 5" in that error message; I don't > > really know what to do with it. Hard-wiring the errno attribute to > > something like None *might* break existing software, although that > > would be unlikely since the current errno value is quite meaningless > > and confusing (it has nothing to do with POSIX errnos). > > Given what you have written, I think the aim should be to get rid of it. I also think it's desireable. Thanks for sharing your opinion Antoine. From cs at zip.com.au Sun May 27 04:00:57 2012 From: cs at zip.com.au (Cameron Simpson) Date: Sun, 27 May 2012 12:00:57 +1000 Subject: [Python-Dev] Proposal for better SSL errors In-Reply-To: <20120526212851.7aefd334@pitrou.net> References: <20120526212851.7aefd334@pitrou.net> Message-ID: <20120527020057.GA5910@cskk.homeip.net> On 26May2012 21:28, Antoine Pitrou wrote: | Not only does the error string contain more valuable information (the | mnemonics "SSL" and "CERTIFICATE_VERIFY_FAILED" indicate, respectively, | in which subpart of OpenSSL and which precise error occurred), but they | are also introspectable: | | >>> e = sys.last_value | >>> e.library | 'SSL' | >>> e.reason | 'CERTIFICATE_VERIFY_FAILED' | | (these mnemonics correspond to OpenSSL's own #define'd numeric codes. I | find it more Pythonic to expose the mnemonics than the numbers, though. | Of course, the numbers <-> mnemnonics mappings can be separately | exposed) Would you be inclined to exposed both? Eg add .ssl_errno (or whatever short name is conventionally used in the SSL library itself, just as "errno" matches the POSIX error code name). | You'll note there is still a "Errno 5" in that error message; I don't | really know what to do with it. Hard-wiring the errno attribute to | something like None *might* break existing software, although that | would be unlikely since the current errno value is quite meaningless | and confusing (it has nothing to do with POSIX errnos). It is EIO ("I/O error"), and not inappropriate for a communictions failure. I don't think POSIX prohibits other library functions from setting errno, either. Cheers, -- Cameron Simpson DoD#743 http://www.cskk.ezoshosting.com/cs/ Principles have no real force except when one is well fed. - Mark Twain From greg at krypto.org Sun May 27 04:31:55 2012 From: greg at krypto.org (Gregory P. Smith) Date: Sat, 26 May 2012 19:31:55 -0700 Subject: [Python-Dev] Proposal for better SSL errors In-Reply-To: <20120526212851.7aefd334@pitrou.net> References: <20120526212851.7aefd334@pitrou.net> Message-ID: On Sat, May 26, 2012 at 12:28 PM, Antoine Pitrou wrote: > > Hello, > > In http://bugs.python.org/issue14837 I have attached a proof-of-concept > patch to improve the exceptions raised by the ssl module when OpenSSL > signals an error. The current situation is quite dismal, since you get > a sometimes cryptic error message with no viable opportunities for > programmatic introspection: > > >>> ctx = ssl.SSLContext(ssl.PROTOCOL_TLSv1) > >>> ctx.verify_mode = ssl.CERT_REQUIRED > >>> sock = socket.create_connection(("svn.python.org", 443)) > >>> sock = ctx.wrap_socket(sock) > Traceback (most recent call last): > [...] > ssl.SSLError: [Errno 1] _ssl.c:420: error:14090086:SSL > routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed > > > SSLError instances only have a "errno" attribute which doesn't actually > contain a meaningful value. > > With the posted patch, the above error becomes: > > >>> ctx = ssl.SSLContext(ssl.PROTOCOL_TLSv1) > >>> ctx.verify_mode = ssl.CERT_REQUIRED > >>> sock = socket.create_connection(("svn.python.org", 443)) > >>> sock = ctx.wrap_socket(sock) > Traceback (most recent call last): > [...] > ssl.SSLError: [Errno 5] [SSL: CERTIFICATE_VERIFY_FAILED] certificate > verify failed (_ssl.c:494) [88296 refs] > > > Not only does the error string contain more valuable information (the > mnemonics "SSL" and "CERTIFICATE_VERIFY_FAILED" indicate, respectively, > in which subpart of OpenSSL and which precise error occurred), but they > are also introspectable: > > >>> e = sys.last_value > >>> e.library > 'SSL' > >>> e.reason > 'CERTIFICATE_VERIFY_FAILED' > > (these mnemonics correspond to OpenSSL's own #define'd numeric codes. I > find it more Pythonic to expose the mnemonics than the numbers, though. > Of course, the numbers <-> mnemnonics mappings can be separately > exposed) > > You'll note there is still a "Errno 5" in that error message; I don't > really know what to do with it. Hard-wiring the errno attribute to > something like None *might* break existing software, although that > would be unlikely since the current errno value is quite meaningless > and confusing (it has nothing to do with POSIX errnos). > > > To clarify a bit my request, I am asking for feedback on the principle > more than on the implementation right now. > +1 I like it. It is better than what we have today. As for the misleading errno attribute, since it is not a posix errno I think it could be hard wired to 0 for SSL errors if and only if openssl is not actually setting it to something meaningful. The fact that an exception was raised is the error and what the exception was about in the case of SSL errors can come from your new library and reason attributes. There is a long term caveat to the overall approach: It still leaves the exception details being OpenSSL specific. If someone wants to ditch OpenSSL and use something else such as NSS (for example) in a future _ssl implementation what would its exception error info story look like? I would go ahead with this work regardless. It improves on what we have today. Defining a nicer way for SSL exceptions that is library agnostic is a larger project that should be done independent of making what we have today easier to work with. -gps -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Sun May 27 09:43:58 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sun, 27 May 2012 17:43:58 +1000 Subject: [Python-Dev] Hacking on the compiler in ways that break the frozen instance of importlib... Message-ID: So, I'm currently trying to fix the regression in handling __class__ references in 3.3. The first step in this is unwinding the name change for the closure reference so it goes back to using "__class__" (instead of "@__class__") before finding a different way to fix #12370. As near as I can tell, my efforts are getting killed by the frozen instance of importlib: if I make the change in the straightforward fashion, the frozen copy of FindLoader.load_module() uses zero-argument super(), which tries to look up "@__class__", which fails, which means initialisation goes pear-shaped. I'm going to fix it in this case by tweaking importlib._bootstrap to avoid using zero-argument super() (with an unmodified core) before applying the changes, but yeah, be warned that you're in for some fun when tinkering with any construct used by importlib._bootstrap and end up doing something that involves changing the PYC magic number. Cheers, Nick. -- Nick Coghlan?? |?? ncoghlan at gmail.com?? |?? Brisbane, Australia From benjamin at python.org Sun May 27 09:50:25 2012 From: benjamin at python.org (Benjamin Peterson) Date: Sun, 27 May 2012 00:50:25 -0700 Subject: [Python-Dev] Hacking on the compiler in ways that break the frozen instance of importlib... In-Reply-To: References: Message-ID: 2012/5/27 Nick Coghlan : > So, I'm currently trying to fix the regression in handling __class__ > references in 3.3. The first step in this is unwinding the name change > for the closure reference so it goes back to using "__class__" > (instead of "@__class__") before finding a different way to fix > #12370. > > As near as I can tell, my efforts are getting killed by the frozen > instance of importlib: if I make the change in the straightforward > fashion, the frozen copy of FindLoader.load_module() uses > zero-argument super(), which tries to look up "@__class__", which > fails, which means initialisation goes pear-shaped. > > I'm going to fix it in this case by tweaking importlib._bootstrap to > avoid using zero-argument super() (with an unmodified core) before > applying the changes, but yeah, be warned that you're in for some fun > when tinkering with any construct used by importlib._bootstrap and end > up doing something that involves changing the PYC magic number. Nasty! Perhaps freeze_importlib.py could be rewritten in C, so importlib could be recompiled when the compiler changes? -- Regards, Benjamin From martin at v.loewis.de Sun May 27 10:13:17 2012 From: martin at v.loewis.de (martin at v.loewis.de) Date: Sun, 27 May 2012 10:13:17 +0200 Subject: [Python-Dev] Hacking on the compiler in ways that break the frozen instance of importlib... In-Reply-To: References: Message-ID: <20120527101317.Horde.NAi0WbuWis5PweIduEel4eA@webmail.df.eu> > Nasty! Perhaps freeze_importlib.py could be rewritten in C, so > importlib could be recompiled when the compiler changes? Or we support bootstrapping from the source file, e.g. with an environment variable BOOTSTRAP_PY which points to the _bootstrap.py source. Regards, Martin From solipsis at pitrou.net Sun May 27 11:08:37 2012 From: solipsis at pitrou.net (Antoine Pitrou) Date: Sun, 27 May 2012 11:08:37 +0200 Subject: [Python-Dev] Proposal for better SSL errors In-Reply-To: References: <20120526212851.7aefd334@pitrou.net> Message-ID: <20120527110837.5b68c188@pitrou.net> On Sat, 26 May 2012 19:31:55 -0700 "Gregory P. Smith" wrote: > > There is a long term caveat to the overall approach: It still leaves the > exception details being OpenSSL specific. If someone wants to ditch > OpenSSL and use something else such as NSS (for example) in a future _ssl > implementation what would its exception error info story look like? That's a general issue with the ssl module. Unless we come up with our own API and abstraction layer (which has a cost in design effort and risks), we're following the OpenSSL architecture (e.g. the SSLContext idea). Regards Antoine. From solipsis at pitrou.net Sun May 27 11:29:15 2012 From: solipsis at pitrou.net (Antoine Pitrou) Date: Sun, 27 May 2012 11:29:15 +0200 Subject: [Python-Dev] Proposal for better SSL errors In-Reply-To: <20120527020057.GA5910@cskk.homeip.net> References: <20120526212851.7aefd334@pitrou.net> <20120527020057.GA5910@cskk.homeip.net> Message-ID: <20120527112915.27214175@pitrou.net> On Sun, 27 May 2012 12:00:57 +1000 Cameron Simpson wrote: > On 26May2012 21:28, Antoine Pitrou wrote: > | Not only does the error string contain more valuable information (the > | mnemonics "SSL" and "CERTIFICATE_VERIFY_FAILED" indicate, respectively, > | in which subpart of OpenSSL and which precise error occurred), but they > | are also introspectable: > | > | >>> e = sys.last_value > | >>> e.library > | 'SSL' > | >>> e.reason > | 'CERTIFICATE_VERIFY_FAILED' > | > | (these mnemonics correspond to OpenSSL's own #define'd numeric codes. I > | find it more Pythonic to expose the mnemonics than the numbers, though. > | Of course, the numbers <-> mnemnonics mappings can be separately > | exposed) > > Would you be inclined to exposed both? Eg add .ssl_errno (or whatever > short name is conventionally used in the SSL library itself, just as > "errno" matches the POSIX error code name). OpenSSL has a diversity of error codes. In this case there's the result code returned by OpenSSL's SSL_get_error(), which is 1 (SSL_ERROR_SSL) and is already recorded as "errno" (see below). There's the reason, as returned by OpenSSL's ERR_get_reason(), which is SSL_R_CERTIFICATE_VERIFY_FAILED. And I'm sure other oddities are lurking. > | You'll note there is still a "Errno 5" in that error message; I don't > | really know what to do with it. Hard-wiring the errno attribute to > | something like None *might* break existing software, although that > | would be unlikely since the current errno value is quite meaningless > | and confusing (it has nothing to do with POSIX errnos). > > It is EIO ("I/O error"), and not inappropriate for a communictions failure. That's a nice coincidence, but it's actually an OpenSSL-specific code. Also, there's a bug in the current patch, the right value should be 1 (SSL_ERROR_SSL) not 5. That said, I remember there's legacy code doing things like: except SSLError as e: if e.args[0] == SSL_ERROR_WANT_READ: ... so we can't ditch the errno, although in 3.3 you would write: except SSLWantReadError: ... Regards Antoine. From g.brandl at gmx.net Sun May 27 14:40:28 2012 From: g.brandl at gmx.net (Georg Brandl) Date: Sun, 27 May 2012 14:40:28 +0200 Subject: [Python-Dev] Hacking on the compiler in ways that break the frozen instance of importlib... In-Reply-To: References: Message-ID: Am 27.05.2012 09:43, schrieb Nick Coghlan: > So, I'm currently trying to fix the regression in handling __class__ > references in 3.3. The first step in this is unwinding the name change > for the closure reference so it goes back to using "__class__" > (instead of "@__class__") before finding a different way to fix > #12370. > > As near as I can tell, my efforts are getting killed by the frozen > instance of importlib: if I make the change in the straightforward > fashion, the frozen copy of FindLoader.load_module() uses > zero-argument super(), which tries to look up "@__class__", which > fails, which means initialisation goes pear-shaped. > > I'm going to fix it in this case by tweaking importlib._bootstrap to > avoid using zero-argument super() (with an unmodified core) before > applying the changes, but yeah, be warned that you're in for some fun > when tinkering with any construct used by importlib._bootstrap and end > up doing something that involves changing the PYC magic number. I hate to say it, but: I told y'all so :) http://mail.python.org/pipermail/python-dev/2012-April/118790.html Georg From solipsis at pitrou.net Sun May 27 19:51:45 2012 From: solipsis at pitrou.net (Antoine Pitrou) Date: Sun, 27 May 2012 19:51:45 +0200 Subject: [Python-Dev] Hacking on the compiler in ways that break the frozen instance of importlib... References: <20120527101317.Horde.NAi0WbuWis5PweIduEel4eA@webmail.df.eu> Message-ID: <20120527195145.2a14e7fe@pitrou.net> On Sun, 27 May 2012 10:13:17 +0200 martin at v.loewis.de wrote: > > Nasty! Perhaps freeze_importlib.py could be rewritten in C, so > > importlib could be recompiled when the compiler changes? > > Or we support bootstrapping from the source file, e.g. with an > environment variable BOOTSTRAP_PY which points to the _bootstrap.py > source. I've opened http://bugs.python.org/issue14928 and made it a release blocker. Regards Antoine. From cs at zip.com.au Mon May 28 04:52:42 2012 From: cs at zip.com.au (Cameron Simpson) Date: Mon, 28 May 2012 12:52:42 +1000 Subject: [Python-Dev] Proposal for better SSL errors In-Reply-To: <20120527112915.27214175@pitrou.net> References: <20120527112915.27214175@pitrou.net> Message-ID: <20120528025242.GA25283@cskk.homeip.net> On 27May2012 11:29, Antoine Pitrou wrote: | On Sun, 27 May 2012 12:00:57 +1000 | Cameron Simpson wrote: | > On 26May2012 21:28, Antoine Pitrou wrote: | > | You'll note there is still a "Errno 5" in that error message; I don't | > | really know what to do with it. Hard-wiring the errno attribute to | > | something like None *might* break existing software, although that | > | would be unlikely since the current errno value is quite meaningless | > | and confusing (it has nothing to do with POSIX errnos). | > | > It is EIO ("I/O error"), and not inappropriate for a communictions failure. | | That's a nice coincidence, but it's actually an OpenSSL-specific code. Oh. -- Cameron Simpson DoD#743 http://www.cskk.ezoshosting.com/cs/ Do not taunt Happy Fun Coder. From g.brandl at gmx.net Mon May 28 08:53:04 2012 From: g.brandl at gmx.net (Georg Brandl) Date: Mon, 28 May 2012 08:53:04 +0200 Subject: [Python-Dev] cpython (3.2): Issue12510: Attempting to get invalid tooltip no longer closes Idle. In-Reply-To: References: Message-ID: Am 28.05.2012 03:55, schrieb terry.reedy: > http://hg.python.org/cpython/rev/4a7582866735 > changeset: 77195:4a7582866735 > branch: 3.2 > parent: 77189:6737c2ca98ee > user: Terry Jan Reedy > date: Sun May 27 21:29:17 2012 -0400 > summary: > Issue12510: Attempting to get invalid tooltip no longer closes Idle. > Original patch by Roger Serwy. > > files: > Lib/idlelib/CallTips.py | 9 ++++++--- > Misc/NEWS | 3 +++ > 2 files changed, 9 insertions(+), 3 deletions(-) > > > diff --git a/Lib/idlelib/CallTips.py b/Lib/idlelib/CallTips.py > --- a/Lib/idlelib/CallTips.py > +++ b/Lib/idlelib/CallTips.py > @@ -110,7 +110,9 @@ > namespace.update(__main__.__dict__) > try: > return eval(name, namespace) > - except (NameError, AttributeError): > + # any exception is possible if evalfuncs True in open_calltip > + # at least Syntax, Name, Attribute, Index, and Key E. if not Is something missing here? The comment text seems cut off. > + except: > return None "except Exception" may be better here. Georg From vinay_sajip at yahoo.co.uk Mon May 28 19:25:10 2012 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Mon, 28 May 2012 17:25:10 +0000 (UTC) Subject: [Python-Dev] PEP 405 (Python Virtual Environments) and Windows script support Message-ID: In the recent check-in I made of the PEP 405 functionality, there is a Windows executable. Antoine asked what this was in his comment on the checkin, but I couldn't respond via Gmane (my usual method) as for some reason his post doesn't appear there. PEP 397 (Python launcher for Windows) has not yet been accepted, so there still needs to be some way of natively launching scripts in Windows which is equivalent to /path/to/venv/bin/foo. The way setuptools (and hence Distribute) does this is to shadow each script with an executable: whereas a script foo would be simply placed in /path/to/venv/bin/ on POSIX, on Windows, the files foo.exe and foo-script.py (or foo-script.pyw) are placed in \path\to\venv\Scripts. The user can run \path\to\venv\Scripts\foo just as on POSIX. The foo.exe file is just a copy of a stock launcher executable which finds its name from the C argv[0], and based in that name (foo in this case), invokes foo-script.py or foo-script.pyw with the appropriate Python interpreter. There are two versions of the launcher - console and Windows - built from the same source. These append -script.py and -script.pyw respectively, hard-coded into the executable. The idea is for packaging to do the appropriate copying of stock-launcher.exe to foo.exe when installing scripts. AFAIK this is not yet in packaging, but I implemented it in the pythonv branch (that code was not part of the PEP 405 implementation - it just allowed me to explore how venvs would work with packaging on Windows). The setuptools versions of these scripts are compiled using MinGW. I don't know if we can use them as is, and as the functionality is fairly simple, I implemented it in a separate project using MSVC: https://bitbucket.org/vinay.sajip/simple_launcher We may not need any of this, if PEP 397 is accepted in time. However, if it isn't, I would expect that something like these launchers will be needed. In my packaging code in the pythonv branch, there are different variants - t32.exe, t64.exe, w32.exe, w64.exe - one of which is picked as the source for copying to the destination when installing a script. These .exes are UPX-compressed versions of the executables created by the Microsoft compiler and linker (using static linking). Comments welcome, especially on whether Windows users agree that something like this is needed in the absence of PEP 397 in Python 3.3. Regards, Vinay Sajip From solipsis at pitrou.net Mon May 28 19:39:22 2012 From: solipsis at pitrou.net (Antoine Pitrou) Date: Mon, 28 May 2012 19:39:22 +0200 Subject: [Python-Dev] PEP 405 (Python Virtual Environments) and Windows script support References: Message-ID: <20120528193922.6eed19ab@pitrou.net> On Mon, 28 May 2012 17:25:10 +0000 (UTC) Vinay Sajip wrote: > > The foo.exe file is just a copy of a stock launcher executable which finds its > name from the C argv[0], and based in that name (foo in this case), invokes > foo-script.py or foo-script.pyw with the appropriate Python interpreter. Regardless of what the executable is or does, its source code must be included somewhere in the Python source tree (and, preferably, there should be a simple procedure to build the binaries). Regards Antoine. From vinay_sajip at yahoo.co.uk Mon May 28 21:37:55 2012 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Mon, 28 May 2012 19:37:55 +0000 (UTC) Subject: [Python-Dev] PEP 405 (Python Virtual Environments) and Windows script support References: <20120528193922.6eed19ab@pitrou.net> Message-ID: Antoine Pitrou pitrou.net> writes: > Regardless of what the executable is or does, its source code must be > included somewhere in the Python source tree (and, preferably, there > should be a simple procedure to build the binaries). I understand that. Does it need to be checked in right now? It will need integrating with the existing VS2010 solution file, and at the moment I cannot do that integration because I haven't yet got a full VS2010 build environment, just a VS2008 one. Regards, Vinay Sajip From solipsis at pitrou.net Mon May 28 21:40:57 2012 From: solipsis at pitrou.net (Antoine Pitrou) Date: Mon, 28 May 2012 21:40:57 +0200 Subject: [Python-Dev] PEP 405 (Python Virtual Environments) and Windows script support References: <20120528193922.6eed19ab@pitrou.net> Message-ID: <20120528214057.4bbdff62@pitrou.net> On Mon, 28 May 2012 19:37:55 +0000 (UTC) Vinay Sajip wrote: > Antoine Pitrou pitrou.net> writes: > > > Regardless of what the executable is or does, its source code must be > > included somewhere in the Python source tree (and, preferably, there > > should be a simple procedure to build the binaries). > > I understand that. Does it need to be checked in right now? Not necessarily, but OTOH, it is not really standard procedure to commit half-finished patches. Regards Antoine. From vinay_sajip at yahoo.co.uk Mon May 28 23:23:50 2012 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Mon, 28 May 2012 21:23:50 +0000 (UTC) Subject: [Python-Dev] PEP 405 (Python Virtual Environments) and Windows script support References: <20120528193922.6eed19ab@pitrou.net> <20120528214057.4bbdff62@pitrou.net> Message-ID: Antoine Pitrou pitrou.net> writes: > Not necessarily, but OTOH, it is not really standard procedure to > commit half-finished patches. I didn't want to miss the window for the upcoming alpha, and and I'm not sure exactly how things will pan out with respect to PEP 397 and packaging. If people generally feel strongly about this, I can delete the .exe and re-introduce it later if/when appropriate. It might have had a few rough edges, but I wouldn't have characterised the patch as "half-finished" - that seems a little harsh :-) Regards, Vinay Sajip From solipsis at pitrou.net Mon May 28 23:26:49 2012 From: solipsis at pitrou.net (Antoine Pitrou) Date: Mon, 28 May 2012 23:26:49 +0200 Subject: [Python-Dev] PEP 405 (Python Virtual Environments) and Windows script support References: <20120528193922.6eed19ab@pitrou.net> <20120528214057.4bbdff62@pitrou.net> Message-ID: <20120528232649.042305ac@pitrou.net> On Mon, 28 May 2012 21:23:50 +0000 (UTC) Vinay Sajip wrote: > Antoine Pitrou pitrou.net> writes: > > > Not necessarily, but OTOH, it is not really standard procedure to > > commit half-finished patches. > > I didn't want to miss the window for the upcoming alpha, and and I'm not sure > exactly how things will pan out with respect to PEP 397 and packaging. If people > generally feel strongly about this, I can delete the .exe and re-introduce it > later if/when appropriate. It might have had a few rough edges, but I wouldn't > have characterised the patch as "half-finished" - that seems a little harsh :-) Yes, I shouldn't have said that. "Unfinished" is more appropriate. Regards Antoine. From martin at v.loewis.de Tue May 29 01:15:41 2012 From: martin at v.loewis.de (martin at v.loewis.de) Date: Tue, 29 May 2012 01:15:41 +0200 Subject: [Python-Dev] PEP 405 (Python Virtual Environments) and Windows script support In-Reply-To: References: Message-ID: <20120529011541.Horde.1pTFHklCcOxPxAcdsgWDAgA@webmail.df.eu> > Comments welcome, especially on whether Windows users agree that > something like this is needed in the absence of PEP 397 in Python 3.3. AFAICT, there is no need to check in the binary into revision control. Instead, the Windows build process should create, package, and deploy them, and venv should then just expect that they are there. So I request that this checkin is reverted, preferably before the alpha release. I also agree with the fundamental principle that an open source project should never ever include binaries for which it doesn't also provide source code. If you cannot release the sources right now, do not release the binaries either. Regards, Martin From ncoghlan at gmail.com Tue May 29 01:20:40 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 29 May 2012 09:20:40 +1000 Subject: [Python-Dev] Missing command line documentation for new tools Message-ID: The documentation does not currently provide end user guides for pysetup or pyvenv under http://docs.python.org/dev/using/index.html This needs to be fixed before 3.3 is released. I've created the following issues as deferred blockers (since they don't need to be addressed before the alpha this week): pysetup: http://bugs.python.org/issue14940 pyvenv: http://bugs.python.org/issue14939 As the standard library comes to include more directly executed tools, we really need to focus on keeping the Setup & Usage docs up to date. The fact we've been historically lax on that front is no excuse for perpetuating the problem for new additions. Regards, Nick. -- Nick Coghlan?? |?? ncoghlan at gmail.com?? |?? Brisbane, Australia From ncoghlan at gmail.com Tue May 29 01:24:26 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 29 May 2012 09:24:26 +1000 Subject: [Python-Dev] PEP 405 (Python Virtual Environments) and Windows script support In-Reply-To: References: <20120528193922.6eed19ab@pitrou.net> Message-ID: On Tue, May 29, 2012 at 5:37 AM, Vinay Sajip wrote: > Antoine Pitrou pitrou.net> writes: > >> Regardless of what the executable is or does, its source code must be >> included somewhere in the Python source tree (and, preferably, there >> should be a simple procedure to build the binaries). > > I understand that. Does it need to be checked in right now? It will need > integrating with the existing VS2010 solution file, and at the moment I cannot > do that integration because I haven't yet got a full VS2010 build environment, > just a VS2008 one. It would have been better if the issue of script management on Windows had been raised in PEP 405 itself - I likely would have declared PEP 397 a dependency *before* accepting it (even if that meant the feature missed the alpha 4 deadline and first appeared in beta 1, or potentially even missed 3.3 altogether). However, I'm not going to withdraw the acceptance of the PEP over this - while I would have made a different decision at the time given the additional information (due to the general preference to treat Windows as a first class deployment target), I think reversing my decision now would make the situation worse rather than better. That means the important question is what needs to happen before beta 1 at the end of June. As I see it, we have two ways forward: 1. My preferred option: bring PEP 397 up to scratch as a specification for the behaviour of the Python launcher (perhaps with Vinay stepping up as a co-author to help Mark if need be), find a BDFL delegate (MvL? Brian Curtin?) and submit that PEP for acceptance within the next few weeks. The updated PEP 397 should include an explanation of exactly how it will help with the correct implementation of PEP 405 on Windows (this may involve making the launcher pyvenv aware). 2. The fallback option: remove the currently checked in build artifacts from source control and incorporate them into the normal Windows build processes (both the main VS 2010 process, and at least the now-legacy VS 2008 process) For alpha 4, I suggest going with MvL's suggestion - drop the binaries from Mercurial and accept that this aspect of PEP 405 simply won't work on Windows until the first beta. Regards, Nick. -- Nick Coghlan?? |?? ncoghlan at gmail.com?? |?? Brisbane, Australia From skippy.hammond at gmail.com Tue May 29 02:04:20 2012 From: skippy.hammond at gmail.com (Mark Hammond) Date: Tue, 29 May 2012 10:04:20 +1000 Subject: [Python-Dev] PEP 405 (Python Virtual Environments) and Windows script support In-Reply-To: References: <20120528193922.6eed19ab@pitrou.net> Message-ID: <4FC41284.8000703@gmail.com> Vinay originally wrote: > PEP 397 (Python launcher for Windows) has not yet been accepted, so there still > needs to be some way of natively launching scripts in Windows which is > equivalent to /path/to/venv/bin/foo. The way setuptools (and hence Distribute) > does this is to shadow each script with an executable: whereas a script foo > would be simply placed in /path/to/venv/bin/ on POSIX, on Windows, the files > foo.exe and foo-script.py (or foo-script.pyw) are placed in > \path\to\venv\Scripts. The user can run \path\to\venv\Scripts\foo just as on > POSIX. > > The foo.exe file is just a copy of a stock launcher executable which finds its > name from the C argv[0], and based in that name (foo in this case), invokes > foo-script.py or foo-script.pyw with the appropriate Python interpreter. I don't understand the relationship between this "stock launcher" and the PEP 397 launcher. They seem to have quite distinct requirements without much overlap. Specifically, I'm not aware that the current PEP 397 implementation could perform the same role as the "stock launcher" - IIUC, it has no special handling of the "-script" suffix or special logic based around its argv[0]. So unless I'm mistaken, even with PEP 397 accepted, either this "stock launcher" is still necessary anyway or the PEP398 launcher would need the addition of new features so it could replace the stock launcher. FWIW, Vinay and I exchanged some private mail recently about how to best integrate the PEP397 launcher with virtualenvs - and while we both agreed it would be nice, we couldn't come up with anything worthwhile. Having the launcher be aware of a virtualenv when invoked via file associations is problematic - for example, Windows Explorer is unlikely to have a virtualenv configured in its environment. Even when considering just command-line usage there are some edge-cases that make things problematic (eg, what if a script in a venv nominates a specific Python version via a shebang line? What if a venv is activated but the user/launcher attempts to execute a script outside the venv? etc.) On 29/05/2012 9:24 AM, Nick Coghlan wrote: ... > It would have been better if the issue of script management on Windows > had been raised in PEP 405 itself - I likely would have declared PEP > 397 a dependency *before* accepting it (even if that meant the feature > missed the alpha 4 deadline and first appeared in beta 1, or > potentially even missed 3.3 altogether). > > However, I'm not going to withdraw the acceptance of the PEP over this > - while I would have made a different decision at the time given the > additional information (due to the general preference to treat Windows > as a first class deployment target), I think reversing my decision now > would make the situation worse rather than better. > > That means the important question is what needs to happen before beta > 1 at the end of June. As I see it, we have two ways forward: > > 1. My preferred option: bring PEP 397 up to scratch as a specification > for the behaviour of the Python launcher (perhaps with Vinay stepping > up as a co-author to help Mark if need be), find a BDFL delegate (MvL? > Brian Curtin?) and submit that PEP for acceptance within the next few > weeks. The updated PEP 397 should include an explanation of exactly > how it will help with the correct implementation of PEP 405 on Windows > (this may involve making the launcher pyvenv aware). As above, it isn't clear to me how the additional complexity and list of caveats in real use make it worthwhile to have the PEP397 launcher pyvenv aware. > 2. The fallback option: remove the currently checked in build > artifacts from source control and incorporate them into the normal > Windows build processes (both the main VS 2010 process, and at least > the now-legacy VS 2008 process) > > For alpha 4, I suggest going with MvL's suggestion - drop the binaries > from Mercurial and accept that this aspect of PEP 405 simply won't > work on Windows until the first beta. Agreed - ISTM that this stock launcher is probably going to need to co-exist with the PEP397 launcher for the long term. Cheers, Mark From carl at oddbird.net Tue May 29 02:07:32 2012 From: carl at oddbird.net (Carl Meyer) Date: Mon, 28 May 2012 17:07:32 -0700 Subject: [Python-Dev] PEP 405 (Python Virtual Environments) and Windows script support In-Reply-To: References: <20120528193922.6eed19ab@pitrou.net> Message-ID: <4FC41344.2030505@oddbird.net> On 05/28/2012 04:24 PM, Nick Coghlan wrote: > It would have been better if the issue of script management on Windows > had been raised in PEP 405 itself - I likely would have declared PEP > 397 a dependency *before* accepting it (even if that meant the feature > missed the alpha 4 deadline and first appeared in beta 1, or > potentially even missed 3.3 altogether). > > However, I'm not going to withdraw the acceptance of the PEP over this > - while I would have made a different decision at the time given the > additional information (due to the general preference to treat Windows > as a first class deployment target), I think reversing my decision now > would make the situation worse rather than better. I think it's unfortunate that this issue (which is http://bugs.python.org/issue12394) has become entangled with PEP 405 at all, since AFAICT it is entirely orthogonal. This is a distutils2/packaging issue regarding how scripts are installed on Windows. It happens to be relevant when trying to install things into a PEP 405 venv on Windows, but it applies to a non-virtual Python installation on Windows every bit as much as it applies to a PEP 405 environment. In an earlier discussion with Vinay I thought we had agreed that it was an orthogonal issue and that this proposed patch for it would be removed from the PEP 405 reference implementation before it was merged to CPython trunk; I think that would have been preferable. This is why there is no mention of the issue in PEP 405 - it doesn't belong there, because it is not related. > That means the important question is what needs to happen before beta > 1 at the end of June. As I see it, we have two ways forward: > > 1. My preferred option: bring PEP 397 up to scratch as a specification > for the behaviour of the Python launcher (perhaps with Vinay stepping > up as a co-author to help Mark if need be), find a BDFL delegate (MvL? > Brian Curtin?) and submit that PEP for acceptance within the next few > weeks. The updated PEP 397 should include an explanation of exactly > how it will help with the correct implementation of PEP 405 on Windows > (this may involve making the launcher pyvenv aware). > > 2. The fallback option: remove the currently checked in build > artifacts from source control and incorporate them into the normal > Windows build processes (both the main VS 2010 process, and at least > the now-legacy VS 2008 process) > > For alpha 4, I suggest going with MvL's suggestion - drop the binaries > from Mercurial and accept that this aspect of PEP 405 simply won't > work on Windows until the first beta. Regardless, these sound like the right options moving forward, with the clarification that it is not any "aspect of PEP 405" that will not work until a fix is merged, it is simply an existing limitation of distutils2/packaging on Windows. And that if anything needs to be reverted, temporarily or permanently, it should not be all of the PEP 405 implementation, rather just this packaging fix. Carl -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 198 bytes Desc: OpenPGP digital signature URL: From ncoghlan at gmail.com Tue May 29 03:00:46 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 29 May 2012 11:00:46 +1000 Subject: [Python-Dev] PEP 405 (Python Virtual Environments) and Windows script support In-Reply-To: <4FC41344.2030505@oddbird.net> References: <20120528193922.6eed19ab@pitrou.net> <4FC41344.2030505@oddbird.net> Message-ID: On Tue, May 29, 2012 at 10:07 AM, Carl Meyer wrote: > On 05/28/2012 04:24 PM, Nick Coghlan wrote: >> It would have been better if the issue of script management on Windows >> had been raised in PEP 405 itself - I likely would have declared PEP >> 397 a dependency *before* accepting it (even if that meant the feature >> missed the alpha 4 deadline and first appeared in beta 1, or >> potentially even missed 3.3 altogether). >> >> However, I'm not going to withdraw the acceptance of the PEP over this >> - while I would have made a different decision at the time given the >> additional information (due to the general preference to treat Windows >> as a first class deployment target), I think reversing my decision now >> would make the situation worse rather than better. > > I think it's unfortunate that this issue (which is > http://bugs.python.org/issue12394) has become entangled with PEP 405 at > all, since AFAICT it is entirely orthogonal. This is a > distutils2/packaging issue regarding how scripts are installed on > Windows. It happens to be relevant when trying to install things into a > PEP 405 venv on Windows, but it applies to a non-virtual Python > installation on Windows every bit as much as it applies to a PEP 405 > environment. In an earlier discussion with Vinay I thought we had agreed > that it was an orthogonal issue and that this proposed patch for it > would be removed from the PEP 405 reference implementation before it was > merged to CPython trunk; I think that would have been preferable. > > This is why there is no mention of the issue in PEP 405 - it doesn't > belong there, because it is not related. Ah, thanks for the clarification. In that case: Vinay, please revert everything from the pyvenv commit that was actually related to issue #12394 rather than being part of the PEP 405 implementation. As Carl says, it's an unrelated change that needs to be discussed separately. Cheers, Nick. -- Nick Coghlan?? |?? ncoghlan at gmail.com?? |?? Brisbane, Australia From tjreedy at udel.edu Tue May 29 03:44:04 2012 From: tjreedy at udel.edu (Terry Reedy) Date: Mon, 28 May 2012 21:44:04 -0400 Subject: [Python-Dev] cpython (3.2): Issue12510: Attempting to get invalid tooltip no longer closes Idle. In-Reply-To: References: Message-ID: On 5/28/2012 2:53 AM, Georg Brandl wrote: > Am 28.05.2012 03:55, schrieb terry.reedy: >> namespace.update(__main__.__dict__) >> try: >> return eval(name, namespace) >> - except (NameError, AttributeError): >> + # any exception is possible if evalfuncs True in open_calltip >> + # at least Syntax, Name, Attribute, Index, and Key E. if not > > Is something missing here? The comment text seems cut off. There should be a ; at the end of the first line, but I think I will rewrite the comment instead. >> + except: >> return None > > "except Exception" may be better here. Idle's Shell catches all exceptions. I think the attempt to provide an optional help (a function signature) should too. -- Terry Jan Reedy From brian at python.org Tue May 29 03:48:17 2012 From: brian at python.org (Brian Curtin) Date: Mon, 28 May 2012 20:48:17 -0500 Subject: [Python-Dev] cpython (3.2): Issue12510: Attempting to get invalid tooltip no longer closes Idle. In-Reply-To: References: Message-ID: On Mon, May 28, 2012 at 8:44 PM, Terry Reedy wrote: >>> + ? ? ? ? ? ?except: >>> ? ? ? ? ? ? ? ? ?return None >> >> >> "except Exception" may be better here. > > > Idle's Shell catches all exceptions. I think the attempt to provide an > optional help (a function signature) should too. Can you explain what this means? You should probably not have a bare except - I'm not sure what IDLE has to do with it. From ncoghlan at gmail.com Tue May 29 04:31:16 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 29 May 2012 12:31:16 +1000 Subject: [Python-Dev] Missing command line documentation for new tools In-Reply-To: References: Message-ID: On Tue, May 29, 2012 at 9:20 AM, Nick Coghlan wrote: > As the standard library comes to include more directly executed tools, > we really need to focus on keeping the Setup & Usage docs up to date. > The fact we've been historically lax on that front is no excuse for > perpetuating the problem for new additions. Given an exchange on the tracker, I feel I should expand on this point. Historically, our Setup & Usage documentation has *only* covered the main Python executable, even though we actually install additional tools, including pydoc, idle, 2to3 and now pysetup and pyvenv, and provide additional documented and supported command line functionality via command line execution of certain modules. It is my view that this lack of centralised command line usage documentation is an *oversight*, not a deliberate policy that should be continued (hence the two new blocking issues for 3.3 as noted in my previous post). I've now created two more (lower priority) issues to cover the other officially supported command line interfaces that are currently missing from the setup & usage documentation: Existing scripts (pydoc, idle, 2to3): http://bugs.python.org/issue14944 Supported -m commands: http://bugs.python.org/issue14945 Cheers, Nick. -- Nick Coghlan?? |?? ncoghlan at gmail.com?? |?? Brisbane, Australia From merwok at netwok.org Tue May 29 04:34:06 2012 From: merwok at netwok.org (=?UTF-8?B?w4lyaWMgQXJhdWpv?=) Date: Mon, 28 May 2012 22:34:06 -0400 Subject: [Python-Dev] Missing command line documentation for new tools In-Reply-To: References: Message-ID: <4FC4359E.2060902@netwok.org> Le 28/05/2012 22:31, Nick Coghlan a ?crit : > Historically, our Setup & Usage documentation has *only* covered the > main Python executable, even though we actually install additional > tools, including pydoc, idle, 2to3 and now pysetup and pyvenv, and > provide additional documented and supported command line functionality > via command line execution of certain modules. > > It is my view that this lack of centralised command line usage > documentation is an *oversight*, not a deliberate policy that should > be continued (hence the two new blocking issues for 3.3 as noted in my > previous post). This makes sense. Let?s expand the Setup and Usage docs! Cheers From guido at python.org Tue May 29 04:45:23 2012 From: guido at python.org (Guido van Rossum) Date: Mon, 28 May 2012 19:45:23 -0700 Subject: [Python-Dev] cpython: simplify and rewrite the zipimport part of 702009f3c0b1 a bit In-Reply-To: <4FC09D46.8030407@hastings.org> References: <20120525191431.0623368c@pitrou.net> <4FC09D46.8030407@hastings.org> Message-ID: On Sat, May 26, 2012 at 2:07 AM, Larry Hastings wrote: > > On 05/25/2012 10:14 AM, Antoine Pitrou wrote: > > On Fri, 25 May 2012 18:57:57 +0200 > Georg Brandl wrote: > > This is probably minor, but wouldn't it make more sense to have those > constants uppercased? At least that's the general style we have in > the codebase for enum values. > > +1, this surprised me too. > > > FWIW I contributed the utime enum with the lowercase values.? I don't > uppercase enum values as a rule. > > Uppercasing preprocessor macros is a good idea because they're not safe. > There are loads of ways they can produce unexpected behavior.? So if > something funny is going on, and the code involves some preprocessor > slight-of-hand, those identifiers pop out at you and you know to > double-check them.? But enum values are as safe as houses.? I think of them > as equivalent to const ints, which I also don't uppercase.? There's no need > to draw attention to them. > > There's nothing in PEP 7 either way about enum nomenclature.? But Benjamin > has already uppercased these (and some other) enums, so I suppose the > community has spoken. I think the convention is that constants are uppercased -- enums are definitely constants. It helps the reader quickly to see what is variable and what is constant in an expression -- when I see x == 42, I know which is which, but when I see x == y, I don't. If I see x == Y, I know. -- --Guido van Rossum (python.org/~guido) From tjreedy at udel.edu Tue May 29 04:59:50 2012 From: tjreedy at udel.edu (Terry Reedy) Date: Mon, 28 May 2012 22:59:50 -0400 Subject: [Python-Dev] cpython (3.2): Issue12510: Attempting to get invalid tooltip no longer closes Idle. In-Reply-To: References: Message-ID: On 5/28/2012 9:48 PM, Brian Curtin wrote: > On Mon, May 28, 2012 at 8:44 PM, Terry Reedy wrote: snipped context: return eval(user_entered_expression, namespace) >>>> + except: >>>> return None >>> >>> >>> "except Exception" may be better here. >> >> >> Idle's Shell catches all exceptions. I think the attempt to provide an >> optional help (a function signature) should too. > > Can you explain what this means? What what means? I am not sure what you are asking. The issue might help http://bugs.python.org/issue12510 > You should probably not have a bare except Idle code already has many of them Some perhaps should not be, but I cannot tell with my current level of understanding of how Idle works. Would you prefer 'except BaseException:' ? > I'm not sure what IDLE has to do with it. This is a patch to Idle. tjr From ncoghlan at gmail.com Tue May 29 05:19:08 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 29 May 2012 13:19:08 +1000 Subject: [Python-Dev] [Python-checkins] cpython: Issue 14814: Add namespaces keyword arg to find(*) methods in _elementtree. In-Reply-To: References: Message-ID: On Tue, May 29, 2012 at 1:03 PM, eli.bendersky wrote: > http://hg.python.org/cpython/rev/7d252dbfbee3 > changeset: ? 77217:7d252dbfbee3 > user: ? ? ? ?Eli Bendersky > date: ? ? ? ?Tue May 29 06:02:56 2012 +0300 > summary: > ?Issue 14814: Add namespaces keyword arg to find(*) methods in _elementtree. > Add attrib keyword to Element and SubElement in _elementtree. > Patch developed with Ezio Melotti. I'm not sure which issue you intended to reference here, but I'm fairly sure the PEP 3144 integration issue wasn't it :) Cheers, Nick. -- Nick Coghlan?? |?? ncoghlan at gmail.com?? |?? Brisbane, Australia From tjreedy at udel.edu Tue May 29 05:31:27 2012 From: tjreedy at udel.edu (Terry Reedy) Date: Mon, 28 May 2012 23:31:27 -0400 Subject: [Python-Dev] Missing command line documentation for new tools In-Reply-To: References: Message-ID: On 5/28/2012 10:31 PM, Nick Coghlan wrote: > Given an exchange on the tracker, I feel I should expand on this point. > > Historically, our Setup& Usage documentation has *only* covered the > main Python executable, even though we actually install additional > tools, including pydoc, idle, 2to3 and now pysetup and pyvenv, and > provide additional documented and supported command line functionality > via command line execution of certain modules. > > It is my view that this lack of centralised command line usage > documentation is an *oversight*, not a deliberate policy that should > be continued (hence the two new blocking issues for 3.3 as noted in my > previous post). > > I've now created two more (lower priority) issues to cover the other > officially supported command line interfaces that are currently > missing from the setup& usage documentation: > Existing scripts (pydoc, idle, 2to3): http://bugs.python.org/issue14944 I added a preliminary, rough outline for idle to the issue. > Supported -m commands: http://bugs.python.org/issue14945 -- Terry Jan Reedy From g.brandl at gmx.net Tue May 29 08:26:43 2012 From: g.brandl at gmx.net (Georg Brandl) Date: Tue, 29 May 2012 08:26:43 +0200 Subject: [Python-Dev] PEP 405 (Python Virtual Environments) and Windows script support In-Reply-To: <20120529011541.Horde.1pTFHklCcOxPxAcdsgWDAgA@webmail.df.eu> References: <20120529011541.Horde.1pTFHklCcOxPxAcdsgWDAgA@webmail.df.eu> Message-ID: Am 29.05.2012 01:15, schrieb martin at v.loewis.de: >> Comments welcome, especially on whether Windows users agree that >> something like this is needed in the absence of PEP 397 in Python 3.3. > > AFAICT, there is no need to check in the binary into revision control. > Instead, the Windows build process should create, package, and deploy > them, and venv should then just expect that they are there. > > So I request that this checkin is reverted, preferably before the alpha > release. > > I also agree with the fundamental principle that an open source project > should never ever include binaries for which it doesn't also provide > source code. If you cannot release the sources right now, do not release > the binaries either. Agreed. Vinay, please either let me know when this is rectified (see also Nick's request about reverting #12394 specific parts of the commit), or revert the whole PEP 405 implementation for now, if the time is too short: I don't want to delay the alpha much longer. There is still time until beta after all. Georg From vinay_sajip at yahoo.co.uk Tue May 29 12:20:53 2012 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Tue, 29 May 2012 10:20:53 +0000 (UTC) Subject: [Python-Dev] =?utf-8?q?PEP_405_=28Python_Virtual_Environments=29_?= =?utf-8?q?and_Windows=09script_support?= References: <20120529011541.Horde.1pTFHklCcOxPxAcdsgWDAgA@webmail.df.eu> Message-ID: Georg Brandl gmx.net> writes: > Agreed. Vinay, please either let me know when this is rectified (see also > Nick's request about reverting #12394 specific parts of the commit), or > revert the whole PEP 405 implementation for now, if the time is too short: > I don't want to delay the alpha much longer. There is still time until > beta after all. I didn't put any of the #12394 functionality into the PEP 405 work that I committed; the pysetup3.exe was data - the scripts that are installed to a venv - and I just overlooked it. That has now been rectified in 01381723bc50 - the .exe is gone. Regards, Vinay Sajip From vinay_sajip at yahoo.co.uk Tue May 29 12:22:46 2012 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Tue, 29 May 2012 10:22:46 +0000 (UTC) Subject: [Python-Dev] PEP 405 (Python Virtual Environments) and Windows script support References: <20120528193922.6eed19ab@pitrou.net> <4FC41344.2030505@oddbird.net> Message-ID: Nick Coghlan gmail.com> writes: > In that case: Vinay, please revert everything from the pyvenv commit > that was actually related to issue #12394 rather than being part of > the PEP 405 implementation. As Carl says, it's an unrelated change > that needs to be discussed separately. There's nothing in there related to #12394 - just a pysetup3.exe file, which I had originally overlooked and have now removed. Regards, Vinay Sajip From vinay_sajip at yahoo.co.uk Tue May 29 12:37:05 2012 From: vinay_sajip at yahoo.co.uk (Vinay Sajip) Date: Tue, 29 May 2012 10:37:05 +0000 (UTC) Subject: [Python-Dev] PEP 405 (Python Virtual Environments) and Windows script support References: <20120528193922.6eed19ab@pitrou.net> <4FC41284.8000703@gmail.com> Message-ID: Mark Hammond gmail.com> writes: > I don't understand the relationship between this "stock launcher" and > the PEP 397 launcher. They seem to have quite distinct requirements > without much overlap. Specifically, I'm not aware that the current PEP > 397 implementation could perform the same role as the "stock launcher" - > IIUC, it has no special handling of the "-script" suffix or special > logic based around its argv[0]. > Actually the stock launcher's job is similar to the 397 launcher, though it doesn't address many of the things that are in PEP 397. The basic requirement is to run the correct Python for a script installed as part of a package; that's done by having shebang lines (set up during installation) which point to the correct Python. The stock launcher reads the shebang line and invokes the appropriate Python. It's a reimplementation of the launcher used in setuptools and a much simplified version of the 397 launcher, which I put together when exploring how packaging would work with venvs on Windows. In theory, if the PEP 397 launcher is installed, you don't need the stock launcher; any script installed by packaging (or setuptools/Distribute) in a venv should have the correct shebang line, and the PEP 397 launcher should do the right thing. I'm sorry for all the confusion I've caused here :-( Regards, Vinay Sajip From nadeem.vawda at gmail.com Tue May 29 17:50:31 2012 From: nadeem.vawda at gmail.com (Nadeem Vawda) Date: Tue, 29 May 2012 08:50:31 -0700 Subject: [Python-Dev] [Python-checkins] cpython: Issue #14744: Use the new _PyUnicodeWriter internal API to speed up str%args In-Reply-To: References: Message-ID: Since this changeset, building on Windows seems to be broken (see http://python.org/dev/buildbot/all/builders/x86%20XP-5%203.x/builds/450 for example). Nadeem From victor.stinner at gmail.com Tue May 29 18:55:24 2012 From: victor.stinner at gmail.com (Victor Stinner) Date: Tue, 29 May 2012 18:55:24 +0200 Subject: [Python-Dev] [Python-checkins] cpython: Issue #14744: Use the new _PyUnicodeWriter internal API to speed up str%args In-Reply-To: References: Message-ID: 2012/5/29 Nadeem Vawda : > Since this changeset, building on Windows seems to be broken (see > http://python.org/dev/buildbot/all/builders/x86%20XP-5%203.x/builds/450 > for example). The following changesets should fix the two errors, but not warnings. changeset: 77231:df0144f68d76 tag: tip user: Victor Stinner date: Tue May 29 18:53:56 2012 +0200 files: Objects/unicodeobject.c description: Issue #14744: Fix compilation on Windows (part 2) changeset: 77230:6abab1a103a6 user: Victor Stinner date: Tue May 29 18:51:10 2012 +0200 files: Objects/longobject.c description: Issue #14744: Fix compilation on Windows Victor From p.f.moore at gmail.com Tue May 29 19:45:38 2012 From: p.f.moore at gmail.com (Paul Moore) Date: Tue, 29 May 2012 18:45:38 +0100 Subject: [Python-Dev] [Python-checkins] cpython: Issue #14744: Use the new _PyUnicodeWriter internal API to speed up str%args In-Reply-To: References: Message-ID: On 29 May 2012 17:55, Victor Stinner wrote: > 2012/5/29 Nadeem Vawda : >> Since this changeset, building on Windows seems to be broken (see >> http://python.org/dev/buildbot/all/builders/x86%20XP-5%203.x/builds/450 >> for example). > > The following changesets should fix the two errors, but not warnings. > > changeset: ? 77231:df0144f68d76 > tag: ? ? ? ? tip > user: ? ? ? ?Victor Stinner > date: ? ? ? ?Tue May 29 18:53:56 2012 +0200 > files: ? ? ? Objects/unicodeobject.c > description: > Issue #14744: Fix compilation on Windows (part 2) > > > changeset: ? 77230:6abab1a103a6 > user: ? ? ? ?Victor Stinner > date: ? ? ? ?Tue May 29 18:51:10 2012 +0200 > files: ? ? ? Objects/longobject.c > description: > Issue #14744: Fix compilation on Windows Build worked, there are still a couple of test failures, but they are in test_venv and test_logging. http://www.python.org/dev/buildbot/builders/x86%20XP-5%203.x/builds/456/steps/test/logs/stdio Paul From avassalotti at gmail.com Tue May 29 11:19:20 2012 From: avassalotti at gmail.com (Alexandre Vassalotti) Date: Tue, 29 May 2012 05:19:20 -0400 Subject: [Python-Dev] What should we do with cProfile? Message-ID: Hello, As per PEP 3108, we were supposed to merge profile/cProfile into one unified module. I initially championed the change, but other things got in the way and I have never got to the point of a useful patch. I posted some code and outlined an approach how the merge could be done. However, there still a lot of details to be worked out. So I wondering whether we should abandon the change all together or attempt it for the next release. Personally, I slightly leaning on the former option since the two modules are actually fairly different underneath even though they are used similarly. And also, because it is getting late to make such backward incompatible changes. I am willing to volunteer to push the change though if it is still desired by the community. Cheers! http://bugs.python.org/issue2919 -------------- next part -------------- An HTML attachment was scrubbed... URL: From victor.stinner at gmail.com Wed May 30 00:44:05 2012 From: victor.stinner at gmail.com (Victor Stinner) Date: Wed, 30 May 2012 00:44:05 +0200 Subject: [Python-Dev] Optimize Unicode strings in Python 3.3 In-Reply-To: References: Message-ID: Hi, > ?* Use a Py_UCS4 buffer and then convert to the canonical form (ASCII, > UCS1 or UCS2). Approach taken by io.StringIO. io.StringIO is not only > used to write, but also to read and so a Py_UCS4 buffer is a good > compromise. > ?* PyAccu API: optimized version of chunks=[]; for ...: ... > chunks.append(text); return ''.join(chunks). > ?* Two steps: compute the length and maximum character of the output > string, allocate the output string and then write characters. str%args > was using it. > ?* Optimistic approach. Start with a ASCII buffer, enlarge and widen > (to UCS2 and then UCS4) the buffer when new characters are written. > Approach used by the UTF-8 decoder and by str%args since today. I ran extensive benchmarks on these 4 methods for str%args and str.format(args). The "two steps" method is not promising: parsing the format string twice is slower than other methods. The PyAccu API is faster than a Py_UCS4 buffer to concatenate a lot of strings, but it is slower in many other cases. I implemented the last method as the new internal "_PyUnicodeWriter" API: resize / widen the string buffer when writing new characters. I implemented more optimizations: * overallocate the buffer to limit the cost of realloc() * write characters directly in the buffer, avoid temporary buffers when possible (it is possible in most cases) * disable overallocation when formating the last argument * don't copy by value but copy by reference if the result is just a string (optimization already implemented indirectly in the PyAccu API) The _PyUnicodeWriter is the fastest method: it gives a speed up of 30% over the Py_UCS4 / PyAccu in general, and from 60% to 100% in some specific cases! I also compared str%args and str.format() with Python 2.7 (byte strings), 3.2 (UTF-16 or UCS-4) and 3.3 (PEP 393): Python 3.3 is as fast as Python 2.7 and sometimes faster! (Whereras Python 3.2 is 10 to 30% slower than Python 2 in general) -- I wrote a tool to run benchmarks and to compare results: https://bitbucket.org/haypo/misc/src/tip/python/benchmark.py https://bitbucket.org/haypo/misc/src/tip/python/bench_str.py Run the benchmark: ./python benchmark.py --file=FILE script bench_str.py Compare results: ./python benchmark.py compare_to FILE1 FILE2 FILE3 ... -- Python 2.7 vs 3.2 vs 3.3: http://bugs.python.org/file25685/REPORT_32BIT_2.7_3.2_writer http://bugs.python.org/file25687/REPORT_64BIT_2.7_3.2_writer http://bugs.python.org/file25757/report_windows7 Warning: For the Windows benchmark, Python 3.3 is compiled in 32 bits, whereas 2.7 and 3.2 are compiled in 64 bits (formatting integers is slower in 32 bits). -- UCS4 vs PyAccu vs _PyUnicodeWriter: http://bugs.python.org/file25686/REPORT_32BIT_3.3 http://bugs.python.org/file25688/REPORT_64BIT_3.3 Victor From ncoghlan at gmail.com Wed May 30 00:51:27 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 30 May 2012 08:51:27 +1000 Subject: [Python-Dev] Optimize Unicode strings in Python 3.3 In-Reply-To: References: Message-ID: On Wed, May 30, 2012 at 8:44 AM, Victor Stinner wrote: > I also compared str%args and str.format() with Python 2.7 (byte > strings), 3.2 (UTF-16 or UCS-4) and 3.3 (PEP 393): Python 3.3 is as > fast as Python 2.7 and sometimes faster! (Whereras Python 3.2 is 10 to > 30% slower than Python 2 in general) Very cool news! Cheers, Nick. -- Nick Coghlan?? |?? ncoghlan at gmail.com?? |?? Brisbane, Australia From steve at pearwood.info Wed May 30 01:58:41 2012 From: steve at pearwood.info (Steven D'Aprano) Date: Wed, 30 May 2012 09:58:41 +1000 Subject: [Python-Dev] What should we do with cProfile? In-Reply-To: References: Message-ID: <4FC562B1.7060002@pearwood.info> Alexandre Vassalotti wrote: > Hello, > > As per PEP 3108, we were supposed to merge profile/cProfile into one > unified module. I initially championed the change, but other things got in > the way and I have never got to the point of a useful patch. I posted some > code and outlined an approach how the merge could be done. However, there > still a lot of details to be worked out. > > So I wondering whether we should abandon the change all together or attempt > it for the next release. Personally, I slightly leaning on the former > option since the two modules are actually fairly different underneath even > though they are used similarly. And also, because it is getting late to > make such backward incompatible changes. > > I am willing to volunteer to push the change though if it is still desired > by the community. I don't have a strong opinion either way, but if it was worth merging them for 3.3, then it's worth merging them for 3.4. Don't let "I won't be finished in time for 3.3" stop you. -- Steven From tjreedy at udel.edu Wed May 30 02:08:22 2012 From: tjreedy at udel.edu (Terry Reedy) Date: Tue, 29 May 2012 20:08:22 -0400 Subject: [Python-Dev] cpython (3.2): Issue12510: Attempting to get invalid tooltip no longer closes Idle. In-Reply-To: References: Message-ID: On 5/28/2012 10:59 PM, Terry Reedy wrote: > On 5/28/2012 9:48 PM, Brian Curtin wrote: > > You should probably not have a bare except > > Idle code already has many of them At least 29 by grep. After further discussion, Roger Serwy and I have agreed that we should start reducing that number rather than increasing it. There is a nearby 'except:' that I believe should be 'except AttributeError:'. -- Terry Jan Reedy From eliben at gmail.com Wed May 30 05:01:17 2012 From: eliben at gmail.com (Eli Bendersky) Date: Wed, 30 May 2012 05:01:17 +0200 Subject: [Python-Dev] What should we do with cProfile? In-Reply-To: <4FC562B1.7060002@pearwood.info> References: <4FC562B1.7060002@pearwood.info> Message-ID: >> As per PEP 3108, we were supposed to merge profile/cProfile into one >> unified module. I initially championed the change, but other things got in >> the way and I have never got to the point of a useful patch. I posted some >> code and outlined an approach how the merge could be done. However, there >> still a lot of details to be worked out. >> >> So I wondering whether we should abandon the change all together or >> attempt >> it for the next release. Personally, I slightly leaning on the former >> option since the two modules are actually fairly different underneath even >> though they are used similarly. And also, because it is getting late to >> make such backward incompatible changes. >> >> I am willing to volunteer to push the change though if it is still desired >> by the community. > > > > I don't have a strong opinion either way, but if it was worth merging them > for 3.3, then it's worth merging them for 3.4. Don't let "I won't be > finished in time for 3.3" stop you. > +1 IMHO merging modules with their C accelerators is a worthy goal, because having two modules in the stdlib doing the same is confusing. At worst, the merged module can do everything it can in C and defer the things it can't do to Python (or defer *everything* on platforms where the C extension can't be built for some reason). And as Steven said, the 3.3 timeline doesn't have anything really special about it. Although there's still time until the beta release, even if this is done for 3.4 it will be great. Eli From ncoghlan at gmail.com Wed May 30 05:30:33 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 30 May 2012 13:30:33 +1000 Subject: [Python-Dev] What should we do with cProfile? In-Reply-To: References: <4FC562B1.7060002@pearwood.info> Message-ID: On Wed, May 30, 2012 at 1:01 PM, Eli Bendersky wrote: > And as Steven said, the 3.3 timeline doesn't have anything really > special about it. Although there's still time until the beta release, > even if this is done for 3.4 it will be great. Yep - there's a reason the 3.4 target gets added to the tracker even before 3.3 is out. It's precisely so we can bump things as soon as we reach a point where we're comparing the effort we think is needed to get them agreed on and/or bedded down properly and the time remaining before the first beta and officially say "not going to happen". I've already done that for Eugene Toder's proposed compiler enhancements. It's a promising approach, but it's *way* too late in the 3.3 cycle to be contemplating that kind of change. Cheers, Nick. -- Nick Coghlan?? |?? ncoghlan at gmail.com?? |?? Brisbane, Australia From v+python at g.nevcal.com Wed May 30 06:45:16 2012 From: v+python at g.nevcal.com (Glenn Linderman) Date: Tue, 29 May 2012 21:45:16 -0700 Subject: [Python-Dev] Optimize Unicode strings in Python 3.3 In-Reply-To: References: Message-ID: <4FC5A5DC.1000608@g.nevcal.com> On 5/29/2012 3:51 PM, Nick Coghlan wrote: > On Wed, May 30, 2012 at 8:44 AM, Victor Stinner > wrote: >> I also compared str%args and str.format() with Python 2.7 (byte >> strings), 3.2 (UTF-16 or UCS-4) and 3.3 (PEP 393): Python 3.3 is as >> fast as Python 2.7 and sometimes faster! (Whereras Python 3.2 is 10 to >> 30% slower than Python 2 in general) > Very cool news! > > Cheers, > Nick. > Very cool indeed! Thanks, Victor. I have programs that are just full of formatting operations, that will benefit from this work. -------------- next part -------------- An HTML attachment was scrubbed... URL: From cyberdupo56 at gmail.com Wed May 30 07:58:22 2012 From: cyberdupo56 at gmail.com (cyberdupo56 at gmail.com) Date: Tue, 29 May 2012 22:58:22 -0700 Subject: [Python-Dev] Property inheritance in Python Message-ID: <20120530055822.GA13259@avalon> Hi, I apologize if I violate (or am violating) some sacred mailing list rules. Torsten wrote back in 2010 (http://mail.python.org/pipermail/python-dev/2010-April/099672.html) about property inheritance behavior and super(). Specifically, only fget() behavior of properties work with super(), not fset() or fdel(). I apologize if there's some obvious reason this has not been addressed since then, but it seems to be expected behavior and most Pythonic, and confused me greatly when I ran into it recently. Torsten's original thread seems to have gone as if unseen, so I hesitantly bump this topic in the hopes of a resolution. Thanks, Allen From g.brandl at gmx.net Wed May 30 08:52:00 2012 From: g.brandl at gmx.net (Georg Brandl) Date: Wed, 30 May 2012 08:52:00 +0200 Subject: [Python-Dev] cpython: Issue #14744: Fix compilation on Windows (part 2) In-Reply-To: References: Message-ID: Am 29.05.2012 18:54, schrieb victor.stinner: > http://hg.python.org/cpython/rev/df0144f68d76 > changeset: 77231:df0144f68d76 > user: Victor Stinner > date: Tue May 29 18:53:56 2012 +0200 > summary: > Issue #14744: Fix compilation on Windows (part 2) All Windows buildbots are still failing the test suite, with an "invalid format string" ValueError, so I assume that is related to your string formatting speedups -- can you please have a look at it before I can tag the alpha? thanks, Georg From tjreedy at udel.edu Wed May 30 08:56:14 2012 From: tjreedy at udel.edu (Terry Reedy) Date: Wed, 30 May 2012 02:56:14 -0400 Subject: [Python-Dev] Property inheritance in Python In-Reply-To: <20120530055822.GA13259@avalon> References: <20120530055822.GA13259@avalon> Message-ID: On 5/30/2012 1:58 AM, cyberdupo56 at gmail.com wrote: > Hi, > > I apologize if I violate (or am violating) some sacred mailing list rules. > > Torsten wrote back in 2010 > (http://mail.python.org/pipermail/python-dev/2010-April/099672.html) about > property inheritance behavior and super(). Specifically, only fget() behavior > of properties work with super(), not fset() or fdel(). > > I apologize if there's some obvious reason this has not been addressed since > then, but it seems to be expected behavior and most Pythonic, and confused me > greatly when I ran into it recently. Torsten's original thread seems to have > gone as if unseen, so I hesitantly bump this topic in the hopes of a > resolution. This sort of idea should either be posted on python-ideas to get support or put on the tracker if the proposal is specific enough to write a patch (or both, with a patch making it more likely to happen). -- Terry Jan Reedy From storchaka at gmail.com Wed May 30 11:32:53 2012 From: storchaka at gmail.com (Serhiy Storchaka) Date: Wed, 30 May 2012 12:32:53 +0300 Subject: [Python-Dev] Optimize Unicode strings in Python 3.3 In-Reply-To: References: Message-ID: On 30.05.12 01:44, Victor Stinner wrote: > The "two steps" method is not promising: parsing the format string > twice is slower than other methods. The "1.5 steps" method is more promising -- first parse the format string in an efficient internal representation, and then allocate the output string and then write characters (or enlarge and widen the buffer, but with more information in any case). The internal representation can be cached (as for struct module) that for a repeated formatting will reduce the cost of parsing to zero. From storchaka at gmail.com Wed May 30 11:37:48 2012 From: storchaka at gmail.com (Serhiy Storchaka) Date: Wed, 30 May 2012 12:37:48 +0300 Subject: [Python-Dev] [Python-checkins] cpython: Issue #14744: Use the new _PyUnicodeWriter internal API to speed up str%args In-Reply-To: References: Message-ID: On 29.05.12 19:55, Victor Stinner wrote: > The following changesets should fix the two errors, but not warnings. Why not move `TYPE *p` declaration inside WRITE_DIGITS? From victor.stinner at gmail.com Wed May 30 13:26:14 2012 From: victor.stinner at gmail.com (Victor Stinner) Date: Wed, 30 May 2012 13:26:14 +0200 Subject: [Python-Dev] Optimize Unicode strings in Python 3.3 In-Reply-To: References: Message-ID: >> The "two steps" method is not promising: parsing the format string >> twice is slower than other methods. > > The "1.5 steps" method is more promising -- first parse the format string in > an efficient internal representation, and then allocate the output string > and then write characters (or enlarge and widen the buffer, but with more > information in any case). The internal representation can be cached (as for > struct module) that for a repeated formatting will reduce the cost of > parsing to zero. I implemented something like that, and it was not efficient and very complex. See for example the (incomplete) patch for str%args attached to the issue #14687: http://bugs.python.org/file25413/pyunicode_format-2.patch IMO this approach is less efficient than the "Unicode writer" approach because: - you have to create many substrings or temporary strings in the first step, or (worse) compute each argument twice: the writer approach is more efficient here because it avoids computing substrings and temporary strings - you have to parse the format string twice, or you have to write two versions of the code: first create a list of items, then concatenate items. The PyAccu method concatenates substrings at the end, it is less efficient than the writer method (e.g. it has to create a string of N fill characters to pad to WIDTH characters). - the code is more complex than the writer method (which is very similar to what is used in Python 2.7 and 3.2) I wrote a much more complex patch for str%args to remember variables of the first step to avoid most of the parsing work in the second step. The patch was very complex and hard to maintain. I chose to not publish it and try another approach (the Unicode writer). Note: I'm talking about str%args and str.format(args), the Unicode writer is not the most efficient method for *any* function creating strings! Victor From larry at hastings.org Wed May 30 14:06:28 2012 From: larry at hastings.org (Larry Hastings) Date: Wed, 30 May 2012 05:06:28 -0700 Subject: [Python-Dev] Python Language Summit, Florence, July 2012 Message-ID: <4FC60D44.7050602@hastings.org> Like Python? Like Italy? Like meetings? Then I've got a treat for you! I'll be chairing a Python Language Summit this July in historic Florence, Italy. It'll be on July 1st (the day before EuroPython starts) at the Grand Hotel Mediterraneo conference center. Language Summits are when the Python core contributors step away from their computers and get together for a day to argue in person. Email me if you're interested in attending; I can send you the final details and simultaneously get a rough headcount. Volunteers to take notes are greatly appreciated, //arry/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From storchaka at gmail.com Wed May 30 14:08:44 2012 From: storchaka at gmail.com (Serhiy Storchaka) Date: Wed, 30 May 2012 15:08:44 +0300 Subject: [Python-Dev] Optimize Unicode strings in Python 3.3 In-Reply-To: References: Message-ID: On 30.05.12 14:26, Victor Stinner wrote: > I implemented something like that, and it was not efficient and very complex. > > See for example the (incomplete) patch for str%args attached to the > issue #14687: > http://bugs.python.org/file25413/pyunicode_format-2.patch I have seen and commented on this patch. That's not what I'm talking about. > IMO this approach is less efficient than the "Unicode writer" approach because: I brought this approach is not for the opposition of the "Unicode writer", and for comparison with a straight "two steps" method. Of course, this can be combined with the "Unicode writer" to get the benefits of both methods. For example, you can advance to widen the output buffer to a width of format string, or disable overallocation when formating the last argument with non-empty suffix. > - you have to create many substrings or temporary strings in the > first step, or (worse) compute each argument twice: the writer > approach is more efficient here because it avoids computing substrings > and temporary strings Not on the first step but on the second step (and this is the only single step if you use caching), if you use the "Unicode writer". > - you have to parse the format string twice, or you have to write two > versions of the code: first create a list of items, then concatenate > items. The PyAccu method concatenates substrings at the end, it is > less efficient than the writer method (e.g. it has to create a string > of N fill characters to pad to WIDTH characters). The code is divided into the compiler and the interpreter. Only the first one parses the format string. See Modules/_struct.c. > - the code is more complex than the writer method (which is very > similar to what is used in Python 2.7 and 3.2) The code that uses the writer method to be rather complicated, the difference in the total complexity of these approaches has become smaller. ;-) But it is really not easy work, not assure success, so let waits for its time. From kristjan at ccpgames.com Wed May 30 16:03:44 2012 From: kristjan at ccpgames.com (=?iso-8859-1?Q?Kristj=E1n_Valur_J=F3nsson?=) Date: Wed, 30 May 2012 14:03:44 +0000 Subject: [Python-Dev] cpython: Issue #14744: Fix compilation on Windows (part 2) In-Reply-To: References: Message-ID: Curiously, the 64bit debug windows build cannot run the unittests either. There are crash bugs in the release build and I wanted to repro it using the debug version , but failed. This is likely to be related to the virtualenv changes, perhaps. see http://bugs.python.org/issue14952 > -----Original Message----- > From: python-dev-bounces+kristjan=ccpgames.com at python.org > [mailto:python-dev-bounces+kristjan=ccpgames.com at python.org] On > Behalf Of Georg Brandl > Sent: 30. ma? 2012 06:52 > To: python-dev at python.org > Subject: Re: [Python-Dev] cpython: Issue #14744: Fix compilation on > Windows (part 2) > > Am 29.05.2012 18:54, schrieb victor.stinner: > > http://hg.python.org/cpython/rev/df0144f68d76 > > changeset: 77231:df0144f68d76 > > user: Victor Stinner > > date: Tue May 29 18:53:56 2012 +0200 > > summary: > > Issue #14744: Fix compilation on Windows (part 2) > > All Windows buildbots are still failing the test suite, with an "invalid format > string" ValueError, so I assume that is related to your string formatting > speedups -- can you please have a look at it before I can tag the alpha? > > thanks, > Georg > > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python- > dev/kristjan%40ccpgames.com From rdmurray at bitdance.com Wed May 30 16:40:25 2012 From: rdmurray at bitdance.com (R. David Murray) Date: Wed, 30 May 2012 10:40:25 -0400 Subject: [Python-Dev] cpython: Issue #14744: Fix compilation on Windows (part 2) In-Reply-To: References: Message-ID: <20120530144025.A7CD625003F@webabinitio.net> On Wed, 30 May 2012 14:03:44 -0000, =?iso-8859-1?Q?Kristj=E1n_Valur_J=F3nsson?= wrote: > > -----Original Message----- > > From: python-dev-bounces+kristjan=ccpgames.com at python.org > > [mailto:python-dev-bounces+kristjan=ccpgames.com at python.org] On > > Behalf Of Georg Brandl > > Sent: 30. ma?? 2012 06:52 > > To: python-dev at python.org > > Subject: Re: [Python-Dev] cpython: Issue #14744: Fix compilation on > > Windows (part 2) > > > > Am 29.05.2012 18:54, schrieb victor.stinner: > > > http://hg.python.org/cpython/rev/df0144f68d76 > > > changeset: 77231:df0144f68d76 > > > user: Victor Stinner > > > date: Tue May 29 18:53:56 2012 +0200 > > > summary: > > > Issue #14744: Fix compilation on Windows (part 2) > > > > All Windows buildbots are still failing the test suite, with an "invalid format > > string" ValueError, so I assume that is related to your string formatting > > speedups -- can you please have a look at it before I can tag the alpha? > > Curiously, the 64bit debug windows build cannot run the unittests either. > There are crash bugs in the release build and I wanted to repro it using the debug version , but failed. > This is likely to be related to the virtualenv changes, perhaps. > see http://bugs.python.org/issue14952 The "ValueError: Invalid format string" was coming from a broken-on-windows test_calendar test I checked in. It is fixed now and the stable windows buildbots are green. --David From guido at python.org Wed May 30 17:28:27 2012 From: guido at python.org (Guido van Rossum) Date: Wed, 30 May 2012 08:28:27 -0700 Subject: [Python-Dev] Property inheritance in Python In-Reply-To: References: <20120530055822.GA13259@avalon> Message-ID: Agreed this could go on the tracker, but I don't see the need for a Python-Ideas detour. It seems worth fixing (and I vaguely recall there was some follow-up last time?). --Guido van Rossum (sent from Android phone) On May 29, 2012 11:58 PM, "Terry Reedy" wrote: > On 5/30/2012 1:58 AM, cyberdupo56 at gmail.com wrote: > >> Hi, >> >> I apologize if I violate (or am violating) some sacred mailing list rules. >> >> Torsten wrote back in 2010 >> (http://mail.python.org/**pipermail/python-dev/2010-**April/099672.html) >> about >> property inheritance behavior and super(). Specifically, only fget() >> behavior >> of properties work with super(), not fset() or fdel(). >> >> I apologize if there's some obvious reason this has not been addressed >> since >> then, but it seems to be expected behavior and most Pythonic, and >> confused me >> greatly when I ran into it recently. Torsten's original thread seems to >> have >> gone as if unseen, so I hesitantly bump this topic in the hopes of a >> resolution. >> > > This sort of idea should either be posted on python-ideas to get support > or put on the tracker if the proposal is specific enough to write a patch > (or both, with a patch making it more likely to happen). > > -- > Terry Jan Reedy > > ______________________________**_________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/**mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/**mailman/options/python-dev/** > guido%40python.org > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kristjan at ccpgames.com Wed May 30 17:46:29 2012 From: kristjan at ccpgames.com (=?iso-8859-1?Q?Kristj=E1n_Valur_J=F3nsson?=) Date: Wed, 30 May 2012 15:46:29 +0000 Subject: [Python-Dev] cpython: Issue #14744: Fix compilation on Windows (part 2) In-Reply-To: <20120530144025.A7CD625003F@webabinitio.net> References: <20120530144025.A7CD625003F@webabinitio.net> Message-ID: > -----Original Message----- > From: R. David Murray [mailto:rdmurray at bitdance.com] > > > The "ValueError: Invalid format string" was coming from a broken-on- > windows test_calendar test I checked in. It is fixed now and the stable > windows buildbots are green. > > --David Hm, there appear to be no x64 buildbots. K From brian at python.org Wed May 30 17:56:15 2012 From: brian at python.org (Brian Curtin) Date: Wed, 30 May 2012 10:56:15 -0500 Subject: [Python-Dev] cpython: Issue #14744: Fix compilation on Windows (part 2) In-Reply-To: References: <20120530144025.A7CD625003F@webabinitio.net> Message-ID: On Wed, May 30, 2012 at 10:46 AM, Kristj?n Valur J?nsson wrote: > > >> -----Original Message----- >> From: R. David Murray [mailto:rdmurray at bitdance.com] >> >> >> The "ValueError: Invalid format string" was coming from a broken-on- >> windows test_calendar test I checked in. ?It is fixed now and the stable >> windows buildbots are green. >> >> --David > > Hm, there appear to be no x64 buildbots. Antoine asked about one several weeks ago. I have a Windows 8 x64 machine that I am planning on setting up once I have the time. I'll try to shift things around and get it back into the fleet. The machine used to be the 2008 x64 build slave we had last year. From tjreedy at udel.edu Wed May 30 18:45:29 2012 From: tjreedy at udel.edu (Terry Reedy) Date: Wed, 30 May 2012 12:45:29 -0400 Subject: [Python-Dev] ur'string literal' in 3.3: make same as in 2.x? Message-ID: In 2.7, 'r' and 'ur' string literal prefixes have different effects: "When an 'r' or 'R' prefix is present, a character following a backslash is included in the string without change, and all backslashes are left in the string." "When an 'r' or 'R' prefix is used in conjunction with a 'u' or 'U' prefix, then the \uXXXX and \UXXXXXXXX escape sequences are processed while all other backslashes are left in the string." When 'u' was deleted in 3.0, the first meaning was kept. Was any thought given to restoring this difference in 3.3, along with restoring 'u', so that code using 'ur' prefixes would truly be cross-compatible? (I checked, and it has not been.) Cross-compatibility is the point of adding 'u' back, and this would give 'u' prefixes an actual, useful function even in Python 3. This issue came up today in python-list thread 'python3 raw strings and \u escapes' by 'rurpy', who uses 'ur' for re strings with unicode chars and is trying to port his code to 3.x. -- Terry Jan Reedy From martin at v.loewis.de Wed May 30 21:43:42 2012 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Wed, 30 May 2012 21:43:42 +0200 Subject: [Python-Dev] VS 11 Express is Metro only. In-Reply-To: References: <20120525003647.Horde.IQhrGKGZi1VPvrf-vtuimuA@webmail.df.eu> <20120525140622.Horde.U3HcfruWis5Pv3W_5yzihNA@webmail.df.eu> Message-ID: <4FC6786E.8000707@v.loewis.de> > I hereby predict that Microsoft will revert this decision, and that > VS Express > 11 will be able to build CPython. > > But will it be able to target Windows XP? I'll still need to try. I couldn't easily find a Windows XP installation to try out whether a hello world application runs on XP. Please understand that Visual Studio never had the notion of "targetting" an operating system. The Windows SDK has that notion, and it appears that targetting XP continues to be supported. Visual Studio C/C++ only targets Debug vs. Release, and Win32 vs. IA-64 vs. x64 (leaving CE/Windows Mobile/WP7 aside). The only place where platform support matters is the CRT, and this is what I still want to test. E.g. it might be that the C RT works on XP, and the C++ RT might use newer API. Regards, Martin From g.brandl at gmx.net Wed May 30 22:13:44 2012 From: g.brandl at gmx.net (Georg Brandl) Date: Wed, 30 May 2012 22:13:44 +0200 Subject: [Python-Dev] cpython: Issue #14744: Fix compilation on Windows (part 2) In-Reply-To: References: Message-ID: Am 30.05.2012 16:03, schrieb Kristj?n Valur J?nsson: > Curiously, the 64bit debug windows build cannot run the unittests either. > There are crash bugs in the release build and I wanted to repro it using the debug version , but failed. > This is likely to be related to the virtualenv changes, perhaps. > see http://bugs.python.org/issue14952 For alpha4, I don't see that as a blocker (better give the venv stuff testing on all other platforms than to revert it now), but of course it has to be resolved before beta. Here's hoping for a Windows x64 buildbot... Georg From g.brandl at gmx.net Wed May 30 22:14:22 2012 From: g.brandl at gmx.net (Georg Brandl) Date: Wed, 30 May 2012 22:14:22 +0200 Subject: [Python-Dev] cpython: Issue #14744: Fix compilation on Windows (part 2) In-Reply-To: <20120530144025.A7CD625003F@webabinitio.net> References: <20120530144025.A7CD625003F@webabinitio.net> Message-ID: Am 30.05.2012 16:40, schrieb R. David Murray: > On Wed, 30 May 2012 14:03:44 -0000, =?iso-8859-1?Q?Kristj=E1n_Valur_J=F3nsson?= wrote: >> > -----Original Message----- >> > From: python-dev-bounces+kristjan=ccpgames.com at python.org >> > [mailto:python-dev-bounces+kristjan=ccpgames.com at python.org] On >> > Behalf Of Georg Brandl >> > Sent: 30. ma?? 2012 06:52 >> > To: python-dev at python.org >> > Subject: Re: [Python-Dev] cpython: Issue #14744: Fix compilation on >> > Windows (part 2) >> > >> > Am 29.05.2012 18:54, schrieb victor.stinner: >> > > http://hg.python.org/cpython/rev/df0144f68d76 >> > > changeset: 77231:df0144f68d76 >> > > user: Victor Stinner >> > > date: Tue May 29 18:53:56 2012 +0200 >> > > summary: >> > > Issue #14744: Fix compilation on Windows (part 2) >> > >> > All Windows buildbots are still failing the test suite, with an "invalid format >> > string" ValueError, so I assume that is related to your string formatting >> > speedups -- can you please have a look at it before I can tag the alpha? >> >> Curiously, the 64bit debug windows build cannot run the unittests either. >> There are crash bugs in the release build and I wanted to repro it using the debug version , but failed. >> This is likely to be related to the virtualenv changes, perhaps. >> see http://bugs.python.org/issue14952 > > The "ValueError: Invalid format string" was coming from a > broken-on-windows test_calendar test I checked in. It is fixed now and > the stable windows buildbots are green. Indeed. Sorry Victor, bad David ;) Georg From kristjan at ccpgames.com Wed May 30 22:35:04 2012 From: kristjan at ccpgames.com (=?iso-8859-1?Q?Kristj=E1n_Valur_J=F3nsson?=) Date: Wed, 30 May 2012 20:35:04 +0000 Subject: [Python-Dev] cpython: Issue #14744: Fix compilation on Windows (part 2) In-Reply-To: References: <20120530144025.A7CD625003F@webabinitio.net> Message-ID: -----Original Message----- From: Brian Curtin [mailto:brian at python.org] Sent: 30. ma? 2012 15:56 To: Kristj?n Valur J?nsson Cc: python-dev at python.org Subject: Re: [Python-Dev] cpython: Issue #14744: Fix compilation on Windows (part 2) appear to be no x64 buildbots. >Antoine asked about one several weeks ago. I have a Windows 8 x64 machine that I am planning on setting up once I have the time. I'll try to shift things around and get it back into the >fleet. >The machine used to be the 2008 x64 build slave we had last year. I freely admit to not being a buildbot specialist, haven't touched those things for some years. Hence my perhaps silly question: Why do we need multiple windows machines, since they can cross compile and cross run left right and centre? Virtual PCs can be used to test compatibility with earlier versions. K From ncoghlan at gmail.com Wed May 30 23:03:49 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 31 May 2012 07:03:49 +1000 Subject: [Python-Dev] Property inheritance in Python In-Reply-To: References: <20120530055822.GA13259@avalon> Message-ID: On May 31, 2012 1:31 AM, "Guido van Rossum" wrote: > > Agreed this could go on the tracker, but I don't see the need for a Python-Ideas detour. +1 > It seems worth fixing (and I vaguely recall there was some follow-up last time?). You may be thinking of the abstract property fixes that went in - I'm fairly sure this specific problem report fell through the cracks. Cheers, Nick. -- Sent from my phone, thus the relative brevity :) -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Wed May 30 23:07:03 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 31 May 2012 07:07:03 +1000 Subject: [Python-Dev] ur'string literal' in 3.3: make same as in 2.x? In-Reply-To: References: Message-ID: Sounds reasonable and within the intent of the PEP, so a tracker issue would be the next step. Cheers, Nick. -- Sent from my phone, thus the relative brevity :) -------------- next part -------------- An HTML attachment was scrubbed... URL: From brian at python.org Wed May 30 23:14:54 2012 From: brian at python.org (Brian Curtin) Date: Wed, 30 May 2012 16:14:54 -0500 Subject: [Python-Dev] cpython: Issue #14744: Fix compilation on Windows (part 2) In-Reply-To: References: <20120530144025.A7CD625003F@webabinitio.net> Message-ID: On Wed, May 30, 2012 at 3:35 PM, Kristj?n Valur J?nsson wrote: > > > -----Original Message----- > From: Brian Curtin [mailto:brian at python.org] > Sent: 30. ma? 2012 15:56 > To: Kristj?n Valur J?nsson > Cc: python-dev at python.org > Subject: Re: [Python-Dev] cpython: Issue #14744: Fix compilation on Windows (part 2) > appear to be no x64 buildbots. > >>Antoine asked about one several weeks ago. I have a Windows 8 x64 machine that I am planning on setting up once I have the time. I'll try to shift things around and get it back into the >fleet. > >>The machine used to be the 2008 x64 build slave we had last year. > > I freely admit to not being a buildbot specialist, haven't touched those things for some years. ?Hence my perhaps silly question: ?Why do we need multiple windows machines, since they can cross compile and cross run left right and centre? > Virtual PCs can be used to test compatibility with earlier versions. > K I don't have the quality of hardware or the knowledge and time to set any of that up, so that's my excuse. I just turn my computer on and write code. When it doesn't work I reboot it. That's the extent of my system administration skills. From larry at hastings.org Thu May 31 02:05:30 2012 From: larry at hastings.org (Larry Hastings) Date: Wed, 30 May 2012 17:05:30 -0700 Subject: [Python-Dev] VS 11 Express is Metro only. In-Reply-To: <4FC6786E.8000707@v.loewis.de> References: <20120525003647.Horde.IQhrGKGZi1VPvrf-vtuimuA@webmail.df.eu> <20120525140622.Horde.U3HcfruWis5Pv3W_5yzihNA@webmail.df.eu> <4FC6786E.8000707@v.loewis.de> Message-ID: <4FC6B5CA.8080507@hastings.org> On 05/30/2012 12:43 PM, "Martin v. L?wis" wrote: > Please understand that Visual Studio never had the notion of > "targetting" an operating system. The Windows SDK has that notion, and > it appears that targetting XP continues to be supported. I may be misremembering, but--the C API of necessity calls the Win32 API. So if Microsoft chooses to call new Win32 APIs as part of the C API, this can force you to require a minimum Windows version. I dimly recall an incident some years back where part of the startup code for a C program (code called before main / WinMain) was calling a newish API, and thus programs generated with that version of the compiler could not support older Windows versions. //arry/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From nyamatongwe at me.com Thu May 31 01:21:29 2012 From: nyamatongwe at me.com (Neil Hodgson) Date: Thu, 31 May 2012 09:21:29 +1000 Subject: [Python-Dev] VS 11 Express is Metro only. In-Reply-To: <4FC6786E.8000707@v.loewis.de> References: <20120525003647.Horde.IQhrGKGZi1VPvrf-vtuimuA@webmail.df.eu> <20120525140622.Horde.U3HcfruWis5Pv3W_5yzihNA@webmail.df.eu> <4FC6786E.8000707@v.loewis.de> Message-ID: <2475FDDD-F47A-4DBE-8636-370F1AABB48D@me.com> Curt: >> But will it be able to target Windows XP? It will likely be possible in a reasonable manner at some point. From http://blogs.msdn.com/b/visualstudio/archive/2012/05/18/a-look-ahead-at-the-visual-studio-11-product-lineup-and-platform-support.aspx : """C++ developers can also use the multi-targeting capability included in Visual Studio 11 to continue using the compilers and libraries included in Visual Studio 2010 to target Windows XP and Windows Server 2003. Multi-targeting for C++ applications currently requires a side-by-side installation of Visual Studio 2010. Separately, we are evaluating options for C++ that would enable developers to directly target XP without requiring a side-by-side installation of Visual Studio 2010 and intend to deliver this update post-RTM. """ Martin v. L?wis wrote: > The only place where platform support matters is the CRT, and this is > what I still want to test. E.g. it might be that the C RT works on XP, > and the C++ RT might use newer API. C++ runtime is more dependent on post-XP features than C runtime but even the C runtime currently needs some thunks: http://tedwvc.wordpress.com/ Neil -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Thu May 31 05:11:14 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 31 May 2012 13:11:14 +1000 Subject: [Python-Dev] [Python-checkins] cpython: Make parameterized tests in email less hackish. In-Reply-To: References: Message-ID: I'm not clear on why this is a metaclass rather than a simple class decorator. On Thu, May 31, 2012 at 11:54 AM, r.david.murray wrote: > + ? ?In a _params dictioanry, the keys become part of the name of the generated > + ? ?tests. ?In a _params list, the values in the list are converted into a > + ? ?string by joining the string values of the elements of the tuple by '_' and > + ? ?converting any blanks into '_'s, and this become part of the name. ?The > + ? ?full name of a generated test is the portion of the _params name before the > + ? ?'_params' portion, plus an '_', plus the name derived as explained above. Your description doesn't match your examples or implementation. Assuming the example and implementation are correct (and they look more sensible than the currently described approach), I believe that last sentence should be: "The full name of a generated test is a 'test_' prefix, the portion of the test function name after the '_as_' separator, plus an '_', plus the name derived as explained above." Cheers, Nick. -- Nick Coghlan?? |?? ncoghlan at gmail.com?? |?? Brisbane, Australia From ericsnowcurrently at gmail.com Thu May 31 09:21:36 2012 From: ericsnowcurrently at gmail.com (Eric Snow) Date: Thu, 31 May 2012 01:21:36 -0600 Subject: [Python-Dev] a new type for sys.implementation Message-ID: The implementation for sys.implementation is going to use a new (but "private") type[1]. It's basically equivalent to the following: class namespace: def __init__(self, **kwargs): self.__dict__.update(kwargs) def __repr__(self): keys = (k for k in self.__dict__ if not k.startswith('_')) pairs = ("{}={!r}".format(k, self.__dict__[k]) for k in sorted(keys)) return "{}({})".format(type(self).__name__, ", ".join(pairs)) There were other options for the type, but the new type was the best fit and not hard to do. Neither the type nor its API is directly exposed publicly, but it is still accessible through "type(sys.implementation)". This brings me to a couple of questions: 1. should we make the new type un-instantiable (null out tp_new and tp_init)? 2. would it be feasible to officially add the type (or something like it) in 3.3 or 3.4? I've had quite a bit of positive feedback on the type (otherwise I wouldn't bother bringing it up). But, if we don't add a type like this to Python, I'd rather close the loophole and call it good (i.e. *not* introduce a new type by stealth). My preference is for the type (or equivalent) to be blessed in the language. Regardless of the specific details of such a type, my more immediate concern is with the impact on sys.implementation of python-dev's general sentiment in this space. -eric [1] http://bugs.python.org/issue14673 From steve at pearwood.info Thu May 31 11:28:42 2012 From: steve at pearwood.info (Steven D'Aprano) Date: Thu, 31 May 2012 19:28:42 +1000 Subject: [Python-Dev] a new type for sys.implementation In-Reply-To: References: Message-ID: <20120531092841.GA1567@ando> On Thu, May 31, 2012 at 01:21:36AM -0600, Eric Snow wrote: > 1. should we make the new type un-instantiable (null out tp_new and tp_init)? Please don't. "Consenting adults" and all that. There's little things more frustrating that having a working type that does exactly what you want, except that some B&D coder has made it un-instantiable. Leave it undocumented and/or a single underscore name for the time being, with an aim to make it public in 3.4 if it is useful and there are no major objections. -- Steven From mark at hotpy.org Thu May 31 12:26:07 2012 From: mark at hotpy.org (Mark Shannon) Date: Thu, 31 May 2012 11:26:07 +0100 Subject: [Python-Dev] a new type for sys.implementation In-Reply-To: References: Message-ID: <4FC7473F.5020802@hotpy.org> Eric Snow wrote: > The implementation for sys.implementation is going to use a new (but > "private") type[1]. It's basically equivalent to the following: Does this really need to be written in C rather than Python? > > class namespace: > def __init__(self, **kwargs): > self.__dict__.update(kwargs) > def __repr__(self): > keys = (k for k in self.__dict__ if not k.startswith('_')) > pairs = ("{}={!r}".format(k, self.__dict__[k]) for k in sorted(keys)) > return "{}({})".format(type(self).__name__, ", ".join(pairs)) > > There were other options for the type, but the new type was the best > fit and not hard to do. Neither the type nor its API is directly > exposed publicly, but it is still accessible through > "type(sys.implementation)". > > This brings me to a couple of questions: > > 1. should we make the new type un-instantiable (null out tp_new and tp_init)? > 2. would it be feasible to officially add the type (or something like > it) in 3.3 or 3.4? > > I've had quite a bit of positive feedback on the type (otherwise I > wouldn't bother bringing it up). But, if we don't add a type like > this to Python, I'd rather close the loophole and call it good (i.e. > *not* introduce a new type by stealth). My preference is for the type > (or equivalent) to be blessed in the language. Regardless of the > specific details of such a type, my more immediate concern is with the > impact on sys.implementation of python-dev's general sentiment in this > space. > > -eric > > > [1] http://bugs.python.org/issue14673 > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/mark%40hotpy.org From rdmurray at bitdance.com Thu May 31 12:58:46 2012 From: rdmurray at bitdance.com (R. David Murray) Date: Thu, 31 May 2012 06:58:46 -0400 Subject: [Python-Dev] [Python-checkins] cpython: Make parameterized tests in email less hackish. In-Reply-To: References: Message-ID: <20120531105847.05274250072@webabinitio.net> On Thu, 31 May 2012 13:11:14 +1000, Nick Coghlan wrote: > I'm not clear on why this is a metaclass rather than a simple class decorator. Because I didn't think of it. I don't (yet) think of "class" and "decorator" in the same sentence :) > On Thu, May 31, 2012 at 11:54 AM, r.david.murray > wrote: > > + ?? ??In a _params dictioanry, the keys become part of the name of the generated > > + ?? ??tests. ??In a _params list, the values in the list are converted into a > > + ?? ??string by joining the string values of the elements of the tuple by '_' and > > + ?? ??converting any blanks into '_'s, and this become part of the name. ??The > > + ?? ??full name of a generated test is the portion of the _params name before the > > + ?? ??'_params' portion, plus an '_', plus the name derived as explained above. > > Your description doesn't match your examples or implementation. > Assuming the example and implementation are correct (and they look > more sensible than the currently described approach), I believe that > last sentence should be: > > "The full name of a generated test is a 'test_' prefix, the portion > of the test function name after the '_as_' separator, plus an '_', > plus the name derived as explained above." Oops, yes. Thanks for the catch. --David From ncoghlan at gmail.com Thu May 31 14:31:10 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 31 May 2012 22:31:10 +1000 Subject: [Python-Dev] a new type for sys.implementation In-Reply-To: <4FC7473F.5020802@hotpy.org> References: <4FC7473F.5020802@hotpy.org> Message-ID: On Thu, May 31, 2012 at 8:26 PM, Mark Shannon wrote: > Eric Snow wrote: >> >> The implementation for sys.implementation is going to use a new (but >> "private") type[1]. ?It's basically equivalent to the following: > > > Does this really need to be written in C rather than Python? Yes, because we want to use it in the sys module. As you get lower down in the interpreter stack, implementing things in Python actually starts getting painful because of bootstrapping issues (e.g. that's why both _structseq and collections.namedtuple exist). Personally, I suggest we just expose the new type as types.SimpleNamespace (implemented in Lib/types.py as "SimpleNamespace = type(sys.implementation)" and call it done. Cheers, Nick. -- Nick Coghlan?? |?? ncoghlan at gmail.com?? |?? Brisbane, Australia From barry at python.org Thu May 31 17:02:23 2012 From: barry at python.org (Barry Warsaw) Date: Thu, 31 May 2012 11:02:23 -0400 Subject: [Python-Dev] a new type for sys.implementation In-Reply-To: References: Message-ID: <20120531110223.2c3a0bf1@limelight.wooz.org> On May 31, 2012, at 01:21 AM, Eric Snow wrote: >The implementation for sys.implementation is going to use a new (but >"private") type[1]. It's basically equivalent to the following: > >class namespace: > def __init__(self, **kwargs): > self.__dict__.update(kwargs) > def __repr__(self): > keys = (k for k in self.__dict__ if not k.startswith('_')) > pairs = ("{}={!r}".format(k, self.__dict__[k]) for k in sorted(keys)) > return "{}({})".format(type(self).__name__, ", ".join(pairs)) > >There were other options for the type, but the new type was the best >fit and not hard to do. Neither the type nor its API is directly >exposed publicly, but it is still accessible through >"type(sys.implementation)". I did the initial review of the four patches that Eric uploaded and I agreed with him that this was the best fit for sys.implementation. (I need to review his updated patch, which I'll try to get to later today.) >This brings me to a couple of questions: > >1. should we make the new type un-instantiable (null out tp_new and tp_init)? I don't think this is necessary. >2. would it be feasible to officially add the type (or something like >it) in 3.3 or 3.4? I wouldn't be against it, but the implementation above (or really, the C equivalent in Eric's patch) isn't quite appropriate to be that type. Specifically, while I think that filtering out _names in the repr is fine for sys.implementation, it would not be appropriate for a generalized, public type. OTOH, I'd have no problem just dropping that detail from sys.implementation too. (Note of course that even with that, you can get the full repr via sys.implementation.__dict__.) Cheers, -Barry From barry at python.org Thu May 31 17:06:02 2012 From: barry at python.org (Barry Warsaw) Date: Thu, 31 May 2012 11:06:02 -0400 Subject: [Python-Dev] a new type for sys.implementation In-Reply-To: References: <4FC7473F.5020802@hotpy.org> Message-ID: <20120531110602.176b7524@limelight.wooz.org> On May 31, 2012, at 10:31 PM, Nick Coghlan wrote: >Personally, I suggest we just expose the new type as >types.SimpleNamespace (implemented in Lib/types.py as "SimpleNamespace >= type(sys.implementation)" and call it done. Great idea, +1. Eric, if you want to remove the special case for _names in the repr, and update your patch to include types.SimpleNamespace, I'd happily review it again and support its inclusion. Cheers, -Barry From ericsnowcurrently at gmail.com Thu May 31 17:23:37 2012 From: ericsnowcurrently at gmail.com (Eric Snow) Date: Thu, 31 May 2012 09:23:37 -0600 Subject: [Python-Dev] a new type for sys.implementation In-Reply-To: <20120531110602.176b7524@limelight.wooz.org> References: <4FC7473F.5020802@hotpy.org> <20120531110602.176b7524@limelight.wooz.org> Message-ID: On Thu, May 31, 2012 at 9:06 AM, Barry Warsaw wrote: > On May 31, 2012, at 10:31 PM, Nick Coghlan wrote: > >>Personally, I suggest we just expose the new type as >>types.SimpleNamespace (implemented in Lib/types.py as "SimpleNamespace >>= type(sys.implementation)" and call it done. > > Great idea, +1. > > Eric, if you want to remove the special case for _names in the repr, and > update your patch to include types.SimpleNamespace, I'd happily review it > again and support its inclusion. Will do. -eric From martin at v.loewis.de Thu May 31 18:47:50 2012 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Thu, 31 May 2012 18:47:50 +0200 Subject: [Python-Dev] VS 11 Express is Metro only. In-Reply-To: References: <20120525003647.Horde.IQhrGKGZi1VPvrf-vtuimuA@webmail.df.eu> <20120525140622.Horde.U3HcfruWis5Pv3W_5yzihNA@webmail.df.eu> Message-ID: <4FC7A0B6.7050705@v.loewis.de> > I hereby predict that Microsoft will revert this decision, and that > VS Express 11 will be able to build CPython. > > But will it be able to target Windows XP? I have now tried, and it seems that the chances are really low (unless you use the VS 2010 tool chain, in which case you can just as well use VS 2010 Express in the first place). The VS 11 linker sets the "OS version" and "subsystem version" to 6.0, which means that XP refuses to recognize the files as executables. While the /subsystem switch allows to specify a different version, specifying 5.02 (needed for XP) gives an error that this is smaller than the minimum supported version. So for that reason alone, VS 11 cannot produce binaries that work on XP, but that would be easy to change for MS. In addition, the CRT uses various API in its startup code already that are Vista+. I already mentioned GetTickCount64, which is used to initialize the security cookie (for /GS). In addition, TLS is now implemented using FlsAlloc to better support fibers, which is also Vista+. This dependency cannot be easily broken, except to access FlsAlloc through LoadLibrary/GetProcAddress or weak externals. There may be more dependencies on Vista+. Regards, Martin From benjamin at python.org Thu May 31 18:56:28 2012 From: benjamin at python.org (Benjamin Peterson) Date: Thu, 31 May 2012 09:56:28 -0700 Subject: [Python-Dev] a new type for sys.implementation In-Reply-To: References: <4FC7473F.5020802@hotpy.org> Message-ID: 2012/5/31 Nick Coghlan : > On Thu, May 31, 2012 at 8:26 PM, Mark Shannon wrote: >> Eric Snow wrote: >>> >>> The implementation for sys.implementation is going to use a new (but >>> "private") type[1]. ?It's basically equivalent to the following: >> >> >> Does this really need to be written in C rather than Python? > > Yes, because we want to use it in the sys module. As you get lower > down in the interpreter stack, implementing things in Python actually > starts getting painful because of bootstrapping issues (e.g. that's > why both _structseq and collections.namedtuple exist). sys.implementation could be added by site or some other startup file. -- Regards, Benjamin From georg at python.org Thu May 31 22:40:59 2012 From: georg at python.org (Georg Brandl) Date: Thu, 31 May 2012 22:40:59 +0200 Subject: [Python-Dev] [RELEASED] Python 3.3.0 alpha 4 Message-ID: <4FC7D75B.5080309@python.org> On behalf of the Python development team, I'm happy to announce the fourth alpha release of Python 3.3.0. This is a preview release, and its use is not recommended in production settings. Python 3.3 includes a range of improvements of the 3.x series, as well as easier porting between 2.x and 3.x. Major new features and changes in the 3.3 release series are: * PEP 380, syntax for delegating to a subgenerator ("yield from") * PEP 393, flexible string representation (doing away with the distinction between "wide" and "narrow" Unicode builds) * A C implementation of the "decimal" module, with up to 80x speedup for decimal-heavy applications * The import system (__import__) is based on importlib by default * The new "packaging" module (also known as distutils2, and released standalone under this name), implementing the new packaging formats and deprecating "distutils" * The new "lzma" module with LZMA/XZ support * PEP 405, virtual environment support in core * PEP 420, namespace package support * PEP 3151, reworking the OS and IO exception hierarchy * PEP 3155, qualified name for classes and functions * PEP 409, suppressing exception context * PEP 414, explicit Unicode literals to help with porting * PEP 418, extended platform-independent clocks in the "time" module * PEP 412, a new key-sharing dictionary implementation that significantly saves memory for object-oriented code * The new "faulthandler" module that helps diagnosing crashes * The new "unittest.mock" module * The new "ipaddress" module * A "collections.ChainMap" class for linking mappings to a single unit * Wrappers for many more POSIX functions in the "os" and "signal" modules, as well as other useful functions such as "sendfile()" * Hash randomization, introduced in earlier bugfix releases, is now switched on by default For a more extensive list of changes in 3.3.0, see http://docs.python.org/3.3/whatsnew/3.3.html (*) To download Python 3.3.0 visit: http://www.python.org/download/releases/3.3.0/ Please consider trying Python 3.3.0 with your code and reporting any bugs you may notice to: http://bugs.python.org/ Enjoy! (*) Please note that this document is usually finalized late in the release cycle and therefore may have stubs and missing entries at this point. -- Georg Brandl, Release Manager georg at python.org (on behalf of the entire python-dev team and 3.3's contributors)