From jcea at jcea.es Wed Aug 1 05:58:06 2012
From: jcea at jcea.es (Jesus Cea)
Date: Wed, 01 Aug 2012 05:58:06 +0200
Subject: [Python-Dev] HTTPS repositories failing when using selfsigned certs
Message-ID: <5018A94E.1050104@jcea.es>

My mercurial clone is , and today I can't create a patch from it (in the bug tracker). No explanation in the web interface, but checking the source code of the resulting page, I see an SSL certificate failure.

So it looks like bugs.python.org is now verifying repository certificates.

My certificate is self-signed and, moreover, it is behind an SNI server, so the certificate python.org is getting is a self-signed "jcea.es" certificate.

What can I do, besides buying a "real" cert? Do we have a certificate whitelist, like Mercurial? In my .hgrc, I use

"""
[hostfingerprints]
# This is actually www.jcea.es; hg.jcea.es sits behind SNI
hg.jcea.es = 54:7e:a7:36:56:c6:80:41:f8:fd:d6:c0:95:44:68:a9:93:58:ca:4c
"""

PS: If I try to use the http version of my repository (), I get an error: "('invalid token', 97)".

--
Jesus Cea Avion
jcea at jcea.es - http://www.jcea.es/
jabber / xmpp:jcea at jabber.org
"Things are not so easy"
"My name is Dump, Core Dump"
"Love is putting your happiness in the happiness of another" - Leibniz

From mark at hotpy.org Wed Aug 1 10:46:35 2012
From: mark at hotpy.org (Mark Shannon)
Date: Wed, 01 Aug 2012 09:46:35 +0100
Subject: [Python-Dev] PEP 0424: A method for exposing a length hint
In-Reply-To: References: Message-ID: <5018ECEB.4060403@hotpy.org>

While the idea behind PEP 424 is sound, the text of the PEP is rather vague and missing a lot of details. There was extended discussion on the details, but none of that has appeared in the PEP yet.

So Alex, how about adding those details?

Also, the rationale is rather poor. Given that CPython is the reference implementation, PyPy should be compared to CPython, not vice-versa. Reversing PyPy and CPython in the rationale gives:

'''
Being able to pre-allocate lists based on the expected size, as estimated by __length_hint__, can be a significant optimization. PyPy has been observed to run some code slower than CPython, purely because this optimization is absent.
'''

Which is a PyPy bug report, not a rationale for a PEP ;)

Perhaps a better rationale would be something along the lines of:

'''
Adding a __length_hint__ method to the iterator protocol allows sequences, notably lists, to be initialised from iterators with only a single resize operation. This allows sequences to be initialised quickly, yet have a small growth factor, reducing memory use.
'''

Cheers,
Mark.
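To make the mechanism being debated concrete, here is a minimal, illustrative Python sketch of the PEP 424 idea: a consumer asks an iterator for an estimated length and uses it only as a pre-allocation hint. The `length_hint` helper and the `CountDown` iterator below are hypothetical illustrations written for this archive, not code from the PEP or from CPython (CPython implements the equivalent logic in C inside the list implementation, and Python 3.4 later exposed the helper as `operator.length_hint`).

```python
def length_hint(obj, default=0):
    """Best-effort size estimate: len() if available, else __length_hint__().

    This mirrors the PEP 424 protocol; an inaccurate hint is allowed,
    since callers only use it to pre-size storage.
    """
    try:
        return len(obj)
    except TypeError:
        pass
    hint = getattr(type(obj), "__length_hint__", None)
    if hint is None:
        return default
    result = hint(obj)
    return default if result is NotImplemented else result


class CountDown:
    """Toy iterator that knows roughly how many items it has left."""

    def __init__(self, n):
        self._remaining = n

    def __iter__(self):
        return self

    def __next__(self):
        if self._remaining <= 0:
            raise StopIteration
        self._remaining -= 1
        return self._remaining

    def __length_hint__(self):
        # An estimate is enough; it only needs to be cheap to compute.
        return self._remaining


it = CountDown(1000)
print(length_hint(it))   # 1000 -> a list built from `it` could allocate
                         # its backing array once, up front
print(len(list(it)))     # 1000
```

Whether pre-sizing from such a hint buys speed, memory, or both is exactly the point argued in the replies that follow.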
From fijall at gmail.com Wed Aug 1 11:39:57 2012 From: fijall at gmail.com (Maciej Fijalkowski) Date: Wed, 1 Aug 2012 11:39:57 +0200 Subject: [Python-Dev] PEP 0424: A method for exposing a length hint In-Reply-To: <5018ECEB.4060403@hotpy.org> References: <5018ECEB.4060403@hotpy.org> Message-ID: On Wed, Aug 1, 2012 at 10:46 AM, Mark Shannon wrote: > While the idea behind PEP 424 is sound, the text of the PEP is rather vague > and missing a lot of details. > There was extended discussion on the details, but none of that has appeared > in the PEP yet. > > So Alex, how about adding those details? > > Also the rationale is rather poor. > Given that CPython is the reference implementation, PyPy should be compared > to CPython, not vice-versa. > Reversing PyPy and CPython in the rationale gives: > > ''' > Being able to pre-allocate lists based on the expected size, as estimated by > __length_hint__, > > can be a significant optimization. > PyPy has been observed to run some code slower than CPython, purely because > this optimization is absent. > ''' > > Which is a PyPy bug report, not a rationale for a PEP ;) > > Perhaps a better rationale would something along the lines of: > > ''' > Adding a __length_hint__ method to the iterator protocol allows sequences, > notably lists, > to be initialised from iterators with only a single resize operation. > This allows sequences to be intialised quickly, yet have a small growth > factor, reducing memory use. > ''' > Hi Mark. It's not about saving memory. It really is about speed. Noone bothered measuring cpython with length hint disabled to compare, however we did that for pypy hence the rationale contains it. It's merely to state "this seems like an important optimization". Since the C-level code involved is rather similar (it's mostly runtime anyway), it seems reasonable to draw a conclusion that removing length hint from cpython would cause slowdown. Cheers, fijal From mark at hotpy.org Wed Aug 1 12:06:19 2012 From: mark at hotpy.org (Mark Shannon) Date: Wed, 01 Aug 2012 11:06:19 +0100 Subject: [Python-Dev] PEP 0424: A method for exposing a length hint In-Reply-To: References: <5018ECEB.4060403@hotpy.org> Message-ID: <5018FF9B.8030200@hotpy.org> Maciej Fijalkowski wrote: > On Wed, Aug 1, 2012 at 10:46 AM, Mark Shannon wrote: >> While the idea behind PEP 424 is sound, the text of the PEP is rather vague >> and missing a lot of details. >> There was extended discussion on the details, but none of that has appeared >> in the PEP yet. >> >> So Alex, how about adding those details? >> >> Also the rationale is rather poor. >> Given that CPython is the reference implementation, PyPy should be compared >> to CPython, not vice-versa. >> Reversing PyPy and CPython in the rationale gives: >> >> ''' >> Being able to pre-allocate lists based on the expected size, as estimated by >> __length_hint__, >> >> can be a significant optimization. >> PyPy has been observed to run some code slower than CPython, purely because >> this optimization is absent. >> ''' >> >> Which is a PyPy bug report, not a rationale for a PEP ;) >> >> Perhaps a better rationale would something along the lines of: >> >> ''' >> Adding a __length_hint__ method to the iterator protocol allows sequences, >> notably lists, >> to be initialised from iterators with only a single resize operation. >> This allows sequences to be intialised quickly, yet have a small growth >> factor, reducing memory use. >> ''' >> > > Hi Mark. > > It's not about saving memory. It really is about speed. 
Noone bothered > measuring cpython with length hint disabled to compare, however we did > that for pypy hence the rationale contains it. It's merely to state > "this seems like an important optimization". Since the C-level code > involved is rather similar (it's mostly runtime anyway), it seems > reasonable to draw a conclusion that removing length hint from cpython > would cause slowdown. It is not about making it faster *or* saving memory, but *both*. Without __length_hint__ there is a trade off between speed and memory use. You can have speed at the cost of memory by increasing the resize factor. With __length_hint__ you can get both speed and good memory use. Cheers, Mark From fijall at gmail.com Wed Aug 1 12:37:23 2012 From: fijall at gmail.com (Maciej Fijalkowski) Date: Wed, 1 Aug 2012 12:37:23 +0200 Subject: [Python-Dev] PEP 0424: A method for exposing a length hint In-Reply-To: <5018FF9B.8030200@hotpy.org> References: <5018ECEB.4060403@hotpy.org> <5018FF9B.8030200@hotpy.org> Message-ID: On Wed, Aug 1, 2012 at 12:06 PM, Mark Shannon wrote: > Maciej Fijalkowski wrote: >> >> On Wed, Aug 1, 2012 at 10:46 AM, Mark Shannon wrote: >>> >>> While the idea behind PEP 424 is sound, the text of the PEP is rather >>> vague >>> and missing a lot of details. >>> There was extended discussion on the details, but none of that has >>> appeared >>> in the PEP yet. >>> >>> So Alex, how about adding those details? >>> >>> Also the rationale is rather poor. >>> Given that CPython is the reference implementation, PyPy should be >>> compared >>> to CPython, not vice-versa. >>> Reversing PyPy and CPython in the rationale gives: >>> >>> ''' >>> Being able to pre-allocate lists based on the expected size, as estimated >>> by >>> __length_hint__, >>> >>> can be a significant optimization. >>> PyPy has been observed to run some code slower than CPython, purely >>> because >>> this optimization is absent. >>> ''' >>> >>> Which is a PyPy bug report, not a rationale for a PEP ;) >>> >>> Perhaps a better rationale would something along the lines of: >>> >>> ''' >>> Adding a __length_hint__ method to the iterator protocol allows >>> sequences, >>> notably lists, >>> to be initialised from iterators with only a single resize operation. >>> This allows sequences to be intialised quickly, yet have a small growth >>> factor, reducing memory use. >>> ''' >>> >> >> Hi Mark. >> >> It's not about saving memory. It really is about speed. Noone bothered >> measuring cpython with length hint disabled to compare, however we did >> that for pypy hence the rationale contains it. It's merely to state >> "this seems like an important optimization". Since the C-level code >> involved is rather similar (it's mostly runtime anyway), it seems >> reasonable to draw a conclusion that removing length hint from cpython >> would cause slowdown. > > > It is not about making it faster *or* saving memory, but *both*. > Without __length_hint__ there is a trade off between speed and memory use. > You can have speed at the cost of memory by increasing the resize factor. No, you cannot. if you allocate a huge region, you're not gonna make much of speed, because at the end you need to copy stuff anyway. Besides large allocations are slow. 
With length hint that is correct (sometimes you can do that) you have a zero-copy scenario From solipsis at pitrou.net Wed Aug 1 14:12:54 2012 From: solipsis at pitrou.net (Antoine Pitrou) Date: Wed, 1 Aug 2012 14:12:54 +0200 Subject: [Python-Dev] HTTPS repositories failing when using selfsigned certs References: <5018A94E.1050104@jcea.es> Message-ID: <20120801141254.1af9a1fe@pitrou.net> On Wed, 01 Aug 2012 05:58:06 +0200 Jesus Cea wrote: > -----BEGIN PGP SIGNED MESSAGE----- > Hash: SHA1 > > My mercurial clone is , and today I > can't create a patch from it (in the bug tracker). No explanation in > the web interface, but checking the sourcecode of the resulting page, > I see a SSL certificate failure. > > So, looks like bugs.python.org is now verifying repository certificates. > > My certificate is selfsigned and, moreover, it is behind a SNI server, > so the certificate python.org is getting is a selfsigned "jcea.es" > certificate. > > What can I do, beside buying a "real" cert?. Why don't you just use a HTTP URL? Regards Antoine. -- Software development and contracting: http://pro.pitrou.net From solipsis at pitrou.net Wed Aug 1 14:19:19 2012 From: solipsis at pitrou.net (Antoine Pitrou) Date: Wed, 1 Aug 2012 14:19:19 +0200 Subject: [Python-Dev] HTTPS repositories failing when using selfsigned certs References: <5018A94E.1050104@jcea.es> <20120801141254.1af9a1fe@pitrou.net> Message-ID: <20120801141919.5df8b1ce@pitrou.net> On Wed, 1 Aug 2012 14:12:54 +0200 Antoine Pitrou wrote: > On Wed, 01 Aug 2012 05:58:06 +0200 > Jesus Cea wrote: > > -----BEGIN PGP SIGNED MESSAGE----- > > Hash: SHA1 > > > > My mercurial clone is , and today I > > can't create a patch from it (in the bug tracker). No explanation in > > the web interface, but checking the sourcecode of the resulting page, > > I see a SSL certificate failure. > > > > So, looks like bugs.python.org is now verifying repository certificates. > > > > My certificate is selfsigned and, moreover, it is behind a SNI server, > > so the certificate python.org is getting is a selfsigned "jcea.es" > > certificate. > > > > What can I do, beside buying a "real" cert?. > > Why don't you just use a HTTP URL? Whoops, I hadn't seen the P.S. in your e-mail: > PS: If I try to use the http version of my repository > (), I get an error: "('invalid token', > 97)". In this case the issue with the http version should perhaps be figured out first. Regards Antoine. -- Software development and contracting: http://pro.pitrou.net From storchaka at gmail.com Wed Aug 1 22:52:02 2012 From: storchaka at gmail.com (Serhiy Storchaka) Date: Wed, 01 Aug 2012 23:52:02 +0300 Subject: [Python-Dev] cpython: Fix findnocoding.p and pysource.py scripts In-Reply-To: <3WnN5F1BX5zPgm@mail.python.org> References: <3WnN5F1BX5zPgm@mail.python.org> Message-ID: <501996F2.7000208@gmail.com> On 01.08.12 21:16, victor.stinner wrote: > http://hg.python.org/cpython/rev/67d36e8ddcfc > changeset: 78375:67d36e8ddcfc > user: Victor Stinner > date: Wed Aug 01 20:12:51 2012 +0200 > summary: > Fix findnocoding.p and pysource.py scripts > > I suppose that these scripts didn't work since Python 3.0. 
> - line1 = infile.readline() > - line2 = infile.readline() > + with infile: > + line1 = infile.readline() > + line2 = infile.readline() > > - if get_declaration(line1) or get_declaration(line2): > - # the file does have an encoding declaration, so trust it > - infile.close() > - return False > + if get_declaration(line1) or get_declaration(line2): > + # the file does have an encoding declaration, so trust it > + infile.close() > + return False infile.close() is unnecessary here. From victor.stinner at gmail.com Thu Aug 2 00:12:17 2012 From: victor.stinner at gmail.com (Victor Stinner) Date: Thu, 2 Aug 2012 00:12:17 +0200 Subject: [Python-Dev] cpython: Fix findnocoding.p and pysource.py scripts In-Reply-To: <501996F2.7000208@gmail.com> References: <3WnN5F1BX5zPgm@mail.python.org> <501996F2.7000208@gmail.com> Message-ID: 2012/8/1 Serhiy Storchaka : >> changeset: 78375:67d36e8ddcfc >> Fix findnocoding.p and pysource.py scripts >> > >> - line1 = infile.readline() >> - line2 = infile.readline() >> + with infile: >> + line1 = infile.readline() >> + line2 = infile.readline() >> >> - if get_declaration(line1) or get_declaration(line2): >> - # the file does have an encoding declaration, so trust it >> - infile.close() >> - return False >> + if get_declaration(line1) or get_declaration(line2): >> + # the file does have an encoding declaration, so trust it >> + infile.close() >> + return False > > infile.close() is unnecessary here. Ah yes correct, I forgot this one. The new changeset 8ace059cdffd removes it. It is not perfect, there is still a race condition in looks_like_python() (pysource.py) if KeyboardInterrupt occurs between the file is opened and the beginning of the "with infile" block, but it don't think that it is really important ;-) Victor From shanthulinux at gmail.com Thu Aug 2 09:28:36 2012 From: shanthulinux at gmail.com (Shanth Kumar) Date: Thu, 2 Aug 2012 12:58:36 +0530 Subject: [Python-Dev] Introduction Message-ID: Hi I am Shanthkumar from Bangalore, India, working for a software firm. Good to see the mailing group, as i am new to python curious to ask you people couple of queireis. -- Regards..., Shanthkumara O.D. -------------- next part -------------- An HTML attachment was scrubbed... URL: From catch-all at masklinn.net Thu Aug 2 09:44:26 2012 From: catch-all at masklinn.net (Xavier Morel) Date: Thu, 2 Aug 2012 09:44:26 +0200 Subject: [Python-Dev] Introduction In-Reply-To: References: Message-ID: <560C7568-ED90-4AA2-BABA-4C8E96FFC844@masklinn.net> On 2012-08-02, at 09:28 , Shanth Kumar wrote: > Hi I am Shanthkumar from Bangalore, India, working for a software firm. > Good to see the mailing group, as i am new to python curious to ask you > people couple of queireis. I fear that is very likely the wrong mailing list for that: python-dev is about the development of the CPython runtime and the language itself, not so much about learning it and developing in it. You probably want python-list (http://mail.python.org/mailman/listinfo/python-list) or tutor (http://mail.python.org/mailman/listinfo/tutor), or your local user group's mailing list (http://mail.python.org/mailman/listinfo/bangpypers) From roundup-admin at psf.upfronthosting.co.za Thu Aug 2 13:45:36 2012 From: roundup-admin at psf.upfronthosting.co.za (Python tracker) Date: Thu, 02 Aug 2012 11:45:36 +0000 Subject: [Python-Dev] Failed issue tracker submission Message-ID: <20120802114536.4316D1CDFE@psf.upfronthosting.co.za> There was a problem with the message you sent: This issue can't be closed until issue 15502 is closed. 
-------------- next part --------------
From: python-dev at python.org
To: report at bugs.python.org
Subject: [issue15519] [status=closed; resolution=fixed; stage=committed/rejected]
Message-Id: <3WnqMM2J6szNbw at mail.python.org>
Date: Thu, 2 Aug 2012 13:45:35 +0200 (CEST)

New changeset a1ac1e13c5a0 by Nick Coghlan in branch 'default':
Close #15519: Properly expose WindowsRegistryFinder in importlib and bring the name into line with normal import terminology. Original patch by Eric Snow
http://hg.python.org/cpython/rev/a1ac1e13c5a0

From barry at python.org Thu Aug 2 16:32:19 2012
From: barry at python.org (Barry Warsaw)
Date: Thu, 2 Aug 2012 10:32:19 -0400
Subject: [Python-Dev] [Python-checkins] cpython: Issue #15502: Bring the importlib.PathFinder docs and docstring more in line
In-Reply-To: <3Wns631KFWzPhs@mail.python.org>
References: <3Wns631KFWzPhs@mail.python.org>
Message-ID: <20120802103219.19acba49@resist.wooz.org>

On Aug 02, 2012, at 03:04 PM, nick.coghlan wrote:

>http://hg.python.org/cpython/rev/1f8351cf00f3
>changeset: 78382:1f8351cf00f3
>user: Nick Coghlan
>date: Thu Aug 02 23:03:58 2012 +1000
>summary:
> Issue #15502: Bring the importlib.PathFinder docs and docstring more in line with the new import system documentation, and fix various parts of the new docs that weren't quite right given PEP 420 or were otherwise a bit misleading. Also note the key terminology problem still being discussed in the issue
>
>files:
> Doc/library/importlib.rst | 15 +-
> Doc/reference/import.rst | 221 +-
> Lib/importlib/_bootstrap.py | 8 +-
> Python/importlib.h | 2583 +++++++++++-----------
> 4 files changed, 1449 insertions(+), 1378 deletions(-)
>
>
>diff --git a/Doc/reference/import.rst b/Doc/reference/import.rst
>--- a/Doc/reference/import.rst
>+++ b/Doc/reference/import.rst
>@@ -69,7 +75,7 @@
>
> It's important to keep in mind that all packages are modules, but not all
> modules are packages. Or put another way, packages are just a special kind of
>-module. Specifically, any module that contains an ``__path__`` attribute is
>+module. Specifically, any module that contains a ``__path__`` attribute is

I find this change hilarious! Is it "an under-under path" or "a dunder path"?
Personally, I think word "dunder" should never be used unless it's followed by the word "mifflin", but that might be a bit too regional. Or maybe too old skool (yeah, I know all the kids love the dunder). I suppose I don't care that much, but it would be useful to have some consistency, like whether we use American or British spellings. >@@ -90,7 +96,7 @@ > packages are traditional packages as they existed in Python 3.2 and earlier. > A regular package is typically implemented as a directory containing an > ``__init__.py`` file. When a regular package is imported, this >-``__init__.py`` file is implicitly imported, and the objects it defines are >+``__init__.py`` file is implicitly executed, and the objects it defines are Perhaps "loaded" instead of either "imported" or "executed"? >@@ -107,9 +113,9 @@ > three/ > __init__.py > >-Importing ``parent.one`` will implicitly import ``parent/__init__.py`` and >+Importing ``parent.one`` will implicitly execute ``parent/__init__.py`` and Same. > ``parent/one/__init__.py``. Subsequent imports of ``parent.two`` or >-``parent.three`` will import ``parent/two/__init__.py`` and >+``parent.three`` will execute ``parent/two/__init__.py`` and And here. > ``parent/three/__init__.py`` respectively. > > >@@ -128,6 +134,12 @@ > objects on the file system; they may be virtual modules that have no concrete > representation. > >+Namespace packages do not use an ordinary list for their ``__path__`` >+attribute. They instead use a custom iterable type which will automatically >+perform a new search for package portions on the next import attempt within >+that package if the path of their parent package (or :data:`sys.path` for a >+top level package) changes. Nice addition. I can't help but suggest a slight rephrasing: Namespace packages use a custom iterable type for their ``__path__`` attribute, so that changes in their parent package's path (or :data:`sys.path` for a top level package) are handled automatically. >@@ -172,14 +184,18 @@ > :exc:`ImportError` is raised. If the module name is missing, Python will > continue searching for the module. > >-:data:`sys.modules` is writable. Deleting a key will not destroy the >-associated module, but it will invalidate the cache entry for the named >-module, causing Python to search anew for the named module upon its next >-import. Beware though, because if you keep a reference to the module object, >+:data:`sys.modules` is writable. Deleting a key may not destroy the >+associated module (as other modules may hold references to it), s/as other modules may hold references to it/due to circular references/ >@@ -369,7 +404,7 @@ > > Here are the exact rules used: > >- * If the module has an ``__loader__`` and that loader has a >+ * If the module has a ``__loader__`` and that loader has a > :meth:`module_repr()` method, call it with a single argument, which is the > module object. The value returned is used as the module's repr. > >@@ -377,10 +412,10 @@ > and discarded, and the calculation of the module's repr continues as if > :meth:`module_repr()` did not exist. > >- * If the module has an ``__file__`` attribute, this is used as part of the >+ * If the module has a ``__file__`` attribute, this is used as part of the > module's repr. > >- * If the module has no ``__file__`` but does have an ``__loader__``, then the >+ * If the module has no ``__file__`` but does have a ``__loader__``, then the > loader's repr is used as part of the module's repr. > > * Otherwise, just use the module's ``__name__`` in the repr. 
C'mon Michael Scott, make it stop! Do you remember the episode where Dwight and Jim team up to... Oh, I was in stitches! >@@ -430,15 +467,20 @@ > path`, which contains a list of :term:`path entries `. Each path > entry names a location to search for modules. > >-Path entries may name file system locations, and by default the :term:`path >-importer` knows how to provide traditional file system imports. It implements >-all the semantics for finding modules on the file system, handling special >-file types such as Python source code (``.py`` files), Python byte code >-(``.pyc`` and ``.pyo`` files) and shared libraries (e.g. ``.so`` files). >+The path importer itself doesn't know how to import anything. Instead, it >+traverses the individual path entries, associating each of them with a >+path entry finder that knows how to handle that particular kind of path. >+ >+The default set of path entry finders implement all the semantics for finding >+modules on the file system, handling special file types such as Python source >+code (``.py`` files), Python byte code (``.pyc`` and ``.pyo`` files) and >+shared libraries (e.g. ``.so`` files). When supported by the :mod:`zipimport` >+module in the standard library, the default path entry finders also handle >+loading all of these file types (other than shared libraries) from zipfiles. s/zipfiles/zip files/ > > Path entries need not be limited to file system locations. They can refer to >-the contents of zip files, URLs, database queries, or any other location that >-can be specified as a string. >+the URLs, database queries, or any other location that can be specified as a >+string. s/the URLs/URLs/ > > The :term:`path importer` provides additional hooks and protocols so that you > can extend and customize the types of searchable path entries. For example, >@@ -534,29 +578,59 @@ > Path entry finder protocol > -------------------------- > >-Path entry finders support the same :meth:`find_module()` method that meta >-path finders support, however path entry finder's :meth:`find_module()` >-methods are never called with a ``path`` argument. >- >-The :meth:`find_module()` method on path entry finders is deprecated though, >-and instead path entry finders should implement the :meth:`find_loader()` >-method. If it exists on the path entry finder, :meth:`find_loader()` will >-always be called instead of :meth:`find_module()`. >+In order to support imports of modules and initialized packages and also to >+contribute portions to namespace packages, path entry finders must implement >+the :meth:`find_loader()` method. This is an improvement, because it de-emphasizes the deprecated API. Perhaps add a footnote that describes the legacy API? > :meth:`find_loader()` takes one argument, the fully qualified name of the > module being imported. :meth:`find_loader()` returns a 2-tuple where the > first item is the loader and the second item is a namespace :term:`portion`. > When the first item (i.e. the loader) is ``None``, this means that while the >-path entry finder does not have a loader for the named module, it knows that >-the :term:`path entry` contributes to a namespace portion for the named >-module. This will almost always be the case where Python is asked to import a >-:term:`namespace package` that has no physical presence on the file system. >-When a path entry finder returns ``None`` for the loader, the second item of >-the 2-tuple return value must be a sequence, although it can be empty. 
>+path entry finder does not have a loader for the named module, it knows that the >+path entry contributes to a namespace portion for the named module. This will >+almost always be the case where Python is asked to import a namespace package >+that has no physical presence on the file system. When a path entry finder >+returns ``None`` for the loader, the second item of the 2-tuple return value >+must be a sequence, although it can be empty. > > If :meth:`find_loader()` returns a non-``None`` loader value, the portion is >-ignored and the loader is returned from the :term:`path importer`, terminating >-the :term:`import path` search. >+ignored and the loader is returned from the path importer, terminating the >+search through the path entries. >+ >+For backwards compatibility with other implementations of the import >+protocol, many path entry finders also support the same, >+traditional :meth:`find_module()` method that meta path finders support. >+However path entry finder :meth:`find_module()` methods are never called >+with a ``path`` argument (they are expected to record the appropriate >+path information from the initial call to the path hook). >+ >+The :meth:`find_module()` method on path entry finders is deprecated, >+as it does not allow the path entry finder to contribute portions to >+namespace packages. Instead path entry finders should implement the >+:meth:`find_loader()` method as described above. If it exists on the path >+entry finder, the import system will always call :meth:`find_loader()` >+in preference to :meth:`find_module()`. The above is what could be moved to the footnote. Cheers, -Barry From chris.jerdonek at gmail.com Thu Aug 2 16:41:05 2012 From: chris.jerdonek at gmail.com (Chris Jerdonek) Date: Thu, 2 Aug 2012 07:41:05 -0700 Subject: [Python-Dev] [Python-checkins] cpython: Issue #15502: Bring the importlib.PathFinder docs and docstring more in line In-Reply-To: <20120802103219.19acba49@resist.wooz.org> References: <3Wns631KFWzPhs@mail.python.org> <20120802103219.19acba49@resist.wooz.org> Message-ID: On Thu, Aug 2, 2012 at 7:32 AM, Barry Warsaw wrote: > On Aug 02, 2012, at 03:04 PM, nick.coghlan wrote: > >>-module. Specifically, any module that contains an ``__path__`` attribute is >>+module. Specifically, any module that contains a ``__path__`` attribute is > > I find this change hilarious! Is it "an under-under path" or "a dunder path"? Personally, I just say "path" (same with __init__, __main__, etc) and rely on context. --Chris From solipsis at pitrou.net Thu Aug 2 16:43:12 2012 From: solipsis at pitrou.net (Antoine Pitrou) Date: Thu, 2 Aug 2012 16:43:12 +0200 Subject: [Python-Dev] cpython: Issue #15502: Bring the importlib.PathFinder docs and docstring more in line References: <3Wns631KFWzPhs@mail.python.org> <20120802103219.19acba49@resist.wooz.org> Message-ID: <20120802164312.6e337335@pitrou.net> On Thu, 2 Aug 2012 07:41:05 -0700 Chris Jerdonek wrote: > On Thu, Aug 2, 2012 at 7:32 AM, Barry Warsaw wrote: > > On Aug 02, 2012, at 03:04 PM, nick.coghlan wrote: > > > >>-module. Specifically, any module that contains an ``__path__`` attribute is > >>+module. Specifically, any module that contains a ``__path__`` attribute is > > > > I find this change hilarious! Is it "an under-under path" or "a dunder path"? > > Personally, I just say "path" (same with __init__, __main__, etc) and > rely on context. +1. Regards Antoine. 
-- Software development and contracting: http://pro.pitrou.net From brett at python.org Thu Aug 2 17:35:55 2012 From: brett at python.org (Brett Cannon) Date: Thu, 2 Aug 2012 11:35:55 -0400 Subject: [Python-Dev] [Python-checkins] cpython: Issue #15502: Bring the importlib ABCs into line with the current state of the In-Reply-To: <3Wnpx25v1RzPhl@mail.python.org> References: <3Wnpx25v1RzPhl@mail.python.org> Message-ID: Mostly as a note to myself (although if someone else wants to change this, then great), PathEntryFinder.find_loader() specifies self while the other docs don't bother to list it. And I just realized that the other classes defined in the importlib docs don't list their superclasses like MetaPathFinder and PathEntryFinder now do. On Thu, Aug 2, 2012 at 7:26 AM, nick.coghlan wrote: > http://hg.python.org/cpython/rev/184700df5b6a > changeset: 78379:184700df5b6a > parent: 78376:8ace059cdffd > user: Nick Coghlan > date: Thu Aug 02 21:26:03 2012 +1000 > summary: > Issue #15502: Bring the importlib ABCs into line with the current state > of the import protocols given PEP 420. Original patch by Eric Snow. > > files: > Doc/library/importlib.rst | 77 +++++--- > Lib/importlib/abc.py | 82 ++++++--- > Lib/test/test_importlib/source/test_abc_loader.py | 23 +- > Lib/test/test_importlib/test_abc.py | 7 +- > Misc/NEWS | 3 + > 5 files changed, 127 insertions(+), 65 deletions(-) > > > diff --git a/Doc/library/importlib.rst b/Doc/library/importlib.rst > --- a/Doc/library/importlib.rst > +++ b/Doc/library/importlib.rst > @@ -125,32 +125,49 @@ > > .. class:: Finder > > - An abstract base class representing a :term:`finder`. > - See :pep:`302` for the exact definition for a finder. > - > - .. method:: find_loader(self, fullname): > - > - An abstract method for finding a :term:`loader` for the specified > - module. Returns a 2-tuple of ``(loader, portion)`` where portion > is a > - sequence of file system locations contributing to part of a > namespace > - package. The sequence may be empty. When present, > `find_loader()` is > - preferred over `find_module()`. > - > - .. versionadded: 3.3 > - > - .. method:: find_module(fullname, path=None) > - > - An abstract method for finding a :term:`loader` for the specified > - module. If the :term:`finder` is found on :data:`sys.meta_path` > and the > - module to be searched for is a subpackage or module then *path* > will > - be the value of :attr:`__path__` from the parent package. If a > loader > - cannot be found, ``None`` is returned. > + An abstract base class representing a :term:`finder`. Finder > + implementations should derive from (or register with) the more specific > + :class:`MetaPathFinder` or :class:`PathEntryFinder` ABCs. > > .. method:: invalidate_caches() > > - An optional method which, when called, should invalidate any > internal > - cache used by the finder. Used by :func:`invalidate_caches()` when > - invalidating the caches of all cached finders. > + An optional method which, when called, should invalidate any > internal > + cache used by the finder. Used by :func:`invalidate_caches()` when > + invalidating the caches of all cached finders. > + > + .. versionchanged:: 3.3 > + The API signatures for meta path finders and path entry finders > + were separated by PEP 420. Accordingly, the Finder ABC no > + longer requires implementation of a ``find_module()`` method. > + > + > +.. class:: MetaPathFinder(Finder) > + > + An abstract base class representing a :term:`meta path finder`. > + > + .. versionadded:: 3.3 > + > + .. 
method:: find_module(fullname, path) > + > + An abstract method for finding a :term:`loader` for the specified > + module. If this is a top-level import, *path* will be ``None``. > + Otheriwse, this is a search for a subpackage or module and *path* > + will be the value of :attr:`__path__` from the parent > + package. If a loader cannot be found, ``None`` is returned. > + > + > +.. class:: PathEntryFinder(Finder) > + > + An abstract base class representing a :term:`path entry finder`. > + > + .. versionadded:: 3.3 > + > + .. method:: find_loader(self, fullname): > + > + An abstract method for finding a :term:`loader` for the specified > + module. Returns a 2-tuple of ``(loader, portion)`` where portion > is a > + sequence of file system locations contributing to part of a > namespace > + package. The sequence may be empty. > > > .. class:: Loader > @@ -569,8 +586,8 @@ > > An :term:`importer` for built-in modules. All known built-in modules > are > listed in :data:`sys.builtin_module_names`. This class implements the > - :class:`importlib.abc.Finder` and :class:`importlib.abc.InspectLoader` > - ABCs. > + :class:`importlib.abc.MetaPathFinder` and > + :class:`importlib.abc.InspectLoader` ABCs. > > Only class methods are defined by this class to alleviate the need for > instantiation. > @@ -579,8 +596,8 @@ > .. class:: FrozenImporter > > An :term:`importer` for frozen modules. This class implements the > - :class:`importlib.abc.Finder` and :class:`importlib.abc.InspectLoader` > - ABCs. > + :class:`importlib.abc.MetaPathFinder` and > + :class:`importlib.abc.InspectLoader` ABCs. > > Only class methods are defined by this class to alleviate the need for > instantiation. > @@ -589,7 +606,7 @@ > .. class:: PathFinder > > :term:`Finder` for :data:`sys.path`. This class implements the > - :class:`importlib.abc.Finder` ABC. > + :class:`importlib.abc.MetaPathFinder` ABC. > > This class does not perfectly mirror the semantics of > :keyword:`import` in > terms of :data:`sys.path`. No implicit path hooks are assumed for > @@ -616,8 +633,8 @@ > > .. class:: FileFinder(path, \*loader_details) > > - A concrete implementation of :class:`importlib.abc.Finder` which caches > - results from the file system. > + A concrete implementation of :class:`importlib.abc.PathEntryFinder` > which > + caches results from the file system. > > The *path* argument is the directory for which the finder is in charge > of > searching. > diff --git a/Lib/importlib/abc.py b/Lib/importlib/abc.py > --- a/Lib/importlib/abc.py > +++ b/Lib/importlib/abc.py > @@ -23,6 +23,61 @@ > abstract_cls.register(frozen_cls) > > > +class Finder(metaclass=abc.ABCMeta): > + > + """Common abstract base class for import finders. > + > + Finder implementations should derive from the more specific > + MetaPathFinder or PathEntryFinder ABCs rather than directly from > Finder. > + """ > + > + def find_module(self, fullname, path=None): > + """An optional legacy method that should find a module. > + The fullname is a str and the optional path is a str or None. > + Returns a Loader object. > + > + The path finder will use this method only if find_loader() does > + not exist. It may optionally be implemented for compatibility > + with legacy third party reimplementations of the import system. > + """ > + raise NotImplementedError > + > + # invalidate_caches() is a completely optional method, so no default > + # implementation is provided. See the docs for details. 
> + > + > +class MetaPathFinder(Finder): > + > + """Abstract base class for import finders on sys.meta_path.""" > + > + @abc.abstractmethod > + def find_module(self, fullname, path): > + """Abstract method which when implemented should find a module. > + The fullname is a str and the path is a str or None. > + Returns a Loader object. > + """ > + raise NotImplementedError > + > +_register(MetaPathFinder, machinery.BuiltinImporter, > machinery.FrozenImporter, > + machinery.PathFinder) > + > + > +class PathEntryFinder(Finder): > + > + """Abstract base class for path entry finders used by PathFinder.""" > + > + @abc.abstractmethod > + def find_loader(self, fullname): > + """Abstract method which when implemented returns a module loader. > + The fullname is a str. Returns a 2-tuple of (Loader, portion) > where > + portion is a sequence of file system locations contributing to > part of > + a namespace package. The sequence may be empty. > + """ > + raise NotImplementedError > + > +_register(PathEntryFinder, machinery.FileFinder) > + > + > class Loader(metaclass=abc.ABCMeta): > > """Abstract base class for import loaders.""" > @@ -40,33 +95,6 @@ > raise NotImplementedError > > > -class Finder(metaclass=abc.ABCMeta): > - > - """Abstract base class for import finders.""" > - > - @abc.abstractmethod > - def find_loader(self, fullname): > - """Abstract method which when implemented returns a module loader. > - The fullname is a str. Returns a 2-tuple of (Loader, portion) > where > - portion is a sequence of file system locations contributing to > part of > - a namespace package. The sequence may be empty. When present, > - `find_loader()` is preferred over `find_module()`. > - """ > - raise NotImplementedError > - > - @abc.abstractmethod > - def find_module(self, fullname, path=None): > - """Abstract method which when implemented should find a module. > - The fullname is a str and the optional path is a str or None. > - Returns a Loader object. This method is only called if > - `find_loader()` is not present. 
> - """ > - raise NotImplementedError > - > -_register(Finder, machinery.BuiltinImporter, machinery.FrozenImporter, > - machinery.PathFinder, machinery.FileFinder) > - > - > class ResourceLoader(Loader): > > """Abstract base class for loaders which can return data from their > diff --git a/Lib/test/test_importlib/source/test_abc_loader.py > b/Lib/test/test_importlib/source/test_abc_loader.py > --- a/Lib/test/test_importlib/source/test_abc_loader.py > +++ b/Lib/test/test_importlib/source/test_abc_loader.py > @@ -778,23 +778,32 @@ > expect = io.IncrementalNewlineDecoder(None, True).decode(source) > self.assertEqual(mock.get_source(name), expect) > > + > class AbstractMethodImplTests(unittest.TestCase): > > """Test the concrete abstractmethod implementations.""" > > + class MetaPathFinder(abc.MetaPathFinder): > + def find_module(self, fullname, path): > + super().find_module(fullname, path) > + > + class PathEntryFinder(abc.PathEntryFinder): > + def find_module(self, _): > + super().find_module(_) > + > + def find_loader(self, _): > + super().find_loader(_) > + > + class Finder(abc.Finder): > + def find_module(self, fullname, path): > + super().find_module(fullname, path) > + > class Loader(abc.Loader): > def load_module(self, fullname): > super().load_module(fullname) > def module_repr(self, module): > super().module_repr(module) > > - class Finder(abc.Finder): > - def find_module(self, _): > - super().find_module(_) > - > - def find_loader(self, _): > - super().find_loader(_) > - > class ResourceLoader(Loader, abc.ResourceLoader): > def get_data(self, _): > super().get_data(_) > diff --git a/Lib/test/test_importlib/test_abc.py > b/Lib/test/test_importlib/test_abc.py > --- a/Lib/test/test_importlib/test_abc.py > +++ b/Lib/test/test_importlib/test_abc.py > @@ -30,11 +30,16 @@ > "{0} is not a superclass of {1}".format(superclass, > self.__test)) > > > -class Finder(InheritanceTests, unittest.TestCase): > +class MetaPathFinder(InheritanceTests, unittest.TestCase): > > + superclasses = [abc.Finder] > subclasses = [machinery.BuiltinImporter, machinery.FrozenImporter, > machinery.PathFinder] > > +class PathEntryFinder(InheritanceTests, unittest.TestCase): > + > + superclasses = [abc.Finder] > + subclasses = [machinery.FileFinder] > > class Loader(InheritanceTests, unittest.TestCase): > > diff --git a/Misc/NEWS b/Misc/NEWS > --- a/Misc/NEWS > +++ b/Misc/NEWS > @@ -72,6 +72,9 @@ > Library > ------- > > +- Issue #15502: Bring the importlib ABCs into line with the current state > + of the import protocols given PEP 420. Original patch by Eric Snow. > + > - Issue #15499: Launching a webbrowser in Unix used to sleep for a few > seconds. Original patch by Anton Barkovsky. > > > -- > Repository URL: http://hg.python.org/cpython > > _______________________________________________ > Python-checkins mailing list > Python-checkins at python.org > http://mail.python.org/mailman/listinfo/python-checkins > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From raymond.hettinger at gmail.com Thu Aug 2 17:43:38 2012 From: raymond.hettinger at gmail.com (Raymond Hettinger) Date: Thu, 2 Aug 2012 08:43:38 -0700 Subject: [Python-Dev] PEP 0424: A method for exposing a length hint In-Reply-To: <5018ECEB.4060403@hotpy.org> References: <5018ECEB.4060403@hotpy.org> Message-ID: <7F1504FC-DD46-4C8A-B5F4-DA200C179353@gmail.com> On Aug 1, 2012, at 1:46 AM, Mark Shannon wrote: > > ''' > Being able to pre-allocate lists based on the expected size, as estimated by __length_hint__, > can be a significant optimization. > PyPy has been observed to run some code slower than CPython, purely because this optimization is absent. > ''' > > Which is a PyPy bug report, not a rationale for a PEP ;) Alex's rationale is correct and well expressed. Your proposed revision reflects fuzzy thinking about why __length_hint__ is useful. Regardless of resizing growth factors, it is *always* helpful to know how much memory to allocate. Calls to the allocators (especially for large blocks) and possible the recopying of data should be avoided when possible. Raymond -------------- next part -------------- An HTML attachment was scrubbed... URL: From stephen at xemacs.org Thu Aug 2 17:49:28 2012 From: stephen at xemacs.org (Stephen J. Turnbull) Date: Fri, 03 Aug 2012 00:49:28 +0900 Subject: [Python-Dev] [Python-checkins] cpython: Issue #15502: Bring the importlib.PathFinder docs and docstring more in line In-Reply-To: References: <3Wns631KFWzPhs@mail.python.org> <20120802103219.19acba49@resist.wooz.org> Message-ID: <876291l2ev.fsf@uwakimon.sk.tsukuba.ac.jp> Chris Jerdonek writes: > On Thu, Aug 2, 2012 at 7:32 AM, Barry Warsaw wrote: > > I find this change hilarious! Is it "an under-under path" or "a > > dunder path"? > > Personally, I just say "path" (same with __init__, __main__, etc) and > rely on context. I think that's what the Chicago Manual of Style recommends. Now-needing-surgery-for-my-cheek-hernia-ly y'rs, From brett at python.org Thu Aug 2 23:53:46 2012 From: brett at python.org (Brett Cannon) Date: Thu, 2 Aug 2012 17:53:46 -0400 Subject: [Python-Dev] [Python-checkins] cpython: Update the What's New details for importlib based on doc/ABC changes. In-Reply-To: <3Wp4n324DJzPhs@mail.python.org> References: <3Wp4n324DJzPhs@mail.python.org> Message-ID: Sorry about accidentally committing another minor cleanup to importlib in this commit; thought I had already committed it separately. On Thu, Aug 2, 2012 at 5:50 PM, brett.cannon wrote: > http://hg.python.org/cpython/rev/083534cd7874 > changeset: 78388:083534cd7874 > user: Brett Cannon > date: Thu Aug 02 17:50:06 2012 -0400 > summary: > Update the What's New details for importlib based on doc/ABC changes. > > files: > Doc/library/importlib.rst | 10 ++++++---- > Doc/whatsnew/3.3.rst | 24 +++++++++++++++++++----- > 2 files changed, 25 insertions(+), 9 deletions(-) > > > diff --git a/Doc/library/importlib.rst b/Doc/library/importlib.rst > --- a/Doc/library/importlib.rst > +++ b/Doc/library/importlib.rst > @@ -141,9 +141,10 @@ > longer requires implementation of a ``find_module()`` method. > > > -.. class:: MetaPathFinder(Finder) > +.. class:: MetaPathFinder > > - An abstract base class representing a :term:`meta path finder`. > + An abstract base class representing a :term:`meta path finder` and > + inheriting from :class:`Finder`. > > .. versionadded:: 3.3 > > @@ -156,9 +157,10 @@ > package. If a loader cannot be found, ``None`` is returned. > > > -.. class:: PathEntryFinder(Finder) > +.. 
class:: PathEntryFinder > > - An abstract base class representing a :term:`path entry finder`. > + An abstract base class representing a :term:`path entry finder` and > + inheriting from :class:`Finder`. > > .. versionadded:: 3.3 > > diff --git a/Doc/whatsnew/3.3.rst b/Doc/whatsnew/3.3.rst > --- a/Doc/whatsnew/3.3.rst > +++ b/Doc/whatsnew/3.3.rst > @@ -519,7 +519,15 @@ > making the import statement work. That means the various importers that > were > once implicit are now fully exposed as part of the :mod:`importlib` > package. > > -In terms of finders, * :class:`importlib.machinery.FileFinder` exposes the > +The abstract base classes defined in :mod:`importlib.abc` have been > expanded > +to properly delineate between :term:`meta path finders ` > +and :term:`path entry finders ` by introducing > +:class:`importlib.abc.MetaPathFinder` and > +:class:`importlib.abc.PathEntryFinder`, respectively. The old ABC of > +:class:`importlib.abc.Finder` is now only provided for > backwards-compatibility > +and does not enforce any method requirements. > + > +In terms of finders, :class:`importlib.machinery.FileFinder` exposes the > mechanism used to search for source and bytecode files of a module. > Previously > this class was an implicit member of :attr:`sys.path_hooks`. > > @@ -547,10 +555,10 @@ > > Beyond the expanse of what :mod:`importlib` now exposes, there are other > visible changes to import. The biggest is that :attr:`sys.meta_path` and > -:attr:`sys.path_hooks` now store all of the finders used by import > explicitly. > -Previously the finders were implicit and hidden within the C code of > import > -instead of being directly exposed. This means that one can now easily > remove or > -change the order of the various finders to fit one's needs. > +:attr:`sys.path_hooks` now store all of the meta path finders and path > entry > +hooks used by import. Previously the finders were implicit and hidden > within > +the C code of import instead of being directly exposed. This means that > one can > +now easily remove or change the order of the various finders to fit one's > needs. > > Another change is that all modules have a ``__loader__`` attribute, > storing the > loader used to create the module. :pep:`302` has been updated to make this > @@ -1733,6 +1741,12 @@ > both the modification time and size of the source file the bytecode > file was > compiled from. > > +* :class:`importlib.abc.Finder` no longer specifies a `find_module()` > abstract > + method that must be implemented. If you were relying on subclasses to > + implement that method, make sure to check for the method's existence > first. > + You will probably want to check for `find_loader()` first, though, in > the > + case of working with :term:`path entry finders `. > + > * :mod:`pkgutil` has been converted to use :mod:`importlib` internally. > This > eliminates many edge cases where the old behaviour of the PEP 302 import > emulation failed to match the behaviour of the real import system. The > > -- > Repository URL: http://hg.python.org/cpython > > _______________________________________________ > Python-checkins mailing list > Python-checkins at python.org > http://mail.python.org/mailman/listinfo/python-checkins > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ncoghlan at gmail.com Fri Aug 3 07:13:50 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 3 Aug 2012 15:13:50 +1000 Subject: [Python-Dev] [Python-checkins] cpython: Issue #15502: Bring the importlib.PathFinder docs and docstring more in line In-Reply-To: References: <3Wns631KFWzPhs@mail.python.org> <20120802103219.19acba49@resist.wooz.org> Message-ID: On Fri, Aug 3, 2012 at 12:41 AM, Chris Jerdonek wrote: > On Thu, Aug 2, 2012 at 7:32 AM, Barry Warsaw wrote: >> On Aug 02, 2012, at 03:04 PM, nick.coghlan wrote: >> >>>-module. Specifically, any module that contains an ``__path__`` attribute is >>>+module. Specifically, any module that contains a ``__path__`` attribute is >> >> I find this change hilarious! Is it "an under-under path" or "a dunder path"? > > Personally, I just say "path" (same with __init__, __main__, etc) and > rely on context. Yup. That's why I would write "a __path__ attribute", but "an __init__ method". If I have to be explicit when speaking, I'll typically say either "dunder " or "double underscore ". I find that's only needed if there is a name class between the double underscore name and a non-prefixed variable though, which doesn't happen very often. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From dheerajgoswami at yahoo.com Fri Aug 3 20:53:05 2012 From: dheerajgoswami at yahoo.com (Dheeraj Goswami) Date: Fri, 3 Aug 2012 11:53:05 -0700 (PDT) Subject: [Python-Dev] Question Message-ID: <1344019985.56562.YahooMailNeo@web122406.mail.ne1.yahoo.com> Hi Python Gurus, I am an experienced Java developer and have been working on it for about 8 years. I need to build a web 2.0/AJAX based website/application and I am thinking to build it in Django which means I need to learn and move to Python. Please advise is it really worth moving from Java world into Python world? Will it really be fun to build in Django. PS: I dont like magics so I will never use Ruby/Rails as I am not that kind of guy. cheers /Dg -------------- next part -------------- An HTML attachment was scrubbed... URL: From python at mrabarnett.plus.com Fri Aug 3 21:15:12 2012 From: python at mrabarnett.plus.com (MRAB) Date: Fri, 03 Aug 2012 20:15:12 +0100 Subject: [Python-Dev] Question In-Reply-To: <1344019985.56562.YahooMailNeo@web122406.mail.ne1.yahoo.com> References: <1344019985.56562.YahooMailNeo@web122406.mail.ne1.yahoo.com> Message-ID: <501C2340.1020803@mrabarnett.plus.com> On 03/08/2012 19:53, Dheeraj Goswami wrote: > Hi Python Gurus, > I am an experienced Java developer and have been working on it for about > 8 years. I need to build a web 2.0/AJAX based website/application and I > am thinking to build it in Django which means I need to learn and move > to Python. > > Please advise is it really worth moving from Java world into Python > world? Will it really be fun to build in Django. > > PS: I dont like magics so I will never use Ruby/Rails as I am not that > kind of guy. > This list is for the development _of_ Python, not development _with_ Python. Please post to python-list at python.org instead. 
From rosuav at gmail.com Fri Aug 3 21:20:25 2012 From: rosuav at gmail.com (Chris Angelico) Date: Sat, 4 Aug 2012 05:20:25 +1000 Subject: [Python-Dev] Question In-Reply-To: <1344019985.56562.YahooMailNeo@web122406.mail.ne1.yahoo.com> References: <1344019985.56562.YahooMailNeo@web122406.mail.ne1.yahoo.com> Message-ID: On Sat, Aug 4, 2012 at 4:53 AM, Dheeraj Goswami wrote: > Hi Python Gurus, > I am an experienced Java developer and have been working on it for about 8 > years. I need to build a web 2.0/AJAX based website/application and I am > thinking to build it in Django which means I need to learn and move to > Python. > > Please advise is it really worth moving from Java world into Python world? > Will it really be fun to build in Django. This list is more about the development _of_ Python rather than development _with_ Python. You'll get more responses on python-list at python.org instead. But I would say that yes, it IS worth moving from Java to Python. If nothing else, learning more languages is always advantageous! Chris Angelico From chris.jerdonek at gmail.com Fri Aug 3 22:46:09 2012 From: chris.jerdonek at gmail.com (Chris Jerdonek) Date: Fri, 3 Aug 2012 13:46:09 -0700 Subject: [Python-Dev] issue 15510: textwrap.wrap() returning empty list Message-ID: I would like people's opinions on issue 15510, specifically whether it should be addressed and in what versions: http://bugs.python.org/issue15510 Jes?s suggested that I ask. The issue relates to textwrap.wrap()'s behavior when wrapping strings that contain no non-whitespace characters -- in particular the empty string. Thanks, --Chris From python at mrabarnett.plus.com Fri Aug 3 23:51:33 2012 From: python at mrabarnett.plus.com (MRAB) Date: Fri, 03 Aug 2012 22:51:33 +0100 Subject: [Python-Dev] issue 15510: textwrap.wrap() returning empty list In-Reply-To: References: Message-ID: <501C47E5.9090600@mrabarnett.plus.com> On 03/08/2012 21:46, Chris Jerdonek wrote: > I would like people's opinions on issue 15510, specifically whether it > should be addressed and in what versions: > > http://bugs.python.org/issue15510 > > Jes?s suggested that I ask. The issue relates to textwrap.wrap()'s > behavior when wrapping strings that contain no non-whitespace > characters -- in particular the empty string. > If you don't want the empty list, you could just write: wrap(text) or [''] From "ja...py" at farowl.co.uk Sat Aug 4 01:34:16 2012 From: "ja...py" at farowl.co.uk (Jeff Allen) Date: Sat, 04 Aug 2012 00:34:16 +0100 Subject: [Python-Dev] Understanding the buffer API Message-ID: <501C5FF8.10209@farowl.co.uk> I'm implementing the buffer API and some of memoryview for Jython. I have read with interest, and mostly understood, the discussion in Issue #10181 that led to the v3.3 re-implementation of memoryview and much-improved documentation of the buffer API. Although Jython is targeting v2.7 at the moment, and 1-D bytes (there's no Jython NumPy), I'd like to lay a solid foundation that benefits from the recent CPython work. I hope that some of the complexity in memoryview stems from legacy considerations I don't have to deal with in Jython. I am puzzled that PEP 3118 makes some specifications that seem unnecessary and complicate the implementation. Would those who know the API inside out answer a few questions? My understanding is this: When a consumer requests a buffer from the exporter it specifies using flags how it intends to navigate it. If the buffer actually needs more apparatus than the consumer proposes, this raises an exception. 
If the buffer needs less apparatus than the consumer proposes, the exporter has to supply what was asked for. For example, if the consumer sets PyBUF_STRIDES, and the buffer can only be navigated by using suboffsets (PIL-style) this raises an exception. Alternatively, if the consumer sets PyBUF_STRIDES, and the buffer is just a simple byte array, the exporter has to supply shape and strides arrays (with trivial values), since the consumer is going to use those arrays. Is there any harm is supplying shape and strides when they were not requested? The PEP says: "PyBUF_ND ... If this is not given then shape will be NULL". It doesn't stipulate that strides will be null if PyBUF_STRIDES is not given, but the library documentation says so. suboffsets is different since even when requested, it will be null if not needed. Similar, but simpler, the PEP says "PyBUF_FORMAT ... If format is not explicitly requested then the format must be returned as NULL (which means "B", or unsigned bytes)". What would be the harm in returning "B"? One place where this really matters is in the implementation of memoryview. PyMemoryView requests a buffer with the flags PyBUF_FULL_RO, so even a simple byte buffer export will come with shape, strides and format. A consumer (of the memoryview's buffer API) might specify PyBUF_SIMPLE: according to the PEP I can't simply give it the original buffer since required fields (that the consumer will presumably not access) are not NULL. In practice, I'd like to: what could possibly go wrong? Jeff Allen From stefan at bytereef.org Sat Aug 4 11:11:50 2012 From: stefan at bytereef.org (Stefan Krah) Date: Sat, 4 Aug 2012 11:11:50 +0200 Subject: [Python-Dev] Understanding the buffer API In-Reply-To: <501C5FF8.10209@farowl.co.uk> References: <501C5FF8.10209@farowl.co.uk> Message-ID: <20120804091150.GA12337@sleipnir.bytereef.org> Jeff Allen wrote: > I'd like to lay a solid foundation that benefits from the > recent CPython work. I hope that some of the complexity in > memoryview stems from legacy considerations I don't have to deal > with in Jython. I'm afraid not: PEP-3118 is really that complex. ;) > My understanding is this: When a consumer requests a buffer from the > exporter it specifies using flags how it intends to navigate it. If > the buffer actually needs more apparatus than the consumer proposes, > this raises an exception. If the buffer needs less apparatus than > the consumer proposes, the exporter has to supply what was asked > for. For example, if the consumer sets PyBUF_STRIDES, and the > buffer can only be navigated by using suboffsets (PIL-style) this > raises an exception. Alternatively, if the consumer sets > PyBUF_STRIDES, and the buffer is just a simple byte array, the > exporter has to supply shape and strides arrays (with trivial > values), since the consumer is going to use those arrays. Yes. > Is there any harm is supplying shape and strides when they were not > requested? The PEP says: "PyBUF_ND ... If this is not given then > shape will be NULL". It doesn't stipulate that strides will be null > if PyBUF_STRIDES is not given, but the library documentation says > so. suboffsets is different since even when requested, it will be > null if not needed. You are right that the PEP does not explicitly state that rule for strides. However, NULL always has an implied meaning: format=NULL -> treat the buffer as unsigned bytes. shape=NULL -> one-dimensional AND treat the buffer as unsigned bytes. 
strides=NULL -> C-contiguous I think relaxing the NULL rule for strides would complicate things, since it would introduce yet another special case. > Similar, but simpler, the PEP says "PyBUF_FORMAT ... If format is > not explicitly requested then the format must be returned as NULL > (which means "B", or unsigned bytes)". What would be the harm in > returning "B"? Ah, yes. The key here is this: "This would be used when the consumer is going to be checking for what 'kind' of data is actually stored." Conversely, if not requested, format=NULL indicates that the real format may be e.g. 'L', but the consumer wants to treat the buffer as unsigned bytes. This works because the 'len' field stores the length of the memory area in bytes (for contiguous buffers at least). The 'itemsize' field may be wrong though in this special case. In general, format=NULL is a cast of a (possibly multi-dimensional) C-contiguous buffer to a one-dimensional buffer of unsigned bytes. IMO only the following combinations make sense. These two are self explanatory: 1) shape=NULL, format=NULL -> e.g. PyBUF_SIMPLE 2) shape!=NULL, format!=NULL -> e.g. PyBUF_FULL 1) can break the invariant product(shape) * itemsize = len! The next combination exists as part of PyBUF_STRIDED: 3) shape!=NULL, format=NULL. It can break two invariants (product(shape) * itemsize = len, calcsize(format) = itemsize), but since it's explicitly part of PyBUF_STRIDED, memoryview_getbuf() allows it. The remaining combination is disallowed, since the buffer is already assumed to be unsigned bytes: 4) shape=NULL, format!=NULL. > One place where this really matters is in the implementation of > memoryview. PyMemoryView requests a buffer with the flags > PyBUF_FULL_RO, so even a simple byte buffer export will come with > shape, strides and format. A consumer (of the memoryview's buffer > API) might specify PyBUF_SIMPLE: according to the PEP I can't simply > give it the original buffer since required fields (that the consumer > will presumably not access) are not NULL. In practice, I'd like to: > what could possibly go wrong? Because of all the implied meanings of NULL, I think the safest way is to implement memoryview_getbuf() for Jython. After all the PEP describes a protocol, so everyone should really be doing the same thing. Whether the protocol needs to be that complex is another question. Partially initialized buffers are a pain to handle on the C level since it is necessary to reconstruct the missing values -- at least if you want to keep your sanity :). I think the protocol would benefit from changing the getbuffer rules to: a) The buffer gets a 'flags' field that can store properties like PyBUF_SIMPLE, PyBUF_C_CONTIGUOUS etc. b) The exporter must *always* provide full information. c) If a buffer can be exported as unsigned bytes but has a different layout, the exporter must perform a full cast so that the above mentioned invariants are kept. The disadvantage of this is that the original layout is lost for the consumer. I do not know if there is a use case that requires the consumer to have the original layout information. 
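A Python-level sketch of that cast-to-unsigned-bytes idea, using the 3.3 memoryview.cast() method; array.array is just a convenient exporter here, and the itemsize shown assumes an 8-byte C double:

import array

a = array.array('d', [1.0, 2.0, 3.0, 4.0])   # exporter whose native format is 'd'
m = memoryview(a)                            # memoryview requests full buffer information
print(m.format, m.itemsize, m.shape, m.strides)
# 'd' 8 (4,) (8,)   -- the original layout is visible
b = m.cast('B')                              # view the same memory as unsigned bytes
print(b.format, b.itemsize, b.shape, b.strides)
# 'B' 1 (32,) (1,)  -- length is now in bytes; the original layout is gone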
Stefan Krah From "ja...py" at farowl.co.uk Sat Aug 4 15:48:47 2012 From: "ja...py" at farowl.co.uk (Jeff Allen) Date: Sat, 04 Aug 2012 14:48:47 +0100 Subject: [Python-Dev] Understanding the buffer API In-Reply-To: <20120804091150.GA12337@sleipnir.bytereef.org> References: <501C5FF8.10209@farowl.co.uk> <20120804091150.GA12337@sleipnir.bytereef.org> Message-ID: <501D283F.9050207@farowl.co.uk> Thanks for a swift reply: you're just the person I hoped would do so. On 04/08/2012 10:11, Stefan Krah wrote: > You are right that the PEP does not explicitly state that rule for > strides. However, NULL always has an implied meaning: > > format=NULL -> treat the buffer as unsigned bytes. > > shape=NULL -> one-dimensional AND treat the buffer as unsigned bytes. > > strides=NULL -> C-contiguous > > I think relaxing the NULL rule for strides would complicate things, > since it would introduce yet another special case. ... Ok, I think I see that how the absence of certain arrays is used to deduce structural simplicity, over and above their straightforward use in navigating the data. So although no shape array is (sort of) equivalent to ndim==1, shape[0]==len, it also means I can call simpler code instead of using the arrays for navigation. I still don't see why, if the consumer says "I'm assuming 1-D unsigned bytes", and that's what the data is, memoryview_getbuf could not provide a shape and strides that agree with the data. Is the catch perhaps that there is code (in abstract.c etc.) that does not know what the consumer promised not to use/look at? Would it actually break, e.g. not treat it as bytes, or just be inefficient? > Because of all the implied meanings of NULL, I think the safest way is > to implement memoryview_getbuf() for Jython. After all the PEP describes > a protocol, so everyone should really be doing the same thing. I'll look carefully at what you've written (snipped here) because it is these "consumer expectations" that are most important. The Jython buffer API is necessarily a lot different from the C one: some things are not possible in Java (pointer arithmetic) and some are just un-Javan activities (allocate a struct and have the library fill it in). I'm only going for a logical conformance to the PEP: the same navigational and other attributes, that mean the same things for the consumer. When you say such-and-such is disallowed, but the PEP or the data structures seem to provide for it, you mean memoryview_getbuf() disallows it, since you've concluded it is not sensible? > I think the protocol would benefit from changing the getbuffer rules to: > > a) The buffer gets a 'flags' field that can store properties like > PyBUF_SIMPLE, PyBUF_C_CONTIGUOUS etc. > > b) The exporter must *always* provide full information. > > c) If a buffer can be exported as unsigned bytes but has a different > layout, the exporter must perform a full cast so that the above > mentioned invariants are kept. > Just like PyManagedBuffer mbuf and its sister view in memoryview? I've thought the same things, but the tricky part is to do it compatibly. a) I think I can achieve this. As I have interfaces and polymorphism on my side, and a commitment only to logical equivalence to CPython, I can have the preserved flags stashed away inside to affect behaviour. But it's not as simple as saving the consumer's request, and I'm still trying to work it out what to do, e.g. when the consumer didn't ask for C-contiguity, but in this case it happens to be true. In the same way, functions you have in abstract.c etc. 
can be methods that, rather than work out by inspection of a struct how to navigate the data on this call, already know what kind of buffer they are in. So SimpleBuffer.isContiguous(char order) can simply return true. b) What I'm hoping can work, but maybe not. c) Java will not of course give you raw memory it thinks is one thing, to treat as another, so this aspect is immature in my thinking. I got as far as accommodating multi-byte items, but have no use for them as yet. Thanks again for the chance to test my ideas. Jeff Allen From ncoghlan at gmail.com Sat Aug 4 16:51:06 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sun, 5 Aug 2012 00:51:06 +1000 Subject: [Python-Dev] Understanding the buffer API In-Reply-To: <20120804091150.GA12337@sleipnir.bytereef.org> References: <501C5FF8.10209@farowl.co.uk> <20120804091150.GA12337@sleipnir.bytereef.org> Message-ID: On Sat, Aug 4, 2012 at 7:11 PM, Stefan Krah wrote: > You are right that the PEP does not explicitly state that rule for > strides. However, NULL always has an implied meaning: > > format=NULL -> treat the buffer as unsigned bytes. > > shape=NULL -> one-dimensional AND treat the buffer as unsigned bytes. > > strides=NULL -> C-contiguous > > > I think relaxing the NULL rule for strides would complicate things, > since it would introduce yet another special case. I took Jeff's question as being slightly different and applying in the following situations: 1. If the consumer has NOT requested format data, can the provider return accurate format data anyway, if that's easier than returning NULL but is consistent with doing so? 2. The consumer has NOT requested shape data, can shape data be provided anyway, if that's easier than returning NULL but is consistent with doing so? 3. The consumer has NOT requested strides data, can strides data be provided anyway, if that's easier than returning NULL but is consistent with doing so? That's what I believe is Jeff's main question: is a provider that always publishes complete information, even if the consumer doesn't ask for it, in compliance with the API, so long as any cases where the consumer's stated assumption (as indicated by the request flags) would be violated are handled as errors instead of successfully populating the buffer? Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From stefan at bytereef.org Sat Aug 4 17:25:49 2012 From: stefan at bytereef.org (Stefan Krah) Date: Sat, 4 Aug 2012 17:25:49 +0200 Subject: [Python-Dev] Understanding the buffer API In-Reply-To: <501D283F.9050207@farowl.co.uk> References: <501C5FF8.10209@farowl.co.uk> <20120804091150.GA12337@sleipnir.bytereef.org> <501D283F.9050207@farowl.co.uk> Message-ID: <20120804152549.GA16358@sleipnir.bytereef.org> Jeff Allen wrote: > I still don't see why, if the consumer says "I'm assuming 1-D > unsigned bytes", and that's what the data is, memoryview_getbuf > could not provide a shape and strides that agree with the data. In most cases it won't matter. However, a consumer is entitled to rely on shape==NULL in response to a PyBUF_SIMPLE request. Perhaps there is code that tests for shape==NULL to determine C-contiguity. This is an example that might occur in C. You hinted at the fact that not all of this may be relevant for Java, but on that I can't comment. > When you say such-and-such is disallowed, but the PEP or the data > structures seem to provide for it, you mean memoryview_getbuf() > disallows it, since you've concluded it is not sensible? 
The particular request of PyBUF_SIMPLE|PyBUF_FORMAT, when applied to any array that is not one-dimensional with format 'B' would lead to a contradiction: PyBUF_SIMPLE implies 'B', but format would be set to something else. It is also a useless combination, since a plain PyBUF_SIMPLE suffices. > >I think the protocol would benefit from changing the getbuffer rules to: > > > > a) The buffer gets a 'flags' field that can store properties like > > PyBUF_SIMPLE, PyBUF_C_CONTIGUOUS etc. > > > > b) The exporter must *always* provide full information. > > > > c) If a buffer can be exported as unsigned bytes but has a different > > layout, the exporter must perform a full cast so that the above > > mentioned invariants are kept. > > > Just like PyManagedBuffer mbuf and its sister view in memoryview? > I've thought the same things, but the tricky part is to do it > compatibly. > > a) I think I can achieve this. As I have interfaces and polymorphism > on my side, and a commitment only to logical equivalence to CPython, > I can have the preserved flags stashed away inside to affect > behaviour. But it's not as simple as saving the consumer's request, > and I'm still trying to work it out what to do, e.g. when the > consumer didn't ask for C-contiguity, but in this case it happens to > be true. > > In the same way, functions you have in abstract.c etc. can be > methods that, rather than work out by inspection of a struct how to > navigate the data on this call, already know what kind of buffer > they are in. So SimpleBuffer.isContiguous(char order) can simply > return true. Avoiding repeated calls to PyBuffer_IsContiguous() was in fact the main reason for storing flags in the new MemoryViewObject. It would be handy to have these flags in the Py_buffer structure, but that can only be considered for a future version of Python, perhaps no earlier than 4.0. The same applies of course to all three points that I made above. Stefan Krah From storchaka at gmail.com Sat Aug 4 17:25:40 2012 From: storchaka at gmail.com (Serhiy Storchaka) Date: Sat, 04 Aug 2012 18:25:40 +0300 Subject: [Python-Dev] Understanding the buffer API In-Reply-To: References: <501C5FF8.10209@farowl.co.uk> <20120804091150.GA12337@sleipnir.bytereef.org> Message-ID: On 04.08.12 17:51, Nick Coghlan wrote: > I took Jeff's question as being slightly different and applying in the > following situations: > > 1. If the consumer has NOT requested format data, can the provider > return accurate format data anyway, if that's easier than returning > NULL but is consistent with doing so? > > 2. The consumer has NOT requested shape data, can shape data be > provided anyway, if that's easier than returning NULL but is > consistent with doing so? > > 3. The consumer has NOT requested strides data, can strides data be > provided anyway, if that's easier than returning NULL but is > consistent with doing so? 4. The consumer has NOT requested writable buffer, can readonly flag of provided buffer be false anyway? From ncoghlan at gmail.com Sat Aug 4 17:35:00 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sun, 5 Aug 2012 01:35:00 +1000 Subject: [Python-Dev] Understanding the buffer API In-Reply-To: <20120804152549.GA16358@sleipnir.bytereef.org> References: <501C5FF8.10209@farowl.co.uk> <20120804091150.GA12337@sleipnir.bytereef.org> <501D283F.9050207@farowl.co.uk> <20120804152549.GA16358@sleipnir.bytereef.org> Message-ID: On Sun, Aug 5, 2012 at 1:25 AM, Stefan Krah wrote: > In most cases it won't matter. 
However, a consumer is entitled to rely > on shape==NULL in response to a PyBUF_SIMPLE request. Perhaps there > is code that tests for shape==NULL to determine C-contiguity. > > This is an example that might occur in C. You hinted at the fact that not > all of this may be relevant for Java, but on that I can't comment. Think about trying to specify the buffer protocol using only C++ references rather than pointers. In Java, it's a lot easier to say "this value must be a reference to 'B'" than it is to say "this value must be NULL". (My Java is a little rusty, but I'm still pretty sure you can only get NullPointerException by messing about with the JNI). I think it's worth defining an "OR" clause for each of the current "X must be NULL" cases, where it is legal for the provider to emit an appropriate non-NULL value that would be consistent with the consumer assuming that the returned value is consistent with what they requested. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From stefan at bytereef.org Sat Aug 4 17:39:17 2012 From: stefan at bytereef.org (Stefan Krah) Date: Sat, 4 Aug 2012 17:39:17 +0200 Subject: [Python-Dev] Understanding the buffer API In-Reply-To: References: <501C5FF8.10209@farowl.co.uk> <20120804091150.GA12337@sleipnir.bytereef.org> Message-ID: <20120804153916.GA16584@sleipnir.bytereef.org> Nick Coghlan wrote: > I took Jeff's question as being slightly different and applying in the > following situations: I think I attempted to answer the same thing. :) > 1. If the consumer has NOT requested format data, can the provider > return accurate format data anyway, if that's easier than returning > NULL but is consistent with doing so? No, this is definitely disallowed by the PEP (PyBUF_FORMAT): "If format is not explicitly requested then the format must be returned as NULL (which means "B", or unsigned bytes)." > 2. The consumer has NOT requested shape data, can shape data be > provided anyway, if that's easier than returning NULL but is > consistent with doing so? Also explicitly disallowed (PyBUF_ND): "If this is not given then shape will be NULL." > 3. The consumer has NOT requested strides data, can strides data be > provided anyway, if that's easier than returning NULL but is > consistent with doing so? This is not explicitly disallowed, but IMO the intent is that strides should also be NULL in that case. For example, strides==NULL might be used for a quick C-contiguity test. Stefan Krah From chris.jerdonek at gmail.com Sat Aug 4 17:41:33 2012 From: chris.jerdonek at gmail.com (Chris Jerdonek) Date: Sat, 4 Aug 2012 08:41:33 -0700 Subject: [Python-Dev] [Python-checkins] cpython (merge 3.2 -> default): Make TextIOWrapper's documentation clearer by copying the newline argument's In-Reply-To: <3WpkGy3N7VzPgX@mail.python.org> References: <3WpkGy3N7VzPgX@mail.python.org> Message-ID: On Fri, Aug 3, 2012 at 3:59 PM, antoine.pitrou wrote: > http://hg.python.org/cpython/rev/f17a1410ebe5 > changeset: 78401:f17a1410ebe5 > summary: > Make TextIOWrapper's documentation clearer by copying the newline argument's description from open(). Now that this change is made, it may make sense to update the subprocess documentation to reference TextIOWrapper's documentation instead of open()'s (since use of the 'U' flag to open() is discouraged in new code). "All line endings will be converted to '\n' as described for the universal newlines 'U' mode argument to open()." 
(from http://docs.python.org/dev/library/subprocess.html#frequently-used-arguments ) --Chris From stefan at bytereef.org Sat Aug 4 17:43:58 2012 From: stefan at bytereef.org (Stefan Krah) Date: Sat, 4 Aug 2012 17:43:58 +0200 Subject: [Python-Dev] Understanding the buffer API In-Reply-To: References: <501C5FF8.10209@farowl.co.uk> <20120804091150.GA12337@sleipnir.bytereef.org> Message-ID: <20120804154358.GB16584@sleipnir.bytereef.org> Serhiy Storchaka wrote: > 4. The consumer has NOT requested writable buffer, can readonly flag > of provided buffer be false anyway? Yes, per the new documentation. This is not explicitly mentioned in the PEP but was existing practice and greatly simplifies several things: http://docs.python.org/dev/c-api/buffer.html#PyBUF_WRITABLE Stefan Krah From stefan at bytereef.org Sat Aug 4 18:41:40 2012 From: stefan at bytereef.org (Stefan Krah) Date: Sat, 4 Aug 2012 18:41:40 +0200 Subject: [Python-Dev] Understanding the buffer API In-Reply-To: References: <501C5FF8.10209@farowl.co.uk> <20120804091150.GA12337@sleipnir.bytereef.org> <501D283F.9050207@farowl.co.uk> <20120804152549.GA16358@sleipnir.bytereef.org> Message-ID: <20120804164140.GA16845@sleipnir.bytereef.org> Nick Coghlan wrote: > Think about trying to specify the buffer protocol using only C++ > references rather than pointers. In Java, it's a lot easier to say > "this value must be a reference to 'B'" than it is to say "this value > must be NULL". (My Java is a little rusty, but I'm still pretty sure > you can only get NullPointerException by messing about with the JNI). > > I think it's worth defining an "OR" clause for each of the current "X > must be NULL" cases, where it is legal for the provider to emit an > appropriate non-NULL value that would be consistent with the consumer > assuming that the returned value is consistent with what they > requested. I think any implementation that doesn't use the Py_buffer struct directly in a C-API should just always return a full buffer if a specific request can be met according to the rules. For the C-API, I would be cautious: - The number of case splits in testing getbuffer flags is already staggering. Defining an "OR" clause would introduce new cases. - Consumers may simply rely on the status-quo. As I said in my earlier mail, for Python 4.0, I'd rather see that buffers have mandatory full information. Querying individual Py_buffer fields for NULL should be replaced by a set of flags that would determine contiguity, buffer "history" (has the buffer been cast to unsigned bytes?) etc. It would also be possible to add new flags for things like byte order. The main reason is that it turns out that in any general C function that takes a Py_buffer argument one has to reconstruct full information anyway, otherwise obscure cases *will* be overlooked (in the absence of a formal proof that takes care of all case splits). Stefan Krah From ncoghlan at gmail.com Sat Aug 4 19:13:48 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sun, 5 Aug 2012 03:13:48 +1000 Subject: [Python-Dev] Understanding the buffer API In-Reply-To: <20120804164140.GA16845@sleipnir.bytereef.org> References: <501C5FF8.10209@farowl.co.uk> <20120804091150.GA12337@sleipnir.bytereef.org> <501D283F.9050207@farowl.co.uk> <20120804152549.GA16358@sleipnir.bytereef.org> <20120804164140.GA16845@sleipnir.bytereef.org> Message-ID: On Sun, Aug 5, 2012 at 2:41 AM, Stefan Krah wrote: > Nick Coghlan wrote: >> Think about trying to specify the buffer protocol using only C++ >> references rather than pointers. 
In Java, it's a lot easier to say >> "this value must be a reference to 'B'" than it is to say "this value >> must be NULL". (My Java is a little rusty, but I'm still pretty sure >> you can only get NullPointerException by messing about with the JNI). >> >> I think it's worth defining an "OR" clause for each of the current "X >> must be NULL" cases, where it is legal for the provider to emit an >> appropriate non-NULL value that would be consistent with the consumer >> assuming that the returned value is consistent with what they >> requested. > > I think any implementation that doesn't use the Py_buffer struct directly > in a C-API should just always return a full buffer if a specific request > can be met according to the rules. Since Jeff is talking about an inspired-by API, rather than using the C API directly, I think that's the way Jython should go: *require* that those fields be populated appropriately, rather than allowing them to be None. > For the C-API, I would be cautious: > > - The number of case splits in testing getbuffer flags is already > staggering. Defining an "OR" clause would introduce new cases. > > - Consumers may simply rely on the status-quo. > > > As I said in my earlier mail, for Python 4.0, I'd rather see that buffers > have mandatory full information. Querying individual Py_buffer fields for > NULL should be replaced by a set of flags that would determine contiguity, > buffer "history" (has the buffer been cast to unsigned bytes?) etc. Making a switch to mandatory full information later suggest that we need to at least make it optional now. I do agree with what you suggest though, which is that, if a buffer chooses to always publish full and accurate information it must do so for *all* fields.Tthat should reduce the combinatorial explosion. It does place a constraint on consumers that they can't assume those fields will be NULL just because they didn't ask for them, but I'm struggling to think of any reason why a client would actually *check* that instead of just assuming it. I guess the dodgy Py_buffer-copying code in the old memoryview implementation only mostly works because those fields are almost always NULL, but that approach was just deeply broken in general. > The main reason is that it turns out that in any general C function that > takes a Py_buffer argument one has to reconstruct full information anyway, > otherwise obscure cases *will* be overlooked (in the absence of a formal > proof that takes care of all case splits). Right, that's why I think we should declare it legal to *provide* full information even if the consumer didn't ask for it, *as long as* any consumer assumptions implied by the limited request (such as unsigned byte data, a single dimension or C contiguity) remain valid. Consumers that can't handle that correctly (which would likely include the pre-3.3 memoryview) are officially broken. As you say, we likely can't make providing full information mandatory during the 3.x cycle, but we can at least pave the way for it. Cheers, Nick. 
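As a data point, the rewritten 3.3 memoryview already looks like such a full-information provider at the Python level, even for the simplest possible exporter (output shown as comments, from a 3.3 interpreter):

m = memoryview(b"abc")   # bytes: a simple, 1-D, read-only, C-contiguous exporter
print(m.format, m.ndim, m.shape, m.strides, m.readonly)
# 'B' 1 (3,) (1,) True  -- trivial but complete metadata rather than None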
-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From victor.stinner at gmail.com Sat Aug 4 20:51:58 2012 From: victor.stinner at gmail.com (Victor Stinner) Date: Sat, 4 Aug 2012 20:51:58 +0200 Subject: [Python-Dev] [Python-checkins] cpython (merge 3.2 -> default): Make TextIOWrapper's documentation clearer by copying the newline argument's In-Reply-To: References: <3WpkGy3N7VzPgX@mail.python.org> Message-ID: 2012/8/4 Chris Jerdonek : > On Fri, Aug 3, 2012 at 3:59 PM, antoine.pitrou > wrote: >> http://hg.python.org/cpython/rev/f17a1410ebe5 >> changeset: 78401:f17a1410ebe5 >> summary: >> Make TextIOWrapper's documentation clearer by copying the newline argument's description from open(). > > Now that this change is made, it may make sense to update the > subprocess documentation to reference TextIOWrapper's documentation > instead of open()'s (since use of the 'U' flag to open() is > discouraged in new code). > > "All line endings will be converted to '\n' as described for the > universal newlines 'U' mode argument to open()." > > (from http://docs.python.org/dev/library/subprocess.html#frequently-used-arguments > ) Good idea, can you please open an issue? The documentation is wrong: UTF-8 is not used, it's the locale encoding. Victor From "ja...py" at farowl.co.uk Sun Aug 5 12:08:58 2012 From: "ja...py" at farowl.co.uk (Jeff Allen) Date: Sun, 05 Aug 2012 11:08:58 +0100 Subject: [Python-Dev] Understanding the buffer API In-Reply-To: References: <501C5FF8.10209@farowl.co.uk> <20120804091150.GA12337@sleipnir.bytereef.org> <501D283F.9050207@farowl.co.uk> <20120804152549.GA16358@sleipnir.bytereef.org> <20120804164140.GA16845@sleipnir.bytereef.org> Message-ID: <501E463A.9090200@farowl.co.uk> - Summary: The PEP, or sometimes just the documentation, definitely requires that features not requested shall be NULL. The API would benefit from: a. stored flags that tell you the actual structural features. b. requiring exporters to provide full information (e.g. strides = {1}, format = "B") even when trivial. It could and possibly should work this way in Python 4.0. Nick thinks we could *allow* exporters to behave this way (PEP change) in Python 3.x. Stefan thinks not, because "Perhaps there is code that tests for shape==NULL to determine C-contiguity." Jython exporters should return full information unconditionally from the start: "any implementation that doesn't use the Py_buffer struct directly in a C-API should just always return a full buffer" (Stefan); "I think that's the way Jython should go: *require* that those fields be populated appropriately" (Nick). - But what I now think is: _If the only problem really is_ "code that tests for shape==NULL to determine C-contiguity", or makes similar deductions, I agree that providing unasked-for information is_safe_. I think the stipulation in PEP/documentation has some efficiency value: on finding shape!=NULL the code has to do a more complicated test, as inPyBuffer_IsContiguous(). I have the option to provide an isContiguous that has the answer written down already, so the risk is only from/to ported code. If it is only a risk to the efficiency of ported code, I'm relaxed: I hesitate only to check that there's no circumstance that logically requires nullity for correctness. Whether it was safe that was the key question. 
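That "answer written down already" approach matches what the 3.3 memoryview exposes at the Python level, where contiguity is reported via properties backed by flags stored at export time rather than recomputed from shape and strides on every query (a small illustration; output as comments):

import array

m = memoryview(array.array('d', [1.0, 2.0, 3.0]))
print(m.c_contiguous, m.f_contiguous, m.contiguous)
# True True True  -- a one-dimensional contiguous export satisfies all three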
In the hypothetical Python 4.0 buffer API (and in Jython) where feature flags are provided, the efficiency is still useful, but complicated deductive logic in the consumer should be deprecated in favour of (functions for) interrogating the flags. An example illustrating the semantics would then be: 1. consumer requests a buffer, saying "I can cope with a strided arrays" (PyBUF_STRIDED); 2. exporter provides a strides array, but in the feature flags STRIDED=0, meaning "you don't need the strides array"; 3. exporter (optionally) uses efficient, non-strided access. _I do not think_ that full provision by the exporter has to be _mandatory_, as the discussion has gone on to suggest. I know your experience is that you have often had to regenerate the missing information to write generic code, but I think this does not continue once you have the feature flags. An example would be: 1. consumer requests a buffer, saying "I can cope with a N-dimensional but not strided arrays" (PyBUF_ND); 2. exporter sets strides=NULL, and the feature flag STRIDED=0; 3. exporter accesses the data, without reference to the strides array, as it planned; 4. new generic code that respects the feature flag STRIDED=0, does not reference the strides array; 5. old generic code, ignorant of the feature flags, finds the strides=NULL and so does not dereference strides. Insofar as it is not necessary, there is some efficiency in not providing it. There would only be a problem with broken code that both ignores the feature flag and uses the strides array unchecked. But this code was always broken. Really useful discussion this. Jeff -------------- next part -------------- An HTML attachment was scrubbed... URL: From solipsis at pitrou.net Sun Aug 5 15:00:04 2012 From: solipsis at pitrou.net (Antoine Pitrou) Date: Sun, 5 Aug 2012 15:00:04 +0200 Subject: [Python-Dev] cpython: Close #15559: Implementing __index__ creates a nasty interaction with the bytes References: <3WqZgS2zn9zPkY@mail.python.org> Message-ID: <20120805150004.6e38ffdc@pitrou.net> On Sun, 5 Aug 2012 10:20:36 +0200 (CEST) nick.coghlan wrote: > http://hg.python.org/cpython/rev/5abea8a43f19 > changeset: 78426:5abea8a43f19 > user: Nick Coghlan > date: Sun Aug 05 18:20:17 2012 +1000 > summary: > Close #15559: Implementing __index__ creates a nasty interaction with the bytes constructor. At least for 3.3, ipaddress objects must now be explicitly converted with int() and thus can't be passed directly to the hex() builtin. __index__, as the name implies, allows instances to be used as sequence indices, which does sound like a weird thing to serve as for an IP address :-) Regards Antoine. -- Software development and contracting: http://pro.pitrou.net From ncoghlan at gmail.com Sun Aug 5 15:42:38 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sun, 5 Aug 2012 23:42:38 +1000 Subject: [Python-Dev] cpython: Close #15559: Implementing __index__ creates a nasty interaction with the bytes In-Reply-To: <20120805150004.6e38ffdc@pitrou.net> References: <3WqZgS2zn9zPkY@mail.python.org> <20120805150004.6e38ffdc@pitrou.net> Message-ID: On Sun, Aug 5, 2012 at 11:00 PM, Antoine Pitrou wrote: > On Sun, 5 Aug 2012 10:20:36 +0200 (CEST) > nick.coghlan wrote: > >> http://hg.python.org/cpython/rev/5abea8a43f19 >> changeset: 78426:5abea8a43f19 >> user: Nick Coghlan >> date: Sun Aug 05 18:20:17 2012 +1000 >> summary: >> Close #15559: Implementing __index__ creates a nasty interaction with the bytes constructor. 
At least for 3.3, ipaddress objects must now be explicitly converted with int() and thus can't be passed directly to the hex() builtin. I noticed this when I tried "bytes(ipaddress.Ipv4Address('192.168.0.1')" Apparently allocating and initialising a 3.2 GB array on an ASUS Zenbook consumes large amounts of time and makes the X server rather unresponsive. Even faulthandler's timeout thread took more than ten times the specified timeout to actually kill the operation. Who knew? :) > __index__, as the name implies, allows instances to be used as sequence > indices, which does sound like a weird thing to serve as for an IP > address :-) I expect the original reasoning had to do with the hex() builtin. In 2.x you could selectively support that by implementing __hex__ directly. In 3.x, the __oct__ and __hex__ methods are gone and the only way to support those builtins (as well as bin()) is by implementing __index__ instead. However, implementing __index__ makes the type usable in a whole host of other contexts as well, so the naive __hex__ -> __index__ conversion really wasn't a good idea. I'm thinking it may make sense to eventually implement __bytes__, as having bytes(address) be equivalent to address.packed *does* make sense. No hurry on that, though. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From chris.jerdonek at gmail.com Sun Aug 5 17:55:10 2012 From: chris.jerdonek at gmail.com (Chris Jerdonek) Date: Sun, 5 Aug 2012 08:55:10 -0700 Subject: [Python-Dev] [Python-checkins] devguide: It has been pointed out this paragraph was incorrect. One of the Windows devs In-Reply-To: <3WqbHw6rP5zPl0@mail.python.org> References: <3WqbHw6rP5zPl0@mail.python.org> Message-ID: On Sun, Aug 5, 2012 at 1:48 AM, nick.coghlan wrote: > http://hg.python.org/devguide/rev/f518f23d06d5 > changeset: 539:f518f23d06d5 > summary: > It has been pointed out this paragraph was incorrect. One of the Windows devs will need to fill in more accurate info > diff --git a/setup.rst b/setup.rst > -For Windows systems, all the necessary components should be included in the > -CPython checkout. This issue may not provide the information you're looking for, but it is related: http://bugs.python.org/issue14873 It has a patch from a few days ago awaiting review. Incidentally, the UNIX-specific information added in http://hg.python.org/devguide/rev/80358cdac0a6 might be good to link to in the UNIX section here: "Do take note of what modules were not built as stated at the end of your build. More than likely you are missing a dependency for the module(s) that were not built, and so you can install the dependencies and re-run both configure and make (if available for your OS)." (from http://docs.python.org/devguide/setup.html#unix ) Or else move the UNIX-specific information into a subsection of the UNIX section. --Chris From chris.jerdonek at gmail.com Sun Aug 5 21:56:38 2012 From: chris.jerdonek at gmail.com (Chris Jerdonek) Date: Sun, 5 Aug 2012 12:56:38 -0700 Subject: [Python-Dev] [Python-checkins] cpython (merge 3.2 -> default): Make TextIOWrapper's documentation clearer by copying the newline argument's In-Reply-To: References: <3WpkGy3N7VzPgX@mail.python.org> Message-ID: On Sat, Aug 4, 2012 at 11:51 AM, Victor Stinner wrote: > 2012/8/4 Chris Jerdonek : >> Now that this change is made, it may make sense to update the >> subprocess documentation to reference TextIOWrapper's documentation >> instead of open()'s (since use of the 'U' flag to open() is >> discouraged in new code). 
> > Good idea, can you please open an issue? The documentation is wrong: > UTF-8 is not used, it's the locale encoding. I created an issue for this (with patch) here: http://bugs.python.org/issue15561 --Chris From breamoreboy at yahoo.co.uk Mon Aug 6 00:31:01 2012 From: breamoreboy at yahoo.co.uk (Mark Lawrence) Date: Sun, 05 Aug 2012 23:31:01 +0100 Subject: [Python-Dev] No summary of tracker issues this week? Message-ID: Hi all, I keep an eye open for this but can't find one for Saturday 03/08/2012. Have I missed it, has it been stopped, has something gone wrong with its production or what? -- Cheers. Mark Lawrence. From python at mrabarnett.plus.com Mon Aug 6 01:39:22 2012 From: python at mrabarnett.plus.com (MRAB) Date: Mon, 06 Aug 2012 00:39:22 +0100 Subject: [Python-Dev] No summary of tracker issues this week? In-Reply-To: References: Message-ID: <501F042A.6010702@mrabarnett.plus.com> On 05/08/2012 23:31, Mark Lawrence wrote: > Hi all, > > I keep an eye open for this but can't find one for Saturday 03/08/2012. > Have I missed it, has it been stopped, has something gone wrong with > its production or what? > I haven't seen it either. From status at bugs.python.org Mon Aug 6 03:25:02 2012 From: status at bugs.python.org (Python tracker) Date: Mon, 6 Aug 2012 03:25:02 +0200 (CEST) Subject: [Python-Dev] Summary of Python tracker Issues Message-ID: <20120806012502.3EAA41CC04@psf.upfronthosting.co.za> ACTIVITY SUMMARY (2012-07-30 - 2012-08-06) Python tracker at http://bugs.python.org/ To view or respond to any of the issues listed below, click on the issue. Do NOT respond to this message. Issues counts and deltas: open 3559 (+18) closed 23781 (+51) total 27340 (+69) Open issues with patches: 1534 Issues opened (42) ================== #13072: Getting a buffer from a Unicode array uses invalid format http://bugs.python.org/issue13072 reopened by haypo #15494: Move test/support.py into a test.support subpackage http://bugs.python.org/issue15494 opened by cjerdonek #15495: enable type truncation warnings for gcc builds http://bugs.python.org/issue15495 opened by jkloth #15496: harden directory removal for tests on Windows http://bugs.python.org/issue15496 opened by jkloth #15497: correct characters in TextWrapper.replace_whitespace docs http://bugs.python.org/issue15497 opened by cjerdonek #15498: Eliminate the use of deprecated OS X APIs in getpath.c http://bugs.python.org/issue15498 opened by ned.deily #15500: Python should support naming threads http://bugs.python.org/issue15500 opened by bra #15501: Document exception classes in subprocess module http://bugs.python.org/issue15501 opened by anton.barkovsky #15502: Meta path finders and path entry finders are different, but sh http://bugs.python.org/issue15502 opened by ncoghlan #15504: pickle/cPickle saves invalid/incomplete data http://bugs.python.org/issue15504 opened by Philipp.Lies #15505: unittest.installHandler incorrectly assumes SIGINT handler is http://bugs.python.org/issue15505 opened by twouters #15506: configure should use PKG_PROG_PKG_CONFIG http://bugs.python.org/issue15506 opened by vapier #15507: test_subprocess assumes SIGINT is not being ignored. 
http://bugs.python.org/issue15507 opened by twouters #15509: webbrowser.open sometimes passes zero-length argument to the b http://bugs.python.org/issue15509 opened by anton.barkovsky #15510: textwrap.wrap('') returns empty list http://bugs.python.org/issue15510 opened by cjerdonek #15511: _decimal does not build in PGUpdate mode http://bugs.python.org/issue15511 opened by skrah #15513: Correct __sizeof__ support for pickle http://bugs.python.org/issue15513 opened by storchaka #15516: exception-handling bug in PyString_Format http://bugs.python.org/issue15516 opened by tromey #15518: Provide test coverage for filecmp.dircmp.report methods. http://bugs.python.org/issue15518 opened by cbc #15520: Document datetime.timestamp() in 3.3 What's New http://bugs.python.org/issue15520 opened by djc #15522: impove 27 percent performance on stringpbject.c( by prefetch a http://bugs.python.org/issue15522 opened by abael #15523: Block on close TCP socket in SocketServer.py http://bugs.python.org/issue15523 opened by jarvisliang #15526: regrtest crash on Windows 7 AMD64 http://bugs.python.org/issue15526 opened by ncoghlan #15527: Double parens in functions references http://bugs.python.org/issue15527 opened by storchaka #15528: Better support for finalization with weakrefs http://bugs.python.org/issue15528 opened by sbt #15533: subprocess.Popen(cwd) documentation http://bugs.python.org/issue15533 opened by cjerdonek #15535: Fix pickling of named tuples in 2.7.3 http://bugs.python.org/issue15535 opened by thomie #15539: Fixing Tools/scripts/pindent.py http://bugs.python.org/issue15539 opened by storchaka #15542: Documentation incorrectly suggests __init__ called after direc http://bugs.python.org/issue15542 opened by Aaron.Staley #15543: central documentation for 'universal newlines' http://bugs.python.org/issue15543 opened by cjerdonek #15544: math.isnan fails with some Decimal NaNs http://bugs.python.org/issue15544 opened by stevenjd #15545: sqlite3.Connection.iterdump() does not work with row_factory = http://bugs.python.org/issue15545 opened by plemarre #15548: Mention all new os functions in What's New in Python 3.3 http://bugs.python.org/issue15548 opened by haypo #15549: openssl version in windows builds does not support renegotiati http://bugs.python.org/issue15549 opened by cory.mintz #15550: Trailing white spaces http://bugs.python.org/issue15550 opened by storchaka #15552: gettext: if looking for .mo in default locations, also look in http://bugs.python.org/issue15552 opened by Dominique.Leuenberger #15553: Segfault in test_6_daemon_threads() of test_threading, on Mac http://bugs.python.org/issue15553 opened by haypo #15554: correct and clarify str.splitlines() documentation http://bugs.python.org/issue15554 opened by cjerdonek #15555: Default newlines of io.TextIOWrapper http://bugs.python.org/issue15555 opened by ishimoto #15556: os.stat fails for file pending delete on Windows http://bugs.python.org/issue15556 opened by jkloth #15557: Tests for webbrowser module http://bugs.python.org/issue15557 opened by anton.barkovsky #15561: update subprocess docs to reference io.TextIOWrapper http://bugs.python.org/issue15561 opened by cjerdonek Most recent 15 issues with no replies (15) ========================================== #15561: update subprocess docs to reference io.TextIOWrapper http://bugs.python.org/issue15561 #15557: Tests for webbrowser module http://bugs.python.org/issue15557 #15553: Segfault in test_6_daemon_threads() of test_threading, on Mac http://bugs.python.org/issue15553 #15552: 
gettext: if looking for .mo in default locations, also look in http://bugs.python.org/issue15552 #15549: openssl version in windows builds does not support renegotiati http://bugs.python.org/issue15549 #15535: Fix pickling of named tuples in 2.7.3 http://bugs.python.org/issue15535 #15533: subprocess.Popen(cwd) documentation http://bugs.python.org/issue15533 #15527: Double parens in functions references http://bugs.python.org/issue15527 #15523: Block on close TCP socket in SocketServer.py http://bugs.python.org/issue15523 #15506: configure should use PKG_PROG_PKG_CONFIG http://bugs.python.org/issue15506 #15501: Document exception classes in subprocess module http://bugs.python.org/issue15501 #15497: correct characters in TextWrapper.replace_whitespace docs http://bugs.python.org/issue15497 #15485: CROSS: append gcc library search paths http://bugs.python.org/issue15485 #15483: CROSS: initialise include and library paths in setup.py http://bugs.python.org/issue15483 #15480: Drop TYPE_INT64 from marshal in Python 3.4 http://bugs.python.org/issue15480 Most recent 15 issues waiting for review (15) ============================================= #15561: update subprocess docs to reference io.TextIOWrapper http://bugs.python.org/issue15561 #15557: Tests for webbrowser module http://bugs.python.org/issue15557 #15554: correct and clarify str.splitlines() documentation http://bugs.python.org/issue15554 #15552: gettext: if looking for .mo in default locations, also look in http://bugs.python.org/issue15552 #15550: Trailing white spaces http://bugs.python.org/issue15550 #15548: Mention all new os functions in What's New in Python 3.3 http://bugs.python.org/issue15548 #15544: math.isnan fails with some Decimal NaNs http://bugs.python.org/issue15544 #15539: Fixing Tools/scripts/pindent.py http://bugs.python.org/issue15539 #15535: Fix pickling of named tuples in 2.7.3 http://bugs.python.org/issue15535 #15528: Better support for finalization with weakrefs http://bugs.python.org/issue15528 #15518: Provide test coverage for filecmp.dircmp.report methods. 
http://bugs.python.org/issue15518 #15513: Correct __sizeof__ support for pickle http://bugs.python.org/issue15513 #15511: _decimal does not build in PGUpdate mode http://bugs.python.org/issue15511 #15510: textwrap.wrap('') returns empty list http://bugs.python.org/issue15510 #15509: webbrowser.open sometimes passes zero-length argument to the b http://bugs.python.org/issue15509 Top 10 most discussed issues (10) ================================= #15502: Meta path finders and path entry finders are different, but sh http://bugs.python.org/issue15502 54 msgs #14814: Implement PEP 3144 (the ipaddress module) http://bugs.python.org/issue14814 25 msgs #15510: textwrap.wrap('') returns empty list http://bugs.python.org/issue15510 25 msgs #15496: harden directory removal for tests on Windows http://bugs.python.org/issue15496 17 msgs #15544: math.isnan fails with some Decimal NaNs http://bugs.python.org/issue15544 14 msgs #13072: Getting a buffer from a Unicode array uses invalid format http://bugs.python.org/issue13072 10 msgs #15550: Trailing white spaces http://bugs.python.org/issue15550 10 msgs #15231: update PyPI upload doc to say --no-raw passed to rst2html.py http://bugs.python.org/issue15231 9 msgs #15511: _decimal does not build in PGUpdate mode http://bugs.python.org/issue15511 9 msgs #15494: Move test/support.py into a test.support subpackage http://bugs.python.org/issue15494 8 msgs Issues closed (48) ================== #8847: crash appending list and namedtuple http://bugs.python.org/issue8847 closed by loewis #12073: regrtest: use faulthandler to dump the tracebacks on SIGUSR1 http://bugs.python.org/issue12073 closed by haypo #12288: tkinter SimpleDialog initialvalue http://bugs.python.org/issue12288 closed by asvetlov #12507: tkSimpleDialog problem http://bugs.python.org/issue12507 closed by ronaldoussoren #13052: IDLE: replace ending with '\' causes crash http://bugs.python.org/issue13052 closed by asvetlov #13119: Newline for print() is \n on Windows, and not \r\n as expected http://bugs.python.org/issue13119 closed by python-dev #13371: Some Carbon extensions don't build on OSX 10.7 http://bugs.python.org/issue13371 closed by ronaldoussoren #14018: OS X installer does not detect bad symlinks created by Xcode 3 http://bugs.python.org/issue14018 closed by ned.deily #15077: Regexp match goes into infinite loop http://bugs.python.org/issue15077 closed by storchaka #15202: followlinks/follow_symlinks/symlinks flags unification http://bugs.python.org/issue15202 closed by storchaka #15295: Import machinery documentation http://bugs.python.org/issue15295 closed by barry #15441: test_posixpath fails on Japanese edition of Windows http://bugs.python.org/issue15441 closed by haypo #15463: test_faulthandler can fail if install path is too long http://bugs.python.org/issue15463 closed by haypo #15469: Correct __sizeof__ support for deque http://bugs.python.org/issue15469 closed by python-dev #15470: Stuck/hang when reading ssl object http://bugs.python.org/issue15470 closed by seamus.mckenna #15473: importlib no longer uses imp.NullImporter http://bugs.python.org/issue15473 closed by brett.cannon #15481: Add exec_module() as part of the import loader API http://bugs.python.org/issue15481 closed by eric.snow #15482: __import__() change between 3.2 and 3.3 http://bugs.python.org/issue15482 closed by brett.cannon #15486: Standardised mechanism for stripping importlib frames from tra http://bugs.python.org/issue15486 closed by python-dev #15492: textwrap.wrap expand_tabs does not behave as expected 
http://bugs.python.org/issue15492 closed by r.david.murray #15499: Sleep is hardcoded in webbrowser.UnixBrowser http://bugs.python.org/issue15499 closed by python-dev #15503: concatenating string in dict unexpected performance http://bugs.python.org/issue15503 closed by pitrou #15508: __import__.__doc__ has outdated information about level http://bugs.python.org/issue15508 closed by brett.cannon #15512: Correct __sizeof__ support for parser http://bugs.python.org/issue15512 closed by python-dev #15514: Correct __sizeof__ support for cpu_set http://bugs.python.org/issue15514 closed by python-dev #15515: Regular expression match does not return http://bugs.python.org/issue15515 closed by crouleau #15517: Minor trimming for ASDL parser http://bugs.python.org/issue15517 closed by python-dev #15519: finish exposing WindowsRegistryImporter in importlib http://bugs.python.org/issue15519 closed by ncoghlan #15521: Dev Guide should say how to run tests in 2.7 http://bugs.python.org/issue15521 closed by ezio.melotti #15524: Dict items() ordering varies across interpreter invocations http://bugs.python.org/issue15524 closed by pitrou #15525: test_multiprocessing failure on Windows XP http://bugs.python.org/issue15525 closed by sbt #15529: PyIter_Check evaluates to 0 for Python list object http://bugs.python.org/issue15529 closed by benjamin.peterson #15530: Enhance Py_MIN and Py_MAX http://bugs.python.org/issue15530 closed by loewis #15531: os.path symlink docs missing http://bugs.python.org/issue15531 closed by larry #15532: "for line in file" is *still* broken in Python 2.7 on pipes http://bugs.python.org/issue15532 closed by ned.deily #15534: xmlrpc escaping breaks on unicode \u043c http://bugs.python.org/issue15534 closed by python-dev #15536: re.split doesn't respect MULTILINE http://bugs.python.org/issue15536 closed by r.david.murray #15537: MULTILINE confuses re.split http://bugs.python.org/issue15537 closed by ezio.melotti #15538: Avoid nonstandard s6_addr8 http://bugs.python.org/issue15538 closed by pitrou #15540: Python 3.3 and numpy http://bugs.python.org/issue15540 closed by ncoghlan #15541: logging.exception doesn't accept 'extra' http://bugs.python.org/issue15541 closed by vinay.sajip #15546: Iteration breaks with bz2.open(filename,'rt') http://bugs.python.org/issue15546 closed by nadeem.vawda #15547: Why do we have os.truncate() and os.ftruncate() whereas os.tru http://bugs.python.org/issue15547 closed by larry #15551: Unit tests that return generators silently fail http://bugs.python.org/issue15551 closed by michael.foord #15558: webbrowser output to console http://bugs.python.org/issue15558 closed by r.david.murray #15559: Bad interaction between ipaddress addresses and the bytes cons http://bugs.python.org/issue15559 closed by python-dev #15560: _sqlite3.so is built with wrong include file on OS X when usin http://bugs.python.org/issue15560 closed by ned.deily #15562: CaseFolding not working properly http://bugs.python.org/issue15562 closed by benjamin.peterson From rdmurray at bitdance.com Mon Aug 6 03:30:13 2012 From: rdmurray at bitdance.com (R. David Murray) Date: Sun, 05 Aug 2012 21:30:13 -0400 Subject: [Python-Dev] No summary of tracker issues this week? 
In-Reply-To: <501F042A.6010702@mrabarnett.plus.com> References: <501F042A.6010702@mrabarnett.plus.com> Message-ID: <20120806013014.58FBA2500FA@webabinitio.net> On Mon, 06 Aug 2012 00:39:22 +0100, MRAB wrote: > On 05/08/2012 23:31, Mark Lawrence wrote: > > Hi all, > > > > I keep an eye open for this but can't find one for Saturday 03/08/2012. > > Have I missed it, has it been stopped, has something gone wrong with > > its production or what? > > > I haven't seen it either. Thanks for noticing. It should be fixed now. I triggered a run of the report manually, but be aware that it reports the last week, so it starts from last Monday early AM UTC, and ends now...so there will be some reporting overlap in next Friday's report. (Not that that should matter much unless someone is using them to track week-to-week statistics.) --David From dinov at microsoft.com Tue Aug 7 01:31:57 2012 From: dinov at microsoft.com (Dino Viehland) Date: Mon, 6 Aug 2012 23:31:57 +0000 Subject: [Python-Dev] yield from, user defined iterators, and StopIteration w/ a value... Message-ID: I'm trying to create an object which works like a generator and delegates to a generator for its implementation, but can also participate in yield from using 3.3 beta. I want my wrapper object to be able to cache some additional information - such as whether or not the generator has completed - as well as have it provide some additional methods for interacting with the state of the generator. But currently this doesn't seem possible because raising StopIteration from a user defined iterator has its value ignored as far as yield from is concerned. Here's a simplified example of the problem: class C: def __iter__(self): return self def __next__(self): raise StopIteration(100) def g(): if False: yield 100 return 100 def f(val): x = yield from val print('x', x) print(list(f(C()))) print(list(f(g()))) Which outputs: x None [] x 100 [] So you can see for the C case the value I raise from StopIteration is ignored, but the value from the generator is propagated out. From my reading of PEP 380 the behavior here is incorrect for the user defined iterator case. next(iter(C())) raises StopIteration exception with a value and that should be the resulting value of the yield from expression according to the formal semantics. Ok, looking at the implementation this seems to be because PyIter_Next clears the exception which prevents it from being seen in the yield from code path. So should yield from just be doing "(*iter->ob_type->tp_iternext)(iter);" directly and avoid the error checking code? Or am I wrong and this is the intended behavior? From benjamin at python.org Tue Aug 7 02:02:35 2012 From: benjamin at python.org (Benjamin Peterson) Date: Mon, 6 Aug 2012 17:02:35 -0700 Subject: [Python-Dev] yield from, user defined iterators, and StopIteration w/ a value... In-Reply-To: References: Message-ID: 2012/8/6 Dino Viehland : > I'm trying to create an object which works like a generator and delegates to a generator for its implementation, but can also participate in yield from using 3.3 beta. I want my wrapper object to be able to cache some additional information - such as whether or not the generator has completed - as well as have it provide some additional methods for interacting with the state of the generator. But currently this doesn't seem possible because raising StopIteration from a user defined iterator has its value ignored as far as yield from is concerned. 
Here's a simplified example of the problem: > > class C: > def __iter__(self): return self > def __next__(self): > raise StopIteration(100) > > > def g(): > if False: > yield 100 > return 100 > > def f(val): > x = yield from val > print('x', x) > > print(list(f(C()))) > print(list(f(g()))) > > Which outputs: > x None > [] > x 100 > [] > > So you can see for the C case the value I raise from StopIteration is ignored, but the value from the generator is propagated out. From my reading of PEP 380 the behavior here is incorrect for the user defined iterator case. next(iter(C())) raises StopIteration exception with a value and that should be the resulting value of the yield from expression according to the formal semantics. Looks like a bug to me. Please file an issue. > > Ok, looking at the implementation this seems to be because PyIter_Next clears the exception which prevents it from being seen in the yield from code path. So should yield from just be doing "(*iter->ob_type->tp_iternext)(iter);" directly and avoid the error checking code? Or am I wrong and this is the intended behavior? This is probably the simpliest fix. In C, returning NULL from __next__ with no exception set is shorthand for StopIteration. -- Regards, Benjamin From ncoghlan at gmail.com Tue Aug 7 03:54:00 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 7 Aug 2012 11:54:00 +1000 Subject: [Python-Dev] yield from, user defined iterators, and StopIteration w/ a value... In-Reply-To: References: Message-ID: On Tue, Aug 7, 2012 at 10:02 AM, Benjamin Peterson wrote: > 2012/8/6 Dino Viehland : >> I'm trying to create an object which works like a generator and delegates to a generator for its implementation, but can also participate in yield from using 3.3 beta. I want my wrapper object to be able to cache some additional information - such as whether or not the generator has completed - as well as have it provide some additional methods for interacting with the state of the generator. But currently this doesn't seem possible because raising StopIteration from a user defined iterator has its value ignored as far as yield from is concerned. Here's a simplified example of the problem: >> >> class C: >> def __iter__(self): return self >> def __next__(self): >> raise StopIteration(100) >> >> >> def g(): >> if False: >> yield 100 >> return 100 >> >> def f(val): >> x = yield from val >> print('x', x) >> >> print(list(f(C()))) >> print(list(f(g()))) >> >> Which outputs: >> x None >> [] >> x 100 >> [] >> >> So you can see for the C case the value I raise from StopIteration is ignored, but the value from the generator is propagated out. From my reading of PEP 380 the behavior here is incorrect for the user defined iterator case. next(iter(C())) raises StopIteration exception with a value and that should be the resulting value of the yield from expression according to the formal semantics. > > Looks like a bug to me. Please file an issue. > >> >> Ok, looking at the implementation this seems to be because PyIter_Next clears the exception which prevents it from being seen in the yield from code path. So should yield from just be doing "(*iter->ob_type->tp_iternext)(iter);" directly and avoid the error checking code? Or am I wrong and this is the intended behavior? > > This is probably the simpliest fix. In C, returning NULL from __next__ > with no exception set is shorthand for StopIteration. +1 from me for the simple fix. Cheers, Nick. 
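For reference, a short sketch of the behaviour PEP 380 specifies and that the proposed fix should restore; the Counter class is made up for illustration, and on the 3.3 beta discussed above the non-generator case still yields None instead of the expected 3:

class Counter:
    """Iterator that hands a final value to the caller via StopIteration."""
    def __init__(self, n):
        self.n, self.i = n, 0
    def __iter__(self):
        return self
    def __next__(self):
        if self.i >= self.n:
            raise StopIteration(self.i)   # stored as StopIteration.value
        self.i += 1
        return self.i

def delegate(it):
    result = yield from it                # PEP 380: result should be StopIteration.value
    print('result:', result)

print(list(delegate(Counter(3))))         # [1, 2, 3], with 'result: 3' once fixed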
-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From stefan_ml at behnel.de Tue Aug 7 22:52:30 2012 From: stefan_ml at behnel.de (Stefan Behnel) Date: Tue, 07 Aug 2012 22:52:30 +0200 Subject: [Python-Dev] PEP 366 is unclear about what it specifies Message-ID: Hi, could someone please add a sentence to PEP 366 that describes the actual content of the new "__package__" attribute (and thus, the PEP)? http://www.python.org/dev/peps/pep-0366/ I had to read through almost the entire document to be assured that "__package__" is really supposed to contain a string and I had a hard time nailing down its content. The only real hint in there is the example, and even that is ambiguous. Please change the first paragraph in the "proposed change" section to this: """ The major proposed change is the introduction of a new module level [NEW]string[/NEW] attribute, __package__.[NEW:] It contains the fully qualified name of the package that the module lives in, without the module name itself[/NEW]. When it is present, ... """ Thanks, Stefan From brett at python.org Tue Aug 7 23:26:48 2012 From: brett at python.org (Brett Cannon) Date: Tue, 7 Aug 2012 17:26:48 -0400 Subject: [Python-Dev] PEP 366 is unclear about what it specifies In-Reply-To: References: Message-ID: On Tue, Aug 7, 2012 at 4:52 PM, Stefan Behnel wrote: > Hi, > > could someone please add a sentence to PEP 366 that describes the actual > content of the new "__package__" attribute (and thus, the PEP)? > > http://www.python.org/dev/peps/pep-0366/ > > I had to read through almost the entire document to be assured that > "__package__" is really supposed to contain a string and I had a hard time > nailing down its content. The only real hint in there is the example, and > even that is ambiguous. > Two things when it comes to understanding import now. One is that Barry's heavy rewrite of the import docs makes all of this much clearer and thus should be used as the reference over the PEPs, e.g. http://docs.python.org/dev/py3k/reference/import.html?highlight=__package__#loaders explains how __package__ should be set very clearly (yes, it references PEP 366 for nitty-gritty discussion and details, but honestly there isn't anything in the PEP that he didn't explain in the reference). Two is that importlib makes it fairly straight-forward to read the source code to understand how some aspect of imports work and should be clear enough to not require reading PEPs to understand how a feature works. Honestly, updating the PEPs constantly is a pain. We have code and docs already that most users follow, so worrying about constantly touching up PEPs is simply a third thing to have to keep track of that a majority of our users will never see. I know I don't plan to touch PEP 302 anymore in order to keep things straight in that doc as it's more hassle than it's worth. -------------- next part -------------- An HTML attachment was scrubbed... URL: From martin at v.loewis.de Wed Aug 8 00:25:36 2012 From: martin at v.loewis.de (=?UTF-8?B?Ik1hcnRpbiB2LiBMw7Z3aXMi?=) Date: Wed, 08 Aug 2012 00:25:36 +0200 Subject: [Python-Dev] PEP 366 is unclear about what it specifies In-Reply-To: References: Message-ID: <502195E0.9060100@v.loewis.de> > Honestly, updating the PEPs constantly is a pain. Please understand that Stefan's request is not about updating the PEP in order to match the current implementation - I agree that this is a pain, and should not be done. Consequentially, relying on the PEPs to understand what CPython does is also flawed. 
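For concreteness, this is the kind of value the attribute is meant to hold, using a stdlib package purely as an illustration (a sketch of the intended semantics, not wording taken from the PEP):

    import email.mime.text

    # A submodule's __package__ is the fully qualified name of its parent package:
    assert email.mime.text.__package__ == 'email.mime'
    # A package counts as living "in" itself:
    assert email.mime.__package__ == 'email.mime'
    # In both cases it is a plain string, the same type as __name__:
    assert isinstance(email.mime.text.__package__, str)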
However, what Stefan requests is that the PEP be changed to say what really is the minimum requirement for acceptance - that the PEP actually says what the proposed change is. What it currently does say is the "change is the introduction of a new module level attribute, __package__". It nowhere says what value this attribute will have; setting it to math.PI would be conforming, but clearly not intended. The only occurrence of the word "string" is the sentence "Note that setting __package__ to the empty string explicitly is permitted". So we know at least one possible value of the attribute: the empty string - but that still clearly isn't the intention of the PEP. Of course, it's up to the PEP author to make a change, and perhaps (to some degree) to non-author committers (who all can take the role of a PEP editor, acknowledging that the proposed change is editorial). FWIW, the documentation (simple_stmts.rst) says that __package__ (if present) is "the name of package that contains the module or package". Regards, Martin From ncoghlan at gmail.com Wed Aug 8 01:00:12 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 8 Aug 2012 09:00:12 +1000 Subject: [Python-Dev] PEP 366 is unclear about what it specifies In-Reply-To: <502195E0.9060100@v.loewis.de> References: <502195E0.9060100@v.loewis.de> Message-ID: I'm pretty sure the PEP already limits it to the same type as__name__, but I'll check. We may have assumed that was obvious, so nobody noticed I had left it out at the time. -- Sent from my phone, thus the relative brevity :) -------------- next part -------------- An HTML attachment was scrubbed... URL: From raymond.hettinger at gmail.com Wed Aug 8 09:54:48 2012 From: raymond.hettinger at gmail.com (Raymond Hettinger) Date: Wed, 8 Aug 2012 00:54:48 -0700 Subject: [Python-Dev] What's New in Python 3.3 Message-ID: <4F5145DA-2810-4DEE-B662-9CB3B7247F27@gmail.com> Hello all, I'll soon be starting the edits of Whatsnew for 3.3. When I did this for 3.2, it took over 150 hours of work to research all the changes. This time there are many more changes, so my previous process won't work (reviewing every "new in 3.3" entry in the docs, every entry in the voluminous Misc/NEWS file, etc). You can help out by checking-in draft entries for your favorite new features. That way, I can avoid the time consuming curation step and focus on the text, organization, and examples. I appreciate your help, Raymond From stefan at bytereef.org Wed Aug 8 12:30:53 2012 From: stefan at bytereef.org (Stefan Krah) Date: Wed, 8 Aug 2012 12:30:53 +0200 Subject: [Python-Dev] SPARC testers (and buildbot!) needed Message-ID: <20120808103053.GA29719@sleipnir.bytereef.org> Hello, currently the only cheap way for developers to test on SPARC that I'm aware of is using this old Debian qemu image: http://people.debian.org/~aurel32/qemu/sparc/ That image still uses linuxthreads and may contain any number of platform bugs. It is currently impossible to run the test suite without bus errors, see: http://bugs.python.org/issue15589 Could someone with access to a SPARC machine (perhaps with a modern version of Debian-sparc) grab a clone from http://hg.python.org/cpython/ and run the test suite? Also, it would be really nice if someone could donate a SPARC buildbot. 
Stefan Krah From stefan at bytereef.org Wed Aug 8 12:47:00 2012 From: stefan at bytereef.org (Stefan Krah) Date: Wed, 8 Aug 2012 12:47:00 +0200 Subject: [Python-Dev] Understanding the buffer API In-Reply-To: References: <501C5FF8.10209@farowl.co.uk> <20120804091150.GA12337@sleipnir.bytereef.org> <501D283F.9050207@farowl.co.uk> <20120804152549.GA16358@sleipnir.bytereef.org> <20120804164140.GA16845@sleipnir.bytereef.org> Message-ID: <20120808104700.GA29854@sleipnir.bytereef.org> Nick Coghlan wrote: > It does place a constraint on consumers that they can't assume those > fields will be NULL just because they didn't ask for them, but I'm > struggling to think of any reason why a client would actually *check* > that instead of just assuming it. Can we continue this discussion some other time, perhaps after 3.3 is out? I'd like to respond, but need a bit more time to think about it than I have right now (for this issue). Stefan Krah From martin at v.loewis.de Wed Aug 8 15:25:12 2012 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Wed, 08 Aug 2012 15:25:12 +0200 Subject: [Python-Dev] SPARC testers (and buildbot!) needed In-Reply-To: <20120808103053.GA29719@sleipnir.bytereef.org> References: <20120808103053.GA29719@sleipnir.bytereef.org> Message-ID: <502268B8.2090903@v.loewis.de> > Could someone with access to a SPARC machine (perhaps with a modern version > of Debian-sparc) grab a clone from http://hg.python.org/cpython/ and run > the test suite? I'd invoke the "scratch your own itch" principle here. SPARC, these days, is a "minority platform"; I wouldn't mind deleting all SPARC support from Python in some upcoming release. In no way I feel obliged to take efforts that Python 3.3 works on SPARC (and remember that it was me who donated the first buildbot slave, and that was a SPARC machine - which I now had to take down, ten years later). Of course, when somebody has access to SPARC hardware, *and* they have some interest that Python 3.3 works on it, they should test it. But testing it as a favor to the community is IMO irrelevant now; that particular community is shrinking rapidly. What I personally really never cared about is SparcLinux; if sparc, then it ought to be Solaris. IOW: if it breaks, no big deal. Someone may or may not contribute a patch. Regards, Martin From fwierzbicki at gmail.com Wed Aug 8 17:19:52 2012 From: fwierzbicki at gmail.com (fwierzbicki at gmail.com) Date: Wed, 8 Aug 2012 08:19:52 -0700 Subject: [Python-Dev] What's New in Python 3.3 In-Reply-To: <4F5145DA-2810-4DEE-B662-9CB3B7247F27@gmail.com> References: <4F5145DA-2810-4DEE-B662-9CB3B7247F27@gmail.com> Message-ID: On Wed, Aug 8, 2012 at 12:54 AM, Raymond Hettinger wrote: > Hello all, > > I'll soon be starting the edits of Whatsnew for 3.3. > > When I did this for 3.2, it took over 150 hours of work to research all the changes. This time there are many more changes, so my previous process won't work (reviewing every "new in 3.3" entry in the docs, every entry in the voluminous Misc/NEWS file, etc). > Thanks for all of this work! And thanks to A.M. Kuchling and everyone else that goes through the effort to make the "What's new in Python" documents so great. They are my high level roadmap for re-implementing in Jython. It would be so much harder without them. -Frank From solipsis at pitrou.net Wed Aug 8 19:56:41 2012 From: solipsis at pitrou.net (Antoine Pitrou) Date: Wed, 08 Aug 2012 19:56:41 +0200 Subject: [Python-Dev] SPARC testers (and buildbot!) 
needed In-Reply-To: <502268B8.2090903@v.loewis.de> References: <20120808103053.GA29719@sleipnir.bytereef.org> <502268B8.2090903@v.loewis.de> Message-ID: Le 08/08/2012 15:25, "Martin v. L?wis" a ?crit : > > Of course, when somebody has access to SPARC hardware, *and* they > have some interest that Python 3.3 works on it, they should test it. > But testing it as a favor to the community is IMO irrelevant now; > that particular community is shrinking rapidly. > > What I personally really never cared about is SparcLinux; > if sparc, then it ought to be Solaris. What Martin said; SPARC under Linux is probably a hobbyist platform. Enterprise users of Solaris SPARC systems can still volunteer to provide and maintain a buildslave. Regards Antoine. From victor.stinner at gmail.com Thu Aug 9 01:04:46 2012 From: victor.stinner at gmail.com (Victor Stinner) Date: Thu, 9 Aug 2012 01:04:46 +0200 Subject: [Python-Dev] What's New in Python 3.3 In-Reply-To: <4F5145DA-2810-4DEE-B662-9CB3B7247F27@gmail.com> References: <4F5145DA-2810-4DEE-B662-9CB3B7247F27@gmail.com> Message-ID: Does Python 3.3 support cross-compilation? There are two new option for configure: --host and --build, but it's not mentioned in What's New in Python 3.3. Victor From flub at devork.be Thu Aug 9 01:26:26 2012 From: flub at devork.be (Floris Bruynooghe) Date: Thu, 9 Aug 2012 00:26:26 +0100 Subject: [Python-Dev] SPARC testers (and buildbot!) needed In-Reply-To: References: <20120808103053.GA29719@sleipnir.bytereef.org> <502268B8.2090903@v.loewis.de> Message-ID: On 8 August 2012 18:56, Antoine Pitrou wrote: > Le 08/08/2012 15:25, "Martin v. L?wis" a ?crit : > >> >> Of course, when somebody has access to SPARC hardware, *and* they >> have some interest that Python 3.3 works on it, they should test it. >> But testing it as a favor to the community is IMO irrelevant now; >> that particular community is shrinking rapidly. >> >> What I personally really never cared about is SparcLinux; >> if sparc, then it ought to be Solaris. > > > What Martin said; SPARC under Linux is probably a hobbyist platform. > Enterprise users of Solaris SPARC systems can still volunteer to provide and > maintain a buildslave. Is http://wiki.python.org/moin/BuildBot the relevant documentation? It still seems to refer to subversion, I presume that is no longer needed and just mercurial will do? I've set up a blank solaris 10 zone on a sparc T1000 with the OpenCSW toolchain (gcc 4.6.3) on our server and installed buildslave. According to the instructions this is the point where I ask for a slave name and password. Also, would it make sense to support OpenCSW more out of the box? Currently we carry some patches for setup.py in order to pick up e.g. sqlite from /opt/csw etc. Would there be an interest in supporting this? Regards, Floris From solipsis at pitrou.net Thu Aug 9 09:11:08 2012 From: solipsis at pitrou.net (Antoine Pitrou) Date: Thu, 09 Aug 2012 09:11:08 +0200 Subject: [Python-Dev] SPARC testers (and buildbot!) needed In-Reply-To: References: <20120808103053.GA29719@sleipnir.bytereef.org> <502268B8.2090903@v.loewis.de> Message-ID: Le 09/08/2012 01:26, Floris Bruynooghe a ?crit : >> >> What Martin said; SPARC under Linux is probably a hobbyist platform. >> Enterprise users of Solaris SPARC systems can still volunteer to provide and >> maintain a buildslave. > > > Is http://wiki.python.org/moin/BuildBot the relevant documentation? Yes, it is, but parts of it may be out of date. 
Please amend the instructions where necessary :-) > It still seems to refer to subversion, I presume that is no longer > needed and just mercurial will do? True. > I've set up a blank solaris 10 > zone on a sparc T1000 with the OpenCSW toolchain (gcc 4.6.3) on our > server and installed buildslave. According to the instructions this > is the point where I ask for a slave name and password. Ok, I'll send you one in a couple of days (away from Paris right now). > Also, would it make sense to support OpenCSW more out of the box? > Currently we carry some patches for setup.py in order to pick up e.g. > sqlite from /opt/csw etc. Would there be an interest in supporting > this? I don't know, what is OpenCSW? I think the answer also depends on the complexity of said patches. Regards Antoine. From martin at v.loewis.de Thu Aug 9 09:22:29 2012 From: martin at v.loewis.de (=?UTF-8?B?Ik1hcnRpbiB2LiBMw7Z3aXMi?=) Date: Thu, 09 Aug 2012 09:22:29 +0200 Subject: [Python-Dev] SPARC testers (and buildbot!) needed In-Reply-To: References: <20120808103053.GA29719@sleipnir.bytereef.org> <502268B8.2090903@v.loewis.de> Message-ID: <50236535.7000600@v.loewis.de> Am 09.08.12 01:26, schrieb Floris Bruynooghe: > According to the instructions this > is the point where I ask for a slave name and password. Sent in a private message. > Also, would it make sense to support OpenCSW more out of the box? > Currently we carry some patches for setup.py in order to pick up e.g. > sqlite from /opt/csw etc. Would there be an interest in supporting > this? If all that needs to be done is to add /opt/csw into search lists where a search list already exists, I see no problem doing so - except that this could be considered a new feature, so it might be only possible to do it for 3.4. If the patches are more involved, we would have to consider them on a case-by-case basis. Regards, Martin From eliben at gmail.com Thu Aug 9 10:48:50 2012 From: eliben at gmail.com (Eli Bendersky) Date: Thu, 9 Aug 2012 11:48:50 +0300 Subject: [Python-Dev] python 3.3 b2 In-Reply-To: References: <7E4C776A-2456-4FDC-A86F-E1F0851DEB25@gmail.com> Message-ID: > As I've explained on python-committers, it's currently on hold pending > the resolution of some importlib issues as well as a bug with the > cross-compiling code. I won't issue a concrete date, but I expect the > release to be made some time before next Sunday. > The release dates on python.org aren't up-to-date. Could you update them with the latest estimate(s). It doesn't have to be precise, but surely there will be no RC2 release 3 days from now. Eli -------------- next part -------------- An HTML attachment was scrubbed... URL: From flub at devork.be Thu Aug 9 16:02:09 2012 From: flub at devork.be (Floris Bruynooghe) Date: Thu, 9 Aug 2012 15:02:09 +0100 Subject: [Python-Dev] SPARC testers (and buildbot!) needed In-Reply-To: <50236535.7000600@v.loewis.de> References: <20120808103053.GA29719@sleipnir.bytereef.org> <502268B8.2090903@v.loewis.de> <50236535.7000600@v.loewis.de> Message-ID: On 9 August 2012 08:22, "Martin v. L?wis" wrote: > Am 09.08.12 01:26, schrieb Floris Bruynooghe: > >> According to the instructions this >> is the point where I ask for a slave name and password. > > > Sent in a private message. Thanks, it seems to be working fine. I triggered a build for 27 and 3.x. I'm assuming other builds will just be triggered automatically when needed from now on? >> Also, would it make sense to support OpenCSW more out of the box? 
>> Currently we carry some patches for setup.py in order to pick up e.g. >> sqlite from /opt/csw etc. Would there be an interest in supporting >> this? > > If all that needs to be done is to add /opt/csw into search lists > where a search list already exists, I see no problem doing so - except > that this could be considered a new feature, so it might be only > possible to do it for 3.4. It is for 2.x, setup.py seems to have changed substantially in 3.x and I haven't built that yet with OpenCSW but I presume I just need to find the right place there too. I'll open an issue for it instead of discussing it here. Regards, Floris From flub at devork.be Thu Aug 9 16:11:15 2012 From: flub at devork.be (Floris Bruynooghe) Date: Thu, 9 Aug 2012 15:11:15 +0100 Subject: [Python-Dev] SPARC testers (and buildbot!) needed In-Reply-To: References: <20120808103053.GA29719@sleipnir.bytereef.org> <502268B8.2090903@v.loewis.de> Message-ID: On 9 August 2012 08:11, Antoine Pitrou wrote: > Le 09/08/2012 01:26, Floris Bruynooghe a ?crit : >> Also, would it make sense to support OpenCSW more out of the box? >> Currently we carry some patches for setup.py in order to pick up e.g. >> sqlite from /opt/csw etc. Would there be an interest in supporting >> this? > > I don't know, what is OpenCSW? > I think the answer also depends on the complexity of said patches. OpenCSW is a community effort (CSW == Community SoftWare) to build a repository of GNU/Linux userland binaries for Solaris. It makes package management as simple as on GNU/Linux, e.g.: "pkgutil --install wget gcc4core libsqlite3_dev" which would otherwise be a very long and laborious exercise. It is very well maintained and I consider it a must for any Solaris box which isn't tightly locked down. As said in my other mail the patches are rather trivial but I will open an issue to discuss there. Regards, Floris From flub at devork.be Fri Aug 10 00:24:54 2012 From: flub at devork.be (Floris Bruynooghe) Date: Thu, 9 Aug 2012 23:24:54 +0100 Subject: [Python-Dev] SPARC testers (and buildbot!) needed In-Reply-To: <20120808103053.GA29719@sleipnir.bytereef.org> References: <20120808103053.GA29719@sleipnir.bytereef.org> Message-ID: Hi, On 8 August 2012 11:30, Stefan Krah wrote: > Could someone with access to a SPARC machine (perhaps with a modern version > of Debian-sparc) grab a clone from http://hg.python.org/cpython/ and run > the test suite? One more thing that might be interesting, the OpenCSW project provides access to their build farm to upstream maintainers. They say various/all versions of solaris are available and compilers etc are already setup, but I have never tried this out. In case someone is interested in this, see http://www.opencsw.org/extend-it/signup/to-upstream-maintainers/ Regards, Floris From eliben at gmail.com Fri Aug 10 05:11:40 2012 From: eliben at gmail.com (Eli Bendersky) Date: Fri, 10 Aug 2012 06:11:40 +0300 Subject: [Python-Dev] [Python-checkins] peps: 3.3 schedule update. In-Reply-To: <3WtJPK4zlDzPqQ@mail.python.org> References: <3WtJPK4zlDzPqQ@mail.python.org> Message-ID: Awesome, thanks! On Thu, Aug 9, 2012 at 9:47 PM, georg.brandl wrote: > http://hg.python.org/peps/rev/fdf8b99178c4 > changeset: 4497:fdf8b99178c4 > user: Georg Brandl > date: Thu Aug 09 20:48:06 2012 +0200 > summary: > 3.3 schedule update. 
> > files: > pep-0398.txt | 8 ++++---- > 1 files changed, 4 insertions(+), 4 deletions(-) > > > diff --git a/pep-0398.txt b/pep-0398.txt > --- a/pep-0398.txt > +++ b/pep-0398.txt > @@ -42,10 +42,10 @@ > > (No new features beyond this point.) > > -- 3.3.0 beta 2: July 21, 2012 > -- 3.3.0 candidate 1: August 4, 2012 > -- 3.3.0 candidate 2: August 18, 2012 > -- 3.3.0 final: September 1, 2012 > +- 3.3.0 beta 2: August 11, 2012 > +- 3.3.0 candidate 1: August 25, 2012 > +- 3.3.0 candidate 2: September 8, 2012 > +- 3.3.0 final: September 22, 2012 > > .. don't forget to update final date above as well > > > -- > Repository URL: http://hg.python.org/peps > > _______________________________________________ > Python-checkins mailing list > Python-checkins at python.org > http://mail.python.org/mailman/listinfo/python-checkins > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From martin at v.loewis.de Fri Aug 10 07:38:55 2012 From: martin at v.loewis.de (=?UTF-8?B?Ik1hcnRpbiB2LiBMw7Z3aXMi?=) Date: Fri, 10 Aug 2012 07:38:55 +0200 Subject: [Python-Dev] SPARC testers (and buildbot!) needed In-Reply-To: References: <20120808103053.GA29719@sleipnir.bytereef.org> <502268B8.2090903@v.loewis.de> <50236535.7000600@v.loewis.de> Message-ID: <50249E6F.10606@v.loewis.de> > Thanks, it seems to be working fine. I triggered a build for 27 and > 3.x. I'm assuming other builds will just be triggered automatically > when needed from now on? Indeed; you have probably seen it happening in the waterfall already. Thanks for providing that slave. Regards, Martin From g.brandl at gmx.net Fri Aug 10 07:47:41 2012 From: g.brandl at gmx.net (Georg Brandl) Date: Fri, 10 Aug 2012 07:47:41 +0200 Subject: [Python-Dev] python 3.3 b2 In-Reply-To: References: <7E4C776A-2456-4FDC-A86F-E1F0851DEB25@gmail.com> Message-ID: On 09.08.2012 10:48, Eli Bendersky wrote: > > As I've explained on python-committers, it's currently on hold pending > the resolution of some importlib issues as well as a bug with the > cross-compiling code. I won't issue a concrete date, but I expect the > release to be made some time before next Sunday. > > > The release dates on python.org aren't up-to-date. Could you > update them with the latest estimate(s). It doesn't have to be precise, but > surely there will be no RC2 release 3 days from now. Now they are up to date again. Georg From martin at v.loewis.de Fri Aug 10 07:48:04 2012 From: martin at v.loewis.de (=?UTF-8?B?Ik1hcnRpbiB2LiBMw7Z3aXMi?=) Date: Fri, 10 Aug 2012 07:48:04 +0200 Subject: [Python-Dev] SPARC testers (and buildbot!) needed In-Reply-To: References: <20120808103053.GA29719@sleipnir.bytereef.org> <502268B8.2090903@v.loewis.de> <50236535.7000600@v.loewis.de> Message-ID: <5024A094.7060705@v.loewis.de> >> Sent in a private message. > > Thanks, it seems to be working fine. Actually, there appears to be a glitch in the network setup: it appears that connections to localhost are not possible in your zone. The tests fail with an assertion self.assertEqual(cm.exception.errno, errno.ECONNREFUSED) AssertionError: 128 != 146 where 128 is ENETUNREACH. It would be good if localhost was reachable on a build slave. Also, if you haven't done so, please make sure that the build slave restarts when the zone or the machine is restarted. Don't worry that restarting will abort builds in progress - that happens from time to time on any slave. 
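One quick diagnostic that can be run inside the zone is to print what name resolution returns for localhost, since that is the list the failing connection code walks through (a rough sketch; the port number is arbitrary):

    import socket

    # Each tuple is (family, type, proto, canonname, sockaddr); an IPv6
    # entry on a host without IPv6 connectivity would explain the
    # ENETUNREACH result.
    for res in socket.getaddrinfo('localhost', 80, 0, socket.SOCK_STREAM):
        print(res)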
Regards, Martin From stefan at bytereef.org Fri Aug 10 14:05:29 2012 From: stefan at bytereef.org (Stefan Krah) Date: Fri, 10 Aug 2012 14:05:29 +0200 Subject: [Python-Dev] SPARC testers (and buildbot!) needed In-Reply-To: References: <20120808103053.GA29719@sleipnir.bytereef.org> Message-ID: <20120810120529.GA18915@sleipnir.bytereef.org> Floris Bruynooghe wrote: > One more thing that might be interesting, the OpenCSW project provides > access to their build farm to upstream maintainers. They say > various/all versions of solaris are available and compilers etc are > already setup, but I have never tried this out. In case someone is > interested in this, see > http://www.opencsw.org/extend-it/signup/to-upstream-maintainers/ Thanks for the link. Perhaps I'll try to get an account there. Stefan Krah From flub at devork.be Fri Aug 10 15:09:03 2012 From: flub at devork.be (Floris Bruynooghe) Date: Fri, 10 Aug 2012 14:09:03 +0100 Subject: [Python-Dev] SPARC testers (and buildbot!) needed In-Reply-To: <5024A094.7060705@v.loewis.de> References: <20120808103053.GA29719@sleipnir.bytereef.org> <502268B8.2090903@v.loewis.de> <50236535.7000600@v.loewis.de> <5024A094.7060705@v.loewis.de> Message-ID: On 10 August 2012 06:48, "Martin v. L?wis" wrote: > Actually, there appears to be a glitch in the network setup: it appears > that connections to localhost are not possible in your zone. The tests > fail with an assertion > > self.assertEqual(cm.exception.errno, errno.ECONNREFUSED) > AssertionError: 128 != 146 > > where 128 is ENETUNREACH. It would be good if localhost was reachable > on a build slave. The localhost network seems fine, which is shown by the test_socket test just before. I think the issue here is that socket.create_connection iterates over the result of socket.getaddrinfo('localhost', port, 0, SOCK_STREAM) which returns [(2, 2, 0, '', ('127.0.0.1', 0)), (26, 2, 0, '', ('::1', 0, 0, 0))] on this host. The first result is tried and returns ECONNREFUSED but then the second address is tried and this returns ENETUNREACH because this host has not IPv6 network configured. And create_connection() raises the last exception it received. If getaddrinfo() is called with the AI_ADDRCONFIG flag then it will only return the IPv4 version of localhost. I've created an issue to track this: http://bugs.python.org/issue15617 > Also, if you haven't done so, please make sure that the build slave > restarts when the zone or the machine is restarted. Don't worry that > restarting will abort builds in progress - that happens from time > to time on any slave. I'll check this, thanks for the reminder. Regards, Floris From status at bugs.python.org Fri Aug 10 18:06:53 2012 From: status at bugs.python.org (Python tracker) Date: Fri, 10 Aug 2012 18:06:53 +0200 (CEST) Subject: [Python-Dev] Summary of Python tracker Issues Message-ID: <20120810160653.511641C84D@psf.upfronthosting.co.za> ACTIVITY SUMMARY (2012-08-03 - 2012-08-10) Python tracker at http://bugs.python.org/ To view or respond to any of the issues listed below, click on the issue. Do NOT respond to this message. 
Issues counts and deltas: open 3588 (+24) closed 23808 (+44) total 27396 (+68) Open issues with patches: 1543 Issues opened (47) ================== #13072: Getting a buffer from a Unicode array uses invalid format http://bugs.python.org/issue13072 reopened by haypo #15527: Double parens in functions references http://bugs.python.org/issue15527 reopened by storchaka #15552: gettext: if looking for .mo in default locations, also look in http://bugs.python.org/issue15552 opened by Dominique.Leuenberger #15553: Segfault in test_6_daemon_threads() of test_threading, on Mac http://bugs.python.org/issue15553 opened by haypo #15555: Default newlines of io.TextIOWrapper http://bugs.python.org/issue15555 opened by ishimoto #15556: os.stat fails for file pending delete on Windows http://bugs.python.org/issue15556 opened by jkloth #15557: Tests for webbrowser module http://bugs.python.org/issue15557 opened by anton.barkovsky #15561: update subprocess docs to reference io.TextIOWrapper http://bugs.python.org/issue15561 opened by cjerdonek #15564: cgi.FieldStorage should not call read_multi on files http://bugs.python.org/issue15564 opened by patrick.vrijlandt #15566: tarfile.TarInfo.frombuf documentation is out of date http://bugs.python.org/issue15566 opened by sebastinas #15569: Doc doc: incorrect description of some roles as format-only http://bugs.python.org/issue15569 opened by cjerdonek #15570: email.header.decode_header parses differently http://bugs.python.org/issue15570 opened by ddvoinikov #15571: Python version of TextIOWrapper ignores "write_through" arg http://bugs.python.org/issue15571 opened by ncoghlan #15573: Support unknown formats in memoryview comparisons http://bugs.python.org/issue15573 opened by skrah #15574: IDLE crashes using clipboard copy command on OS X with ActiveT http://bugs.python.org/issue15574 opened by Leon.Maurer #15575: Tutorial is unclear on multiple imports of a module. 
http://bugs.python.org/issue15575 opened by roysmith #15576: importlib: ExtensionFileLoader not used to load packages from http://bugs.python.org/issue15576 opened by scoder #15577: Real argc and argv in embedded interpreter http://bugs.python.org/issue15577 opened by nordaux #15578: Crash when modifying sys.modules during import http://bugs.python.org/issue15578 opened by twouters #15581: curses: segfault in addstr() http://bugs.python.org/issue15581 opened by hut #15582: Enhance inspect.getdoc to follow inheritance chains http://bugs.python.org/issue15582 opened by ncoghlan #15586: Provide some examples for usage of ElementTree methods/attribu http://bugs.python.org/issue15586 opened by Sarbjit.singh #15588: quopri: encodestring and decodestring handle bytes, not string http://bugs.python.org/issue15588 opened by patrick.vrijlandt #15589: Bus error on Debian sparc http://bugs.python.org/issue15589 opened by skrah #15590: --libs is inconsistent for python-config --libs and pkgconfig http://bugs.python.org/issue15590 opened by doko #15591: when building the extensions, stdout is lost when stdout is re http://bugs.python.org/issue15591 opened by doko #15592: subprocess.communicate() breaks on no input with universal new http://bugs.python.org/issue15592 opened by cjerdonek #15593: urlparse.parse_qs documentation wrong re: urlencode http://bugs.python.org/issue15593 opened by Rob.Kinyon #15594: test_copyfile_named_pipe() fails on Mac OS X Snow Leopard: OSE http://bugs.python.org/issue15594 opened by haypo #15595: subprocess.Popen(universal_newlines=True) does not work for ce http://bugs.python.org/issue15595 opened by cjerdonek #15596: pickle: Faster serialization of Unicode strings http://bugs.python.org/issue15596 opened by haypo #15599: test_circular_imports() of test_threaded_import fails on FreeB http://bugs.python.org/issue15599 opened by haypo #15600: expose the finder details used by the FileFinder path hook http://bugs.python.org/issue15600 opened by eric.snow #15604: PyObject_IsTrue failure checks http://bugs.python.org/issue15604 opened by storchaka #15605: Explain sphinx documentation building in devguide http://bugs.python.org/issue15605 opened by Daniel.Ellis #15606: re.VERBOSE doesn't ignore certain whitespace http://bugs.python.org/issue15606 opened by stevencollins #15607: New print's argument "flush" is not mentioned in docstring http://bugs.python.org/issue15607 opened by storchaka #15608: Improve socketserver doc http://bugs.python.org/issue15608 opened by terry.reedy #15609: Format string: add more fast-path http://bugs.python.org/issue15609 opened by haypo #15610: PyImport_ImportModuleEx always fails in 3.3 with "ValueError: http://bugs.python.org/issue15610 opened by dmalcolm #15611: devguide: add "core mentors" area to Experts Index http://bugs.python.org/issue15611 opened by cjerdonek #15612: Rewrite StringIO to use the _PyUnicodeWriter API http://bugs.python.org/issue15612 opened by haypo #15613: argparse ArgumentDefaultsHelpFormatter interacts badly with -- http://bugs.python.org/issue15613 opened by aj #15615: More tests for JSON decoder to test Exceptions http://bugs.python.org/issue15615 opened by kushaldas #15616: logging.FileHandler not correctly created with PyYaml (unpickl http://bugs.python.org/issue15616 opened by jordipf #15617: FAIL: test_create_connection (test.test_socket.NetworkConnecti http://bugs.python.org/issue15617 opened by flub #15618: turtle.pencolor() chokes on unicode http://bugs.python.org/issue15618 opened by apalala Most recent 15 issues 
with no replies (15) ========================================== #15618: turtle.pencolor() chokes on unicode http://bugs.python.org/issue15618 #15615: More tests for JSON decoder to test Exceptions http://bugs.python.org/issue15615 #15613: argparse ArgumentDefaultsHelpFormatter interacts badly with -- http://bugs.python.org/issue15613 #15609: Format string: add more fast-path http://bugs.python.org/issue15609 #15606: re.VERBOSE doesn't ignore certain whitespace http://bugs.python.org/issue15606 #15593: urlparse.parse_qs documentation wrong re: urlencode http://bugs.python.org/issue15593 #15591: when building the extensions, stdout is lost when stdout is re http://bugs.python.org/issue15591 #15588: quopri: encodestring and decodestring handle bytes, not string http://bugs.python.org/issue15588 #15582: Enhance inspect.getdoc to follow inheritance chains http://bugs.python.org/issue15582 #15581: curses: segfault in addstr() http://bugs.python.org/issue15581 #15577: Real argc and argv in embedded interpreter http://bugs.python.org/issue15577 #15569: Doc doc: incorrect description of some roles as format-only http://bugs.python.org/issue15569 #15566: tarfile.TarInfo.frombuf documentation is out of date http://bugs.python.org/issue15566 #15564: cgi.FieldStorage should not call read_multi on files http://bugs.python.org/issue15564 #15557: Tests for webbrowser module http://bugs.python.org/issue15557 Most recent 15 issues waiting for review (15) ============================================= #15615: More tests for JSON decoder to test Exceptions http://bugs.python.org/issue15615 #15612: Rewrite StringIO to use the _PyUnicodeWriter API http://bugs.python.org/issue15612 #15610: PyImport_ImportModuleEx always fails in 3.3 with "ValueError: http://bugs.python.org/issue15610 #15609: Format string: add more fast-path http://bugs.python.org/issue15609 #15607: New print's argument "flush" is not mentioned in docstring http://bugs.python.org/issue15607 #15605: Explain sphinx documentation building in devguide http://bugs.python.org/issue15605 #15604: PyObject_IsTrue failure checks http://bugs.python.org/issue15604 #15600: expose the finder details used by the FileFinder path hook http://bugs.python.org/issue15600 #15596: pickle: Faster serialization of Unicode strings http://bugs.python.org/issue15596 #15595: subprocess.Popen(universal_newlines=True) does not work for ce http://bugs.python.org/issue15595 #15590: --libs is inconsistent for python-config --libs and pkgconfig http://bugs.python.org/issue15590 #15589: Bus error on Debian sparc http://bugs.python.org/issue15589 #15586: Provide some examples for usage of ElementTree methods/attribu http://bugs.python.org/issue15586 #15573: Support unknown formats in memoryview comparisons http://bugs.python.org/issue15573 #15571: Python version of TextIOWrapper ignores "write_through" arg http://bugs.python.org/issue15571 Top 10 most discussed issues (10) ================================= #15510: textwrap.wrap('') returns empty list http://bugs.python.org/issue15510 22 msgs #15589: Bus error on Debian sparc http://bugs.python.org/issue15589 19 msgs #13072: Getting a buffer from a Unicode array uses invalid format http://bugs.python.org/issue13072 17 msgs #14814: Implement PEP 3144 (the ipaddress module) http://bugs.python.org/issue14814 12 msgs #15586: Provide some examples for usage of ElementTree methods/attribu http://bugs.python.org/issue15586 12 msgs #15216: Support setting the encoding on a text stream after creation http://bugs.python.org/issue15216 10 msgs 
#15573: Support unknown formats in memoryview comparisons http://bugs.python.org/issue15573 10 msgs #15595: subprocess.Popen(universal_newlines=True) does not work for ce http://bugs.python.org/issue15595 10 msgs #15424: __sizeof__ of array should include size of items http://bugs.python.org/issue15424 8 msgs #15527: Double parens in functions references http://bugs.python.org/issue15527 8 msgs Issues closed (43) ================== #5819: Add PYTHONPREFIXES environment variable http://bugs.python.org/issue5819 closed by asvetlov #13052: IDLE: replace ending with '\' causes crash http://bugs.python.org/issue13052 closed by asvetlov #13119: Newline for print() is \n on Windows, and not \r\n as expected http://bugs.python.org/issue13119 closed by python-dev #14182: collections.Counter equality test thrown-off by zero counts http://bugs.python.org/issue14182 closed by rhettinger #14966: Fully document subprocess.CalledProcessError http://bugs.python.org/issue14966 closed by asvetlov #15077: Regexp match goes into infinite loop http://bugs.python.org/issue15077 closed by storchaka #15163: pydoc displays __loader__ as module data http://bugs.python.org/issue15163 closed by brett.cannon #15202: followlinks/follow_symlinks/symlinks flags unification http://bugs.python.org/issue15202 closed by storchaka #15365: Traceback reporting can fail if IO cannot be imported http://bugs.python.org/issue15365 closed by kristjan.jonsson #15471: importlib's __import__() argument style nit http://bugs.python.org/issue15471 closed by brett.cannon #15482: __import__() change between 3.2 and 3.3 http://bugs.python.org/issue15482 closed by brett.cannon #15492: textwrap.wrap expand_tabs does not behave as expected http://bugs.python.org/issue15492 closed by r.david.murray #15501: Document exception classes in subprocess module http://bugs.python.org/issue15501 closed by asvetlov #15530: Enhance Py_MIN and Py_MAX http://bugs.python.org/issue15530 closed by loewis #15531: os.path symlink docs missing http://bugs.python.org/issue15531 closed by larry #15536: re.split doesn't respect MULTILINE http://bugs.python.org/issue15536 closed by r.david.murray #15541: logging.exception doesn't accept 'extra' http://bugs.python.org/issue15541 closed by vinay.sajip #15546: Iteration breaks with bz2.open(filename,'rt') http://bugs.python.org/issue15546 closed by nadeem.vawda #15547: Why do we have os.truncate() and os.ftruncate() whereas os.tru http://bugs.python.org/issue15547 closed by larry #15550: Trailing white spaces http://bugs.python.org/issue15550 closed by ned.deily #15551: Unit tests that return generators silently fail http://bugs.python.org/issue15551 closed by michael.foord #15554: correct and clarify str.splitlines() documentation http://bugs.python.org/issue15554 closed by r.david.murray #15558: webbrowser output to console http://bugs.python.org/issue15558 closed by r.david.murray #15559: Bad interaction between ipaddress addresses and the bytes cons http://bugs.python.org/issue15559 closed by python-dev #15560: _sqlite3.so is built with wrong include file on OS X when usin http://bugs.python.org/issue15560 closed by ned.deily #15562: CaseFolding not working properly http://bugs.python.org/issue15562 closed by benjamin.peterson #15563: wrong conversion reported by http://bugs.python.org/issue15563 closed by loewis #15565: pdb displays runt Exception strings http://bugs.python.org/issue15565 closed by r.david.murray #15567: threading.py contains undefined name in self-test code http://bugs.python.org/issue15567 closed 
by brian.curtin #15568: yield from missed StopIteration raised from iterator instead o http://bugs.python.org/issue15568 closed by python-dev #15572: Python2 documentation of the file() built-in function http://bugs.python.org/issue15572 closed by python-dev #15579: some unicode keys not found using in dictionary.keys() http://bugs.python.org/issue15579 closed by r.david.murray #15580: fix True/False/None reST markup http://bugs.python.org/issue15580 closed by georg.brandl #15583: Provide examples in Python doc for usage of various modules http://bugs.python.org/issue15583 closed by r.david.murray #15584: os.popen deprecation warning not in Python 3 docs http://bugs.python.org/issue15584 closed by r.david.murray #15585: usage of os.popen in standard library http://bugs.python.org/issue15585 closed by r.david.murray #15587: IDLE is pixelated on the Macbook Pro with Retina Display http://bugs.python.org/issue15587 closed by ned.deily #15597: exception __suppress_context__ test failures on OS X ppc syste http://bugs.python.org/issue15597 closed by python-dev #15598: relative import unexpectedly binds name http://bugs.python.org/issue15598 closed by benjamin.peterson #15601: tkinter test_variables fails with OS X Aqua Tk 8.4 http://bugs.python.org/issue15601 closed by asvetlov #15602: zipfile: wrong encoding charset of member filename http://bugs.python.org/issue15602 closed by loewis #15603: Multiprocessing creates incorrect pids http://bugs.python.org/issue15603 closed by cfriedline #15614: print statement not showing valid result http://bugs.python.org/issue15614 closed by loewis From apalala at gmail.com Fri Aug 10 22:52:08 2012 From: apalala at gmail.com (=?ISO-8859-1?Q?=22Juancarlo_A=F1ez_=28Apalala=29=22?=) Date: Fri, 10 Aug 2012 16:22:08 -0430 Subject: [Python-Dev] Tests of of 2.7 tip on Ubuntu 12.04 amd64 Message-ID: Hello, Please let me know if this is normal: 1 test failed: test_readline 1 test altered the execution environment: test_subprocess 32 tests skipped: test_aepack test_al test_applesingle test_bsddb test_bsddb185 test_bsddb3 test_cd test_cl test_curses test_dl test_gl test_imageop test_imgfile test_kqueue test_linuxaudiodev test_macos test_macostools test_msilib test_ossaudiodev test_scriptpackages test_smtpnet test_socketserver test_startfile test_sunaudiodev test_timeout test_tk test_ttk_guionly test_urllib2net test_urllibnet test_winreg test_winsound test_zipfile64 2 skips unexpected on linux2: test_bsddb test_bsddb3 Thanks in advance, -- Juancarlo From brian at python.org Fri Aug 10 23:07:53 2012 From: brian at python.org (Brian Curtin) Date: Fri, 10 Aug 2012 16:07:53 -0500 Subject: [Python-Dev] Tests of of 2.7 tip on Ubuntu 12.04 amd64 In-Reply-To: References: Message-ID: On Fri, Aug 10, 2012 at 3:52 PM, "Juancarlo A?ez (Apalala)" wrote: > Hello, > > Please let me know if this is normal: > > 1 test failed: > test_readline > 1 test altered the execution environment: > test_subprocess > 32 tests skipped: > test_aepack test_al test_applesingle test_bsddb test_bsddb185 > test_bsddb3 test_cd test_cl test_curses test_dl test_gl > test_imageop test_imgfile test_kqueue test_linuxaudiodev > test_macos test_macostools test_msilib test_ossaudiodev > test_scriptpackages test_smtpnet test_socketserver test_startfile > test_sunaudiodev test_timeout test_tk test_ttk_guionly > test_urllib2net test_urllibnet test_winreg test_winsound > test_zipfile64 > 2 skips unexpected on linux2: > test_bsddb test_bsddb3 It's never normal to have tests failing. 
Perhaps try running test_readline directly and then report your findings on http://bugs.python.org As for the skips, those are fine. As for the unexpected skips, you probably need to install bsddb in your system for those tests to be executed. From solipsis at pitrou.net Fri Aug 10 23:14:38 2012 From: solipsis at pitrou.net (Antoine Pitrou) Date: Fri, 10 Aug 2012 23:14:38 +0200 Subject: [Python-Dev] Tests of of 2.7 tip on Ubuntu 12.04 amd64 References: Message-ID: <20120810231438.70807a3d@pitrou.net> On Fri, 10 Aug 2012 16:07:53 -0500 Brian Curtin wrote: > On Fri, Aug 10, 2012 at 3:52 PM, "Juancarlo A?ez (Apalala)" > wrote: > > Hello, > > > > Please let me know if this is normal: > > > > 1 test failed: > > test_readline > > 1 test altered the execution environment: > > test_subprocess > > 32 tests skipped: > > test_aepack test_al test_applesingle test_bsddb test_bsddb185 > > test_bsddb3 test_cd test_cl test_curses test_dl test_gl > > test_imageop test_imgfile test_kqueue test_linuxaudiodev > > test_macos test_macostools test_msilib test_ossaudiodev > > test_scriptpackages test_smtpnet test_socketserver test_startfile > > test_sunaudiodev test_timeout test_tk test_ttk_guionly > > test_urllib2net test_urllibnet test_winreg test_winsound > > test_zipfile64 > > 2 skips unexpected on linux2: > > test_bsddb test_bsddb3 > > It's never normal to have tests failing. Perhaps try running > test_readline directly and then report your findings on > http://bugs.python.org In any case, please also take a look at http://docs.python.org/devguide/runtests.html Regards Antoine. -- Software development and contracting: http://pro.pitrou.net From apalala at gmail.com Sat Aug 11 00:02:30 2012 From: apalala at gmail.com (=?UTF-8?Q?Juancarlo_A=C3=B1ez?=) Date: Fri, 10 Aug 2012 17:32:30 -0430 Subject: [Python-Dev] Tests of of 2.7 tip on Ubuntu 12.04 amd64 In-Reply-To: References: Message-ID: On Fri, Aug 10, 2012 at 4:37 PM, Brian Curtin wrote: > It's never normal to have tests failing. Perhaps try running > test_readline directly and then report your findings on > http://bugs.python.org > The test script depends on a readline function not available in readline 5 or 6. Reported. > As for the skips, those are fine. As for the unexpected skips, you > probably need to install bsddb in your system for those tests to be > executed. > The bsddb development library was missing when I issued 'make'. As to the others: Is the bsddb185 test still relevant in 2.7? Why is the imageops test disabled for amd64? Thanks! -- Juancarlo *A?ez* -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ericsnowcurrently at gmail.com Sat Aug 11 00:15:27 2012 From: ericsnowcurrently at gmail.com (Eric Snow) Date: Fri, 10 Aug 2012 16:15:27 -0600 Subject: [Python-Dev] [Python-checkins] cpython: update docstring per the extension package fix, refactor In-Reply-To: <3WtyKr20cfzPHX@mail.python.org> References: <3WtyKr20cfzPHX@mail.python.org> Message-ID: On Fri, Aug 10, 2012 at 2:17 PM, philip.jenvey wrote: > http://hg.python.org/cpython/rev/e024f6ba5ed8 > changeset: 78487:e024f6ba5ed8 > user: Philip Jenvey > date: Fri Aug 10 11:53:54 2012 -0700 > summary: > update docstring per the extension package fix, refactor > > files: > Lib/importlib/_bootstrap.py | 9 +- > Python/importlib.h | 3353 +++++++++++----------- > 2 files changed, 1685 insertions(+), 1677 deletions(-) > > > diff --git a/Lib/importlib/_bootstrap.py b/Lib/importlib/_bootstrap.py > --- a/Lib/importlib/_bootstrap.py > +++ b/Lib/importlib/_bootstrap.py > @@ -1102,13 +1102,10 @@ > raise > > def is_package(self, fullname): > - """Return False as an extension module can never be a package.""" > + """Return if the extension module is a package.""" s/Return if/Return True if/ -eric From chris.jerdonek at gmail.com Sat Aug 11 01:56:43 2012 From: chris.jerdonek at gmail.com (Chris Jerdonek) Date: Fri, 10 Aug 2012 16:56:43 -0700 Subject: [Python-Dev] [Python-checkins] cpython: Issue #15502: Finish bringing importlib.abc in line with the current In-Reply-To: <3Wts5v4zWZzPvR@mail.python.org> References: <3Wts5v4zWZzPvR@mail.python.org> Message-ID: On Fri, Aug 10, 2012 at 9:21 AM, brett.cannon wrote: > http://hg.python.org/cpython/rev/0a75ce232f56 > changeset: 78485:0a75ce232f56 > user: Brett Cannon > date: Fri Aug 10 12:21:12 2012 -0400 > summary: > Issue #15502: Finish bringing importlib.abc in line with the current > + cache used by the finder. Used by :func:`invalidate_caches()` when Minor style nit: the Dev Guide says not to include the trailing parentheses in :func: text: "func: The name of a Python function; dotted names may be used. The role text should not include trailing parentheses to enhance readability..." (from http://hg.python.org/devguide/file/f518f23d06d5/documenting.rst#l888 ) (though I don't know why the Dev Guide says the opposite for :c:func: and is silent on :meth:.) On a related note: this may not be common knowledge, but it is possible to hyperlink to a function definition while still showing different text (in particular passed arguments) by using the following Sphinx syntax: :func:`locale.getpreferredencoding(False) ` This is in the Dev Guide, but I haven't seen the construct used in many places where it seems like it would be helpful and appropriate. --Chris From rdmurray at bitdance.com Sat Aug 11 16:49:41 2012 From: rdmurray at bitdance.com (R. David Murray) Date: Sat, 11 Aug 2012 10:49:41 -0400 Subject: [Python-Dev] [Python-checkins] cpython: Issue #15502: Finish bringing importlib.abc in line with the current In-Reply-To: References: <3Wts5v4zWZzPvR@mail.python.org> Message-ID: <20120811144942.5AD662500FA@webabinitio.net> On Fri, 10 Aug 2012 16:56:43 -0700, Chris Jerdonek wrote: > On Fri, Aug 10, 2012 at 9:21 AM, brett.cannon > wrote: > > http://hg.python.org/cpython/rev/0a75ce232f56 > > changeset: 78485:0a75ce232f56 > > user: Brett Cannon > > date: Fri Aug 10 12:21:12 2012 -0400 > > summary: > > Issue #15502: Finish bringing importlib.abc in line with the current > > > + cache used by the finder. 
Used by :func:`invalidate_caches()` when > > Minor style nit: the Dev Guide says not to include the trailing > parentheses in :func: text: > > "func: The name of a Python function; dotted names may be used. The > role text should not include trailing parentheses to enhance > readability..." > > (from http://hg.python.org/devguide/file/f518f23d06d5/documenting.rst#l888 ) > > (though I don't know why the Dev Guide says the opposite for :c:func: > and is silent on :meth:.) To clarify: :func: automatically adds the ()s, so if you put them in the source you get double: invalidate_caches()(). As Chris said, use the 'alternate text' form if you want to show a call with arguments. --David From g.brandl at gmx.net Sat Aug 11 18:40:35 2012 From: g.brandl at gmx.net (Georg Brandl) Date: Sat, 11 Aug 2012 18:40:35 +0200 Subject: [Python-Dev] [Python-checkins] cpython: Issue #15502: Finish bringing importlib.abc in line with the current In-Reply-To: <20120811144942.5AD662500FA@webabinitio.net> References: <3Wts5v4zWZzPvR@mail.python.org> <20120811144942.5AD662500FA@webabinitio.net> Message-ID: On 08/11/2012 04:49 PM, R. David Murray wrote: > On Fri, 10 Aug 2012 16:56:43 -0700, Chris Jerdonek wrote: >> On Fri, Aug 10, 2012 at 9:21 AM, brett.cannon >> wrote: >> > http://hg.python.org/cpython/rev/0a75ce232f56 >> > changeset: 78485:0a75ce232f56 >> > user: Brett Cannon >> > date: Fri Aug 10 12:21:12 2012 -0400 >> > summary: >> > Issue #15502: Finish bringing importlib.abc in line with the current >> >> > + cache used by the finder. Used by :func:`invalidate_caches()` when >> >> Minor style nit: the Dev Guide says not to include the trailing >> parentheses in :func: text: >> >> "func: The name of a Python function; dotted names may be used. The >> role text should not include trailing parentheses to enhance >> readability..." >> >> (from http://hg.python.org/devguide/file/f518f23d06d5/documenting.rst#l888 ) >> >> (though I don't know why the Dev Guide says the opposite for :c:func: >> and is silent on :meth:.) > > To clarify: :func: automatically adds the ()s, so if you put them in > the source you get double: invalidate_caches()(). That is not true: they are stripped if present before they are added again. What will give double parens is if you don't leave them empty, such as :func:`invalidate_caches(foo)`. This is because the reference markup is not meant for code snippets. Georg From chris.jerdonek at gmail.com Sat Aug 11 19:05:08 2012 From: chris.jerdonek at gmail.com (Chris Jerdonek) Date: Sat, 11 Aug 2012 10:05:08 -0700 Subject: [Python-Dev] [Python-checkins] cpython: Issue #15502: Finish bringing importlib.abc in line with the current In-Reply-To: References: <3Wts5v4zWZzPvR@mail.python.org> <20120811144942.5AD662500FA@webabinitio.net> Message-ID: On Sat, Aug 11, 2012 at 9:40 AM, Georg Brandl wrote: > On 08/11/2012 04:49 PM, R. David Murray wrote: >> On Fri, 10 Aug 2012 16:56:43 -0700, Chris Jerdonek wrote: >>> On Fri, Aug 10, 2012 at 9:21 AM, brett.cannon >>> wrote: >>> > http://hg.python.org/cpython/rev/0a75ce232f56 >>> > changeset: 78485:0a75ce232f56 >>> > user: Brett Cannon >>> > date: Fri Aug 10 12:21:12 2012 -0400 >>> > summary: >>> > Issue #15502: Finish bringing importlib.abc in line with the current >>> >>> > + cache used by the finder. Used by :func:`invalidate_caches()` when >>> >>> Minor style nit: the Dev Guide says not to include the trailing >>> parentheses in :func: text: >>> >>> "func: The name of a Python function; dotted names may be used. 
The >>> role text should not include trailing parentheses to enhance >>> readability..." >>> >>> (from http://hg.python.org/devguide/file/f518f23d06d5/documenting.rst#l888 ) >>> >>> (though I don't know why the Dev Guide says the opposite for :c:func: >>> and is silent on :meth:.) >> >> To clarify: :func: automatically adds the ()s, so if you put them in >> the source you get double: invalidate_caches()(). > > That is not true: they are stripped if present before they are added again. Yes, and the Dev Guide in part says this afterward. Its statement not to include trailing parentheses to enhance readability was, I assume, for readability of the .rst files rather than readability of the output. --Chris > What will give double parens is if you don't leave them empty, such as > :func:`invalidate_caches(foo)`. This is because the reference markup is not > meant for code snippets. > > Georg > > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/chris.jerdonek%40gmail.com
From victor.stinner at gmail.com Sat Aug 11 20:30:22 2012 From: victor.stinner at gmail.com (Victor Stinner) Date: Sat, 11 Aug 2012 20:30:22 +0200 Subject: [Python-Dev] AST optimizer implemented in Python Message-ID: Hi, I started to implement an AST optimizer in Python. It's easy to create a new AST tree, so I'm surprised that I didn't find any existing project. https://bitbucket.org/haypo/misc/src/tip/python/ast_optimizer.py To test its peephole optimizations (by checking manually its final bytecode), I wrote a patch for Python to disable Python internal peephole optimizer (on bytecode): https://bitbucket.org/haypo/misc/src/tip/python/compile_disable_peephole.patch -- There is BytecodeAssembler [1], but it seems to be specialized on bytecode. There are (at least?) 3 different issues to implement an AST optimizer, but in C, not in Python: http://bugs.python.org/issue1346238 http://bugs.python.org/issue10399 http://bugs.python.org/issue11549 -- My proof-of-concept only implements very basic optimizations like 1+1 => 2 or "abcdef"[:3] => "abc", but it should be easy to extend it to do more interesting optimizations like function inlining. To allow more optimization, the optimizer permits declaring variables as constants. For example, sys.hexversion is replaced by its value by the optimizer, and checks like "sys.hexversion > 0x3000000" are done at compile time. This feature can be used to completely drop debug code at compilation, without the need for a preprocessor. For example, calls to logging.debug() can simply be dropped if the log level is hardcoded to ERROR (40). Other ideas to improve this optimizer: - move invariants out of loops. Example: "x=[]; for i in range(10): x.append(i)" => "x=[]; x_append=x.append; for i in range(10): x_append(i)". This requires inferring the types of variables. - unroll loops - inline small functions - etc. To be able to use such an optimizer on a whole Python project, a new function (ex: sys.setastoptimizer()) can be added to CPython to call the optimizer between the compilation to AST and the compilation to bytecode. So the optimizer would be optional and it avoids bootstrap issues. [1] http://pypi.python.org/pypi/BytecodeAssembler Victor
From greg at krypto.org Sat Aug 11 20:47:13 2012 From: greg at krypto.org (Gregory P.
Smith) Date: Sat, 11 Aug 2012 11:47:13 -0700 Subject: [Python-Dev] AST optimizer implemented in Python In-Reply-To: References: Message-ID: On Sat, Aug 11, 2012 at 11:30 AM, Victor Stinner wrote: > Hi, > > I started to implement an AST optimizer in Python. It's easy to create > a new AST tree, so I'm surprised that I didn't find any existing > project. > > https://bitbucket.org/haypo/misc/src/tip/python/ast_optimizer.py Neat! > > > To test its peephole optimizations (by checking manually its final > bytecode), I wrote a patch for Python to disable Python internal > peephole optimizer (on bytecode): > > https://bitbucket.org/haypo/misc/src/tip/python/compile_disable_peephole.patch > > -- > > There is BytecodeAssembler [1], but it seems to be specialized on > bytecode. There are (at least?) 3 different issues to implement an AST > optimizer, but in C, not in Python: > > http://bugs.python.org/issue1346238 > http://bugs.python.org/issue10399 > http://bugs.python.org/issue11549 > > -- > > My proof-of-concept only implements very basic optimizations like 1+1 > => 2 or "abcdef"[:3] => "abc", but it should easy to extend it to do > more interesting optimization like function inlining. > > To allow more optimization, the optimizer permits to declare variables > as constant. For example, sys.hexversion is replaced by its value by > the optimizer, and checks like "sys.hexversion > 0x3000000" are done > at compile time. This feature can be used to drop completly debug code > at compilation, without the need of a preprocessor. For example, calls > to logging.debug() can simply be dropped if the log level is hardcoded > to ERROR (40). > > Other idea to improve this optimizer: > - move invariant out of loops. Example: "x=[]; for i in range(10): > x.append(i)" => "x=[]; x_append=x.append; for i in range(10): > x_append(i)". Require to infer the type of variables. > - unroll loops > - inline small functions > - etc. > > To be able to use such optimizer on a whole Python project, a new > function (ex: sys.setastoptimizer()) can be added to CPython to call > the optimizer between the compilation to AST and the compilation to > bytecode. So the optimizer would be optionnal and it avoids bootstrap > issues. > +1 We should add some form of setastoptimizer API in 3.4. Please start a PEP for this. It would be nice to include the ability to properly cache the ast optimizer output so that it does not have to run every time (in pyc files or similar, etc) but can verify that it is the specific ast optimizer that ran on a give existing cached copy. -gps -------------- next part -------------- An HTML attachment was scrubbed... URL: From bitsink at gmail.com Sat Aug 11 23:14:22 2012 From: bitsink at gmail.com (Nam Nguyen) Date: Sat, 11 Aug 2012 14:14:22 -0700 Subject: [Python-Dev] AST optimizer implemented in Python In-Reply-To: References: Message-ID: On Sat, Aug 11, 2012 at 11:47 AM, Gregory P. Smith wrote: > > On Sat, Aug 11, 2012 at 11:30 AM, Victor Stinner > wrote: >> >> Hi, >> >> I started to implement an AST optimizer in Python. It's easy to create >> a new AST tree, so I'm surprised that I didn't find any existing >> project. >> >> https://bitbucket.org/haypo/misc/src/tip/python/ast_optimizer.py > > > Neat! > > +1 We should add some form of setastoptimizer API in 3.4. Please start a > PEP for this. 
It would be nice to include the ability to properly cache the > ast optimizer output so that it does not have to run every time (in pyc > files or similar, etc) but can verify that it is the specific ast optimizer > that ran on a give existing cached copy. Once .pyc is created, I do not think that we keep the AST around any longer. Otherwise, I wouldn't have to write PyXfuscator. https://bitbucket.org/namn/pyxfuscator Or perhaps I am misunderstanding you. Nam From martin at v.loewis.de Sat Aug 11 23:27:49 2012 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Sat, 11 Aug 2012 23:27:49 +0200 Subject: [Python-Dev] AST optimizer implemented in Python In-Reply-To: References: Message-ID: <5026CE55.5030108@v.loewis.de> >> +1 We should add some form of setastoptimizer API in 3.4. Please start a >> PEP for this. It would be nice to include the ability to properly cache the >> ast optimizer output so that it does not have to run every time (in pyc >> files or similar, etc) but can verify that it is the specific ast optimizer >> that ran on a give existing cached copy. > > Once .pyc is created, I do not think that we keep the AST around any > longer. Otherwise, I wouldn't have to write PyXfuscator. > > https://bitbucket.org/namn/pyxfuscator > > Or perhaps I am misunderstanding you. I think you misunderstood. What gps is concerned about (IIUC) that some people add ast optimizers in some run of Python, but other AST optimizers in a different run. Then, if you use a Python byte code file, you should be able to find out what AST optimizers have been run to create the pyc file, so you know whether you have to recompile or not. Of course, if that is really a desirable feature, you may want multiple pyc files, per combination of AST optimizers. The __pycache__ directory would readily support that. This seems to get complicated quickly, so a PEP is indeed desirable. Regards, Martin From ned at nedbatchelder.com Sun Aug 12 00:29:53 2012 From: ned at nedbatchelder.com (Ned Batchelder) Date: Sat, 11 Aug 2012 18:29:53 -0400 Subject: [Python-Dev] AST optimizer implemented in Python In-Reply-To: References: Message-ID: <5026DCE1.8050901@nedbatchelder.com> On 8/11/2012 2:30 PM, Victor Stinner wrote: > Hi, > > I started to implement an AST optimizer in Python. It's easy to create > a new AST tree, so I'm surprised that I didn't find any existing > project. > > https://bitbucket.org/haypo/misc/src/tip/python/ast_optimizer.py > > To test its peephole optimizations (by checking manually its final > bytecode), I wrote a patch for Python to disable Python internal > peephole optimizer (on bytecode): > https://bitbucket.org/haypo/misc/src/tip/python/compile_disable_peephole.patch > > I would very much like to see the ability to disable all optimizers. As work continues on the various forms of optimization, please remember that sometimes programs are executed to reason about them, for example, when measuring test coverage, or when debugging. Just as gcc has a -O0 option to disable all code optimizations so that you can more easily debug programs, it would be fabulous for CPython to have an option to disable all code optimizations. This would include the existing peephole optimizer as well as any new optimizers such as Victor's AST optimizer. Full disclosure: I previously raised this possibility in ticket, and it was not as popular as I had hoped: http://bugs.python.org/issue2506 --Ned. 
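The kind of rewrite Victor describes above (folding 1+1 into 2 before the bytecode compiler ever sees it) needs surprisingly little code on top of the stdlib ast module. The sketch below is only an illustration of the technique, not Victor's astoptimizer; it targets the 3.3-era AST (ast.Num literals) and handles just a few operators:

    import ast

    class ConstantFolder(ast.NodeTransformer):
        # Fold binary operations whose operands are numeric literals,
        # e.g. 1 + 1 -> 2.  Only a handful of operators are handled.
        _ops = {ast.Add: lambda a, b: a + b,
                ast.Sub: lambda a, b: a - b,
                ast.Mult: lambda a, b: a * b}

        def visit_BinOp(self, node):
            self.generic_visit(node)              # fold the children first
            fold = self._ops.get(type(node.op))
            if (fold is not None
                    and isinstance(node.left, ast.Num)
                    and isinstance(node.right, ast.Num)):
                folded = ast.Num(n=fold(node.left.n, node.right.n))
                return ast.copy_location(folded, node)
            return node

    tree = ast.parse("x = 1 + 2 * 3")
    tree = ConstantFolder().visit(tree)
    ast.fix_missing_locations(tree)
    code = compile(tree, "<folded>", "exec")      # bytecode for "x = 7"

Checking what such a pass really did is exactly why Victor's patch to disable the bytecode peephole optimizer is handy: without it, CPython's own folding hides whether the AST-level rewrite happened at all.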
From ericsnowcurrently at gmail.com Sun Aug 12 01:02:43 2012 From: ericsnowcurrently at gmail.com (Eric Snow) Date: Sat, 11 Aug 2012 17:02:43 -0600 Subject: [Python-Dev] AST optimizer implemented in Python In-Reply-To: <5026CE55.5030108@v.loewis.de> References: <5026CE55.5030108@v.loewis.de> Message-ID: On Sat, Aug 11, 2012 at 3:27 PM, "Martin v. L?wis" wrote: > I think you misunderstood. What gps is concerned about (IIUC) that some > people add ast optimizers in some run of Python, but other AST optimizers in > a different run. Then, if you use a Python byte code > file, you should be able to find out what AST optimizers have been > run to create the pyc file, so you know whether you have to recompile > or not. > > Of course, if that is really a desirable feature, you may want multiple > pyc files, per combination of AST optimizers. The __pycache__ directory > would readily support that. Perhaps it's a bit of an abuse, but modifying sys.implementation.cache_tag should result in separate .pyc file in __pycache__. Using the same cache tag later would mean that .pyc file gets loaded during import. -eric From rosuav at gmail.com Sun Aug 12 01:22:17 2012 From: rosuav at gmail.com (Chris Angelico) Date: Sun, 12 Aug 2012 09:22:17 +1000 Subject: [Python-Dev] AST optimizer implemented in Python In-Reply-To: References: Message-ID: On Sun, Aug 12, 2012 at 4:30 AM, Victor Stinner wrote: > I started to implement an AST optimizer in Python. It's easy to create > a new AST tree, so I'm surprised that I didn't find any existing > project. Very nice idea! > Other idea to improve this optimizer: > - move invariant out of loops. Example: "x=[]; for i in range(10): > x.append(i)" => "x=[]; x_append=x.append; for i in range(10): > x_append(i)". Require to infer the type of variables. But this is risky. It's theoretically possible for x.append to replace itself. Sure it may not be a normal or common thing to do, but it's possible. And yet it could be pretty advantageous for most programs. Perhaps the best way is to hide potentially-risky optimizations behind command-line options? The default mode could be to do every change that's guaranteed not to affect execution, and everything else is an extra (like French, music, and washing). ChrisA From brett at python.org Sun Aug 12 02:03:32 2012 From: brett at python.org (Brett Cannon) Date: Sat, 11 Aug 2012 20:03:32 -0400 Subject: [Python-Dev] AST optimizer implemented in Python In-Reply-To: References: Message-ID: On Sat, Aug 11, 2012 at 2:30 PM, Victor Stinner wrote: > Hi, > > I started to implement an AST optimizer in Python. It's easy to create > a new AST tree, so I'm surprised that I didn't find any existing > project. > > https://bitbucket.org/haypo/misc/src/tip/python/ast_optimizer.py > > To test its peephole optimizations (by checking manually its final > bytecode), I wrote a patch for Python to disable Python internal > peephole optimizer (on bytecode): > > https://bitbucket.org/haypo/misc/src/tip/python/compile_disable_peephole.patch > > -- > > There is BytecodeAssembler [1], but it seems to be specialized on > bytecode. There are (at least?) 3 different issues to implement an AST > optimizer, but in C, not in Python: > > http://bugs.python.org/issue1346238 > http://bugs.python.org/issue10399 > http://bugs.python.org/issue11549 > > -- > > My proof-of-concept only implements very basic optimizations like 1+1 > => 2 or "abcdef"[:3] => "abc", but it should easy to extend it to do > more interesting optimization like function inlining. 
> > To allow more optimization, the optimizer permits to declare variables > as constant. For example, sys.hexversion is replaced by its value by > the optimizer, and checks like "sys.hexversion > 0x3000000" are done > at compile time. This feature can be used to drop completly debug code > at compilation, without the need of a preprocessor. For example, calls > to logging.debug() can simply be dropped if the log level is hardcoded > to ERROR (40). > > Other idea to improve this optimizer: > - move invariant out of loops. Example: "x=[]; for i in range(10): > x.append(i)" => "x=[]; x_append=x.append; for i in range(10): > x_append(i)". Require to infer the type of variables. > - unroll loops > - inline small functions > - etc. > > To be able to use such optimizer on a whole Python project, a new > function (ex: sys.setastoptimizer()) can be added to CPython to call > the optimizer between the compilation to AST and the compilation to > bytecode. So the optimizer would be optionnal and it avoids bootstrap > issues. > It would also be very easy to expand importlib.abc.SourceLoader to add a method which is called with source and returns the bytecode to be written out which people could override with AST optimizations before sending the bytecode back. That way we don't have to get into the whole business of AST transformations if we don't want to (although, as Victor pointed out, there are some people who do want this formally supported). -------------- next part -------------- An HTML attachment was scrubbed... URL: From ericsnowcurrently at gmail.com Sun Aug 12 02:16:12 2012 From: ericsnowcurrently at gmail.com (Eric Snow) Date: Sat, 11 Aug 2012 18:16:12 -0600 Subject: [Python-Dev] AST optimizer implemented in Python In-Reply-To: References: Message-ID: On Sat, Aug 11, 2012 at 6:03 PM, Brett Cannon wrote: > It would also be very easy to expand importlib.abc.SourceLoader to add a > method which is called with source and returns the bytecode to be written > out Yes, please. Not having to hack around this would be nice. > which people could override with AST optimizations before sending the > bytecode back. That way we don't have to get into the whole business of AST > transformations if we don't want to (although, as Victor pointed out, there > are some people who do want this formally supported). Also cool. -eric From brett at python.org Sun Aug 12 02:45:13 2012 From: brett at python.org (Brett Cannon) Date: Sat, 11 Aug 2012 20:45:13 -0400 Subject: [Python-Dev] AST optimizer implemented in Python In-Reply-To: References: Message-ID: On Sat, Aug 11, 2012 at 8:16 PM, Eric Snow wrote: > On Sat, Aug 11, 2012 at 6:03 PM, Brett Cannon wrote: > > It would also be very easy to expand importlib.abc.SourceLoader to add a > > method which is called with source and returns the bytecode to be written > > out > > Yes, please. Not having to hack around this would be nice. > http://bugs.python.org/issue15627 -------------- next part -------------- An HTML attachment was scrubbed... URL: From victor.stinner at gmail.com Sun Aug 12 03:17:54 2012 From: victor.stinner at gmail.com (Victor Stinner) Date: Sun, 12 Aug 2012 03:17:54 +0200 Subject: [Python-Dev] AST optimizer implemented in Python In-Reply-To: References: Message-ID: >> Other idea to improve this optimizer: >> - move invariant out of loops. Example: "x=[]; for i in range(10): >> x.append(i)" => "x=[]; x_append=x.append; for i in range(10): >> x_append(i)". Require to infer the type of variables. > > But this is risky. 
It's theoretically possible for x.append to replace > itself. For this specific example, x.append cannot be modified: it raises AttributeError('list' object attribute 'append' is read-only). The idea would be to allow the developer to specify explicitly what he wants to optimize. I'm using a configuration class with a list of what can be optimized (ex: len(int)), but it can be changed to something different later. It must be configurable to be able to specify: "this specific variable is constant in my project".. > Perhaps the best way is to hide potentially-risky optimizations behind > command-line options? The default mode could be to do every change > that's guaranteed not to affect execution, and everything else is an > extra (like French, music, and washing). I don't care of integration into Python yet (but it would be nice to prepare such feature in Python 3.4). It can be a third party module, something like: import ast_optimizer ast_optimizer.hack_import_machinery() ast_optimizer.constants.add('application.DEBUG') (added on the top of your main script) Victor From rosuav at gmail.com Sun Aug 12 03:36:01 2012 From: rosuav at gmail.com (Chris Angelico) Date: Sun, 12 Aug 2012 11:36:01 +1000 Subject: [Python-Dev] AST optimizer implemented in Python In-Reply-To: References: Message-ID: On Sun, Aug 12, 2012 at 11:17 AM, Victor Stinner wrote: > The idea would be to allow the developer to specify explicitly what he > wants to optimize. I'm using a configuration class with a list of what > can be optimized (ex: len(int)), but it can be changed to something > different later. > > It must be configurable to be able to specify: "this specific variable > is constant in my project".. That sounds like a plan. It's possibly even worth making blanket statements like "in this module, once a name is bound to an object, it will never be rebound to an object of different type". ChrisA From stefan_ml at behnel.de Sun Aug 12 06:42:25 2012 From: stefan_ml at behnel.de (Stefan Behnel) Date: Sun, 12 Aug 2012 06:42:25 +0200 Subject: [Python-Dev] AST optimizer implemented in Python In-Reply-To: References: Message-ID: Chris Angelico, 12.08.2012 01:22: >> Other idea to improve this optimizer: >> - move invariant out of loops. Example: "x=[]; for i in range(10): >> x.append(i)" => "x=[]; x_append=x.append; for i in range(10): >> x_append(i)". Require to infer the type of variables. > > But this is risky. It's theoretically possible for x.append to replace > itself. Sure it may not be a normal or common thing to do, but it's > possible. Not only that. It changes semantics. If x.append is not defined, the exception would now be raised outside of the loop, and the loop itself may have side-effects already. In fact, the mere lookup of x.append may have side effects as well ... Stefan From stefan_ml at behnel.de Sun Aug 12 06:50:20 2012 From: stefan_ml at behnel.de (Stefan Behnel) Date: Sun, 12 Aug 2012 06:50:20 +0200 Subject: [Python-Dev] AST optimizer implemented in Python In-Reply-To: References: Message-ID: Stefan Behnel, 12.08.2012 06:42: > Chris Angelico, 12.08.2012 01:22: >>> Other idea to improve this optimizer: >>> - move invariant out of loops. Example: "x=[]; for i in range(10): >>> x.append(i)" => "x=[]; x_append=x.append; for i in range(10): >>> x_append(i)". Require to infer the type of variables. >> >> But this is risky. It's theoretically possible for x.append to replace >> itself. Sure it may not be a normal or common thing to do, but it's >> possible. > > Not only that. It changes semantics. 
If x.append is not defined, the > exception would now be raised outside of the loop, and the loop itself may > have side-effects already. In fact, the mere lookup of x.append may have > side effects as well ... That being said, the specific case above can be optimised, we do something like this in Cython, too. It requires both (simple) type inference and control flow analysis, though, because you need to know that a) x holds a list and b) it was assigned outside of the loop and is not being assigned to inside. So it might look simple and obvious, but it requires quite a bit of compiler infrastructure. Stefan From stefan_ml at behnel.de Sun Aug 12 08:00:51 2012 From: stefan_ml at behnel.de (Stefan Behnel) Date: Sun, 12 Aug 2012 08:00:51 +0200 Subject: [Python-Dev] AST optimizer implemented in Python In-Reply-To: References: Message-ID: Victor Stinner, 11.08.2012 20:30: > I started to implement an AST optimizer in Python. It's easy to create > a new AST tree, so I'm surprised that I didn't find any existing > project. > > https://bitbucket.org/haypo/misc/src/tip/python/ast_optimizer.py Since you're about to do pretty much the same thing, here is the code that Cython uses for its static tree optimisations. It's not based on Python's own AST, but most of the concepts should apply more or less directly to that as well. https://github.com/cython/cython/blob/master/Cython/Compiler/Optimize.py The code uses a visitor pattern, so you basically register a tree node callback method by naming it after the node class, e.g. "visit_NameNode()" to intercept on variables etc. For processing builtins, we use the same mechanism with method names like "_handle_simple_method_bytes_decode()" for a ".decode()" method call on a "bytes" object that receives no keyword arguments or args/kwargs ("simple"). Constant folding is a bit funny in that it calculates lots of constants, but not all of them will be used in the code. Some are just kept around in tree nodes to serve as support for later optimisations. Very helpful. Some more tree processors are here, mostly infrastructure: https://github.com/cython/cython/blob/master/Cython/Compiler/ParseTreeTransforms.py It also contains the recursive unwinding of parallel assignments (a,b=c,d), which leads to much better code at least at the C level. Not sure how interesting something like this is for an interpreter. It might save some tuples and/or stack operations along the way. Also, extended iterable unpacking can be optimised this way. Some builtins are generically optimised here (straight through the type system instead of the AST): https://github.com/cython/cython/blob/master/Cython/Compiler/Builtin.py For refactoring the tree, we manually build tree nodes most of the time, but for the more involved cases, we sometimes pass templates through the parser. The whole compiler pipeline, including all AST processing steps, is constructed here: https://github.com/cython/cython/blob/master/Cython/Compiler/Pipeline.py#L123 Stefan From stefan_ml at behnel.de Sun Aug 12 08:06:41 2012 From: stefan_ml at behnel.de (Stefan Behnel) Date: Sun, 12 Aug 2012 08:06:41 +0200 Subject: [Python-Dev] AST optimizer implemented in Python In-Reply-To: <5026CE55.5030108@v.loewis.de> References: <5026CE55.5030108@v.loewis.de> Message-ID: "Martin v. L?wis", 11.08.2012 23:27: >>> +1 We should add some form of setastoptimizer API in 3.4. Please start a >>> PEP for this. 
It would be nice to include the ability to properly cache >>> the >>> ast optimizer output so that it does not have to run every time (in pyc >>> files or similar, etc) but can verify that it is the specific ast optimizer >>> that ran on a give existing cached copy. >> >> Once .pyc is created, I do not think that we keep the AST around any >> longer. Otherwise, I wouldn't have to write PyXfuscator. >> >> https://bitbucket.org/namn/pyxfuscator >> >> Or perhaps I am misunderstanding you. > > I think you misunderstood. What gps is concerned about (IIUC) that some > people add ast optimizers in some run of Python, but other AST optimizers > in a different run. Then, if you use a Python byte code > file, you should be able to find out what AST optimizers have been > run to create the pyc file, so you know whether you have to recompile > or not. > > Of course, if that is really a desirable feature, you may want multiple > pyc files, per combination of AST optimizers. The __pycache__ directory > would readily support that. > > This seems to get complicated quickly, so a PEP is indeed desirable. I think that gets overly complicated due to state explosion. Just recompile when things do not match, or simply ignore mismatches (possibly based on a command line switch). Compilers for statically compiled languages have been working nicely like that for ages. Most people won't run the interpreter with different optimiser settings, and if they do, they can just delete the __pycache__ before the next run. Stefan From stefan_ml at behnel.de Sun Aug 12 10:05:02 2012 From: stefan_ml at behnel.de (Stefan Behnel) Date: Sun, 12 Aug 2012 10:05:02 +0200 Subject: [Python-Dev] AST optimizer implemented in Python In-Reply-To: References: Message-ID: Stefan Behnel, 12.08.2012 08:00: > Victor Stinner, 11.08.2012 20:30: >> I started to implement an AST optimizer in Python. It's easy to create >> a new AST tree, so I'm surprised that I didn't find any existing >> project. >> >> https://bitbucket.org/haypo/misc/src/tip/python/ast_optimizer.py > > Since you're about to do pretty much the same thing, here is the code that > Cython uses for its static tree optimisations. It's not based on Python's > own AST, but most of the concepts should apply more or less directly to > that as well. > > https://github.com/cython/cython/blob/master/Cython/Compiler/Optimize.py Another thing I think I should mention is my TreePath implementation, mostly stealing from Fredrik Lundh's ElementPath. https://github.com/cython/cython/blob/master/Cython/Compiler/TreePath.py We basically only use it for test assertions against the AST, for example here: https://github.com/cython/cython/blob/master/tests/run/builtinslice.pyx But it wouldn't be all that hard to extend it into a pattern matching tool, so that you could basically say "if this expression matches the current subtree, then jump to optimiser a), for another expression use optimiser b), else leave things untouched". 
Stefan From chris.jerdonek at gmail.com Sun Aug 12 14:40:36 2012 From: chris.jerdonek at gmail.com (Chris Jerdonek) Date: Sun, 12 Aug 2012 05:40:36 -0700 Subject: [Python-Dev] [Python-checkins] cpython (3.2): zip() returns an iterator, make a list() of it; thanks to Martin from docs@ In-Reply-To: <3WvtRk6prQzPr6@mail.python.org> References: <3WvtRk6prQzPr6@mail.python.org> Message-ID: On Sun, Aug 12, 2012 at 1:25 AM, sandro.tosi wrote: > http://hg.python.org/cpython/rev/233673503217 > changeset: 78512:233673503217 > user: Sandro Tosi > date: Sun Aug 12 10:24:50 2012 +0200 > summary: > zip() returns an iterator, make a list() of it; thanks to Martin from docs@ > diff --git a/Doc/tutorial/datastructures.rst b/Doc/tutorial/datastructures.rst > - >>> zip(*matrix) > + >>> list(zip(*matrix)) > [(1, 5, 9), (2, 6, 10), (3, 7, 11), (4, 8, 12)] Is there a reason we don't run the doctests in the Doc/ folder's .rst files as part of regrtest (e.g. via DocFileSuite), or is that something we just haven't gotten around to doing? --Chris From andrew.svetlov at gmail.com Sun Aug 12 15:00:54 2012 From: andrew.svetlov at gmail.com (Andrew Svetlov) Date: Sun, 12 Aug 2012 16:00:54 +0300 Subject: [Python-Dev] [Python-checkins] cpython (3.2): zip() returns an iterator, make a list() of it; thanks to Martin from docs@ In-Reply-To: References: <3WvtRk6prQzPr6@mail.python.org> Message-ID: Just now doctest-like code blocks in Doc/* are used for two different targets: 1. regular doctests 2. notation for documentation While former can be tested the later will definitely fail (missing variables, functions, files etc.) Also docs contains mixed notation, when, say, function declared as regular code block than called from doctest (see functools.lru_cache examples). Doctest obviously failed because it cannot find function. For now if you will try to run doctest on Doc/**.rst you will get *a lot* of failures. I doubt if we will convert all docs to pass doctests, at least quickly. Also making docs doctest-safe sometimes requires less clean and worse readable notation. On Sun, Aug 12, 2012 at 3:40 PM, Chris Jerdonek wrote: > On Sun, Aug 12, 2012 at 1:25 AM, sandro.tosi wrote: >> http://hg.python.org/cpython/rev/233673503217 >> changeset: 78512:233673503217 >> user: Sandro Tosi >> date: Sun Aug 12 10:24:50 2012 +0200 >> summary: >> zip() returns an iterator, make a list() of it; thanks to Martin from docs@ > >> diff --git a/Doc/tutorial/datastructures.rst b/Doc/tutorial/datastructures.rst >> - >>> zip(*matrix) >> + >>> list(zip(*matrix)) >> [(1, 5, 9), (2, 6, 10), (3, 7, 11), (4, 8, 12)] > > Is there a reason we don't run the doctests in the Doc/ folder's .rst > files as part of regrtest (e.g. via DocFileSuite), or is that > something we just haven't gotten around to doing? > > --Chris > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/andrew.svetlov%40gmail.com -- Thanks, Andrew Svetlov From ncoghlan at gmail.com Sun Aug 12 15:37:44 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sun, 12 Aug 2012 23:37:44 +1000 Subject: [Python-Dev] [Python-checkins] cpython (3.2): zip() returns an iterator, make a list() of it; thanks to Martin from docs@ In-Reply-To: References: <3WvtRk6prQzPr6@mail.python.org> Message-ID: On Sun, Aug 12, 2012 at 11:00 PM, Andrew Svetlov wrote: > I doubt if we will convert all docs to pass doctests, at least quickly. 
> Also making docs doctest-safe sometimes requires less clean and worse > readable notation. About the only thing that could work in a reasonable way is a doctest mode for 3.4 where it could be told to ignore files unless they contained some kind of "doctest-safe" marker. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From andrew.svetlov at gmail.com Sun Aug 12 15:40:23 2012 From: andrew.svetlov at gmail.com (Andrew Svetlov) Date: Sun, 12 Aug 2012 16:40:23 +0300 Subject: [Python-Dev] [Python-checkins] cpython (3.2): zip() returns an iterator, make a list() of it; thanks to Martin from docs@ In-Reply-To: References: <3WvtRk6prQzPr6@mail.python.org> Message-ID: Sounds good. On Sun, Aug 12, 2012 at 4:37 PM, Nick Coghlan wrote: > On Sun, Aug 12, 2012 at 11:00 PM, Andrew Svetlov > wrote: >> I doubt if we will convert all docs to pass doctests, at least quickly. >> Also making docs doctest-safe sometimes requires less clean and worse >> readable notation. > > About the only thing that could work in a reasonable way is a doctest > mode for 3.4 where it could be told to ignore files unless they > contained some kind of "doctest-safe" marker. > > Cheers, > Nick. > > -- > Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia -- Thanks, Andrew Svetlov From chris.jerdonek at gmail.com Sun Aug 12 16:01:27 2012 From: chris.jerdonek at gmail.com (Chris Jerdonek) Date: Sun, 12 Aug 2012 07:01:27 -0700 Subject: [Python-Dev] [Python-checkins] cpython (3.2): zip() returns an iterator, make a list() of it; thanks to Martin from docs@ In-Reply-To: References: <3WvtRk6prQzPr6@mail.python.org> Message-ID: On Sun, Aug 12, 2012 at 6:37 AM, Nick Coghlan wrote: > On Sun, Aug 12, 2012 at 11:00 PM, Andrew Svetlov > wrote: >> I doubt if we will convert all docs to pass doctests, at least quickly. >> Also making docs doctest-safe sometimes requires less clean and worse >> readable notation. > > About the only thing that could work in a reasonable way is a doctest > mode for 3.4 where it could be told to ignore files unless they > contained some kind of "doctest-safe" marker. I created an issue for this here: http://bugs.python.org/issue15629 --Chris From georg at python.org Sun Aug 12 17:01:36 2012 From: georg at python.org (Georg Brandl) Date: Sun, 12 Aug 2012 17:01:36 +0200 Subject: [Python-Dev] [RELEASED] Python 3.3.0 beta 2 Message-ID: <5027C550.6070700@python.org> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On behalf of the Python development team, I'm happy to announce the second beta release of Python 3.3.0 -- a little later than originally scheduled, but much better for it. This is a preview release, and its use is not recommended in production settings. Python 3.3 includes a range of improvements of the 3.x series, as well as easier porting between 2.x and 3.x. 
Major new features and changes in the 3.3 release series are: * PEP 380, syntax for delegating to a subgenerator ("yield from") * PEP 393, flexible string representation (doing away with the distinction between "wide" and "narrow" Unicode builds) * A C implementation of the "decimal" module, with up to 80x speedup for decimal-heavy applications * The import system (__import__) now based on importlib by default * The new "lzma" module with LZMA/XZ support * PEP 397, a Python launcher for Windows * PEP 405, virtual environment support in core * PEP 420, namespace package support * PEP 3151, reworking the OS and IO exception hierarchy * PEP 3155, qualified name for classes and functions * PEP 409, suppressing exception context * PEP 414, explicit Unicode literals to help with porting * PEP 418, extended platform-independent clocks in the "time" module * PEP 412, a new key-sharing dictionary implementation that significantly saves memory for object-oriented code * PEP 362, the function-signature object * The new "faulthandler" module that helps diagnosing crashes * The new "unittest.mock" module * The new "ipaddress" module * The "sys.implementation" attribute * A policy framework for the email package, with a provisional (see PEP 411) policy that adds much improved unicode support for email header parsing * A "collections.ChainMap" class for linking mappings to a single unit * Wrappers for many more POSIX functions in the "os" and "signal" modules, as well as other useful functions such as "sendfile()" * Hash randomization, introduced in earlier bugfix releases, is now switched on by default In total, almost 500 API items are new or improved in Python 3.3. For a more extensive list of changes in 3.3.0, see http://docs.python.org/3.3/whatsnew/3.3.html (*) To download Python 3.3.0 visit: http://www.python.org/download/releases/3.3.0/ Please consider trying Python 3.3.0 with your code and reporting any bugs you may notice to: http://bugs.python.org/ Enjoy! (*) Please note that this document is usually finalized late in the release cycle and therefore may have stubs and missing entries at this point. - -- Georg Brandl, Release Manager georg at python.org (on behalf of the entire python-dev team and 3.3's contributors) -----BEGIN PGP SIGNATURE----- Version: GnuPG v2.0.19 (GNU/Linux) iEYEARECAAYFAlAnxVAACgkQN9GcIYhpnLAECACcDeE+N2AfYVnuwMkq682znfDU ODAAn0J87+MVA9WHEV5iYZd3ub9ZhbpC =LvY0 -----END PGP SIGNATURE----- From pje at telecommunity.com Sun Aug 12 20:23:55 2012 From: pje at telecommunity.com (PJ Eby) Date: Sun, 12 Aug 2012 14:23:55 -0400 Subject: [Python-Dev] AST optimizer implemented in Python In-Reply-To: References: Message-ID: On Sat, Aug 11, 2012 at 8:03 PM, Brett Cannon wrote: > > It would also be very easy to expand importlib.abc.SourceLoader to add a > method which is called with source and returns the bytecode to be written > out which people could override with AST optimizations before sending the > bytecode back. That way we don't have to get into the whole business of AST > transformations if we don't want to (although, as Victor pointed out, there > are some people who do want this formally supported). I'm not sure if this is directly related or not, but making this mechanism support custom compilation for new filename suffixes would be nice, especially for various e.g. HTML/XML templating systems that compile to Python or bytecode. Specifically, having a way to add a new source suffix (e.g. ".kid", ".zpt", etc.) 
and a matching compilation function, such that it's automatically picked up for compilation by both the filesystem and zip importers would be awesome. It'd also allow for DSLs and syntax experiments using alternative filename extensions. From martin at v.loewis.de Sun Aug 12 21:48:19 2012 From: martin at v.loewis.de (martin at v.loewis.de) Date: Sun, 12 Aug 2012 21:48:19 +0200 Subject: [Python-Dev] AST optimizer implemented in Python In-Reply-To: References: Message-ID: <20120812214819.Horde.4qyKSdjz9kRQKAiDECY2FNA@webmail.df.eu> > I'm not sure if this is directly related or not, but making this > mechanism support custom compilation for new filename suffixes would > be nice, especially for various e.g. HTML/XML templating systems that > compile to Python or bytecode. > > Specifically, having a way to add a new source suffix (e.g. ".kid", > ".zpt", etc.) and a matching compilation function, such that it's > automatically picked up for compilation by both the filesystem and zip > importers would be awesome. It'd also allow for DSLs and syntax > experiments using alternative filename extensions. How would the compilation (and the resulting code) then be invoked? If it is through import statements, it should already be possible to have import load an html file. However, ISTM that what you want is not modules, but files which rather are similar to individual functions. So the feature would go as an extension to exec() or eval(). I'm skeptical that this can be generalized in a useful manner - the exec/eval/execfile family already has variations depending on whether the thing to run is a single statement, a block, or an expression. It matters whether it gets its parameters passed, or somehow draws them from the environment - and then, which sources? If you would want to support HTML template engines alone, you find that they typically have many distinct parameter sets (the request, the ORM, the process environment, and then actual python-level parameters). So defining something that compiles it may be the easy part; the tricky part is defining something that then executes it. Regards, Martin From victor.stinner at gmail.com Sun Aug 12 22:49:57 2012 From: victor.stinner at gmail.com (Victor Stinner) Date: Sun, 12 Aug 2012 22:49:57 +0200 Subject: [Python-Dev] AST optimizer implemented in Python In-Reply-To: References: Message-ID: > I started to implement an AST optimizer in Python. It's easy to create > a new AST tree, so I'm surprised that I didn't find any existing > project. I done more research. I found the AST optimizer of PyPy, which implements basic optimizations: https://bitbucket.org/pypy/pypy/src/default/pypy/interpreter/astcompiler/optimize.py So there is also something like an AST optimizer in Cython: https://github.com/cython/cython/blob/master/Cython/Compiler/Optimize.py https://github.com/cython/cython/blob/master/Cython/Compiler/ParseTreeTransforms.py https://github.com/cython/cython/blob/master/Cython/Compiler/Builtin.py https://github.com/cython/cython/blob/master/Cython/Compiler/Pipeline.py#L123 -- > https://bitbucket.org/haypo/misc/src/tip/python/ast_optimizer.py I moved the script to a new dedicated project on Bitbucket: https://bitbucket.org/haypo/astoptimizer Join the project if you want to help me to build a better optimizer! It now works on Python 2.5-3.3. > There is BytecodeAssembler [1], but it seems to be specialized on > bytecode. There are (at least?) 
3 different issues to implement an AST > optimizer, but in C, not in Python: > > http://bugs.python.org/issue1346238 > http://bugs.python.org/issue10399 > http://bugs.python.org/issue11549 Oh, http://bugs.python.org/issue10399 includes an optimizer implemented in Python: Lib/__optimizer__.py. It inlines functions and create specialized versions of a function. > My proof-of-concept only implements very basic optimizations like 1+1 > => 2 or "abcdef"[:3] => "abc", but it should easy to extend it to do > more interesting optimization like function inlining. It is also possible to call functions and methods at compile time, if there have no border effect (and don't depend on the environnement). For example, len("abc") is always 3. I added a lot of such functions, especially builtin functions. I added options to enable more aggressive optimizations if we know that result will run on the same host than the compiler (os.name, sys.byteorder, ... are replaced by their value). Victor From meadori at gmail.com Sun Aug 12 23:05:15 2012 From: meadori at gmail.com (Meador Inge) Date: Sun, 12 Aug 2012 16:05:15 -0500 Subject: [Python-Dev] AST optimizer implemented in Python In-Reply-To: References: Message-ID: On Sat, Aug 11, 2012 at 1:30 PM, Victor Stinner wrote: > Hi, > > I started to implement an AST optimizer in Python. It's easy to create > a new AST tree, so I'm surprised that I didn't find any existing > project. > > https://bitbucket.org/haypo/misc/src/tip/python/ast_optimizer.py Very cool. > To test its peephole optimizations (by checking manually its final > bytecode), I wrote a patch for Python to disable Python internal > peephole optimizer (on bytecode): > https://bitbucket.org/haypo/misc/src/tip/python/compile_disable_peephole.patch > > -- > > There is BytecodeAssembler [1], but it seems to be specialized on > bytecode. There are (at least?) 3 different issues to implement an AST > optimizer, but in C, not in Python: > > http://bugs.python.org/issue1346238 > http://bugs.python.org/issue10399 > http://bugs.python.org/issue11549 I read through the issues a while back and each is interesting in its own right. However, each is a specific implementation that is somewhat general, but geared towards one optimization (folding, inlining, etc...). ISTM, that we need to step back a bit and define a what an AST optimizer for Python should look like (or even if it really makes any sense at all). I imagine having some facilities to manage and add new passes would be useful, for instance. I think this work probably merits a PEP (considering we essentially have four competing implementations for AST optimization now). This is an interesting project and I would happily volunteer to help flesh out the details of a prototype and working on a PEP. -- # Meador From ericsnowcurrently at gmail.com Mon Aug 13 00:07:45 2012 From: ericsnowcurrently at gmail.com (Eric Snow) Date: Sun, 12 Aug 2012 16:07:45 -0600 Subject: [Python-Dev] AST optimizer implemented in Python In-Reply-To: References: Message-ID: On Aug 12, 2012 12:56 PM, "PJ Eby" wrote: > I'm not sure if this is directly related or not, but making this > mechanism support custom compilation for new filename suffixes would > be nice, especially for various e.g. HTML/XML templating systems that > compile to Python or bytecode. > > Specifically, having a way to add a new source suffix (e.g. ".kid", > ".zpt", etc.) 
and a matching compilation function, such that it's > automatically picked up for compilation by both the filesystem and zip > importers would be awesome. It'd also allow for DSLs and syntax > experiments using alternative filename extensions. +1 I'm hacking around this right now for a project I'm working on. I definitely do this through the import system. Inserting a look-alike path hook and monkeypatching the cached path entry finders is not difficult, but certainly fragile and less than ideal. Consequently I've been looking into simple and not-so-simple solutions to making it easier to add new suffixes to be handled. The source/pyc-to-code-to-module path during imports is so prevalent and critical that it may benefit from its own model. This is similar to how the path-based import subsystem has its own special-case model. Or source-based imports just need better special-casing in the path-based import subsystem. Perhaps just adding and using sys.source_suffixes as a mapping of suffixes to loader classes. -eric -------------- next part -------------- An HTML attachment was scrubbed... URL: From brett at python.org Mon Aug 13 01:12:27 2012 From: brett at python.org (Brett Cannon) Date: Sun, 12 Aug 2012 19:12:27 -0400 Subject: [Python-Dev] AST optimizer implemented in Python In-Reply-To: References: Message-ID: On Sun, Aug 12, 2012 at 6:07 PM, Eric Snow wrote: > On Aug 12, 2012 12:56 PM, "PJ Eby" wrote: > > I'm not sure if this is directly related or not, but making this > > mechanism support custom compilation for new filename suffixes would > > be nice, especially for various e.g. HTML/XML templating systems that > > compile to Python or bytecode. > > > > Specifically, having a way to add a new source suffix (e.g. ".kid", > > ".zpt", etc.) and a matching compilation function, such that it's > > automatically picked up for compilation by both the filesystem and zip > > importers would be awesome. It'd also allow for DSLs and syntax > > experiments using alternative filename extensions. > > +1 > > I'm hacking around this right now for a project I'm working on. I > definitely do this through the import system. Inserting a look-alike path > hook and monkeypatching the cached path entry finders is not difficult, but > certainly fragile and less than ideal. > Why are you doing that? Can't you just use FileFinder with new loaders and file suffixes? Why do you feel the need to much with any cache? > Consequently I've been looking into simple and not-so-simple solutions to > making it easier to add new suffixes to be handled. The > source/pyc-to-code-to-module path during imports is so prevalent and > critical that it may benefit from its own model. > 3.4 will expose more of this. The source-to-code method would get you the transformation that PJE wants while 3.3 already has FileFinder handle the finding of the proper files for you. Toss in a method or two to help in parsing byte-compiled files and writing them and most of the process is then exposed. > This is similar to how the path-based import subsystem has its own > special-case model. Or source-based imports just need better > special-casing in the path-based import subsystem. Perhaps just adding and > using sys.source_suffixes as a mapping of suffixes to loader classes. > This is starting to get off-topic, but no more adding stuff to sys in regards to import. There is already too much global state that is preventing good encapsulation and we don't need to make it any worse. 
The more we add to sys in regards to import the farther we get from any import engine solution that we might want to evolve towards. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Mon Aug 13 03:20:23 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Mon, 13 Aug 2012 11:20:23 +1000 Subject: [Python-Dev] AST optimizer implemented in Python In-Reply-To: References: Message-ID: On Mon, Aug 13, 2012 at 7:05 AM, Meador Inge wrote: > This is an interesting project and I would happily volunteer to help flesh > out the details of a prototype and working on a PEP. Also, if there are possible AST improvements that would help in 3.4+, that option *is* on the table (this was an issue with cleaning up some of the constant folding support - there's some silliness in the current AST that was inherited from the 2.x series, but really isn't needed in 3.x). Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From urllib3 at yahoo.com Mon Aug 13 03:28:47 2012 From: urllib3 at yahoo.com (Python Urlopen) Date: Sun, 12 Aug 2012 18:28:47 -0700 (PDT) Subject: [Python-Dev] python 2.7 + https + urlopen = ? Message-ID: <1344821327.97369.YahooMailNeo@web140303.mail.bf1.yahoo.com> I am a python 2.7.x user and am hoping to reach with this email some python developers who would be sympathetic to this scenario (And I understand that you might not, which is perfectly fine -- I've already requested one developer not to reply ) : How would you feel, if you issued : import urllib urlopen("""https://server.domain.com""").read() and the command got you data from some other URL without telling you! You use firefox, and the site is different than the data you got! Same with chrome. Safari. Even IE ! Cheated? (Well I was mad -- after IE worked). Then, you dig a little and say, hey there are bugs in networks/code, lets try the other tools that are available on python 2.x, who uses urlopen from urllib in 2012. There are tons, right? urllib2, urllib, urllib3, requests, twisted.getPage, ... None of them worked! Wow. Then you wonder, whats going on. You poke one of the server administrator, and he sends you the logs, and you see the problem. The keyword being "SNI". Now you start googling. First read about SNI perhaps. Here is a 2 line summary: SNI is a server side "feature" that extends SSL and TLS protocols to let you talk to a https server which is on an IP that serves multiple certificates for multiple https servers. SNI was first used in 2004 and OpenSSL started support in 2006. In 2007, it was backported to OpenSSL 0.9.x. In 2009 there was a bug filed with python-devs for fixing this in 2.6. The feature enhancement (or "bug fix") eventually happened -- for 3.2+. (http://en.wikipedia.org/wiki/Server_Name_Indication) Then you google more and you land up on this page: http://bugs.python.org/issue5639 which shows you that 2.6 has a patch. Then you wonder, why wasn't it included in 2.7 -- and you read -- AP : "No, Python 2 only receives bug fixes.". You instantly hate the guy. Sorry AP, nothing personal, but please do not reply to this post. I think I know what your reply will be.?? After a lot of pain, I got myself out of this trouble, and my code now works correctly on 2.7.x (thanks to?Jean-Paul Calderone's pyopenssl). But do "you" think this is a "feature" and not a "bug"? -- And do you think debating on this, killing time on the debate, and letting all python 2.x users suffer sooner or later is right --. Something as basic as urlopen!? 
Thanks for your time and I wish good luck to most python users. From urllib3 at yahoo.com Mon Aug 13 03:39:44 2012 From: urllib3 at yahoo.com (Python Urlopen) Date: Sun, 12 Aug 2012 18:39:44 -0700 (PDT) Subject: [Python-Dev] python 2.7 + https + urlopen = ? In-Reply-To: <1344821327.97369.YahooMailNeo@web140303.mail.bf1.yahoo.com> References: <1344821327.97369.YahooMailNeo@web140303.mail.bf1.yahoo.com> Message-ID: <1344821984.95249.YahooMailNeo@web140301.mail.bf1.yahoo.com> I am a python 2.7.x user and am hoping to reach with this email some python developers who would be sympathetic to this scenario (And I understand that you might not, which is perfectly fine -- I've already requested one developer not to reply ) : How would you feel, if you issued : import urllib urlopen("""https://server.domain.com""").read() and the command got you data from some other URL without telling you! You use firefox, and the site is different than the data you got! Same with chrome. Safari. Even IE ! Cheated? (Well I was mad -- after IE worked). Then, you dig a little and say, hey there are bugs in networks/code, lets try the other tools that are available on python 2.x, who uses urlopen from urllib in 2012. There are tons, right? urllib2, urllib, urllib3, requests, twisted.getPage, ... None of them worked! Wow. Then you wonder, whats going on. You poke one of the server administrator, and he sends you the logs, and you see the problem. The keyword being "SNI". Now you start googling. First read about SNI perhaps. Here is a 2 line summary: SNI is a server side "feature" that extends SSL and TLS protocols to let you talk to a https server which is on an IP that serves multiple certificates for multiple https servers. SNI was first used in 2004 and OpenSSL started support in 2006. In 2007, it was backported to OpenSSL 0.9.x. In 2009 there was a bug filed with python-devs for fixing this in 2.6. The feature enhancement (or "bug fix") eventually happened -- for 3.2+. (http://en.wikipedia.org/wiki/Server_Name_Indication) Then you google more and you land up on this page: http://bugs.python.org/issue5639 which shows you that 2.6 has a patch. Then you wonder, why wasn't it included in 2.7 -- and you read -- AP : "No, Python 2 only receives bug fixes.". You instantly hate the guy. Sorry AP, nothing personal, but please do not reply to this post. I think I know what your reply will be.?? After a lot of pain, I got myself out of this trouble, and my code now works correctly on 2.7.x (thanks to?Jean-Paul Calderone's pyopenssl). But do "you" think this is a "feature" and not a "bug"? -- And do you think debating on this, killing time on the debate, and letting all python 2.x users suffer sooner or later is right --. Something as basic as urlopen!? Thanks for your time and I wish good luck to most python users. From ncoghlan at gmail.com Mon Aug 13 04:07:26 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Mon, 13 Aug 2012 12:07:26 +1000 Subject: [Python-Dev] python 2.7 + https + urlopen = ? In-Reply-To: <1344821327.97369.YahooMailNeo@web140303.mail.bf1.yahoo.com> References: <1344821327.97369.YahooMailNeo@web140303.mail.bf1.yahoo.com> Message-ID: On Mon, Aug 13, 2012 at 11:28 AM, Python Urlopen wrote: > which shows you that 2.6 has a patch. Then you wonder, why wasn't it included in 2.7 -- and you read -- AP : "No, Python 2 only receives bug fixes.". You instantly hate the guy. Sorry AP, nothing personal, but please do not reply to this post. I think I know what your reply will be. 
It's not merely Antoine that will give that reply. Yes, there are many features that Python 2 is lacking relative to the Python 3 series. That's what "maintenance mode" means. It's incredibly frustrating when you hit one of them (for myself, I feel the pain every time an error in an exception handler conceals the original exception). The available solutions are: 1. Use a third party PyPI package which offers that feature (in this case, the requirement seems to be to use PyOpenSSL) 2. Upgrade to Python 3 3. Fork Python 2 to create a Python 2.8 which adds new backported features from the Python 3 series That last option has indeed been discussed by a few people at various times, but the first option generally ends up being the preferred choice if the second isn't yet viable due to missing dependencies. Regards, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From python at mrabarnett.plus.com Mon Aug 13 04:13:11 2012 From: python at mrabarnett.plus.com (MRAB) Date: Mon, 13 Aug 2012 03:13:11 +0100 Subject: [Python-Dev] python 2.7 + https + urlopen = ? In-Reply-To: <1344821984.95249.YahooMailNeo@web140301.mail.bf1.yahoo.com> References: <1344821327.97369.YahooMailNeo@web140303.mail.bf1.yahoo.com> <1344821984.95249.YahooMailNeo@web140301.mail.bf1.yahoo.com> Message-ID: <502862B7.8040508@mrabarnett.plus.com> On 13/08/2012 02:39, Python Urlopen wrote: > [snip] > After a lot of pain, I got myself out of this trouble, and my code > now works correctly on 2.7.x (thanks to Jean-Paul Calderone's > pyopenssl). But do "you" think this is a "feature" and not a "bug"? > -- And do you think debating on this, killing time on the debate, and > letting all python 2.x users suffer sooner or later is right --. > Something as basic as urlopen! > It doesn't sound like a bug to me, more a missing feature, added in a later version of Python. That's what upgrading is all about. From mark at hotpy.org Mon Aug 13 10:34:10 2012 From: mark at hotpy.org (Mark Shannon) Date: Mon, 13 Aug 2012 09:34:10 +0100 Subject: [Python-Dev] AST optimizer implemented in Python In-Reply-To: References: Message-ID: <5028BC02.7030003@hotpy.org> Brett Cannon wrote: > > > On Sat, Aug 11, 2012 at 8:16 PM, Eric Snow > wrote: > > On Sat, Aug 11, 2012 at 6:03 PM, Brett Cannon > wrote: > > It would also be very easy to expand importlib.abc.SourceLoader > to add a > > method which is called with source and returns the bytecode to be > written > > out > > Yes, please. Not having to hack around this would be nice. > > > http://bugs.python.org/issue15627 AST transformation is a lot more general than just optimization. Adding an AST transformation is a relatively painless way to add to Python the last element of lisp-ness that it lacks: Namely being able to treat code as data and transform it at runtime, after parsing but before execution. Some examples: Profiling code be added by an AST transformation. IMO this would have been a more elegant way to implement CProfile and similar profilers than the current approach. AST transformations allow DSLs to be implemented in Python (I don't know if that is a + or - ). Access to the AST of a function at runtime would also be of use to method-based dynamic optimizers, or dynamic de-optimizers for static compilers. All for the price of adding a single method to SourceLoader. What a bargain :) Cheers, Mark. 
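The "treat code as data" step Mark describes is small once something hands you the tree. Purely as an illustration of the profiling example (this is not how cProfile works, and it is not tied to any particular SourceLoader hook), a transformer that injects a call at the top of every function body could look like:

    import ast

    class TraceEntry(ast.NodeTransformer):
        # Prepend a trace statement to every function body -- the kind of
        # instrumentation an AST-level profiler could add before execution.
        def visit_FunctionDef(self, node):
            self.generic_visit(node)
            trace = ast.parse("print('entering %s')" % node.name).body[0]
            node.body.insert(0, trace)
            return node

    source = "def greet(name):\n    return 'hello ' + name\n"
    tree = TraceEntry().visit(ast.parse(source))
    ast.fix_missing_locations(tree)
    namespace = {}
    exec(compile(tree, "<transformed>", "exec"), namespace)
    namespace["greet"]("world")    # prints "entering greet" first

In practice such a transformer would be registered through whichever hook the PEP ends up proposing (a sys.setastoptimizer() call, a SourceLoader method, ...) rather than driven by hand with exec().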
From doko at ubuntu.com Mon Aug 13 12:24:34 2012 From: doko at ubuntu.com (Matthias Klose) Date: Mon, 13 Aug 2012 12:24:34 +0200 Subject: [Python-Dev] What's New in Python 3.3 In-Reply-To: References: <4F5145DA-2810-4DEE-B662-9CB3B7247F27@gmail.com> Message-ID: <5028D5E2.6090407@ubuntu.com> On 09.08.2012 01:04, Victor Stinner wrote: > Does Python 3.3 support cross-compilation? There are two new option > for configure: --host and --build, but it's not mentioned in What's > New in Python 3.3. it does work, but it is only tested for the linux -> linux case. the mingw and macosx cross builds did require changes, which didn't go into 3.3 before the first beta release. what is completely missing is the cross build infrastructure to build third party extensions. so maybe it's a bit early to announce it in the release notes. Matthias From martin at v.loewis.de Mon Aug 13 12:40:35 2012 From: martin at v.loewis.de (martin at v.loewis.de) Date: Mon, 13 Aug 2012 12:40:35 +0200 Subject: [Python-Dev] python 2.7 + https + urlopen = ? In-Reply-To: <1344821327.97369.YahooMailNeo@web140303.mail.bf1.yahoo.com> References: <1344821327.97369.YahooMailNeo@web140303.mail.bf1.yahoo.com> Message-ID: <20120813124035.Horde.Mh8KMbuWis5QKNmjw78ARIA@webmail.df.eu> > How would you feel, if you issued : > > import urllib > urlopen("""https://server.domain.com""").read() > > and the command got you data from some other URL without telling > you! You use firefox, and the site is different than the data you > got! Same with chrome. Safari. Even IE ! > Cheated? (Well I was mad -- after IE worked). [...] > None of them worked! Wow. Then you wonder, whats going on. You poke > one of the server administrator, and he sends you the logs, and you > see the problem. The keyword being "SNI". I believe there is a bug in the HTTP server; it doesn't conform to the HTTP/1.1 protocol. Even without the client using SNI, you should still get the right page, since the HTTP Host: header indicates the host you are trying to contact at this point, not SNI. The SNI is only relevant for the certificate that the server presents. Regards, Martin From guido at python.org Mon Aug 13 16:45:23 2012 From: guido at python.org (Guido van Rossum) Date: Mon, 13 Aug 2012 07:45:23 -0700 Subject: [Python-Dev] AST optimizer implemented in Python In-Reply-To: <5028BC02.7030003@hotpy.org> References: <5028BC02.7030003@hotpy.org> Message-ID: Not so fast. If you make this a language feature you force all Python implementations to support an identical AST API. That's a big step. Not that AST manipulation isn't cool -- but I'd like to warn against over-enthusiasm that might backfire on the language (or its community) as a whole. --Guido On Mon, Aug 13, 2012 at 1:34 AM, Mark Shannon wrote: > Brett Cannon wrote: >> >> >> >> On Sat, Aug 11, 2012 at 8:16 PM, Eric Snow > > wrote: >> >> On Sat, Aug 11, 2012 at 6:03 PM, Brett Cannon > > wrote: >> > It would also be very easy to expand importlib.abc.SourceLoader >> to add a >> > method which is called with source and returns the bytecode to be >> written >> > out >> >> Yes, please. Not having to hack around this would be nice. >> >> >> http://bugs.python.org/issue15627 > > > AST transformation is a lot more general than just optimization. > > Adding an AST transformation is a relatively painless way to add to > Python the last element of lisp-ness that it lacks: > Namely being able to treat code as data and transform it at runtime, > after parsing but before execution. 
> > Some examples: > Profiling code be added by an AST transformation. > IMO this would have been a more elegant way to implement CProfile > and similar profilers than the current approach. > > AST transformations allow DSLs to be implemented in Python > (I don't know if that is a + or - ). > > Access to the AST of a function at runtime would also be of use to > method-based dynamic optimizers, or dynamic de-optimizers for static > compilers. > > All for the price of adding a single method to SourceLoader. > What a bargain :) > > Cheers, > Mark. > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > http://mail.python.org/mailman/options/python-dev/guido%40python.org -- --Guido van Rossum (python.org/~guido) From tjreedy at udel.edu Mon Aug 13 21:00:40 2012 From: tjreedy at udel.edu (Terry Reedy) Date: Mon, 13 Aug 2012 15:00:40 -0400 Subject: [Python-Dev] AST optimizer implemented in Python In-Reply-To: References: <5028BC02.7030003@hotpy.org> Message-ID: On 8/13/2012 10:45 AM, Guido van Rossum wrote: > Not so fast. If you make this a language feature you force all Python > implementations to support an identical AST API. That's a big step. I have been wondering about this. One could think from the manuals that we are there already. From the beginning of the ast chapter: "The ast module helps Python applications to process trees of *the* Python abstract syntax grammar. ... An abstract syntax tree can be generated by passing ast.PyCF_ONLY_AST as a flag to the compile() built-in function" (emphasis on *the* added). and the entry for compile(): "Compile the source into a code or AST object." I see nothing about ast possibly being CPython only. Should there be? -- Terry Jan Reedy From brett at python.org Mon Aug 13 21:06:23 2012 From: brett at python.org (Brett Cannon) Date: Mon, 13 Aug 2012 15:06:23 -0400 Subject: [Python-Dev] [compatibility-sig] do all VMs implement the ast module? (was: Re: AST optimizer implemented in Python) Message-ID: On Mon, Aug 13, 2012 at 3:00 PM, Terry Reedy wrote: > On 8/13/2012 10:45 AM, Guido van Rossum wrote: > >> Not so fast. If you make this a language feature you force all Python >> implementations to support an identical AST API. That's a big step. >> > > I have been wondering about this. One could think from the manuals that we > are there already. From the beginning of the ast chapter: > > "The ast module helps Python applications to process trees of *the* Python > abstract syntax grammar. ... An abstract syntax tree can be generated by > passing ast.PyCF_ONLY_AST as a flag to the compile() built-in function" > (emphasis on *the* added). > > and the entry for compile(): "Compile the source into a code or AST > object." > > I see nothing about ast possibly being CPython only. Should there be? Time to ask the other VMs what they are currently doing (the ast module came into existence in Python 2.6 so all the VMs should be answer the question since Jython is in alpha for 2.7 compatibility). -------------- next part -------------- An HTML attachment was scrubbed... URL: From brett at python.org Mon Aug 13 21:37:46 2012 From: brett at python.org (Brett Cannon) Date: Mon, 13 Aug 2012 15:37:46 -0400 Subject: [Python-Dev] [Python-checkins] cpython: Push importlib ABC hierarchy chart. 
In-Reply-To: <3Wwmvf0TfYzPmf@mail.python.org> References: <3Wwmvf0TfYzPmf@mail.python.org> Message-ID: For documentation, this appears to be for http://bugs.python.org/issue15628 . On Mon, Aug 13, 2012 at 3:19 PM, andrew.svetlov wrote: > http://hg.python.org/cpython/rev/1c8a6df94602 > changeset: 78547:1c8a6df94602 > user: Andrew Svetlov > date: Mon Aug 13 22:19:01 2012 +0300 > summary: > Push importlib ABC hierarchy chart. > > files: > Doc/library/importlib.rst | 15 +++++++++++++++ > 1 files changed, 15 insertions(+), 0 deletions(-) > > > diff --git a/Doc/library/importlib.rst b/Doc/library/importlib.rst > --- a/Doc/library/importlib.rst > +++ b/Doc/library/importlib.rst > @@ -121,6 +121,21 @@ > used by :keyword:`import`. Some subclasses of the core abstract base > classes > are also provided to help in implementing the core ABCs. > > +ABC hierarchy:: > + > + object > + +-- Finder > + | +-- MetaPathFinder > + | +-- PathEntryFinder > + +-- Loader > + +-- ResourceLoader --------+ > + +-- InspectLoader | > + +-- ExecutionLoader --+ > + +-- FileLoader > + +-- SourceLoader > + +-- PyLoader > + +-- PyPycLoader > + > > .. class:: Finder > > > -- > Repository URL: http://hg.python.org/cpython > > _______________________________________________ > Python-checkins mailing list > Python-checkins at python.org > http://mail.python.org/mailman/listinfo/python-checkins > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fwierzbicki at gmail.com Mon Aug 13 22:05:29 2012 From: fwierzbicki at gmail.com (fwierzbicki at gmail.com) Date: Mon, 13 Aug 2012 13:05:29 -0700 Subject: [Python-Dev] [compatibility-sig] do all VMs implement the ast module? (was: Re: AST optimizer implemented in Python) In-Reply-To: References: Message-ID: On Mon, Aug 13, 2012 at 12:06 PM, Brett Cannon wrote: >> I see nothing about ast possibly being CPython only. Should there be? > > > Time to ask the other VMs what they are currently doing (the ast module came > into existence in Python 2.6 so all the VMs should be answer the question > since Jython is in alpha for 2.7 compatibility). 2.5+ contains an ast.py that I obsessively compared to CPython's 2.5 ast.py. I haven't applied the same obsessiveness to 2.7, but I do intend to look closely at Jython's ast.py results compared to CPython's in the 3.x effort. Also I plan to allow some backwards compatibility compromises between early point releases of our 2.7 series, as I want to apply what I learn in our 3.x effort to 2.7 point releases, so we should be able to keep up with most simple ast.py changes. I'm not so sure that the current discussion are going to be "simple though" :) -- if it's pure python we should hopefully be alright. -Frank From fwierzbicki at gmail.com Mon Aug 13 22:06:43 2012 From: fwierzbicki at gmail.com (fwierzbicki at gmail.com) Date: Mon, 13 Aug 2012 13:06:43 -0700 Subject: [Python-Dev] [compatibility-sig] do all VMs implement the ast module? (was: Re: AST optimizer implemented in Python) In-Reply-To: References: Message-ID: On Mon, Aug 13, 2012 at 1:05 PM, fwierzbicki at gmail.com wrote: > 2.5+ contains I should have said *Jython* 2.5+ -Frank From guido at python.org Mon Aug 13 22:46:09 2012 From: guido at python.org (Guido van Rossum) Date: Mon, 13 Aug 2012 13:46:09 -0700 Subject: [Python-Dev] [compatibility-sig] do all VMs implement the ast module? 
(was: Re: AST optimizer implemented in Python) In-Reply-To: References: Message-ID: On Mon, Aug 13, 2012 at 1:05 PM, fwierzbicki at gmail.com wrote: > On Mon, Aug 13, 2012 at 12:06 PM, Brett Cannon wrote: >>> I see nothing about ast possibly being CPython only. Should there be? >> >> >> Time to ask the other VMs what they are currently doing (the ast module came >> into existence in Python 2.6 so all the VMs should be answer the question >> since Jython is in alpha for 2.7 compatibility). [Jython] > 2.5+ contains an ast.py that I obsessively compared to CPython's 2.5 > ast.py. But CPython's ast.py contains very little code -- it's all done in ast.c. Still, I'm glad you are actually considering this a cross-language feature, and I will gladly retract my warning. (Still, I don't know if it is subject to the usual backward compatibility constraints.) > I haven't applied the same obsessiveness to 2.7, but I do > intend to look closely at Jython's ast.py results compared to > CPython's in the 3.x effort. Also I plan to allow some backwards > compatibility compromises between early point releases of our 2.7 > series, as I want to apply what I learn in our 3.x effort to 2.7 point > releases, so we should be able to keep up with most simple ast.py > changes. I'm not so sure that the current discussion are going to be > "simple though" :) -- if it's pure python we should hopefully be > alright. It might be pure python for Jython, but it's not for CPython. -- --Guido van Rossum (python.org/~guido) From fwierzbicki at gmail.com Mon Aug 13 23:21:59 2012 From: fwierzbicki at gmail.com (fwierzbicki at gmail.com) Date: Mon, 13 Aug 2012 14:21:59 -0700 Subject: [Python-Dev] [compatibility-sig] do all VMs implement the ast module? (was: Re: AST optimizer implemented in Python) In-Reply-To: References: Message-ID: On Mon, Aug 13, 2012 at 1:46 PM, Guido van Rossum wrote: > On Mon, Aug 13, 2012 at 1:05 PM, fwierzbicki at gmail.com > wrote: >> On Mon, Aug 13, 2012 at 12:06 PM, Brett Cannon wrote: >>>> I see nothing about ast possibly being CPython only. Should there be? >>> >>> >>> Time to ask the other VMs what they are currently doing (the ast module came >>> into existence in Python 2.6 so all the VMs should be answer the question >>> since Jython is in alpha for 2.7 compatibility). > > [Jython] >> 2.5+ contains an ast.py that I obsessively compared to CPython's 2.5 >> ast.py. > > But CPython's ast.py contains very little code -- it's all done in ast.c. What I did was dump a pretty print of the ast from Jython and CPython for every file in Lib/* and diff the results with a script. I got the differences down to a small number of minor variations. > Still, I'm glad you are actually considering this a cross-language > feature, and I will gladly retract my warning. (Still, I don't know if > it is subject to the usual backward compatibility constraints.) I don't know if IronPython does the same though... we might want to wait for them to respond. > It might be pure python for Jython, but it's not for CPython. It's actually Java for us :) -- in fact the internal AST uses the exact Java that is exposed from our _ast.py - which I've come to regard as a mistake (though it was useful at the time). I want to do the same obsessive diff game with 3.x but then probably separate out our internal ast implementation (possibly making ast.py pure Python). BTW - is Python's internal AST exactly exposed by ast.py or is there a separate internal AST implementation? 
-Frank From tjreedy at udel.edu Mon Aug 13 23:27:28 2012 From: tjreedy at udel.edu (Terry Reedy) Date: Mon, 13 Aug 2012 17:27:28 -0400 Subject: [Python-Dev] [compatibility-sig] do all VMs implement the ast module? (was: Re: AST optimizer implemented in Python) In-Reply-To: References: Message-ID: On 8/13/2012 4:46 PM, Guido van Rossum wrote: > On Mon, Aug 13, 2012 at 1:05 PM,fwierzbicki at gmail.com > wrote: >> >On Mon, Aug 13, 2012 at 12:06 PM, Brett Cannon wrote: >>>> >>>I see nothing about ast possibly being CPython only. Should there be? >>> >> >>> >> >>> >>Time to ask the other VMs what they are currently doing (the ast module came >>> >>into existence in Python 2.6 so all the VMs should be answer the question >>> >>since Jython is in alpha for 2.7 compatibility). > [Jython] >> >2.5+ contains an ast.py that I obsessively compared to CPython's 2.5 >> >ast.py. > But CPython's ast.py contains very little code -- it's all done in ast.c. > > Still, I'm glad you are actually considering this a cross-language > feature, and I will gladly retract my warning. (Still, I don't know if > it is subject to the usual backward compatibility constraints.) I should have quoted a bit more. After the first sentence "The ast module helps Python applications to process trees of the Python abstract syntax grammar." the next sentence is "The abstract syntax itself might change with each Python release; this module helps to find out programmatically what the current grammar looks like." The 'current grammar' is given in 30.2.2. Abstract Grammar. -- Terry Jan Reedy From brett at python.org Tue Aug 14 00:10:58 2012 From: brett at python.org (Brett Cannon) Date: Mon, 13 Aug 2012 18:10:58 -0400 Subject: [Python-Dev] [compatibility-sig] do all VMs implement the ast module? (was: Re: AST optimizer implemented in Python) In-Reply-To: References: Message-ID: On Aug 13, 2012 5:22 PM, "fwierzbicki at gmail.com" wrote: > > On Mon, Aug 13, 2012 at 1:46 PM, Guido van Rossum wrote: > > On Mon, Aug 13, 2012 at 1:05 PM, fwierzbicki at gmail.com > > wrote: > >> On Mon, Aug 13, 2012 at 12:06 PM, Brett Cannon wrote: > >>>> I see nothing about ast possibly being CPython only. Should there be? > >>> > >>> > >>> Time to ask the other VMs what they are currently doing (the ast module came > >>> into existence in Python 2.6 so all the VMs should be answer the question > >>> since Jython is in alpha for 2.7 compatibility). > > > > [Jython] > >> 2.5+ contains an ast.py that I obsessively compared to CPython's 2.5 > >> ast.py. > > > > But CPython's ast.py contains very little code -- it's all done in ast.c. > What I did was dump a pretty print of the ast from Jython and CPython > for every file in Lib/* and diff the results with a script. I got the > differences down to a small number of minor variations. > > > Still, I'm glad you are actually considering this a cross-language > > feature, and I will gladly retract my warning. (Still, I don't know if > > it is subject to the usual backward compatibility constraints.) > I don't know if IronPython does the same though... we might want to > wait for them to respond. > > > It might be pure python for Jython, but it's not for CPython. > It's actually Java for us :) -- in fact the internal AST uses the > exact Java that is exposed from our _ast.py - which I've come to > regard as a mistake (though it was useful at the time). I want to do > the same obsessive diff game with 3.x but then probably separate out > our internal ast implementation (possibly making ast.py pure Python). 
> > BTW - is Python's internal AST exactly exposed by ast.py or is there a > separate internal AST implementation? Direct. There is an AST grammar file that gets compiled into C and Python objects which are used by the compiler (c version) or exposed to users (Python version). > > -Frank -------------- next part -------------- An HTML attachment was scrubbed... URL: From brian at python.org Tue Aug 14 00:15:06 2012 From: brian at python.org (Brian Curtin) Date: Mon, 13 Aug 2012 17:15:06 -0500 Subject: [Python-Dev] [Python-checkins] cpython (merge 3.2 -> default): Merge 3.2 In-Reply-To: <3WwrmX1nQgzPtV@mail.python.org> References: <3WwrmX1nQgzPtV@mail.python.org> Message-ID: On Mon, Aug 13, 2012 at 5:13 PM, brian.curtin wrote: > http://hg.python.org/cpython/rev/256bfee696c5 > changeset: 78552:256bfee696c5 > parent: 78549:edcbf3edf701 > parent: 78551:fcad4566910b > user: Brian Curtin > date: Mon Aug 13 17:12:02 2012 -0500 > summary: > Merge 3.2 > > files: > Lib/test/support.py | 68 +++++++- > Misc/NEWS | 265 ++++++++++++++++++++++++++++++++ this Misc/NEWS disaster didn't appear in the diff. Fixing now... From ncoghlan at gmail.com Tue Aug 14 00:33:29 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 14 Aug 2012 08:33:29 +1000 Subject: [Python-Dev] AST optimizer implemented in Python In-Reply-To: References: <5028BC02.7030003@hotpy.org> Message-ID: Implementations are currently required to *have* an AST (or else declare non compliance with that particular flag to compile). They're definitely not required to have the *same* AST, thus all AST manipulation, like bytecode manipulation, is necessarily implementation dependent. We don't even guarantee AST compatibility between versions of CPython. I believe the appropriate warnings are already present in the ast module docs, but there may be misleading wording elsewhere that needs to be cleaned up. Regards, Nick. -- Sent from my phone, thus the relative brevity :) On Aug 14, 2012 5:03 AM, "Terry Reedy" wrote: > On 8/13/2012 10:45 AM, Guido van Rossum wrote: > >> Not so fast. If you make this a language feature you force all Python >> implementations to support an identical AST API. That's a big step. >> > > I have been wondering about this. One could think from the manuals that we > are there already. From the beginning of the ast chapter: > > "The ast module helps Python applications to process trees of *the* Python > abstract syntax grammar. ... An abstract syntax tree can be generated by > passing ast.PyCF_ONLY_AST as a flag to the compile() built-in function" > (emphasis on *the* added). > > and the entry for compile(): "Compile the source into a code or AST > object." > > I see nothing about ast possibly being CPython only. Should there be? > > -- > Terry Jan Reedy > > ______________________________**_________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/**mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/**mailman/options/python-dev/** > ncoghlan%40gmail.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fwierzbicki at gmail.com Tue Aug 14 00:35:35 2012 From: fwierzbicki at gmail.com (fwierzbicki at gmail.com) Date: Mon, 13 Aug 2012 15:35:35 -0700 Subject: [Python-Dev] [compatibility-sig] do all VMs implement the ast module? (was: Re: AST optimizer implemented in Python) In-Reply-To: References: Message-ID: On Mon, Aug 13, 2012 at 3:10 PM, Brett Cannon wrote: > Direct. 
There is an AST grammar file that gets compiled into C and Python > objects which are used by the compiler (c version) or exposed to users > (Python version). At the risk of making you repeat yourself, and just to be sure I understand: There are C objects used by the compiler and Python objects that are exposed to the users (written in C though) that are generated by the AST grammar. That at least sounds like they are different. The last I checked the grammar was Python.asdl and the translater was asdl_c.py resulting in /Python/Python-ast.c which looks like it is the implementation of _ast.py Are the AST objects from Python-ast.c used by the compiler? And what is the relationship between Python-ast.c and /Python/ast.c? And what about the CST mentioned at the top of /Python/ast.c? I ask all of this because I want to be sure that separating the internal AST in Jython from the one exposed in ast.py is really a good idea. If CPython does not make this distinction that will be a strike against the idea. -Frank From brett at python.org Tue Aug 14 01:31:47 2012 From: brett at python.org (Brett Cannon) Date: Mon, 13 Aug 2012 19:31:47 -0400 Subject: [Python-Dev] [compatibility-sig] do all VMs implement the ast module? (was: Re: AST optimizer implemented in Python) In-Reply-To: References: Message-ID: On Mon, Aug 13, 2012 at 6:35 PM, fwierzbicki at gmail.com < fwierzbicki at gmail.com> wrote: > On Mon, Aug 13, 2012 at 3:10 PM, Brett Cannon wrote: > > > Direct. There is an AST grammar file that gets compiled into C and Python > > objects which are used by the compiler (c version) or exposed to users > > (Python version). > At the risk of making you repeat yourself, and just to be sure I > understand: There are C objects used by the compiler and Python > objects that are exposed to the users (written in C though) that are > generated by the AST grammar. Both sets of objects are generated from the grammar. It's wrapping some C structs (the C version of the AST) in an extension module where the fields of the struct and the names of the types are all the same (the Python version) no matter if it is C or Python. Converting between the two is just a matter of allocating memory and copying data from one struct to another. > That at least sounds like they are > different. Are you asking if we pass the objects through transparently, or if they are just the same API? The are the same API since the AST nodes used by the compiler just have an extension exposing them that has the same names, fields, etc. But to expose the API to Python code the C-level objects are taken, pulled apart, and used to populate and exact API copy of them as Python object (i.e. the ast2obj_* functions defined in Python/Python-ast.c). To try to make this really clear, consider the Assign node type. At the C level it's just a struct:: struct { asdl_seq *targets; expr_ty value; } Assign; An asdl_seq is just an array of AST nodes. So converting to Python code is just a matter of allocating the equivalent Assign_type (which is a PythonTypeObject), and then populating its 'targets' attribute with a list of the AST nodes and its expr instance for its 'value' type. It's all very mechanical since it is all code-generated. > The last I checked the grammar was Python.asdl and the > translater was asdl_c.py resulting in /Python/Python-ast.c which looks > like it is the implementation of _ast.py > There is no _ast.py, only Lib/ast.py which just provides helper code for working with the AST (e.g. a NodeVisitor class). 
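(To make that concrete, here is a minimal sketch of what the generated Python-level API looks like from user code. It relies only on the documented ast module; the AssignCounter class is just an illustrative name, not something from the stdlib)::

    import ast

    tree = ast.parse("x = 1 + 2")           # a Module whose body holds one Assign node
    assign = tree.body[0]
    print(type(assign).__name__)             # -> Assign
    print([t.id for t in assign.targets])    # -> ['x']   (the 'targets' field)
    print(ast.dump(assign.value))            # the 'value' field, a BinOp

    class AssignCounter(ast.NodeVisitor):
        # NodeVisitor is the helper class from Lib/ast.py; the node classes come from _ast
        def __init__(self):
            self.count = 0
        def visit_Assign(self, node):
            self.count += 1
            self.generic_visit(node)

    counter = AssignCounter()
    counter.visit(tree)
    print(counter.count)                     # -> 1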
The builtin _ast module comes from Python/Python-ast.c. > > Are the AST objects from Python-ast.c used by the compiler? And what > is the relationship between Python-ast.c and /Python/ast.c? And what > about the CST mentioned at the top of /Python/ast.c? > http://docs.python.org/devguide/compiler.html explains it all. > > I ask all of this because I want to be sure that separating the > internal AST in Jython from the one exposed in ast.py is really a good > idea. If CPython does not make this distinction that will be a strike > against the idea. > As I said, depends if you mean API or actual objects. The compiler itself uses C objects which are nothing more than structs and unions. The AST exposed by the _ast module uses the same names, fields, etc., but are actual Python objects instead of structs and unions. The separation allows the compiler to save on memory costs by only using structs instead of a complete PyObject struct which would have tons of stuff that the compiler doesn't need (e.g. the AST has no methods so why waste memory on PyObject allocation for method slots that will never be set?). -------------- next part -------------- An HTML attachment was scrubbed... URL: From steve at pearwood.info Tue Aug 14 04:10:19 2012 From: steve at pearwood.info (Steven D'Aprano) Date: Tue, 14 Aug 2012 12:10:19 +1000 Subject: [Python-Dev] [compatibility-sig] do all VMs implement the ast module? (was: Re: AST optimizer implemented in Python) In-Reply-To: References: Message-ID: <5029B38B.1070501@pearwood.info> On 14/08/12 06:46, Guido van Rossum wrote: > On Mon, Aug 13, 2012 at 1:05 PM, fwierzbicki at gmail.com > wrote: >> On Mon, Aug 13, 2012 at 12:06 PM, Brett Cannon wrote: >>>> I see nothing about ast possibly being CPython only. Should there be? >>> >>> >>> Time to ask the other VMs what they are currently doing (the ast module came >>> into existence in Python 2.6 so all the VMs should be answer the question >>> since Jython is in alpha for 2.7 compatibility). > > [Jython] >> 2.5+ contains an ast.py that I obsessively compared to CPython's 2.5 >> ast.py. > > But CPython's ast.py contains very little code -- it's all done in ast.c. > > Still, I'm glad you are actually considering this a cross-language > feature, and I will gladly retract my warning. (Still, I don't know if > it is subject to the usual backward compatibility constraints.) Well, that's Jython. What about IronPython, TinyPy, CLPython, etc. and future implementations? Perhaps ast should be considered a quality of implementation module. Lack of one does not disqualify from being "Python", but it does make it a second-class implementation. -- Steven From fwierzbicki at gmail.com Tue Aug 14 05:02:32 2012 From: fwierzbicki at gmail.com (fwierzbicki at gmail.com) Date: Mon, 13 Aug 2012 20:02:32 -0700 Subject: [Python-Dev] [compatibility-sig] do all VMs implement the ast module? (was: Re: AST optimizer implemented in Python) In-Reply-To: References: Message-ID: Thanks Brett, that cleared everything up for me! And indeed it is what I'm thinking of doing for Jython (Minimal nodes for the compiler and parallel PyObjects for Python). -Frank From alex.gaynor at gmail.com Tue Aug 14 07:33:37 2012 From: alex.gaynor at gmail.com (Alex Gaynor) Date: Tue, 14 Aug 2012 05:33:37 +0000 (UTC) Subject: [Python-Dev] [compatibility-sig] do all VMs implement the ast module? 
(was: Re: AST optimizer implemented in Python) References: Message-ID: Brett Cannon python.org> writes: > > Time to ask the other VMs what they are currently doing (the ast module came into existence in Python 2.6 so all the VMs should be answer the question since Jython is in alpha for 2.7 compatibility). > As far as I know PyPy supports the ast module, and produces ASTs that are the same as CPython's. That said I do regard this as an implementation detail, further I'm guessing this is the context of the AST optimizer thread, and though I have neither the time nor the inclination to wade into that, put me down as -1 a) everything proposed there is possible, b) making this a front-and-center API makes it really easy to shoot themselves in the foot, by doing things like breaking Python with invalid optimizations (hint: almost every optimization proposed in that thread is invalid in the general case). Alex From andrew.svetlov at gmail.com Tue Aug 14 10:46:25 2012 From: andrew.svetlov at gmail.com (Andrew Svetlov) Date: Tue, 14 Aug 2012 11:46:25 +0300 Subject: [Python-Dev] [Python-checkins] cpython: Review of signature docs. In-Reply-To: <3Wx5Sr22MhzQ0c@mail.python.org> References: <3Wx5Sr22MhzQ0c@mail.python.org> Message-ID: Thank you for review. On Tue, Aug 14, 2012 at 10:45 AM, georg.brandl wrote: > http://hg.python.org/cpython/rev/e1e7d628c0b9 > changeset: 78560:e1e7d628c0b9 > user: Georg Brandl > date: Tue Aug 14 09:45:28 2012 +0200 > summary: > Review of signature docs. > > files: > Doc/library/inspect.rst | 127 +++++++++++++-------------- > 1 files changed, 62 insertions(+), 65 deletions(-) > > > diff --git a/Doc/library/inspect.rst b/Doc/library/inspect.rst > --- a/Doc/library/inspect.rst > +++ b/Doc/library/inspect.rst > @@ -397,25 +397,18 @@ > > .. _inspect-signature-object: > > -Introspecting callables with Signature Object > ---------------------------------------------- > - > -Signature object represents the call signature of a callable object and its > -return annotation. To get a Signature object use the :func:`signature` > -function. > - > +Introspecting callables with the Signature object > +------------------------------------------------- > > .. versionadded:: 3.3 > > -.. seealso:: > - > - :pep:`362` - Function Signature Object. > - The detailed specification, implementation details and examples. > - > +The Signature object represents the call signature of a callable object and its > +return annotation. To retrieve a Signature object, use the :func:`signature` > +function. > > .. function:: signature(callable) > > - Returns a :class:`Signature` object for the given ``callable``:: > + Return a :class:`Signature` object for the given ``callable``:: > > >>> from inspect import signature > >>> def foo(a, *, b:int, **kwargs): > @@ -432,24 +425,24 @@ > >>> sig.parameters['b'].annotation > > > - Accepts a wide range of python callables, from plain functions and classes > - to :func:`functools.partial` objects. > + Accepts a wide range of python callables, from plain functions and classes to > + :func:`functools.partial` objects. > > .. note:: > > - Some callables may not be introspectable in certain implementations > - of Python. For example, in CPython, built-in functions defined in C > - provide no metadata about their arguments. > + Some callables may not be introspectable in certain implementations of > + Python. For example, in CPython, built-in functions defined in C provide > + no metadata about their arguments. > > > .. 
class:: Signature > > - A Signature object represents the call signature of a function and its > - return annotation. For each parameter accepted by the function it > - stores a :class:`Parameter` object in its :attr:`parameters` collection. > + A Signature object represents the call signature of a function and its return > + annotation. For each parameter accepted by the function it stores a > + :class:`Parameter` object in its :attr:`parameters` collection. > > - Signature objects are *immutable*. Use :meth:`Signature.replace` to make > - a modified copy. > + Signature objects are *immutable*. Use :meth:`Signature.replace` to make a > + modified copy. > > .. attribute:: Signature.empty > > @@ -462,30 +455,29 @@ > > .. attribute:: Signature.return_annotation > > - The "return" annotation for the callable. If the callable has > - no "return" annotation, this attribute is set to > - :attr:`Signature.empty`. > + The "return" annotation for the callable. If the callable has no "return" > + annotation, this attribute is set to :attr:`Signature.empty`. > > .. method:: Signature.bind(*args, **kwargs) > > - Creates a mapping from positional and keyword arguments to parameters. > - Returns :class:`BoundArguments` if ``*args`` and ``**kwargs`` match > - the signature, or raises a :exc:`TypeError`. > + Create a mapping from positional and keyword arguments to parameters. > + Returns :class:`BoundArguments` if ``*args`` and ``**kwargs`` match the > + signature, or raises a :exc:`TypeError`. > > .. method:: Signature.bind_partial(*args, **kwargs) > > - Works the same way as :meth:`Signature.bind`, but allows the > - omission of some required arguments (mimics :func:`functools.partial` > - behavior.) Returns :class:`BoundArguments`, or raises a :exc:`TypeError` > - if the passed arguments do not match the signature. > + Works the same way as :meth:`Signature.bind`, but allows the omission of > + some required arguments (mimics :func:`functools.partial` behavior.) > + Returns :class:`BoundArguments`, or raises a :exc:`TypeError` if the > + passed arguments do not match the signature. > > .. method:: Signature.replace([parameters], *, [return_annotation]) > > - Creates a new Signature instance based on the instance replace was > - invoked on. It is possible to pass different ``parameters`` and/or > - ``return_annotation`` to override the corresponding properties of > - the base signature. To remove return_annotation from the copied > - Signature, pass in :attr:`Signature.empty`. > + Create a new Signature instance based on the instance replace was invoked > + on. It is possible to pass different ``parameters`` and/or > + ``return_annotation`` to override the corresponding properties of the base > + signature. To remove return_annotation from the copied Signature, pass in > + :attr:`Signature.empty`. > > :: > > @@ -497,38 +489,36 @@ > "(a, b) -> 'new return anno'" > > > - > .. class:: Parameter > > - Parameter objects are *immutable*. Instead of modifying a Parameter object, > + Parameter objects are *immutable*. Instead of modifying a Parameter object, > you can use :meth:`Parameter.replace` to create a modified copy. > > .. attribute:: Parameter.empty > > - A special class-level marker to specify absence of default > - values and annotations. > + A special class-level marker to specify absence of default values and > + annotations. > > .. attribute:: Parameter.name > > - The name of the parameter as a string. 
Must be a valid python identifier > - name (with the exception of ``POSITIONAL_ONLY`` parameters, which can > - have it set to ``None``.) > + The name of the parameter as a string. Must be a valid python identifier > + name (with the exception of ``POSITIONAL_ONLY`` parameters, which can have > + it set to ``None``). > > .. attribute:: Parameter.default > > - The default value for the parameter. If the parameter has no default > + The default value for the parameter. If the parameter has no default > value, this attribute is set to :attr:`Parameter.empty`. > > .. attribute:: Parameter.annotation > > - The annotation for the parameter. If the parameter has no annotation, > + The annotation for the parameter. If the parameter has no annotation, > this attribute is set to :attr:`Parameter.empty`. > > .. attribute:: Parameter.kind > > - Describes how argument values are bound to the parameter. > - Possible values (accessible via :class:`Parameter`, like > - ``Parameter.KEYWORD_ONLY``): > + Describes how argument values are bound to the parameter. Possible values > + (accessible via :class:`Parameter`, like ``Parameter.KEYWORD_ONLY``): > > +------------------------+----------------------------------------------+ > | Name | Meaning | > @@ -577,10 +567,10 @@ > > .. method:: Parameter.replace(*, [name], [kind], [default], [annotation]) > > - Creates a new Parameter instance based on the instance replaced was > - invoked on. To override a :class:`Parameter` attribute, pass the > - corresponding argument. To remove a default value or/and an annotation > - from a Parameter, pass :attr:`Parameter.empty`. > + Create a new Parameter instance based on the instance replaced was invoked > + on. To override a :class:`Parameter` attribute, pass the corresponding > + argument. To remove a default value or/and an annotation from a > + Parameter, pass :attr:`Parameter.empty`. > > :: > > @@ -604,18 +594,18 @@ > .. attribute:: BoundArguments.arguments > > An ordered, mutable mapping (:class:`collections.OrderedDict`) of > - parameters' names to arguments' values. Contains only explicitly > - bound arguments. Changes in :attr:`arguments` will reflect in > - :attr:`args` and :attr:`kwargs`. > + parameters' names to arguments' values. Contains only explicitly bound > + arguments. Changes in :attr:`arguments` will reflect in :attr:`args` and > + :attr:`kwargs`. > > - Should be used in conjunction with :attr:`Signature.parameters` for > - any arguments processing purposes. > + Should be used in conjunction with :attr:`Signature.parameters` for any > + argument processing purposes. > > .. note:: > > Arguments for which :meth:`Signature.bind` or > :meth:`Signature.bind_partial` relied on a default value are skipped. > - However, if needed, it's easy to include them > + However, if needed, it is easy to include them. > > :: > > @@ -638,15 +628,16 @@ > > .. attribute:: BoundArguments.args > > - Tuple of positional arguments values. Dynamically computed > - from the :attr:`arguments` attribute. > + A tuple of positional arguments values. Dynamically computed from the > + :attr:`arguments` attribute. > > .. attribute:: BoundArguments.kwargs > > - Dict of keyword arguments values. Dynamically computed > - from the :attr:`arguments` attribute. > + A dict of keyword arguments values. Dynamically computed from the > + :attr:`arguments` attribute. 
> > - :attr:`args` and :attr:`kwargs` properties can be used to invoke functions:: > + The :attr:`args` and :attr:`kwargs` properties can be used to invoke > + functions:: > > def test(a, *, b): > ... > @@ -656,6 +647,12 @@ > test(*ba.args, **ba.kwargs) > > > +.. seealso:: > + > + :pep:`362` - Function Signature Object. > + The detailed specification, implementation details and examples. > + > + > .. _inspect-classes-functions: > > Classes and functions > > -- > Repository URL: http://hg.python.org/cpython > > _______________________________________________ > Python-checkins mailing list > Python-checkins at python.org > http://mail.python.org/mailman/listinfo/python-checkins > -- Thanks, Andrew Svetlov From kristjan at ccpgames.com Tue Aug 14 12:25:52 2012 From: kristjan at ccpgames.com (=?iso-8859-1?Q?Kristj=E1n_Valur_J=F3nsson?=) Date: Tue, 14 Aug 2012 10:25:52 +0000 Subject: [Python-Dev] AST optimizer implemented in Python In-Reply-To: References: Message-ID: > -----Original Message----- > I moved the script to a new dedicated project on Bitbucket: > https://bitbucket.org/haypo/astoptimizer > > Join the project if you want to help me to build a better optimizer! > > It now works on Python 2.5-3.3. I had the idea (perhaps not an original one) that peephole optimization would be much better done in python than in C. The C code is clunky and unwieldly, wheras python would be much better suited, being able to use nifty regexes and the like. The problem is, there exists only bytecode disassembler, no corresponding assembler. Then I stumbled upon this project: http://code.google.com/p/byteplay/ Sounds like just the ticket, disassemble the code, do transformations on it, then reassemble. Haven't gotten further than that though :) K From victor.stinner at gmail.com Tue Aug 14 15:32:13 2012 From: victor.stinner at gmail.com (Victor Stinner) Date: Tue, 14 Aug 2012 15:32:13 +0200 Subject: [Python-Dev] AST optimizer implemented in Python In-Reply-To: References: Message-ID: 2012/8/14 Kristj?n Valur J?nsson : >> I moved the script to a new dedicated project on Bitbucket: >> https://bitbucket.org/haypo/astoptimizer >> >> Join the project if you want to help me to build a better optimizer! >> >> It now works on Python 2.5-3.3. > > I had the idea (perhaps not an original one) that peephole optimization would be much better > done in python than in C. The C code is clunky and unwieldly, wheras python would be much > better suited, being able to use nifty regexes and the like. > > The problem is, there exists only bytecode disassembler, no corresponding assembler. Why would you like to work on bytecode instead of AST? The AST contains much more information, you can implement better optimizations in AST. AST is also more convinient than bytecode. Victor From kristjan at ccpgames.com Tue Aug 14 16:33:33 2012 From: kristjan at ccpgames.com (=?utf-8?B?S3Jpc3Rqw6FuIFZhbHVyIErDs25zc29u?=) Date: Tue, 14 Aug 2012 14:33:33 +0000 Subject: [Python-Dev] AST optimizer implemented in Python In-Reply-To: References: Message-ID: > -----Original Message----- > From: Victor Stinner [mailto:victor.stinner at gmail.com] > Sent: 14. ?g?st 2012 13:32 > To: Kristj?n Valur J?nsson > Cc: Python Dev > Subject: Re: [Python-Dev] AST optimizer implemented in Python > > The problem is, there exists only bytecode disassembler, no corresponding > assembler. > > Why would you like to work on bytecode instead of AST? The AST contains > much more information, you can implement better optimizations in AST. 
AST > is also more convenient than bytecode. > We already optimize bytecode. But it seems much more could be done there. It also seems like a simpler goal. Also, AST will need to be changed to bytecode at some point, and that bytecode could still be optimized in ways not available to the AST, I imagine. Also, I understand bytecode, more or less :) K From hrvoje.niksic at avl.com Tue Aug 14 17:09:13 2012 From: hrvoje.niksic at avl.com (Hrvoje Niksic) Date: Tue, 14 Aug 2012 17:09:13 +0200 Subject: [Python-Dev] AST optimizer implemented in Python In-Reply-To: References: Message-ID: <502A6A19.8000701@avl.com> On 08/14/2012 03:32 PM, Victor Stinner wrote: >> I had the idea (perhaps not an original one) that peephole optimization would be much better >> done in python than in C. The C code is clunky and unwieldy, whereas python would be much >> better suited, being able to use nifty regexes and the like. >> >> The problem is, there exists only bytecode disassembler, no corresponding assembler. > > Why would you like to work on bytecode instead of AST? The AST > contains much more information, you can implement better optimizations AST allows for better high-level optimizations, but a real peephole optimization pass is actually designed to optimize generated code. This allows eliminating some inefficiencies which would be fairly hard to prevent at higher levels - Wikipedia provides some examples. From yselivanov.ml at gmail.com Tue Aug 14 19:46:33 2012 From: yselivanov.ml at gmail.com (Yury Selivanov) Date: Tue, 14 Aug 2012 13:46:33 -0400 Subject: [Python-Dev] PEPs build system Message-ID: Hi, There seems to be a problem with the PEPs build process again. As far as I see - PEP 362 and PEP 398 are out of sync with what is in the repo. Thanks, - Yury From raymond.hettinger at gmail.com Wed Aug 15 02:33:24 2012 From: raymond.hettinger at gmail.com (Raymond Hettinger) Date: Tue, 14 Aug 2012 19:33:24 -0500 Subject: [Python-Dev] Installation on Macs Message-ID: On Mountain Lion, the default security settings only allow installation of applications downloaded from the Mac App Store and "identified developers". We need to either become an "identified developer" or include some instructions on how to change the security settings (System Preferences -- General -- Unlock -- Select the "Anywhere" radio button -- Install Python -- Restore the original settings -- and Relock). Changing the security settings isn't appealing because 1) it weakens the user's security, 2) it involves multiple steps, and 3) the user will see unsettling warnings along the way. Another unrelated issue is that the instructions for updating Tcl/Tk are problematic. In the past few months, I've witnessed hundreds of people unsuccessfully trying to follow the instructions and having an immediate unpleasant out-of-the-box experience when IDLE crashes. I suggest that we stop being so indirect about the chain of footnotes and links leading to a Tcl/Tk download. I would like to see a direct Tcl/Tk updater link side-by-side with our Python installer link at http://www.python.org/download/releases/2.7.3/ Someone did add a note to the IDLE startup screen to the effect of: "WARNING: The version of Tcl/Tk (8.5.9) in use may be unstable. Visit http://www.python.org/download/mac/tcltk/ for current information." In some ways this is progress. In others, it falls short. If IDLE crashes, you can't see the message. If you have installed the ActiveTCL 8.5.12 update, you still see the warning even though it isn't necessary.
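(As an aside, a quick way to check which Tcl/Tk a given Python actually picked up -- a small sketch that uses only the standard tkinter module, spelled Tkinter on Python 2; the version numbers in the comments are just examples)::

    from tkinter import Tcl               # "from Tkinter import Tcl" on Python 2

    tcl = Tcl()                           # a bare Tcl interpreter, no Tk window needed
    print(tcl.eval('info patchlevel'))    # e.g. 8.5.9 (Apple) vs 8.5.12 (ActiveTcl)
    print(tcl.eval('info library'))       # the path shows which Tcl installation was loaded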
Also, I don't link that the referenced page is so complex and that it is full unsettling warnings, important notices, do-not-use advice, mentions of instability, etc. I would like to see our download page have something more simple, affirmative, positively worded and direct. For example: * Mac OS X 64-bit/32-bit Installer (3.2.3) for Mac OS X 10.6 and 10.7 [2] (sig). To run IDLE or Tkinter, you need to update Tcl/Tk to ActiveTcl 8.5.12 . That saves you from having to click a links down to a footnote at the bottom of the page that sends you to which is another page full of tables, warnings,etc that leads you to the Apple 8.5.9 section which is a dead-end because there are still known issues with 8.5.9, leaving you with the ActiveTCL section which has a paragraph of text obscuring the link you actually needed: http://www.activestate.com/activetcl/downloads . I applaud that some effort was made to document a solution; however, in practice the daisy chain of footnotes, tables, and links has proven unworkable for most of the engineers I've been working with. Raymond -------------- next part -------------- An HTML attachment was scrubbed... URL: From jnoller at gmail.com Wed Aug 15 03:04:01 2012 From: jnoller at gmail.com (Jesse Noller) Date: Tue, 14 Aug 2012 21:04:01 -0400 Subject: [Python-Dev] Installation on Macs In-Reply-To: References: Message-ID: <5432C23E-CABB-476E-966C-164209BA47AE@gmail.com> I think becoming an apple signed developer to get a cert is the best approach. If anyone wanted to approach apple about open source/non profit gratis licenses, that would be appreciated. Otherwise I could do it / fund it from the PSF board side, which I am happy to do. I also concur with Raymond that the download/install instructions could be simplified. Noting for users that rather than downloading Xcode, they can just download the OSX Command Line Tools installer and easy_install/pip/etc will just work would also be nice Jesse On Aug 14, 2012, at 8:33 PM, Raymond Hettinger wrote: > On Mountain Lion, the default security settings only allow installation of applications downloaded from the Mac App Stored and "identified developers". > > We need to either become an "identified developer" or include some instructions on how to change the security settings (System Preference -- General -- Unlock --Select the "Anywhere" radio button -- Install Python -- Restore the original settings -- and Relock). Changing the security settings isn't appealing because 1) it weakens the user's security 2) it involves multiple steps and 3) the user will see an unsettling warnings along the way. > > Another unrelated issue is that the instructions for updating Tcl/Tk are problematic. In the past few months, I've witnessed hundreds of people unsuccessfully trying follow the instructions and having an immediate unpleasant out-of-the-box experience when IDLE crashes. I suggest that we stop being so indirect about the chain of footnotes and links leading to a Tcl/Tk download. I would like to see a direct Tcl/Tk updater link side-by-side with our Python installer link at http://www.python.org/download/releases/2.7.3/ > > Someone did add a note the the IDLE startup screen to the effect of: "WARNING: The version of Tcl/Tk (8.5.9) in use may be unstable. > Visit http://www.python.org/download/mac/tcltk/ for current information." In some ways this is progress. In others, it falls short. If IDLE crashes, you can't see the message. 
If you have installed the ActiveTCL 8.5.12 update, you still see the warning eventhough it isn't necessary. Also, I don't link that the referenced page is so complex and that it is full unsettling warnings, important notices, do-not-use advice, mentions of instability, etc. > > I would like to see our download page have something more simple, affirmative, positively worded and direct. For example: > > * Mac OS X 64-bit/32-bit Installer (3.2.3) for Mac OS X 10.6 and 10.7 [2] (sig). To run IDLE or Tkinter, you need to update Tcl/Tk to ActiveTcl 8.5.12 . > > That saves you from having to click a links down to a footnote at the bottom of the page that sends you to which is another page full of tables, warnings,etc that leads you to the Apple 8.5.9 section which is a dead-end because there are still known issues with 8.5.9, leaving you with the ActiveTCL section which has a paragraph of text obscuring the link you actually needed: http://www.activestate.com/activetcl/downloads . > > I applaud that some effort was made to document a solution; however, in practice the daisy chain of footnotes, tables, and links has proven unworkable for most of the engineers I've been working with. > > > Raymond > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/jnoller%40gmail.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From ronaldoussoren at mac.com Wed Aug 15 09:17:41 2012 From: ronaldoussoren at mac.com (Ronald Oussoren) Date: Wed, 15 Aug 2012 09:17:41 +0200 Subject: [Python-Dev] Installation on Macs In-Reply-To: References: Message-ID: On 15 Aug, 2012, at 2:33, Raymond Hettinger wrote: > On Mountain Lion, the default security settings only allow installation of applications downloaded from the Mac App Stored and "identified developers". > > We need to either become an "identified developer" or include some instructions on how to change the security settings (System Preference -- General -- Unlock --Select the "Anywhere" radio button -- Install Python -- Restore the original settings -- and Relock). Changing the security settings isn't appealing because 1) it weakens the user's security 2) it involves multiple steps and 3) the user will see an unsettling warnings along the way. You don't have to change the security settings, choosing "open" from the context menu will to the trick. This is only needed for the installation after downloading using a browser that supports Apple's quarantine solution (such as Safari, I don't know if other browsers mark files as being quarantined). That said, signing the installer would be more friendly to users and Ned has opened an issue for this: > > Another unrelated issue is that the instructions for updating Tcl/Tk are problematic. In the past few months, I've witnessed hundreds of people unsuccessfully trying follow the instructions and having an immediate unpleasant out-of-the-box experience when IDLE crashes. I suggest that we stop being so indirect about the chain of footnotes and links leading to a Tcl/Tk download. I would like to see a direct Tcl/Tk updater link side-by-side with our Python installer link at http://www.python.org/download/releases/2.7.3/ > > Someone did add a note the the IDLE startup screen to the effect of: "WARNING: The version of Tcl/Tk (8.5.9) in use may be unstable. 
> Visit http://www.python.org/download/mac/tcltk/ for current information." In some ways this is progress. In others, it falls short. If IDLE crashes, you can't see the message. If you have installed the ActiveTCL 8.5.12 update, you still see the warning eventhough it isn't necessary. Also, I don't link that the referenced page is so complex and that it is full unsettling warnings, important notices, do-not-use advice, mentions of instability, etc. Tk on OSX is a mess. The version that Apple ships tends to crash a lot on even slightly complicated code (or just someone that tries to input accented characters). The ActiveState download is better, but there are still crashes and unexpected behavior with that version. AFAIK most bug reports about IDLE not working correctly on OSX are due to issues with Tk, Ned should know more about that as he's the one that tends to look into those issues. > > I would like to see our download page have something more simple, affirmative, positively worded and direct. For example: > > * Mac OS X 64-bit/32-bit Installer (3.2.3) for Mac OS X 10.6 and 10.7 [2] (sig). To run IDLE or Tkinter, you need to update Tcl/Tk to ActiveTcl 8.5.12 . > > That saves you from having to click a links down to a footnote at the bottom of the page that sends you to which is another page full of tables, warnings,etc that leads you to the Apple 8.5.9 section which is a dead-end because there are still known issues with 8.5.9, leaving you with the ActiveTCL section which has a paragraph of text obscuring the link you actually needed: http://www.activestate.com/activetcl/downloads . > > I applaud that some effort was made to document a solution; however, in practice the daisy chain of footnotes, tables, and links has proven unworkable for most of the engineers I've been working with. +1 on adding direct download links for Tk to the main download page. Ronald > > > Raymond > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/ronaldoussoren%40mac.com -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 4788 bytes Desc: not available URL: From nad at acm.org Wed Aug 15 11:30:17 2012 From: nad at acm.org (Ned Deily) Date: Wed, 15 Aug 2012 02:30:17 -0700 Subject: [Python-Dev] Installation on Macs References: <5432C23E-CABB-476E-966C-164209BA47AE@gmail.com> Message-ID: Raymond raises a couple of issues and Jesse comments on those and brings up another issue. Let me address each in turn (and I apologize for the length!): 1. Gatekeeper singing on 10.8 In article , Raymond Hettinger wrote: > On Mountain Lion, the default security settings only allow installation of > applications downloaded from the Mac App Stored and "identified developers". > > We need to either become an "identified developer" or include some > instructions on how to change the security settings (System Preference -- > General -- Unlock --Select the "Anywhere" radio button -- Install Python -- > Restore the original settings -- and Relock). Changing the security settings > isn't appealing because 1) it weakens the user's security 2) it involves > multiple steps and 3) the user will see an unsettling warnings along the way. 
Yes, Gatekeeper support is a known desirable feature for OS X 10.8 (Mountain Lion). There are a number of issues involved, involving code, process, and the PSF as a legal entity. Rather than going into all the gory details here, see http://bugs.python.org/issue15661 which I've opened to track what needs to be done. Quick summary is that we need to change the installer format that is used to be able to participate in the installer singing program and the PSF will likely need to be involved as the legal entity "owning" the certificates that need to be signed. Ronald and I are aware of the issues but, to be honest, this has been a lower-priority issue compared to others for the Python 3.3.0 release. Now that 3.3.0b2 is out and things seem to be in pretty good shape, this issue is now in the top set of my list and I have been working on it recently. By the way, it's not necessary to use System Preferences to change the security settings, although Apple doesn't make it obvious that this is the case. As documented here (http://support.apple.com/kb/HT5290) and in the online help, you can override the signing check by using control-click on the installer mpkg file and selecting Open using ... Installer (or use spctl(8) from the command line). Thread-tie: the current ActiveTcl installers for OS X are also not yet signed so attempting to install them on 10.8 currently results in the same user experience as with python.org installers. In article <5432C23E-CABB-476E-966C-164209BA47AE at gmail.com>, Jesse Noller wrote: > I think becoming an apple signed developer to get a cert is the best > approach. > > If anyone wanted to approach apple about open source/non profit gratis > licenses, that would be appreciated. > > Otherwise I could do it / fund it from the PSF board side, which I am happy > to do. Thanks, Jesse. There seems to be a fairly straightforward process for a corporate entity to request a development team membership from Python (at nominal cost, see the references in the opened issue). As the developer ID program is new to me, I have been intending to propose something officially to PSF officers once we were further along with implementation and testing. With Ronald's concurrence, I will make sure to follow up with you and/or Van when we are further along. 2. Tcl/Tk on OS X Raymond: > Another unrelated issue is that the instructions for updating Tcl/Tk are > problematic. In the past few months, I've witnessed hundreds of people > unsuccessfully trying follow the instructions and having an immediate > unpleasant out-of-the-box experience when IDLE crashes. I suggest that we > stop being so indirect about the chain of footnotes and links leading to a > Tcl/Tk download. I would like to see a direct Tcl/Tk updater link > side-by-side with our Python installer link at > http://www.python.org/download/releases/2.7.3/ [...] > I would like to see our download page have something more simple, > affirmative, positively worded and direct. For example: > > * Mac OS X 64-bit/32-bit Installer (3.2.3) for Mac OS X 10.6 and 10.7 [2] > (sig). To run IDLE or Tkinter, you need to update Tcl/Tk to ActiveTcl 8.5.12 > . I am open to changing the wording. However, as I've noted in the past, I think it's problematic to use wording that implies you can unconditionally download and install ActiveState's Tcl. I really appreciate the great work that the ActiveState folks do and am happy to recommend people to use it. But not everyone can without cost. 
The free (as in beer) ActiveTcl Community Edition is not open source and it is released with a license that restricts the use of some parts of ActiveTcl, the pieces that ActiveState have developed themselves. That's a perfectly understandable business decision. I am not a lawyer so I'm not in a position to say to our users whether or not they can legally download and use ActiveTcl without entering into some other license arrangement. That's one reason why the links send users to the special page. I'd would be happy to see wording on the release pages that incorporate that sense. I'll see what I can come up with and propose something. Let me know if you have any specific suggestions or if you think my concerns are misplaced. http://www.activestate.com/activetcl/license-agreement http://www.python.org/download/releases/3.3.0/ http://www.python.org/download/mac/tcltk/ > Someone did add a note the the IDLE startup screen to the effect of: > "WARNING: The version of Tcl/Tk (8.5.9) in use may be unstable. > Visit http://www.python.org/download/mac/tcltk/ for current information." > In some ways this is progress. In others, it falls short. If IDLE crashes, > you can't see the message. The warning message when IDLE.app is run with the buggy Apple-supplied 10.6 Tcl/Tk has been available since 3.2 and 2.7.2 (Issue10907). I did recently update all branches to warn about the 10.7/10.8 versions as well; they are not as totally broken as 10.6 was but can still crash easily from the "wrong" user keyboard input. > If you have installed the ActiveTCL 8.5.12 > update, you still see the warning eventhough it isn't necessary. That should not be the case with installers downloaded from python.org. If you can reproduce, please check to see the actual path to the Tcl and Tk frameworks. Probably the easiest way for IDLE.app is to launch "/Applications/Utilites/Actiity Monitor.app", select the IDLE process and click on Inspect. In the list of open files, you should see /Library/Frameworks/Tcl.framework/Versions/8.5/Tcl and /Library/Frameworks/Tk.framework/Versions/8.5/Tk if ActiveTcl is being linked or /System/Library/Frameworks/Tcl.framework/Versions/8.5/Tcl and Tk if the Apple system versions are being used. If you are using Pythons from another source or self-built, there is no guarantee that they will link to the ActiveTcl versions without taking some steps during building. Let me know if I can help with that. > Also, I > don't link that the referenced page is so complex and that it is full > unsettling warnings, important notices, do-not-use advice, mentions of > instability, etc. Well, the situation *is* pretty complex. We support a *lot* of configurations and there are a lot of gotchas. We have have taken some steps to simplify things, like dropping installer support for 10.3 and 10.4 with Python 3.3 but unfortunately the most problematic OS X releases for Apple Tcl are the most recent ones (10.6, in particular). Prior to 3.3.0 release, I intend to review and revise it. Wording change suggestions are welcome! But I totally agree that the user experience is not good. The only way I see to make a major improvement is to get into the business of building and supplying Tcl/Tk with the OS X installers, as is currently done for the Windows installers. Now that the Mac Tcl community has been getting more involved in maintaining the Cocoa port themselves and things are getting more stable (and Apple continues to appear to be uninterested), perhaps it is time for us to bite the bullet. 
I've opened Issue15663 to look into that for 3.4 (and *possibly* for earlier maintenance releases). 3. Download instructions and Xcode Jesse: > I also concur with Raymond that the download/install instructions could be > simplified. Noting for users that rather than downloading Xcode, they can > just download the OSX Command Line Tools installer and easy_install/pip/etc > will just work would also be nice The Mac section of the Python docset is woefully out-of-date (from long before I got involved!) and I plan to give it a major update for 3.3.0. The whole business of what's needed to build extension modules on OS X got *much* more complicated with Xcode 4 (the default for 10.7 and 10.8) and each minor release of Xcode 4 has brought new changes. The introduction of the stand-alone Command Line Tools (with Xcode 4.2?) was a nice addition but there are gotchas with using it, for example, extension building with current Python 3.2.3 installers do not work out-of-the-box with the CLT because the CLT do not provide SDKs. 2.7.3 is better but neither of the current 32-bit installers will work out of the box with Xcode 4. A lot of work has gone into 3.3.0 to make extension building and building Python itself play more nicely with Xcode 4 without breaking support for older versions. Some subset of that support will get backported for 2.7.4 and 3.2.4 once 3.3.0 is done. Also, if network bandwidth and disk space usage are not major concerns, it may now be procedurally easier for most people to install Xcode 4 than the Command Line Tools package since the former is now available for free download from the Mac App Store while the latter still requires registering for a (free) Apple Developer Id and download from the Apple Developer site. And since Xcode 4 has been partitioned up into smaller downloadable components, the Xcode download is smaller as it is not necessary to download everything (including iOS development tools) as was the case with Xcode 3. -- Ned Deily, nad at acm.org From solipsis at pitrou.net Wed Aug 15 12:21:05 2012 From: solipsis at pitrou.net (Antoine Pitrou) Date: Wed, 15 Aug 2012 12:21:05 +0200 Subject: [Python-Dev] Installation on Macs References: <5432C23E-CABB-476E-966C-164209BA47AE@gmail.com> Message-ID: <20120815122105.3847e734@pitrou.net> On Wed, 15 Aug 2012 02:30:17 -0700 Ned Deily wrote: > > 1. Gatekeeper singing on 10.8 > > [...] Quick summary is that we need to > change the installer format that is used to be able to participate in > the installer singing program I first thought Apple had gone poetic and then I realized it's a typo (singing / signing). Too bad. Regards Antoine. -- Software development and contracting: http://pro.pitrou.net From ronaldoussoren at mac.com Wed Aug 15 11:37:55 2012 From: ronaldoussoren at mac.com (Ronald Oussoren) Date: Wed, 15 Aug 2012 11:37:55 +0200 Subject: [Python-Dev] Installation on Macs In-Reply-To: References: <5432C23E-CABB-476E-966C-164209BA47AE@gmail.com> Message-ID: <8D522555-5739-4840-BE8F-828420F1E0BC@mac.com> On 15 Aug, 2012, at 11:30, Ned Deily wrote: > > > > 3. Download instructions and Xcode > > Jesse: >> I also concur with Raymond that the download/install instructions could be >> simplified. Noting for users that rather than downloading Xcode, they can >> just download the OSX Command Line Tools installer and easy_install/pip/etc >> will just work would also be nice > > The Mac section of the Python docset is woefully out-of-date (from long > before I got involved!) and I plan to give it a major update for 3.3.0. 
> The whole business of what's needed to build extension modules on OS X > got *much* more complicated with Xcode 4 (the default for 10.7 and 10.8) > and each minor release of Xcode 4 has brought new changes. The > introduction of the stand-alone Command Line Tools (with Xcode 4.2?) was > a nice addition but there are gotchas with using it, for example, > extension building with current Python 3.2.3 installers do not work > out-of-the-box with the CLT because the CLT do not provide SDKs. 2.7.3 > is better but neither of the current 32-bit installers will work out of > the box with Xcode 4. A lot of work has gone into 3.3.0 to make > extension building and building Python itself play more nicely with > Xcode 4 without breaking support for older versions. Some subset of > that support will get backported for 2.7.4 and 3.2.4 once 3.3.0 is done. > > Also, if network bandwidth and disk space usage are not major concerns, > it may now be procedurally easier for most people to install Xcode 4 > than the Command Line Tools package since the former is now available > for free download from the Mac App Store while the latter still requires > registering for a (free) Apple Developer Id and download from the Apple > Developer site. And since Xcode 4 has been partitioned up into smaller > downloadable components, the Xcode download is smaller as it is not > necessary to download everything (including iOS development tools) as > was the case with Xcode 3. Another advantage of installing all of Xcode is that the appstore will warn when a new version is available, and when you start Xcode you can still install the command-line tools (and Xcode will warn when that installation is out of date). Ronald -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 4788 bytes Desc: not available URL: From nad at acm.org Wed Aug 15 12:39:50 2012 From: nad at acm.org (Ned Deily) Date: Wed, 15 Aug 2012 03:39:50 -0700 Subject: [Python-Dev] Installation on Macs References: <5432C23E-CABB-476E-966C-164209BA47AE@gmail.com> <20120815122105.3847e734@pitrou.net> Message-ID: In article <20120815122105.3847e734 at pitrou.net>, Antoine Pitrou wrote: > On Wed, 15 Aug 2012 02:30:17 -0700 > Ned Deily wrote: > > 1. Gatekeeper singing on 10.8 > > > > [...] Quick summary is that we need to > > change the installer format that is used to be able to participate in > > the installer singing program > > I first thought Apple had gone poetic and then I realized it's a typo > (singing / signing). Too bad. Perhaps the installers periodically get together to silently sing _The Old Signing Blues_. -- Ned Deily, nad at acm.org From andrew.svetlov at gmail.com Wed Aug 15 16:17:16 2012 From: andrew.svetlov at gmail.com (Andrew Svetlov) Date: Wed, 15 Aug 2012 17:17:16 +0300 Subject: [Python-Dev] [Python-checkins] cpython (merge 3.2 -> default): #11062: Fix adding a message from file to Babyl mailbox In-Reply-To: <3Wxpgg4QByzQBX@mail.python.org> References: <3Wxpgg4QByzQBX@mail.python.org> Message-ID: Looks like it is the source of buildbot fail on Windows box. 
On Wed, Aug 15, 2012 at 2:42 PM, petri.lehtinen wrote: > http://hg.python.org/cpython/rev/7c8c6b905a18 > changeset: 78586:7c8c6b905a18 > parent: 78583:8d90fde35cc6 > parent: 78585:cbc1dc8cda06 > user: Petri Lehtinen > date: Wed Aug 15 14:36:14 2012 +0300 > summary: > #11062: Fix adding a message from file to Babyl mailbox > > files: > Lib/mailbox.py | 2 +- > Lib/test/test_mailbox.py | 18 ++++++------------ > Misc/NEWS | 2 ++ > 3 files changed, 9 insertions(+), 13 deletions(-) > > > diff --git a/Lib/mailbox.py b/Lib/mailbox.py > --- a/Lib/mailbox.py > +++ b/Lib/mailbox.py > @@ -1440,9 +1440,9 @@ > line = line[:-1] + b'\n' > self._file.write(line.replace(b'\n', linesep)) > if line == b'\n' or not line: > - self._file.write(b'*** EOOH ***' + linesep) > if first_pass: > first_pass = False > + self._file.write(b'*** EOOH ***' + linesep) > message.seek(original_pos) > else: > break > diff --git a/Lib/test/test_mailbox.py b/Lib/test/test_mailbox.py > --- a/Lib/test/test_mailbox.py > +++ b/Lib/test/test_mailbox.py > @@ -152,20 +152,16 @@ > f.write(_bytes_sample_message) > f.seek(0) > key = self._box.add(f) > - # See issue 11062 > - if not isinstance(self._box, mailbox.Babyl): > - self.assertEqual(self._box.get_bytes(key).split(b'\n'), > - _bytes_sample_message.split(b'\n')) > + self.assertEqual(self._box.get_bytes(key).split(b'\n'), > + _bytes_sample_message.split(b'\n')) > > def test_add_binary_nonascii_file(self): > with tempfile.TemporaryFile('wb+') as f: > f.write(self._non_latin_bin_msg) > f.seek(0) > key = self._box.add(f) > - # See issue 11062 > - if not isinstance(self._box, mailbox.Babyl): > - self.assertEqual(self._box.get_bytes(key).split(b'\n'), > - self._non_latin_bin_msg.split(b'\n')) > + self.assertEqual(self._box.get_bytes(key).split(b'\n'), > + self._non_latin_bin_msg.split(b'\n')) > > def test_add_text_file_warns(self): > with tempfile.TemporaryFile('w+') as f: > @@ -173,10 +169,8 @@ > f.seek(0) > with self.assertWarns(DeprecationWarning): > key = self._box.add(f) > - # See issue 11062 > - if not isinstance(self._box, mailbox.Babyl): > - self.assertEqual(self._box.get_bytes(key).split(b'\n'), > - _bytes_sample_message.split(b'\n')) > + self.assertEqual(self._box.get_bytes(key).split(b'\n'), > + _bytes_sample_message.split(b'\n')) > > def test_add_StringIO_warns(self): > with self.assertWarns(DeprecationWarning): > diff --git a/Misc/NEWS b/Misc/NEWS > --- a/Misc/NEWS > +++ b/Misc/NEWS > @@ -13,6 +13,8 @@ > Library > ------- > > +- Issue #11062: Fix adding a message from file to Babyl mailbox. > + > - Issue #15646: Prevent equivalent of a fork bomb when using > multiprocessing on Windows without the "if __name__ == '__main__'" > idiom. > > -- > Repository URL: http://hg.python.org/cpython > > _______________________________________________ > Python-checkins mailing list > Python-checkins at python.org > http://mail.python.org/mailman/listinfo/python-checkins > -- Thanks, Andrew Svetlov From dholth at gmail.com Wed Aug 15 16:49:42 2012 From: dholth at gmail.com (Daniel Holth) Date: Wed, 15 Aug 2012 10:49:42 -0400 Subject: [Python-Dev] Edits to Metadata 1.2 to add extras (optional dependencies) Message-ID: I've drafted some edits to Metadata 1.2 with valuable feedback from distutils-sig (special thanks to Erik Bray), which seems to have no more comments on the issue after about 6 weeks. Let me know if you have an opinion, or if you will have one during some bounded time in the future. 
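For readers who have not used the setuptools feature described below, the existing "extras" spelling that the proposed metadata fields are meant to capture looks roughly like this in a setup.py; the project and dependency names here are made up for illustration only:

    # Hypothetical setup.py fragment showing setuptools "extras":
    # optional groups of dependencies that are only installed on request.
    from setuptools import setup

    setup(
        name='examplepkg',              # made-up project name
        version='1.0',
        install_requires=['requests'],  # always installed
        extras_require={
            'pdf': ['reportlab'],       # pulled in only for examplepkg[pdf]
            'test': ['nose'],
        },
    )

Requesting the extra (e.g. installing "examplepkg[pdf]") then adds reportlab on top of the base requirements.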
Metadata 1.2 (PEP 345), a non-final PEP that has been adopted by approximately 10 of the latest sdists from pypy, cannot represent the setuptools "extras" (optional dependencies) feature. This is a problem because about 1600+ or 10% of the packages hosted on pypy define "extras" as measured in May of this year. The edit implements the extras feature by adding a new condition "extra == 'name'" to the Metadata 1.2 environment markers. Requirements with this marker are only installed when the named optional feature is requested. Valid extras for a package must be declared with Provides-Extra: name. It also adds Setup-Requires-Dist as a way to specify requirements needed during an install as opposed to during runtime. Abbreviated highlights: Setup-Requires-Dist (multiple use) Like Requires-Dist, but names dependencies needed while the distributions's distutils / packaging `setup.py` / `setup.cfg` is run. Provides-Extra (multiple use) A string containing the name of an optional feature. Examples: Requires-Dist: reportlab; extra == 'pdf' Requires-Dist: nose; extra == 'test' Requires-Dist: sphinx; extra == 'doc' (full changeset on https://bitbucket.org/dholth/python-peps/changeset/537e83bd4068) Thanks, Daniel Holth From ericsnowcurrently at gmail.com Wed Aug 15 17:05:53 2012 From: ericsnowcurrently at gmail.com (Eric Snow) Date: Wed, 15 Aug 2012 09:05:53 -0600 Subject: [Python-Dev] Edits to Metadata 1.2 to add extras (optional dependencies) In-Reply-To: References: Message-ID: On Wed, Aug 15, 2012 at 8:49 AM, Daniel Holth wrote: > I've drafted some edits to Metadata 1.2 with valuable feedback from > distutils-sig (special thanks to Erik Bray), which seems to have no > more comments on the issue after about 6 weeks. Let me know if you > have an opinion, or if you will have one during some bounded time in > the future. > > Metadata 1.2 (PEP 345), a non-final PEP that has been adopted by > approximately 10 of the latest sdists from pypy, cannot represent the > setuptools "extras" (optional dependencies) feature. This is a problem > because about 1600+ or 10% of the packages hosted on pypy define > "extras" as measured in May of this year. > > The edit implements the extras feature by adding a new condition > "extra == 'name'" to the Metadata 1.2 environment markers. > Requirements with this marker are only installed when the named > optional feature is requested. Valid extras for a package must be > declared with Provides-Extra: name. > > It also adds Setup-Requires-Dist as a way to specify requirements > needed during an install as opposed to during runtime. > > > Abbreviated highlights: > > Setup-Requires-Dist (multiple use) > > Like Requires-Dist, but names dependencies needed while the > distributions's distutils / packaging `setup.py` / `setup.cfg` is run. > > > Provides-Extra (multiple use) > > A string containing the name of an optional feature. 
> > Examples: > > Requires-Dist: reportlab; extra == 'pdf' > > Requires-Dist: nose; extra == 'test' > > Requires-Dist: sphinx; extra == 'doc' > > > (full changeset on > https://bitbucket.org/dholth/python-peps/changeset/537e83bd4068) s/pypy/PyPI/ -eric From solipsis at pitrou.net Thu Aug 16 00:25:42 2012 From: solipsis at pitrou.net (Antoine Pitrou) Date: Thu, 16 Aug 2012 00:25:42 +0200 Subject: [Python-Dev] cpython: Add yet another test for subprocess.Popen.communicate References: <3Wy1b22K93zPlj@mail.python.org> Message-ID: <20120816002542.139dfc65@pitrou.net> On Wed, 15 Aug 2012 21:54:06 +0200 (CEST) andrew.svetlov wrote: > > diff --git a/Lib/test/test_subprocess.py b/Lib/test/test_subprocess.py > --- a/Lib/test/test_subprocess.py > +++ b/Lib/test/test_subprocess.py > @@ -645,6 +645,34 @@ > p.communicate() > self.assertEqual(p.returncode, 0) > > + def test_universal_newlines_communicate_stdin_stdout_stderr(self): > + # universal newlines through communicate(), with only stdin > + p = subprocess.Popen([sys.executable, "-c", > + 'import sys,os;' + SETBINARY + '''\nif True: > + s = sys.stdin.readline() > + sys.stdout.write(s) > + sys.stdout.write("line2\\r") > + sys.stderr.write("eline2\\n") > + s = sys.stdin.read() > + sys.stdout.write(s+"line4\\n") > + sys.stdout.write(s+"line5\\r\\n") > + sys.stderr.write("eline6\\n") > + sys.stderr.write("eline7\\r") > + sys.stderr.write("eline8\\r\\n") > + '''], This test is wrong. You need to write your test data as binary data on the binary output streams, as in the other tests. Using the text output streams introduces a spurious line ending conversion, which makes the test fail under Windows: http://buildbot.python.org/all/builders/AMD64%20Windows7%20SP1%203.x/builds/486 > + # Python debug build push something like "[42442 refs]\n" > + # to stderr at exit of subprocess. > + self.assertTrue(stderr.startswith("eline2\neline6\neline7\neline8\n")) You should use self.assertStderrEqual() instead. Regards Antoine. -- Software development and contracting: http://pro.pitrou.net From storchaka at gmail.com Thu Aug 16 09:01:41 2012 From: storchaka at gmail.com (Serhiy Storchaka) Date: Thu, 16 Aug 2012 10:01:41 +0300 Subject: [Python-Dev] cpython (3.2): open() / TextIOWrapper doc: make it explicit than newline='\n' doesn't In-Reply-To: <3Wpktm36GKzPnB@mail.python.org> References: <3Wpktm36GKzPnB@mail.python.org> Message-ID: On 04.08.12 02:27, victor.stinner wrote: > http://hg.python.org/cpython/rev/243ad1a6f638 > changeset: 78403:243ad1a6f638 > branch: 3.2 > parent: 78400:f19bea7bbee7 > user: Victor Stinner > date: Sat Aug 04 01:18:56 2012 +0200 > summary: > open() / TextIOWrapper doc: make it explicit than newline='\n' doesn't > translate newlines on output. > + " newline is '' or '\n', no translation takes place. If newline is any\n" Non-escaped "\n". From andrew.svetlov at gmail.com Thu Aug 16 19:20:25 2012 From: andrew.svetlov at gmail.com (Andrew Svetlov) Date: Thu, 16 Aug 2012 20:20:25 +0300 Subject: [Python-Dev] cpython: Add yet another test for subprocess.Popen.communicate In-Reply-To: <20120816002542.139dfc65@pitrou.net> References: <3Wy1b22K93zPlj@mail.python.org> <20120816002542.139dfc65@pitrou.net> Message-ID: Fixed in 150fa296f5b9. New version uses binary stream for output. assertStderrEqual cannot be applied because it strips newlines which are subject for check. 
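For anyone following along, the behaviour the test exercises is essentially this; a minimal sketch only (Python 3), not the actual test from the repository:

    # The child writes raw bytes with mixed line endings; with
    # universal_newlines=True the parent reads them back translated to '\n'.
    import subprocess
    import sys

    child = "import sys; sys.stdout.buffer.write(b'line1\\r\\nline2\\rline3\\n')"
    p = subprocess.Popen([sys.executable, '-c', child],
                         stdout=subprocess.PIPE,
                         universal_newlines=True)
    out, _ = p.communicate()
    print(repr(out))   # expected: 'line1\nline2\nline3\n'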
On Thu, Aug 16, 2012 at 1:25 AM, Antoine Pitrou wrote: > On Wed, 15 Aug 2012 21:54:06 +0200 (CEST) > andrew.svetlov wrote: >> >> diff --git a/Lib/test/test_subprocess.py b/Lib/test/test_subprocess.py >> --- a/Lib/test/test_subprocess.py >> +++ b/Lib/test/test_subprocess.py >> @@ -645,6 +645,34 @@ >> p.communicate() >> self.assertEqual(p.returncode, 0) >> >> + def test_universal_newlines_communicate_stdin_stdout_stderr(self): >> + # universal newlines through communicate(), with only stdin >> + p = subprocess.Popen([sys.executable, "-c", >> + 'import sys,os;' + SETBINARY + '''\nif True: >> + s = sys.stdin.readline() >> + sys.stdout.write(s) >> + sys.stdout.write("line2\\r") >> + sys.stderr.write("eline2\\n") >> + s = sys.stdin.read() >> + sys.stdout.write(s+"line4\\n") >> + sys.stdout.write(s+"line5\\r\\n") >> + sys.stderr.write("eline6\\n") >> + sys.stderr.write("eline7\\r") >> + sys.stderr.write("eline8\\r\\n") >> + '''], > > This test is wrong. You need to write your test data as binary data on > the binary output streams, as in the other tests. Using the text > output streams introduces a spurious line ending conversion, which makes > the test fail under Windows: > http://buildbot.python.org/all/builders/AMD64%20Windows7%20SP1%203.x/builds/486 > >> + # Python debug build push something like "[42442 refs]\n" >> + # to stderr at exit of subprocess. >> + self.assertTrue(stderr.startswith("eline2\neline6\neline7\neline8\n")) > > You should use self.assertStderrEqual() instead. > > Regards > > Antoine. > > > -- > Software development and contracting: http://pro.pitrou.net > > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/andrew.svetlov%40gmail.com -- Thanks, Andrew Svetlov From trent at snakebite.org Fri Aug 17 10:00:50 2012 From: trent at snakebite.org (Trent Nelson) Date: Fri, 17 Aug 2012 04:00:50 -0400 Subject: [Python-Dev] Mountain Lion drops sign of zero, breaks test_cmath... Message-ID: <20120817075926.GC42732@snakebite.org> The Mountain Lion build slave I set up earlier this evening fails on test_cmath: ====================================================================== FAIL: test_specific_values (test.test_cmath.CMathTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Volumes/bay2/buildslave/cpython/2.7.snakebite-mountainlion-amd64/build/Lib/test/test_cmath.py", line 352, in test_specific_values msg=error_message) File "/Volumes/bay2/buildslave/cpython/2.7.snakebite-mountainlion-amd64/build/Lib/test/test_cmath.py", line 94, in rAssertAlmostEqual 'got {!r}'.format(a, b)) AssertionError: atan0000: atan(complex(0.0, 0.0)) Expected: complex(0.0, 0.0) Received: complex(0.0, -0.0) Received value insufficiently close to expected value. Mountain Lion's atan/log1p appear to drop the negative sign when passed in -0.0, whereas previous versions of OS X didn't: Mountain Lion: % ~/log1p-viper log1p_drops_zero_sign_test: atan2(log1p(-0.), -1.) != atan2(-0., -1.) 3.14159 vs -3.14159 atan_drops_zero_sign_test: atan2(-0., 0.): -0.00000 atan2( 0., -0.): 3.14159 atan2(-0., -0.): -3.14159 atan2( 0., 0.): 0.00000 log1p(-0.): 0.00000 log1p( 0.): 0.00000 Lion: % ./log1p log1p_drops_zero_sign_test: atan2(log1p(-0.), -1.) == atan2(-0., -1.) 
-3.14159 vs -3.14159 atan_drops_zero_sign_test: atan2(-0., 0.): -0.00000 atan2( 0., -0.): 3.14159 atan2(-0., -0.): -3.14159 atan2( 0., 0.): 0.00000 log1p(-0.): -0.00000 log1p( 0.): 0.00000 (The C code for that is below.) configure.ac already has a test for this (it makes mention of AIX having similar behaviour), and the corresponding sysconfig entry named 'LOG1P_DROPS_ZERO_SIGN' is already being used on a few tests, i.e.: # The algorithm used for atan and atanh makes use of the system # log1p function; If that system function doesn't respect the sign # of zero, then atan and atanh will also have difficulties with # the sign of complex zeros. @requires_IEEE_754 @unittest.skipIf(sysconfig.get_config_var('LOG1P_DROPS_ZERO_SIGN'), "system log1p() function doesn't preserve the sign") def testAtanSign(self): for z in complex_zeros: self.assertComplexIdentical(cmath.atan(z), z) @requires_IEEE_754 @unittest.skipIf(sysconfig.get_config_var('LOG1P_DROPS_ZERO_SIGN'), "system log1p() function doesn't preserve the sign") def testAtanhSign(self): for z in complex_zeros: self.assertComplexIdentical(cmath.atanh(z), z) Taking a look at cmath_testcases.txt, and we can see this: -- These are tested in testAtanSign in test_cmath.py -- atan0000 atan 0.0 0.0 -> 0.0 0.0 -- atan0001 atan 0.0 -0.0 -> 0.0 -0.0 -- atan0002 atan -0.0 0.0 -> -0.0 0.0 -- atan0003 atan -0.0 -0.0 -> -0.0 -0.0 However, a few lines down, those tests crop up again: -- special values atan1000 atan -0.0 0.0 -> -0.0 0.0 atan1014 atan 0.0 0.0 -> 0.0 0.0 ....which is what causes the current test failures. I hacked test_cmath.py a bit to spit out all the errors it finds after it's finished parsing the test file (instead of bombing out on the first one), and it yielded this: FAIL: test_specific_values (test.test_cmath.CMathTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Volumes/bay2/buildslave/cpython/3.2.snakebite-mountainlion-amd64/build/Lib/test/test_cmath.py", line 446, in test_specific_values self.fail("\n".join(failures)) AssertionError: atan1000: atan(complex(-0.0, 0.0)) Expected: complex(-0.0, 0.0) Received: complex(-0.0, -0.0) Received value insufficiently close to expected value. atan1014: atan(complex(0.0, 0.0)) Expected: complex(0.0, 0.0) Received: complex(0.0, -0.0) Received value insufficiently close to expected value. atanh0225: atanh(complex(-0.0, 5.6067e-320)) Expected: complex(-0.0, 5.6067e-320) Received: complex(0.0, 5.6067e-320) Received value insufficiently close to expected value. atanh0227: atanh(complex(-0.0, -3.0861101e-316)) Expected: complex(-0.0, -3.0861101e-316) Received: complex(0.0, -3.0861101e-316) Received value insufficiently close to expected value. atanh1024: atanh(complex(-0.0, -0.0)) Expected: complex(-0.0, -0.0) Received: complex(0.0, -0.0) Received value insufficiently close to expected value. atanh1034: atanh(complex(-0.0, 0.0)) Expected: complex(-0.0, 0.0) Received: complex(0.0, 0.0) Received value insufficiently close to expected value. This is the patch I came up with against test_cmath.py: xenon% hg diff Lib/test/test_cmath.py diff -r ce49599b9fdf Lib/test/test_cmath.py --- a/Lib/test/test_cmath.py Thu Aug 16 22:14:43 2012 +0200 +++ b/Lib/test/test_cmath.py Fri Aug 17 07:54:05 2012 +0000 @@ -121,8 +121,10 @@ # if both a and b are zero, check whether they have the same sign # (in theory there are examples where it would be legitimate for a # and b to have opposite signs; in practice these hardly ever - # occur). 
- if not a and not b: + # occur) -- the exception to this is if we're on a system that drops + # the sign on zeros. + drops_zero_sign = sysconfig.get_config_var('LOG1P_DROPS_ZERO_SIGN') + if not drops_zero_sign and not a and not b: if math.copysign(1., a) != math.copysign(1., b): self.fail(msg or 'zero has wrong sign: expected {!r}, ' 'got {!r}'.format(a, b)) With that applied, all the test_cmath tests pass again (without any changes to the test file). Thoughts? Trent. -- C code for the example earlier: #include <stdio.h> #include <math.h> int main(int argc, char **argv) { printf("\nlog1p_drops_zero_sign_test:\n"); if (atan2(log1p(-0.), -1.) == atan2(-0., -1.)) printf(" atan2(log1p(-0.), -1.) == atan2(-0., -1.)\n"); else printf(" atan2(log1p(-0.), -1.) != atan2(-0., -1.)\n"); printf( " %.5f vs %.5f\n", atan2(log1p(-0.), -1.), atan2(-0., -1.) ); printf("\natan_drops_zero_sign_test:\n"); printf(" atan2(-0., 0.): %0.5f\n", atan2(-0., 0.)); printf(" atan2( 0., -0.): %0.5f\n", atan2( 0., -0.)); printf(" atan2(-0., -0.): %0.5f\n", atan2(-0., -0.)); printf(" atan2( 0., 0.): %0.5f\n", atan2( 0., 0.)); printf(" log1p(-0.): %0.5f\n", log1p(-0.)); printf(" log1p( 0.): %0.5f\n", log1p( 0.)); } /* vim:set ts=8 sw=4 sts=4 tw=78 et: */ From rdmurray at bitdance.com Fri Aug 17 15:24:17 2012 From: rdmurray at bitdance.com (R. David Murray) Date: Fri, 17 Aug 2012 09:24:17 -0400 Subject: [Python-Dev] Mountain Lion drops sign of zero, breaks test_cmath... In-Reply-To: <20120817075926.GC42732@snakebite.org> References: <20120817075926.GC42732@snakebite.org> Message-ID: <20120817132417.9B704250171@webabinitio.net> On Fri, 17 Aug 2012 04:00:50 -0400, Trent Nelson wrote: > This is the patch I came up with against test_cmath.py: > > xenon% hg diff Lib/test/test_cmath.py > diff -r ce49599b9fdf Lib/test/test_cmath.py > --- a/Lib/test/test_cmath.py Thu Aug 16 22:14:43 2012 +0200 > +++ b/Lib/test/test_cmath.py Fri Aug 17 07:54:05 2012 +0000 > @@ -121,8 +121,10 @@ > # if both a and b are zero, check whether they have the same sign > # (in theory there are examples where it would be legitimate for a > # and b to have opposite signs; in practice these hardly ever > - # occur) -- the exception to this is if we're on a system that drops > + # occur) -- the exception to this is if we're on a system that drops > + # the sign on zeros. > + drops_zero_sign = sysconfig.get_config_var('LOG1P_DROPS_ZERO_SIGN') > + if not drops_zero_sign and not a and not b: > if math.copysign(1., a) != math.copysign(1., b): > self.fail(msg or 'zero has wrong sign: expected {!r}, ' > 'got {!r}'.format(a, b)) > > With that applied, all the test_cmath tests pass again (without any > changes to the test file). > > Thoughts? Open an issue on the tracker and make mark.dickinson (and maybe skrah) nosy. --David From stefan at bytereef.org Fri Aug 17 15:50:09 2012 From: stefan at bytereef.org (Stefan Krah) Date: Fri, 17 Aug 2012 15:50:09 +0200 Subject: [Python-Dev] Mountain Lion drops sign of zero, breaks test_cmath... In-Reply-To: <20120817132417.9B704250171@webabinitio.net> References: <20120817075926.GC42732@snakebite.org> <20120817132417.9B704250171@webabinitio.net> Message-ID: <20120817135009.GA29183@sleipnir.bytereef.org> R. David Murray wrote: > > --- a/Lib/test/test_cmath.py Thu Aug 16 22:14:43 2012 +0200 > > +++ b/Lib/test/test_cmath.py Fri Aug 17 07:54:05 2012 +0000 > > Open an issue on the tracker and make mark.dickinson (and maybe skrah) nosy.
I think this issue covers the problem: http://bugs.python.org/issue15477 Stefan Krah From status at bugs.python.org Fri Aug 17 18:07:14 2012 From: status at bugs.python.org (Python tracker) Date: Fri, 17 Aug 2012 18:07:14 +0200 (CEST) Subject: [Python-Dev] Summary of Python tracker Issues Message-ID: <20120817160714.D65F81C98F@psf.upfronthosting.co.za> ACTIVITY SUMMARY (2012-08-10 - 2012-08-17) Python tracker at http://bugs.python.org/ To view or respond to any of the issues listed below, click on the issue. Do NOT respond to this message. Issues counts and deltas: open 3640 (+52) closed 23856 (+48) total 27496 (+100) Open issues with patches: 1584 Issues opened (80) ================== #15623: Init time relative imports no longer work from __init__.so mod http://bugs.python.org/issue15623 opened by scoder #15625: Support u and w codes in memoryview http://bugs.python.org/issue15625 opened by loewis #15626: unittest.main negates -bb option and programmatic warning conf http://bugs.python.org/issue15626 opened by Ben.Darnell #15627: Add a method to importlib.abc.SourceLoader for converting sour http://bugs.python.org/issue15627 opened by brett.cannon #15629: Run doctests in Doc/*.rst as part of regrtest http://bugs.python.org/issue15629 opened by cjerdonek #15631: Python 3.3 beta 1 installation issue lib/lib64 folders http://bugs.python.org/issue15631 opened by ita1024 #15632: regrtest.py: spurious leaks with -R option http://bugs.python.org/issue15632 opened by skrah #15633: httplib.response is not closed after all data has been read http://bugs.python.org/issue15633 opened by Nikratio #15634: synchronized decorator for the threading module http://bugs.python.org/issue15634 opened by jjdominguezm #15636: base64.decodebytes is only available in Python3.1+ http://bugs.python.org/issue15636 opened by lurchman #15637: Segfault reading null VMA (works fine in python 2.x) http://bugs.python.org/issue15637 opened by albertomilone #15639: csv.Error description is incorrectly broad http://bugs.python.org/issue15639 opened by xmorel #15640: Document importlib.abc.Finder as deprecated http://bugs.python.org/issue15640 opened by brett.cannon #15641: Clean up importlib for Python 3.4 http://bugs.python.org/issue15641 opened by brett.cannon #15642: Integrate pickle protocol version 4 GSoC work by Stefan Mihail http://bugs.python.org/issue15642 opened by alexandre.vassalotti #15643: Support OpenCSW in setup.py http://bugs.python.org/issue15643 opened by flub #15645: 2to3 Grammar pickles not created when upgrading to 3.3.0b2 http://bugs.python.org/issue15645 opened by stefanholek #15648: stderr "refs" output does not respect PYTHONIOENCODING http://bugs.python.org/issue15648 opened by cjerdonek #15649: subprocess.Popen.communicate: accept str for input parameter i http://bugs.python.org/issue15649 opened by asvetlov #15650: PEP 3121, 384 refactoring applied to dbm module http://bugs.python.org/issue15650 opened by Robin.Schreiber #15651: PEP 3121, 384 refactoring applied to elementtree module http://bugs.python.org/issue15651 opened by Robin.Schreiber #15652: PEP 3121, 384 refactoring applied to gdbm module http://bugs.python.org/issue15652 opened by Robin.Schreiber #15653: PEP 3121, 384 refactoring applied to hashopenssl module http://bugs.python.org/issue15653 opened by Robin.Schreiber #15654: PEP 384 Refactoring applied to bz2 module http://bugs.python.org/issue15654 opened by Robin.Schreiber #15655: PEP 384 Refactoring applied to json module http://bugs.python.org/issue15655 opened by Robin.Schreiber 
#15657: Error in Python 3 docs for PyMethodDef http://bugs.python.org/issue15657 opened by sandro.tosi #15660: In str.format there is a misleading error message about alignm http://bugs.python.org/issue15660 opened by py.user #15661: OS X installer packages should be signed for OS X 10.8 Gatekee http://bugs.python.org/issue15661 opened by ned.deily #15662: PEP 3121 refactoring applied to locale module http://bugs.python.org/issue15662 opened by Robin.Schreiber #15663: Investigate providing Tcl/Tk 8.5 with OS X installers http://bugs.python.org/issue15663 opened by ned.deily #15665: PEP 3121, 384 refactoring applied to lsprof module http://bugs.python.org/issue15665 opened by Robin.Schreiber #15666: PEP 3121, 384 refactoring applied to lzma module http://bugs.python.org/issue15666 opened by Robin.Schreiber #15667: PEP 3121, 384 refactoring applied to pickle module http://bugs.python.org/issue15667 opened by Robin.Schreiber #15668: PEP 3121, 384 Refactoring applied to random module http://bugs.python.org/issue15668 opened by Robin.Schreiber #15669: PEP 3121, 384 Refactoring applied to sre module http://bugs.python.org/issue15669 opened by Robin.Schreiber #15670: PEP 3121, 384 Refactoring applied to ssl module http://bugs.python.org/issue15670 opened by Robin.Schreiber #15671: PEP 3121, 384 Refactoring applied to struct module http://bugs.python.org/issue15671 opened by Robin.Schreiber #15672: PEP 3121, 384 Refactoring applied to testbuffer module http://bugs.python.org/issue15672 opened by Robin.Schreiber #15673: PEP 3121, 384 Refactoring applied to testcapi module http://bugs.python.org/issue15673 opened by Robin.Schreiber #15674: PEP 3121, 384 Refactoring applied to thread module http://bugs.python.org/issue15674 opened by Robin.Schreiber #15675: PEP 3121, 384 Refactoring applied to array module http://bugs.python.org/issue15675 opened by Robin.Schreiber #15676: mmap: add empty file check prior to offset check http://bugs.python.org/issue15676 opened by Steven.Willis #15677: Gzip/zlib allows for compression level=0 http://bugs.python.org/issue15677 opened by sandro.tosi #15678: IDLE menu customization is broken from OS X command lines http://bugs.python.org/issue15678 opened by ned.deily #15680: PEP 3121 refactoring applied to audioop module http://bugs.python.org/issue15680 opened by Robin.Schreiber #15681: PEP 3121 refactoring applied to binascii module http://bugs.python.org/issue15681 opened by Robin.Schreiber #15682: PEP 3121 refactoring applied to fpectl module http://bugs.python.org/issue15682 opened by Robin.Schreiber #15684: PEP 3121 refactoring applied to fpetest module http://bugs.python.org/issue15684 opened by Robin.Schreiber #15685: PEP 3121, 384 Refactoring applied to itertools module http://bugs.python.org/issue15685 opened by Robin.Schreiber #15686: PEP 3121, 384 Refactoring applied to md5 module http://bugs.python.org/issue15686 opened by Robin.Schreiber #15687: PEP 3121, 384 Refactoring applied to mmap module http://bugs.python.org/issue15687 opened by Robin.Schreiber #15688: PEP 3121 Refactoring applied to nis module http://bugs.python.org/issue15688 opened by Robin.Schreiber #15689: PEP 3121, 384 Refactoring applied to operator module http://bugs.python.org/issue15689 opened by Robin.Schreiber #15690: PEP 3121, 384 Refactoring applied to parser module http://bugs.python.org/issue15690 opened by Robin.Schreiber #15691: PEP 3121, 384 Refactoring applied to posix module http://bugs.python.org/issue15691 opened by Robin.Schreiber #15693: expose glossary link on hover 
http://bugs.python.org/issue15693 opened by cjerdonek #15694: link to "file object" glossary entry in open() and io docs http://bugs.python.org/issue15694 opened by cjerdonek #15695: Correct __sizeof__ support for StgDict http://bugs.python.org/issue15695 opened by storchaka #15696: Correct __sizeof__ support for mmap http://bugs.python.org/issue15696 opened by storchaka #15697: PEP 3121 refactoring applied to pwd module http://bugs.python.org/issue15697 opened by Robin.Schreiber #15698: PEP 3121, 384 Refactoring applied to pyexpat module http://bugs.python.org/issue15698 opened by Robin.Schreiber #15699: PEP 3121, 384 Refactoring applied to readline module http://bugs.python.org/issue15699 opened by Robin.Schreiber #15700: PEP 3121, 384 Refactoring applied to resource module http://bugs.python.org/issue15700 opened by Robin.Schreiber #15701: AttributeError from HTTPError when using digest auth http://bugs.python.org/issue15701 opened by scjody #15703: PEP 3121, 384 Refactoring applied to select module http://bugs.python.org/issue15703 opened by Robin.Schreiber #15704: PEP 3121, 384 Refactoring applied to sha1 module http://bugs.python.org/issue15704 opened by Robin.Schreiber #15705: PEP 3121, 384 Refactoring applied to sha256 module http://bugs.python.org/issue15705 opened by Robin.Schreiber #15706: PEP 3121, 384 Refactoring applied to sha512 module http://bugs.python.org/issue15706 opened by Robin.Schreiber #15707: PEP 3121, 384 Refactoring applied to signal module http://bugs.python.org/issue15707 opened by Robin.Schreiber #15708: PEP 3121, 384 Refactoring applied to socket module http://bugs.python.org/issue15708 opened by Robin.Schreiber #15709: PEP 3121, 384 Refactoring applied to termios module http://bugs.python.org/issue15709 opened by Robin.Schreiber #15710: logging module crashes in Python 2.7.3 for handler.setLevel(lo http://bugs.python.org/issue15710 opened by tobin.baker #15711: PEP 3121, 384 Refactoring applied to time module http://bugs.python.org/issue15711 opened by Robin.Schreiber #15712: PEP 3121, 384 Refactoring applied to unicodedata module http://bugs.python.org/issue15712 opened by Robin.Schreiber #15713: PEP 3121, 384 Refactoring applied to zipimport module http://bugs.python.org/issue15713 opened by Robin.Schreiber #15714: PEP 3121, 384 Refactoring applied to grp module http://bugs.python.org/issue15714 opened by Robin.Schreiber #15715: __import__ now raises with non-existing items in fromlist in 3 http://bugs.python.org/issue15715 opened by sfeltman #15716: Ability to specify the PYTHONPATH via a command line flag http://bugs.python.org/issue15716 opened by gregory.p.smith #15717: Mail System Error - Returned Mail http://bugs.python.org/issue15717 opened by python-dev #15718: Possible OverflowError in __len__ method undocumented (when ca http://bugs.python.org/issue15718 opened by Rostyslav.Dzinko Most recent 15 issues with no replies (15) ========================================== #15718: Possible OverflowError in __len__ method undocumented (when ca http://bugs.python.org/issue15718 #15717: Mail System Error - Returned Mail http://bugs.python.org/issue15717 #15716: Ability to specify the PYTHONPATH via a command line flag http://bugs.python.org/issue15716 #15714: PEP 3121, 384 Refactoring applied to grp module http://bugs.python.org/issue15714 #15713: PEP 3121, 384 Refactoring applied to zipimport module http://bugs.python.org/issue15713 #15712: PEP 3121, 384 Refactoring applied to unicodedata module http://bugs.python.org/issue15712 #15711: PEP 3121, 384 
Refactoring applied to time module http://bugs.python.org/issue15711 #15710: logging module crashes in Python 2.7.3 for handler.setLevel(lo http://bugs.python.org/issue15710 #15709: PEP 3121, 384 Refactoring applied to termios module http://bugs.python.org/issue15709 #15708: PEP 3121, 384 Refactoring applied to socket module http://bugs.python.org/issue15708 #15707: PEP 3121, 384 Refactoring applied to signal module http://bugs.python.org/issue15707 #15706: PEP 3121, 384 Refactoring applied to sha512 module http://bugs.python.org/issue15706 #15705: PEP 3121, 384 Refactoring applied to sha256 module http://bugs.python.org/issue15705 #15704: PEP 3121, 384 Refactoring applied to sha1 module http://bugs.python.org/issue15704 #15703: PEP 3121, 384 Refactoring applied to select module http://bugs.python.org/issue15703 Most recent 15 issues waiting for review (15) ============================================= #15715: __import__ now raises with non-existing items in fromlist in 3 http://bugs.python.org/issue15715 #15714: PEP 3121, 384 Refactoring applied to grp module http://bugs.python.org/issue15714 #15713: PEP 3121, 384 Refactoring applied to zipimport module http://bugs.python.org/issue15713 #15712: PEP 3121, 384 Refactoring applied to unicodedata module http://bugs.python.org/issue15712 #15711: PEP 3121, 384 Refactoring applied to time module http://bugs.python.org/issue15711 #15709: PEP 3121, 384 Refactoring applied to termios module http://bugs.python.org/issue15709 #15708: PEP 3121, 384 Refactoring applied to socket module http://bugs.python.org/issue15708 #15707: PEP 3121, 384 Refactoring applied to signal module http://bugs.python.org/issue15707 #15706: PEP 3121, 384 Refactoring applied to sha512 module http://bugs.python.org/issue15706 #15705: PEP 3121, 384 Refactoring applied to sha256 module http://bugs.python.org/issue15705 #15704: PEP 3121, 384 Refactoring applied to sha1 module http://bugs.python.org/issue15704 #15703: PEP 3121, 384 Refactoring applied to select module http://bugs.python.org/issue15703 #15700: PEP 3121, 384 Refactoring applied to resource module http://bugs.python.org/issue15700 #15699: PEP 3121, 384 Refactoring applied to readline module http://bugs.python.org/issue15699 #15698: PEP 3121, 384 Refactoring applied to pyexpat module http://bugs.python.org/issue15698 Top 10 most discussed issues (10) ================================= #15573: Support unknown formats in memoryview comparisons http://bugs.python.org/issue15573 31 msgs #15623: Init time relative imports no longer work from __init__.so mod http://bugs.python.org/issue15623 18 msgs #15629: Run doctests in Doc/*.rst as part of regrtest http://bugs.python.org/issue15629 17 msgs #12623: "universal newlines" subprocess support broken with select- an http://bugs.python.org/issue12623 12 msgs #15625: Support u and w codes in memoryview http://bugs.python.org/issue15625 11 msgs #15586: Provide some examples for usage of ElementTree methods/attribu http://bugs.python.org/issue15586 9 msgs #15653: PEP 3121, 384 refactoring applied to hashopenssl module http://bugs.python.org/issue15653 8 msgs #15715: __import__ now raises with non-existing items in fromlist in 3 http://bugs.python.org/issue15715 8 msgs #15612: Rewrite StringIO to use the _PyUnicodeWriter API http://bugs.python.org/issue15612 7 msgs #13072: Getting a buffer from a Unicode array uses invalid format http://bugs.python.org/issue13072 6 msgs Issues closed (47) ================== #4253: Maildir dumpmessage on http://bugs.python.org/issue4253 closed by 
petri.lehtinen #6033: LOOKUP_METHOD and CALL_METHOD optimization http://bugs.python.org/issue6033 closed by r.david.murray #7231: Windows installer does not add \Scripts folder to the path http://bugs.python.org/issue7231 closed by r.david.murray #9161: add_option in optparse no longer accepts unicode string http://bugs.python.org/issue9161 closed by r.david.murray #11062: mailbox fails to round-trip a file to a Babyl mailbox http://bugs.python.org/issue11062 closed by petri.lehtinen #13252: new decumulate() function in itertools module http://bugs.python.org/issue13252 closed by rhettinger #14167: document return statement in finally blocks http://bugs.python.org/issue14167 closed by asvetlov #14501: Error initialising BaseManager class with 'authkey' argument o http://bugs.python.org/issue14501 closed by sbt #15151: Documentation for Signature, Parameter and signature in inspec http://bugs.python.org/issue15151 closed by asvetlov #15269: Document dircmp.left and dircmp.right http://bugs.python.org/issue15269 closed by r.david.murray #15322: sysconfig.get_config_var('srcdir') returns unexpected value http://bugs.python.org/issue15322 closed by sbt #15364: sysconfig confused by relative paths http://bugs.python.org/issue15364 closed by sbt #15412: Note in documentation for weakrefs http://bugs.python.org/issue15412 closed by sbt #15424: __sizeof__ of array should include size of items http://bugs.python.org/issue15424 closed by meador.inge #15444: Incorrectly written contributor's names http://bugs.python.org/issue15444 closed by pitrou #15496: harden directory removal for tests on Windows http://bugs.python.org/issue15496 closed by brian.curtin #15497: correct characters in TextWrapper.replace_whitespace docs http://bugs.python.org/issue15497 closed by asvetlov #15502: Meta path finders and path entry finders are different, but sh http://bugs.python.org/issue15502 closed by brett.cannon #15543: central documentation for 'universal newlines' http://bugs.python.org/issue15543 closed by r.david.murray #15561: update subprocess docs to reference io.TextIOWrapper http://bugs.python.org/issue15561 closed by asvetlov #15571: Python version of TextIOWrapper ignores "write_through" arg http://bugs.python.org/issue15571 closed by asvetlov #15576: importlib: ExtensionFileLoader not used to load packages from http://bugs.python.org/issue15576 closed by brett.cannon #15589: Bus error on Debian sparc http://bugs.python.org/issue15589 closed by skrah #15592: subprocess.communicate() breaks on no input with universal new http://bugs.python.org/issue15592 closed by cjerdonek #15604: PyObject_IsTrue failure checks http://bugs.python.org/issue15604 closed by storchaka #15607: New print's argument "flush" is not mentioned in docstring http://bugs.python.org/issue15607 closed by orsenthil #15610: PyImport_ImportModuleEx always fails in 3.3 with "ValueError: http://bugs.python.org/issue15610 closed by brett.cannon #15619: set.pop() documentation is confusing http://bugs.python.org/issue15619 closed by georg.brandl #15620: readline.clear_history() missing in test_readline.py http://bugs.python.org/issue15620 closed by python-dev #15621: UnboundLocalError on simple in-place assignment of an inner sc http://bugs.python.org/issue15621 closed by r.david.murray #15622: struct module 'c' specifier does not follow PEP-3118 http://bugs.python.org/issue15622 closed by ncoghlan #15624: clarify newline documentation for open and io.TextIOWrapper. 
http://bugs.python.org/issue15624 closed by asvetlov #15628: Add import ABC hierarchy to docs for importlib http://bugs.python.org/issue15628 closed by brett.cannon #15630: Missing "continue" example for "for" loop tutorial http://bugs.python.org/issue15630 closed by orsenthil #15635: memory leak with generators http://bugs.python.org/issue15635 closed by flox #15638: incorrect version info for TextIOWrapper write_through docs http://bugs.python.org/issue15638 closed by pitrou #15644: after _bytesio.seek(0), _bytesio.getvalue() returned reversed http://bugs.python.org/issue15644 closed by ned.deily #15646: multiprocessing can do equivalent of a fork bomb on Windows http://bugs.python.org/issue15646 closed by sbt #15647: isdir should be a local symbol, not exported http://bugs.python.org/issue15647 closed by doko #15656: "Extending Python with C" page needs update for 3.x http://bugs.python.org/issue15656 closed by eli.bendersky #15658: Idle stopped working http://bugs.python.org/issue15658 closed by r.david.murray #15659: using os.fork() and import user's modules results in errors http://bugs.python.org/issue15659 closed by michaeluc #15664: test_curses not run with 'make test' http://bugs.python.org/issue15664 closed by ronaldoussoren #15679: HTMLParser can fail on unquoted attributes. http://bugs.python.org/issue15679 closed by r.david.murray #15683: add decorator for make functions partial applicable http://bugs.python.org/issue15683 closed by r.david.murray #15692: Unexpected exponentiation in lambda function http://bugs.python.org/issue15692 closed by storchaka #15702: Multiprocessing Pool deadlocks on join after empty map operati http://bugs.python.org/issue15702 closed by sbt From trent at snakebite.org Fri Aug 17 20:42:23 2012 From: trent at snakebite.org (Trent Nelson) Date: Fri, 17 Aug 2012 14:42:23 -0400 Subject: [Python-Dev] Mountain Lion drops sign of zero, breaks test_cmath... In-Reply-To: <20120817135009.GA29183@sleipnir.bytereef.org> References: <20120817075926.GC42732@snakebite.org> <20120817132417.9B704250171@webabinitio.net> <20120817135009.GA29183@sleipnir.bytereef.org> Message-ID: <20120817184223.GD42732@snakebite.org> On Fri, Aug 17, 2012 at 06:50:09AM -0700, Stefan Krah wrote: > R. David Murray wrote: > > > --- a/Lib/test/test_cmath.py Thu Aug 16 22:14:43 2012 +0200 > > > +++ b/Lib/test/test_cmath.py Fri Aug 17 07:54:05 2012 +0000 > > > > Open an issue on the tracker and make mark.dickinson (and maybe skrah) > > nosy. > > I think this issue covers the problem: > > http://bugs.python.org/issue15477 Ah! I'll update that with my notes. (FWIW, I've changed my mind; I think the correct action is to remove the erroneous entries from the test file, rather than patch rAssertAlmostEqual.) Trent. From guido at python.org Fri Aug 17 21:27:04 2012 From: guido at python.org (Guido van Rossum) Date: Fri, 17 Aug 2012 12:27:04 -0700 Subject: [Python-Dev] Should urlencode() sort the query parameters (if they come from a dict)? Message-ID: I just fixed a unittest for some code used at Google that was comparing a url generated by urllib.encode() to a fixed string. The problem was caused by turning on PYTHONHASHSEED=1. Because of this, the code under test would generate a textually different URL each time the test was run, but the intention of the test was just to check that all the query parameters were present and equal to the expected values. The solution was somewhat painful, I had to parse the url, split the query parameters, and compare them to a known dict. 
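For anyone who hits the same kind of test breakage, the comparison can be done with the stdlib in a few lines. A minimal sketch, using the Python 3 spelling; the URL and expected values below are invented for illustration:

    # Compare a generated URL's query parameters order-independently.
    from urllib.parse import urlsplit, parse_qs

    generated = 'http://example.com/search?q=python&lang=en&page=2'
    expected = {'q': ['python'], 'lang': ['en'], 'page': ['2']}

    query = parse_qs(urlsplit(generated).query)
    assert query == expected   # parse_qs maps each name to a list of values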
I wonder if it wouldn't make sense to change urlencode() to generate URLs that don't depend on the hash order, for all versions of Python that support PYTHONHASHSEED? It seems a one-line fix: query = query.items() with this: query = sorted(query.items()) This would not prevent breakage of unit tests, but it would make a much simpler fix possible: simply sort the parameters in the URL. Thoughts? -- --Guido van Rossum (python.org/~guido) From dholth at gmail.com Fri Aug 17 21:41:36 2012 From: dholth at gmail.com (Daniel Holth) Date: Fri, 17 Aug 2012 15:41:36 -0400 Subject: [Python-Dev] Should urlencode() sort the query parameters (if they come from a dict)? In-Reply-To: References: Message-ID: Only if it is not an OrderedDict > query = sorted(query.items()) > > This would not prevent breakage of unit tests, but it would make a > much simpler fix possible: simply sort the parameters in the URL. From martin at v.loewis.de Fri Aug 17 21:50:44 2012 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Fri, 17 Aug 2012 21:50:44 +0200 Subject: [Python-Dev] Should urlencode() sort the query parameters (if they come from a dict)? In-Reply-To: References: Message-ID: <502EA094.9090104@v.loewis.de> On 17.08.2012 21:27, Guido van Rossum wrote: > query = sorted(query.items()) > > This would not prevent breakage of unit tests, but it would make a > much simpler fix possible: simply sort the parameters in the URL. > > Thoughts? Sounds good. For best backwards compatibility, I'd restrict the sorting to the exact dict type, since people may be using non-dict mappings which already have a different stable order. > for all versions of Python that support PYTHONHASHSEED? I think this cannot be done, in particular not for 2.6 and 3.1 - it's not a security fix (*). Strictly speaking, it isn't even a bug fix, since it doesn't restore the original behavior that some people (like your test case) relied on. In particular, if somebody has fixed PYTHONHASHSEED to get a stable order, this change would break such installations. By that policy, it could only go into 3.4. OTOH, if it also checked whether there is randomized hashing, and sort only in that case, I think it should be backwards compatible in all interesting cases. Regards, Martin (*) I guess some may claim that the current implementation leaks some bits of the hash seed, since you can learn the seed from the parameter order, so sorting would make it more secure. However, I would disagree that this constitutes a feasible threat. From guido at python.org Fri Aug 17 22:45:20 2012 From: guido at python.org (Guido van Rossum) Date: Fri, 17 Aug 2012 13:45:20 -0700 Subject: [Python-Dev] Should urlencode() sort the query parameters (if they come from a dict)? In-Reply-To: <502EA094.9090104@v.loewis.de> References: <502EA094.9090104@v.loewis.de> Message-ID: Thanks, I filed http://bugs.python.org/issue15719 to track this. On Fri, Aug 17, 2012 at 12:50 PM, "Martin v. L?wis" wrote: > On 17.08.2012 21:27, Guido van Rossum wrote: >> query = sorted(query.items()) >> >> This would not prevent breakage of unit tests, but it would make a >> much simpler fix possible: simply sort the parameters in the URL. >> >> Thoughts? > > Sounds good. For best backwards compatibility, I'd restrict the sorting > to the exact dict type, since people may be using non-dict mappings > which already have a different stable order. > >> for all versions of Python that support PYTHONHASHSEED? 
> > I think this cannot be done, in particular not for 2.6 and 3.1 - it's > not a security fix (*). > > Strictly speaking, it isn't even a bug fix, since it doesn't restore > the original behavior that some people (like your test case) relied > on. In particular, if somebody has fixed PYTHONHASHSEED to get a stable > order, this change would break such installations. By that policy, it > could only go into 3.4. > > OTOH, if it also checked whether there is randomized hashing, and sort > only in that case, I think it should be backwards compatible in all > interesting cases. > > Regards, > Martin > > (*) I guess some may claim that the current implementation leaks > some bits of the hash seed, since you can learn the seed from > the parameter order, so sorting would make it more secure. However, > I would disagree that this constitutes a feasible threat. -- --Guido van Rossum (python.org/~guido) From rosuav at gmail.com Fri Aug 17 23:52:54 2012 From: rosuav at gmail.com (Chris Angelico) Date: Sat, 18 Aug 2012 07:52:54 +1000 Subject: [Python-Dev] Should urlencode() sort the query parameters (if they come from a dict)? In-Reply-To: References: Message-ID: On Sat, Aug 18, 2012 at 5:27 AM, Guido van Rossum wrote: > I just fixed a unittest for some code used at Google that was > comparing a url generated by urllib.encode() to a fixed string. The > problem was caused by turning on PYTHONHASHSEED=1. Because of this, > the code under test would generate a textually different URL each time > the test was run, but the intention of the test was just to check that > all the query parameters were present and equal to the expected > values. > > > query = sorted(query.items()) Hmm. ISTM this is putting priority on the unit test above the functionality of actual usage. Although on the other hand, sorting parameters on a URL is nothing compared to the cost of network traffic, so it's unlikely to be significant. Chris Angelico From jsbueno at python.org.br Sat Aug 18 00:28:15 2012 From: jsbueno at python.org.br (Joao S. O. Bueno) Date: Fri, 17 Aug 2012 19:28:15 -0300 Subject: [Python-Dev] Should urlencode() sort the query parameters (if they come from a dict)? In-Reply-To: References: Message-ID: On 17 August 2012 18:52, Chris Angelico wrote: > On Sat, Aug 18, 2012 at 5:27 AM, Guido van Rossum wrote: >> I just fixed a unittest for some code used at Google that was >> comparing a url generated by urllib.encode() to a fixed string. The >> problem was caused by turning on PYTHONHASHSEED=1. Because of this, >> the code under test would generate a textually different URL each time >> the test was run, but the intention of the test was just to check that >> all the query parameters were present and equal to the expected >> values. >> >> >> query = sorted(query.items()) > > Hmm. ISTM this is putting priority on the unit test above the > functionality of actual usage. Although on the other hand, sorting > parameters on a URL is nothing compared to the cost of network > traffic, so it's unlikely to be significant. > I don't think this behavior is only desirable to unit tests: having URL's been formed in predictable way a good thing in any way one thinks about it. js -><- > Chris Angelico > From stephen at xemacs.org Sat Aug 18 07:23:13 2012 From: stephen at xemacs.org (Stephen J. Turnbull) Date: Sat, 18 Aug 2012 14:23:13 +0900 Subject: [Python-Dev] Should urlencode() sort the query parameters (if they come from a dict)? In-Reply-To: References: Message-ID: <878vdcojsu.fsf@uwakimon.sk.tsukuba.ac.jp> Joao S. O. 
Bueno writes: > I don't think this behavior is only desirable to unit tests: having > URL's been formed in predictable way a good thing in any way one > thinks about it. Especially if you're a hacker. One more thing you may be able to use against careless sites that don't expect the unexpected to occur in URLs. I'm not saying this is a bad thing, but we should remember that the whole point of PYTHONHASHSEED is that regularities can be exploited for devious and malicious purposes, and reducing regularity makes many attacks more difficult. "*Any* way one thinks about it" is far too strong a claim. Steve From __peter__ at web.de Sat Aug 18 09:29:39 2012 From: __peter__ at web.de (Peter Otten) Date: Sat, 18 Aug 2012 09:29:39 +0200 Subject: [Python-Dev] Should urlencode() sort the query parameters (if they come from a dict)? References: Message-ID: Guido van Rossum wrote: > I wonder if it wouldn't make sense to change urlencode() to generate > URLs that don't depend on the hash order, for all versions of Python > that support PYTHONHASHSEED? It seems a one-line fix: > > query = query.items() > > with this: > > query = sorted(query.items()) > > This would not prevent breakage of unit tests, but it would make a > much simpler fix possible: simply sort the parameters in the URL. > > Thoughts? There may be people who mix bytes and str or pass other non-str keys: >>> query = {b"a":b"b", "c":"d", 5:6} >>> urlencode(query) 'a=b&c=d&5=6' >>> sorted(query.items()) Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: unorderable types: str() < bytes() Not pretty, but a bugfix should not break such constructs. From solipsis at pitrou.net Sat Aug 18 13:29:10 2012 From: solipsis at pitrou.net (Antoine Pitrou) Date: Sat, 18 Aug 2012 13:29:10 +0200 Subject: [Python-Dev] Should urlencode() sort the query parameters (if they come from a dict)? References: <878vdcojsu.fsf@uwakimon.sk.tsukuba.ac.jp> Message-ID: <20120818132910.5b5adbaa@pitrou.net> On Sat, 18 Aug 2012 14:23:13 +0900 "Stephen J. Turnbull" wrote: > Joao S. O. Bueno writes: > > > I don't think this behavior is only desirable to unit tests: having > > URL's been formed in predictable way a good thing in any way one > > thinks about it. > > Especially if you're a hacker. One more thing you may be able to use > against careless sites that don't expect the unexpected to occur in > URLs. That's unsubstantiated. Give an example of how sorted URLs compromise security. Regards Antoine. -- Software development and contracting: http://pro.pitrou.net From jsbueno at python.org.br Sat Aug 18 14:01:03 2012 From: jsbueno at python.org.br (Joao S. O. Bueno) Date: Sat, 18 Aug 2012 09:01:03 -0300 Subject: [Python-Dev] Should urlencode() sort the query parameters (if they come from a dict)? In-Reply-To: <878vdcojsu.fsf@uwakimon.sk.tsukuba.ac.jp> References: <878vdcojsu.fsf@uwakimon.sk.tsukuba.ac.jp> Message-ID: On 18 August 2012 02:23, Stephen J. Turnbull wrote: > Joao S. O. Bueno writes: > > > I don't think this behavior is only desirable to unit tests: having > > URL's been formed in predictable way a good thing in any way one > > thinks about it. > > Especially if you're a hacker. One more thing you may be able to use > against careless sites that don't expect the unexpected to occur in > URLs. > > I'm not saying this is a bad thing, but we should remember that the > whole point of PYTHONHASHSEED is that regularities can be exploited > for devious and malicious purposes, and reducing regularity makes many attacks more difficult.
"*Any* way one thinks about it" is far too strong a claim. Agreed that "any way one thinks about it" is far too strong a claim - but I still hold to the point. Maybe "most ways one thinks about it" :-) . > > Steve > > > > From lists at cheimes.de Sat Aug 18 15:28:03 2012 From: lists at cheimes.de (Christian Heimes) Date: Sat, 18 Aug 2012 15:28:03 +0200 Subject: [Python-Dev] Should urlencode() sort the query parameters (if they come from a dict)? In-Reply-To: References: Message-ID: Am 17.08.2012 21:27, schrieb Guido van Rossum: > I wonder if it wouldn't make sense to change urlencode() to generate > URLs that don't depend on the hash order, for all versions of Python > that support PYTHONHASHSEED? It seems a one-line fix: > > query = query.items() > > with this: > > query = sorted(query.items()) > > This would not prevent breakage of unit tests, but it would make a > much simpler fix possible: simply sort the parameters in the URL. I vote -0. The issue can also be addressed with a small and simple helper function that wraps urlparse and compares the query parameter. Or you cann urlencode() with `sorted(qs.items)` instead of `qs` in the application. The order of query string parameter is actually important for some applications, for example Zope, colander+deform and other form frameworks use the parameter order to group parameters. Therefore I propose that the query string is only sorted when the query is exactly a dict and not some subclass or class that has an items() method. if type(query) is dict: query = sorted(query.items()) else: query = query.items() Christian From guido at python.org Sat Aug 18 19:34:11 2012 From: guido at python.org (Guido van Rossum) Date: Sat, 18 Aug 2012 10:34:11 -0700 Subject: [Python-Dev] Should urlencode() sort the query parameters (if they come from a dict)? In-Reply-To: References: Message-ID: On Sat, Aug 18, 2012 at 6:28 AM, Christian Heimes wrote: > Am 17.08.2012 21:27, schrieb Guido van Rossum: >> I wonder if it wouldn't make sense to change urlencode() to generate >> URLs that don't depend on the hash order, for all versions of Python >> that support PYTHONHASHSEED? It seems a one-line fix: >> >> query = query.items() >> >> with this: >> >> query = sorted(query.items()) >> >> This would not prevent breakage of unit tests, but it would make a >> much simpler fix possible: simply sort the parameters in the URL. > > I vote -0. The issue can also be addressed with a small and simple > helper function that wraps urlparse and compares the query parameter. Or > you cann urlencode() with `sorted(qs.items)` instead of `qs` in the > application. Hm. That's actually a good point. > The order of query string parameter is actually important for some > applications, for example Zope, colander+deform and other form > frameworks use the parameter order to group parameters. > > Therefore I propose that the query string is only sorted when the query > is exactly a dict and not some subclass or class that has an items() method. > > if type(query) is dict: > query = sorted(query.items()) > else: > query = query.items() That's already in the bug I filed. :-) I also added that the sort may fail if the keys mix e.g. bytes and str (or int and str, for that matter). -- --Guido van Rossum (python.org/~guido) From python at mrabarnett.plus.com Sat Aug 18 20:47:57 2012 From: python at mrabarnett.plus.com (MRAB) Date: Sat, 18 Aug 2012 19:47:57 +0100 Subject: [Python-Dev] Should urlencode() sort the query parameters (if they come from a dict)?
In-Reply-To: References: Message-ID: <502FE35D.4010102@mrabarnett.plus.com> On 18/08/2012 18:34, Guido van Rossum wrote: > On Sat, Aug 18, 2012 at 6:28 AM, Christian Heimes wrote: >> Am 17.08.2012 21:27, schrieb Guido van Rossum: >>> I wonder if it wouldn't make sense to change urlencode() to generate >>> URLs that don't depend on the hash order, for all versions of Python >>> that support PYTHONHASHSEED? It seems a one-line fix: >>> >>> query = query.items() >>> >>> with this: >>> >>> query = sorted(query.items()) >>> >>> This would not prevent breakage of unit tests, but it would make a >>> much simpler fix possible: simply sort the parameters in the URL. >> >> I vote -0. The issue can also be addressed with a small and simple >> helper function that wraps urlparse and compares the query parameter. Or >> you cann urlencode() with `sorted(qs.items)` instead of `qs` in the >> application. > > Hm. That's actually a good point. > >> The order of query string parameter is actually important for some >> applications, for example Zope, colander+deform and other form >> frameworks use the parameter order to group parameters. >> >> Therefore I propose that the query string is only sorted when the query >> is exactly a dict and not some subclass or class that has an items() method. >> >> if type(query) is dict: >> query = sorted(query.items()) >> else: >> query = query.items() > > That's already in the bug I filed. :-) I also added that the sort may > fail if the keys mix e.g. bytes and str (or int and str, for that > matter). > One possible way around that is to add the class names, perhaps only if sorting raises an exception: def make_key(pair): return type(pair[0]).__name__, type(pair[1]).__name__, pair if type(query) is dict: try: query = sorted(query.items()) except TypeError: query = sorted(query.items(), key=make_key) else: query = query.items() From guido at python.org Sat Aug 18 22:23:47 2012 From: guido at python.org (Guido van Rossum) Date: Sat, 18 Aug 2012 13:23:47 -0700 Subject: [Python-Dev] Should urlencode() sort the query parameters (if they come from a dict)? In-Reply-To: <502FE35D.4010102@mrabarnett.plus.com> References: <502FE35D.4010102@mrabarnett.plus.com> Message-ID: On Saturday, August 18, 2012, MRAB wrote: > On 18/08/2012 18:34, Guido van Rossum wrote: > >> On Sat, Aug 18, 2012 at 6:28 AM, Christian Heimes >> wrote: >> >>> Am 17.08.2012 21:27, schrieb Guido van Rossum: >>> >>>> I wonder if it wouldn't make sense to change urlencode() to generate >>>> URLs that don't depend on the hash order, for all versions of Python >>>> that support PYTHONHASHSEED? It seems a one-line fix: >>>> >>>> query = query.items() >>>> >>>> with this: >>>> >>>> query = sorted(query.items()) >>>> >>>> This would not prevent breakage of unit tests, but it would make a >>>> much simpler fix possible: simply sort the parameters in the URL. >>>> >>> >>> I vote -0. The issue can also be addressed with a small and simple >>> helper function that wraps urlparse and compares the query parameter. Or >>> you cann urlencode() with `sorted(qs.items)` instead of `qs` in the >>> application. >>> >> >> Hm. That's actually a good point. >> >> The order of query string parameter is actually important for some >>> applications, for example Zope, colander+deform and other form >>> frameworks use the parameter order to group parameters. >>> >>> Therefore I propose that the query string is only sorted when the query >>> is exactly a dict and not some subclass or class that has an items() >>> method. 
>>> >>> if type(query) is dict: >>> query = sorted(query.items()) >>> else: >>> query = query.items() >>> >> >> That's already in the bug I filed. :-) I also added that the sort may >> fail if the keys mix e.g. bytes and str (or int and str, for that >> matter). >> >> One possible way around that is to add the class names, perhaps only if > sorting raises an exception: > > def make_key(pair): > return type(pair[0]).__name__, type(pair[1]).__name__, pair > > if type(query) is dict: > try: > query = sorted(query.items()) > except TypeError: > query = sorted(query.items(), key=make_key) > else: > query = query.items() Doesn't strike me as necessary. > > ______________________________**_________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/**mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/**mailman/options/python-dev/** > guido%40python.org > -- Sent from Gmail Mobile -------------- next part -------------- An HTML attachment was scrubbed... URL: From v+python at g.nevcal.com Sat Aug 18 22:55:48 2012 From: v+python at g.nevcal.com (Glenn Linderman) Date: Sat, 18 Aug 2012 13:55:48 -0700 Subject: [Python-Dev] Should urlencode() sort the query parameters (if they come from a dict)? In-Reply-To: <502FE35D.4010102@mrabarnett.plus.com> References: <502FE35D.4010102@mrabarnett.plus.com> Message-ID: <50300154.80502@g.nevcal.com> On 8/18/2012 11:47 AM, MRAB wrote: >> I vote -0. The issue can also be addressed with a small and simple >> helper function that wraps urlparse and compares the query parameter. Or >> you cann urlencode() with `sorted(qs.items)` instead of `qs` in the >> application. > > Hm. That's actually a good point. Seems adequate to me. Most programs wouldn't care about the order, because most web frameworks grab whatever is there in whatever order, and present it to the web app in their own order. Programs that care, or which talk to web apps that care, are unlikely to want the order from a non-randomized dict, and so have already taken care of ordering issues, so undoing the randomization seems like a solution in search of a problem (other than for poorly written test cases). -------------- next part -------------- An HTML attachment was scrubbed... URL: From tjreedy at udel.edu Sat Aug 18 23:17:14 2012 From: tjreedy at udel.edu (Terry Reedy) Date: Sat, 18 Aug 2012 17:17:14 -0400 Subject: [Python-Dev] 3.3 str timings Message-ID: The issue came up in python-list about string operations being slower in 3.3. (The categorical claim is false as some things are actually faster.) Some things I understand, this one I do not. Win7-64, 3.3.0b2 versus 3.2.3 print(timeit("c in a", "c = '?'; a = 'a'*1000+c")) # ord(c) = 8230 # .6 in 3.2, 1.2 in 3.3 Why is searching for a two-byte char in a two-bytes per char string so much faster in 3.2? Is this worth a tracker issue (I searched and could not find one) or is there a known and un-fixable cause? print(timeit("a.encode()", "a = 'a'*1000")) # 1.5 in 3.2, .26 in 3.3 print(timeit("a.encode(encoding='utf-8')", "a = 'a'*1000")) # 1.7 in 3.2, .51 in 3.3 This is one of the 3.3 improvements. But since the results are equal: ('a'*1000).encode() == ('a'*1000).encode(encoding='utf-8') and 3.3 should know that for an all-ascii string, I do not see why adding the parameter should double the the time. Another issue or known and un-fixable? 
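One more data point that can help isolate the overhead is the positional spelling, which uses the same codec but goes through different argument handling. A minimal sketch along the same lines:

    from timeit import timeit

    # Same string, same codec, three spellings: no argument, positional
    # argument, and keyword argument.  Comparing the three hints at how
    # much of the difference is argument processing rather than the
    # UTF-8 encoder itself.
    print(timeit("a.encode()", "a = 'a'*1000"))
    print(timeit("a.encode('utf-8')", "a = 'a'*1000"))
    print(timeit("a.encode(encoding='utf-8')", "a = 'a'*1000"))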
-- Terry Jan Reedy From solipsis at pitrou.net Sat Aug 18 23:27:58 2012 From: solipsis at pitrou.net (Antoine Pitrou) Date: Sat, 18 Aug 2012 23:27:58 +0200 Subject: [Python-Dev] 3.3 str timings References: Message-ID: <20120818232758.1e8698c1@pitrou.net> On Sat, 18 Aug 2012 17:17:14 -0400 Terry Reedy wrote: > The issue came up in python-list about string operations being slower in > 3.3. (The categorical claim is false as some things are actually > faster.) Some things I understand, this one I do not. > > Win7-64, 3.3.0b2 versus 3.2.3 > print(timeit("c in a", "c = '?'; a = 'a'*1000+c")) # ord(c) = 8230 > # .6 in 3.2, 1.2 in 3.3 I get opposite numbers: $ python3.2 -m timeit -s "c = '?'; a = 'a'*1000+c" "c in a" 1000000 loops, best of 3: 0.599 usec per loop $ python3.3 -m timeit -s "c = '?'; a = 'a'*1000+c" "c in a" 10000000 loops, best of 3: 0.119 usec per loop However, in both cases the operation is blindingly fast (less than 1?s), which should make it pretty much a non-issue. > Why is searching for a two-byte char in a two-bytes per char string so > much faster in 3.2? Is this worth a tracker issue (I searched and could > not find one) or is there a known and un-fixable cause? I don't think it's worth a tracker issue. First, because as said above it's practically a non-issue. Second, given the nature and depth of changes brought by the switch to the PEP 393 implementation, an individual micro-benchmark like this is not very useful; you'd need to make a more extensive analysis of string performance (as a hint, we have the stringbench benchmark in the Tools directory). > This is one of the 3.3 improvements. But since the results are equal: > ('a'*1000).encode() == ('a'*1000).encode(encoding='utf-8') > and 3.3 should know that for an all-ascii string, I do not see why > adding the parameter should double the the time. Another issue or known > and un-fixable? When observing performance differences, you should ask yourself whether they matter at all or not. Regards Antoine. -- Software development and contracting: http://pro.pitrou.net From martin at v.loewis.de Sat Aug 18 23:34:49 2012 From: martin at v.loewis.de (martin at v.loewis.de) Date: Sat, 18 Aug 2012 23:34:49 +0200 Subject: [Python-Dev] 3.3 str timings In-Reply-To: References: Message-ID: <20120818233449.Horde.NXZMeklCcOxQMAp5hmcSEuA@webmail.df.eu> Zitat von Terry Reedy : > Is this worth a tracker issue (I searched and could not find one) or > is there a known and un-fixable cause? There is a third option: it's not known, but it's also unimportant. I'd say posting it to python-dev is enough: either there is somebody with sufficient time and interest to research it and provide you with an explanation (or a fix). If nobody picks it up right away, it's IMO fine to wait for somebody to report it who has a real problem with this change in runtime. Regards, Martin From rdmurray at bitdance.com Sun Aug 19 01:19:27 2012 From: rdmurray at bitdance.com (R. David Murray) Date: Sat, 18 Aug 2012 19:19:27 -0400 Subject: [Python-Dev] 3.3 str timings In-Reply-To: References: Message-ID: <20120818231928.58AC625016C@webabinitio.net> On Sat, 18 Aug 2012 17:17:14 -0400, Terry Reedy wrote: > print(timeit("a.encode()", "a = 'a'*1000")) > # 1.5 in 3.2, .26 in 3.3 > > print(timeit("a.encode(encoding='utf-8')", "a = 'a'*1000")) > # 1.7 in 3.2, .51 in 3.3 > > This is one of the 3.3 improvements. 
But since the results are equal: > ('a'*1000).encode() == ('a'*1000).encode(encoding='utf-8') > and 3.3 should know that for an all-ascii string, I do not see why > adding the parameter should double the the time. Another issue or known > and un-fixable? At one point there was an issue with certain spellings taking a fast path (avoiding a codec lookup?) and other spellings not. I thought we'd fixed that, but perhaps we didn't? --David From tjreedy at udel.edu Sun Aug 19 04:54:53 2012 From: tjreedy at udel.edu (Terry Reedy) Date: Sat, 18 Aug 2012 22:54:53 -0400 Subject: [Python-Dev] 3.3 str timings In-Reply-To: <20120818232758.1e8698c1@pitrou.net> References: <20120818232758.1e8698c1@pitrou.net> Message-ID: On 8/18/2012 5:27 PM, Antoine Pitrou wrote: > On Sat, 18 Aug 2012 17:17:14 -0400 > Terry Reedy wrote: >> The issue came up in python-list about string operations being slower in >> 3.3. (The categorical claim is false as some things are actually >> faster.) Some things I understand, this one I do not. >> >> Win7-64, 3.3.0b2 versus 3.2.3 >> print(timeit("c in a", "c = '?'; a = 'a'*1000+c")) # ord(c) = 8230 >> # .6 in 3.2, 1.2 in 3.3 > > I get opposite numbers: Just curious, what system? > > $ python3.2 -m timeit -s "c = '?'; a = 'a'*1000+c" "c in a" > 1000000 loops, best of 3: 0.599 usec per loop > $ python3.3 -m timeit -s "c = '?'; a = 'a'*1000+c" "c in a" > 10000000 loops, best of 3: 0.119 usec per loop > > However, in both cases the operation is blindingly fast (less than > 1?s), which should make it pretty much a non-issue. The current default 'number' of 1000000 is higher that I remember. Good to know. >> Why is searching for a two-byte char in a two-bytes per char string so >> much faster in 3.2? Is this worth a tracker issue (I searched and could >> not find one) or is there a known and un-fixable cause? > > I don't think it's worth a tracker issue. First, because as said above > it's practically a non-issue. Second, given the nature and depth of > changes brought by the switch to the PEP 393 implementation, an > individual micro-benchmark like this is not very useful; you'd need to > make a more extensive analysis of string performance (as a hint, we > have the stringbench benchmark in the Tools directory). It is not in my 3.3.0b2 windows install, but I have heard of it. Another good reminder. My main interest was in refuting '3.3 strings ops are always slower'. Both points above are also good 'ammo'. I am sure this discussion will re-occur after the release. -- Terry Jan Reedy From stefan at bytereef.org Sun Aug 19 11:11:34 2012 From: stefan at bytereef.org (Stefan Krah) Date: Sun, 19 Aug 2012 11:11:34 +0200 Subject: [Python-Dev] hg verify warnings Message-ID: <20120819091134.GA21668@sleipnir.bytereef.org> Hello, In a fresh clone, I'm getting a couple of warnings in `hg verify`. Perhaps someone familiar with Mercurial could take a brief look: repository uses revlog format 1 checking changesets checking manifests crosschecking files in changesets and manifests checking files warning: copy source of 'Modules/_threadmodule.c' not in parents of 60ad83716733 warning: copy source of 'Objects/bytesobject.c' not in parents of 64bb1d258322 warning: copy source of 'Objects/stringobject.c' not in parents of 357e268e7c5f 9754 files, 78648 changesets, 175109 total revisions 3 warnings encountered! 
Stefan Krah From lukasz at langa.pl Sun Aug 19 11:53:19 2012 From: lukasz at langa.pl (=?iso-8859-2?Q?=A3ukasz_Langa?=) Date: Sun, 19 Aug 2012 11:53:19 +0200 Subject: [Python-Dev] 3.3 str timings In-Reply-To: <20120818232758.1e8698c1@pitrou.net> References: <20120818232758.1e8698c1@pitrou.net> Message-ID: Wiadomość napisana przez Antoine Pitrou w dniu 18 sie 2012, o godz. 23:27: > On Sat, 18 Aug 2012 17:17:14 -0400 > Terry Reedy wrote: >> The issue came up in python-list about string operations being slower in >> 3.3. (The categorical claim is false as some things are actually >> faster.) Some things I understand, this one I do not. >> >> Win7-64, 3.3.0b2 versus 3.2.3 >> print(timeit("c in a", "c = '…'; a = 'a'*1000+c")) # ord(c) = 8230 >> # .6 in 3.2, 1.2 in 3.3 > > I get opposite numbers: Me too. 3.2 is slower for me in every case. Mac OS X 10.8. -- Best regards, Łukasz Langa Senior Systems Architecture Engineer IT Infrastructure Department Grupa Allegro Sp. z o.o. http://lukasz.langa.pl/ +48 791 080 144 -------------- next part -------------- An HTML attachment was scrubbed... URL: From stephen at xemacs.org Sun Aug 19 13:55:31 2012 From: stephen at xemacs.org (Stephen J. Turnbull) Date: Sun, 19 Aug 2012 20:55:31 +0900 Subject: [Python-Dev] Should urlencode() sort the query parameters (if they come from a dict)? In-Reply-To: <20120818132910.5b5adbaa@pitrou.net> References: <878vdcojsu.fsf@uwakimon.sk.tsukuba.ac.jp> <20120818132910.5b5adbaa@pitrou.net> Message-ID: <871uj3rt8s.fsf@uwakimon.sk.tsukuba.ac.jp> Antoine Pitrou writes: > That's unsubstantiated. Sure. If I had a CVE, I would have posted it. > Give an example of how sorted URLs compromise security. That's not how you think about security; the right question about sorted URLs is "how do you know that they *don't* compromise security?" We know that mishandling URLs *can* compromise security (eg, via bugs in directory traversal). But you know that. What you presumably mean here is "why do you think randomly changing query parameter order in URLs is more secure than sorted order?" The answer to that is that since the server can't depend on order, it *must* handle more configurations of parameters by design (and presumably in implementation and testing), and therefore will be robust against more kinds of parameter configurations. Eg, there will be no temptation to optimize processing by handling parameters in sorted order. Is this a "real" danger? Maybe not. But every unnecessary regularity in inputs that a program's implementation depends on is a potential attack vector via irregular inputs. Remember, I was responding to a claim that sorted order is *always* better. That's a dangerous kind of claim to make about anything that could be input to an Internet server. Steve From stephen at xemacs.org Sun Aug 19 13:57:11 2012 From: stephen at xemacs.org (Stephen J. Turnbull) Date: Sun, 19 Aug 2012 20:57:11 +0900 Subject: [Python-Dev] Should urlencode() sort the query parameters (if they come from a dict)? In-Reply-To: References: <878vdcojsu.fsf@uwakimon.sk.tsukuba.ac.jp> Message-ID: <87zk5rqelk.fsf@uwakimon.sk.tsukuba.ac.jp> Joao S. O. Bueno writes: > Ageeded that "any way one thinks about it" is far too strong a claim - > but I still hold to the point. Maybe "most ways one thinks about it" > :-) . 100% agreement now.
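For reference, a minimal sketch of the application-side workaround suggested earlier in the thread (the helper name stable_urlencode is only illustrative), which simply sorts the items before handing them to urlencode() and assumes all keys are mutually comparable:

    from urllib.parse import urlencode

    def stable_urlencode(query):
        # Sort the (key, value) pairs so the resulting query string does
        # not depend on dict iteration order, which hash randomization
        # can change between runs.
        return urlencode(sorted(query.items()))

    params = {"b": 2, "a": 1}
    print(urlencode(params))         # order follows dict iteration order
    print(stable_urlencode(params))  # always 'a=1&b=2'

This keeps the stdlib behaviour unchanged and leaves any ordering policy to the application, which is the direction most of the thread leans toward.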
From solipsis at pitrou.net Sun Aug 19 13:59:17 2012 From: solipsis at pitrou.net (Antoine Pitrou) Date: Sun, 19 Aug 2012 13:59:17 +0200 Subject: [Python-Dev] Should urlencode() sort the query parameters (if they come from a dict)? In-Reply-To: <871uj3rt8s.fsf@uwakimon.sk.tsukuba.ac.jp> References: <878vdcojsu.fsf@uwakimon.sk.tsukuba.ac.jp> <20120818132910.5b5adbaa@pitrou.net> <871uj3rt8s.fsf@uwakimon.sk.tsukuba.ac.jp> Message-ID: <20120819135917.7b753615@pitrou.net> On Sun, 19 Aug 2012 20:55:31 +0900 "Stephen J. Turnbull" wrote: > Antoine Pitrou writes: > > > That's unsubstantiated. > > Sure. If I had a CVE, I would have posted it. Ok, so you have no evidence. Regards Antoine. From solipsis at pitrou.net Sun Aug 19 15:15:53 2012 From: solipsis at pitrou.net (Antoine Pitrou) Date: Sun, 19 Aug 2012 15:15:53 +0200 Subject: [Python-Dev] hg verify warnings References: <20120819091134.GA21668@sleipnir.bytereef.org> Message-ID: <20120819151553.5593719c@pitrou.net> On Sun, 19 Aug 2012 11:11:34 +0200 Stefan Krah wrote: > Hello, > > In a fresh clone, I'm getting a couple of warnings in `hg verify`. Perhaps > someone familiar with Mercurial could take a brief look: > > repository uses revlog format 1 > checking changesets > checking manifests > crosschecking files in changesets and manifests > checking files > warning: copy source of 'Modules/_threadmodule.c' not in parents of 60ad83716733 > warning: copy source of 'Objects/bytesobject.c' not in parents of 64bb1d258322 > warning: copy source of 'Objects/stringobject.c' not in parents of 357e268e7c5f > 9754 files, 78648 changesets, 175109 total revisions > 3 warnings encountered! I don't get that problem on the master server, nor on two other machines with fresh clones and different hg versions. I suggest you re-try cloning and, if the issue persists, report it on the Mercurial mailing-list. Regards Antoine. -- Software development and contracting: http://pro.pitrou.net From stefan at bytereef.org Sun Aug 19 17:08:25 2012 From: stefan at bytereef.org (Stefan Krah) Date: Sun, 19 Aug 2012 17:08:25 +0200 Subject: [Python-Dev] hg verify warnings In-Reply-To: <20120819151553.5593719c@pitrou.net> References: <20120819091134.GA21668@sleipnir.bytereef.org> <20120819151553.5593719c@pitrou.net> Message-ID: <20120819150825.GA28167@sleipnir.bytereef.org> Antoine Pitrou wrote: > > warning: copy source of 'Modules/_threadmodule.c' not in parents of 60ad83716733 > > I don't get that problem on the master server, nor on two other > machines with fresh clones and different hg versions. I suggest you > re-try cloning and, if the issue persists, report it on the Mercurial > mailing-list. Okay, this only occurs if the ~/.hgrc contains "verbose = True". I found a post from Matt Mackall where he says that this only happens with repos that were started with "now-ancient" versions of hg: http://permalink.gmane.org/gmane.comp.version-control.mercurial.general/23195 So it looks like a known issue, see also: https://bugzilla.mozilla.org/show_bug.cgi?id=644904 Stefan Krah From senthil at uthcode.com Sun Aug 19 19:18:29 2012 From: senthil at uthcode.com (Senthil Kumaran) Date: Sun, 19 Aug 2012 10:18:29 -0700 Subject: [Python-Dev] Should urlencode() sort the query parameters (if they come from a dict)? In-Reply-To: <50300154.80502@g.nevcal.com> References: <502FE35D.4010102@mrabarnett.plus.com> <50300154.80502@g.nevcal.com> Message-ID: On Sat, Aug 18, 2012 at 1:55 PM, Glenn Linderman wrote: > > On 8/18/2012 11:47 AM, MRAB wrote: > > I vote -0. 
The issue can also be addressed with a small and simple > helper function that wraps urlparse and compares the query parameter. Or > you cann urlencode() with `sorted(qs.items)` instead of `qs` in the > application. > > > Hm. That's actually a good point. > > > Seems adequate to me. Most programs wouldn't care about the order, because most web frameworks grab whatever is there in whatever order, and present it to the web app in their own order. > > Programs that care, or which talk to web apps that care, are unlikely to want the order from a non-randomized dict, and so have already taken care of ordering issues, so undoing the randomization seems like a solution in search of a problem (other than for poorly written test cases). > I am of the same thought too. Changing a behavior based on the test case expectation, no matter if the behavior is a harmless change is still a change. Coming to the point testing query string could be useful in some cases and then giving weightage to the change seems interesting use case, but does not seem to warrant a change. I think, I like Christian Heimes suggestion that a wrapper to compare query strings would be useful and in Guido's original test case, a tittle test code change would have been good. Looks like Guido has withdrawn the bug report too. -- Senthil From martin at v.loewis.de Sun Aug 19 22:05:20 2012 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Sun, 19 Aug 2012 22:05:20 +0200 Subject: [Python-Dev] hg verify warnings In-Reply-To: <20120819091134.GA21668@sleipnir.bytereef.org> References: <20120819091134.GA21668@sleipnir.bytereef.org> Message-ID: <50314700.8070308@v.loewis.de> > warning: copy source of 'Modules/_threadmodule.c' not in parents of 60ad83716733 > warning: copy source of 'Objects/bytesobject.c' not in parents of 64bb1d258322 > warning: copy source of 'Objects/stringobject.c' not in parents of 357e268e7c5f These revisions are all from Benjamin. So I conclude that he was once using an now-ancient version of hg. Regards, Martin From barry at python.org Mon Aug 20 15:35:42 2012 From: barry at python.org (Barry Warsaw) Date: Mon, 20 Aug 2012 09:35:42 -0400 Subject: [Python-Dev] [Python-checkins] cpython: s/path importer/path based finder/ (because the path based finder is not an In-Reply-To: <3X0gxd4vH0zQ5X@mail.python.org> References: <3X0gxd4vH0zQ5X@mail.python.org> Message-ID: <20120820093542.49e04a51@limelight.wooz.org> On Aug 20, 2012, at 05:49 AM, nick.coghlan wrote: > s/path importer/path based finder/ (because the path based finder is not an > importer and the simpler 'path finder' is too ambiguous) +1! -Barry From Miko.Lehtonen at students.lskky.fi Mon Aug 20 10:35:23 2012 From: Miko.Lehtonen at students.lskky.fi (Lehtonen Miko) Date: Mon, 20 Aug 2012 08:35:23 +0000 Subject: [Python-Dev] PyPy 1.9 - Yard Wolf Message-ID: moro -------------- next part -------------- An HTML attachment was scrubbed... URL: From brett at python.org Mon Aug 20 19:08:48 2012 From: brett at python.org (Brett Cannon) Date: Mon, 20 Aug 2012 13:08:48 -0400 Subject: [Python-Dev] [Python-checkins] cpython: s/path importer/path based finder/ (because the path based finder is not an In-Reply-To: <3X0gxd4vH0zQ5X@mail.python.org> References: <3X0gxd4vH0zQ5X@mail.python.org> Message-ID: Should that be "path-based"? 
On Sun, Aug 19, 2012 at 11:49 PM, nick.coghlan wrote: > http://hg.python.org/cpython/rev/2f9f5ab3d754 > changeset: 78664:2f9f5ab3d754 > user: Nick Coghlan > date: Mon Aug 20 13:49:08 2012 +1000 > summary: > s/path importer/path based finder/ (because the path based finder is not > an importer and the simpler 'path finder' is too ambiguous) > > files: > Doc/glossary.rst | 6 +- > Doc/reference/import.rst | 95 ++++++++++++++------------- > Misc/NEWS | 4 + > 3 files changed, 57 insertions(+), 48 deletions(-) > > > diff --git a/Doc/glossary.rst b/Doc/glossary.rst > --- a/Doc/glossary.rst > +++ b/Doc/glossary.rst > @@ -317,7 +317,7 @@ > > import path > A list of locations (or :term:`path entries `) that are > - searched by the :term:`path importer` for modules to import. During > + searched by the :term:`path based finder` for modules to import. > During > import, this list of locations usually comes from :data:`sys.path`, > but > for subpackages it may also come from the parent package's > ``__path__`` > attribute. > @@ -550,7 +550,7 @@ > > path entry > A single location on the :term:`import path` which the :term:`path > - importer` consults to find modules for importing. > + based finder` consults to find modules for importing. > > path entry finder > A :term:`finder` returned by a callable on :data:`sys.path_hooks` > @@ -562,7 +562,7 @@ > entry finder` if it knows how to find modules on a specific > :term:`path > entry`. > > - path importer > + path based finder > One of the default :term:`meta path finders ` > which > searches an :term:`import path` for modules. > > diff --git a/Doc/reference/import.rst b/Doc/reference/import.rst > --- a/Doc/reference/import.rst > +++ b/Doc/reference/import.rst > @@ -42,6 +42,12 @@ > invoked. These strategies can be modified and extended by using various > hooks > described in the sections below. > > +.. versionchanged:: 3.3 > + The import system has been updated to fully implement the second phase > + of PEP 302. There is no longer any implicit import machinery - the full > + import system is exposed through :data:`sys.meta_path`. In addition, > + native namespace package support has been implemented (see PEP 420). > + > > :mod:`importlib` > ================ > @@ -213,7 +219,7 @@ > interfaces are referred to as :term:`importers ` - they return > themselves when they find that they can load the requested module. > > -By default, Python comes with several default finders and importers. One > +Python includes a number of default finders and importers. One > knows how to locate frozen modules, and another knows how to locate > built-in modules. A third default finder searches an :term:`import path` > for modules. The :term:`import path` is a list of locations that may > @@ -307,7 +313,7 @@ > Python's default :data:`sys.meta_path` has three meta path finders, one > that > knows how to import built-in modules, one that knows how to import frozen > modules, and one that knows how to import modules from an :term:`import > path` > -(i.e. the :term:`path importer`). > +(i.e. the :term:`path based finder`). > > > Loaders > @@ -356,14 +362,14 @@ > * If the module is a package (either regular or namespace), the loader > must > set the module object's ``__path__`` attribute. The value must be > iterable, but may be empty if ``__path__`` has no further significance > - to the importer. If ``__path__`` is not empty, it must produce strings > + to the loader. If ``__path__`` is not empty, it must produce strings > when iterated over. 
More details on the semantics of ``__path__`` are > given :ref:`below `. > > * The ``__loader__`` attribute must be set to the loader object that > loaded > the module. This is mostly for introspection and reloading, but can be > - used for additional importer-specific functionality, for example > getting > - data associated with an importer. > + used for additional loader-specific functionality, for example getting > + data associated with a loader. > > * The module's ``__package__`` attribute should be set. Its value must > be a > string, but it can be the same value as its ``__name__``. If the > attribute > @@ -456,18 +462,18 @@ > correctly for the namespace package. > > > -The Path Importer > -================= > +The Path Based Finder > +===================== > > .. index:: > - single: path importer > + single: path based finder > > As mentioned previously, Python comes with several default meta path > finders. > -One of these, called the :term:`path importer`, searches an :term:`import > +One of these, called the :term:`path based finder`, searches an > :term:`import > path`, which contains a list of :term:`path entries `. Each > path > entry names a location to search for modules. > > -The path importer itself doesn't know how to import anything. Instead, it > +The path based finder itself doesn't know how to import anything. > Instead, it > traverses the individual path entries, associating each of them with a > path entry finder that knows how to handle that particular kind of path. > > @@ -479,10 +485,10 @@ > loading all of these file types (other than shared libraries) from > zipfiles. > > Path entries need not be limited to file system locations. They can > refer to > -the URLs, database queries, or any other location that can be specified > as a > +URLs, database queries, or any other location that can be specified as a > string. > > -The :term:`path importer` provides additional hooks and protocols so that > you > +The path based finder provides additional hooks and protocols so that you > can extend and customize the types of searchable path entries. For > example, > if you wanted to support path entries as network URLs, you could write a > hook > that implements HTTP semantics to find modules on the web. This hook (a > @@ -498,8 +504,8 @@ > In particular, meta path finders operate at the beginning of the import > process, as keyed off the :data:`sys.meta_path` traversal. > > -On the other hand, path entry finders are in a sense an implementation > detail > -of the :term:`path importer`, and in fact, if the path importer were to be > +By contrast, path entry finders are in a sense an implementation detail > +of the path based finder, and in fact, if the path based finder were to be > removed from :data:`sys.meta_path`, none of the path entry finder > semantics > would be invoked. > > @@ -513,17 +519,17 @@ > single: sys.path_importer_cache > single: PYTHONPATH > > -The :term:`path importer` is responsible for finding and loading Python > +The :term:`path based finder` is responsible for finding and loading > Python > modules and packages whose location is specified with a string :term:`path > entry`. Most path entries name locations in the file system, but they > need > not be limited to this. 
> > -As a meta path finder, the :term:`path importer` implements the > +As a meta path finder, the :term:`path based finder` implements the > :meth:`find_module()` protocol previously described, however it exposes > additional hooks that can be used to customize how modules are found and > loaded from the :term:`import path`. > > -Three variables are used by the :term:`path importer`, :data:`sys.path`, > +Three variables are used by the :term:`path based finder`, > :data:`sys.path`, > :data:`sys.path_hooks` and :data:`sys.path_importer_cache`. The > ``__path__`` > attributes on package objects are also used. These provide additional > ways > that the import machinery can be customized. > @@ -536,38 +542,40 @@ > (see the :mod:`site` module) that should be searched for modules, such as > URLs, or database queries. > > -The :term:`path importer` is a :term:`meta path finder`, so the import > +The :term:`path based finder` is a :term:`meta path finder`, so the import > machinery begins the :term:`import path` search by calling the path > -importer's :meth:`find_module()` method as described previously. When > +based finder's :meth:`find_module()` method as described previously. When > the ``path`` argument to :meth:`find_module()` is given, it will be a > list of string paths to traverse - typically a package's ``__path__`` > attribute for an import within that package. If the ``path`` argument > is ``None``, this indicates a top level import and :data:`sys.path` is > used. > > -The :term:`path importer` iterates over every entry in the search path, > and > +The path based finder iterates over every entry in the search path, and > for each of these, looks for an appropriate :term:`path entry finder` for > the > path entry. Because this can be an expensive operation (e.g. there may be > -`stat()` call overheads for this search), the :term:`path importer` > maintains > +`stat()` call overheads for this search), the path based finder maintains > a cache mapping path entries to path entry finders. This cache is > maintained > -in :data:`sys.path_importer_cache`. In this way, the expensive search > for a > -particular :term:`path entry` location's :term:`path entry finder` need > only > -be done once. User code is free to remove cache entries from > -:data:`sys.path_importer_cache` forcing the :term:`path importer` to > perform > -the path entry search again [#fnpic]_. > +in :data:`sys.path_importer_cache` (despite the name, this cache actually > +stores finder objects rather than being limited to :term:`importer` > objects). > +In this way, the expensive search for a particular :term:`path entry` > +location's :term:`path entry finder` need only be done once. User code is > +free to remove cache entries from :data:`sys.path_importer_cache` forcing > +the path based finder to perform the path entry search again [#fnpic]_. > > -If the path entry is not present in the cache, the path importer iterates > over > -every callable in :data:`sys.path_hooks`. Each of the :term:`path entry > hooks > -` in this list is called with a single argument, the path > -entry being searched. This callable may either return a :term:`path entry > -finder` that can handle the path entry, or it may raise > :exc:`ImportError`. > -An :exc:`ImportError` is used by the path importer to signal that the hook > +If the path entry is not present in the cache, the path based finder > iterates > +over every callable in :data:`sys.path_hooks`. 
Each of the > +:term:`path entry hooks ` in this list is called with a > +single argument, the path entry to be searched. This callable may either > +return a :term:`path entry finder` that can handle the path entry, or it > may > +raise :exc:`ImportError`. > +An :exc:`ImportError` is used by the path based finder to signal that the > hook > cannot find a :term:`path entry finder` for that :term:`path entry`. The > exception is ignored and :term:`import path` iteration continues. > > If :data:`sys.path_hooks` iteration ends with no :term:`path entry finder` > -being returned, then the path importer's :meth:`find_module()` method will > -store ``None`` in :data:`sys.path_importer_cache` (to indicate that there > -is no finder for this path entry) and return ``None``, indicating that > +being returned, then the path based finder's :meth:`find_module()` method > +will store ``None`` in :data:`sys.path_importer_cache` (to indicate that > +there is no finder for this path entry) and return ``None``, indicating > that > this :term:`meta path finder` could not find the module. > > If a :term:`path entry finder` *is* returned by one of the :term:`path > entry > @@ -594,8 +602,8 @@ > must be a sequence, although it can be empty. > > If :meth:`find_loader()` returns a non-``None`` loader value, the portion > is > -ignored and the loader is returned from the path importer, terminating the > -search through the path entries. > +ignored and the loader is returned from the path based finder, terminating > +the search through the path entries. > > For backwards compatibility with other implementations of the import > protocol, many path entry finders also support the same, > @@ -645,9 +653,6 @@ > XXX runpy, pkgutil, et al in the library manual should all get "See Also" > links at the top pointing to the new import system section. > > -XXX The :term:`path importer` is not, in fact, an :term:`importer`. That's > -why the corresponding implementation class is > :class:`importlib.PathFinder`. > - > > References > ========== > @@ -667,8 +672,8 @@ > :pep:`366` describes the addition of the ``__package__`` attribute for > explicit relative imports in main modules. > > -:pep:`328` introduced absolute and relative imports and initially proposed > -``__name__`` for semantics :pep:`366` would eventually specify for > +:pep:`328` introduced absolute and explicit relative imports and initially > +proposed ``__name__`` for semantics :pep:`366` would eventually specify > for > ``__package__``. > > :pep:`338` defines executing modules as scripts. > @@ -679,14 +684,14 @@ > > .. [#fnmo] See :class:`types.ModuleType`. > > -.. [#fnlo] The importlib implementation appears not to use the return > value > +.. [#fnlo] The importlib implementation avoids using the return value > directly. Instead, it gets the module object by looking the module > name up > - in :data:`sys.modules`.) The indirect effect of this is that an > imported > + in :data:`sys.modules`. The indirect effect of this is that an > imported > module may replace itself in :data:`sys.modules`. This is > implementation-specific behavior that is not guaranteed to work in > other > Python implementations. > > .. [#fnpic] In legacy code, it is possible to find instances of > :class:`imp.NullImporter` in the :data:`sys.path_importer_cache`. It > - recommended that code be changed to use ``None`` instead. See > + is recommended that code be changed to use ``None`` instead. See > :ref:`portingpythoncode` for more details. 
> diff --git a/Misc/NEWS b/Misc/NEWS > --- a/Misc/NEWS > +++ b/Misc/NEWS > @@ -55,6 +55,10 @@ > Documentation > ------------- > > +- The "path importer" misnomer has been replaced with Eric Snow's > + more-awkward-but-at-least-not-wrong suggestion of "path based finder" in > + the import system reference docs > + > - Issue #15640: Document importlib.abc.Finder as deprecated. > > - Issue #15630: Add an example for "continue" stmt in the tutorial. Patch > by > > -- > Repository URL: http://hg.python.org/cpython > > _______________________________________________ > Python-checkins mailing list > Python-checkins at python.org > http://mail.python.org/mailman/listinfo/python-checkins > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From solipsis at pitrou.net Mon Aug 20 23:17:10 2012 From: solipsis at pitrou.net (Antoine Pitrou) Date: Mon, 20 Aug 2012 23:17:10 +0200 Subject: [Python-Dev] Raw I/O writelines() broken Message-ID: <20120820231710.192f0c39@pitrou.net> Hello, I was considering a FileIO.writelines() implementation based on writev() and I noticed that the current RawIO.writelines() implementation is broken: RawIO.write() can return a partial write but writelines() ignores the result and happily proceeds to the next iterator item (and None is returned at the end). (it's probably broken with non-blocking streams too, for the same reason) In the spirit of RawIO.write(), I think RawIO.writelines() could return the number of bytes written (allowing for partial writes). Regards Antoine. -- Software development and contracting: http://pro.pitrou.net From solipsis at pitrou.net Mon Aug 20 23:42:56 2012 From: solipsis at pitrou.net (Antoine Pitrou) Date: Mon, 20 Aug 2012 23:42:56 +0200 Subject: [Python-Dev] Raw I/O writelines() broken References: <20120820231710.192f0c39@pitrou.net> Message-ID: <20120820234256.2b0c5908@pitrou.net> On Mon, 20 Aug 2012 23:17:10 +0200 Antoine Pitrou wrote: > > Hello, > > I was considering a FileIO.writelines() implementation based on > writev() and I noticed that the current RawIO.writelines() > implementation is broken: RawIO.write() can return a partial write but > writelines() ignores the result and happily proceeds to the next > iterator item (and None is returned at the end). > > (it's probably broken with non-blocking streams too, for the same > reason) > > In the spirit of RawIO.write(), I think RawIO.writelines() could return > the number of bytes written (allowing for partial writes). Another possibility would be a separate RawIO.writev() that would allow partial writes, and to fix RawIO.writelines() to always do complete writes. Regards Antoine. -- Software development and contracting: http://pro.pitrou.net From rdmurray at bitdance.com Tue Aug 21 02:34:28 2012 From: rdmurray at bitdance.com (R. 
David Murray) Date: Mon, 20 Aug 2012 20:34:28 -0400 Subject: [Python-Dev] Raw I/O writelines() broken In-Reply-To: <20120820234256.2b0c5908@pitrou.net> References: <20120820231710.192f0c39@pitrou.net> <20120820234256.2b0c5908@pitrou.net> Message-ID: <20120821003429.6DF6C2500FE@webabinitio.net> On Mon, 20 Aug 2012 23:42:56 +0200, Antoine Pitrou wrote: > On Mon, 20 Aug 2012 23:17:10 +0200 > Antoine Pitrou wrote: > > > > Hello, > > > > I was considering a FileIO.writelines() implementation based on > > writev() and I noticed that the current RawIO.writelines() > > implementation is broken: RawIO.write() can return a partial write but > > writelines() ignores the result and happily proceeds to the next > > iterator item (and None is returned at the end). > > > > (it's probably broken with non-blocking streams too, for the same > > reason) > > > > In the spirit of RawIO.write(), I think RawIO.writelines() could return > > the number of bytes written (allowing for partial writes). > > Another possibility would be a separate RawIO.writev() that would allow > partial writes, and to fix RawIO.writelines() to always do complete > writes. I think writelines doing a partial write is counter-intuitive in the Python context (as well as being contrary to the existing documentation), so I think I'd favor this. --David From agriff at tin.it Tue Aug 21 02:39:52 2012 From: agriff at tin.it (Andrea Griffini) Date: Tue, 21 Aug 2012 02:39:52 +0200 Subject: [Python-Dev] Raw I/O writelines() broken In-Reply-To: <20120820234256.2b0c5908@pitrou.net> References: <20120820231710.192f0c39@pitrou.net> <20120820234256.2b0c5908@pitrou.net> Message-ID: On Mon, Aug 20, 2012 at 11:42 PM, Antoine Pitrou wrote: >> In the spirit of RawIO.write(), I think RawIO.writelines() could return >> the number of bytes written (allowing for partial writes). When dealing with a non-blocking IO what you normally do is use number returned from the write call to make next call and try to write the remaining part. How is this supposed to work with writelines? What is the caller supposed to do? From cs at zip.com.au Tue Aug 21 02:44:08 2012 From: cs at zip.com.au (Cameron Simpson) Date: Tue, 21 Aug 2012 10:44:08 +1000 Subject: [Python-Dev] Raw I/O writelines() broken In-Reply-To: References: Message-ID: <20120821004408.GA22666@cskk.homeip.net> On 21Aug2012 02:39, Andrea Griffini wrote: | On Mon, Aug 20, 2012 at 11:42 PM, Antoine Pitrou wrote: | >> In the spirit of RawIO.write(), I think RawIO.writelines() could return | >> the number of bytes written (allowing for partial writes). | | When dealing with a non-blocking IO what you normally do is use number | returned from the write call to make next call and try to write the | remaining part. | | How is this supposed to work with writelines? What is the caller supposed to do? I'd expect writelines to include such logic within itself, personally. -- Cameron Simpson Yield to temptation; It may not pass your way again. 
- Robert A Heinlein From ncoghlan at gmail.com Tue Aug 21 03:29:54 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 21 Aug 2012 11:29:54 +1000 Subject: [Python-Dev] [Python-checkins] cpython: Close #4966: revamp the sequence docs in order to better explain the state of In-Reply-To: <50326D6D.3050701@udel.edu> References: <3X0mV86gLXzPcJ@mail.python.org> <50326D6D.3050701@udel.edu> Message-ID: On Tue, Aug 21, 2012 at 3:01 AM, Terry Reedy wrote: > > > On 8/20/2012 3:14 AM, nick.coghlan wrote: >> >> +(5) >> + :meth:`clear` and :meth:`!copy` are included for consistency with the >> + interfaces of mutable containers that don't support slicing operations >> + (such as :class:`dict` and :class:`set`) >> + >> + .. versionadded:: 3.3 >> + :meth:`clear` and :meth:`!copy` methods. > > > Should !copy be copy (both places) or is '!' some markup I don't know about? It means you get the formatting without the cross-reference. I didn't write that bit - it shows up in the diff because it was moved around. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From ncoghlan at gmail.com Tue Aug 21 03:31:38 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 21 Aug 2012 11:31:38 +1000 Subject: [Python-Dev] [Python-checkins] cpython: s/path importer/path based finder/ (because the path based finder is not an In-Reply-To: References: <3X0gxd4vH0zQ5X@mail.python.org> Message-ID: On Tue, Aug 21, 2012 at 3:08 AM, Brett Cannon wrote: > Should that be "path-based"? I don't mind either way. I ain't trawling through to change it everywhere though - I've already done that once this week :) Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From apalala at gmail.com Tue Aug 21 04:19:57 2012 From: apalala at gmail.com (=?ISO-8859-1?Q?=22Juancarlo_A=F1ez_=28Apalala=29=22?=) Date: Mon, 20 Aug 2012 21:49:57 -0430 Subject: [Python-Dev] Jython roadmap Message-ID: <5032F04D.1020204@gmail.com> It seems that Jython is under the Python Foundation, but I can't find a roadmap, a plan, or instructions about how to contribute to it reaching 2.7 and 3.3. Are there any pages that describe the process? Thanks in advance, -- Juanca From martin at v.loewis.de Tue Aug 21 07:34:42 2012 From: martin at v.loewis.de (martin at v.loewis.de) Date: Tue, 21 Aug 2012 07:34:42 +0200 Subject: [Python-Dev] Jython roadmap In-Reply-To: <5032F04D.1020204@gmail.com> References: <5032F04D.1020204@gmail.com> Message-ID: <20120821073442.Horde.SHnrUKGZi1VQMx3ynGDC7nA@webmail.df.eu> Zitat von "Juancarlo A?ez (Apalala)" : > It seems that Jython is under the Python Foundation, but I can't > find a roadmap, a plan, or instructions about how to contribute to > it reaching 2.7 and 3.3. > > Are there any pages that describe the process? Hi Juanca, These questions are best asked on the jython-dev mailing list, see http://sourceforge.net/mail/?group_id=12867 python-dev is primarily focussed on CPython instead. 
There doesn't seem to be much contributor information in the web; all I could find is the bug reporting instructions: http://www.jython.org/docs/bugs.html Regards, Martin From ncoghlan at gmail.com Tue Aug 21 09:47:28 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 21 Aug 2012 17:47:28 +1000 Subject: [Python-Dev] [Python-checkins] cpython: Close #4966: revamp the sequence docs in order to better explain the state of In-Reply-To: <5032EA7E.70107@gmail.com> References: <3X0mV86gLXzPcJ@mail.python.org> <5032EA7E.70107@gmail.com> Message-ID: On Tue, Aug 21, 2012 at 11:55 AM, Ezio Melotti wrote: >> +Sequence Types --- :class:`list`, :class:`tuple`, :class:`range` >> +================================================================ >> + > > > These 3 links in the section title redirect to the functions.html page. I > think it would be better if they linked to the appropriate subsection > instead, and in the case of the subsections (e.g. "Text Sequence Type --- > str") they shouldn't be links. The same comment can be applied to other > titles as well. I made a start on moving the info out of functions.html and adding appropriate noindex entries. str, bytes and bytearray haven't been consolidated at all yet. >> ++--------------------------+--------------------------------+----------+ >> +| ``s * n, n * s`` | *n* shallow copies of *s* | (2)(7) | > > > I would use '``s * n`` or ``n * s``' here. Done. >> +| ``s.index(x, [i[, j]])`` | index of the first occurence | \(8) | > > > This should be ``s.index(x[, i[, j]])`` Done. >> + >> + * if concatenating :class:`tuple` objects, extend a :class:`list` >> instead. >> + >> + * for other types, investigate the relevant class documentation >> + > > > The trailing punctuation of the elements in this list is inconsistent. Just removed all trailing punctuation from these bullet points for now. > > You missed clear() from this list. The problem was actually index() and count() were missing from the index for the "common sequence operations" table. Added them there, and moved that index above the table. copy() was missing from the index list for the mutable sequence methods, so I added that. > Also in the "Result" column the descriptions in prose are OK, but I find > some of the "same as ..." ones not very readable (or even fairly obscure). > (I think I saw something similar in the doc of list.append() too.) These are all rather old - much of this patch was just moving things around rather than fixing the prose, although there was plenty of the latter, too :) I tried to improve them a bit. > Is it worth mentioning a function call as an example of syntactic ambiguity? > Someone might wonder if foo(a, b, c) is actually passing a 3-elements tuple > or 3 distinct values. Done. > This claim is maybe a bit too strong. I think the main reason to use > namedtuples is being able to access the elements via t.name, rather than > t[pos], and while this can be useful for basically every heterogeneous > tuple, I think that plain tuples are still preferred. Reworded. > On a separate note, should tuple unpacking be mentioned here? (a link to a > separate section of the doc is enough.) Not really - despite the name, tuple unpacking isn't especially closely related to tuples these days. > I would mention explicitly "in :keyword:`for` loops" -- ranges don't loop on > their own (I think people familiar with Ruby and/or JQuery might get > confused here). Done. 
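A tiny illustration of the named-access point from the namedtuple exchange above (Point is only an example type): attribute access and indexing coexist, and the result still unpacks like any other tuple.

    from collections import namedtuple

    Point = namedtuple('Point', 'x y')
    p = Point(1, 2)
    assert p.x == p[0] and p.y == p[1]  # named access alongside indexing
    x, y = p                            # unpacks like a plain 2-tuple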
> I thought that these two paragraphs were talking about positive and negative > start/stop/step until I reached the middle of the second paragraph (the word > "indices" wasn't enough to realize that these paragraphs are about > indexing/slicing, probably because they are rarely used and I wasn't > expecting to find them at this point of the doc). Maybe it's better to move > the paragraphs at the bottom of the section. For the moment, I've just dumped the old range builtin docs into this section. They need a pass to remove the duplication and ensure everything makes sense in context. >> +String literals that are part of a single expression and have only >> whitespace >> +between them will be implicitly converted to a single string literal. >> + > > > Is it a string /literal/ they are converted to? Yup: >>> ast.dump(compile('"hello world"', '', 'eval', flags=ast.PyCF_ONLY_AST)) "Expression(body=Str(s='hello world'))" >>> ast.dump(compile('"hello" " world"', '', 'eval', flags=ast.PyCF_ONLY_AST)) "Expression(body=Str(s='hello world'))" > Anyway a simple ('foo' 'bar') == 'foobar' example might make this sentence > more understandable. Added. >> +There is also no mutable string type, but :meth:`str.join` or >> +:class:`io.StringIO` can be used to efficiently construct strings from >> +multiple fragments. >> + > > str.format() deserves to be mentioned here too. For the kinds of strings where quadratic growth is a problem, str.format is unlikely to be appropriate. > I noticed that here there's this fairly long section about the "old" string > formatting and nothing about the "new" formatting. Maybe this should be > moved together with the new formatting doc, so that all the detailed > formatting docs are in the same place. (This would also help making this > less noticeable) Probably. There are a lot of structural problems in the current docs, because the layout hasn't previously changed to suit the language design changes. >> +While bytes literals and representations are based on ASCII text, bytes >> +objects actually behave like immutable sequences of integers, with each >> +value in the sequence restricted such that ``0 <= x < 256`` (attempts to > > Earlier you used 0 <= x <= 255. The current docs are the result of many merges, much of which I didn't write. This is only the start of improving them by breaking away from the old 1.x structure with a few autocratic decisions on my part to establish a new layout that makes more sense given the evolution of the language, especially the big changes in 2.2 and 3.0 :) > Using ``'abc'.replace('a', 'f')`` and ``b'abc'.replace(b'a', b'f')`` inline > would be better IMHO, given that the current note takes lot of space to > explain a trivial concept. Stylistics edits are always fair game, they don't have to be made by me. While I skipped a lot of your specific suggestions that weren't correcting actual errors, that doesn't mean I'm especially opposed to them, just that I didn't like them enough to implement myself :) Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From luc_j_bourhis at mac.com Tue Aug 21 12:24:35 2012 From: luc_j_bourhis at mac.com (Luc Bourhis) Date: Tue, 21 Aug 2012 12:24:35 +0200 Subject: [Python-Dev] Python 2.7: only Visual Studio 2008? 
Message-ID: <7B27FDF2-F302-4969-A236-3D6500806007@mac.com> Greetings, it is my understanding that the patches floating around the net to support Visual Studio 2010 to compile the Python core and for distutils will never be accepted and therefore that the 2.7 line is stuck to VS 2008 for the remaining of its life. Could you please confirm that? Best wishes, Luc From rdmurray at bitdance.com Tue Aug 21 14:01:15 2012 From: rdmurray at bitdance.com (R. David Murray) Date: Tue, 21 Aug 2012 08:01:15 -0400 Subject: [Python-Dev] [Python-checkins] cpython: Close #4966: revamp the sequence docs in order to better explain the state of In-Reply-To: References: <3X0mV86gLXzPcJ@mail.python.org> <5032EA7E.70107@gmail.com> Message-ID: <20120821120116.2FC6A250085@webabinitio.net> On Tue, 21 Aug 2012 17:47:28 +1000, Nick Coghlan wrote: > On Tue, Aug 21, 2012 at 11:55 AM, Ezio Melotti wrote: > >> +String literals that are part of a single expression and have only > >> whitespace > >> +between them will be implicitly converted to a single string literal. > >> + > > > > > > Is it a string /literal/ they are converted to? > Yup: > > >>> ast.dump(compile('"hello world"', '', 'eval', flags=ast.PyCF_ONLY_AST)) > "Expression(body=Str(s='hello world'))" > >>> ast.dump(compile('"hello" " world"', '', 'eval', flags=ast.PyCF_ONLY_AST)) > "Expression(body=Str(s='hello world'))" > > > Anyway a simple ('foo' 'bar') == 'foobar' example might make this sentence > > more understandable. > > Added. I think it is an important and subtle point that this happens at "compile time" rather than "run time". Subtle in that it is not at all obvious (as this question demonstrates), and important in that it does have performance implications (even if those are trivial in most cases). So I think it would be worth saying "implicitly converted to a single string literal when the source is parsed", or something like that. --David From ncoghlan at gmail.com Tue Aug 21 14:07:57 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 21 Aug 2012 22:07:57 +1000 Subject: [Python-Dev] [Python-checkins] cpython: Close #4966: revamp the sequence docs in order to better explain the state of In-Reply-To: <20120821120116.2FC6A250085@webabinitio.net> References: <3X0mV86gLXzPcJ@mail.python.org> <5032EA7E.70107@gmail.com> <20120821120116.2FC6A250085@webabinitio.net> Message-ID: On Tue, Aug 21, 2012 at 10:01 PM, R. David Murray wrote: > I think it is an important and subtle point that this happens at "compile > time" rather than "run time". Subtle in that it is not at all obvious > (as this question demonstrates), and important in that it does have > performance implications (even if those are trivial in most cases). > So I think it would be worth saying "implicitly converted to a single > string literal when the source is parsed", or something like that. That kind of fine detail is what the language reference is for - the distinction really doesn't matter most of the time. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From victor.stinner at gmail.com Tue Aug 21 15:04:03 2012 From: victor.stinner at gmail.com (Victor Stinner) Date: Tue, 21 Aug 2012 15:04:03 +0200 Subject: [Python-Dev] 3.3 str timings In-Reply-To: References: Message-ID: 2012/8/18 Terry Reedy : > The issue came up in python-list about string operations being slower in > 3.3. (The categorical claim is false as some things are actually faster.) 
Yes, some operations are slower, but others are faster :-) There was an important effort to limit the overhead of the PEP 393 (when the branch was merged, most operations were slower). I tried to fix all performance regressions. If you find cases where Python 3.3 is slower, I can investigate and try to optimize it (in Python 3.4) or at least explain why it is slower :-) As said by Antoine, use the stringbench tool if you would like to get a first overview of string performances. > Some things I understand, this one I do not. > > Win7-64, 3.3.0b2 versus 3.2.3 > print(timeit("c in a", "c = '?'; a = 'a'*1000+c")) # ord(c) = 8230 > # .6 in 3.2, 1.2 in 3.3 On Linux with narrow build (UTF-16), I get: $ python3.2 -m timeit -s "c=chr(8230); a='a'*1000+c" "c in a" 100000 loops, best of 3: 4.25 usec per loop $ python3.3 -m timeit -s "c=chr(8230); a='a'*1000+c" "c in a" 100000 loops, best of 3: 3.21 usec per loop Linux-2.6.30.10-105.2.23.fc11.i586-i686-with-fedora-11-Leonidas Python 3.2.2+ (3.2:1453d2fe05bf, Aug 21 2012, 14:21:05) Python 3.3.0b2+ (default:b36ce0a3a844, Aug 21 2012, 14:05:23) I'm not sure that I read your benchmark correctly: you write c='...' and then ord(c)=8230. Algorithms to find a substring are different if the substring is a single character or if the substring is longer. For 1 character, Antoine Pitrou modified the code to use memchr() and memrchr(), even if the string is not UCS1 (if this benchmark, the string uses a UCS2 storage): it may find false positives. > Why is searching for a two-byte char in a two-bytes per char string so much > faster in 3.2? Can you reproduce your benchmark on other Windows platforms? Do you run the benchmark more than once? I always run a benchmark 3 times. I don't like the timeit module for micro benchmarks, it is really unstable (default settings are not written for micro benchmarks). Example of 4 runs on the same platform: $ ./python -m timeit -s "a='a'*1000" "a.encode()" 100000 loops, best of 3: 2.79 usec per loop $ ./python -m timeit -s "a='a'*1000" "a.encode()" 100000 loops, best of 3: 2.61 usec per loop $ ./python -m timeit -s "a='a'*1000" "a.encode()" 100000 loops, best of 3: 3.16 usec per loop $ ./python -m timeit -s "a='a'*1000" "a.encode()" 100000 loops, best of 3: 2.76 usec per loop I wrote my own benchmark tool, based on timeit, to have more stable results on micro benchmarks: https://bitbucket.org/haypo/misc/src/tip/python/benchmark.py Example of 4 runs: 3.18 us: c=chr(8230); a='a'*1000+c; c in a 3.18 us: c=chr(8230); a='a'*1000+c; c in a 3.21 us: c=chr(8230); a='a'*1000+c; c in a 3.18 us: c=chr(8230); a='a'*1000+c; c in a My benchmark.py script calibrates automatically the number of loops to take at least 100 ms, and then repeat the test during at least 1.0 second. Using time instead of a fixed number of loops is more reliable because the test is less dependent on the system activity. > print(timeit("a.encode()", "a = 'a'*1000")) > # 1.5 in 3.2, .26 in 3.3 > > print(timeit("a.encode(encoding='utf-8')", "a = 'a'*1000")) > # 1.7 in 3.2, .51 in 3.3 This test doesn't compare performances of the UTF-8 encoder: "encode" an ASCII string to UTF-8 in Python 3.3 is a no-op, it just duplicates the memory (ASCII is compatible with UTF-8)... So your benchmark just measures the performances of PyArg_ParseTupleAndKeywords()... Try also str.encode('utf-8'). If you want to benchmark the UTF-8 encoder, use at least a non-ASCII character like "\x80". 
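A minimal sketch of that comparison, using an arbitrary non-ASCII character (U+20AC) so the UTF-8 encoder actually has to transcode instead of just copying the bytes:

    from timeit import timeit

    # Pure ASCII: in 3.3 "encoding" to UTF-8 is essentially a memory copy.
    print(timeit("a.encode('utf-8')", "a = 'a'*1000"))
    # One non-ASCII character makes the real UTF-8 encoder do some work.
    print(timeit("a.encode('utf-8')", "a = 'a'*999 + '\u20ac'"))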
At least, your benchmark shows that Python 3.3 is *much* faster than Python 3.2 to "encode" pure ASCII strings to UTF-8 :-) Victor From brian at python.org Tue Aug 21 15:44:37 2012 From: brian at python.org (Brian Curtin) Date: Tue, 21 Aug 2012 08:44:37 -0500 Subject: [Python-Dev] Python 2.7: only Visual Studio 2008? In-Reply-To: <7B27FDF2-F302-4969-A236-3D6500806007@mac.com> References: <7B27FDF2-F302-4969-A236-3D6500806007@mac.com> Message-ID: On Tue, Aug 21, 2012 at 5:24 AM, Luc Bourhis wrote: > Greetings, > > it is my understanding that the patches floating around the net to support Visual Studio 2010 to compile the Python core and for distutils will never be accepted and therefore that the 2.7 line is stuck to VS 2008 for the remaining of its life. Could you please confirm that? This is correct. A compiler upgrade is a feature, so the change to VS2010 could only be applied to the version actively receiving new features, which at the time was 3.3. From amauryfa at gmail.com Tue Aug 21 16:35:03 2012 From: amauryfa at gmail.com (Amaury Forgeot d'Arc) Date: Tue, 21 Aug 2012 16:35:03 +0200 Subject: [Python-Dev] Python 2.7: only Visual Studio 2008? In-Reply-To: References: <7B27FDF2-F302-4969-A236-3D6500806007@mac.com> Message-ID: 2012/8/21 Brian Curtin : > On Tue, Aug 21, 2012 at 5:24 AM, Luc Bourhis wrote: >> Greetings, >> >> it is my understanding that the patches floating around the net to support Visual Studio 2010 to compile the Python core and for distutils will never be accepted and therefore that the 2.7 line is stuck to VS 2008 for the remaining of its life. Could you please confirm that? > > This is correct. A compiler upgrade is a feature, so the change to > VS2010 could only be applied to the version actively receiving new > features, which at the time was 3.3. But this does not prevent anyone from creating and maintaining such a patch, outside of the official python.org repository. -- Amaury Forgeot d'Arc From martin at v.loewis.de Tue Aug 21 16:57:30 2012 From: martin at v.loewis.de (martin at v.loewis.de) Date: Tue, 21 Aug 2012 16:57:30 +0200 Subject: [Python-Dev] Python 2.7: only Visual Studio 2008? In-Reply-To: <7B27FDF2-F302-4969-A236-3D6500806007@mac.com> References: <7B27FDF2-F302-4969-A236-3D6500806007@mac.com> Message-ID: <20120821165730.Horde.4ObJQ7uWis5QM6HayEFDH9A@webmail.df.eu> Zitat von Luc Bourhis : > it is my understanding that the patches floating around the net to > support Visual Studio 2010 to compile the Python core and for > distutils will never be accepted and therefore that the 2.7 line is > stuck to VS 2008 for the remaining of its life. Could you please > confirm that? That is correct, yes. OTOH, Python is free software, so people are free to maintain such patches, and even make binary releases out of them. These just won't be available from python.org. Regards, Martin From martin at v.loewis.de Tue Aug 21 17:01:21 2012 From: martin at v.loewis.de (martin at v.loewis.de) Date: Tue, 21 Aug 2012 17:01:21 +0200 Subject: [Python-Dev] Python 2.7: only Visual Studio 2008? 
In-Reply-To: References: <7B27FDF2-F302-4969-A236-3D6500806007@mac.com> Message-ID: <20120821170121.Horde.ekeyVruWis5QM6LBdOMjMMA@webmail.df.eu> Zitat von Brian Curtin : > On Tue, Aug 21, 2012 at 5:24 AM, Luc Bourhis wrote: >> Greetings, >> >> it is my understanding that the patches floating around the net to >> support Visual Studio 2010 to compile the Python core and for >> distutils will never be accepted and therefore that the 2.7 line is >> stuck to VS 2008 for the remaining of its life. Could you please >> confirm that? > > This is correct. A compiler upgrade is a feature In the specific case, this isn't actually the limiting factor. Instead, it's binary compatibility: binaries compiled with VS 2010 are incompatible (in some cases) with those compiled with VS 2008. So if the python.org binaries were released as compiler outputs from VS 2010, exising extensions modules might crash Python. Therefore, we cannot switch. Maintaining a VS 2010 build process along with the VS 2008 process would be a new feature, indeed. Fortunately, Mercurial makes it easy enough to maintain such patches in a ways that allows simple tracking of changes applied to 2.7 itself, for anybody with enough interest to do so. Regards, Martin From agriff at tin.it Tue Aug 21 17:20:14 2012 From: agriff at tin.it (Andrea Griffini) Date: Tue, 21 Aug 2012 17:20:14 +0200 Subject: [Python-Dev] 3.3 str timings In-Reply-To: References: Message-ID: > My benchmark.py script calibrates automatically the number of loops to > take at least 100 ms, and then repeat the test during at least 1.0 > second. > > Using time instead of a fixed number of loops is more reliable because > the test is less dependent on the system activity. I've also been bitten in the past by something that is probably quite obvious but I didn't think to, that is dynamic cpu frequency. Many modern CPUs can dynamically change the frequency depending on the load and temperature and the switch can take more than one second. When doing benchmarks now I've a small script (based on cpufreq-set) that just blocks all the cores into fast mode. From martin at v.loewis.de Tue Aug 21 17:28:40 2012 From: martin at v.loewis.de (martin at v.loewis.de) Date: Tue, 21 Aug 2012 17:28:40 +0200 Subject: [Python-Dev] 3.3 str timings In-Reply-To: References: Message-ID: <20120821172840.Horde.7zSgZruWis5QM6ko2ocjYRA@webmail.df.eu> >> print(timeit("c in a", "c = '?'; a = 'a'*1000+c")) # ord(c) = 8230 > I'm not sure that I read your benchmark correctly: you write c='...' Apparenly you didn't - or your MUA was not able to display it correctly. He didn't say '...' # U+002E U+002E U+002E, 3x FULL STOP but '?' # U+2026, HORIZONTAL ELLIPSIS Regards, Martin From lists at cheimes.de Tue Aug 21 17:32:01 2012 From: lists at cheimes.de (Christian Heimes) Date: Tue, 21 Aug 2012 17:32:01 +0200 Subject: [Python-Dev] Python 2.7: only Visual Studio 2008? In-Reply-To: <20120821170121.Horde.ekeyVruWis5QM6LBdOMjMMA@webmail.df.eu> References: <7B27FDF2-F302-4969-A236-3D6500806007@mac.com> <20120821170121.Horde.ekeyVruWis5QM6LBdOMjMMA@webmail.df.eu> Message-ID: Am 21.08.2012 17:01, schrieb martin at v.loewis.de: > In the specific case, this isn't actually the limiting factor. > Instead, it's binary compatibility: binaries compiled with VS 2010 > are incompatible (in some cases) with those compiled with VS 2008. > So if the python.org binaries were released as compiler outputs > from VS 2010, exising extensions modules might crash Python. Therefore, > we cannot switch. 
Compatibility issues may lead to other strange bugs, too. IIRC each msvcrt has its own thread local storage and therefore its own errno handling. An extension compiled with VS 2010 won't be able to use the PyErr_SetFromErrno*() function correctly. That's much harder to debug than a FILE pointer mismatch because it usually doesn't cause a segfault. Christian From solipsis at pitrou.net Tue Aug 21 17:53:15 2012 From: solipsis at pitrou.net (Antoine Pitrou) Date: Tue, 21 Aug 2012 17:53:15 +0200 Subject: [Python-Dev] 3.3 str timings References: Message-ID: <20120821175315.3d3f02e5@pitrou.net> On Tue, 21 Aug 2012 17:20:14 +0200 Andrea Griffini wrote: > > My benchmark.py script calibrates automatically the number of loops to > > take at least 100 ms, and then repeat the test during at least 1.0 > > second. > > > > Using time instead of a fixed number of loops is more reliable because > > the test is less dependent on the system activity. > > I've also been bitten in the past by something that is probably quite > obvious but I didn't think to, that is dynamic cpu frequency. Many > modern CPUs can dynamically change the frequency depending on the load > and temperature and the switch can take more than one second. > > When doing benchmarks now I've a small script (based on cpufreq-set) > that just blocks all the cores into fast mode. For the record, under Linux, the following command: $ sudo cpufreq-set -rg performance should do the trick. Regards Antoine. -- Software development and contracting: http://pro.pitrou.net From steve at pearwood.info Tue Aug 21 19:25:21 2012 From: steve at pearwood.info (Steven D'Aprano) Date: Wed, 22 Aug 2012 03:25:21 +1000 Subject: [Python-Dev] 3.3 str timings In-Reply-To: References: Message-ID: <5033C481.2040903@pearwood.info> On 21/08/12 23:04, Victor Stinner wrote: > I don't like the timeit module for micro benchmarks, it is really > unstable (default settings are not written for micro benchmarks). [...] > I wrote my own benchmark tool, based on timeit, to have more stable > results on micro benchmarks: > https://bitbucket.org/haypo/misc/src/tip/python/benchmark.py I am surprised, because the whole purpose of timeit is to time micro code snippets. If it is as unstable as you suggest, and if you have an alternative which is more stable and accurate, I would love to see it in the standard library. -- Steven From python-dev at masklinn.net Tue Aug 21 19:56:16 2012 From: python-dev at masklinn.net (Xavier Morel) Date: Tue, 21 Aug 2012 19:56:16 +0200 Subject: [Python-Dev] 3.3 str timings In-Reply-To: <5033C481.2040903@pearwood.info> References: <5033C481.2040903@pearwood.info> Message-ID: On 21 ao?t 2012, at 19:25, Steven D'Aprano wrote: > On 21/08/12 23:04, Victor Stinner wrote: > >> I don't like the timeit module for micro benchmarks, it is really >> unstable (default settings are not written for micro benchmarks). > [...] >> I wrote my own benchmark tool, based on timeit, to have more stable >> results on micro benchmarks: >> https://bitbucket.org/haypo/misc/src/tip/python/benchmark.py > > I am surprised, because the whole purpose of timeit is to time micro > code snippets. And when invoked from the command-line, it is already time-based: unless -n is specified, python guesstimates the number of iterations to be a power of 10 resulting in at least 0.2s per test (the repeat defaults to 3 though) As a side-note, every time I use timeit programmatically, it annoys me that this behavior is not available and has to be implemented manually. 
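For the record, a minimal sketch of that manual step (Python 3, stdlib only; the run_best name and the 0.2 s target are purely illustrative, mirroring the command-line heuristic described above, and the setup string is the chr(8230) example from earlier in the thread):

import timeit

def run_best(stmt, setup="pass", target=0.2, repeat=3):
    # Grow the loop count by powers of 10 until a single run takes
    # at least `target` seconds, as `python -m timeit` does by default.
    timer = timeit.Timer(stmt, setup)
    number = 1
    while timer.timeit(number) < target:
        number *= 10
    # Then repeat and keep the best run, as the -r option does.
    best = min(timer.repeat(repeat=repeat, number=number))
    return best / number   # seconds per iteration

print(run_best("c in a", setup="c = chr(8230); a = 'a'*1000 + c"))

Using a time threshold rather than a fixed loop count is what keeps the result less sensitive to background system activity, which is the reliability point made earlier in the thread.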
> If it is as unstable as you suggest, and if you have an alternative > which is more stable and accurate, I would love to see it in the > standard library. From stefan_ml at behnel.de Tue Aug 21 20:38:18 2012 From: stefan_ml at behnel.de (Stefan Behnel) Date: Tue, 21 Aug 2012 20:38:18 +0200 Subject: [Python-Dev] 3.3 str timings In-Reply-To: References: <5033C481.2040903@pearwood.info> Message-ID: Xavier Morel, 21.08.2012 19:56: > On 21 ao?t 2012, at 19:25, Steven D'Aprano wrote: >> On 21/08/12 23:04, Victor Stinner wrote: >>> I don't like the timeit module for micro benchmarks, it is really >>> unstable (default settings are not written for micro benchmarks). >> [...] >>> I wrote my own benchmark tool, based on timeit, to have more stable >>> results on micro benchmarks: >>> https://bitbucket.org/haypo/misc/src/tip/python/benchmark.py >> >> I am surprised, because the whole purpose of timeit is to time micro >> code snippets. > > And when invoked from the command-line, it is already time-based: unless > -n is specified, python guesstimates the number of iterations to be a > power of 10 resulting in at least 0.2s per test (the repeat defaults to > 3 though) > > As a side-note, every time I use timeit programmatically, it annoys me > that this behavior is not available and has to be implemented manually. +100, sounds like someone should contribute a patch for this. Stefan From alexander.belopolsky at gmail.com Tue Aug 21 20:39:35 2012 From: alexander.belopolsky at gmail.com (Alexander Belopolsky) Date: Tue, 21 Aug 2012 14:39:35 -0400 Subject: [Python-Dev] 3.3 str timings In-Reply-To: References: <5033C481.2040903@pearwood.info> Message-ID: On Tue, Aug 21, 2012 at 1:56 PM, Xavier Morel wrote: > As a side-note, every time I use timeit programmatically, it annoys me that this behavior is not available and has to be implemented manually. You are not alone: http://bugs.python.org/issue6422 From solipsis at pitrou.net Tue Aug 21 20:41:43 2012 From: solipsis at pitrou.net (Antoine Pitrou) Date: Tue, 21 Aug 2012 20:41:43 +0200 Subject: [Python-Dev] 3.3 str timings References: <5033C481.2040903@pearwood.info> Message-ID: <20120821204143.4f16efe8@pitrou.net> On Wed, 22 Aug 2012 03:25:21 +1000 Steven D'Aprano wrote: > On 21/08/12 23:04, Victor Stinner wrote: > > > I don't like the timeit module for micro benchmarks, it is really > > unstable (default settings are not written for micro benchmarks). > [...] > > I wrote my own benchmark tool, based on timeit, to have more stable > > results on micro benchmarks: > > https://bitbucket.org/haypo/misc/src/tip/python/benchmark.py > > I am surprised, because the whole purpose of timeit is to time micro > code snippets. > > If it is as unstable as you suggest, and if you have an alternative > which is more stable and accurate, I would love to see it in the > standard library. In my experience timeit is stable enough to know whether a change is significant or not. No need for three-digit precision when the question is whether there is at least a 10% performance difference between two approaches. Regards Antoine. -- Software development and contracting: http://pro.pitrou.net From luc_j_bourhis at mac.com Tue Aug 21 19:38:42 2012 From: luc_j_bourhis at mac.com (Luc Bourhis) Date: Tue, 21 Aug 2012 19:38:42 +0200 Subject: [Python-Dev] Python 2.7: only Visual Studio 2008? In-Reply-To: References: <7B27FDF2-F302-4969-A236-3D6500806007@mac.com> Message-ID: Thanks for the quick response. >> [...] 
A compiler upgrade is a feature, so the change to >> VS2010 could only be applied to the version actively receiving new >> features, which at the time was 3.3. > > But this does not prevent anyone from creating and maintaining such a > patch, outside of the official python.org repository. I was contemplating that option indeed. S?bastien Sabl? seemed to have the same aim. Would you know any other such efforts? I would rather prefer to contribute back to the community. Best wishes, Luc Bourhis From storchaka at gmail.com Tue Aug 21 22:36:47 2012 From: storchaka at gmail.com (Serhiy Storchaka) Date: Tue, 21 Aug 2012 23:36:47 +0300 Subject: [Python-Dev] 3.3 str timings In-Reply-To: References: Message-ID: On 19.08.12 00:17, Terry Reedy wrote: > This is one of the 3.3 improvements. But since the results are equal: > ('a'*1000).encode() == ('a'*1000).encode(encoding='utf-8') > and 3.3 should know that for an all-ascii string, I do not see why > adding the parameter should double the the time. Another issue or known > and un-fixable? This is a cost of argument packing/unpacking. From tjreedy at udel.edu Wed Aug 22 00:08:57 2012 From: tjreedy at udel.edu (Terry Reedy) Date: Tue, 21 Aug 2012 18:08:57 -0400 Subject: [Python-Dev] 3.3 str timings In-Reply-To: References: Message-ID: On 8/21/2012 9:04 AM, Victor Stinner wrote: > 2012/8/18 Terry Reedy : >> The issue came up in python-list about string operations being slower in >> 3.3. (The categorical claim is false as some things are actually faster.) > > Yes, some operations are slower, but others are faster :-) Yes, that is what I wrote, showed, and posted to python-list :-) I was and am posting here in response to a certain French writer who dislikes the fact that 3.3 unicode favors text written with the first 256 code points, which do not include all the characters needed for French, and do not include the euro symbol invented years after that set was established. His opinion aside, his search for 'evidence' did turn up a version of the example below. > an important effort to limit the overhead of the PEP 393 (when the > branch was merged, most operations were slower). I tried to fix all > performance regressions. Yes, I read and appreciated the speed-up patches by you and others. > If you find cases where Python 3.3 is slower, > I can investigate and try to optimize it (in Python 3.4) or at least > explain why it is slower :-) Replacement appears to be as much as 6.5 times slower on some Win 7 machines. (I factored out the setup part, which increased the ratio since it takes the same time on both machines.) ttr = timeit.repeat # 3.2.3 >>> ttr("euroreplace('?', '?')", "euroreplace = ('?'*100).replace") [0.385043233078477, 0.35294282203631155, 0.3468394370770511] # 3.3.0b2 >>> ttr("euroreplace('?', '?')", "euroreplace = ('?'*100).replace") [2.2624885911213823, 2.245330314124203, 2.2531118686461014] How do this compare on *nix? > As said by Antoine, use the stringbench tool if you would like to get > a first overview of string performances. I found it, ran it on 3.2 and 3.3, and posted to python-list that 3.3 unicode looks quite good. It is overall comparable to both byte operations and 3.2 unicode operations. Replace operations were relatively the slowest, though I do not remember any as bad as the example above. >> Some things I understand, this one I do not. 
>> >> Win7-64, 3.3.0b2 versus 3.2.3 >> print(timeit("c in a", "c = '?'; a = 'a'*1000+c")) # ord(c) = 8230 >> # .6 in 3.2, 1.2 in 3.3 > > On Linux with narrow build (UTF-16), I get: > > $ python3.2 -m timeit -s "c=chr(8230); a='a'*1000+c" "c in a" > 100000 loops, best of 3: 4.25 usec per loop > $ python3.3 -m timeit -s "c=chr(8230); a='a'*1000+c" "c in a" > 100000 loops, best of 3: 3.21 usec per loop The slowdown seems to be specific to (some?) windows systems. Perhaps we as hitting a difference in the VC2008 and VC2010 compilers or runtimes. Someone on python-list wondered whether the 3.3.0 betas have the same compile optimization settings as 3.2.3 final. Martin? > Can you reproduce your benchmark on other Windows platforms? Do you > run the benchmark more than once? I always run a benchmark 3 times. Always, and now I see the repeat does this for me. > I don't like the timeit module for micro benchmarks, it is really > unstable (default settings are not written for micro benchmarks). I am reporting rounded lowest times. As other said, make timeit better if you can. >> print(timeit("a.encode()", "a = 'a'*1000")) >> # 1.5 in 3.2, .26 in 3.3 >> >> print(timeit("a.encode(encoding='utf-8')", "a = 'a'*1000")) >> # 1.7 in 3.2, .51 in 3.3 > > This test doesn't compare performances of the UTF-8 encoder: "encode" > an ASCII string to UTF-8 in Python 3.3 is a no-op, it just duplicates > the memory (ASCII is compatible with UTF-8)... That is what I thought, and why I was puzzled, ... > So your benchmark just measures the performances of > PyArg_ParseTupleAndKeywords()..., having forgotten about arg processing. I should have factored out the .encode lookup (as I did with .replace). The following suggests that you are correct. The difference, about .3, is independent of the length of string being copied. >>> ttr("aenc()", "aenc = ('a'*10000).encode") [0.588499543029684, 0.5760222493490801, 0.5757037691037112] >>> ttr("aenc(encoding='utf-8')", "aenc = ('a'*10000).encode") [0.8973955632254729, 0.887000380270365, 0.884113153942053] >>> ttr("aenc()", "aenc = ('a'*50000).encode") [3.6618914099180984, 3.650091040467487, 3.6542183723140624] >>> ttr("aenc(encoding='utf-8')", "aenc = ('a'*50000).encode") [3.964849740958016, 3.9363826484832316, 3.937290440151628] -- Terry Jan Reedy From martin at v.loewis.de Wed Aug 22 00:34:19 2012 From: martin at v.loewis.de (martin at v.loewis.de) Date: Wed, 22 Aug 2012 00:34:19 +0200 Subject: [Python-Dev] Python 2.7: only Visual Studio 2008? In-Reply-To: References: <7B27FDF2-F302-4969-A236-3D6500806007@mac.com> Message-ID: <20120822003419.Horde.7ueiObuWis5QNAzrCstnLwA@webmail.df.eu> > I was contemplating that option indeed. S?bastien Sabl? seemed to > have the same aim. Would you know any other such efforts? I believe Kristjan Jonsson has a port as well. Regards, Martin From martin at v.loewis.de Wed Aug 22 00:46:24 2012 From: martin at v.loewis.de (martin at v.loewis.de) Date: Wed, 22 Aug 2012 00:46:24 +0200 Subject: [Python-Dev] 3.3 str timings In-Reply-To: References: Message-ID: <20120822004624.Horde.haccRLuWis5QNA-A5-Z3QKA@webmail.df.eu> Zitat von Terry Reedy : > I was and am posting here in response to a certain French writer who > dislikes the fact that 3.3 unicode favors text written with the > first 256 code points, which do not include all the characters > needed for French, and do not include the euro symbol invented years > after that set was established. His opinion aside, his search for > 'evidence' did turn up a version of the example below. 
I personally don't see a need to "defend" this or any other deliberate change. There is a need to defend changes before they are made, to convince co-contributors and other Python users, this is what the PEP process is good for. One point of the PEP process is that once the PEP is accepted, discussion ought to stop - or anybody continuing in discussion doesn't deserve an answer by anybody not interested. Anybody who doesn't like the change is free not to use Python 3.3, or stay at 2.7, use PyPy, or switch to Ruby altogether. Neither bothers me to the slightest. If people find proper bugs, they are encouraged to report them; if they contribute patches along, the better. If they merely want to complain - let them complain. If they want to see an agreed-upon patch reverted, they can try to lobby a BDFL pronouncement. I certainly think the performance of str in 3.3 is fine, and thought so even before Serhiy or Victor submitted their patches. I actually dislike some of the code complication that these improvements brought, but I can accept that a certain loss of maintainability that gives better performance makes a lot of people happy. But I will continue to object further complications that support irrelevant special cases. Regards, Martin From "ja...py" at farowl.co.uk Wed Aug 22 00:17:07 2012 From: "ja...py" at farowl.co.uk (Jeff Allen) Date: Tue, 21 Aug 2012 23:17:07 +0100 Subject: [Python-Dev] Jython roadmap In-Reply-To: <20120821073442.Horde.SHnrUKGZi1VQMx3ynGDC7nA@webmail.df.eu> References: <5032F04D.1020204@gmail.com> <20120821073442.Horde.SHnrUKGZi1VQMx3ynGDC7nA@webmail.df.eu> Message-ID: <503408E3.8090101@farowl.co.uk> On 21/08/2012 06:34, martin at v.loewis.de wrote: > > Zitat von "Juancarlo A?ez (Apalala)" : > >> It seems that Jython is under the Python Foundation, but I can't find >> a roadmap, a plan, or instructions about how to contribute to it >> reaching 2.7 and 3.3. >> >> Are there any pages that describe the process? > > Hi Juanca, > > These questions are best asked on the jython-dev mailing list, see > Hi Juancarlo: I'm cross-posting this for you on jython-dev as Martin is right. Let's continue there. Jython does need new helpers and I agree it isn't very easy to get started. And we could do with a published roadmap. I began by fixing a few bugs (about a year ago now), as that seemed to be the suggestion on-line and patches can be offered unilaterally. (After a bit of nagging) some of these got reviewed and I'd won my spurs. I found the main difficulty to be understanding the source, or rather the architecture: there is too little documentation and some of what you can find is out of date (svn?). A lot of basic stuff is still a complete mystery to me. As I've discovered things I've put them on the Jython Wiki ( http://wiki.python.org/jython/JythonDeveloperGuide ) in the hope of speeding others' entry, including up-to-date description of how to get the code to build in Eclipse. One place to look, that may not occur to you immediately, is Frank Wierzbicki's blog ( http://fwierzbicki.blogspot.co.uk/ ). Frank is the project manager for Jython, an author of the Jython book, and has worked like a Trojan (the good kind, not the horse) over the last 6 months. Although Frank has shared inklings of a roadmap, it must be difficult to put dates to things that depend on a small pool of volunteers working in their spare time -- especially perfectionist volunteers who write more Javadoc than actual code, then delete it all because they've had a better idea :-). 
Direction of travel is easier: 2.5.3 is out, we're trying to get to 2.7b, but with an eye on 3.3. I haven't seen anything systematic on what's still to do, who's doing it, and where the gaps are, which is probably what you're looking for. ... Frank? Jeff Allen From trent at snakebite.org Thu Aug 23 00:28:16 2012 From: trent at snakebite.org (Trent Nelson) Date: Wed, 22 Aug 2012 18:28:16 -0400 Subject: [Python-Dev] Snakebite build slaves and developer SSH/GPG public keys Message-ID: <20120822222816.GF42732@snakebite.org> Hi folks, I've set up a bunch of Snakebite build slaves over the past week. One of the original goals was to provide Python committers with full access to the slaves, which I'm still keen on providing. What's a nice simple way to achieve that in the interim? Here's what I was thinking: - Create a new hg repo: hg.python.org/keys. - Committers can push to it just like any other repo (i.e. same ssh/authz configuration as cpython). - Repo is laid out as follows: keys/ / ssh (ssh public key) gpg (gpg public key) - Prime the repo with the current .ssh/authorized_keys (presuming you still use the --tunnel-user facility?). That'll provide me with everything I need to set up the relevant .ssh/authorized_keys stuff on the Snakebite side. GPG keys will be handy if I ever need to send passwords over e-mail (which I'll probably have to do initially for those that want to RDP into the Windows slaves). Thoughts? As for the slaves, here's what's up and running now: - AMD64 Mountain Lion [SB] - AMD64 FreeBSD 8.2 [SB] - AMD64 FreeBSD 9.1 [SB] - AMD64 NetBSD 5.1.2 [SB] - AMD64 OpenBSD 5.1 [SB] - AMD64 DragonFlyBSD 3.0.2 [SB] - AMD64 Windows Server 2008 R2 SP1 [SB] - x86 NetBSD 5.1.2 [SB] - x86 OpenBSD 5.1 [SB] - x86 DragonFlyBSD 3.0.2 [SB] - x86 Windows Server 2003 R2 SP2 [SB] - x86 Windows Server 2008 R2 SP1 [SB] All the FreeBSD ones use ZFS, all the DragonFly ones use HAMMER. DragonFly, NetBSD and OpenBSD are currently reporting all sorts of weird and wonderful errors, which is partly why I want to set up ssh access sooner rather than later. Other slaves on the horizon (i.e. hardware is up, OS is installed): - Windows 8 x64 (w/ VS2010 and VS2012) - HP-UX 11iv2 PA-RISC - HP-UX 11iv3 Itanium (64GB RAM) - AIX 5.3 RS/6000 - AIX 6.1 RS/6000 - AIX 7.1 RS/6000 - Solaris 9 SPARC - Solaris 10 SPARC Nostalgia slaves that probably won't ever see green: - IRIX 6.5.33 MIPS - Tru64 5.1B Alpha If anyone wants ssh access now to the UNIX platforms in order to debug/test, feel free to e-mail me directly with your ssh public keys. For committers on other Python projects like Buildbot, Django and Twisted that may be reading this -- yes, the plan is to give you guys Snakebite access/slaves down the track too. I'll start looking into that after I've finished setting up the remaining slaves for Python. (Setting up a keys repo will definitely help (doesn't have to be hg -- feel free to use svn/git/whatever, just try and follow the same layout).) Regards, Trent "that-took-a-bit-longer-than-expected" Nelson. From tjreedy at udel.edu Thu Aug 23 00:30:27 2012 From: tjreedy at udel.edu (Terry Reedy) Date: Wed, 22 Aug 2012 18:30:27 -0400 Subject: [Python-Dev] root@python doc cron job failure messages Message-ID: root at python is indirectly trying to send doc cron job failure messages to the python-checkings list. headers below. They are caught and held for moderation since "Blind carbon copies or other implicit destinations are not allowed." 
I think it is a mistake to send these messages to checkins, which has enough checkins traffic already, but I do not know who is responsible to fix the situation. The last two examples: "home/docs/devguide/documenting.rst:773: WARNING: term not in glossary: bytecode" "abort: error: Connection timed out" Headers: Return-Path: <docs at python.org> X-Original-To: python-checkins at python.org Delivered-To: python-checkins at mail.python.org Received: from albatross.python.org (localhost [127.0.0.1]) by mail.python.org (Postfix) with ESMTP id 3X2G3z3xCNzQjK; Wed, 22 Aug 2012 19:30:23 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=python.org; s=200901; t=1345656623; bh=conhuN6h+7FXE7LPMr0jHBM5W+Bs5Ld9a8QDgyfQyA4=; h=Date:Message-Id:From:To:Subject:Content-Type; b=lVY4n5KqDW1Qzzy4ngaHTMcO7wCbBlDQzSPWDqaNsUGwrBrcjtY1X8+hiDNsDxUA/ A/wYxK1w887LE2mbzqzONtg2zoUau0cvTvG52sg0aXHqWLidRNbvJZ3WxYeYSC1ph/ pK5u6M9JBd5a1HOiyiTOA5uTu6DWXATy04FTkjdM= X-Spam-Status: OK 0.009 X-Spam-Evidence: '*H*': 0.98; '*S*': 0.00; 'received:dinsdale.python.org': 0.03; 'error:': 0.05; 'subject:build': 0.07; 'subject: <': 0.09; 'message- id:@dinsdale.python.org': 0.16; 'subject:home': 0.16; 'timed': 0.16; 'from:addr:python.org': 0.17; 'subject:/': 0.28; 'connection': 0.30; 'received:python.org': 0.31; 'received:org': 0.36; 'subject:-': 0.40; 'header:Message-Id:1': 0.62; 'to:addr:docs': 0.68; 'subject:@': 0.81 Received: from localhost (HELO mail.python.org) (127.0.0.1) by albatross.python.org with SMTP; 22 Aug 2012 19:30:23 +0200 Received: from dinsdale.python.org (svn.python.org [IPv6:2001:888:2000:d::a4]) (using TLSv1 with cipher AES256-SHA (256/256 bits)) (No client certificate requested) by mail.python.org (Postfix) with ESMTPS; Wed, 22 Aug 2012 19:30:23 +0200 (CEST) Received: from docs by dinsdale.python.org with local (Exim 4.72) (envelope-from <docs at dinsdale.python.org>) id 1T4El5-0007tw-4K for docs at dinsdale.python.org; Wed, 22 Aug 2012 19:30:23 +0200 Date: Wed, 22 Aug 2012 19:30:23 +0200 Message-Id: <E1T4El5-0007tw-4K at dinsdale.python.org> From: root at python.org (Cron Daemon) To: docs at dinsdale.python.org Subject: Cron <docs at dinsdale> /home/docs/build-devguide Content-Type: text/plain; charset=UTF-8 X-Cron-Env: <SHELL=/bin/sh> X-Cron-Env: <HOME=/home/docs> X-Cron-Env: <PATH=/usr/bin:/bin> X-Cron-Env: <LOGNAME=docs> -- Terry Jan Reedy From noah at coderanger.net Thu Aug 23 00:53:34 2012 From: noah at coderanger.net (Noah Kantrowitz) Date: Thu, 23 Aug 2012 10:53:34 +1200 Subject: [Python-Dev] [Infrastructure] Snakebite build slaves and developer SSH/GPG public keys In-Reply-To: <20120822222816.GF42732@snakebite.org> References: <20120822222816.GF42732@snakebite.org> Message-ID: <8EE1B5A3-B1C2-436F-8CE4-99E016DE4408@coderanger.net> For everyone with a record in the Chef server (read: everyone with SSH access to any of the PSF servers at OSL) I can easily give you automated access. Whats the easiest format? I can give you a Python script that will spit out files or JSON or more or less whatever else you want. --Noah On Aug 23, 2012, at 10:28 AM, Trent Nelson wrote: > Hi folks, > > I've set up a bunch of Snakebite build slaves over the past week. > One of the original goals was to provide Python committers with > full access to the slaves, which I'm still keen on providing. > > What's a nice simple way to achieve that in the interim? Here's > what I was thinking: > > - Create a new hg repo: hg.python.org/keys. > > - Committers can push to it just like any other repo (i.e. 
> same ssh/authz configuration as cpython). > > - Repo is laid out as follows: > keys/ > / > ssh (ssh public key) > gpg (gpg public key) > > - Prime the repo with the current .ssh/authorized_keys > (presuming you still use the --tunnel-user facility?). > > That'll provide me with everything I need to set up the relevant > .ssh/authorized_keys stuff on the Snakebite side. GPG keys will > be handy if I ever need to send passwords over e-mail (which I'll > probably have to do initially for those that want to RDP into the > Windows slaves). > > Thoughts? > > As for the slaves, here's what's up and running now: > > - AMD64 Mountain Lion [SB] > - AMD64 FreeBSD 8.2 [SB] > - AMD64 FreeBSD 9.1 [SB] > - AMD64 NetBSD 5.1.2 [SB] > - AMD64 OpenBSD 5.1 [SB] > - AMD64 DragonFlyBSD 3.0.2 [SB] > - AMD64 Windows Server 2008 R2 SP1 [SB] > - x86 NetBSD 5.1.2 [SB] > - x86 OpenBSD 5.1 [SB] > - x86 DragonFlyBSD 3.0.2 [SB] > - x86 Windows Server 2003 R2 SP2 [SB] > - x86 Windows Server 2008 R2 SP1 [SB] > > All the FreeBSD ones use ZFS, all the DragonFly ones use HAMMER. > DragonFly, NetBSD and OpenBSD are currently reporting all sorts > of weird and wonderful errors, which is partly why I want to set > up ssh access sooner rather than later. > > Other slaves on the horizon (i.e. hardware is up, OS is installed): > > - Windows 8 x64 (w/ VS2010 and VS2012) > - HP-UX 11iv2 PA-RISC > - HP-UX 11iv3 Itanium (64GB RAM) > - AIX 5.3 RS/6000 > - AIX 6.1 RS/6000 > - AIX 7.1 RS/6000 > - Solaris 9 SPARC > - Solaris 10 SPARC > > Nostalgia slaves that probably won't ever see green: > - IRIX 6.5.33 MIPS > - Tru64 5.1B Alpha > > If anyone wants ssh access now to the UNIX platforms in order to > debug/test, feel free to e-mail me directly with your ssh public > keys. > > For committers on other Python projects like Buildbot, Django and > Twisted that may be reading this -- yes, the plan is to give you > guys Snakebite access/slaves down the track too. I'll start looking > into that after I've finished setting up the remaining slaves for > Python. (Setting up a keys repo will definitely help (doesn't have > to be hg -- feel free to use svn/git/whatever, just try and follow > the same layout).) > > Regards, > > Trent "that-took-a-bit-longer-than-expected" Nelson. > ________________________________________________ > Infrastructure mailing list > Infrastructure at python.org > http://mail.python.org/mailman/listinfo/infrastructure > Unsubscribe: http://mail.python.org/mailman/options/infrastructure/noah%40coderanger.net -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 203 bytes Desc: Message signed with OpenPGP using GPGMail URL: From ncoghlan at gmail.com Thu Aug 23 01:03:59 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 23 Aug 2012 09:03:59 +1000 Subject: [Python-Dev] Snakebite build slaves and developer SSH/GPG public keys In-Reply-To: <20120822222816.GF42732@snakebite.org> References: <20120822222816.GF42732@snakebite.org> Message-ID: On Thu, Aug 23, 2012 at 8:28 AM, Trent Nelson wrote: > Hi folks, > > I've set up a bunch of Snakebite build slaves over the past week. > One of the original goals was to provide Python committers with > full access to the slaves, which I'm still keen on providing. > > What's a nice simple way to achieve that in the interim? Here's > what I was thinking: > > - Create a new hg repo: hg.python.org/keys. > > - Committers can push to it just like any other repo (i.e. 
> same ssh/authz configuration as cpython). > > - Repo is laid out as follows: > keys/ > / > ssh (ssh public key) > gpg (gpg public key) > > - Prime the repo with the current .ssh/authorized_keys > (presuming you still use the --tunnel-user facility?). Make ssh and gpg directories and this sounds like a usefully secure way to allow us to add extra keys (currently, there's a security hole in the fact that requests to change our registered ssh key for access are not themselves authenticated electronically) Also, nice work on getting to this point, even though it turned out to be a lot more work than you originally anticipated! Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From brett at python.org Thu Aug 23 01:52:54 2012 From: brett at python.org (Brett Cannon) Date: Wed, 22 Aug 2012 19:52:54 -0400 Subject: [Python-Dev] [Infrastructure] Snakebite build slaves and developer SSH/GPG public keys In-Reply-To: References: <20120822222816.GF42732@snakebite.org> Message-ID: On Wed, Aug 22, 2012 at 7:03 PM, Nick Coghlan wrote: > On Thu, Aug 23, 2012 at 8:28 AM, Trent Nelson wrote: > > Hi folks, > > > > I've set up a bunch of Snakebite build slaves over the past week. > > One of the original goals was to provide Python committers with > > full access to the slaves, which I'm still keen on providing. > > > > What's a nice simple way to achieve that in the interim? Here's > > what I was thinking: > > > > - Create a new hg repo: hg.python.org/keys. > > > > - Committers can push to it just like any other repo (i.e. > > same ssh/authz configuration as cpython). > > > > - Repo is laid out as follows: > > keys/ > > / > > ssh (ssh public key) > > gpg (gpg public key) > > > > - Prime the repo with the current .ssh/authorized_keys > > (presuming you still use the --tunnel-user facility?). > > Make ssh and gpg directories and this sounds like a usefully secure > way to allow us to add extra keys (currently, there's a security hole > in the fact that requests to change our registered ssh key for access > are not themselves authenticated electronically) > Screw security, it would mean ssh keys would be self-serve! =) No more having to email an alias that bugs Georg and Antoine to add a key when you can do it yourself (or for the person who you nominated to gain commit access). This assumes, of course, that Georg, Antoine, and Martin are cool with this can get some hook set up to make this work with our current setup. > > Also, nice work on getting to this point, even though it turned out to > be a lot more work than you originally anticipated! > I expect a TIP BoF update at PyCon US 2013 or else I consider this an early April Fool's joke. =) -------------- next part -------------- An HTML attachment was scrubbed... URL: From rdmurray at bitdance.com Thu Aug 23 02:43:49 2012 From: rdmurray at bitdance.com (R. David Murray) Date: Wed, 22 Aug 2012 20:43:49 -0400 Subject: [Python-Dev] [Infrastructure] Snakebite build slaves and developer SSH/GPG public keys In-Reply-To: <8EE1B5A3-B1C2-436F-8CE4-99E016DE4408@coderanger.net> References: <20120822222816.GF42732@snakebite.org> <8EE1B5A3-B1C2-436F-8CE4-99E016DE4408@coderanger.net> Message-ID: <20120823004349.BDEF8250168@webabinitio.net> On Thu, 23 Aug 2012 10:53:34 +1200, Noah Kantrowitz wrote: > For everyone with a record in the Chef server (read: everyone with SSH access to any of the PSF servers at OSL) I can easily give you automated access. Whats the easiest format? 
I can give you a Python script that will spit out files or JSON or more or less whatever else you want. That isn't going to be the right set of keys for Trent's purposes (though it is likely to be a subset). The keyfile we use for the hg repository is. --David From martin at v.loewis.de Thu Aug 23 09:17:32 2012 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Thu, 23 Aug 2012 09:17:32 +0200 Subject: [Python-Dev] [Infrastructure] Snakebite build slaves and developer SSH/GPG public keys In-Reply-To: References: <20120822222816.GF42732@snakebite.org> Message-ID: <5035D90C.5050903@v.loewis.de> On 23.08.2012 01:03, Nick Coghlan wrote: > currently, there's a security hole > in the fact that requests to change our registered ssh key for access > are not themselves authenticated electronically Indeed, we should start requesting birth certificates, else someone claiming to be Reinhold Birkenfeld may get commit access :-) Regards, Martin From chris at simplistix.co.uk Thu Aug 23 08:46:04 2012 From: chris at simplistix.co.uk (Chris Withers) Date: Thu, 23 Aug 2012 07:46:04 +0100 Subject: [Python-Dev] bug in tarfile module? Message-ID: <5035D1AC.2010001@simplistix.co.uk> Hi All, This feels like a bug, but just wanted to check here before filing a report if I've missed something: buzzkill$ python2.7 Enthought Python Distribution -- www.enthought.com Version: 7.2-2 (32-bit) Python 2.7.2 |EPD 7.2-2 (32-bit)| (default, Sep 7 2011, 09:16:50) [GCC 4.0.1 (Apple Inc. build 5493)] on darwin Type "packages", "demo" or "enthought" for more information. >>> import tarfile >>> source = open('/src/Python-2.6.7.tgz', 'rb') >>> tar = tarfile.open(fileobj=source, mode='r|*') >>> member = tar.extractfile('Python-2.6.7/Lib/genericpath.py') >>> data = member.read() Traceback (most recent call last): File "", line 1, in File "/Library/Frameworks/Python.framework/Versions/7.2/lib/python2.7/tarfile.py", line 815, in read buf += self.fileobj.read() File "/Library/Frameworks/Python.framework/Versions/7.2/lib/python2.7/tarfile.py", line 735, in read return self.readnormal(size) File "/Library/Frameworks/Python.framework/Versions/7.2/lib/python2.7/tarfile.py", line 742, in readnormal self.fileobj.seek(self.offset + self.position) File "/Library/Frameworks/Python.framework/Versions/7.2/lib/python2.7/tarfile.py", line 554, in seek raise StreamError("seeking backwards is not allowed") tarfile.StreamError: seeking backwards is not allowed The key is the "mode='r*|" which I understood to be specifically for reading blocks from a stream without seeking that would cause problems. I've reproduced on Py26 and Py27 on Mac OS X, and Py26 on SUSE. Thoughts? Chris -- Simplistix - Content Management, Batch Processing & Python Consulting - http://www.simplistix.co.uk From georg at python.org Thu Aug 23 09:22:20 2012 From: georg at python.org (Georg Brandl) Date: Thu, 23 Aug 2012 09:22:20 +0200 Subject: [Python-Dev] [Infrastructure] Snakebite build slaves and developer SSH/GPG public keys In-Reply-To: <5035D90C.5050903@v.loewis.de> References: <20120822222816.GF42732@snakebite.org> <5035D90C.5050903@v.loewis.de> Message-ID: <5035DA2C.4030308@python.org> On 23.08.2012 09:17, "Martin v. 
L?wis" wrote: > On 23.08.2012 01:03, Nick Coghlan wrote: >> currently, there's a security hole >> in the fact that requests to change our registered ssh key for access >> are not themselves authenticated electronically > > Indeed, we should start requesting birth certificates, else someone > claiming to be Reinhold Birkenfeld may get commit access :-) Of course, even then it might be a political problem for us if Barack Obama wants to join as a core contributor if he happens to be voted out of office and gets bored ;) cheers, Georg From martin at v.loewis.de Thu Aug 23 09:24:33 2012 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Thu, 23 Aug 2012 09:24:33 +0200 Subject: [Python-Dev] [Infrastructure] Snakebite build slaves and developer SSH/GPG public keys In-Reply-To: <20120823004349.BDEF8250168@webabinitio.net> References: <20120822222816.GF42732@snakebite.org> <8EE1B5A3-B1C2-436F-8CE4-99E016DE4408@coderanger.net> <20120823004349.BDEF8250168@webabinitio.net> Message-ID: <5035DAB1.40906@v.loewis.de> On 23.08.2012 02:43, R. David Murray wrote: > On Thu, 23 Aug 2012 10:53:34 +1200, Noah Kantrowitz wrote: >> For everyone with a record in the Chef server (read: everyone with SSH access to any of the PSF servers at OSL) I can easily give you automated access. Whats the easiest format? I can give you a Python script that will spit out files or JSON or more or less whatever else you want. > > That isn't going to be the right set of keys for Trent's purposes > (though it is likely to be a subset). The keyfile we use for the hg > repository is. ... for which it would be easiest if we give Trent access to the repository storing these keys. I'm a bit hesitant to put "public" keys into the real world-wide public, given the past history of easily-breakable public keys. For PGP, this is less of a concern than for SSH, since the threats are smaller (plus users where aware that they might have to publish the key when they created it). Regards, Martin From trent at snakebite.org Thu Aug 23 10:05:49 2012 From: trent at snakebite.org (Trent Nelson) Date: Thu, 23 Aug 2012 04:05:49 -0400 Subject: [Python-Dev] [Infrastructure] Snakebite build slaves and developer SSH/GPG public keys In-Reply-To: <5035DAB1.40906@v.loewis.de> References: <20120822222816.GF42732@snakebite.org> <8EE1B5A3-B1C2-436F-8CE4-99E016DE4408@coderanger.net> <20120823004349.BDEF8250168@webabinitio.net> <5035DAB1.40906@v.loewis.de> Message-ID: <20120823080549.GH42732@snakebite.org> On Thu, Aug 23, 2012 at 12:24:33AM -0700, "Martin v. L?wis" wrote: > On 23.08.2012 02:43, R. David Murray wrote: > > On Thu, 23 Aug 2012 10:53:34 +1200, Noah Kantrowitz wrote: > >> For everyone with a record in the Chef server (read: everyone with SSH access to any of the PSF servers at OSL) I can easily give you automated access. Whats the easiest format? I can give you a Python script that will spit out files or JSON or more or less whatever else you want. > > > > That isn't going to be the right set of keys for Trent's purposes > > (though it is likely to be a subset). The keyfile we use for the hg > > repository is. > > ... for which it would be easiest if we give Trent access to the > repository storing these keys. > > I'm a bit hesitant to put "public" keys into the real world-wide > public, given the past history of easily-breakable public keys. > For PGP, this is less of a concern than for SSH, since the threats > are smaller (plus users where aware that they might have to publish > the key when they created it). Hmmm. 
So, from my perspective, I have the following goals: - Commit access to ssh://hg.python.org implies access to `ssh cpython at snakebite` via the exact same key. - No extra administrative overhead/burden on infrastructure@ (with regards to ssh key management, i.e. an entry in .ssh/ authorized_keys should be sufficient for implicit snakebite access). Factoring in your (valid) security concerns, here's my altered proposal: - Let's just call the repo 'snakebite', and have it accessible only to ssh committers, no public http access. Calling it something generic like 'keys' may invite phantom requirements like being able to store multiple identities/keys etc. I don't need that for snakebite; one ssh key and one optional gpg key is all I want :-) - Same repo layout as before -- GPG keys not required unless I need to send you something encrypted via e-mail (RDP is the only use case I can think of for this). - I'll whip up the glue to take our current .ssh/authz and dump it into the 'snakebite' repo. We can refine that process down the track (with automation and whatnot). If there are no objections I can take this offline with inf at . Trent. From g.rodola at gmail.com Thu Aug 23 11:09:14 2012 From: g.rodola at gmail.com (=?ISO-8859-1?Q?Giampaolo_Rodol=E0?=) Date: Thu, 23 Aug 2012 11:09:14 +0200 Subject: [Python-Dev] Snakebite build slaves and developer SSH/GPG public keys In-Reply-To: <20120822222816.GF42732@snakebite.org> References: <20120822222816.GF42732@snakebite.org> Message-ID: > For committers on other Python projects like Buildbot, Django and > Twisted that may be reading this -- yes, the plan is to give you > guys Snakebite access/slaves down the track too. I'll start looking > into that after I've finished setting up the remaining slaves for > Python. (Setting up a keys repo will definitely help (doesn't have > to be hg -- feel free to use svn/git/whatever, just try and follow > the same layout).) This is so great! I've been looking forward to this for a long time and kept visiting the site every now and then to see if there was any progress. I'd surely use this for psutil if you'll let me. Also, at some point I would suggest to introduce the possibility to donate some money in order to help supporting what I think must be a pretty complex infrastructure requiring a lot of resources, both in terms of hardware and time/labor. Regards --- Giampaolo http://code.google.com/p/pyftpdlib/ http://code.google.com/p/psutil/ http://code.google.com/p/pysendfile/ 2012/8/23 Trent Nelson : > Hi folks, > > I've set up a bunch of Snakebite build slaves over the past week. > One of the original goals was to provide Python committers with > full access to the slaves, which I'm still keen on providing. > > What's a nice simple way to achieve that in the interim? Here's > what I was thinking: > > - Create a new hg repo: hg.python.org/keys. > > - Committers can push to it just like any other repo (i.e. > same ssh/authz configuration as cpython). > > - Repo is laid out as follows: > keys/ > / > ssh (ssh public key) > gpg (gpg public key) > > - Prime the repo with the current .ssh/authorized_keys > (presuming you still use the --tunnel-user facility?). > > That'll provide me with everything I need to set up the relevant > .ssh/authorized_keys stuff on the Snakebite side. GPG keys will > be handy if I ever need to send passwords over e-mail (which I'll > probably have to do initially for those that want to RDP into the > Windows slaves). > > Thoughts? 
> > As for the slaves, here's what's up and running now: > > - AMD64 Mountain Lion [SB] > - AMD64 FreeBSD 8.2 [SB] > - AMD64 FreeBSD 9.1 [SB] > - AMD64 NetBSD 5.1.2 [SB] > - AMD64 OpenBSD 5.1 [SB] > - AMD64 DragonFlyBSD 3.0.2 [SB] > - AMD64 Windows Server 2008 R2 SP1 [SB] > - x86 NetBSD 5.1.2 [SB] > - x86 OpenBSD 5.1 [SB] > - x86 DragonFlyBSD 3.0.2 [SB] > - x86 Windows Server 2003 R2 SP2 [SB] > - x86 Windows Server 2008 R2 SP1 [SB] > > All the FreeBSD ones use ZFS, all the DragonFly ones use HAMMER. > DragonFly, NetBSD and OpenBSD are currently reporting all sorts > of weird and wonderful errors, which is partly why I want to set > up ssh access sooner rather than later. > > Other slaves on the horizon (i.e. hardware is up, OS is installed): > > - Windows 8 x64 (w/ VS2010 and VS2012) > - HP-UX 11iv2 PA-RISC > - HP-UX 11iv3 Itanium (64GB RAM) > - AIX 5.3 RS/6000 > - AIX 6.1 RS/6000 > - AIX 7.1 RS/6000 > - Solaris 9 SPARC > - Solaris 10 SPARC > > Nostalgia slaves that probably won't ever see green: > - IRIX 6.5.33 MIPS > - Tru64 5.1B Alpha > > If anyone wants ssh access now to the UNIX platforms in order to > debug/test, feel free to e-mail me directly with your ssh public > keys. > > For committers on other Python projects like Buildbot, Django and > Twisted that may be reading this -- yes, the plan is to give you > guys Snakebite access/slaves down the track too. I'll start looking > into that after I've finished setting up the remaining slaves for > Python. (Setting up a keys repo will definitely help (doesn't have > to be hg -- feel free to use svn/git/whatever, just try and follow > the same layout).) > > Regards, > > Trent "that-took-a-bit-longer-than-expected" Nelson. > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/g.rodola%40gmail.com From petri at digip.org Thu Aug 23 11:25:26 2012 From: petri at digip.org (Petri Lehtinen) Date: Thu, 23 Aug 2012 12:25:26 +0300 Subject: [Python-Dev] bug in tarfile module? In-Reply-To: <5035D1AC.2010001@simplistix.co.uk> References: <5035D1AC.2010001@simplistix.co.uk> Message-ID: <20120823092526.GA13939@chang> Chris Withers wrote: > Hi All, > > This feels like a bug, but just wanted to check here before filing a > report if I've missed something: > > buzzkill$ python2.7 > Enthought Python Distribution -- www.enthought.com > Version: 7.2-2 (32-bit) > > Python 2.7.2 |EPD 7.2-2 (32-bit)| (default, Sep 7 2011, 09:16:50) > [GCC 4.0.1 (Apple Inc. build 5493)] on darwin > Type "packages", "demo" or "enthought" for more information. 
> >>> import tarfile > >>> source = open('/src/Python-2.6.7.tgz', 'rb') > >>> tar = tarfile.open(fileobj=source, mode='r|*') > >>> member = tar.extractfile('Python-2.6.7/Lib/genericpath.py') > >>> data = member.read() > Traceback (most recent call last): > File "", line 1, in > File "/Library/Frameworks/Python.framework/Versions/7.2/lib/python2.7/tarfile.py", > line 815, in read > buf += self.fileobj.read() > File "/Library/Frameworks/Python.framework/Versions/7.2/lib/python2.7/tarfile.py", > line 735, in read > return self.readnormal(size) > File "/Library/Frameworks/Python.framework/Versions/7.2/lib/python2.7/tarfile.py", > line 742, in readnormal > self.fileobj.seek(self.offset + self.position) > File "/Library/Frameworks/Python.framework/Versions/7.2/lib/python2.7/tarfile.py", > line 554, in seek > raise StreamError("seeking backwards is not allowed") > tarfile.StreamError: seeking backwards is not allowed > > The key is the "mode='r*|" which I understood to be specifically for > reading blocks from a stream without seeking that would cause > problems. When discussing "filemode|[compression]" modes, the docs say: However, such a TarFile object is limited in that it does not allow to be accessed randomly I'm not a tarfile expert, but extracting a single file sounds like random access to me. If it was the first file in the archive (or there was only one file) it probably wouldn't count as random access. Petri From solipsis at pitrou.net Thu Aug 23 13:34:19 2012 From: solipsis at pitrou.net (Antoine Pitrou) Date: Thu, 23 Aug 2012 13:34:19 +0200 Subject: [Python-Dev] [Infrastructure] Snakebite build slaves and developer SSH/GPG public keys References: <20120822222816.GF42732@snakebite.org> <5035D90C.5050903@v.loewis.de> <5035DA2C.4030308@python.org> Message-ID: <20120823133419.4b205bd6@pitrou.net> On Thu, 23 Aug 2012 09:22:20 +0200 Georg Brandl wrote: > On 23.08.2012 09:17, "Martin v. L?wis" wrote: > > On 23.08.2012 01:03, Nick Coghlan wrote: > >> currently, there's a security hole > >> in the fact that requests to change our registered ssh key for access > >> are not themselves authenticated electronically > > > > Indeed, we should start requesting birth certificates, else someone > > claiming to be Reinhold Birkenfeld may get commit access :-) > > Of course, even then it might be a political problem for us if Barack Obama > wants to join as a core contributor if he happens to be voted out of office > and gets bored ;) Will he have a German accent? Regads Antoine. -- Software development and contracting: http://pro.pitrou.net From rdmurray at bitdance.com Thu Aug 23 14:40:21 2012 From: rdmurray at bitdance.com (R. David Murray) Date: Thu, 23 Aug 2012 08:40:21 -0400 Subject: [Python-Dev] bug in tarfile module? In-Reply-To: <20120823092526.GA13939@chang> References: <5035D1AC.2010001@simplistix.co.uk> <20120823092526.GA13939@chang> Message-ID: <20120823124022.471ED2500FA@webabinitio.net> On Thu, 23 Aug 2012 12:25:26 +0300, Petri Lehtinen wrote: > Chris Withers wrote: > > Hi All, > > > > This feels like a bug, but just wanted to check here before filing a > > report if I've missed something: > > > > buzzkill$ python2.7 > > Enthought Python Distribution -- www.enthought.com > > Version: 7.2-2 (32-bit) > > > > Python 2.7.2 |EPD 7.2-2 (32-bit)| (default, Sep 7 2011, 09:16:50) > > [GCC 4.0.1 (Apple Inc. build 5493)] on darwin > > Type "packages", "demo" or "enthought" for more information. 
> > >>> import tarfile > > >>> source = open('/src/Python-2.6.7.tgz', 'rb') > > >>> tar = tarfile.open(fileobj=source, mode='r|*') > > >>> member = tar.extractfile('Python-2.6.7/Lib/genericpath.py') > > >>> data = member.read() > > Traceback (most recent call last): > > File "", line 1, in > > File "/Library/Frameworks/Python.framework/Versions/7.2/lib/python2.7/tarfile.py", > > line 815, in read > > buf += self.fileobj.read() > > File "/Library/Frameworks/Python.framework/Versions/7.2/lib/python2.7/tarfile.py", > > line 735, in read > > return self.readnormal(size) > > File "/Library/Frameworks/Python.framework/Versions/7.2/lib/python2.7/tarfile.py", > > line 742, in readnormal > > self.fileobj.seek(self.offset + self.position) > > File "/Library/Frameworks/Python.framework/Versions/7.2/lib/python2.7/tarfile.py", > > line 554, in seek > > raise StreamError("seeking backwards is not allowed") > > tarfile.StreamError: seeking backwards is not allowed > > > > The key is the "mode='r*|" which I understood to be specifically for > > reading blocks from a stream without seeking that would cause > > problems. > > When discussing "filemode|[compression]" modes, the docs say: > > However, such a TarFile object is limited in that it does not > allow to be accessed randomly > > I'm not a tarfile expert, but extracting a single file sounds like > random access to me. If it was the first file in the archive (or there > was only one file) it probably wouldn't count as random access. There is an open doc bug for this: http://bugs.python.org/issue10436 --David From jdhardy at gmail.com Thu Aug 23 20:03:51 2012 From: jdhardy at gmail.com (Jeff Hardy) Date: Thu, 23 Aug 2012 11:03:51 -0700 Subject: [Python-Dev] [compatibility-sig] do all VMs implement the ast module? (was: Re: AST optimizer implemented in Python) In-Reply-To: References: Message-ID: On Mon, Aug 13, 2012 at 12:06 PM, Brett Cannon wrote: > Time to ask the other VMs what they are currently doing (the ast module came > into existence in Python 2.6 so all the VMs should be answer the question > since Jython is in alpha for 2.7 compatibility). IronPython has an _ast implementation that matches 2.7 as close as reasonably possible. - Jeff From trent at snakebite.org Fri Aug 24 03:24:05 2012 From: trent at snakebite.org (Trent Nelson) Date: Thu, 23 Aug 2012 21:24:05 -0400 Subject: [Python-Dev] Snakebite build slaves and developer SSH/GPG public keys In-Reply-To: References: <20120822222816.GF42732@snakebite.org> Message-ID: <20120824012402.GA93736@snakebite.org> On Thu, Aug 23, 2012 at 02:09:14AM -0700, Giampaolo Rodol? wrote: > > For committers on other Python projects like Buildbot, Django and > > Twisted that may be reading this -- yes, the plan is to give you > > guys Snakebite access/slaves down the track too. I'll start looking > > into that after I've finished setting up the remaining slaves for > > Python. (Setting up a keys repo will definitely help (doesn't have > > to be hg -- feel free to use svn/git/whatever, just try and follow > > the same layout).) > > This is so great! > I've been looking forward to this for a long time and kept visiting > the site every now and then to see if there was any progress. > I'd surely use this for psutil if you'll let me. Hey, psutil, I use that! :-) Awesome candidate for Snakebite, too, given the unavoidable platform-specific C code behind the scenes. I'll take it up with you offline. 
> Also, at some point I would suggest to introduce the possibility to > donate some money in order to help supporting what I think must be a > pretty complex infrastructure requiring a lot of resources, both in > terms of hardware and time/labor. You have no idea :-/ Regards, Trent. From trent at snakebite.org Fri Aug 24 03:36:28 2012 From: trent at snakebite.org (Trent Nelson) Date: Thu, 23 Aug 2012 21:36:28 -0400 Subject: [Python-Dev] Snakebite build slaves and developer SSH/GPG public keys In-Reply-To: <20120822222816.GF42732@snakebite.org> References: <20120822222816.GF42732@snakebite.org> Message-ID: <20120824013628.GB93736@snakebite.org> On Wed, Aug 22, 2012 at 03:28:16PM -0700, Trent Nelson wrote: > Hi folks, > > I've set up a bunch of Snakebite build slaves over the past week. > One of the original goals was to provide Python committers with > full access to the slaves, which I'm still keen on providing. > > What's a nice simple way to achieve that in the interim? Quick update: Martin's hooked me up with everything I need for now. I'll send out another e-mail once I've set up the necessary glue on my end. Trent. From stefan at jarn.com Fri Aug 24 11:19:33 2012 From: stefan at jarn.com (=?iso-8859-1?Q?=22Stefan_H=2E_Holek_=B7_Jarn=22?=) Date: Fri, 24 Aug 2012 11:19:33 +0200 Subject: [Python-Dev] Why no venv in existing directory? In-Reply-To: References: <26A4F829-8091-4CA4-A4F5-A18B5813C1CC@epy.co.at> <50083843.2040105@oddbird.net> Message-ID: FYI, I have created a tracker issue for this: http://bugs.python.org/issue15776 Stefan On 23.07.2012, at 09:09, Stefan H. Holek wrote: > The feature certainly is on *my* wish-list but I might be alone here. ;-) -- Stefan H. Holek www.jarn.com/stefan From andrew.svetlov at gmail.com Fri Aug 24 13:30:15 2012 From: andrew.svetlov at gmail.com (Andrew Svetlov) Date: Fri, 24 Aug 2012 14:30:15 +0300 Subject: [Python-Dev] Why no venv in existing directory? In-Reply-To: References: <26A4F829-8091-4CA4-A4F5-A18B5813C1CC@epy.co.at> <50083843.2040105@oddbird.net> Message-ID: Looks like you can use for that $ pyvenv . --upgrade On Fri, Aug 24, 2012 at 12:19 PM, "Stefan H. Holek ? Jarn" wrote: > FYI, I have created a tracker issue for this: http://bugs.python.org/issue15776 > > Stefan > > > On 23.07.2012, at 09:09, Stefan H. Holek wrote: > >> The feature certainly is on *my* wish-list but I might be alone here. ;-) > > -- > Stefan H. Holek > www.jarn.com/stefan > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/andrew.svetlov%40gmail.com -- Thanks, Andrew Svetlov From ctb at msu.edu Fri Aug 24 16:27:09 2012 From: ctb at msu.edu (C. Titus Brown) Date: Fri, 24 Aug 2012 07:27:09 -0700 Subject: [Python-Dev] Snakebite build slaves and developer SSH/GPG public keys In-Reply-To: References: <20120822222816.GF42732@snakebite.org> Message-ID: <20120824142709.GS9439@idyll.org> On Thu, Aug 23, 2012 at 11:09:14AM +0200, Giampaolo Rodol? wrote: > > For committers on other Python projects like Buildbot, Django and > > Twisted that may be reading this -- yes, the plan is to give you > > guys Snakebite access/slaves down the track too. I'll start looking > > into that after I've finished setting up the remaining slaves for > > Python. (Setting up a keys repo will definitely help (doesn't have > > to be hg -- feel free to use svn/git/whatever, just try and follow > > the same layout).) 
> > This is so great! > I've been looking forward to this for a long time and kept visiting > the site every now and then to see if there was any progress. > I'd surely use this for psutil if you'll let me. > Also, at some point I would suggest to introduce the possibility to > donate some money in order to help supporting what I think must be a > pretty complex infrastructure requiring a lot of resources, both in > terms of hardware and time/labor. Don't forget the heavy Xanax requirements on the part of the technical owner of the space. Dunno if Trent will put up any pictures but I'm dreading the day that building maintenance gives me a call asking what the heck I've done to the windows, drains, and power. --titus From trent at snakebite.org Fri Aug 24 16:39:54 2012 From: trent at snakebite.org (Trent Nelson) Date: Fri, 24 Aug 2012 10:39:54 -0400 Subject: [Python-Dev] Snakebite build slaves and developer SSH/GPG public keys In-Reply-To: <20120824142709.GS9439@idyll.org> References: <20120822222816.GF42732@snakebite.org> <20120824142709.GS9439@idyll.org> Message-ID: <20120824143954.GG93736@snakebite.org> On Fri, Aug 24, 2012 at 07:27:09AM -0700, C. Titus Brown wrote: > On Thu, Aug 23, 2012 at 11:09:14AM +0200, Giampaolo Rodol? wrote: > > > For committers on other Python projects like Buildbot, Django and > > > Twisted that may be reading this -- yes, the plan is to give you > > > guys Snakebite access/slaves down the track too. I'll start looking > > > into that after I've finished setting up the remaining slaves for > > > Python. (Setting up a keys repo will definitely help (doesn't have > > > to be hg -- feel free to use svn/git/whatever, just try and follow > > > the same layout).) > > > > This is so great! > > I've been looking forward to this for a long time and kept visiting > > the site every now and then to see if there was any progress. > > I'd surely use this for psutil if you'll let me. > > Also, at some point I would suggest to introduce the possibility to > > donate some money in order to help supporting what I think must be a > > pretty complex infrastructure requiring a lot of resources, both in > > terms of hardware and time/labor. > > Don't forget the heavy Xanax requirements on the part of the technical owner of > the space. Dunno if Trent will put up any pictures but I'm dreading the day > that building maintenance gives me a call asking what the heck I've done > to the windows, drains, and power. What's an industrial exhaust fan or two bolted to a window frame between friends eh. From trent at snakebite.org Fri Aug 24 16:48:59 2012 From: trent at snakebite.org (Trent Nelson) Date: Fri, 24 Aug 2012 10:48:59 -0400 Subject: [Python-Dev] Snakebite build slaves and developer SSH/GPG public keys In-Reply-To: <20120824143954.GG93736@snakebite.org> References: <20120822222816.GF42732@snakebite.org> <20120824142709.GS9439@idyll.org> <20120824143954.GG93736@snakebite.org> Message-ID: <20120824144858.GH93736@snakebite.org> On Fri, Aug 24, 2012 at 07:39:54AM -0700, Trent Nelson wrote: > On Fri, Aug 24, 2012 at 07:27:09AM -0700, C. Titus Brown wrote: > > Don't forget the heavy Xanax requirements on the part of the technical owner of > > the space. Dunno if Trent will put up any pictures but I'm dreading the day > > that building maintenance gives me a call asking what the heck I've done > > to the windows, drains, and power. > > What's an industrial exhaust fan or two bolted to a window frame > between friends eh. 
http://i.imgur.com/BbKn9.jpg ....that wouldn't be very effective if I left the window panes in now, would it? ;-) From brett at python.org Fri Aug 24 18:04:07 2012 From: brett at python.org (Brett Cannon) Date: Fri, 24 Aug 2012 12:04:07 -0400 Subject: [Python-Dev] Snakebite build slaves and developer SSH/GPG public keys In-Reply-To: <20120824144858.GH93736@snakebite.org> References: <20120822222816.GF42732@snakebite.org> <20120824142709.GS9439@idyll.org> <20120824143954.GG93736@snakebite.org> <20120824144858.GH93736@snakebite.org> Message-ID: On Fri, Aug 24, 2012 at 10:48 AM, Trent Nelson wrote: > On Fri, Aug 24, 2012 at 07:39:54AM -0700, Trent Nelson wrote: > > On Fri, Aug 24, 2012 at 07:27:09AM -0700, C. Titus Brown wrote: > > > Don't forget the heavy Xanax requirements on the part of the technical > owner of > > > the space. Dunno if Trent will put up any pictures but I'm dreading > the day > > > that building maintenance gives me a call asking what the heck I've > done > > > to the windows, drains, and power. > > > > What's an industrial exhaust fan or two bolted to a window frame > > between friends eh. > > http://i.imgur.com/BbKn9.jpg > > ....that wouldn't be very effective if I left the window panes in > now, would it? ;-) The reflective lining makes it look like some industrial-level cat lady lives in that office space compared to the rest of the windows. -------------- next part -------------- An HTML attachment was scrubbed... URL: From status at bugs.python.org Fri Aug 24 18:07:15 2012 From: status at bugs.python.org (Python tracker) Date: Fri, 24 Aug 2012 18:07:15 +0200 (CEST) Subject: [Python-Dev] Summary of Python tracker Issues Message-ID: <20120824160715.795EE1CA7F@psf.upfronthosting.co.za> ACTIVITY SUMMARY (2012-08-17 - 2012-08-24) Python tracker at http://bugs.python.org/ To view or respond to any of the issues listed below, click on the issue. Do NOT respond to this message. 
Issues counts and deltas: open 3640 ( +0) closed 23914 (+58) total 27554 (+58) Open issues with patches: 1603 Issues opened (36) ================== #15316: runpy swallows ImportError information with relative imports http://bugs.python.org/issue15316 reopened by cjerdonek #15720: move __import__() out of the default lookup chain http://bugs.python.org/issue15720 opened by eric.snow #15721: PEP 3121, 384 Refactoring applied to tkinter module http://bugs.python.org/issue15721 opened by Robin.Schreiber #15722: PEP 3121, 384 Refactoring applied to decimal module http://bugs.python.org/issue15722 opened by Robin.Schreiber #15723: Python breaks OS' append guarantee on file writes http://bugs.python.org/issue15723 opened by bsdphk #15724: Add "versionchanged" to memoryview docs http://bugs.python.org/issue15724 opened by skrah #15725: PyType_FromSpecWithBases bugfix http://bugs.python.org/issue15725 opened by Robin.Schreiber #15727: PyType_FromSpecWithBases tp_new bugfix http://bugs.python.org/issue15727 opened by Robin.Schreiber #15729: PyStructSequence_NewType enhancement http://bugs.python.org/issue15729 opened by Robin.Schreiber #15730: Silence unused value warnings under Mac OS X 10.8/clang http://bugs.python.org/issue15730 opened by Benno.Rice #15731: Mechanism for inheriting docstrings and signatures http://bugs.python.org/issue15731 opened by ncoghlan #15733: PEP 3121, 384 Refactoring applied to winapi module http://bugs.python.org/issue15733 opened by Robin.Schreiber #15734: PEP 3121, 384 Refactoring applied to spwd module http://bugs.python.org/issue15734 opened by Robin.Schreiber #15735: PEP 3121, 384 Refactoring applied to ossaudio module http://bugs.python.org/issue15735 opened by Robin.Schreiber #15743: test_urllib2/test_urllib use deprecated urllib.Request methods http://bugs.python.org/issue15743 opened by Jeff.Knupp #15744: missing tests for {RawIO,BufferedIO,TextIO}.writelines http://bugs.python.org/issue15744 opened by pitrou #15745: Numerous utime ns tests fail on FreeBSD w/ ZFS http://bugs.python.org/issue15745 opened by trent #15746: test_winsound bombing out on 2003 buildslave http://bugs.python.org/issue15746 opened by trent #15748: Various symlink test failures in test_shutil on FreeBSD http://bugs.python.org/issue15748 opened by trent #15749: cgitb prints html for text when display disabled. 
http://bugs.python.org/issue15749 opened by aliles #15750: test_localtime_daylight_false_dst_true raises OverflowError: m http://bugs.python.org/issue15750 opened by trent #15751: Support subinterpreters in the GIL state API http://bugs.python.org/issue15751 opened by ncoghlan #15753: No-argument super in method with variable arguments raises Sys http://bugs.python.org/issue15753 opened by james.sanders #15756: subprocess.poll() does not handle errno.ECHILD "No child proce http://bugs.python.org/issue15756 opened by twhitema #15757: ./configure --with-pydebug on FreeBSD results in -O2 -pipe eve http://bugs.python.org/issue15757 opened by trent #15758: FileIO.readall() has worst case O(n^2) complexity http://bugs.python.org/issue15758 opened by sbt #15759: "make suspicious" doesn't display instructions in case of fail http://bugs.python.org/issue15759 opened by ezio.melotti #15761: Setting PYTHONEXECUTABLE can cause segfaults on OS X http://bugs.python.org/issue15761 opened by ned.deily #15765: test_getcwd_long_pathnames (in test_posix) kills NetBSD http://bugs.python.org/issue15765 opened by trent #15766: _imp.load_dynamic() does crash with non-ASCII path and uses th http://bugs.python.org/issue15766 opened by haypo #15767: add ModuleNotFoundError http://bugs.python.org/issue15767 opened by eric.snow #15769: urllib.request.urlopen with cafile or capath set overrides any http://bugs.python.org/issue15769 opened by caligatio #15772: Unresolved symbols in Windows 64-bit python http://bugs.python.org/issue15772 opened by spatz123 #15775: Add StopParser() to expat http://bugs.python.org/issue15775 opened by nemeskeyd #15776: Allow pyvenv to work in existing directory http://bugs.python.org/issue15776 opened by stefanholek #15777: test_capi refleak http://bugs.python.org/issue15777 opened by rosslagerwall Most recent 15 issues with no replies (15) ========================================== #15775: Add StopParser() to expat http://bugs.python.org/issue15775 #15772: Unresolved symbols in Windows 64-bit python http://bugs.python.org/issue15772 #15767: add ModuleNotFoundError http://bugs.python.org/issue15767 #15759: "make suspicious" doesn't display instructions in case of fail http://bugs.python.org/issue15759 #15749: cgitb prints html for text when display disabled. 
http://bugs.python.org/issue15749 #15744: missing tests for {RawIO,BufferedIO,TextIO}.writelines http://bugs.python.org/issue15744 #15735: PEP 3121, 384 Refactoring applied to ossaudio module http://bugs.python.org/issue15735 #15734: PEP 3121, 384 Refactoring applied to spwd module http://bugs.python.org/issue15734 #15729: PyStructSequence_NewType enhancement http://bugs.python.org/issue15729 #15727: PyType_FromSpecWithBases tp_new bugfix http://bugs.python.org/issue15727 #15725: PyType_FromSpecWithBases bugfix http://bugs.python.org/issue15725 #15714: PEP 3121, 384 Refactoring applied to grp module http://bugs.python.org/issue15714 #15713: PEP 3121, 384 Refactoring applied to zipimport module http://bugs.python.org/issue15713 #15712: PEP 3121, 384 Refactoring applied to unicodedata module http://bugs.python.org/issue15712 #15711: PEP 3121, 384 Refactoring applied to time module http://bugs.python.org/issue15711 Most recent 15 issues waiting for review (15) ============================================= #15776: Allow pyvenv to work in existing directory http://bugs.python.org/issue15776 #15769: urllib.request.urlopen with cafile or capath set overrides any http://bugs.python.org/issue15769 #15766: _imp.load_dynamic() does crash with non-ASCII path and uses th http://bugs.python.org/issue15766 #15765: test_getcwd_long_pathnames (in test_posix) kills NetBSD http://bugs.python.org/issue15765 #15759: "make suspicious" doesn't display instructions in case of fail http://bugs.python.org/issue15759 #15758: FileIO.readall() has worst case O(n^2) complexity http://bugs.python.org/issue15758 #15756: subprocess.poll() does not handle errno.ECHILD "No child proce http://bugs.python.org/issue15756 #15753: No-argument super in method with variable arguments raises Sys http://bugs.python.org/issue15753 #15749: cgitb prints html for text when display disabled. 
http://bugs.python.org/issue15749 #15743: test_urllib2/test_urllib use deprecated urllib.Request methods http://bugs.python.org/issue15743 #15735: PEP 3121, 384 Refactoring applied to ossaudio module http://bugs.python.org/issue15735 #15734: PEP 3121, 384 Refactoring applied to spwd module http://bugs.python.org/issue15734 #15733: PEP 3121, 384 Refactoring applied to winapi module http://bugs.python.org/issue15733 #15731: Mechanism for inheriting docstrings and signatures http://bugs.python.org/issue15731 #15730: Silence unused value warnings under Mac OS X 10.8/clang http://bugs.python.org/issue15730 Top 10 most discussed issues (10) ================================= #15751: Support subinterpreters in the GIL state API http://bugs.python.org/issue15751 23 msgs #15758: FileIO.readall() has worst case O(n^2) complexity http://bugs.python.org/issue15758 16 msgs #15316: runpy swallows ImportError information with relative imports http://bugs.python.org/issue15316 9 msgs #15723: Python breaks OS' append guarantee on file writes http://bugs.python.org/issue15723 9 msgs #15748: Various symlink test failures in test_shutil on FreeBSD http://bugs.python.org/issue15748 9 msgs #15776: Allow pyvenv to work in existing directory http://bugs.python.org/issue15776 9 msgs #13370: test_ctypes fails when building python with clang http://bugs.python.org/issue13370 8 msgs #15642: Integrate pickle protocol version 4 GSoC work by Stefan Mihail http://bugs.python.org/issue15642 8 msgs #15745: Numerous utime ns tests fail on FreeBSD w/ ZFS http://bugs.python.org/issue15745 8 msgs #14468: Update cloning guidelines in devguide http://bugs.python.org/issue14468 7 msgs Issues closed (55) ================== #1574: Touchpad 2 Finger scroll does not work in IDLE on Mac (But scr http://bugs.python.org/issue1574 closed by ned.deily #4966: Improving Lib Doc Sequence Types Section http://bugs.python.org/issue4966 closed by python-dev #6749: Support for encrypted zipfiles when interpreting zipfile as sc http://bugs.python.org/issue6749 closed by ncoghlan #12415: Missing: How to checkout the Doc sources http://bugs.python.org/issue12415 closed by sandro.tosi #13579: string.Formatter doesn't understand the a conversion specifier http://bugs.python.org/issue13579 closed by r.david.murray #13799: Base 16 should be hexadecimal in Unicode HOWTO http://bugs.python.org/issue13799 closed by terry.reedy #14292: OS X installer build script doesn't set $CXX, so it ends up as http://bugs.python.org/issue14292 closed by ned.deily #14563: Segmentation fault on ctypes.Structure subclass with byte stri http://bugs.python.org/issue14563 closed by aliles #14814: Implement PEP 3144 (the ipaddress module) http://bugs.python.org/issue14814 closed by python-dev #14846: Change in error when sys.path contains a nonexistent folder (i http://bugs.python.org/issue14846 closed by python-dev #14954: weakref doc clarification http://bugs.python.org/issue14954 closed by pitrou #15131: Document py/pyw launchers http://bugs.python.org/issue15131 closed by brian.curtin #15199: Default mimetype for javascript should be application/javascri http://bugs.python.org/issue15199 closed by petri.lehtinen #15249: email.generator.BytesGenerator doesn't mangle "From " lines wh http://bugs.python.org/issue15249 closed by r.david.murray #15355: generator docs should mention already-executing exception http://bugs.python.org/issue15355 closed by r.david.murray #15477: test_cmath failures on OS X 10.8 http://bugs.python.org/issue15477 closed by mark.dickinson #15511: 
_decimal does not build in PGUpdate mode http://bugs.python.org/issue15511 closed by loewis #15570: email.header.decode_header parses differently http://bugs.python.org/issue15570 closed by r.david.murray #15595: subprocess.Popen(universal_newlines=True) does not work for ce http://bugs.python.org/issue15595 closed by asvetlov #15615: More tests for JSON decoder to test Exceptions http://bugs.python.org/issue15615 closed by pitrou #15632: regrtest.py: spurious leaks with -R option http://bugs.python.org/issue15632 closed by python-dev #15636: base64.decodebytes is only available in Python3.1+ http://bugs.python.org/issue15636 closed by eric.araujo #15637: Segfault reading null VMA (works fine in python 2.x) http://bugs.python.org/issue15637 closed by r.david.murray #15640: Document importlib.abc.Finder as deprecated http://bugs.python.org/issue15640 closed by brett.cannon #15645: 2to3 Grammar pickles not created when upgrading to 3.3.0b2 http://bugs.python.org/issue15645 closed by ned.deily #15660: Clarify 0 prefix for width specifier in str.format doc, http://bugs.python.org/issue15660 closed by terry.reedy #15678: IDLE menu customization is broken from OS X command lines http://bugs.python.org/issue15678 closed by ned.deily #15694: link to "file object" glossary entry in open() and io docs http://bugs.python.org/issue15694 closed by r.david.murray #15715: __import__ now raises with non-existing items in fromlist in 3 http://bugs.python.org/issue15715 closed by brett.cannon #15717: Mail System Error - Returned Mail http://bugs.python.org/issue15717 closed by eric.araujo #15719: Sort dict items in urlencode() http://bugs.python.org/issue15719 closed by gvanrossum #15726: PyState_FindModule false length-comparison fix http://bugs.python.org/issue15726 closed by pitrou #15728: Leak in PyUnicode_AsWideCharString() http://bugs.python.org/issue15728 closed by skrah #15732: Crash (constructed) in _PySequence_BytesToCharpArray() http://bugs.python.org/issue15732 closed by skrah #15736: Crash #2 (constructed overflow) in _PySequence_BytesToCharpAr http://bugs.python.org/issue15736 closed by skrah #15737: NULL dereference in zipimport.c http://bugs.python.org/issue15737 closed by python-dev #15738: Crash (constructed) in subprocess_fork_exec() http://bugs.python.org/issue15738 closed by skrah #15739: Python crashes with "Bus error: 10" http://bugs.python.org/issue15739 closed by ned.deily #15740: test_ssl failure when cacert.org CA cert in system keychain on http://bugs.python.org/issue15740 closed by ronaldoussoren #15741: NULL dereference in builtin_compile() http://bugs.python.org/issue15741 closed by skrah #15742: SQLite3 documentation changes http://bugs.python.org/issue15742 closed by r.david.murray #15747: Various chflags tests failing on FreeBSD/ZFS http://bugs.python.org/issue15747 closed by trent #15752: change test_json's use of deprecated unittest function http://bugs.python.org/issue15752 closed by ezio.melotti #15754: Traceback message not returning SQLite check constraint detail http://bugs.python.org/issue15754 closed by jftuga #15760: make install should generate grammar file http://bugs.python.org/issue15760 closed by lregebro #15762: Windows 8 certification http://bugs.python.org/issue15762 closed by loewis #15763: email non-ASCII characters in TO or FROM field doesn't work http://bugs.python.org/issue15763 closed by r.david.murray #15764: Sqlite3 performance http://bugs.python.org/issue15764 closed by loewis #15768: re.sub() with re.MULTILINE not replacing all occurrences 
http://bugs.python.org/issue15768 closed by eacousineau #15770: _testbuffer.get_contiguous() doesn't check input arguments http://bugs.python.org/issue15770 closed by skrah #15771: Tunple Bug? http://bugs.python.org/issue15771 closed by ezio.melotti #15773: `is' operator returns False on classmethods http://bugs.python.org/issue15773 closed by ncoghlan #15774: String method title() produces incorrect resutls http://bugs.python.org/issue15774 closed by r.david.murray #1578643: various datetime methods fail in restricted mode http://bugs.python.org/issue1578643 closed by belopolsky #1228112: code.py use sys.excepthook to display exceptions http://bugs.python.org/issue1228112 closed by tebeka From brett at python.org Fri Aug 24 23:43:39 2012 From: brett at python.org (Brett Cannon) Date: Fri, 24 Aug 2012 17:43:39 -0400 Subject: [Python-Dev] Summary of Python tracker Issues In-Reply-To: <20120824160715.795EE1CA7F@psf.upfronthosting.co.za> References: <20120824160715.795EE1CA7F@psf.upfronthosting.co.za> Message-ID: On Fri, Aug 24, 2012 at 12:07 PM, Python tracker wrote: > > ACTIVITY SUMMARY (2012-08-17 - 2012-08-24) > Python tracker at http://bugs.python.org/ > > To view or respond to any of the issues listed below, click on the issue. > Do NOT respond to this message. > > Issues counts and deltas: > open 3640 ( +0) > closed 23914 (+58) > total 27554 (+58) > Have we ever had a flat month of open bugs?!? Pretty great regardless. -Brett > > Open issues with patches: 1603 > > > Issues opened (36) > ================== > > #15316: runpy swallows ImportError information with relative imports > http://bugs.python.org/issue15316 reopened by cjerdonek > > #15720: move __import__() out of the default lookup chain > http://bugs.python.org/issue15720 opened by eric.snow > > #15721: PEP 3121, 384 Refactoring applied to tkinter module > http://bugs.python.org/issue15721 opened by Robin.Schreiber > > #15722: PEP 3121, 384 Refactoring applied to decimal module > http://bugs.python.org/issue15722 opened by Robin.Schreiber > > #15723: Python breaks OS' append guarantee on file writes > http://bugs.python.org/issue15723 opened by bsdphk > > #15724: Add "versionchanged" to memoryview docs > http://bugs.python.org/issue15724 opened by skrah > > #15725: PyType_FromSpecWithBases bugfix > http://bugs.python.org/issue15725 opened by Robin.Schreiber > > #15727: PyType_FromSpecWithBases tp_new bugfix > http://bugs.python.org/issue15727 opened by Robin.Schreiber > > #15729: PyStructSequence_NewType enhancement > http://bugs.python.org/issue15729 opened by Robin.Schreiber > > #15730: Silence unused value warnings under Mac OS X 10.8/clang > http://bugs.python.org/issue15730 opened by Benno.Rice > > #15731: Mechanism for inheriting docstrings and signatures > http://bugs.python.org/issue15731 opened by ncoghlan > > #15733: PEP 3121, 384 Refactoring applied to winapi module > http://bugs.python.org/issue15733 opened by Robin.Schreiber > > #15734: PEP 3121, 384 Refactoring applied to spwd module > http://bugs.python.org/issue15734 opened by Robin.Schreiber > > #15735: PEP 3121, 384 Refactoring applied to ossaudio module > http://bugs.python.org/issue15735 opened by Robin.Schreiber > > #15743: test_urllib2/test_urllib use deprecated urllib.Request methods > http://bugs.python.org/issue15743 opened by Jeff.Knupp > > #15744: missing tests for {RawIO,BufferedIO,TextIO}.writelines > http://bugs.python.org/issue15744 opened by pitrou > > #15745: Numerous utime ns tests fail on FreeBSD w/ ZFS > http://bugs.python.org/issue15745 
opened by trent > > #15746: test_winsound bombing out on 2003 buildslave > http://bugs.python.org/issue15746 opened by trent > > #15748: Various symlink test failures in test_shutil on FreeBSD > http://bugs.python.org/issue15748 opened by trent > > #15749: cgitb prints html for text when display disabled. > http://bugs.python.org/issue15749 opened by aliles > > #15750: test_localtime_daylight_false_dst_true raises OverflowError: m > http://bugs.python.org/issue15750 opened by trent > > #15751: Support subinterpreters in the GIL state API > http://bugs.python.org/issue15751 opened by ncoghlan > > #15753: No-argument super in method with variable arguments raises Sys > http://bugs.python.org/issue15753 opened by james.sanders > > #15756: subprocess.poll() does not handle errno.ECHILD "No child proce > http://bugs.python.org/issue15756 opened by twhitema > > #15757: ./configure --with-pydebug on FreeBSD results in -O2 -pipe eve > http://bugs.python.org/issue15757 opened by trent > > #15758: FileIO.readall() has worst case O(n^2) complexity > http://bugs.python.org/issue15758 opened by sbt > > #15759: "make suspicious" doesn't display instructions in case of fail > http://bugs.python.org/issue15759 opened by ezio.melotti > > #15761: Setting PYTHONEXECUTABLE can cause segfaults on OS X > http://bugs.python.org/issue15761 opened by ned.deily > > #15765: test_getcwd_long_pathnames (in test_posix) kills NetBSD > http://bugs.python.org/issue15765 opened by trent > > #15766: _imp.load_dynamic() does crash with non-ASCII path and uses th > http://bugs.python.org/issue15766 opened by haypo > > #15767: add ModuleNotFoundError > http://bugs.python.org/issue15767 opened by eric.snow > > #15769: urllib.request.urlopen with cafile or capath set overrides any > http://bugs.python.org/issue15769 opened by caligatio > > #15772: Unresolved symbols in Windows 64-bit python > http://bugs.python.org/issue15772 opened by spatz123 > > #15775: Add StopParser() to expat > http://bugs.python.org/issue15775 opened by nemeskeyd > > #15776: Allow pyvenv to work in existing directory > http://bugs.python.org/issue15776 opened by stefanholek > > #15777: test_capi refleak > http://bugs.python.org/issue15777 opened by rosslagerwall > > > > Most recent 15 issues with no replies (15) > ========================================== > > #15775: Add StopParser() to expat > http://bugs.python.org/issue15775 > > #15772: Unresolved symbols in Windows 64-bit python > http://bugs.python.org/issue15772 > > #15767: add ModuleNotFoundError > http://bugs.python.org/issue15767 > > #15759: "make suspicious" doesn't display instructions in case of fail > http://bugs.python.org/issue15759 > > #15749: cgitb prints html for text when display disabled. 
> http://bugs.python.org/issue15749 > > #15744: missing tests for {RawIO,BufferedIO,TextIO}.writelines > http://bugs.python.org/issue15744 > > #15735: PEP 3121, 384 Refactoring applied to ossaudio module > http://bugs.python.org/issue15735 > > #15734: PEP 3121, 384 Refactoring applied to spwd module > http://bugs.python.org/issue15734 > > #15729: PyStructSequence_NewType enhancement > http://bugs.python.org/issue15729 > > #15727: PyType_FromSpecWithBases tp_new bugfix > http://bugs.python.org/issue15727 > > #15725: PyType_FromSpecWithBases bugfix > http://bugs.python.org/issue15725 > > #15714: PEP 3121, 384 Refactoring applied to grp module > http://bugs.python.org/issue15714 > > #15713: PEP 3121, 384 Refactoring applied to zipimport module > http://bugs.python.org/issue15713 > > #15712: PEP 3121, 384 Refactoring applied to unicodedata module > http://bugs.python.org/issue15712 > > #15711: PEP 3121, 384 Refactoring applied to time module > http://bugs.python.org/issue15711 > > > > Most recent 15 issues waiting for review (15) > ============================================= > > #15776: Allow pyvenv to work in existing directory > http://bugs.python.org/issue15776 > > #15769: urllib.request.urlopen with cafile or capath set overrides any > http://bugs.python.org/issue15769 > > #15766: _imp.load_dynamic() does crash with non-ASCII path and uses th > http://bugs.python.org/issue15766 > > #15765: test_getcwd_long_pathnames (in test_posix) kills NetBSD > http://bugs.python.org/issue15765 > > #15759: "make suspicious" doesn't display instructions in case of fail > http://bugs.python.org/issue15759 > > #15758: FileIO.readall() has worst case O(n^2) complexity > http://bugs.python.org/issue15758 > > #15756: subprocess.poll() does not handle errno.ECHILD "No child proce > http://bugs.python.org/issue15756 > > #15753: No-argument super in method with variable arguments raises Sys > http://bugs.python.org/issue15753 > > #15749: cgitb prints html for text when display disabled. 
> http://bugs.python.org/issue15749 > > #15743: test_urllib2/test_urllib use deprecated urllib.Request methods > http://bugs.python.org/issue15743 > > #15735: PEP 3121, 384 Refactoring applied to ossaudio module > http://bugs.python.org/issue15735 > > #15734: PEP 3121, 384 Refactoring applied to spwd module > http://bugs.python.org/issue15734 > > #15733: PEP 3121, 384 Refactoring applied to winapi module > http://bugs.python.org/issue15733 > > #15731: Mechanism for inheriting docstrings and signatures > http://bugs.python.org/issue15731 > > #15730: Silence unused value warnings under Mac OS X 10.8/clang > http://bugs.python.org/issue15730 > > > > Top 10 most discussed issues (10) > ================================= > > #15751: Support subinterpreters in the GIL state API > http://bugs.python.org/issue15751 23 msgs > > #15758: FileIO.readall() has worst case O(n^2) complexity > http://bugs.python.org/issue15758 16 msgs > > #15316: runpy swallows ImportError information with relative imports > http://bugs.python.org/issue15316 9 msgs > > #15723: Python breaks OS' append guarantee on file writes > http://bugs.python.org/issue15723 9 msgs > > #15748: Various symlink test failures in test_shutil on FreeBSD > http://bugs.python.org/issue15748 9 msgs > > #15776: Allow pyvenv to work in existing directory > http://bugs.python.org/issue15776 9 msgs > > #13370: test_ctypes fails when building python with clang > http://bugs.python.org/issue13370 8 msgs > > #15642: Integrate pickle protocol version 4 GSoC work by Stefan Mihail > http://bugs.python.org/issue15642 8 msgs > > #15745: Numerous utime ns tests fail on FreeBSD w/ ZFS > http://bugs.python.org/issue15745 8 msgs > > #14468: Update cloning guidelines in devguide > http://bugs.python.org/issue14468 7 msgs > > > > Issues closed (55) > ================== > > #1574: Touchpad 2 Finger scroll does not work in IDLE on Mac (But scr > http://bugs.python.org/issue1574 closed by ned.deily > > #4966: Improving Lib Doc Sequence Types Section > http://bugs.python.org/issue4966 closed by python-dev > > #6749: Support for encrypted zipfiles when interpreting zipfile as sc > http://bugs.python.org/issue6749 closed by ncoghlan > > #12415: Missing: How to checkout the Doc sources > http://bugs.python.org/issue12415 closed by sandro.tosi > > #13579: string.Formatter doesn't understand the a conversion specifier > http://bugs.python.org/issue13579 closed by r.david.murray > > #13799: Base 16 should be hexadecimal in Unicode HOWTO > http://bugs.python.org/issue13799 closed by terry.reedy > > #14292: OS X installer build script doesn't set $CXX, so it ends up as > http://bugs.python.org/issue14292 closed by ned.deily > > #14563: Segmentation fault on ctypes.Structure subclass with byte stri > http://bugs.python.org/issue14563 closed by aliles > > #14814: Implement PEP 3144 (the ipaddress module) > http://bugs.python.org/issue14814 closed by python-dev > > #14846: Change in error when sys.path contains a nonexistent folder (i > http://bugs.python.org/issue14846 closed by python-dev > > #14954: weakref doc clarification > http://bugs.python.org/issue14954 closed by pitrou > > #15131: Document py/pyw launchers > http://bugs.python.org/issue15131 closed by brian.curtin > > #15199: Default mimetype for javascript should be application/javascri > http://bugs.python.org/issue15199 closed by petri.lehtinen > > #15249: email.generator.BytesGenerator doesn't mangle "From " lines wh > http://bugs.python.org/issue15249 closed by r.david.murray > > #15355: generator docs should 
mention already-executing exception > http://bugs.python.org/issue15355 closed by r.david.murray > > #15477: test_cmath failures on OS X 10.8 > http://bugs.python.org/issue15477 closed by mark.dickinson > > #15511: _decimal does not build in PGUpdate mode > http://bugs.python.org/issue15511 closed by loewis > > #15570: email.header.decode_header parses differently > http://bugs.python.org/issue15570 closed by r.david.murray > > #15595: subprocess.Popen(universal_newlines=True) does not work for ce > http://bugs.python.org/issue15595 closed by asvetlov > > #15615: More tests for JSON decoder to test Exceptions > http://bugs.python.org/issue15615 closed by pitrou > > #15632: regrtest.py: spurious leaks with -R option > http://bugs.python.org/issue15632 closed by python-dev > > #15636: base64.decodebytes is only available in Python3.1+ > http://bugs.python.org/issue15636 closed by eric.araujo > > #15637: Segfault reading null VMA (works fine in python 2.x) > http://bugs.python.org/issue15637 closed by r.david.murray > > #15640: Document importlib.abc.Finder as deprecated > http://bugs.python.org/issue15640 closed by brett.cannon > > #15645: 2to3 Grammar pickles not created when upgrading to 3.3.0b2 > http://bugs.python.org/issue15645 closed by ned.deily > > #15660: Clarify 0 prefix for width specifier in str.format doc, > http://bugs.python.org/issue15660 closed by terry.reedy > > #15678: IDLE menu customization is broken from OS X command lines > http://bugs.python.org/issue15678 closed by ned.deily > > #15694: link to "file object" glossary entry in open() and io docs > http://bugs.python.org/issue15694 closed by r.david.murray > > #15715: __import__ now raises with non-existing items in fromlist in 3 > http://bugs.python.org/issue15715 closed by brett.cannon > > #15717: Mail System Error - Returned Mail > http://bugs.python.org/issue15717 closed by eric.araujo > > #15719: Sort dict items in urlencode() > http://bugs.python.org/issue15719 closed by gvanrossum > > #15726: PyState_FindModule false length-comparison fix > http://bugs.python.org/issue15726 closed by pitrou > > #15728: Leak in PyUnicode_AsWideCharString() > http://bugs.python.org/issue15728 closed by skrah > > #15732: Crash (constructed) in _PySequence_BytesToCharpArray() > http://bugs.python.org/issue15732 closed by skrah > > #15736: Crash #2 (constructed overflow) in _PySequence_BytesToCharpAr > http://bugs.python.org/issue15736 closed by skrah > > #15737: NULL dereference in zipimport.c > http://bugs.python.org/issue15737 closed by python-dev > > #15738: Crash (constructed) in subprocess_fork_exec() > http://bugs.python.org/issue15738 closed by skrah > > #15739: Python crashes with "Bus error: 10" > http://bugs.python.org/issue15739 closed by ned.deily > > #15740: test_ssl failure when cacert.org CA cert in system keychain on > http://bugs.python.org/issue15740 closed by ronaldoussoren > > #15741: NULL dereference in builtin_compile() > http://bugs.python.org/issue15741 closed by skrah > > #15742: SQLite3 documentation changes > http://bugs.python.org/issue15742 closed by r.david.murray > > #15747: Various chflags tests failing on FreeBSD/ZFS > http://bugs.python.org/issue15747 closed by trent > > #15752: change test_json's use of deprecated unittest function > http://bugs.python.org/issue15752 closed by ezio.melotti > > #15754: Traceback message not returning SQLite check constraint detail > http://bugs.python.org/issue15754 closed by jftuga > > #15760: make install should generate grammar file > 
http://bugs.python.org/issue15760 closed by lregebro > > #15762: Windows 8 certification > http://bugs.python.org/issue15762 closed by loewis > > #15763: email non-ASCII characters in TO or FROM field doesn't work > http://bugs.python.org/issue15763 closed by r.david.murray > > #15764: Sqlite3 performance > http://bugs.python.org/issue15764 closed by loewis > > #15768: re.sub() with re.MULTILINE not replacing all occurrences > http://bugs.python.org/issue15768 closed by eacousineau > > #15770: _testbuffer.get_contiguous() doesn't check input arguments > http://bugs.python.org/issue15770 closed by skrah > > #15771: Tunple Bug? > http://bugs.python.org/issue15771 closed by ezio.melotti > > #15773: `is' operator returns False on classmethods > http://bugs.python.org/issue15773 closed by ncoghlan > > #15774: String method title() produces incorrect resutls > http://bugs.python.org/issue15774 closed by r.david.murray > > #1578643: various datetime methods fail in restricted mode > http://bugs.python.org/issue1578643 closed by belopolsky > > #1228112: code.py use sys.excepthook to display exceptions > http://bugs.python.org/issue1228112 closed by tebeka > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > http://mail.python.org/mailman/options/python-dev/brett%40python.org > -------------- next part -------------- An HTML attachment was scrubbed... URL: From francismb at email.de Sat Aug 25 11:29:36 2012 From: francismb at email.de (francis) Date: Sat, 25 Aug 2012 11:29:36 +0200 Subject: [Python-Dev] Summary of Python tracker Issues In-Reply-To: References: <20120824160715.795EE1CA7F@psf.upfronthosting.co.za> Message-ID: <50389B00.9010006@email.de> > > Most recent 15 issues waiting for review (15) > ============================================= > Just curious: How is a issue considered "waiting for review"? Thanks! francis From martin at v.loewis.de Sat Aug 25 11:54:41 2012 From: martin at v.loewis.de (martin at v.loewis.de) Date: Sat, 25 Aug 2012 11:54:41 +0200 Subject: [Python-Dev] Summary of Python tracker Issues In-Reply-To: <50389B00.9010006@email.de> References: <20120824160715.795EE1CA7F@psf.upfronthosting.co.za> <50389B00.9010006@email.de> Message-ID: <20120825115441.Horde.pHBzULuWis5QOKDhjKMVbhA@webmail.df.eu> Zitat von francis : >> >> Most recent 15 issues waiting for review (15) >> ============================================= >> > Just curious: How is a issue considered "waiting for review"? Issues that have the "patch" or "needs review" keyword or are in the "patch review" stage. Regards, Martin From francismb at email.de Sat Aug 25 12:31:11 2012 From: francismb at email.de (francis) Date: Sat, 25 Aug 2012 12:31:11 +0200 Subject: [Python-Dev] Summary of Python tracker Issues In-Reply-To: <20120825115441.Horde.pHBzULuWis5QOKDhjKMVbhA@webmail.df.eu> References: <20120824160715.795EE1CA7F@psf.upfronthosting.co.za> <50389B00.9010006@email.de> <20120825115441.Horde.pHBzULuWis5QOKDhjKMVbhA@webmail.df.eu> Message-ID: <5038A96F.9060409@email.de> >>> >>> Most recent 15 issues waiting for review (15) >>> ============================================= >>> >> Just curious: How is a issue considered "waiting for review"? > > Issues that have the "patch" or "needs review" keyword or are > in the "patch review" stage. > Thank you! 
Is there an easy way to automate this?: - Get a list of the "waiting for review" issues - Get the last patch - Try to apply that patch to the version(s) to check if that patch already applies? Regards, francis
From martin at v.loewis.de Sat Aug 25 13:07:37 2012 From: martin at v.loewis.de (martin at v.loewis.de) Date: Sat, 25 Aug 2012 13:07:37 +0200 Subject: [Python-Dev] Summary of Python tracker Issues In-Reply-To: <5038A96F.9060409@email.de> References: <20120824160715.795EE1CA7F@psf.upfronthosting.co.za> <50389B00.9010006@email.de> <20120825115441.Horde.pHBzULuWis5QOKDhjKMVbhA@webmail.df.eu> <5038A96F.9060409@email.de> Message-ID: <20120825130737.Horde.Z530NbuWis5QOLH5Dl9mCMA@webmail.df.eu> Zitat von francis : > Is there an easy way to automate this?: > > - Get a list of the "waiting for review" issues Not exactly this precise list; instead, a list of issues with a patch: s=xmlrpclib.ServerProxy("http://bugs.python.org",allow_none=True) s.filter('issue', dict(keywords=2, status=1)) The other conditions need to be queried separately (although you could search for both keywords in a single query). > - Get the last patch s.display('issue12201','files') The latest patch will be the one with the highest ID. To then download the patch, just download http://bugs.python.org/file/arbitrary-name.diff Alternatively, do s.display('file22163', 'content') > - Try to apply that patch to the version(s) to check if that patch > already applies? This should be possible by just running patch(1) through the subprocess module (and hg revert afterwards). You may have to do some patch parsing to find out whether to pass -p0 or -p1. Regards, Martin
From ezio.melotti at gmail.com Sat Aug 25 13:15:02 2012 From: ezio.melotti at gmail.com (Ezio Melotti) Date: Sat, 25 Aug 2012 14:15:02 +0300 Subject: [Python-Dev] Summary of Python tracker Issues In-Reply-To: <20120825130737.Horde.Z530NbuWis5QOLH5Dl9mCMA@webmail.df.eu> References: <20120824160715.795EE1CA7F@psf.upfronthosting.co.za> <50389B00.9010006@email.de> <20120825115441.Horde.pHBzULuWis5QOKDhjKMVbhA@webmail.df.eu> <5038A96F.9060409@email.de> <20120825130737.Horde.Z530NbuWis5QOLH5Dl9mCMA@webmail.df.eu> Message-ID: Hi, On Sat, Aug 25, 2012 at 2:07 PM, wrote: > > Zitat von francis : > > >> Is there an easy way to automate this?: >> >> - Get a list of the "waiting for review" issues > > > Not exactly this precise list; instead, a list of issues with a patch: > > s=xmlrpclib.ServerProxy("http://bugs.python.org",allow_none=True) > s.filter('issue', dict(keywords=2, status=1)) > > The other conditions need to be queried separately (although you > could search for both keywords in a single query). > > [...] In addition, you might want to check the Roundup XML-RPC docs: http://roundup.sourceforge.net/docs/xmlrpc.html and the source of the script that generates the summary: http://hg.python.org/tracker/python-dev/file/default/scripts/roundup-summary Best Regards, Ezio Melotti
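Putting Martin's and Ezio's pieces together, a rough sketch of the whole loop could look like the following (Python 2 and xmlrpclib, as in Martin's snippet). It is untested against the live tracker: the keyword/status ids are the placeholder values from Martin's message, the exact filter() arguments should be double-checked against the Roundup XML-RPC docs linked above, and the patch(1) dry run assumes it is started from a checkout of the branch being tested.

    import subprocess
    import xmlrpclib

    s = xmlrpclib.ServerProxy("http://bugs.python.org", allow_none=True)

    # ids as in Martin's example; verify them against the tracker schema
    issue_ids = s.filter('issue', dict(keywords=2, status=1))

    for issue_id in issue_ids[:20]:     # limit the run while experimenting
        # adjust the designator if the ids already carry an 'issue' prefix
        files = s.display('issue%s' % issue_id, 'files')['files']
        if not files:
            continue
        latest = max(files, key=int)    # highest id == most recent upload
        content = s.display('file%s' % latest, 'content')['content']
        if hasattr(content, 'data'):    # unwrap an xmlrpclib.Binary, if any
            content = content.data
        with open('candidate.diff', 'wb') as f:
            f.write(content)
        # --dry-run leaves the working copy untouched, so no hg revert needed
        for strip in ('-p1', '-p0'):
            rc = subprocess.call(['patch', '--dry-run', strip,
                                  '-i', 'candidate.diff'])
            if rc == 0:
                break
        print('issue%s: %s' % (issue_id,
                               'applies' if rc == 0 else 'does not apply'))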
From martin at v.loewis.de Sat Aug 25 19:02:15 2012 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Sat, 25 Aug 2012 19:02:15 +0200 Subject: [Python-Dev] Python 2.7: only Visual Studio 2008? In-Reply-To: References: <7B27FDF2-F302-4969-A236-3D6500806007@mac.com> <20120821170121.Horde.ekeyVruWis5QM6LBdOMjMMA@webmail.df.eu> Message-ID: <50390517.7070300@v.loewis.de> > Compatibility issues may lead to other strange bugs, too. IIRC each > msvcrt has its own thread local storage and therefore its own errno > handling. An extension compiled with VS 2010 won't be able to use the > PyErr_SetFromErrno*() function correctly. That's much harder to debug > than a FILE pointer mismatch because it usually doesn't cause a segfault. Interesting point. This somewhat breaks the stable ABI, which does include three SetFromErrno functions. So I guess we need to warn users of the stable ABI against using these functions. A solution would then be to add an additional set of functions which expect errno as a parameter, although this is quite some complication. Another solution is to introduce a Py_errno macro (and _Py_errno function) which exposes Python's view of errno, so code that might be confronted with this issue would write Py_errno = errno; before calling any of these functions. Except for the FILE* issue, I never considered any of the other issues really relevant for Python extensions, namely: - each CRT has its own heap, allocating on one heap and releasing to the other can leak. Not an issue for Python, since no Python API involves malloc/free pairs across DLL boundaries. - each CRT has its own timezone. This isn't really an issue, as they still get initialized consistently when the process starts (I guess except when the process starts before the DST change, but imports the extension after the DST change). - each CRT has its own locale. This may be an issue if an extension module relies on the CRT locale for data formatting; I just think this is unlikely to occur in practice (and when it does, it's easily notable). Anything else that you are aware of? Regards, Martin
From stefan at bytereef.org Sat Aug 25 19:36:36 2012 From: stefan at bytereef.org (Stefan Krah) Date: Sat, 25 Aug 2012 19:36:36 +0200 Subject: [Python-Dev] Python 2.7: only Visual Studio 2008? In-Reply-To: <50390517.7070300@v.loewis.de> References: <7B27FDF2-F302-4969-A236-3D6500806007@mac.com> <20120821170121.Horde.ekeyVruWis5QM6LBdOMjMMA@webmail.df.eu> <50390517.7070300@v.loewis.de> Message-ID: <20120825173636.GA10325@sleipnir.bytereef.org> "Martin v. Löwis" wrote: > - each CRT has its own locale. This may be an issue if an extension > module relies on the CRT locale for data formatting; I just think > this is unlikely to occur in practice (and when it does, it's easily > notable). _decimal's 'n' format specifier actually relies on the CRT locale.
The > functions in question are in libmpdec, so on Windows it is not possible > to compile a static libmpdec and build the module from that. Building a static libmpdec should work fine, as long as it links with the CRT DLL. Most likely, the problem was that the project file in question would also link with the static CRT, which then causes the problem. But I see the point that extension modules may rely on the locale set by the Python script. Regards, Martin From georg at python.org Sat Aug 25 21:36:54 2012 From: georg at python.org (Georg Brandl) Date: Sat, 25 Aug 2012 21:36:54 +0200 Subject: [Python-Dev] [RELEASED] Python 3.3.0 release candidate 1 Message-ID: <50392956.6030907@python.org> On behalf of the Python development team, I'm delighted to announce the first release candidate of Python 3.3.0. This is a preview release, and its use is not recommended in production settings. Python 3.3 includes a range of improvements of the 3.x series, as well as easier porting between 2.x and 3.x. Major new features and changes in the 3.3 release series are: * PEP 380, syntax for delegating to a subgenerator ("yield from") * PEP 393, flexible string representation (doing away with the distinction between "wide" and "narrow" Unicode builds) * A C implementation of the "decimal" module, with up to 80x speedup for decimal-heavy applications * The import system (__import__) now based on importlib by default * The new "lzma" module with LZMA/XZ support * PEP 397, a Python launcher for Windows * PEP 405, virtual environment support in core * PEP 420, namespace package support * PEP 3151, reworking the OS and IO exception hierarchy * PEP 3155, qualified name for classes and functions * PEP 409, suppressing exception context * PEP 414, explicit Unicode literals to help with porting * PEP 418, extended platform-independent clocks in the "time" module * PEP 412, a new key-sharing dictionary implementation that significantly saves memory for object-oriented code * PEP 362, the function-signature object * The new "faulthandler" module that helps diagnosing crashes * The new "unittest.mock" module * The new "ipaddress" module * The "sys.implementation" attribute * A policy framework for the email package, with a provisional (see PEP 411) policy that adds much improved unicode support for email header parsing * A "collections.ChainMap" class for linking mappings to a single unit * Wrappers for many more POSIX functions in the "os" and "signal" modules, as well as other useful functions such as "sendfile()" * Hash randomization, introduced in earlier bugfix releases, is now switched on by default In total, almost 500 API items are new or improved in Python 3.3. For a more extensive list of changes in 3.3.0, see http://docs.python.org/3.3/whatsnew/3.3.html To download Python 3.3.0 visit: http://www.python.org/download/releases/3.3.0/ Please consider trying Python 3.3.0 with your code and reporting any bugs you may notice to: http://bugs.python.org/ Enjoy! -- Georg Brandl, Release Manager georg at python.org (on behalf of the entire python-dev team and 3.3's contributors) From chris.jerdonek at gmail.com Sun Aug 26 21:15:04 2012 From: chris.jerdonek at gmail.com (Chris Jerdonek) Date: Sun, 26 Aug 2012 12:15:04 -0700 Subject: [Python-Dev] question re: default branch and release clone Message-ID: Now that the 3.3 release clone has been created, can someone clarify what changes are allowed to go into the default branch? Is it the same policy as if the changes were going into the release clone directly (i.e. 
code freeze unless you have Georg's approval), or are future changes for 3.3.1 okay, or is the default branch for changes that would go into 3.4? If the policy is the same, when and how do we anticipate changing things for the default branch? Thanks, --Chris From victor.stinner at gmail.com Sun Aug 26 22:16:06 2012 From: victor.stinner at gmail.com (Victor Stinner) Date: Sun, 26 Aug 2012 22:16:06 +0200 Subject: [Python-Dev] Sphinx issue in What's New in Python 3.3 doc Message-ID: Hi, In the first example of the "PEP 409: Suppressing exception context" section, I read "from None...". http://docs.python.org/dev/whatsnew/3.3.html#pep-409-suppressing-exception-context It's confusing because I don't remember what was the last choice for the PEP: None or ... :-) The reST "code" looks correct in Doc/whatsnew/3.3.rst: ... raise AttributeError(attr) from None ... Victor From g.brandl at gmx.net Sun Aug 26 22:56:18 2012 From: g.brandl at gmx.net (Georg Brandl) Date: Sun, 26 Aug 2012 22:56:18 +0200 Subject: [Python-Dev] question re: default branch and release clone In-Reply-To: References: Message-ID: On 26.08.2012 21:15, Chris Jerdonek wrote: > Now that the 3.3 release clone has been created, can someone clarify > what changes are allowed to go into the default branch? Is it the > same policy as if the changes were going into the release clone > directly (i.e. code freeze unless you have Georg's approval), or are > future changes for 3.3.1 okay, or is the default branch for changes > that would go into 3.4? If the policy is the same, when and how do we > anticipate changing things for the default branch? Changes to the default branch must be bugfix-only. The 3.4 development only opens when the 3.3 branch is created, which happens after the release of 3.3.0 final. Changes made in default and not cherry-picked to the 3.3.0 release clone will therefore end up in 3.3.1 and 3.4. cheers, Georg From g.brandl at gmx.net Mon Aug 27 07:55:09 2012 From: g.brandl at gmx.net (Georg Brandl) Date: Mon, 27 Aug 2012 07:55:09 +0200 Subject: [Python-Dev] Sphinx issue in What's New in Python 3.3 doc In-Reply-To: References: Message-ID: On 26.08.2012 22:16, Victor Stinner wrote: > Hi, > > In the first example of the "PEP 409: Suppressing exception context" > section, I read "from None...". > http://docs.python.org/dev/whatsnew/3.3.html#pep-409-suppressing-exception-context > > It's confusing because I don't remember what was the last choice for > the PEP: None or ... :-) > > The reST "code" looks correct in Doc/whatsnew/3.3.rst: > > ... raise AttributeError(attr) from None > ... Hi Victor, this is fixed in the latest Pygments, and will be fine in the doc once I update its version used for building. Until then, you could disable syntax highlighting on that particular code block. Georg From dholth at gmail.com Mon Aug 27 16:56:20 2012 From: dholth at gmail.com (Daniel Holth) Date: Mon, 27 Aug 2012 10:56:20 -0400 Subject: [Python-Dev] Edits to Metadata 1.2 to add extras (optional dependencies) In-Reply-To: References: Message-ID: On Wed, Aug 15, 2012 at 10:49 AM, Daniel Holth wrote: > I've drafted some edits to Metadata 1.2 with valuable feedback from ... > (full changeset on https://bitbucket.org/dholth/python-peps/changeset/537e83bd4068) Metadata 1.2 is nearly 8 years old and it's Accepted but not Final. Is it better to continue editing it, or create a new PEP for Metadata 1.3? 
From martin at v.loewis.de Mon Aug 27 22:29:59 2012 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Mon, 27 Aug 2012 22:29:59 +0200 Subject: [Python-Dev] Edits to Metadata 1.2 to add extras (optional dependencies) In-Reply-To: References: Message-ID: <503BD8C7.90706@v.loewis.de> Am 27.08.12 16:56, schrieb Daniel Holth: > On Wed, Aug 15, 2012 at 10:49 AM, Daniel Holth wrote: >> I've drafted some edits to Metadata 1.2 with valuable feedback from > ... >> (full changeset on https://bitbucket.org/dholth/python-peps/changeset/537e83bd4068) > > Metadata 1.2 is nearly 8 years old and it's Accepted but not Final. Is > it better to continue editing it, or create a new PEP for Metadata > 1.3? You can't add new fields to the format after the fact, unless the format had provided for such additions (which it does not - there is no mention of custom fields anywhere, and no elaboration on how "unknown" fields should be processed). So if you want to add new fields, you need to create a new version of the metadata. Prepare for a ten-year period of acceptance - so it would be good to be sure that no further additions are desired within the next ten years before seeking approval for the PEP. Regards, Martin
From dholth at gmail.com Mon Aug 27 23:02:20 2012 From: dholth at gmail.com (Daniel Holth) Date: Mon, 27 Aug 2012 17:02:20 -0400 Subject: [Python-Dev] Edits to Metadata 1.2 to add extras (optional dependencies) In-Reply-To: <503BD8C7.90706@v.loewis.de> References: <503BD8C7.90706@v.loewis.de> Message-ID: On Mon, Aug 27, 2012 at 4:29 PM, "Martin v. Löwis" wrote: > Am 27.08.12 16:56, schrieb Daniel Holth: > >> On Wed, Aug 15, 2012 at 10:49 AM, Daniel Holth wrote: >>> >>> I've drafted some edits to Metadata 1.2 with valuable feedback from >> >> ... >>> >>> (full changeset on >>> https://bitbucket.org/dholth/python-peps/changeset/537e83bd4068) >> >> >> Metadata 1.2 is nearly 8 years old and it's Accepted but not Final. Is >> it better to continue editing it, or create a new PEP for Metadata >> 1.3? > > > You can't add new fields to the format after the fact, unless the format had > provided for such additions (which it does not - there is > no mention of custom fields anywhere, and no elaboration on how > "unknown" fields should be processed). > > So if you want to add new fields, you need to create a new version > of the metadata. Prepare for a ten-year period of acceptance - so it > would be good to be sure that no further additions are desired within > the next ten years before seeking approval for the PEP. I don't know of a tool that doesn't reliably ignore extra fields, but I will put you down as being in favor of an X- fields paragraph: Extensions (X- Fields) :::::::::::::::::::::: Metadata files can contain fields that are not part of the specification, called *extensions*. These fields start with `X-`.
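To make the proposal concrete, a purely illustrative fragment of a metadata file, written roughly in the style of the draft's extras support plus one extension field; the project name, the requirements and the X- field are invented for the example and are not taken from Daniel's changeset:

    Metadata-Version: 1.3
    Name: example-dist
    Version: 0.1
    Provides-Extra: doc
    Requires-Dist: requests
    Requires-Dist: sphinx; extra == 'doc'
    X-Vendor-Notes: free-form value, ignored by tools that do not know the field

A consumer that only understands Metadata 1.2 would simply skip the fields it does not recognize, which matches the behaviour Daniel describes for existing tools.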
See RFC 6648 for why such X-fields may not be a good idea: http://tools.ietf.org/html/rfc6648 From petri at digip.org Tue Aug 28 06:22:42 2012 From: petri at digip.org (Petri Lehtinen) Date: Tue, 28 Aug 2012 07:22:42 +0300 Subject: [Python-Dev] question re: default branch and release clone In-Reply-To: References: Message-ID: <20120828042241.GB21422@chang> Georg Brandl wrote: > Changes to the default branch must be bugfix-only. The 3.4 development > only opens when the 3.3 branch is created, which happens after the > release of 3.3.0 final. > > Changes made in default and not cherry-picked to the 3.3.0 release clone > will therefore end up in 3.3.1 and 3.4. Where should I put the news entry for fixes that will go to 3.3.1? The top item in Misc/NEWS is 3.3.0 RC 2. The stuff in your private clone will be released as 3.3.0, so can a 3.3.1 entry be added to the default branch of the public repo now? And if changes like this are added now, they will be included in 3.2.4 but not in 3.3.0. Is this bad? Petri From stephen at xemacs.org Tue Aug 28 06:57:05 2012 From: stephen at xemacs.org (Stephen J. Turnbull) Date: Tue, 28 Aug 2012 13:57:05 +0900 Subject: [Python-Dev] Edits to Metadata 1.2 to add extras (optional dependencies) In-Reply-To: <20120828041552.GA21422@chang> References: <503BD8C7.90706@v.loewis.de> <20120828041552.GA21422@chang> Message-ID: <87wr0jtxzy.fsf@uwakimon.sk.tsukuba.ac.jp> Petri Lehtinen writes: > Daniel Holth wrote: > > I don't know of a tool that doesn't reliably ignore extra fields, but > > I will put you down as being in favor of an X- fields paragraph: > > > > Extensions (X- Fields) > > :::::::::::::::::::::: > > > > Metadata files can contain fields that are not part of > > the specification, called *extensions*. These fields start with > > with `X-`. > > See RFC 6648 for why such X-fields may not be a good idea: > > http://tools.ietf.org/html/rfc6648 But note that the RFC also says that the preferred solution to the problem that X-fields are intended to solve is an easily accessible name registry and a simple registration procedure. If Martin's "be prepared for a ten-year period to acceptance" is serious, what should be done about such a registry? From g.brandl at gmx.net Tue Aug 28 07:11:05 2012 From: g.brandl at gmx.net (Georg Brandl) Date: Tue, 28 Aug 2012 07:11:05 +0200 Subject: [Python-Dev] question re: default branch and release clone In-Reply-To: <20120828042241.GB21422@chang> References: <20120828042241.GB21422@chang> Message-ID: On 28.08.2012 06:22, Petri Lehtinen wrote: > Georg Brandl wrote: >> Changes to the default branch must be bugfix-only. The 3.4 development >> only opens when the 3.3 branch is created, which happens after the >> release of 3.3.0 final. >> >> Changes made in default and not cherry-picked to the 3.3.0 release clone >> will therefore end up in 3.3.1 and 3.4. > > Where should I put the news entry for fixes that will go to 3.3.1? The > top item in Misc/NEWS is 3.3.0 RC 2. The stuff in your private clone > will be released as 3.3.0, so can a 3.3.1 entry be added to the > default branch of the public repo now? Yes. > And if changes like this are added now, they will be included in 3.2.4 > but not in 3.3.0. Is this bad? Sounds fine to me. 
Georg From ncoghlan at gmail.com Tue Aug 28 07:20:43 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 28 Aug 2012 15:20:43 +1000 Subject: [Python-Dev] Edits to Metadata 1.2 to add extras (optional dependencies) In-Reply-To: <503BD8C7.90706@v.loewis.de> References: <503BD8C7.90706@v.loewis.de> Message-ID: On Tue, Aug 28, 2012 at 6:29 AM, "Martin v. L?wis" wrote: > Am 27.08.12 16:56, schrieb Daniel Holth: >> Metadata 1.2 is nearly 8 years old and it's Accepted but not Final. Is >> it better to continue editing it, or create a new PEP for Metadata >> 1.3? > > > You can't add new fields to the format after the fact, unless the format had > provided for such additions (which it does not - there is > no mention of custom fields anywhere, and no elaboration on how > "unknown" fields should be processed). > > So if you want to add new fields, you need to create a new version > of the metadata. I agree with this point - the main reason the metadata PEP is still lingering at Accepted rather than Final is the tangled relationship between distutils and other projects that led to the complete distutils feature freeze. Until distutils2 makes it into the standard library as the packaging module, the standard library is going to be stuck at v1.1 of the metadata format. > Prepare for a ten-year period of acceptance - so it > would be good to be sure that no further additions are desired within > the next ten years before seeking approval for the PEP. However, this point I really don't agree with. The packaging ecosystem is currently evolving outside the standard library, but the standardisation process for the data interchange formats still falls under the authority of python-dev and the PEP process. If there are things missing from v1.2 of the metadata spec, then define v1.3 to address those known problems. Don't overengineer it in an attempt to anticipate every possible need that might come in the next decade. Tools outside the standard library are then free to adopt the new standard, even while the stdlib itself continues to lag behind. When the packaging module is finally added (hopefully 3.4, even if that means we have to temporarily cull the entire compiler subpackage), it will handle the most recent accepted version of the metadata format (as well as any previous versions). If more holes reveal themselves in the next 18 months, then it's OK if v1.4 is created when it becomes clear that it's necessary. At the very least, something v1.3 should make explicit is that custom metadata should NOT be put into the .dist-info/METADATA (PEP 376 location, PKG-INFO, in distutils terms) file. Instead, that data should be placed in a *separate* file in the .dist-info directory. Something that *may* be appropriate is a new field in METADATA that explicitly calls out such custom metadata files by naming the PyPI distribution that is the authority for the relevant format (e.g. "Custom-Metadata: wheel" to indicate that 'wheel' defined metadata is present) Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From dholth at gmail.com Tue Aug 28 13:04:28 2012 From: dholth at gmail.com (Daniel Holth) Date: Tue, 28 Aug 2012 07:04:28 -0400 Subject: [Python-Dev] Edits to Metadata 1.2 to add extras (optional dependencies) In-Reply-To: References: <503BD8C7.90706@v.loewis.de> Message-ID: <4416FCFB-EB0C-4DCB-BA97-28131D6DA100@gmail.com> On Aug 28, 2012, at 1:20 AM, Nick Coghlan wrote: > On Tue, Aug 28, 2012 at 6:29 AM, "Martin v. 
L?wis" wrote: >> Am 27.08.12 16:56, schrieb Daniel Holth: >>> Metadata 1.2 is nearly 8 years old and it's Accepted but not Final. Is >>> it better to continue editing it, or create a new PEP for Metadata >>> 1.3? >> >> >> You can't add new fields to the format after the fact, unless the format had >> provided for such additions (which it does not - there is >> no mention of custom fields anywhere, and no elaboration on how >> "unknown" fields should be processed). >> >> So if you want to add new fields, you need to create a new version >> of the metadata. > > I agree with this point - the main reason the metadata PEP is still > lingering at Accepted rather than Final is the tangled relationship > between distutils and other projects that led to the complete > distutils feature freeze. Until distutils2 makes it into the standard > library as the packaging module, the standard library is going to be > stuck at v1.1 of the metadata format. > >> Prepare for a ten-year period of acceptance - so it >> would be good to be sure that no further additions are desired within >> the next ten years before seeking approval for the PEP. > > However, this point I really don't agree with. The packaging ecosystem > is currently evolving outside the standard library, but the > standardisation process for the data interchange formats still falls > under the authority of python-dev and the PEP process. > > If there are things missing from v1.2 of the metadata spec, then > define v1.3 to address those known problems. Don't overengineer it in > an attempt to anticipate every possible need that might come in the > next decade. Tools outside the standard library are then free to adopt > the new standard, even while the stdlib itself continues to lag > behind. > > When the packaging module is finally added (hopefully 3.4, even if > that means we have to temporarily cull the entire compiler > subpackage), it will handle the most recent accepted version of the > metadata format (as well as any previous versions). If more holes > reveal themselves in the next 18 months, then it's OK if v1.4 is > created when it becomes clear that it's necessary. > > At the very least, something v1.3 should make explicit is that custom > metadata should NOT be put into the .dist-info/METADATA (PEP 376 > location, PKG-INFO, in distutils terms) file. Instead, that data > should be placed in a *separate* file in the .dist-info directory. > Something that *may* be appropriate is a new field in METADATA that > explicitly calls out such custom metadata files by naming the PyPI > distribution that is the authority for the relevant format (e.g. > "Custom-Metadata: wheel" to indicate that 'wheel' defined metadata is > present) Setuptools just uses path.exists() when it needs a particular file and will not bother parsing pkg-info at all if it can help it. The metadata edits for 1.2 fold some of those files into metadata. From ncoghlan at gmail.com Tue Aug 28 13:45:09 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 28 Aug 2012 21:45:09 +1000 Subject: [Python-Dev] Edits to Metadata 1.2 to add extras (optional dependencies) In-Reply-To: <4416FCFB-EB0C-4DCB-BA97-28131D6DA100@gmail.com> References: <503BD8C7.90706@v.loewis.de> <4416FCFB-EB0C-4DCB-BA97-28131D6DA100@gmail.com> Message-ID: On Tue, Aug 28, 2012 at 9:04 PM, Daniel Holth wrote: > Setuptools just uses path.exists() when it needs a particular file and will not bother parsing pkg-info at all if it can help it. The metadata edits for 1.2 fold some of those files into metadata. 
You can't use path.exists() on metadata published by a webservice (or still inside a zipfile), but you can download or read the main metadata file. Still, I don't really care whether or not such a field indicating the presence of custom metadata is added, I'm mainly registering a strong -1 on allowing extension fields (in the form of X- headers or CSS style prefixed headers) in the metadata file itself. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From donald.stufft at gmail.com Tue Aug 28 14:07:39 2012 From: donald.stufft at gmail.com (Donald Stufft) Date: Tue, 28 Aug 2012 08:07:39 -0400 Subject: [Python-Dev] Edits to Metadata 1.2 to add extras (optional dependencies) In-Reply-To: References: <503BD8C7.90706@v.loewis.de> <4416FCFB-EB0C-4DCB-BA97-28131D6DA100@gmail.com> Message-ID: <96CFF8D4471940379002B1C69E1C9982@gmail.com> I personally think that at a minimum we should have X-Fields that get moved into the normal METADATA file, and personally I would prefer to just drop the X- prefix completely. I think any spec which doesn't include first class support for extending it with new metadata is going to essentially kick the can down the road and solve the problems of today without leaving room to solve the problems of tomorrow. I know that distutils2 have requires-dist, but for the sake of argument pretend they don't. If there is first class support for extending the metadata with new fields, a project could come along, and add a requires-dist (or x-requires-dist) concept to metadata. Tools that understand it would see that data and be able to act on it, tools that don't understand it would simply write it to the METADATA file incase in the future a tool that does understand it needs to act on it. Essentially first class support for extending the metadata outside of a PEP process means that outside of the stdlib people can experiment and try new things, existing tools will continue to work and just ignore that extra data (but leave it intact), new tools will be able to utilize it to do something useful. Ideally as a new concept is tested externally and begins to gain acceptance a new metadata version could be created that standardizes that field as part of the spec instead of an extension. On Tuesday, August 28, 2012 at 7:45 AM, Nick Coghlan wrote: > On Tue, Aug 28, 2012 at 9:04 PM, Daniel Holth wrote: > > Setuptools just uses path.exists() when it needs a particular file and will not bother parsing pkg-info at all if it can help it. The metadata edits for 1.2 fold some of those files into metadata. > > > You can't use path.exists() on metadata published by a webservice (or > still inside a zipfile), but you can download or read the main > metadata file. > > Still, I don't really care whether or not such a field indicating the > presence of custom metadata is added, I'm mainly registering a strong > -1 on allowing extension fields (in the form of X- headers or CSS > style prefixed headers) in the metadata file itself. > > Cheers, > Nick. > > -- > Nick Coghlan | ncoghlan at gmail.com (mailto:ncoghlan at gmail.com) | Brisbane, Australia > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org (mailto:Python-Dev at python.org) > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/donald.stufft%40gmail.com > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From dholth at gmail.com Tue Aug 28 14:28:44 2012 From: dholth at gmail.com (Daniel Holth) Date: Tue, 28 Aug 2012 08:28:44 -0400 Subject: [Python-Dev] Edits to Metadata 1.2 to add extras (optional dependencies) In-Reply-To: <96CFF8D4471940379002B1C69E1C9982@gmail.com> References: <503BD8C7.90706@v.loewis.de> <4416FCFB-EB0C-4DCB-BA97-28131D6DA100@gmail.com> <96CFF8D4471940379002B1C69E1C9982@gmail.com> Message-ID: On Tue, Aug 28, 2012 at 8:07 AM, Donald Stufft wrote: > I personally think that at a minimum we should have X-Fields that > get moved into the normal METADATA file, and personally I would > prefer to just drop the X- prefix completely. That is my preference as well. The standard library basically ignores every metadata field or metadata file inside or outside of metadata currently, so where is the harm changing the official document to read "you may add new metadata fields to metadata" with an updated standard library that only ignores some of the metadata in metadata instead of all of it. The community is small enough to handle it. From ncoghlan at gmail.com Tue Aug 28 14:28:57 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 28 Aug 2012 22:28:57 +1000 Subject: [Python-Dev] Edits to Metadata 1.2 to add extras (optional dependencies) In-Reply-To: <96CFF8D4471940379002B1C69E1C9982@gmail.com> References: <503BD8C7.90706@v.loewis.de> <4416FCFB-EB0C-4DCB-BA97-28131D6DA100@gmail.com> <96CFF8D4471940379002B1C69E1C9982@gmail.com> Message-ID: On Tue, Aug 28, 2012 at 10:07 PM, Donald Stufft wrote: > I personally think that at a minimum we should have X-Fields that > get moved into the normal METADATA file, and personally I would > prefer to just drop the X- prefix completely. Hell no. We've been down this road with setuptools and it *sucks*. Everybody gets confused, because you can't tell just by looking at a metadata file what's part of the standard and what's been added just because a tool developer thought it was a good idea without being able to obtain broad consensus (perhaps because others couldn't see the point until the extension had been field tested for an extended period). Almost *nobody* reads metadata specs other than the people that helped write them. Everyone else copies a file that works, and tweaks it to suit, or they use a tool that generates the metadata for them based on some other interface. The least-awful widespread extension approach I'm aware of is CSS vendor prefixes. X- headers suck because they only give you two namespaces - the "standard" namespace and the "extension" namespace. That means everyone is quickly forced back into seeking agreement and consensus to avoid naming conflicts for extension fields. However, I'm open to the idea of a properly namespaced extension mechanism, which is exactly why I suggested separate files flagged in the main metadata with the PyPI project that defines the format of those extensions. I'm also open to the idea of extensions appearing in [PyPI distribution] prefixed sections after the standard metadata so, for example, there could be a [wheel] section in METADATA rather than a separate WHEEL file. We already have a namespace registry in the form of PyPI, so there's no reason to invent a new one, and allowing *any* PyPI distribution to add custom metadata fields without name conflicts would allow easy experimentation while still making it clear which fields are defined in PEPs and which are defined by particular projects. 
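To make the "[wheel] section in METADATA" idea concrete, here is a rough sketch of how a consumer could split such a file into the PEP-defined fields and per-project extension sections. The layout (standard fields first, bracketed sections after) is only the variant floated in this thread, not a settled format, and the naive split would need hardening in a real tool.

import configparser
from email import message_from_string

# Hypothetical METADATA: PEP-defined fields first, then one section per
# PyPI project that owns the extension fields inside it.
TEXT = """\
Metadata-Version: 1.3
Name: example
Version: 1.0

[wheel]
Version: 0.9
Packager: bdist_wheel-0.1
Root-Is-Purelib: true
"""

# Naive split at the first section header; a robust parser would have
# to cope with '[' appearing inside field values such as Description.
head, sep, tail = TEXT.partition("\n[")
standard = message_from_string(head)
extensions = configparser.ConfigParser()
if sep:
    extensions.read_string("[" + tail)

print(standard["Name"])           # example
print(dict(extensions["wheel"]))  # option names are lower-cased by ConfigParser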
> I know that distutils2 have requires-dist, but for the sake of > argument pretend they don't. If there is first class support for > extending the metadata with new fields, a project could come > along, and add a requires-dist (or x-requires-dist) concept to > metadata. Tools that understand it would see that data and > be able to act on it, tools that don't understand it would simply > write it to the METADATA file incase in the future a tool that > does understand it needs to act on it. > > Essentially first class support for extending the metadata outside > of a PEP process means that outside of the stdlib people can > experiment and try new things, existing tools will continue to work > and just ignore that extra data (but leave it intact), new tools will be > able to utilize it to do something useful. Ideally as a new concept is > tested externally and begins to gain acceptance a new metadata > version could be created that standardizes that field as part of the > spec instead of an extension. Agreed, and this is the kind of thing a v1.3 metadata PEP could define. It just needs to be properly namespaced, and the obvious namespacing mechanism is PyPI project names. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From dholth at gmail.com Tue Aug 28 14:33:59 2012 From: dholth at gmail.com (Daniel Holth) Date: Tue, 28 Aug 2012 08:33:59 -0400 Subject: [Python-Dev] Edits to Metadata 1.2 to add extras (optional dependencies) In-Reply-To: References: <503BD8C7.90706@v.loewis.de> <4416FCFB-EB0C-4DCB-BA97-28131D6DA100@gmail.com> <96CFF8D4471940379002B1C69E1C9982@gmail.com> Message-ID: On Tue, Aug 28, 2012 at 8:28 AM, Nick Coghlan wrote: > On Tue, Aug 28, 2012 at 10:07 PM, Donald Stufft wrote: >> I personally think that at a minimum we should have X-Fields that >> get moved into the normal METADATA file, and personally I would >> prefer to just drop the X- prefix completely. > > Hell no. We've been down this road with setuptools and it *sucks*. > Everybody gets confused, because you can't tell just by looking at a > metadata file what's part of the standard and what's been added just > because a tool developer thought it was a good idea without being able > to obtain broad consensus (perhaps because others couldn't see the > point until the extension had been field tested for an extended > period). > > Almost *nobody* reads metadata specs other than the people that helped > write them. Everyone else copies a file that works, and tweaks it to > suit, or they use a tool that generates the metadata for them based on > some other interface. > > The least-awful widespread extension approach I'm aware of is CSS > vendor prefixes. X- headers suck because they only give you two > namespaces - the "standard" namespace and the "extension" namespace. > That means everyone is quickly forced back into seeking agreement and > consensus to avoid naming conflicts for extension fields. > > However, I'm open to the idea of a properly namespaced extension > mechanism, which is exactly why I suggested separate files flagged in > the main metadata with the PyPI project that defines the format of > those extensions. I'm also open to the idea of extensions appearing in > [PyPI distribution] prefixed sections after the standard metadata so, > for example, there could be a [wheel] section in METADATA rather than > a separate WHEEL file. 
> > We already have a namespace registry in the form of PyPI, so there's > no reason to invent a new one, and allowing *any* PyPI distribution to > add custom metadata fields without name conflicts would allow easy > experimentation while still making it clear which fields are defined > in PEPs and which are defined by particular projects. > > >> I know that distutils2 have requires-dist, but for the sake of >> argument pretend they don't. If there is first class support for >> extending the metadata with new fields, a project could come >> along, and add a requires-dist (or x-requires-dist) concept to >> metadata. Tools that understand it would see that data and >> be able to act on it, tools that don't understand it would simply >> write it to the METADATA file incase in the future a tool that >> does understand it needs to act on it. >> >> Essentially first class support for extending the metadata outside >> of a PEP process means that outside of the stdlib people can >> experiment and try new things, existing tools will continue to work >> and just ignore that extra data (but leave it intact), new tools will be >> able to utilize it to do something useful. Ideally as a new concept is >> tested externally and begins to gain acceptance a new metadata >> version could be created that standardizes that field as part of the >> spec instead of an extension. > > Agreed, and this is the kind of thing a v1.3 metadata PEP could > define. It just needs to be properly namespaced, and the obvious > namespacing mechanism is PyPI project names. > > Cheers, > Nick. Wheel deals with this somewhat by including a Packager: bdist_wheel line in WHEEL so that you can deal with packager-specific bugs. Bento uses indentation so you can have sections: Key: value Indented Key: value From ncoghlan at gmail.com Tue Aug 28 14:31:49 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 28 Aug 2012 22:31:49 +1000 Subject: [Python-Dev] Edits to Metadata 1.2 to add extras (optional dependencies) In-Reply-To: References: <503BD8C7.90706@v.loewis.de> <4416FCFB-EB0C-4DCB-BA97-28131D6DA100@gmail.com> <96CFF8D4471940379002B1C69E1C9982@gmail.com> Message-ID: On Tue, Aug 28, 2012 at 10:28 PM, Daniel Holth wrote: > That is my preference as well. The standard library basically ignores > every metadata field or metadata file inside or outside of metadata > currently, so where is the harm changing the official document to read > "you may add new metadata fields to metadata" with an updated standard > library that only ignores some of the metadata in metadata instead of > all of it. The community is small enough to handle it. I will campaign ardently against any such proposal. Any extension field must be clearly traceable to an authority that gets to define what it means to avoid a repeat of the setuptools debacle. Namespaces are a honkin' great idea, let's do more of those :P Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From dholth at gmail.com Tue Aug 28 14:57:36 2012 From: dholth at gmail.com (Daniel Holth) Date: Tue, 28 Aug 2012 08:57:36 -0400 Subject: [Python-Dev] Edits to Metadata 1.2 to add extras (optional dependencies) In-Reply-To: References: <503BD8C7.90706@v.loewis.de> <4416FCFB-EB0C-4DCB-BA97-28131D6DA100@gmail.com> <96CFF8D4471940379002B1C69E1C9982@gmail.com> Message-ID: How about Extensions are fields that start with a pypi-registered name followed by a hyphen. 
A file that contains extension fields declares them with Extension: name : Extension: pypiname pypiname-Field: value From ncoghlan at gmail.com Tue Aug 28 14:59:55 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 28 Aug 2012 22:59:55 +1000 Subject: [Python-Dev] Edits to Metadata 1.2 to add extras (optional dependencies) In-Reply-To: References: <503BD8C7.90706@v.loewis.de> <4416FCFB-EB0C-4DCB-BA97-28131D6DA100@gmail.com> <96CFF8D4471940379002B1C69E1C9982@gmail.com> Message-ID: On Tue, Aug 28, 2012 at 10:33 PM, Daniel Holth wrote: > Wheel deals with this somewhat by including a > > Packager: bdist_wheel > > line in WHEEL so that you can deal with packager-specific bugs. Right, but the problem with that is it's defining a couple of *new* namespaces to manage: - the filenames within dist_info (although uppercasing a PyPI project name is pretty safe) - the "Packager" field (bdist_wheel is a distutils command rather than a PyPI project) By using PyPI distribution names to indicate custom sections in the main metadata file, we would get to exploit an existing registry that enforces uniqueness without imposing significant overhead. > Bento uses indentation so you can have sections: > > Key: value > Indented Key: value Yes, the main metadata file could definitely go that way. The three main ways I can see an extensible metadata format working are: 1. The way wheel currently works (separate WHEEL file, naming conflicts resolved largely by first-in-first-served with no official registry, no obvious indication which project defines the format) 2. PyPI as extension registry, with an ini-file inspired section syntax inside dist-info/METADATA [wheel] Version: 0.9 Packager: bdist_wheel-0.1 Root-Is-Purelib: true 3. PyPI as extension registry, with an indented section syntax inside dist-info/METADATA Extended-Metadata: wheel Version: 0.9 Packager: bdist_wheel-0.1 Root-Is-Purelib: true My preference is currently for the ini-style variant, but I could definitely live with the indented approach.Either way, any project registered on PyPI would be free to add their own extensions without fear of naming conflicts or any doubts about the relevant authority for the meaning of the fields. Standard tools could just treat those sections as opaque blocks of text to be preserved verbatim, or else they could be constrained so that the standard tools could pick out the individual key:value pairs. Namespacing an extension mechanism based on PyPI distributions names should be pretty straightforward and it will mean that a lot of problems that can otherwise arise with extensible metadata systems should simply never come up. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From ncoghlan at gmail.com Tue Aug 28 15:09:20 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 28 Aug 2012 23:09:20 +1000 Subject: [Python-Dev] Edits to Metadata 1.2 to add extras (optional dependencies) In-Reply-To: References: <503BD8C7.90706@v.loewis.de> <4416FCFB-EB0C-4DCB-BA97-28131D6DA100@gmail.com> <96CFF8D4471940379002B1C69E1C9982@gmail.com> Message-ID: On Tue, Aug 28, 2012 at 10:57 PM, Daniel Holth wrote: > How about > > Extensions are fields that start with a pypi-registered name followed > by a hyphen. A file that contains extension fields declares them with > Extension: name : > > Extension: pypiname > pypiname-Field: value The repetition seems rather annoying. 
Compare the two section based variants I just posted to: Extension: wheel wheel-Version: 0.9 wheel-Packager: bdist_wheel-0.1 wheel-Root-Is-Purelib: true It does have the advantage that tools for manipulating the format can remain dumber, but that doesn't seem like *that* much of an advantage, especially since any such benefit could be eliminated completely by just switching to a completely standard ConfigParser format by putting the PEP defined settings into a [python] section. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From donald.stufft at gmail.com Tue Aug 28 15:19:33 2012 From: donald.stufft at gmail.com (Donald Stufft) Date: Tue, 28 Aug 2012 09:19:33 -0400 Subject: [Python-Dev] Edits to Metadata 1.2 to add extras (optional dependencies) In-Reply-To: References: <503BD8C7.90706@v.loewis.de> <4416FCFB-EB0C-4DCB-BA97-28131D6DA100@gmail.com> <96CFF8D4471940379002B1C69E1C9982@gmail.com> Message-ID: <4F1FFE82181A4DAC969400FC39A3D58D@gmail.com> On Tuesday, August 28, 2012 at 8:28 AM, Nick Coghlan wrote: > > Agreed, and this is the kind of thing a v1.3 metadata PEP could > define. It just needs to be properly namespaced, and the obvious > namespacing mechanism is PyPI project names. The biggest reason I have against namespacing them is it makes moving from experimental to standard easier, but I'm ok with some form of a namespace. The biggest reason I see against using PyPI names as the namespace is it needlessly ties a piece of data to the original creator. Similar to how right now you could write a less hacky setuptools, but in order to do so you need to continue to use the setuptools package name (see distribute). Using PyPI names means that in the requires-dist example it would be something like setuptools-requires-dist, and even if I make my own tool that supports the same concept as setuptools's requires-dist I would need to use setuptools-requires-dist. The concept of metadata I think should be divorced from specific implementations. Obviously there are going to be some implementation specific issues but I think it's much cleaner to have a x-requires-dist that any implementation can use than to have whoever-invented-it-first-requires-dist or a twenty-different-forms-of-requires-dist. -------------- next part -------------- An HTML attachment was scrubbed... URL: From donald.stufft at gmail.com Tue Aug 28 15:20:23 2012 From: donald.stufft at gmail.com (Donald Stufft) Date: Tue, 28 Aug 2012 09:20:23 -0400 Subject: [Python-Dev] Edits to Metadata 1.2 to add extras (optional dependencies) In-Reply-To: References: <503BD8C7.90706@v.loewis.de> <4416FCFB-EB0C-4DCB-BA97-28131D6DA100@gmail.com> <96CFF8D4471940379002B1C69E1C9982@gmail.com> Message-ID: On Tuesday, August 28, 2012 at 9:09 AM, Nick Coghlan wrote: > On Tue, Aug 28, 2012 at 10:57 PM, Daniel Holth wrote: > > How about > > > > Extensions are fields that start with a pypi-registered name followed > > by a hyphen. A file that contains extension fields declares them with > > Extension: name : > > > > Extension: pypiname > > pypiname-Field: value > > > > > The repetition seems rather annoying. 
Compare the two section based > variants I just posted to: > > Extension: wheel > wheel-Version: 0.9 > wheel-Packager: bdist_wheel-0.1 > wheel-Root-Is-Purelib: true > > It does have the advantage that tools for manipulating the format can > remain dumber, but that doesn't seem like *that* much of an advantage, > especially since any such benefit could be eliminated completely by > just switching to a completely standard ConfigParser format by putting > the PEP defined settings into a [python] section. > METADATA files are not ini files. -------------- next part -------------- An HTML attachment was scrubbed... URL: From dholth at gmail.com Tue Aug 28 15:20:33 2012 From: dholth at gmail.com (Daniel Holth) Date: Tue, 28 Aug 2012 09:20:33 -0400 Subject: [Python-Dev] Edits to Metadata 1.2 to add extras (optional dependencies) In-Reply-To: References: <503BD8C7.90706@v.loewis.de> <4416FCFB-EB0C-4DCB-BA97-28131D6DA100@gmail.com> <96CFF8D4471940379002B1C69E1C9982@gmail.com> Message-ID: On Tue, Aug 28, 2012 at 9:09 AM, Nick Coghlan wrote: > On Tue, Aug 28, 2012 at 10:57 PM, Daniel Holth wrote: >> How about >> >> Extensions are fields that start with a pypi-registered name followed >> by a hyphen. A file that contains extension fields declares them with >> Extension: name : >> >> Extension: pypiname >> pypiname-Field: value > > The repetition seems rather annoying. Compare the two section based > variants I just posted to: > > Extension: wheel > wheel-Version: 0.9 > wheel-Packager: bdist_wheel-0.1 > wheel-Root-Is-Purelib: true > > It does have the advantage that tools for manipulating the format can > remain dumber, but that doesn't seem like *that* much of an advantage, > especially since any such benefit could be eliminated completely by > just switching to a completely standard ConfigParser format by putting > the PEP defined settings into a [python] section. Wheel is a little different because once it's installed it is no longer a wheel, but it makes a decent example. That's not even repetition, it's just longer tag names. Repetition is having one Classifier: line for every trove classifier. It would be quite inconvenient to change the parser for PKG-INFO. It's a win to keep the file flat. From donald.stufft at gmail.com Tue Aug 28 15:23:59 2012 From: donald.stufft at gmail.com (Donald Stufft) Date: Tue, 28 Aug 2012 09:23:59 -0400 Subject: [Python-Dev] Edits to Metadata 1.2 to add extras (optional dependencies) In-Reply-To: References: <503BD8C7.90706@v.loewis.de> <4416FCFB-EB0C-4DCB-BA97-28131D6DA100@gmail.com> <96CFF8D4471940379002B1C69E1C9982@gmail.com> Message-ID: On Tuesday, August 28, 2012 at 9:09 AM, Nick Coghlan wrote: > > It does have the advantage that tools for manipulating the format can > remain dumber, but that doesn't seem like *that* much of an advantage, > especially since any such benefit could be eliminated completely by > just switching to a completely standard ConfigParser format by putting > the PEP defined settings into a [python] section. > To be more specific, there is setup.cfg (which I dislike for other reasons), and then there is METADATA. setup.cfg is an ini file but METADATA is a simple key: value file with a flat namespace so any namespacing you want to do in METADATA needs to be done at the key level. You could translate: [setuptools] requires-dist=foo in a setup.cfg into setuptools-requires-dist: foo in METADATA, but I'm not sure if that would be beneficial or not. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ncoghlan at gmail.com Tue Aug 28 15:25:24 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 28 Aug 2012 23:25:24 +1000 Subject: [Python-Dev] Edits to Metadata 1.2 to add extras (optional dependencies) In-Reply-To: References: <503BD8C7.90706@v.loewis.de> <4416FCFB-EB0C-4DCB-BA97-28131D6DA100@gmail.com> <96CFF8D4471940379002B1C69E1C9982@gmail.com> Message-ID: On Tue, Aug 28, 2012 at 11:20 PM, Daniel Holth wrote: > Wheel is a little different because once it's installed it is no > longer a wheel, but it makes a decent example. That's not even > repetition, it's just longer tag names. Repetition is having one > Classifier: line for every trove classifier. > > It would be quite inconvenient to change the parser for PKG-INFO. It's > a win to keep the file flat. Cool, it's the namespace I care about. Every piece of extended metadata must have an authority who gets to define what it means. If that means people register a "virtual" PyPI project just to reserve an extension namespace, I'm fine with that. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From ncoghlan at gmail.com Tue Aug 28 15:41:38 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 28 Aug 2012 23:41:38 +1000 Subject: [Python-Dev] Edits to Metadata 1.2 to add extras (optional dependencies) In-Reply-To: References: <503BD8C7.90706@v.loewis.de> <4416FCFB-EB0C-4DCB-BA97-28131D6DA100@gmail.com> <96CFF8D4471940379002B1C69E1C9982@gmail.com> Message-ID: On Tue, Aug 28, 2012 at 11:23 PM, Donald Stufft wrote: > To be more specific, there is setup.cfg (which I dislike for other reasons), > and > then there is METADATA. setup.cfg is an ini file but METADATA is a simple > key: value file with a flat namespace so any namespacing you want to do in > METADATA needs to be done at the key level. We're talking about the format for v1.3 of the metadata. That format is not defined yet, so it's not obligatory for it to remain a flat key value store. However, there are advantages to keeping it as such, so I'm fine with Daniel's suggested approach. The only thing I really care about is the namespacing, for the same reasons the IETF wrote RFC 6648, as Petri linked earlier [1]. Establishing proper name registration rules can categorically eliminate a bunch of problems further down the line (such as the past confusion between which metadata entries were defined by PEPs and which were setuptools-specific extensions that other tools might not understand). With PyPI based namespacing we get clear orthogonal naming with clear lines of authority: 1. PEPs continue to define the core metadata used by PyPI, the standard library (once we get updated packaging support in place) and most other tools 2. Any members of the community with a specific interest can register a PyPI project to define additional metadata without risking naming conflicts. This need may arise in the context of a specific project, and thus use that project's name, or else it may be a project registered for the express purpose of being a metadata namespace, and not actually correspond to any installable module. The main point is to take advantage of an existing automated Python-specific name and resource registry to avoid naming conflicts without Java-style reverse DNS based clutter, and without python-dev having to explicitly approve each and every metadata extension. Cheers, Nick. [1] https://tools.ietf.org/html/rfc6648#section-4 Cheers, Nick. 
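For the flat, prefix-based variant, a consumer might group extension fields by the declared project name along these lines. The "Extension:" declaration and the "projectname-Field" prefix convention are taken from the proposal above and are still hypothetical.

from email import message_from_string

# Hypothetical metadata using the proposed convention: an extension is
# declared with "Extension: <pypi-name>" and its fields carry that name
# as a prefix.
METADATA = """\
Metadata-Version: 1.3
Name: example
Version: 1.0
Extension: wheel
wheel-Version: 0.9
wheel-Packager: bdist_wheel-0.1
wheel-Root-Is-Purelib: true
"""

msg = message_from_string(METADATA)
extensions = {}
for ext in msg.get_all("Extension", []):
    prefix = ext.lower() + "-"
    # Field names are compared case-insensitively, as with any 822 header.
    extensions[ext] = dict(
        (key[len(prefix):], value)
        for key, value in msg.items()
        if key.lower().startswith(prefix)
    )
print(extensions)
# e.g. {'wheel': {'Version': '0.9', 'Packager': 'bdist_wheel-0.1',
#                 'Root-Is-Purelib': 'true'}}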
-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From donald.stufft at gmail.com Tue Aug 28 15:48:31 2012 From: donald.stufft at gmail.com (Donald Stufft) Date: Tue, 28 Aug 2012 09:48:31 -0400 Subject: [Python-Dev] Edits to Metadata 1.2 to add extras (optional dependencies) In-Reply-To: References: <503BD8C7.90706@v.loewis.de> <4416FCFB-EB0C-4DCB-BA97-28131D6DA100@gmail.com> <96CFF8D4471940379002B1C69E1C9982@gmail.com> Message-ID: On Tuesday, August 28, 2012 at 9:41 AM, Nick Coghlan wrote: > > The only thing I really care about is the namespacing, for the same > reasons the IETF wrote RFC 6648, as Petri linked earlier [1]. > Establishing proper name registration rules can categorically > eliminate a bunch of problems further down the line (such as the past > confusion between which metadata entries were defined by PEPs and > which were setuptools-specific extensions that other tools might not > understand). > > I'm happy with any form of a namespace to be quite honest. I have a bit of a preference for no or flat namespace but i'm perfectly fine with a PyPI based namespace. The important part is a defined way to extend the data that even when tools don't understand the extended data they can losslessly move it around from setup.cfg/setup.py/whatever to METADATA and any other format, even if they themselves don't utilize it, leaving it intact for tools that _do_ utilize it. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Tue Aug 28 16:15:37 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 29 Aug 2012 00:15:37 +1000 Subject: [Python-Dev] Edits to Metadata 1.2 to add extras (optional dependencies) In-Reply-To: References: <503BD8C7.90706@v.loewis.de> <4416FCFB-EB0C-4DCB-BA97-28131D6DA100@gmail.com> <96CFF8D4471940379002B1C69E1C9982@gmail.com> Message-ID: On Tue, Aug 28, 2012 at 11:48 PM, Donald Stufft wrote: > On Tuesday, August 28, 2012 at 9:41 AM, Nick Coghlan wrote: > > The only thing I really care about is the namespacing, for the same > reasons the IETF wrote RFC 6648, as Petri linked earlier [1]. > Establishing proper name registration rules can categorically > eliminate a bunch of problems further down the line (such as the past > confusion between which metadata entries were defined by PEPs and > which were setuptools-specific extensions that other tools might not > understand). > > > I'm happy with any form of a namespace to be quite honest. I have a bit of > a preference for no or flat namespace but i'm perfectly fine with a PyPI > based > namespace. The important part is a defined way to extend the data that > even when tools don't understand the extended data they can losslessly > move it around from setup.cfg/setup.py/whatever to METADATA and > any other format, even if they themselves don't utilize it, leaving it > intact > for tools that _do_ utilize it. Oh, yes, I care about that part, too, as without that there's no reason to define a metadata extension format at all :) Cheers, Nick. 
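The "losslessly move it around" requirement is easy to state in code: a tool that rewrites metadata should emit every field it does not recognize unchanged. A toy illustration of that pass-through property, ignoring details such as folded (multi-line) header values:

from email import message_from_string

def bump_version(metadata_text, new_version):
    """Rewrite Version while passing unrecognized fields through intact."""
    msg = message_from_string(metadata_text)
    out = []
    for key, value in msg.items():
        if key.lower() == "version":
            value = new_version
        # Unknown fields -- including any extension fields -- are copied
        # verbatim instead of being dropped.
        out.append("%s: %s" % (key, value))
    return "\n".join(out) + "\n"

# wheel-Root-Is-Purelib stands in for an arbitrary extension field here.
ORIGINAL = "Metadata-Version: 1.2\nName: example\nVersion: 1.0\nwheel-Root-Is-Purelib: true\n"
print(bump_version(ORIGINAL, "1.1"))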
-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From martin at v.loewis.de Tue Aug 28 16:24:02 2012 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Tue, 28 Aug 2012 16:24:02 +0200 Subject: [Python-Dev] Edits to Metadata 1.2 to add extras (optional dependencies) In-Reply-To: References: <503BD8C7.90706@v.loewis.de> <4416FCFB-EB0C-4DCB-BA97-28131D6DA100@gmail.com> <96CFF8D4471940379002B1C69E1C9982@gmail.com> Message-ID: <503CD482.6030801@v.loewis.de> Am 28.08.12 14:28, schrieb Daniel Holth: > On Tue, Aug 28, 2012 at 8:07 AM, Donald Stufft wrote: >> I personally think that at a minimum we should have X-Fields that >> get moved into the normal METADATA file, and personally I would >> prefer to just drop the X- prefix completely. > > That is my preference as well. The standard library basically ignores > every metadata field or metadata file inside or outside of metadata > currently, so where is the harm changing the official document to read > "you may add new metadata fields to metadata" with an updated standard > library that only ignores some of the metadata in metadata instead of > all of it. The community is small enough to handle it. The problem with that (and the reason to introduce the X- prefix in RFC 822) is that allowing arbitrary additions will make evolution difficult: if you want to standardize a certain field at some point, you either need to pick a name that is unused in all implementations (which you never can be really certain about), or you break some existing tool by making the addition (unless the addition happens to have the exact same syntax and semantics as the prior use). Regards, Martin From martin at v.loewis.de Tue Aug 28 16:43:35 2012 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Tue, 28 Aug 2012 16:43:35 +0200 Subject: [Python-Dev] Edits to Metadata 1.2 to add extras (optional dependencies) In-Reply-To: <87wr0jtxzy.fsf@uwakimon.sk.tsukuba.ac.jp> References: <503BD8C7.90706@v.loewis.de> <20120828041552.GA21422@chang> <87wr0jtxzy.fsf@uwakimon.sk.tsukuba.ac.jp> Message-ID: <503CD917.5050902@v.loewis.de> > But note that the RFC also says that the preferred solution to the > problem that X-fields are intended to solve is an easily accessible > name registry and a simple registration procedure. If Martin's "be > prepared for a ten-year period to acceptance" is serious, what should > be done about such a registry? I'm happy for PyPI to host such a registry. 
A specification for the registry should be part of the PEP for the 1.3 format, but I would propose this structure (without having researched in detail what other registries feature, but with a rough idea what IANA registries typically include):

- name of metadata field
- name of registrant (individual or PyPI package)
- contact email address (published)
- expiration date; by default, extensions expire 1 month after their registration, unless renewed; maximum expiration time is 5 years
- English description of the field
- regular expression to validate the field

Deleting undesired extensions would not be possible; instead, one would have to create another extension if the syntax or semantics changes. Regards, Martin
From martin at v.loewis.de Tue Aug 28 16:47:55 2012 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Tue, 28 Aug 2012 16:47:55 +0200 Subject: [Python-Dev] Edits to Metadata 1.2 to add extras (optional dependencies) In-Reply-To: References: <503BD8C7.90706@v.loewis.de> Message-ID: <503CDA1B.2080003@v.loewis.de> >> Prepare for a ten-year period of acceptance - so it >> would be good to be sure that no further additions are desired within >> the next ten years before seeking approval for the PEP. > > However, this point I really don't agree with. The packaging ecosystem > is currently evolving outside the standard library, but the > standardisation process for the data interchange formats still falls > under the authority of python-dev and the PEP process. Maybe I misphrased. By "accepted" I meant "widely implemented". From the day this gets published until it is really usable, I still believe 10 years is realistic. For example, setuptools doesn't implement Meta-data 1.2, and nearly nobody uses it, 8 years after it was written. > When the packaging module is finally added (hopefully 3.4, even if > that means we have to temporarily cull the entire compiler > subpackage), it will handle the most recent accepted version of the > metadata format (as well as any previous versions). If more holes > reveal themselves in the next 18 months, then it's OK if v1.4 is > created when it becomes clear that it's necessary. The problem is that flooding people with specifications is a guarantee that they will not get implemented. So we can have one metadata specification every ten years; if we have more, none of them will be implemented (except in the tool of the author of the PEP). Regards, Martin
From donald.stufft at gmail.com Tue Aug 28 16:53:01 2012 From: donald.stufft at gmail.com (Donald Stufft) Date: Tue, 28 Aug 2012 10:53:01 -0400 Subject: [Python-Dev] Edits to Metadata 1.2 to add extras (optional dependencies) In-Reply-To: <503CD917.5050902@v.loewis.de> References: <503BD8C7.90706@v.loewis.de> <20120828041552.GA21422@chang> <87wr0jtxzy.fsf@uwakimon.sk.tsukuba.ac.jp> <503CD917.5050902@v.loewis.de> Message-ID: On Tuesday, August 28, 2012 at 10:43 AM, "Martin v. Löwis" wrote: > > I'm happy for PyPI to host such a registry. A specification for the > registry should be part of the PEP for the 1.3 format, but I would > propose this structure (without having researched in detail what > other registries feature, but with a rough idea what IANA registries > typically include): PyPI packages itself could serve as a registry, but I like the idea of a separate registry better in many ways because it lets you divorce the namespace from the package. The question being would this be an x-registered-name type system or a registered-namespace-* type system?
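A registry record along the lines Martin sketches above could be as small as the following. The field list mirrors his bullet points (including the default one-month expiry and the Requires-Unicode-Version example), while the class shape itself is only an illustrative assumption about how such a registry might store entries.

import re
from datetime import date, timedelta

class FieldRegistration(object):
    """One entry in a hypothetical metadata-field registry."""

    def __init__(self, name, registrant, contact, description, pattern,
                 registered=None):
        self.name = name
        self.registrant = registrant        # individual or PyPI package
        self.contact = contact              # published email address
        self.description = description      # English description of the field
        self.pattern = re.compile(pattern)  # regular expression to validate it
        self.registered = registered or date.today()
        # Default expiry of one month, renewable, as proposed above.
        self.expires = self.registered + timedelta(days=30)

    def is_expired(self, today=None):
        return (today or date.today()) > self.expires

    def validate(self, value):
        return self.pattern.match(value) is not None

entry = FieldRegistration(
    "Requires-Unicode-Version", "example-project", "maintainer@example.org",
    "Minimum Unicode version needed by the distribution.",
    r"\d+\.\d+(\.\d+)?$")
print(entry.validate("6.1.0"))  # True
print(entry.is_expired())       # False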
It occurs to me one problem with arbitrary namespaces is there is an unintended collision problem. For example, you have the foo-bar namespace and the foo namespace; what happens if you have a test key inside of foo-bar and a bar-test inside of the foo namespace? They'll both end up being foo-bar-test. This makes me think that we need a separate registry and that if we go the namespace route it should be limited to alphanumerics only so that you don't have the foo/foo-bar collision problem.

> - name of metadata field
> - name of registrant (individual or PyPI package)
> - contact email address (published)
> - expiration date; by default, extensions expire 1 month after their registration, unless renewed; maximum expiration time is 5 years
> - English description of the field
> - regular expression to validate the field

What happens when it expires? Is that name freed up for future use? I think that freeing up the name is likely to be a bad idea since we can't go backwards in time (as you alluded to later about not deleting them), so what does expiration do?

> Deleting undesired extensions would not be possible, instead, one would have to create another extension if the syntax or semantics changes

From martin at v.loewis.de Tue Aug 28 16:59:06 2012 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Tue, 28 Aug 2012 16:59:06 +0200 Subject: [Python-Dev] question re: default branch and release clone In-Reply-To: <20120828042241.GB21422@chang> References: <20120828042241.GB21422@chang> Message-ID: <503CDCBA.6060108@v.loewis.de> > And if changes like this are added now, they will be included in 3.2.4 > but not in 3.3.0. Is this bad? This is the standard for any security fix: such a fix would be added to 3.1.6, 3.2.4, 3.3.1, and 3.4.0, but not to 3.2.3 or 3.3.0. So version(A) > version(B) does not imply has_fix(A, F) if has_fix(B, F) (for Python releases A and B and fix F). The same would regularly happen with any bug fix, too, except we only have one bug fix branch at nearly every point in time (except that we have the 2.7 branch as well). Regards, Martin P.S. Python 3.1 will continue to receive security fixes until June 2014, 3.2 will receive them until February 2016, 3.3 until September 2017. For 2.7, a policy needs to be set after the last bug fix release of 2.7 was made.
From ncoghlan at gmail.com Tue Aug 28 17:05:39 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 29 Aug 2012 01:05:39 +1000 Subject: [Python-Dev] Edits to Metadata 1.2 to add extras (optional dependencies) In-Reply-To: References: <503BD8C7.90706@v.loewis.de> <20120828041552.GA21422@chang> <87wr0jtxzy.fsf@uwakimon.sk.tsukuba.ac.jp> <503CD917.5050902@v.loewis.de> Message-ID: On Wed, Aug 29, 2012 at 12:53 AM, Donald Stufft wrote: > On Tuesday, August 28, 2012 at 10:43 AM, "Martin v. Löwis" wrote: > > I'm happy for PyPI to host such a registry. A specification for the > registry should be part of the PEP for the 1.3 format, but I would > propose this structure (without having researched in detail what > other registries feature, but with a rough idea what IANA registries > typically include): > > PyPI packages itself could serve as a registry, but I like the idea of > a separate registry better in many ways because it lets you divorce > the namespace from the package. The question being would this > be an x-registered-name type system or a registered-namespace-* > type system? Please, don't.
The software and infrastructure to run PyPI exists. Some level of namespacing makes sense to separate out extension management to different groups of people, but creating a whole management application just for this would be serious overkill. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From martin at v.loewis.de Tue Aug 28 17:07:16 2012 From: martin at v.loewis.de (=?UTF-8?B?Ik1hcnRpbiB2LiBMw7Z3aXMi?=) Date: Tue, 28 Aug 2012 17:07:16 +0200 Subject: [Python-Dev] Edits to Metadata 1.2 to add extras (optional dependencies) In-Reply-To: References: <503BD8C7.90706@v.loewis.de> <20120828041552.GA21422@chang> <87wr0jtxzy.fsf@uwakimon.sk.tsukuba.ac.jp> <503CD917.5050902@v.loewis.de> Message-ID: <503CDEA4.9090800@v.loewis.de> Am 28.08.12 16:53, schrieb Donald Stufft: > On Tuesday, August 28, 2012 at 10:43 AM, "Martin v. L?wis" wrote: >> >> I'm happy for PyPI to host such a registry. A specificaion for the >> registry should be part of the PEP for the 1.3 format, but I would >> propose this structure (without having researched in detail what >> other registries feature, but with a rough idea what IANA registries >> typically include): > PyPI packages itself could serve as a registry, but I like the idea of > a separate registry better in many ways because it lets you divorce > the namespace from the package. Maybe I didn't express myself clearly - this is exactly what I proposed. The registry would be implemented in the same software as PyPI, and run on the same machine, and (perhaps) have pypi.python.org as it's domain name, but otherwise would be decoupled from Python packages. > What happens when it expires? Is that name freed up for future use? Yes, exactly. > I > think that freeing up the name is likely to be a bad idea since we can't go > backwards in time (as you alluded to later about not deleting them), so > what does expiration do? Why would it require going backwards in time? Existing usages of the extension just become invalid, e.g. with the consequence that you can't upload the package to PyPI anymore unless you remove the extension, or re-register it. If the extension is in active use, somebody certainly will make sure it stays registered. Expiration is to free up names that are not in active use, but are otherwise reasonable names for metadata fields (say, Requires-Unicode-Version). Regards, Martin From donald.stufft at gmail.com Tue Aug 28 17:13:04 2012 From: donald.stufft at gmail.com (Donald Stufft) Date: Tue, 28 Aug 2012 11:13:04 -0400 Subject: [Python-Dev] Edits to Metadata 1.2 to add extras (optional dependencies) In-Reply-To: References: <503BD8C7.90706@v.loewis.de> <20120828041552.GA21422@chang> <87wr0jtxzy.fsf@uwakimon.sk.tsukuba.ac.jp> <503CD917.5050902@v.loewis.de> Message-ID: <80123CE889BB4D24BAA5C17684E13BE5@gmail.com> On Tuesday, August 28, 2012 at 11:05 AM, Nick Coghlan wrote: > On Wed, Aug 29, 2012 at 12:53 AM, Donald Stufft wrote: > > Please, don't. The software and infrastructure to run PyPI exists. > Some level of namespacing makes sense to separate out extension > management to different groups of people, but creating a whole > management application just for this would be serious overkill. > How do you deal with a PyPI package foo which wants a bar-test value (foo-bar-test), and a PyPI package foo-bar with a value test (foo-bar-test). PyPI packages allow too much in the way of names to be able to fully namespace it without collisions. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From donald.stufft at gmail.com Tue Aug 28 17:17:08 2012 From: donald.stufft at gmail.com (Donald Stufft) Date: Tue, 28 Aug 2012 11:17:08 -0400 Subject: [Python-Dev] Edits to Metadata 1.2 to add extras (optional dependencies) In-Reply-To: <503CDEA4.9090800@v.loewis.de> References: <503BD8C7.90706@v.loewis.de> <20120828041552.GA21422@chang> <87wr0jtxzy.fsf@uwakimon.sk.tsukuba.ac.jp> <503CD917.5050902@v.loewis.de> <503CDEA4.9090800@v.loewis.de> Message-ID: <691A785F4B8F4AE09948ADCBDD7534C2@gmail.com> On Tuesday, August 28, 2012 at 11:07 AM, "Martin v. L?wis" wrote: > > What happens when it expires? Is that name freed up for future use? > > > Yes, exactly. > > > I > > think that freeing up the name is likely to be a bad idea since we can't go > > backwards in time (as you alluded to later about not deleting them), so > > what does expiration do? > > > > > Why would it require going backwards in time? Existing usages of the > extension just become invalid, e.g. with the consequence that you can't > upload the package to PyPI anymore unless you remove the extension, > or re-register it. > > If the extension is in active use, somebody certainly will make sure it > stays registered. Expiration is to free up names that are not in active > use, but are otherwise reasonable names for metadata fields (say, > Requires-Unicode-Version). What do you do with packages that have already been uploaded with requires-unicode-version once it expires? If the point of a registry is to remove ambiguity from what any particular key means, won't expiring and allowing reregistration of an in use name (even if it's no longer being uploaded, but is still available inside of a package) reintroduce that same ambiguity? How will we know that requires-unicode-version from a package uploaded a year ago and has since expired is different than requires-unicode-version from a package uploaded yesterday and has been reregistered? -------------- next part -------------- An HTML attachment was scrubbed... URL: From dholth at gmail.com Tue Aug 28 17:30:43 2012 From: dholth at gmail.com (Daniel Holth) Date: Tue, 28 Aug 2012 11:30:43 -0400 Subject: [Python-Dev] Edits to Metadata 1.2 to add extras (optional dependencies) In-Reply-To: <503CDA1B.2080003@v.loewis.de> References: <503BD8C7.90706@v.loewis.de> <503CDA1B.2080003@v.loewis.de> Message-ID: On Tue, Aug 28, 2012 at 10:47 AM, "Martin v. L?wis" wrote: >>> Prepare for a ten-year period of acceptance - so it >>> would be good to be sure that no further additions are desired within >>> the next ten years before seeking approval for the PEP. >> >> >> However, this point I really don't agree with. The packaging ecosystem >> is currently evolving outside the standard library, but the >> standardisation process for the data interchange formats still falls >> under the authority of python-dev and the PEP process. > > > Maybe I misphrased. By "accepted" I meant "widely implemented". From > the day this gets published until it is really usable, I still believe > 10 years is realistic. For example, setuptools doesn't implement Meta-data > 1.2, and nearly nobody uses it, 8 years after it was written. > >> When the packaging module is finally added (hopefully 3.4, even if >> that means we have to temporarily cull the entire compiler >> subpackage), it will handle the most recent accepted version of the >> metadata format (as well as any previous versions). 
If more holes >> reveal themselves in the next 18 months, then it's OK if v1.4 is >> created when it becomes clear that it's necessary. > > > The problem is that flooding people with specifications is a guarantee > that they will not get implemented. So we can have one metadata > specification every ten years; if we have more, none of them will be > implemented (except in the tool of the author of the PEP). Why not. You get the feature in the tool, and you don't get it elsewhere, but the other implementation can still parse what it understands. The tool author promotes his tool for this reason. The extension format is intentionally ugly so that people will standardize eventually if only for aesthetic reasons. Yes, you have to support popular extensions forever, it's a messy world we live in. Two tools that implement Metadata 1.2+ are called wheel and distribute >= 0.6.28. It's just adding the requirements in PKG-INFO (METADATA) instead of in a separate .txt file. Unfortunately it was necessary to add Setup-Requires-Dist:, Provides-Extra: and an extra variable in the conditional-dependencies (environment markers) spec to be able to represent setuptools data present in 10% of the packages on PyPi. So it is necessary to edit the PEP for the environment markers, but Provides-Extra could change to Extension: distribute Distribute-Provides-Extra: foo (Just require un-hyphenated names in Extension: or map them to underscore _ if you must) -1 on doing anything but mapping them to package names. I can't provide a regex to strictly validate Requires-Dist: foo ; condition because condition is an impoverished subset of the Python language filtered with the wonderful ast module. Are you really willing to unpack and validate PKG-INFO on every archive that is uploaded to pypi? From rdmurray at bitdance.com Tue Aug 28 17:38:11 2012 From: rdmurray at bitdance.com (R. David Murray) Date: Tue, 28 Aug 2012 11:38:11 -0400 Subject: [Python-Dev] Edits to Metadata 1.2 to add extras (optional dependencies) In-Reply-To: <691A785F4B8F4AE09948ADCBDD7534C2@gmail.com> References: <503BD8C7.90706@v.loewis.de> <20120828041552.GA21422@chang> <87wr0jtxzy.fsf@uwakimon.sk.tsukuba.ac.jp> <503CD917.5050902@v.loewis.de> <503CDEA4.9090800@v.loewis.de> <691A785F4B8F4AE09948ADCBDD7534C2@gmail.com> Message-ID: <20120828153811.CBDD32500FE@webabinitio.net> On Tue, 28 Aug 2012 11:17:08 -0400, Donald Stufft wrote: > On Tuesday, August 28, 2012 at 11:07 AM, "Martin v. L??wis" wrote: > > > > What happens when it expires? Is that name freed up for future use? > > > > > > Yes, exactly. > > > > > I > > > think that freeing up the name is likely to be a bad idea since we can't go > > > backwards in time (as you alluded to later about not deleting them), so > > > what does expiration do? > > > > > > > > > Why would it require going backwards in time? Existing usages of the > > extension just become invalid, e.g. with the consequence that you can't > > upload the package to PyPI anymore unless you remove the extension, > > or re-register it. > > > > If the extension is in active use, somebody certainly will make sure it > > stays registered. Expiration is to free up names that are not in active > > use, but are otherwise reasonable names for metadata fields (say, > > Requires-Unicode-Version). > > What do you do with packages that have already been uploaded with > requires-unicode-version once it expires? 
If the point of a registry is > to remove ambiguity from what any particular key means, won't expiring > and allowing reregistration of an in use name (even if it's no longer being > uploaded, but is still available inside of a package) reintroduce that same > ambiguity? How will we know that requires-unicode-version from a package > uploaded a year ago and has since expired is different than requires-unicode-version > from a package uploaded yesterday and has been reregistered? Ah, that's a better phrasing of the same concern I had but couldn't figure out how to articulate. I don't recall any RFC registries that have expiration dates for entries. Are there any? RFC registries usually have an organization vetting the entries, whereas it seems like we want this to be an open registry. Note that the MIME-type specification allows for "vendor types", which is a namespace mechanism and allows delegation of vetting authority. That sounds more like Nick's proposal. (I'm sure there is some way to solve the ambiguity issue.) We could still have a (vetted) registry for "official" names, if we wanted. That would follow the MIME model. Or we can still have a separate registry, but only "qualified" (namespaced) names are open for anyone to register, without any expiration dates. -- R. David Murray If you like the work I do for Python, you can enable me to spend more time doing it by supporting me here: http://gittip.com/bitdancer From martin at v.loewis.de Tue Aug 28 18:08:51 2012 From: martin at v.loewis.de (=?UTF-8?B?Ik1hcnRpbiB2LiBMw7Z3aXMi?=) Date: Tue, 28 Aug 2012 18:08:51 +0200 Subject: [Python-Dev] Edits to Metadata 1.2 to add extras (optional dependencies) In-Reply-To: <691A785F4B8F4AE09948ADCBDD7534C2@gmail.com> References: <503BD8C7.90706@v.loewis.de> <20120828041552.GA21422@chang> <87wr0jtxzy.fsf@uwakimon.sk.tsukuba.ac.jp> <503CD917.5050902@v.loewis.de> <503CDEA4.9090800@v.loewis.de> <691A785F4B8F4AE09948ADCBDD7534C2@gmail.com> Message-ID: <503CED13.1050800@v.loewis.de> Am 28.08.12 17:17, schrieb Donald Stufft: > What do you do with packages that have already been uploaded with > requires-unicode-version once it expires? Who is "I" in this case? The PyPI installation? Mark the keys in the database as expired, and stop displaying them. If the key is restored, and the values are still syntactically correct, restore the values. Or is "I" software which downloads packages? Continue doing what it always does for invalid meta-data: I recommend to issue a warning; aborting the setup could also work. > If the point of a registry is > to remove ambiguity from what any particular key means, won't expiring > and allowing reregistration of an in use name (even if it's no longer being > uploaded, but is still available inside of a package) reintroduce that same > ambiguity? No: if nobody renews the old registration, it's because the extension is not in use. So the case you are constructing won't happen in practice. > How will we know that requires-unicode-version from a package > uploaded a year ago and has since expired is different than > requires-unicode-version > from a package uploaded yesterday and has been reregistered? If the packages that were uploaded a year ago are still in active use, somebody will renew the registration. So the case won't happen. If nobody cares about the specific field, it may break, which is then well-deserved. Regards, Martin From rdmurray at bitdance.com Tue Aug 28 18:27:41 2012 From: rdmurray at bitdance.com (R. 
David Murray) Date: Tue, 28 Aug 2012 12:27:41 -0400 Subject: [Python-Dev] Edits to Metadata 1.2 to add extras (optional dependencies) In-Reply-To: <503CED13.1050800@v.loewis.de> References: <503BD8C7.90706@v.loewis.de> <20120828041552.GA21422@chang> <87wr0jtxzy.fsf@uwakimon.sk.tsukuba.ac.jp> <503CD917.5050902@v.loewis.de> <503CDEA4.9090800@v.loewis.de> <691A785F4B8F4AE09948ADCBDD7534C2@gmail.com> <503CED13.1050800@v.loewis.de> Message-ID: <20120828162741.A560B2500FE@webabinitio.net> On Tue, 28 Aug 2012 18:08:51 +0200, =?UTF-8?B?Ik1hcnRpbiB2LiBMw7Z3aXMi?= wrote: > Am 28.08.12 17:17, schrieb Donald Stufft: > > If the point of a registry is > > to remove ambiguity from what any particular key means, won't expiring > > and allowing reregistration of an in use name (even if it's no longer being > > uploaded, but is still available inside of a package) reintroduce that same > > ambiguity? > > No: if nobody renews the old registration, it's because the extension is > not in use. So the case you are constructing won't happen in practice. > > > How will we know that requires-unicode-version from a package > > uploaded a year ago and has since expired is different than > > requires-unicode-version > > from a package uploaded yesterday and has been reregistered? > > If the packages that were uploaded a year ago are still in active use, > somebody will renew the registration. So the case won't happen. > > If nobody cares about the specific field, it may break, which is > then well-deserved. The problem Donald is asking about is: the old registration expires, and a *new* registration is entered with a different meaning, but packages still exist on PyPI that have the key with the old meaning. That seems likely to happen in practice. Or if it doesn't, then allowing for the recycling of names probably isn't important. -- R. David Murray If you like the work I do for Python, you can enable me to spend more time doing it by supporting me here: http://gittip.com/bitdancer From martin at v.loewis.de Tue Aug 28 18:36:40 2012 From: martin at v.loewis.de (=?ISO-8859-15?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Tue, 28 Aug 2012 18:36:40 +0200 Subject: [Python-Dev] Edits to Metadata 1.2 to add extras (optional dependencies) In-Reply-To: <20120828153811.CBDD32500FE@webabinitio.net> References: <503BD8C7.90706@v.loewis.de> <20120828041552.GA21422@chang> <87wr0jtxzy.fsf@uwakimon.sk.tsukuba.ac.jp> <503CD917.5050902@v.loewis.de> <503CDEA4.9090800@v.loewis.de> <691A785F4B8F4AE09948ADCBDD7534C2@gmail.com> <20120828153811.CBDD32500FE@webabinitio.net> Message-ID: <503CF398.3020606@v.loewis.de> Am 28.08.12 17:38, schrieb R. David Murray: > I don't recall any RFC registries that have expiration dates for > entries. Are there any? The RFC database itself has expiration dates on specifications, namely on I-D documents (internet drafts). The expire 6 months after their initial publication, unless renewed. For number assignments, the risk is that it will eventually run out of numbers, in which case the protocol gets redesigned to increase the number space. For name assignments, the risk is that many similar-sounding elements become used, and people accept that as a trade-off for the problems you see in my expiration proposal. The most popular name registry that does have expiration (despite being hierarchical) is the DNS: you have to renew your names yearly in most TLDs. 
People apparently accept the risk of confusion when a domain expires and gets reused by someone else (and yes, the DNS *is* an "RFC registry" :-) > RFC registries usually have an organization vetting the entries, > whereas it seems like we want this to be an open registry. It very much depends. If you browse over the IANA registries, you find that many parameter space require "IETF consensus", so they can be extended only by RFC (similar to the status quo in metadata). There are IANA registries that are open (e.g. SNMP, or MIME); things are assigned in a first-come first-served manner (e.g. try to find out what 1.3.6.1.4.1.18832.11.3 is :-) > We could still have a (vetted) registry for "official" names, if > we wanted. That would follow the MIME model. Or we can still have a > separate registry, but only "qualified" (namespaced) names are open for > anyone to register, without any expiration dates. I don't consider it an absolute necessity that there is an expiration. I do consider it a flaw in (some) IANA name registrations that there is no expiration to them; I can report that people regularly want to claim some PyPI package name on the basis that the original owner didn't ever release any software under that name. Regards, Martin From martin at v.loewis.de Tue Aug 28 18:47:16 2012 From: martin at v.loewis.de (=?ISO-8859-15?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Tue, 28 Aug 2012 18:47:16 +0200 Subject: [Python-Dev] Edits to Metadata 1.2 to add extras (optional dependencies) In-Reply-To: <20120828162741.A560B2500FE@webabinitio.net> References: <503BD8C7.90706@v.loewis.de> <20120828041552.GA21422@chang> <87wr0jtxzy.fsf@uwakimon.sk.tsukuba.ac.jp> <503CD917.5050902@v.loewis.de> <503CDEA4.9090800@v.loewis.de> <691A785F4B8F4AE09948ADCBDD7534C2@gmail.com> <503CED13.1050800@v.loewis.de> <20120828162741.A560B2500FE@webabinitio.net> Message-ID: <503CF614.1070409@v.loewis.de> Am 28.08.12 18:27, schrieb R. David Murray: > The problem Donald is asking about is: the old registration expires, > and a *new* registration is entered with a different meaning, but > packages still exist on PyPI that have the key with the old meaning. > That seems likely to happen in practice. Or if it doesn't, then > allowing for the recycling of names probably isn't important. Let me retry answering the question: Expiration *is* important in the case the key was just registered and never used, because it may be a good name for something, but can't be used because it is reserved for a use case that has no users. If the key is *widely* used, the scenario you assume is *not* likely in practice - either the original registrant will renew the registration before it expires, or somebody else will reregister it after it expires. There is also the case of a key that is used in a few packages (one or two packages seems a likely case - namely packages produced by the original registrant for the purpose of testing). Assuming the registrant then loses interest, and nobody else starts using the keys (i.e. they are not widely used), then these packages will break (in a mode that can be painted in different colors). This may happen, but I don't consider it a problem. If the original author finds the package broken, he will have to release a new version without the these keys, or re-register them under a new name (since his original name is now taken by somebody else - who hopefully can attract more users with his definition of the key). 
There is also the potential risk of key-jacking, which can be resolved administratively (by revoking the abusive registration). Regards, Martin From martin at v.loewis.de Tue Aug 28 19:02:57 2012 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Tue, 28 Aug 2012 19:02:57 +0200 Subject: [Python-Dev] Edits to Metadata 1.2 to add extras (optional dependencies) In-Reply-To: References: <503BD8C7.90706@v.loewis.de> <503CDA1B.2080003@v.loewis.de> Message-ID: <503CF9C1.1040100@v.loewis.de> Am 28.08.12 17:30, schrieb Daniel Holth: > Are you really willing to unpack and validate PKG-INFO on every > archive that is uploaded to pypi? Users should run the "register" command, which will provide the metadata information. Also, the UI needs to be extended to allow to fill out and edit metadata information interactively, or upload it. And yes, PyPI already extracts (but currently doesn't further process) PKG-INFO from every archive that is uploaded. So PyPI absolutely needs to "know" about the meta-data. Regards, Martin From rdmurray at bitdance.com Tue Aug 28 19:09:15 2012 From: rdmurray at bitdance.com (R. David Murray) Date: Tue, 28 Aug 2012 13:09:15 -0400 Subject: [Python-Dev] Edits to Metadata 1.2 to add extras (optional dependencies) In-Reply-To: <503CF614.1070409@v.loewis.de> References: <503BD8C7.90706@v.loewis.de> <20120828041552.GA21422@chang> <87wr0jtxzy.fsf@uwakimon.sk.tsukuba.ac.jp> <503CD917.5050902@v.loewis.de> <503CDEA4.9090800@v.loewis.de> <691A785F4B8F4AE09948ADCBDD7534C2@gmail.com> <503CED13.1050800@v.loewis.de> <20120828162741.A560B2500FE@webabinitio.net> <503CF614.1070409@v.loewis.de> Message-ID: <20120828170915.D4A6F2500FE@webabinitio.net> On Tue, 28 Aug 2012 18:47:16 +0200, =?ISO-8859-15?Q?=22Martin_v=2E_L=F6wis=22?= wrote: > Am 28.08.12 18:27, schrieb R. David Murray: > > The problem Donald is asking about is: the old registration expires, > > and a *new* registration is entered with a different meaning, but > > packages still exist on PyPI that have the key with the old meaning. > > That seems likely to happen in practice. Or if it doesn't, then > > allowing for the recycling of names probably isn't important. > > Let me retry answering the question: Expiration *is* important in > the case the key was just registered and never used, because it may > be a good name for something, but can't be used because it is reserved > for a use case that has no users. > > If the key is *widely* used, the scenario you assume is *not* likely > in practice - either the original registrant will renew the registration > before it expires, or somebody else will reregister it after it expires. > > There is also the case of a key that is used in a few packages (one > or two packages seems a likely case - namely packages produced by the > original registrant for the purpose of testing). Assuming the registrant > then loses interest, and nobody else starts using the keys (i.e. they > are not widely used), then these packages will break (in a mode that > can be painted in different colors). This may happen, but I don't > consider it a problem. If the original author finds the package broken, > he will have to release a new version without the these keys, or > re-register them under a new name (since his original name is now > taken by somebody else - who hopefully can attract more users with > his definition of the key). > > There is also the potential risk of key-jacking, which can be > resolved administratively (by revoking the abusive registration). 
OK, I understand your logic now. Yes that does make sense to me. There are tradeoffs to be made, and this seems like a reasonable tradeoff given the goals articulated so far. -- R. David Murray If you like the work I do for Python, you can enable me to spend more time doing it by supporting me here: http://gittip.com/bitdancer From phd at phdru.name Tue Aug 28 19:15:41 2012 From: phd at phdru.name (Oleg Broytman) Date: Tue, 28 Aug 2012 21:15:41 +0400 Subject: [Python-Dev] Edits to Metadata 1.2 to add extras (optional dependencies) In-Reply-To: <503CF398.3020606@v.loewis.de> References: <503BD8C7.90706@v.loewis.de> <20120828041552.GA21422@chang> <87wr0jtxzy.fsf@uwakimon.sk.tsukuba.ac.jp> <503CD917.5050902@v.loewis.de> <503CDEA4.9090800@v.loewis.de> <691A785F4B8F4AE09948ADCBDD7534C2@gmail.com> <20120828153811.CBDD32500FE@webabinitio.net> <503CF398.3020606@v.loewis.de> Message-ID: <20120828171541.GA9777@iskra.aviel.ru> On Tue, Aug 28, 2012 at 06:36:40PM +0200, "\"Martin v. L?wis\"" wrote: > Am 28.08.12 17:38, schrieb R. David Murray: > >I don't recall any RFC registries that have expiration dates for > >entries. Are there any? > > The RFC database itself has expiration dates on specifications, > namely on I-D documents (internet drafts). The expire 6 months > after their initial publication, unless renewed. Does that expiration mean something? The draft for Web Proxy Autodiscovery Protocol[1] expired in 1999 but still is widely implemented and used. 1. https://en.wikipedia.org/wiki/Web_Proxy_Autodiscovery_Protocol Oleg. -- Oleg Broytman http://phdru.name/ phd at phdru.name Programmers don't die, they just GOSUB without RETURN. From martin at v.loewis.de Tue Aug 28 20:19:08 2012 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Tue, 28 Aug 2012 20:19:08 +0200 Subject: [Python-Dev] Edits to Metadata 1.2 to add extras (optional dependencies) In-Reply-To: <20120828171541.GA9777@iskra.aviel.ru> References: <503BD8C7.90706@v.loewis.de> <20120828041552.GA21422@chang> <87wr0jtxzy.fsf@uwakimon.sk.tsukuba.ac.jp> <503CD917.5050902@v.loewis.de> <503CDEA4.9090800@v.loewis.de> <691A785F4B8F4AE09948ADCBDD7534C2@gmail.com> <20120828153811.CBDD32500FE@webabinitio.net> <503CF398.3020606@v.loewis.de> <20120828171541.GA9777@iskra.aviel.ru> Message-ID: <503D0B9C.6070308@v.loewis.de> Am 28.08.12 19:15, schrieb Oleg Broytman: >> The RFC database itself has expiration dates on specifications, >> namely on I-D documents (internet drafts). The expire 6 months >> after their initial publication, unless renewed. > > Does that expiration mean something? It's explained in RFC 2026. An internet draft is not an internet standard, it may get changed at any time. An I-D which is expired and still used has the same relevance as a proprietary standard; it has nothing to do with the internet standards process. Whether this has any practical consequence depends on the market, of course. Customers that insist on standards compliance will look for RFC compliance, but typically not for I-D compliance. If the field of standardization is of relevance for such users, they will eventually ask for an RFC to be issued, which then may or may not be compatible with a long-standing proprietary standard. 
Regards, Martin From phd at phdru.name Tue Aug 28 20:38:07 2012 From: phd at phdru.name (Oleg Broytman) Date: Tue, 28 Aug 2012 22:38:07 +0400 Subject: [Python-Dev] Edits to Metadata 1.2 to add extras (optional dependencies) In-Reply-To: <503D0B9C.6070308@v.loewis.de> References: <20120828041552.GA21422@chang> <87wr0jtxzy.fsf@uwakimon.sk.tsukuba.ac.jp> <503CD917.5050902@v.loewis.de> <503CDEA4.9090800@v.loewis.de> <691A785F4B8F4AE09948ADCBDD7534C2@gmail.com> <20120828153811.CBDD32500FE@webabinitio.net> <503CF398.3020606@v.loewis.de> <20120828171541.GA9777@iskra.aviel.ru> <503D0B9C.6070308@v.loewis.de> Message-ID: <20120828183807.GA12554@iskra.aviel.ru> On Tue, Aug 28, 2012 at 08:19:08PM +0200, "\"Martin v. L?wis\"" wrote: > Am 28.08.12 19:15, schrieb Oleg Broytman: > >> The RFC database itself has expiration dates on specifications, > >> namely on I-D documents (internet drafts). The expire 6 months > >> after their initial publication, unless renewed. > > > > Does that expiration mean something? > > It's explained in RFC 2026. An internet draft is not an internet > standard, it may get changed at any time. An I-D which is expired > and still used has the same relevance as a proprietary standard; > it has nothing to do with the internet standards process. > > Whether this has any practical consequence depends on the market, > of course. Customers that insist on standards compliance will look > for RFC compliance, but typically not for I-D compliance. If the > field of standardization is of relevance for such users, they will > eventually ask for an RFC to be issued, which then may or may not > be compatible with a long-standing proprietary standard. I see. Thank you! Oleg. -- Oleg Broytman http://phdru.name/ phd at phdru.name Programmers don't die, they just GOSUB without RETURN. From brett at python.org Tue Aug 28 21:00:57 2012 From: brett at python.org (Brett Cannon) Date: Tue, 28 Aug 2012 15:00:57 -0400 Subject: [Python-Dev] [Python-checkins] cpython: Issue #15794: Relax a test case due to the deadlock detection's In-Reply-To: <3X5ylS3RfgzPyR@mail.python.org> References: <3X5ylS3RfgzPyR@mail.python.org> Message-ID: Should there be a Misc/NEWS entry since we are in rc mode? On Tue, Aug 28, 2012 at 2:13 PM, antoine.pitrou wrote: > http://hg.python.org/cpython/rev/454dceb5fd56 > changeset: 78790:454dceb5fd56 > parent: 78788:06497bbdf4fe > user: Antoine Pitrou > date: Tue Aug 28 20:10:18 2012 +0200 > summary: > Issue #15794: Relax a test case due to the deadlock detection's > conservativeness. 
> > files: > Lib/test/test_importlib/test_locks.py | 22 ++++++++++++-- > 1 files changed, 18 insertions(+), 4 deletions(-) > > > diff --git a/Lib/test/test_importlib/test_locks.py > b/Lib/test/test_importlib/test_locks.py > --- a/Lib/test/test_importlib/test_locks.py > +++ b/Lib/test/test_importlib/test_locks.py > @@ -1,4 +1,5 @@ > from importlib import _bootstrap > +import sys > import time > import unittest > import weakref > @@ -41,6 +42,17 @@ > @unittest.skipUnless(threading, "threads needed for this test") > class DeadlockAvoidanceTests(unittest.TestCase): > > + def setUp(self): > + try: > + self.old_switchinterval = sys.getswitchinterval() > + sys.setswitchinterval(0.000001) > + except AttributeError: > + self.old_switchinterval = None > + > + def tearDown(self): > + if self.old_switchinterval is not None: > + sys.setswitchinterval(self.old_switchinterval) > + > def run_deadlock_avoidance_test(self, create_deadlock): > NLOCKS = 10 > locks = [LockType(str(i)) for i in range(NLOCKS)] > @@ -75,10 +87,12 @@ > > def test_deadlock(self): > results = self.run_deadlock_avoidance_test(True) > - # One of the threads detected a potential deadlock on its second > - # acquire() call. > - self.assertEqual(results.count((True, False)), 1) > - self.assertEqual(results.count((True, True)), len(results) - 1) > + # At least one of the threads detected a potential deadlock on its > + # second acquire() call. It may be several of them, because the > + # deadlock avoidance mechanism is conservative. > + nb_deadlocks = results.count((True, False)) > + self.assertGreaterEqual(nb_deadlocks, 1) > + self.assertEqual(results.count((True, True)), len(results) - > nb_deadlocks) > > def test_no_deadlock(self): > results = self.run_deadlock_avoidance_test(False) > > -- > Repository URL: http://hg.python.org/cpython > > _______________________________________________ > Python-checkins mailing list > Python-checkins at python.org > http://mail.python.org/mailman/listinfo/python-checkins > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From solipsis at pitrou.net Tue Aug 28 21:07:36 2012 From: solipsis at pitrou.net (Antoine Pitrou) Date: Tue, 28 Aug 2012 21:07:36 +0200 Subject: [Python-Dev] cpython: Issue #15794: Relax a test case due to the deadlock detection's References: <3X5ylS3RfgzPyR@mail.python.org> Message-ID: <20120828210736.453c5839@pitrou.net> On Tue, 28 Aug 2012 15:00:57 -0400 Brett Cannon wrote: > Should there be a Misc/NEWS entry since we are in rc mode? Well, I didn't ask for 3.3.0 inclusion, since this is a very minor fix. Regards Antoine. -- Software development and contracting: http://pro.pitrou.net From chris.jerdonek at gmail.com Wed Aug 29 05:58:19 2012 From: chris.jerdonek at gmail.com (Chris Jerdonek) Date: Tue, 28 Aug 2012 20:58:19 -0700 Subject: [Python-Dev] core dev IRC nicks Message-ID: Is there a list somewhere of the IRC nicks of the core developers that use IRC (and who wish to be listed) alongside their real names? If there is no such list, has there ever been discussion on python-dev of creating such a list? --Chris From rosuav at gmail.com Wed Aug 29 06:03:09 2012 From: rosuav at gmail.com (Chris Angelico) Date: Wed, 29 Aug 2012 14:03:09 +1000 Subject: [Python-Dev] Copyediting PEPs Message-ID: What's the procedure for making trivial edits to a PEP? I was (re)reading PEP 393 and found a couple of typos (PyCompactObject s/be PyCompactUnicodeObject and "differs form length" s/be "from"). 
Can I edit them directly in the web site's SVN repo, or should a patch be submitted? Apologies for the perhaps clueless question! ChrisA From brian at python.org Wed Aug 29 06:21:15 2012 From: brian at python.org (Brian Curtin) Date: Tue, 28 Aug 2012 23:21:15 -0500 Subject: [Python-Dev] core dev IRC nicks In-Reply-To: References: Message-ID: On Tue, Aug 28, 2012 at 10:58 PM, Chris Jerdonek wrote: > Is there a list somewhere of the IRC nicks of the core developers that > use IRC (and who wish to be listed) alongside their real names? If > there is no such list, has there ever been discussion on python-dev of > creating such a list? I'm sure we could make one, but it's probably of limited utility since not many people are on IRC, and those who are come and go throughout the day, throughout the month, year, etc. It'd just end up being another list to try and keep up to date (alphabetized, of course). You're probably better off just asking who's around whenever you're on. From ncoghlan at gmail.com Wed Aug 29 06:31:41 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 29 Aug 2012 14:31:41 +1000 Subject: [Python-Dev] core dev IRC nicks In-Reply-To: References: Message-ID: On Wed, Aug 29, 2012 at 1:58 PM, Chris Jerdonek wrote: > Is there a list somewhere of the IRC nicks of the core developers that > use IRC (and who wish to be listed) alongside their real names? If > there is no such list, has there ever been discussion on python-dev of > creating such a list? No, to all those questions. It's not a bad idea, though. One possibility (if someone was willing to work on it) would be to enhance our Roundup instance to handle it. (See http://docs.python.org/devguide/tracker.html#the-meta-tracker) Allow users to add their typical IRC handle, and perhaps add a "Committers List" link below the existing "Users List" link in the sidebar. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From ncoghlan at gmail.com Wed Aug 29 06:36:44 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 29 Aug 2012 14:36:44 +1000 Subject: [Python-Dev] Copyediting PEPs In-Reply-To: References: Message-ID: On Wed, Aug 29, 2012 at 2:03 PM, Chris Angelico wrote: > What's the procedure for making trivial edits to a PEP? I was > (re)reading PEP 393 and found a couple of typos (PyCompactObject s/be > PyCompactUnicodeObject and "differs form length" s/be "from"). Can I > edit them directly in the web site's SVN repo, or should a patch be > submitted? > > Apologies for the perhaps clueless question! Sending a patch to the PEP editors (peps at python.org) or posting it to the tracker is likely the best option. However, in the general case, we don't worry too much about minor errors in accepted and final PEPs - once the PEP is implemented, they're mostly historical records rather than reference documents (some exceptions do exist, but those mostly related to items that weren't properly documented in the language and library reference. The two most notable offenders, PEP 302 and 3118, have finally been replaced by more comprehensive documentation for the import system and binary buffer protocol in the official docs for 3.3) Cheers, Nick. 
-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From rosuav at gmail.com Wed Aug 29 06:40:04 2012 From: rosuav at gmail.com (Chris Angelico) Date: Wed, 29 Aug 2012 14:40:04 +1000 Subject: [Python-Dev] Copyediting PEPs In-Reply-To: References: Message-ID: On Wed, Aug 29, 2012 at 2:36 PM, Nick Coghlan wrote: > Sending a patch to the PEP editors (peps at python.org) or posting it to > the tracker is likely the best option. However, in the general case, > we don't worry too much about minor errors in accepted and final PEPs > - once the PEP is implemented, they're mostly historical records > rather than reference documents Thanks. I'll not worry about it then; PEP 393 is final. ChrisA From guido at python.org Wed Aug 29 06:45:04 2012 From: guido at python.org (Guido van Rossum) Date: Tue, 28 Aug 2012 21:45:04 -0700 Subject: [Python-Dev] Copyediting PEPs In-Reply-To: References: Message-ID: Hm. I would be fine with the edits proposed, no need to bother others. But please do use the Hg repo. :-) On Tuesday, August 28, 2012, Chris Angelico wrote: > On Wed, Aug 29, 2012 at 2:36 PM, Nick Coghlan > > wrote: > > Sending a patch to the PEP editors (peps at python.org ) or > posting it to > > the tracker is likely the best option. However, in the general case, > > we don't worry too much about minor errors in accepted and final PEPs > > - once the PEP is implemented, they're mostly historical records > > rather than reference documents > > Thanks. I'll not worry about it then; PEP 393 is final. > > ChrisA > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > http://mail.python.org/mailman/options/python-dev/guido%40python.org > -- Sent from Gmail Mobile -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Wed Aug 29 07:26:29 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 29 Aug 2012 15:26:29 +1000 Subject: [Python-Dev] Copyediting PEPs In-Reply-To: References: Message-ID: On Wed, Aug 29, 2012 at 2:45 PM, Guido van Rossum wrote: > Hm. I would be fine with the edits proposed, no need to bother others. But > please do use the Hg repo. :-) Indeed :) (Also, as described in PEP 1, any core committer can act as a PEP editor - the peps at python.org address is most useful when there's no specific core developer handling updates) Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From rosuav at gmail.com Wed Aug 29 09:50:26 2012 From: rosuav at gmail.com (Chris Angelico) Date: Wed, 29 Aug 2012 17:50:26 +1000 Subject: [Python-Dev] Copyediting PEPs In-Reply-To: References: Message-ID: On Wed, Aug 29, 2012 at 2:45 PM, Guido van Rossum wrote: > Hm. I would be fine with the edits proposed, no need to bother others. But > please do use the Hg repo. :-) What? It's in Hg and I've been using Subversion all this time? Oh, you had my hopes up for a while there. Alas, the problem is that I didn't say clearly enough. I have access to the pydotorg svn repo, but not to anything beyond that. The error was that I thought that PEPs were in the web site repository, which they appear not to be. So it looks like I can't copyedit anyway. Sorry for the confusion! 
ChrisA From petri at digip.org Wed Aug 29 13:18:32 2012 From: petri at digip.org (Petri Lehtinen) Date: Wed, 29 Aug 2012 14:18:32 +0300 Subject: [Python-Dev] [Python-checkins] cpython (merge 3.2 -> default): merge 3.2 (#15801) In-Reply-To: <3X63qB3VCzzQ4K@mail.python.org> References: <3X63qB3VCzzQ4K@mail.python.org> Message-ID: <20120829111832.GC3308@p16.foo.com> Hi, benjamin.peterson wrote: > http://hg.python.org/cpython/rev/263d09ce3e9e > changeset: 78794:263d09ce3e9e > parent: 78790:454dceb5fd56 > parent: 78793:4d431e719646 > user: Benjamin Peterson > date: Tue Aug 28 18:01:45 2012 -0400 > summary: > merge 3.2 (#15801) > > files: > Lib/test/string_tests.py | 3 +++ > Misc/NEWS | 3 +++ > Objects/unicodeobject.c | 3 +-- > 3 files changed, 7 insertions(+), 2 deletions(-) > [snip] > diff --git a/Misc/NEWS b/Misc/NEWS > --- a/Misc/NEWS > +++ b/Misc/NEWS > @@ -71,6 +71,9 @@ > > - Issue #15761: Fix crash when PYTHONEXECUTABLE is set on Mac OS X. > > +- Issue #15801: Make sure mappings passed to '%' formatting are actually > + subscriptable. > + > - Issue #15726: Fix incorrect bounds checking in PyState_FindModule. > Patch by Robin Schreiber. The news entry was added in the middle of the news for 3.3.0 RC 1, which has already been released. Petri From rdmurray at bitdance.com Wed Aug 29 15:38:57 2012 From: rdmurray at bitdance.com (R. David Murray) Date: Wed, 29 Aug 2012 09:38:57 -0400 Subject: [Python-Dev] core dev IRC nicks In-Reply-To: References: Message-ID: <20120829133858.0C52625010D@webabinitio.net> On Tue, 28 Aug 2012 23:21:15 -0500, Brian Curtin wrote: > On Tue, Aug 28, 2012 at 10:58 PM, Chris Jerdonek > wrote: > > Is there a list somewhere of the IRC nicks of the core developers that > > use IRC (and who wish to be listed) alongside their real names? If > > there is no such list, has there ever been discussion on python-dev of > > creating such a list? > > I'm sure we could make one, but it's probably of limited utility since > not many people are on IRC, and those who are come and go throughout > the day, throughout the month, year, etc. > > It'd just end up being another list to try and keep up to date > (alphabetized, of course). You're probably better off just asking > who's around whenever you're on. On the other hand, a lot of people, especially the people who do use IRC frequently and are therefore most likely to be found there, keep the same IRC handle for a long time (I've had mine, bitdancer, since 1995 or so). So I think there would be utility in such a list even if the updates were spotty. -- R. David Murray If you like the work I do for Python, you can enable me to spend more time doing it by supporting me here: http://gittip.com/bitdancer From g.brandl at gmx.net Wed Aug 29 22:50:25 2012 From: g.brandl at gmx.net (Georg Brandl) Date: Wed, 29 Aug 2012 22:50:25 +0200 Subject: [Python-Dev] core dev IRC nicks In-Reply-To: <20120829133858.0C52625010D@webabinitio.net> References: <20120829133858.0C52625010D@webabinitio.net> Message-ID: On 29.08.2012 15:38, R. David Murray wrote: > On Tue, 28 Aug 2012 23:21:15 -0500, Brian Curtin wrote: >> On Tue, Aug 28, 2012 at 10:58 PM, Chris Jerdonek >> wrote: >> > Is there a list somewhere of the IRC nicks of the core developers that >> > use IRC (and who wish to be listed) alongside their real names? If >> > there is no such list, has there ever been discussion on python-dev of >> > creating such a list? 
>> >> I'm sure we could make one, but it's probably of limited utility since >> not many people are on IRC, and those who are come and go throughout >> the day, throughout the month, year, etc. >> >> It'd just end up being another list to try and keep up to date >> (alphabetized, of course). You're probably better off just asking >> who's around whenever you're on. > > On the other hand, a lot of people, especially the people who do use IRC > frequently and are therefore most likely to be found there, keep the same > IRC handle for a long time (I've had mine, bitdancer, since 1995 or so). > So I think there would be utility in such a list even if the updates > were spotty. But so far I've not seen a core developer without the real name field set to their, well, real name, so it's not quite as hard to find out who is who anyway. Georg From solipsis at pitrou.net Wed Aug 29 22:55:33 2012 From: solipsis at pitrou.net (Antoine Pitrou) Date: Wed, 29 Aug 2012 22:55:33 +0200 Subject: [Python-Dev] core dev IRC nicks References: <20120829133858.0C52625010D@webabinitio.net> Message-ID: <20120829225533.4295a503@pitrou.net> On Wed, 29 Aug 2012 22:50:25 +0200 Georg Brandl wrote: > On 29.08.2012 15:38, R. David Murray wrote: > > On Tue, 28 Aug 2012 23:21:15 -0500, Brian Curtin wrote: > >> On Tue, Aug 28, 2012 at 10:58 PM, Chris Jerdonek > >> wrote: > >> > Is there a list somewhere of the IRC nicks of the core developers that > >> > use IRC (and who wish to be listed) alongside their real names? If > >> > there is no such list, has there ever been discussion on python-dev of > >> > creating such a list? > >> > >> I'm sure we could make one, but it's probably of limited utility since > >> not many people are on IRC, and those who are come and go throughout > >> the day, throughout the month, year, etc. > >> > >> It'd just end up being another list to try and keep up to date > >> (alphabetized, of course). You're probably better off just asking > >> who's around whenever you're on. > > > > On the other hand, a lot of people, especially the people who do use IRC > > frequently and are therefore most likely to be found there, keep the same > > IRC handle for a long time (I've had mine, bitdancer, since 1995 or so). > > So I think there would be utility in such a list even if the updates > > were spotty. > > But so far I've not seen a core developer without the real name field set > to their, well, real name, so it's not quite as hard to find out who is > who anyway. And there's probably an entertainment value to it. (I still wonder who "Alex_Gaynor" is, though) cheers Antoine. -- Software development and contracting: http://pro.pitrou.net From alexander.belopolsky at gmail.com Wed Aug 29 23:28:29 2012 From: alexander.belopolsky at gmail.com (Alexander Belopolsky) Date: Wed, 29 Aug 2012 17:28:29 -0400 Subject: [Python-Dev] Py_buffer.obj documentation Message-ID: I am trying to reconcile this section in 3.3 documentation: """ void *obj A new reference to the exporting object. The reference is owned by the consumer and automatically decremented and set to NULL by PyBuffer_Release(). The field is the equivalent of the return value of any standard C-API function. As a special case, for temporary buffers that are wrapped by PyMemoryView_FromBuffer() or PyBuffer_FillInfo() this field is NULL. In general, exporting objects MUST NOT use this scheme. 
""" -- http://docs.python.org/dev/c-api/buffer.html#Py_buffer.obj with the following comment in the code (Objects/memoryobject.c:762): /* info->obj is either NULL or a borrowed reference. This reference should not be decremented in PyBuffer_Release(). */ I have not studied the code yet, but given the history of bugs in this area the code may not have the most authoritative answer. In any case, either the comment or the ReST section should be corrected. From "ja...py" at farowl.co.uk Thu Aug 30 03:28:07 2012 From: "ja...py" at farowl.co.uk (Jeff Allen) Date: Thu, 30 Aug 2012 02:28:07 +0100 Subject: [Python-Dev] Py_buffer.obj documentation In-Reply-To: References: Message-ID: <503EC1A7.9020306@farowl.co.uk> On 29/08/2012 22:28, Alexander Belopolsky wrote: > I am trying to reconcile this section in 3.3 documentation: > > """ > void *obj > > A new reference to the exporting object. The reference is owned by the > consumer and automatically decremented and set to NULL by > PyBuffer_Release(). > with the following comment in the code (Objects/memoryobject.c:762): > > /* info->obj is either NULL or a borrowed reference. This > reference > should not be decremented in PyBuffer_Release(). */ I've studied this code in the interests of reproducing something similar for Jython. The comment is in the context of PyMemoryView_FromBuffer(Py_buffer *info), at a point where the whole info struct is being copied to mbuf->master, then the code sets mbuf->master.obj = NULL. I think the comment means that the caller, which is in the role of consumer to the original exporter, owns the info struct and therefore the reference info.obj. That caller will eventually call PyBuffer_Release(info), which will result in a DECREF(obj) matching the INCREF(obj) that happened during bf_getbuffer(info). In this sense obj is a borrowed reference as far as the memoryview is concerned. mbuf->master must not also keep a reference, or it risks making a second call to DECREF(obj). Jeff Allen From stefan at bytereef.org Thu Aug 30 10:53:25 2012 From: stefan at bytereef.org (Stefan Krah) Date: Thu, 30 Aug 2012 10:53:25 +0200 Subject: [Python-Dev] Py_buffer.obj documentation In-Reply-To: References: Message-ID: <20120830085324.GA10685@sleipnir.bytereef.org> Alexander Belopolsky wrote: > /* info->obj is either NULL or a borrowed reference. This > reference should not be decremented in PyBuffer_Release(). */ The semantics of PyMemoryView_FromBuffer() are problematic. This function is the odd one in memoryobject.c since it's the only function that breaks the link in consumer -> exporter chains. This has several consequences: 1) One can't rely on the fact that 'info' has PyBUF_FULL information. This is a major inconvenience and the reason for *a lot* of code in memoryobject.c that reconstructs PyBUF_FULL information. 2) One has to make a decision whether PyMemoryView_FromBuffer() steals the reference to view.obj or treats it as a borrowed reference. My view on this is that it's safer to treat it as a borrowed reference. Additionally, I can't see a scenario where PyMemoryView_FromBuffer(info) could be used for creating a non-temporary memoryview with automatic decref of info.obj: If 'info' is allocated on the stack, then the memoryview shouldn't be returned from a function. If 'info' is allocated on the heap, then who frees 'info' when the memoryview is deallocated? Permanent memoryviews can now be safely created with PyMemoryView_FromMemory(). PyMemoryView_FromBuffer() isn't really that useful any more. 
It's hard to document all this in a few lines. Perhaps you can open an issue for this?

Stefan Krah

From cupcicm at gmail.com Thu Aug 30 14:39:41 2012 From: cupcicm at gmail.com (Manu) Date: Thu, 30 Aug 2012 14:39:41 +0200 Subject: [Python-Dev] Problem with _PyTrash_destroy_chain ? Message-ID:

Hi,

I am currently hitting http://bugs.python.org/issue13992. I have a scenario that reproduces the bug after 1 to 2 hours (intensive sqlalchemy and threading). I get the same stack trace as described in the bug.

After spending quite a bit of time trying to understand what could go wrong in the C extensions I use, and not finding anything interesting, I decided to try to find the problem with gdb. The stacktrace I have seems to mean that we are trying to double free something in the frame_dealloc method. See the backtrace below, and the info in the bug report:

(gdb) bt
#0 0x000000000046479f in _Py_ForgetReference (op=0x4dc7bc0) at Objects/object.c:2222
#1 0x0000000000464810 in _Py_Dealloc (op=0x4dc7bc0) at Objects/object.c:2242
#2 0x0000000000559a68 in frame_dealloc (f=0x4997ab0) at Objects/frameobject.c:458
#3 0x000000000046481d in _Py_Dealloc (op=0x4997ab0) at Objects/object.c:2243

Since the frame_dealloc method is bracketed with Py_TRASHCAN_SAFE_{BEGIN|END} macros, and they deal with memory management, I had a closer look at those. I compiled CPython without this trashcan management (replaced the macros by two no-ops), reran my scenario, and it seems it is not segfaulting anymore.

I then had a closer look at the _PyTrash_destroy_chain function (in object.c). Here is what I think it does, for each PyObject in the _PyTrash_delete_later linked list:

- set delete_nesting to 1 (it was 0 when the function was called) so that we don't call destroy_chain again
- call the deallocator for the object
- set delete_nesting back to 0

The thing is that this deallocator (from what I understood) is also bracketed with Py_TRASHCAN macros; the usual bracketing pattern is sketched below. It could potentially cause a long deallocation chain, that will be added to the _PyTrash_delete_later linked list (if it's bigger than the PyTrash_UNWIND_LEVEL).
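A minimal sketch of that bracketing, for reference (MyObject, mytype_dealloc and some_member are made-up names; frame_dealloc in Objects/frameobject.c follows the same shape):

#include <Python.h>

typedef struct {
    PyObject_HEAD
    PyObject *some_member;
} MyObject;

static void
mytype_dealloc(MyObject *self)
{
    PyObject_GC_UnTrack(self);
    Py_TRASHCAN_SAFE_BEGIN(self)
    /* Dropping members may trigger arbitrary further deallocations;
       the macros bound the resulting C recursion depth. */
    Py_CLEAR(self->some_member);
    Py_TYPE(self)->tp_free((PyObject *)self);
    Py_TRASHCAN_SAFE_END(self)
}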
If that happens, it > seems that the _PyTrash_delete_later list is going to contain twice the > same object, which could in turn cause the double free ? I don't see how that can happen. The following piece of logic in _PyTrash_destroy_chain(): PyObject *op = _PyTrash_delete_later; destructor dealloc = Py_TYPE(op)->tp_dealloc; _PyTrash_delete_later = (PyObject*) _Py_AS_GC(op)->gc.gc_prev; ensures that the object is moved out of the list before it is potentially re-added to it. However, there's a potential pitfall producing double dealloc's described in subtype_dealloc() in typeobject.c, under the following comment: Q. Why the bizarre (net-zero) manipulation of _PyTrash_delete_nesting around the trashcan macros? (I'm not copying the answer since it's quite long-winded, you can find it here: http://hg.python.org/cpython/file/2dde5a7439fd/Objects/typeobject.c#l1066 ) The bottom line is that subtype_dealloc() mutates _PyTrash_delete_nesting to avoid being called a second time, but it seems to miss the fact that another thread can run in-between and also mutate _PyTrash_delete_nesting in the other direction. The GIL protects us as long as it is not released, but it can be released inside the functions called by a non-trivial deallocator such as subtype_dealloc(). This is only a hypothesis, but we see that this traceback involves subtype_dealloc() and deallocators running from multiple threads: http://bugs.python.org/file26717/thread_all_apply_bt.txt Regards Antoine. -- Software development and contracting: http://pro.pitrou.net From martin at v.loewis.de Thu Aug 30 20:49:33 2012 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Thu, 30 Aug 2012 20:49:33 +0200 Subject: [Python-Dev] Problem with _PyTrash_destroy_chain ? In-Reply-To: References: Message-ID: <503FB5BD.40109@v.loewis.de> Am 30.08.12 14:39, schrieb Manu: > After spending quite a bit of time trying to understand what could go > wrong in the C extensions I use, and not finding anything interesting, I > decided to try to find the problem with gdb. The stacktrace I have seems > to mean that we are trying to double free something in the frame_dealloc > method. See > > (gdb) bt > #0 0x000000000046479f in _Py_ForgetReference (op=0x4dc7bc0) at > Objects/object.c:2222 > #1 0x0000000000464810 in _Py_Dealloc (op=0x4dc7bc0) at > Objects/object.c:2242 > #2 0x0000000000559a68 in frame_dealloc (f=0x4997ab0) at > Objects/frameobject.c:458 > #3 0x000000000046481d in _Py_Dealloc (op=0x4997ab0) at > Objects/object.c:2243 Why do you think that this stacktrace "seems to mean that we are trying to double free something"? I can't infer that from the stacktrace. You seem to suggest that the crash happens on object.c:2222. If so, it's indeed likely a double-free; my suspicion is that the object memory shows the regular "deallocated block" memory pattern (display *op at the point of crash). It would then be interesting to find out what object used to be there. Unfortunately, there is no easy way to find out (unless the crash always involves 0x4dc7bc0). So I suggest the following tracing: - in _Py_NewReference, printf("allocate %p\n", op); - in _Py_ForgetReference, printf("free %p %s\n", op, op->ob_type->tp_name); fflush(stdout); This may get large, so you may open a file (e.g. in /tmp/fixedname) to collect the trace. When it crashes, find the last prior free of the address - there shouldn't be any subsequent alloc. 
If the type doesn't give a clue, but it's always the same type, you can restrict tracing to that type, and then print values of the object for further diagnosis. Other useful debug information would be to find out what specific frame object is being deallocated - this can still be diagnosed at the point of crash (try _PyObject_Dump(f->f_code)), and then which specific variable (not sure what exact Python version you are using - it seems it's in localsplus, so it likely is a local variable of that frame). HTH, Martin From tjreedy at udel.edu Thu Aug 30 21:57:23 2012 From: tjreedy at udel.edu (Terry Reedy) Date: Thu, 30 Aug 2012 15:57:23 -0400 Subject: [Python-Dev] hg.python.org should default to defaut, not 2.7 Message-ID: If one goes to http://hg.python.org/cpython/ and clicks 'browse', it defaults to 2.7, not to default (now 3.3). Moreover, there is no indication that it is defaulting to an old branch rather than current default, as one might reasonably expect. I found this very confusing when I was trying to get a link for a python-list post and the code did not look right. When one clicks 'branches', default is listed under 2.7, instead of on top, where it should be. I hope someone can fix both these issues. -- Terry Jan Reedy From nad at acm.org Thu Aug 30 22:09:58 2012 From: nad at acm.org (Ned Deily) Date: Thu, 30 Aug 2012 13:09:58 -0700 Subject: [Python-Dev] hg.python.org should default to defaut, not 2.7 References: Message-ID: In article , Terry Reedy wrote: > If one goes to http://hg.python.org/cpython/ and clicks 'browse', it > defaults to 2.7, not to default (now 3.3). Moreover, there is no > indication that it is defaulting to an old branch rather than current > default, as one might reasonably expect. I found this very confusing > when I was trying to get a link for a python-list post and the code did > not look right. It defaults to "tip" which is the most recently pushed change set. At the moment, it just so happens that tip is a 2.7 change set. Usually a change set for "default" will be the most recent but not always. You just need to check the branch list. -- Ned Deily, nad at acm.org From cupcicm at gmail.com Thu Aug 30 22:22:17 2012 From: cupcicm at gmail.com (Manu) Date: Thu, 30 Aug 2012 22:22:17 +0200 Subject: [Python-Dev] Problem with _PyTrash_destroy_chain ? In-Reply-To: <503FB5BD.40109@v.loewis.de> References: <503FB5BD.40109@v.loewis.de> Message-ID: On Thu, Aug 30, 2012 at 8:49 PM, "Martin v. L?wis" wrote: > Am 30.08.12 14:39, schrieb Manu: > > After spending quite a bit of time trying to understand what could go >> wrong in the C extensions I use, and not finding anything interesting, I >> decided to try to find the problem with gdb. The stacktrace I have seems >> to mean that we are trying to double free something in the frame_dealloc >> method. See >> >> (gdb) bt >> #0 0x000000000046479f in _Py_ForgetReference (op=0x4dc7bc0) at >> Objects/object.c:2222 >> #1 0x0000000000464810 in _Py_Dealloc (op=0x4dc7bc0) at >> Objects/object.c:2242 >> #2 0x0000000000559a68 in frame_dealloc (f=0x4997ab0) at >> Objects/frameobject.c:458 >> #3 0x000000000046481d in _Py_Dealloc (op=0x4997ab0) at >> Objects/object.c:2243 >> > > Why do you think that this stacktrace "seems to mean that we are trying > to double free something"? I can't infer that from the stacktrace. > That's right, sorry. The reason why I think this is a double free is that the op seems to point to an object that has been deallocated by python. 
(gdb) select-frame 0 (gdb) print *op $6 = {_ob_next = 0x0, _ob_prev = 0x0, ob_refcnt = 0, ob_type = 0x2364020} > > You seem to suggest that the crash happens on object.c:2222. If so, > it's indeed likely a double-free; my suspicion is that the object memory > shows the regular "deallocated block" memory pattern (display *op at > the point of crash). > > It would then be interesting to find out what object used to be there. > Unfortunately, there is no easy way to find out (unless the crash > always involves 0x4dc7bc0). Not sure why you know this address by heart ;) but the op pointer points exactly there in the stacktrace I posted in the bug report. I'd bet it's like this every time. What does it mean ? (gdb) p op $12 = (PyObject *) 0x4dc7bc0 I'll reproduce tomorrow and tell you if I get this again. > So I suggest the following tracing: > > - in _Py_NewReference, printf("allocate %p\n", op); > - in _Py_ForgetReference, > printf("free %p %s\n", op, op->ob_type->tp_name); > fflush(stdout); > > This may get large, so you may open a file (e.g. in /tmp/fixedname) > to collect the trace. When it crashes, find the last prior free > of the address - there shouldn't be any subsequent alloc. > > If the type doesn't give a clue, but it's always the same type, > you can restrict tracing to that type, and then print values > of the object for further diagnosis. > OK I'll do that. It's always the same type. It's one of our mapped SqlAlchemy types. The code I am using uses sqlalchemy heavily in a multithreaded environment, and is using mostly instances of exactly this mapped type. > > Other useful debug information would be to find out what specific > frame object is being deallocated - this can still be diagnosed > at the point of crash (try _PyObject_Dump(f->f_code)), and then > which specific variable (not sure what exact Python version you > are using - it seems it's in localsplus, so it likely is a local > variable of that frame). > I used the macro to get the python stacktrace. It always crashes on the same line, in a sqlalchemy file. It looked innocuous to me, but I can post it tomorrow when I get back to work. > > HTH, > Martin > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ethan at stoneleaf.us Thu Aug 30 22:34:56 2012 From: ethan at stoneleaf.us (Ethan Furman) Date: Thu, 30 Aug 2012 13:34:56 -0700 Subject: [Python-Dev] hg.python.org should default to defaut, not 2.7 In-Reply-To: References: Message-ID: <503FCE70.5030709@stoneleaf.us> Ned Deily wrote: > In article , Terry Reedy > wrote: > >> If one goes to http://hg.python.org/cpython/ and clicks 'browse', it >> defaults to 2.7, not to default (now 3.3). Moreover, there is no >> indication that it is defaulting to an old branch rather than current >> default, as one might reasonably expect. I found this very confusing >> when I was trying to get a link for a python-list post and the code did >> not look right. > > It defaults to "tip" which is the most recently pushed change set. At > the moment, it just so happens that tip is a 2.7 change set. Usually a > change set for "default" will be the most recent but not always. You > just need to check the branch list. So is it not possible to have the default stay at "default" instead of at "tip"? 
~Ethan~ From solipsis at pitrou.net Thu Aug 30 22:46:16 2012 From: solipsis at pitrou.net (Antoine Pitrou) Date: Thu, 30 Aug 2012 22:46:16 +0200 Subject: [Python-Dev] hg.python.org should default to defaut, not 2.7 References: <503FCE70.5030709@stoneleaf.us> Message-ID: <20120830224616.5b6d4a29@pitrou.net> On Thu, 30 Aug 2012 13:34:56 -0700 Ethan Furman wrote: > Ned Deily wrote: > > In article , Terry Reedy > > wrote: > > > >> If one goes to http://hg.python.org/cpython/ and clicks 'browse', it > >> defaults to 2.7, not to default (now 3.3). Moreover, there is no > >> indication that it is defaulting to an old branch rather than current > >> default, as one might reasonably expect. I found this very confusing > >> when I was trying to get a link for a python-list post and the code did > >> not look right. > > > > It defaults to "tip" which is the most recently pushed change set. At > > the moment, it just so happens that tip is a 2.7 change set. Usually a > > change set for "default" will be the most recent but not always. You > > just need to check the branch list. > > So is it not possible to have the default stay at "default" instead of > at "tip"? http://bz.selenic.com/show_bug.cgi?id=2815 Regards Antoine. -- Software development and contracting: http://pro.pitrou.net From chris.jerdonek at gmail.com Thu Aug 30 23:49:12 2012 From: chris.jerdonek at gmail.com (Chris Jerdonek) Date: Thu, 30 Aug 2012 14:49:12 -0700 Subject: [Python-Dev] hg.python.org should default to defaut, not 2.7 In-Reply-To: References: Message-ID: On Thu, Aug 30, 2012 at 12:57 PM, Terry Reedy wrote: > If one goes to http://hg.python.org/cpython/ and clicks 'browse', it > defaults to 2.7, not to default (now 3.3). Moreover, there is no indication > that it is defaulting to an old branch rather than current default, as one > might reasonably expect. I found this very confusing when I was trying to > get a link for a python-list post and the code did not look right. I reported both of these issues previously here: http://bugs.python.org/issue15491 For the second issue, I wound up filing this issue (noted in a comment on the above issue): http://bz.selenic.com/show_bug.cgi?id=3559 --Chris > When one clicks 'branches', default is listed under 2.7, instead of on top, > where it should be. > > I hope someone can fix both these issues. > > -- > Terry Jan Reedy > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > http://mail.python.org/mailman/options/python-dev/chris.jerdonek%40gmail.com From nad at acm.org Thu Aug 30 23:51:57 2012 From: nad at acm.org (Ned Deily) Date: Thu, 30 Aug 2012 14:51:57 -0700 Subject: [Python-Dev] hg.python.org should default to defaut, not 2.7 References: <503FCE70.5030709@stoneleaf.us> <20120830224616.5b6d4a29@pitrou.net> Message-ID: In article <20120830224616.5b6d4a29 at pitrou.net>, Antoine Pitrou wrote: > On Thu, 30 Aug 2012 13:34:56 -0700 > Ethan Furman wrote: > > Ned Deily wrote: > > > In article , Terry Reedy > > > wrote: > > > > > >> If one goes to http://hg.python.org/cpython/ and clicks 'browse', it > > >> defaults to 2.7, not to default (now 3.3). Moreover, there is no > > >> indication that it is defaulting to an old branch rather than current > > >> default, as one might reasonably expect. I found this very confusing > > >> when I was trying to get a link for a python-list post and the code did > > >> not look right. 
> > > > > > It defaults to "tip" which is the most recently pushed change set. At > > > the moment, it just so happens that tip is a 2.7 change set. Usually a > > > change set for "default" will be the most recent but not always. You > > > just need to check the branch list. > > > > So is it not possible to have the default stay at "default" instead of > > at "tip"? > > http://bz.selenic.com/show_bug.cgi?id=2815 Yes, as Matt hints at, what most people really want is a filter for a particular branch. As it stands, you can always find the (default) head of a branch: http://hg.python.org/cpython/shortlog/default http://hg.python.org/cpython/shortlog/3.2 http://hg.python.org/cpython/shortlog/2.7 etc but note that as you follow the graph for a branch, merge change sets can take you into other branches depending on which parent you choose, and the change log shows all earlier change sets regardless of branch. -- Ned Deily, nad at acm.org From ethan at stoneleaf.us Thu Aug 30 23:44:45 2012 From: ethan at stoneleaf.us (Ethan Furman) Date: Thu, 30 Aug 2012 14:44:45 -0700 Subject: [Python-Dev] hg.python.org should default to defaut, not 2.7 In-Reply-To: <20120830224616.5b6d4a29@pitrou.net> References: <503FCE70.5030709@stoneleaf.us> <20120830224616.5b6d4a29@pitrou.net> Message-ID: <503FDECD.3090908@stoneleaf.us> Antoine Pitrou wrote: > On Thu, 30 Aug 2012 13:34:56 -0700 > Ethan Furman wrote: >> Ned Deily wrote: >>> Terry Reedy wrote: >>>> If one goes to http://hg.python.org/cpython/ and clicks 'browse', it >>>> defaults to 2.7, not to default (now 3.3). Moreover, there is no >>>> indication that it is defaulting to an old branch rather than current >>>> default, as one might reasonably expect. I found this very confusing >>>> when I was trying to get a link for a python-list post and the code did >>>> not look right. >>> >>> It defaults to "tip" which is the most recently pushed change set. At >>> the moment, it just so happens that tip is a 2.7 change set. Usually a >>> change set for "default" will be the most recent but not always. You >>> just need to check the branch list. >> >> So is it not possible to have the default stay at "default" instead of >> at "tip"? > > http://bz.selenic.com/show_bug.cgi?id=2815 Thanks, Antoine. For those with click-a-phobia (Steven? ;) the short answer is "No". Branches it is, then. ~Ethan~ From martin at v.loewis.de Thu Aug 30 23:53:07 2012 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Thu, 30 Aug 2012 23:53:07 +0200 Subject: [Python-Dev] hg.python.org should default to defaut, not 2.7 In-Reply-To: <503FCE70.5030709@stoneleaf.us> References: <503FCE70.5030709@stoneleaf.us> Message-ID: <503FE0C3.6020205@v.loewis.de> Am 30.08.12 22:34, schrieb Ethan Furman: >>> If one goes to http://hg.python.org/cpython/ and clicks 'browse', it >>> defaults to 2.7, not to default (now 3.3). Moreover, there is no >>> indication that it is defaulting to an old branch rather than current >>> default, as one might reasonably expect. I found this very confusing >>> when I was trying to get a link for a python-list post and the code >>> did not look right. >> >> It defaults to "tip" which is the most recently pushed change set. At >> the moment, it just so happens that tip is a 2.7 change set. Usually a >> change set for "default" will be the most recent but not always. You >> just need to check the branch list. > > So is it not possible to have the default stay at "default" instead of > at "tip"? 
It becomes challenging when you look at the other links: The bz2/zip/gz links: should they archive default or tip? [I guess you want them to archive default as well] The log link (which is actually the default page): should it log all changes including tip, or start at the head of default? [Do you really want it to suppress changes that are older than default's head?] The graph link: starting from default head, or tip? [same issue as log] The changeset link: tip or default? [I cannot guess what you would prefer. I personally think all changesets should be hyperlinked in the shortlog, and the shortlog shouldn't include a changeset navlink. This is probably intended, except that it breaks for changesets with tracker issue numbers in their description.] tags and branches are special cases - they always consider all of them (where "all tags" is "all .hgtags entries from all active (?) branches' heads"); help is also special. Regards, Martin From martin at v.loewis.de Fri Aug 31 00:21:54 2012 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Fri, 31 Aug 2012 00:21:54 +0200 Subject: [Python-Dev] Problem with _PyTrash_destroy_chain ? In-Reply-To: References: <503FB5BD.40109@v.loewis.de> Message-ID: <503FE782.1010806@v.loewis.de> Am 30.08.12 22:22, schrieb Manu: > That's right, sorry. The reason why I think this is a double free is > that the op seems to point to an object that has been deallocated by python. > > (gdb) select-frame 0 > (gdb) print *op > $6 = {_ob_next = 0x0, _ob_prev = 0x0, ob_refcnt = 0, ob_type = 0x2364020} Doesn't look like that to me. If it was deallocated, the debug malloc would fill it with DEADBYTE (0xDB). So the memory has *not* been deallocated (yet?). The object was just unlinked from the all-objects list (or the pointers were overwritten with NULL by some buggy code of yours). If the object had properly been unlinked before, its refcount would have been set to 0. In turn, the DECREF call that caused the second deallocation would have complained "UNREF negative refcnt" just above the line where it crashed. So it's rather unlikely that an earlier ForgetReference had happened, unless someone has INCREFed the object again in-between. This is actually plausible: the trashcan defers deallocation, allowing for a double drop-refcount-to-zero operation. Without the trashcan, the object gets deallocated, the refcount overwritten with DEADBYTE, the bogus INCREF and the second DECREF happen, the refcount isn't 0 then, so there is no second deallocation. Since there apparently is still a reference from a frame, the first forgetreference was actually bogus; the object shouldn't have been released yet. So you are missing an INCREF somewhere. > It would then be interesting to find out what object used to be there. > Unfortunately, there is no easy way to find out (unless the crash > always involves 0x4dc7bc0). > > > Not sure why you know this address by heart ;) but the op pointer points > exactly there in the stacktrace I posted in the bug report. I'd bet it's > like this every time. What does it mean ? It means that it's a deterministic failure, which is a good thing from a debugging point of view. You can set a watchpoint on the refcount, and watch it go 0, then 1, then 0 again. If you are unfamiliar with watchpoints, here is the rough guide: 1. Find the address of ob_refcnt, i.e. &op->ob_refcnt. I *think* it should be (int*)0x4dc7bc8, but please double-check. 2. watch *(int*)0x4dc7bc8 3. run, continue, continue, ... 
In this form, this typically doesn't work, since the address has typically no memory associated initially. So you need to set a break point on a function that is called closely before the object gets allocated, and set the watchpoint only then. Good luck, Martin From greg at krypto.org Fri Aug 31 03:43:05 2012 From: greg at krypto.org (Gregory P. Smith) Date: Thu, 30 Aug 2012 18:43:05 -0700 Subject: [Python-Dev] why is _PyBytes_Join not public but PyUnicode_Join is? Message-ID: We have use for _PyBytes_Join in an extension module but technically it isn't a public Python C API... anyone know why? PyUnicode_Join is. Looking up the bytes 'join' method and using the C API to call that method object with proper parameters seems like overkill in the case where we're not dealing with user supplied byte strings at all. -gps -------------- next part -------------- An HTML attachment was scrubbed... URL: From python at mrabarnett.plus.com Fri Aug 31 04:48:59 2012 From: python at mrabarnett.plus.com (MRAB) Date: Fri, 31 Aug 2012 03:48:59 +0100 Subject: [Python-Dev] why is _PyBytes_Join not public but PyUnicode_Join is? In-Reply-To: References: Message-ID: <5040261B.1050004@mrabarnett.plus.com> On 31/08/2012 02:43, Gregory P. Smith wrote: > We have use for _PyBytes_Join in an extension module but technically it > isn't a public Python C API... anyone know why? > > PyUnicode_Join is. > > Looking up the bytes 'join' method and using the C API to call that > method object with proper parameters seems like overkill in the case > where we're not dealing with user supplied byte strings at all. > For what it's worth, I could also make use of it in the regex module. I use PyUnicode_Join when working with Unicode strings, but I could also use PyBytes_Join if it were available instead of having to look it up. From dholth at gmail.com Fri Aug 31 05:16:49 2012 From: dholth at gmail.com (Daniel Holth) Date: Thu, 30 Aug 2012 23:16:49 -0400 Subject: [Python-Dev] Edits to Metadata 1.2 to add extras (optional dependencies) In-Reply-To: <20120828183807.GA12554@iskra.aviel.ru> References: <20120828041552.GA21422@chang> <87wr0jtxzy.fsf@uwakimon.sk.tsukuba.ac.jp> <503CD917.5050902@v.loewis.de> <503CDEA4.9090800@v.loewis.de> <691A785F4B8F4AE09948ADCBDD7534C2@gmail.com> <20120828153811.CBDD32500FE@webabinitio.net> <503CF398.3020606@v.loewis.de> <20120828171541.GA9777@iskra.aviel.ru> <503D0B9C.6070308@v.loewis.de> <20120828183807.GA12554@iskra.aviel.ru> Message-ID: After this discussion it seemed wiser to submit my proposed 1.2 edits as Metadata 1.3, adding Provides-Extra, Setup-Requires-Dist, and Extension (with no defined registration procedure). This version is sure to be exciting as it also specifies that the values are UTF-8 with tolerant decoding and re-defines environment markers in terms of the ast module (is there a better way to specify a subset of Python?). The proposed Metadata 1.3 is at https://bitbucket.org/dholth/python-peps/changeset/8fa1de7478e95b5ef3a18c3272f740d8f3e2fb80 Thanks, Daniel Holth From martin at v.loewis.de Fri Aug 31 12:24:04 2012 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Fri, 31 Aug 2012 12:24:04 +0200 Subject: [Python-Dev] why is _PyBytes_Join not public but PyUnicode_Join is? In-Reply-To: References: Message-ID: <504090C4.8010600@v.loewis.de> Am 31.08.12 03:43, schrieb Gregory P. Smith: > We have use for _PyBytes_Join in an extension module but technically it > isn't a public Python C API... anyone know why? API minimalism. 
No API should be public unless somebody can demonstrate an actual use case. The Unicode API of Python 2.0 had a serious mistake in making dozens of functions public in the API. This broke twice in Python's history: once when UCS-4 became supported, and again for PEP 393. For the former, a work-around was possible by introducing macros, to support API compatibility while breaking ABI compatibility. For PEP 393, huge efforts were necessary to even preserve the API (and this only worked with limitations). So by default, all new functions should be internal API (static if possible), until somebody has explicitly considered use cases and considered what kind of stability can be guaranteed for the API. > Looking up the bytes 'join' method and using the C API to call that > method object with proper parameters seems like overkill in the case > where we're not dealing with user supplied byte strings at all. It's not really that difficult. Instead of r = PyBytes_Join(sep, x); you write r = PyObject_CallMethod(sep, "join", "O", x); This is just a few more letters to type. Or are you concerned with the runtime overhead that this causes? Don't be: the cost of actually joining is much higher than the cost of making the call. Regards, Martin From martin at v.loewis.de Fri Aug 31 12:48:48 2012 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Fri, 31 Aug 2012 12:48:48 +0200 Subject: [Python-Dev] Edits to Metadata 1.2 to add extras (optional dependencies) In-Reply-To: References: <20120828041552.GA21422@chang> <87wr0jtxzy.fsf@uwakimon.sk.tsukuba.ac.jp> <503CD917.5050902@v.loewis.de> <503CDEA4.9090800@v.loewis.de> <691A785F4B8F4AE09948ADCBDD7534C2@gmail.com> <20120828153811.CBDD32500FE@webabinitio.net> <503CF398.3020606@v.loewis.de> <20120828171541.GA9777@iskra.aviel.ru> <503D0B9C.6070308@v.loewis.de> <20120828183807.GA12554@iskra.aviel.ru> Message-ID: <50409690.9030109@v.loewis.de> Am 31.08.12 05:16, schrieb Daniel Holth: > After this discussion it seemed wiser to submit my proposed 1.2 edits > as Metadata 1.3, adding Provides-Extra, Setup-Requires-Dist, and > Extension (with no defined registration procedure). Thanks for doing this. A few comments: 1. -1 on "tolerant decoding". I think the format should clearly specify what fields are text (I think most of them are), and mandate that they be in UTF-8. If there is a need for binary data, they should be specified to be in base64 encoding (but I don't think any of the fields really are binary data). 2. The extensions section should discuss order. E.g. is it ok to write Chili-Type: Poblano Extension: Chili Platform: Basmati Extension: Garlic Chili-Heat: Mild Garlic-Size: 1tsp 3. There should be a specification of how collisions between extension fields and standard fields are resolved. E.g. if I have Extension: Home Home-page: http://www.python.org is Home-page the extension field or the PEP 345 field? There are several ways to resolve this; I suggest giving precedence to the standard field (unless you specify that extensions must follow all standard fields, in which case you can drop the extension prefix from the extension keys). 4. There needs to be a discusion of the meta-syntax. PEP 314 still mentioned that this is RFC 822; PEP 345 dropped that and didn't say anything about the syntax of fields (i.e. not even that they are key-value, that the colon is a separator, that the keys are case-insensitive, etc). 
if I have

Extension: Home
Home-page: http://www.python.org

is Home-page the extension field or the PEP 345 field? There are several ways to resolve this; I suggest giving precedence to the standard field (unless you specify that extensions must follow all standard fields, in which case you can drop the extension prefix from the extension keys).

4. There needs to be a discussion of the meta-syntax. PEP 314 still mentioned that this is RFC 822; PEP 345 dropped that and didn't say anything about the syntax of fields (i.e. not even that they are key-value, that the colon is a separator, that the keys are case-insensitive, etc).
Regards, Martin From donald.stufft at gmail.com Fri Aug 31 12:54:28 2012 From: donald.stufft at gmail.com (Donald Stufft) Date: Fri, 31 Aug 2012 06:54:28 -0400 Subject: [Python-Dev] Edits to Metadata 1.2 to add extras (optional ependencies) In-Reply-To: <50409690.9030109@v.loewis.de> References: <20120828041552.GA21422@chang> <87wr0jtxzy.fsf@uwakimon.sk.tsukuba.ac.jp> <503CD917.5050902@v.loewis.de> <503CDEA4.9090800@v.loewis.de> <691A785F4B8F4AE09948ADCBDD7534C2@gmail.com> <20120828153811.CBDD32500FE@webabinitio.net> <503CF398.3020606@v.loewis.de> <20120828171541.GA9777@iskra.aviel.ru> <503D0B9C.6070308@v.loewis.de> <20120828183807.GA12554@iskra.aviel.ru> <50409690.9030109@v.loewis.de> Message-ID: On Friday, August 31, 2012 at 6:48 AM, "Martin v. L?wis" wrote: > 3. There should be a specification of how collisions between extension > fields and standard fields are resolved. E.g. if I have > > Extension: Home > Home-page: http://www.python.org > > is Home-page the extension field or the PEP 345 field? There are > several ways to resolve this; I suggest giving precedence to the > standard field (unless you specify that extensions must follow all > standard fields, in which case you can drop the extension prefix > from the extension keys). > Unless i'm mistaken (which I may be!) I believe that a / can be used as the separator between the namespace and the "real" key. Home-page: http://www.python.org Extension: Home Home/other-thing: Foo Doing this is the "Extension" field required? -------------- next part -------------- An HTML attachment was scrubbed... URL: From dholth at gmail.com Fri Aug 31 12:56:45 2012 From: dholth at gmail.com (Daniel Holth) Date: Fri, 31 Aug 2012 06:56:45 -0400 Subject: [Python-Dev] Edits to Metadata 1.2 to add extras (optional dependencies) In-Reply-To: <50409690.9030109@v.loewis.de> References: <20120828041552.GA21422@chang> <87wr0jtxzy.fsf@uwakimon.sk.tsukuba.ac.jp> <503CD917.5050902@v.loewis.de> <503CDEA4.9090800@v.loewis.de> <691A785F4B8F4AE09948ADCBDD7534C2@gmail.com> <20120828153811.CBDD32500FE@webabinitio.net> <503CF398.3020606@v.loewis.de> <20120828171541.GA9777@iskra.aviel.ru> <503D0B9C.6070308@v.loewis.de> <20120828183807.GA12554@iskra.aviel.ru> <50409690.9030109@v.loewis.de> Message-ID: <0C702AFF-9D1C-4BB0-A247-A226FE4C3F5E@gmail.com> On Aug 31, 2012, at 6:48 AM, "Martin v. L?wis" wrote: > Am 31.08.12 05:16, schrieb Daniel Holth: >> After this discussion it seemed wiser to submit my proposed 1.2 edits >> as Metadata 1.3, adding Provides-Extra, Setup-Requires-Dist, and >> Extension (with no defined registration procedure). > > Thanks for doing this. A few comments: > > 1. -1 on "tolerant decoding". I think the format should clearly specify > what fields are text (I think most of them are), and mandate that > they be in UTF-8. If there is a need for binary data, they should be > specified to be in base64 encoding (but I don't think any of the > fields really are binary data). > Ok. If you want you can check the version to decide how strict you want to be. > 2. The extensions section should discuss order. E.g. is it ok to write > > Chili-Type: Poblano > Extension: Chili > Platform: Basmati > Extension: Garlic > Chili-Heat: Mild > Garlic-Size: 1tsp Ordering doesn't matter and collisions with existing tags are not allowed. > > 3. There should be a specification of how collisions between extension > fields and standard fields are resolved. E.g. 
if I have > > Extension: Home > Home-page: http://www.python.org > > is Home-page the extension field or the PEP 345 field? There are > several ways to resolve this; I suggest giving precedence to the > standard field (unless you specify that extensions must follow all > standard fields, in which case you can drop the extension prefix > from the extension keys). > > 4. There needs to be a discusion of the meta-syntax. PEP 314 still > mentioned that this is RFC 822; PEP 345 dropped that and didn't > say anything about the syntax of fields (i.e. not even that they > are key-value, that the colon is a separator, that the keys > are case-insensitive, etc). > I think the new profile support for email Parser will handle this perfectly. > Regards, > Martin > > > > From dholth at gmail.com Fri Aug 31 13:01:17 2012 From: dholth at gmail.com (Daniel Holth) Date: Fri, 31 Aug 2012 07:01:17 -0400 Subject: [Python-Dev] Edits to Metadata 1.2 to add extras (optional ependencies) In-Reply-To: References: <20120828041552.GA21422@chang> <87wr0jtxzy.fsf@uwakimon.sk.tsukuba.ac.jp> <503CD917.5050902@v.loewis.de> <503CDEA4.9090800@v.loewis.de> <691A785F4B8F4AE09948ADCBDD7534C2@gmail.com> <20120828153811.CBDD32500FE@webabinitio.net> <503CF398.3020606@v.loewis.de> <20120828171541.GA9777@iskra.aviel.ru> <503D0B9C.6070308@v.loewis.de> <20120828183807.GA12554@iskra.aviel.ru> <50409690.9030109@v.loewis.de> Message-ID: <466B4750-EC6C-4773-87BF-E742652DD958@gmail.com> On Aug 31, 2012, at 6:54 AM, Donald Stufft wrote: > On Friday, August 31, 2012 at 6:48 AM, "Martin v. L?wis" wrote: >> 3. There should be a specification of how collisions between extension >> fields and standard fields are resolved. E.g. if I have >> >> Extension: Home >> Home-page: http://www.python.org >> >> is Home-page the extension field or the PEP 345 field? There are >> several ways to resolve this; I suggest giving precedence to the >> standard field (unless you specify that extensions must follow all >> standard fields, in which case you can drop the extension prefix >> from the extension keys). >> > Unless i'm mistaken (which I may be!) I believe that a / can be used as > the separator between the namespace and the "real" key. > > Home-page: http://www.python.org > Extension: Home > Home/other-thing: Foo > Not bad. > Doing this is the "Extension" field required? Yes it is required. A simple lookup for data ['extension'] tells you what to expect. -------------- next part -------------- An HTML attachment was scrubbed... URL: From martin at v.loewis.de Fri Aug 31 13:08:14 2012 From: martin at v.loewis.de (=?UTF-8?B?Ik1hcnRpbiB2LiBMw7Z3aXMi?=) Date: Fri, 31 Aug 2012 13:08:14 +0200 Subject: [Python-Dev] Edits to Metadata 1.2 to add extras (optional ependencies) In-Reply-To: References: <20120828041552.GA21422@chang> <87wr0jtxzy.fsf@uwakimon.sk.tsukuba.ac.jp> <503CD917.5050902@v.loewis.de> <503CDEA4.9090800@v.loewis.de> <691A785F4B8F4AE09948ADCBDD7534C2@gmail.com> <20120828153811.CBDD32500FE@webabinitio.net> <503CF398.3020606@v.loewis.de> <20120828171541.GA9777@iskra.aviel.ru> <503D0B9C.6070308@v.loewis.de> <20120828183807.GA12554@iskra.aviel.ru> <50409690.9030109@v.loewis.de> Message-ID: <50409B1E.7040505@v.loewis.de> Am 31.08.12 12:54, schrieb Donald Stufft: > On Friday, August 31, 2012 at 6:48 AM, "Martin v. L?wis" wrote: >> 3. There should be a specification of how collisions between extension >> fields and standard fields are resolved. E.g. 
if I have >> >> Extension: Home >> Home-page:http://www.python.org >> >> is Home-page the extension field or the PEP 345 field? There are >> several ways to resolve this; I suggest giving precedence to the >> standard field (unless you specify that extensions must follow all >> standard fields, in which case you can drop the extension prefix >> from the extension keys). >> > Unless i'm mistaken (which I may be!) I believe that a / can be used as > the separator between the namespace and the "real" key. What do you mean by "can be"? In RFC 822, a slash can be in a field-name, yes, but the PEPs recently became silent on the meta-syntax. > > Home-page: http://www.python.org > Extension: Home > Home/other-thing: Foo > > Doing this is the "Extension" field required? Well, in my example it would then be Home-page: http://www.python.org Home/page: Foo I don't think the Extension field is necessary if there is a promise that standard fields won't ever include slashes. Regards, Martin From martin at v.loewis.de Fri Aug 31 13:09:35 2012 From: martin at v.loewis.de (=?UTF-8?B?Ik1hcnRpbiB2LiBMw7Z3aXMi?=) Date: Fri, 31 Aug 2012 13:09:35 +0200 Subject: [Python-Dev] Edits to Metadata 1.2 to add extras (optional dependencies) In-Reply-To: <0C702AFF-9D1C-4BB0-A247-A226FE4C3F5E@gmail.com> References: <20120828041552.GA21422@chang> <87wr0jtxzy.fsf@uwakimon.sk.tsukuba.ac.jp> <503CD917.5050902@v.loewis.de> <503CDEA4.9090800@v.loewis.de> <691A785F4B8F4AE09948ADCBDD7534C2@gmail.com> <20120828153811.CBDD32500FE@webabinitio.net> <503CF398.3020606@v.loewis.de> <20120828171541.GA9777@iskra.aviel.ru> <503D0B9C.6070308@v.loewis.de> <20120828183807.GA12554@iskra.aviel.ru> <50409690.9030109@v.loewis.de> <0C702AFF-9D1C-4BB0-A247-A226FE4C3F5E@gmail.com> Message-ID: <50409B6F.9090609@v.loewis.de> Am 31.08.12 12:56, schrieb Daniel Holth: >> 1. -1 on "tolerant decoding". I think the format should clearly specify >> what fields are text (I think most of them are), and mandate that >> they be in UTF-8. If there is a need for binary data, they should be >> specified to be in base64 encoding (but I don't think any of the >> fields really are binary data). >> > > Ok. If you want you can check the version to decide how strict you want to be. Thanks for the offer - I'd prefer to remain as a reader, not an author of the PEP. Regards, Martin From eliben at gmail.com Fri Aug 31 13:22:54 2012 From: eliben at gmail.com (Eli Bendersky) Date: Fri, 31 Aug 2012 13:22:54 +0200 Subject: [Python-Dev] why is _PyBytes_Join not public but PyUnicode_Join is? In-Reply-To: References: Message-ID: On Fri, Aug 31, 2012 at 3:43 AM, Gregory P. Smith wrote: > We have use for _PyBytes_Join in an extension module but technically it > isn't a public Python C API... anyone know why? > > PyUnicode_Join is. > > Looking up the bytes 'join' method and using the C API to call that method > object with proper parameters seems like overkill in the case where we're > not dealing with user supplied byte strings at all. > > I wondered about the same thing a month ago - http://mail.python.org/pipermail/python-dev/2012-July/121031.html Eli -------------- next part -------------- An HTML attachment was scrubbed... URL: From rdmurray at bitdance.com Fri Aug 31 14:41:19 2012 From: rdmurray at bitdance.com (R. 
David Murray) Date: Fri, 31 Aug 2012 08:41:19 -0400 Subject: [Python-Dev] Edits to Metadata 1.2 to add extras (optional ependencies) In-Reply-To: <466B4750-EC6C-4773-87BF-E742652DD958@gmail.com> References: <20120828041552.GA21422@chang> <87wr0jtxzy.fsf@uwakimon.sk.tsukuba.ac.jp> <503CD917.5050902@v.loewis.de> <503CDEA4.9090800@v.loewis.de> <691A785F4B8F4AE09948ADCBDD7534C2@gmail.com> <20120828153811.CBDD32500FE@webabinitio.net> <503CF398.3020606@v.loewis.de> <20120828171541.GA9777@iskra.aviel.ru> <503D0B9C.6070308@v.loewis.de> <20120828183807.GA12554@iskra.aviel.ru> <50409690.9030109@v.loewis.de> <466B4750-EC6C-4773-87BF-E742652DD958@gmail.com> Message-ID: <20120831124119.9763A25010D@webabinitio.net> On Fri, 31 Aug 2012 07:01:17 -0400, Daniel Holth wrote: > On Aug 31, 2012, at 6:54 AM, Donald Stufft wrote: > > Unless i'm mistaken (which I may be!) I believe that a / can be used as > > the separator between the namespace and the "real" key. > > > > Home-page: http://www.python.org > > Extension: Home > > Home/other-thing: Foo > > > > Not bad. > > > Doing this is the "Extension" field required? > > Yes it is required. A simple lookup for data ['extension'] tells you what to expect. It also allows for typo detection, which automatically interpreting prefix strings as extensions names would not. -- R. David Murray If you like the work I do for Python, you can enable me to spend more time doing it by supporting me here: http://gittip.com/bitdancer From ncoghlan at gmail.com Fri Aug 31 15:20:15 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 31 Aug 2012 23:20:15 +1000 Subject: [Python-Dev] why is _PyBytes_Join not public but PyUnicode_Join is? In-Reply-To: <504090C4.8010600@v.loewis.de> References: <504090C4.8010600@v.loewis.de> Message-ID: On Fri, Aug 31, 2012 at 8:24 PM, "Martin v. L?wis" wrote: > So by default, all new functions should be internal API (static if > possible), until somebody has explicitly considered use cases and > considered what kind of stability can be guaranteed for the API. The other aspect we're conscious of these days is that folks like the IronClad and cpyext developers *are* making a concerted effort to emulate the full C API of CPython-the-runtime, not just implementing Python-the-language. External tools like Dave Malcolm's static analyser for gcc also need to be taught the refcounting semantics of any new API additions. So, unless there's a compelling reason for direct public access from C, the preferred option is to only expose the corresponding Python API via the general purpose APIs for calling back into Python from C extensions. This minimises the induced workload on other groups, as well as making future maintenance easier for CPython itself. New additions are still possible - they're just not the default any more. Cheers, Nick. 
-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From ncoghlan at gmail.com Fri Aug 31 15:57:16 2012 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 31 Aug 2012 23:57:16 +1000 Subject: [Python-Dev] Edits to Metadata 1.2 to add extras (optional ependencies) In-Reply-To: <20120831124119.9763A25010D@webabinitio.net> References: <20120828041552.GA21422@chang> <87wr0jtxzy.fsf@uwakimon.sk.tsukuba.ac.jp> <503CD917.5050902@v.loewis.de> <503CDEA4.9090800@v.loewis.de> <691A785F4B8F4AE09948ADCBDD7534C2@gmail.com> <20120828153811.CBDD32500FE@webabinitio.net> <503CF398.3020606@v.loewis.de> <20120828171541.GA9777@iskra.aviel.ru> <503D0B9C.6070308@v.loewis.de> <20120828183807.GA12554@iskra.aviel.ru> <50409690.9030109@v.loewis.de> <466B4750-EC6C-4773-87BF-E742652DD958@gmail.com> <20120831124119.9763A25010D@webabinitio.net> Message-ID: On Fri, Aug 31, 2012 at 10:41 PM, R. David Murray wrote: > It also allows for typo detection, which automatically interpreting > prefix strings as extensions names would not. +1 on retaining the explicit extension field, mainly for the cross-validation benefits (including easily checking which extension syntax is used by a module). However, also +1 on using "/" as the extension separator to avoid ambiguity in field names, as well as restoring the explicit requirement that metadata entries use valid RFC 822 metasyntax. If the precise rules can be articulated as a 3.3 email module policy, so much the better. I've now pushed Daniel's latest draft as PEP 426. I added the following section on "Metadata Files", which restores some background info on the overall file format that went AWOL in v1.2: ----------------------------------------------------------------------- Metadata Files ============== The syntax defined in this PEP is for use with Python distribution metadata files. This file format is a single set of RFC-822 headers parseable by the ``rfc822`` or ``email`` modules. The field names listed in the `Fields`_ section are used as the header names. There are two standard locations for these metadata files: * the ``PKG-INFO`` file included in the base directory of Python source distribution archives (as created by the distutils ``sdist`` command) * the ``dist-info/METADATA`` files in a Python installation database, as described in PEP 376. Other tools involved in Python distribution may choose to record this metadata in additional tool-specific locations (e.g. as part of a binary distribution archive format). ----------------------------------------------------------------------- As far as I know, the sdist archive format isn't actually defined anywhere beyond "archives like those created by the distutils sdist command". Cheers, Nick. 
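As a concrete illustration of the "Metadata Files" section above, here is a minimal sketch of reading such a file with the stdlib email parser. It is only an illustration, not part of the PEP or any tool: the sample field values are made up, and the Extension/"/"-prefix handling follows the draft being discussed in this thread rather than an accepted spec.

from email.parser import Parser

SAMPLE = """\
Metadata-Version: 1.3
Name: examplepkg
Version: 1.0
Home-page: http://www.python.org
Provides-Extra: tests
Requires-Dist: coverage; extra == 'tests'
Extension: Chili
Chili/Type: Poblano
"""

msg = Parser().parsestr(SAMPLE)

print(msg['Name'], msg['Version'])        # single-use fields
print(msg.get_all('Provides-Extra'))      # multiple-use fields come back as a list
print(msg.get_all('Requires-Dist'))

# Extension fields use the proposed "/" separator and must be announced
# by an Extension field, so a typo'd prefix can be reported instead of
# being silently treated as a new extension.
declared = set(msg.get_all('Extension') or [])
for name, value in msg.items():
    if '/' in name:
        prefix = name.split('/', 1)[0]
        if prefix not in declared:
            print('undeclared extension field:', name)

The environment marker on the Requires-Dist line is deliberately left as an opaque string here; evaluating it (for example against a chosen extra) would be a separate step.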
-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From martin at v.loewis.de Fri Aug 31 17:33:56 2012 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Fri, 31 Aug 2012 17:33:56 +0200 Subject: [Python-Dev] Edits to Metadata 1.2 to add extras (optional ependencies) In-Reply-To: References: <20120828041552.GA21422@chang> <87wr0jtxzy.fsf@uwakimon.sk.tsukuba.ac.jp> <503CD917.5050902@v.loewis.de> <503CDEA4.9090800@v.loewis.de> <691A785F4B8F4AE09948ADCBDD7534C2@gmail.com> <20120828153811.CBDD32500FE@webabinitio.net> <503CF398.3020606@v.loewis.de> <20120828171541.GA9777@iskra.aviel.ru> <503D0B9C.6070308@v.loewis.de> <20120828183807.GA12554@iskra.aviel.ru> <50409690.9030109@v.loewis.de> <466B4750-EC6C-4773-87BF-E742652DD958@gmail.com> <20120831124119.9763A25010D@webabinitio.net> Message-ID: <5040D964.8030202@v.loewis.de> Am 31.08.12 15:57, schrieb Nick Coghlan: > However, also +1 on using "/" as the extension separator to avoid > ambiguity in field names, as well as restoring the explicit > requirement that metadata entries use valid RFC 822 metasyntax. Unfortunately, this conflicts with the desire to use UTF-8 in attribute values - RFC 822 (and also 2822) don't support this, but require the use oF MIME instead (Q or B encoding). RFC 2822 also has a continuation line semantics which traditionally conflicts with the metadata; in particular, line breaks cannot be represented (but are interpreted as continuation lines instead). OTOH, several of the metadata fields do require line breaks, in particular those formatted as ReST. Regards, Martin From status at bugs.python.org Fri Aug 31 18:07:18 2012 From: status at bugs.python.org (Python tracker) Date: Fri, 31 Aug 2012 18:07:18 +0200 (CEST) Subject: [Python-Dev] Summary of Python tracker Issues Message-ID: <20120831160718.137DB1CE17@psf.upfronthosting.co.za> ACTIVITY SUMMARY (2012-08-24 - 2012-08-31) Python tracker at http://bugs.python.org/ To view or respond to any of the issues listed below, click on the issue. Do NOT respond to this message. 
Issues counts and deltas: open 3653 (+13) closed 23955 (+41) total 27608 (+54) Open issues with patches: 1621 Issues opened (37) ================== #15780: IDLE (windows) with PYTHONPATH and multiple python versions http://bugs.python.org/issue15780 opened by Jimbofbx #15782: Compile error for a number of Mac modules with recent Xcode http://bugs.python.org/issue15782 opened by ronaldoussoren #15783: decimal.localcontext(None) fails when using the C accelerator http://bugs.python.org/issue15783 opened by ncoghlan #15784: OSError.__str__() should distinguish between errno and winerro http://bugs.python.org/issue15784 opened by sbt #15785: curses.get_wch() returns keypad codes incorrectly http://bugs.python.org/issue15785 opened by simpkins #15786: IDLE code completion window does not scoll/select with mouse http://bugs.python.org/issue15786 opened by suddha.sourav #15787: PEP 3121 Refactoring http://bugs.python.org/issue15787 opened by belopolsky #15789: mention shell-like parts of the std lib in the subprocess docs http://bugs.python.org/issue15789 opened by cvrebert #15792: Fix compiler options for x64 builds on Windows http://bugs.python.org/issue15792 opened by jkloth #15793: Stack corruption in ssl.RAND_egd() http://bugs.python.org/issue15793 opened by storchaka #15795: Zipfile.extractall does not preserve file permissions http://bugs.python.org/issue15795 opened by uruz #15796: Fix readline() docstrings http://bugs.python.org/issue15796 opened by storchaka #15797: bdist_msi does not pass -install/remove flags to install_scrip http://bugs.python.org/issue15797 opened by braudel #15798: subprocess.Popen() fails if 0, 1 or 2 descriptor is closed http://bugs.python.org/issue15798 opened by sarum9in #15799: httplib client and statusline http://bugs.python.org/issue15799 opened by karlcow #15802: Nonsensical test for mailbox http://bugs.python.org/issue15802 opened by storchaka #15803: Incorrect docstring on ConfigParser.items() http://bugs.python.org/issue15803 opened by nbtrap #15805: Add stdout redirection tool to contextlib http://bugs.python.org/issue15805 opened by rhettinger #15806: Add context manager for the "try: ... except: pass" pattern http://bugs.python.org/issue15806 opened by rhettinger #15808: Possibility of setting custom key bindings for "Additional hel http://bugs.python.org/issue15808 opened by Rostyslav.Dzinko #15809: IDLE console uses incorrect encoding. 
http://bugs.python.org/issue15809 opened by alex.hartwig #15810: assertSequenceEqual should be fired when comparing sequence su http://bugs.python.org/issue15810 opened by flox #15811: ElementTree.write() raises TypeError when xml_declaration = Tr http://bugs.python.org/issue15811 opened by David.Buxton #15812: inspect.getframeinfo() cannot show first line http://bugs.python.org/issue15812 opened by sbt #15814: memoryview: equality-hash invariant http://bugs.python.org/issue15814 opened by skrah #15815: Add numerator to ZeroDivisionError messages http://bugs.python.org/issue15815 opened by terry.reedy #15817: Misc/gdbinit: Expose command documentation to gdb help http://bugs.python.org/issue15817 opened by belopolsky #15818: multiprocessing documentation of Process.exitcode http://bugs.python.org/issue15818 opened by schmiddy #15819: Unable to build Python out-of-tree when source tree is readonl http://bugs.python.org/issue15819 opened by trent #15820: Add additional info to Resources area on Dev Guide http://bugs.python.org/issue15820 opened by mikehoy #15821: Improve docs for PyMemoryView_FromBuffer() http://bugs.python.org/issue15821 opened by skrah #15822: Python 3.3 creates lib2to3 grammar in wrong order http://bugs.python.org/issue15822 opened by tpievila #15826: Increased test coverage of test_glob.py http://bugs.python.org/issue15826 opened by eng793 #15828: imp.load_module doesn't support C_EXTENSION type http://bugs.python.org/issue15828 opened by metolone #15829: Threading Lock - Wrong Exception Name http://bugs.python.org/issue15829 opened by mikehoy #15830: make -s no longer silences output from setup.py http://bugs.python.org/issue15830 opened by brett.cannon #15831: comma after leading optional argument is after bracket in docs http://bugs.python.org/issue15831 opened by cjerdonek Most recent 15 issues with no replies (15) ========================================== #15831: comma after leading optional argument is after bracket in docs http://bugs.python.org/issue15831 #15830: make -s no longer silences output from setup.py http://bugs.python.org/issue15830 #15826: Increased test coverage of test_glob.py http://bugs.python.org/issue15826 #15818: multiprocessing documentation of Process.exitcode http://bugs.python.org/issue15818 #15817: Misc/gdbinit: Expose command documentation to gdb help http://bugs.python.org/issue15817 #15812: inspect.getframeinfo() cannot show first line http://bugs.python.org/issue15812 #15797: bdist_msi does not pass -install/remove flags to install_scrip http://bugs.python.org/issue15797 #15796: Fix readline() docstrings http://bugs.python.org/issue15796 #15782: Compile error for a number of Mac modules with recent Xcode http://bugs.python.org/issue15782 #15772: Unresolved symbols in Windows 64-bit python http://bugs.python.org/issue15772 #15767: add ModuleNotFoundError http://bugs.python.org/issue15767 #15759: "make suspicious" doesn't display instructions in case of fail http://bugs.python.org/issue15759 #15744: missing tests for {RawIO,BufferedIO,TextIO}.writelines http://bugs.python.org/issue15744 #15735: PEP 3121, 384 Refactoring applied to ossaudio module http://bugs.python.org/issue15735 #15734: PEP 3121, 384 Refactoring applied to spwd module http://bugs.python.org/issue15734 Most recent 15 issues waiting for review (15) ============================================= #15829: Threading Lock - Wrong Exception Name http://bugs.python.org/issue15829 #15828: imp.load_module doesn't support C_EXTENSION type http://bugs.python.org/issue15828 #15826: 
Increased test coverage of test_glob.py http://bugs.python.org/issue15826 #15821: Improve docs for PyMemoryView_FromBuffer() http://bugs.python.org/issue15821 #15820: Add additional info to Resources area on Dev Guide http://bugs.python.org/issue15820 #15819: Unable to build Python out-of-tree when source tree is readonl http://bugs.python.org/issue15819 #15817: Misc/gdbinit: Expose command documentation to gdb help http://bugs.python.org/issue15817 #15814: memoryview: equality-hash invariant http://bugs.python.org/issue15814 #15811: ElementTree.write() raises TypeError when xml_declaration = Tr http://bugs.python.org/issue15811 #15809: IDLE console uses incorrect encoding. http://bugs.python.org/issue15809 #15808: Possibility of setting custom key bindings for "Additional hel http://bugs.python.org/issue15808 #15803: Incorrect docstring on ConfigParser.items() http://bugs.python.org/issue15803 #15802: Nonsensical test for mailbox http://bugs.python.org/issue15802 #15798: subprocess.Popen() fails if 0, 1 or 2 descriptor is closed http://bugs.python.org/issue15798 #15797: bdist_msi does not pass -install/remove flags to install_scrip http://bugs.python.org/issue15797 Top 10 most discussed issues (10) ================================= #15819: Unable to build Python out-of-tree when source tree is readonl http://bugs.python.org/issue15819 23 msgs #15814: memoryview: equality-hash invariant http://bugs.python.org/issue15814 18 msgs #15783: decimal.localcontext(None) fails when using the C accelerator http://bugs.python.org/issue15783 14 msgs #15785: curses.get_wch() returns keypad codes incorrectly http://bugs.python.org/issue15785 14 msgs #15751: Support subinterpreters in the GIL state API http://bugs.python.org/issue15751 13 msgs #15776: Allow pyvenv to work in existing directory http://bugs.python.org/issue15776 12 msgs #15798: subprocess.Popen() fails if 0, 1 or 2 descriptor is closed http://bugs.python.org/issue15798 11 msgs #15806: Add context manager for the "try: ... except: pass" pattern http://bugs.python.org/issue15806 10 msgs #15828: imp.load_module doesn't support C_EXTENSION type http://bugs.python.org/issue15828 9 msgs #15829: Threading Lock - Wrong Exception Name http://bugs.python.org/issue15829 9 msgs Issues closed (35) ================== #10650: decimal.py: quantize(): excess digits with watchexp=0 http://bugs.python.org/issue10650 closed by python-dev #11225: getcwd fix for NetBSD to handle ERANGE errno http://bugs.python.org/issue11225 closed by neologix #11964: Undocumented change to indent param of json.dump in 3.2 http://bugs.python.org/issue11964 closed by petri.lehtinen #13370: test_ctypes fails when building python with clang http://bugs.python.org/issue13370 closed by ronaldoussoren #13518: configparser can???t read file objects from urlopen http://bugs.python.org/issue13518 closed by pitrou #14042: json.dumps() documentation is slightly incorrect. 
http://bugs.python.org/issue14042 closed by petri.lehtinen #14674: Link to & explain deviations from RFC 4627 in json module docs http://bugs.python.org/issue14674 closed by pitrou #14880: csv.reader and .writer use wrong kwargs notation in 2.7 docs http://bugs.python.org/issue14880 closed by hynek #15136: Decimal accepting Fraction http://bugs.python.org/issue15136 closed by rhettinger #15544: math.isnan fails with some Decimal NaNs http://bugs.python.org/issue15544 closed by mark.dickinson #15573: Support unknown formats in memoryview comparisons http://bugs.python.org/issue15573 closed by python-dev #15591: when building the extensions, stdout is lost when stdout is re http://bugs.python.org/issue15591 closed by doko #15720: move __import__() out of the default lookup chain http://bugs.python.org/issue15720 closed by benjamin.peterson #15724: Add "versionchanged" to memoryview docs http://bugs.python.org/issue15724 closed by skrah #15761: Setting PYTHONEXECUTABLE can cause segfaults on OS X http://bugs.python.org/issue15761 closed by ronaldoussoren #15765: test_getcwd_long_pathnames (in test_posix) kills NetBSD http://bugs.python.org/issue15765 closed by trent #15777: test_capi refleak http://bugs.python.org/issue15777 closed by rosslagerwall #15778: str(ImportError(b'foo')) fails http://bugs.python.org/issue15778 closed by brett.cannon #15779: socket error [Errno 10013] when creating SMTP object http://bugs.python.org/issue15779 closed by loewis #15781: test_threaded_import fails with -j4 http://bugs.python.org/issue15781 closed by pitrou #15788: cross-refs in the subprocess.Popen.std{in,out,err} warning box http://bugs.python.org/issue15788 closed by ezio.melotti #15790: Python 3.3.0rc1 release notes claims PEP-405 support, yet pyse http://bugs.python.org/issue15790 closed by pas #15791: pydoc does not handle non-ASCII unicode AUTHOR field http://bugs.python.org/issue15791 closed by r.david.murray #15794: test_importlib: test_locks failure http://bugs.python.org/issue15794 closed by pitrou #15800: Closing of sys.std* files in gzip test. http://bugs.python.org/issue15800 closed by pitrou #15801: Weird string interpolation behaviour http://bugs.python.org/issue15801 closed by python-dev #15804: Feature request, implicit "except : pass" http://bugs.python.org/issue15804 closed by benjamin.peterson #15807: Bogus versionchanged note in logging.handlers.MemoryHandler do http://bugs.python.org/issue15807 closed by python-dev #15813: Python function decorator scope losing variable http://bugs.python.org/issue15813 closed by r.david.murray #15816: pydoc.py uses a hack that depends on implementation details an http://bugs.python.org/issue15816 closed by r.david.murray #15823: argparse produces error when multiply help lines http://bugs.python.org/issue15823 closed by r.david.murray #15824: mutable urlparse return type http://bugs.python.org/issue15824 closed by r.david.murray #15825: Typo in OrderedDict docs http://bugs.python.org/issue15825 closed by asvetlov #15827: Use "ll" for PY_FORMAT_SIZE_T on Win64 http://bugs.python.org/issue15827 closed by scoder #1432343: Description of file-object read() method is wrong. 
http://bugs.python.org/issue1432343 closed by r.david.murray From dholth at gmail.com Fri Aug 31 18:18:05 2012 From: dholth at gmail.com (Daniel Holth) Date: Fri, 31 Aug 2012 12:18:05 -0400 Subject: [Python-Dev] Edits to Metadata 1.2 to add extras (optional ependencies) In-Reply-To: <5040D964.8030202@v.loewis.de> References: <20120828041552.GA21422@chang> <87wr0jtxzy.fsf@uwakimon.sk.tsukuba.ac.jp> <503CD917.5050902@v.loewis.de> <503CDEA4.9090800@v.loewis.de> <691A785F4B8F4AE09948ADCBDD7534C2@gmail.com> <20120828153811.CBDD32500FE@webabinitio.net> <503CF398.3020606@v.loewis.de> <20120828171541.GA9777@iskra.aviel.ru> <503D0B9C.6070308@v.loewis.de> <20120828183807.GA12554@iskra.aviel.ru> <50409690.9030109@v.loewis.de> <466B4750-EC6C-4773-87BF-E742652DD958@gmail.com> <20120831124119.9763A25010D@webabinitio.net> <5040D964.8030202@v.loewis.de> Message-ID: Some edits to include / and remove rfc822 again. What is the right email.policy.Policy()? https://bitbucket.org/dholth/python-peps/changeset/8ec6dd453ccbde6d34c63d2d2a18393bc70cf115 From rdmurray at bitdance.com Fri Aug 31 18:53:27 2012 From: rdmurray at bitdance.com (R. David Murray) Date: Fri, 31 Aug 2012 12:53:27 -0400 Subject: [Python-Dev] Edits to Metadata 1.2 to add extras (optional ependencies) In-Reply-To: References: <20120828041552.GA21422@chang> <87wr0jtxzy.fsf@uwakimon.sk.tsukuba.ac.jp> <503CD917.5050902@v.loewis.de> <503CDEA4.9090800@v.loewis.de> <691A785F4B8F4AE09948ADCBDD7534C2@gmail.com> <20120828153811.CBDD32500FE@webabinitio.net> <503CF398.3020606@v.loewis.de> <20120828171541.GA9777@iskra.aviel.ru> <503D0B9C.6070308@v.loewis.de> <20120828183807.GA12554@iskra.aviel.ru> <50409690.9030109@v.loewis.de> <466B4750-EC6C-4773-87BF-E742652DD958@gmail.com> <20120831124119.9763A25010D@webabinitio.net> <5040D964.8030202@v.loewis.de> Message-ID: <20120831165327.DAC5D25010D@webabinitio.net> On Fri, 31 Aug 2012 12:18:05 -0400, Daniel Holth wrote: > Some edits to include / and remove rfc822 again. What is the right > email.policy.Policy()? When I discussed using email to parse metadata with Tarek a long time ago, I thought he was going to move to using a delimiter-substitution algorithm to encode and recover the line breaks. Perhaps that discussion wasn't in this same context, but I thought it was. If you did that, then 'SMTP' would be the correct policy for RFC2822/5322. But that isn't really going to work for this use case, even with the above hack. As Martin pointed out, RFC2822 does not allow utf-8 in the values. RFC 5335, which is Experimental, does. A medium term goal of the email package is to support that RFC, so this might be a motivation to move that higher in my feature priority list. (Support mostly involves switches to allow unicode/utf8 to be *written*; the parsing side works already, though it is not thoroughly tested.) However, all that aside, to answer your question you are really going to want to define a custom policy that derives from email.policy.Policy. Especially if you want to not follow the email RFCs and do want to assign meaning to the line separators. You can do that with a custom policy and thus still be able the use the email parsing infrastructure to read and write the files. I'll be glad to help out with creating the custom policy once we've reached that stage of the process. -- R. 
David Murray If you like the work I do for Python, you can enable me to spend more time doing it by supporting me here: http://gittip.com/bitdancer From solipsis at pitrou.net Fri Aug 31 18:49:58 2012 From: solipsis at pitrou.net (Antoine Pitrou) Date: Fri, 31 Aug 2012 18:49:58 +0200 Subject: [Python-Dev] benchmarks: Pathlib works under Python 3. References: <3X7lT21BBTzQMV@mail.python.org> Message-ID: <20120831184958.737b18b2@pitrou.net> On Fri, 31 Aug 2012 17:52:38 +0200 (CEST) brett.cannon wrote: > http://hg.python.org/benchmarks/rev/873baf08045e > changeset: 162:873baf08045e > user: Brett Cannon > date: Fri Aug 31 11:52:30 2012 -0400 > summary: > Pathlib works under Python 3. ... but therefore you shouldn't run it under 2to3 (which may pessimize the code). Benchmarks with a 3.x-compatible code base are listed under the "2n3" meta-benchmark name in perf.py: http://hg.python.org/benchmarks/file/873baf08045e/perf.py#l2000 Regards Antoine. -- Software development and contracting: http://pro.pitrou.net From dholth at gmail.com Fri Aug 31 19:12:29 2012 From: dholth at gmail.com (Daniel Holth) Date: Fri, 31 Aug 2012 13:12:29 -0400 Subject: [Python-Dev] Edits to Metadata 1.2 to add extras (optional ependencies) In-Reply-To: <20120831165327.DAC5D25010D@webabinitio.net> References: <20120828041552.GA21422@chang> <87wr0jtxzy.fsf@uwakimon.sk.tsukuba.ac.jp> <503CD917.5050902@v.loewis.de> <503CDEA4.9090800@v.loewis.de> <691A785F4B8F4AE09948ADCBDD7534C2@gmail.com> <20120828153811.CBDD32500FE@webabinitio.net> <503CF398.3020606@v.loewis.de> <20120828171541.GA9777@iskra.aviel.ru> <503D0B9C.6070308@v.loewis.de> <20120828183807.GA12554@iskra.aviel.ru> <50409690.9030109@v.loewis.de> <466B4750-EC6C-4773-87BF-E742652DD958@gmail.com> <20120831124119.9763A25010D@webabinitio.net> <5040D964.8030202@v.loewis.de> <20120831165327.DAC5D25010D@webabinitio.net> Message-ID: On Fri, Aug 31, 2012 at 12:53 PM, R. David Murray wrote: > On Fri, 31 Aug 2012 12:18:05 -0400, Daniel Holth wrote: >> Some edits to include / and remove rfc822 again. What is the right >> email.policy.Policy()? > > When I discussed using email to parse metadata with Tarek a long time > ago, I thought he was going to move to using a delimiter-substitution > algorithm to encode and recover the line breaks. Perhaps that discussion > wasn't in this same context, but I thought it was. If you did that, > then 'SMTP' would be the correct policy for RFC2822/5322. > > But that isn't really going to work for this use case, even with the above > hack. As Martin pointed out, RFC2822 does not allow utf-8 in the values. Thanks. For the time being I am happily using the surrogateescape/bytesgenerator hack and it preserves UTF-8 and linebreaks. I don't have a strong opinion about the line continuation policy; I do not have code that relies on parsing the long description from PKG-INFO files.
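For reference, a rough sketch of the surrogateescape/BytesGenerator round trip mentioned above, as it could look with the Python 3.3 email package. This is not code from wheel, distutils or the PEP; the 'PKG-INFO' and 'PKG-INFO.out' paths are placeholders, and error handling is omitted.

import io
from email.parser import Parser
from email.generator import BytesGenerator

# Decode as ASCII with surrogateescape: anything non-ASCII (e.g. UTF-8
# text in Author or Description) survives as lone surrogates instead of
# raising, and can be restored byte-for-byte on output.
with open('PKG-INFO', 'rb') as f:
    text = f.read().decode('ascii', 'surrogateescape')

msg = Parser().parsestr(text)

# ... read or adjust fields on msg here ...

# BytesGenerator turns the surrogates back into the original bytes.
# maxheaderlen=0 disables header wrapping so long Requires-Dist values
# come back out on a single line; mangle_from_=False leaves any
# "From " lines in the description body alone.
buf = io.BytesIO()
BytesGenerator(buf, mangle_from_=False, maxheaderlen=0).flatten(msg)
with open('PKG-INFO.out', 'wb') as f:
    f.write(buf.getvalue())

Whether that behaviour should instead live in a purpose-built policy, so that tools don't have to remember the maxheaderlen/mangle_from_ flags themselves, is exactly the open question in this thread.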