From ncoghlan at gmail.com Wed Mar 1 00:42:41 2017 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 1 Mar 2017 15:42:41 +1000 Subject: [Python-Dev] API design: where to add async variants of existing stdlib APIs? Message-ID: Short version: - there are some reasonable requests for async variants of contextlib APIs for 3.7 - prompted by Raymond, I'm thinking it actually makes more sense to add these in a new `asyncio.contextlib` module than it does to add them directly to the existing module - would anyone object strongly to my asking authors of the affected PRs to take their changes in that direction? Longer version: There are a couple of open issues requesting async variants of some contextlib APIs (asynccontextmanager and AsyncExitStack). I'm inclined to accept both of them, but Raymond raised a good question regarding our general design philosophy for these kinds of additions: would it make more sense to put these in an "asyncio.contextlib" module than it would to add them directly to contextlib itself? The main advantage I see to the idea is that if someone proposed adding an "asyncio" dependency to contextlib, I'd say no. For the existing asynccontextmanager PR, I even said no to adding that dependency to the standard contextlib test suite, and instead asked that the new tests be moved out to a separate file, so the existing tests could continue to run even if asyncio was unavailable for some reason. While rejecting the idea of an asyncio dependency isn't a problem for asyncontextmanager specifically (it's low level enough for it not to matter), it's likely to be more of a concern for the AsyncExitStack API, where the "asyncio.iscoroutinefunction" introspection API is likely to be quite helpful, as are other APIs like `asyncio.ensure_future()`. So would folks be OK with my asking the author of the PR for https://bugs.python.org/issue29679 (adding asynccontextmanager) to rewrite the patch to add it as asyncio.contextlib.asyncontextmanager (with a cross-reference from the synchronous contextlib docs), rather than the current approach of adding it directly to contextlib? Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia -------------- next part -------------- An HTML attachment was scrubbed... URL: From ethan at stoneleaf.us Wed Mar 1 01:45:58 2017 From: ethan at stoneleaf.us (Ethan Furman) Date: Tue, 28 Feb 2017 22:45:58 -0800 Subject: [Python-Dev] API design: where to add async variants of existing stdlib APIs? In-Reply-To: References: Message-ID: <58B66E26.4090300@stoneleaf.us> On 02/28/2017 09:42 PM, Nick Coghlan wrote: > So would folks be OK with my asking the author of the PR for https://bugs.python.org/issue29679 (adding > asynccontextmanager) to rewrite the patch to add it as asyncio.contextlib.asyncontextmanager (with a cross-reference > from the synchronous contextlib docs), rather than the current approach of adding it directly to contextlib? I like the idea of keep the asyncio stuff in one general location. -- ~Ethan~ From njs at pobox.com Wed Mar 1 02:16:45 2017 From: njs at pobox.com (Nathaniel Smith) Date: Tue, 28 Feb 2017 23:16:45 -0800 Subject: [Python-Dev] API design: where to add async variants of existing stdlib APIs? 
In-Reply-To: References: Message-ID: On Tue, Feb 28, 2017 at 9:42 PM, Nick Coghlan wrote: > Short version: > > - there are some reasonable requests for async variants of contextlib APIs > for 3.7 > - prompted by Raymond, I'm thinking it actually makes more sense to add > these in a new `asyncio.contextlib` module than it does to add them directly > to the existing module > - would anyone object strongly to my asking authors of the affected PRs to > take their changes in that direction? IMHO this is a good idea *iff* the new APIs really are bound to asyncio, rather than being generic across all uses of async/await. It sounds like that's not the case, though? There are definitely use cases for acontextmanager in programs that don't use asyncio at all (but rather use twisted, curio, ...). Guido's even suggested that he'd like to see a PEP for an "asyncio2" within the 3.7/3.8 timeframe: https://mail.python.org/pipermail/async-sig/2016-November/000175.html asyncio is an important use case for async/await, but it's definitely not the only one. In cases where it's possible to write generic machinery in terms of async/await semantics, without assuming any particular coroutine runner's semantics, then I strongly urge you to do so. > Longer version: > > There are a couple of open issues requesting async variants of some > contextlib APIs (asynccontextmanager and AsyncExitStack). I'm inclined to > accept both of them, but Raymond raised a good question regarding our > general design philosophy for these kinds of additions: would it make more > sense to put these in an "asyncio.contextlib" module than it would to add > them directly to contextlib itself? > > The main advantage I see to the idea is that if someone proposed adding an > "asyncio" dependency to contextlib, I'd say no. For the existing > asynccontextmanager PR, I even said no to adding that dependency to the > standard contextlib test suite, and instead asked that the new tests be > moved out to a separate file, so the existing tests could continue to run > even if asyncio was unavailable for some reason. asyncio is a stable, non-provisional part of the standard library; it's not going anywhere. Personally I wouldn't be bothered about depending on it for tests. (My async_generator library is in a similar position: it isn't tied to any particular framework, and I don't even use asyncio myself, but the test suite depends on asyncio because hey, whatever, everyone already has it and it plays the role of generic coroutine runner as well as anything else does.) OTOH if you don't need to do any I/O then it's actually pretty easy to write a trivial coroutine runner. I think something like this should be sufficient to write any test you might want: @types.coroutine def send_me(value): return yield ("value", value) @types.coroutine def throw_me(exc): yield ("error", exc) async def yield_briefly(): await send_me(None) def run(async_fn, *args, **kwargs): coro = async_fn(*args, **kwargs) next_msg = ("value", None) try: while True: if next_msg[0] == "value": next_msg = coro.send(next_msg[1]) else: next_msg = coro.throw(next_msg[1]) except StopIteration as exc: return exc.value > While rejecting the idea of an asyncio dependency isn't a problem for > asyncontextmanager specifically (it's low level enough for it not to > matter), it's likely to be more of a concern for the AsyncExitStack API, > where the "asyncio.iscoroutinefunction" introspection API is likely to be > quite helpful, as are other APIs like `asyncio.ensure_future()`. 
FYI FWIW, every time I've tried to use iscoroutinefunction so far I've ended up regretting it and ripping it out again :-). The problem is that people will do things like apply a decorator to a coroutine function, and get a wrapped function that returns a coroutine object and which is interchangeable with a real coroutine function in every way except that iscoroutinefunction returns False. And there's no collections.abc.CoroutineFunction (not sure how that would even work). Better to just call the thing and then do an isinstance(..., collections.abc.Coroutine) on the return value. I haven't had a reason to try porting ExitStack to handle async context managers yet, so I can't speak to it beyond that :-). -n -- Nathaniel J. Smith -- https://vorpus.org From victor.stinner at gmail.com Wed Mar 1 04:55:53 2017 From: victor.stinner at gmail.com (Victor Stinner) Date: Wed, 1 Mar 2017 10:55:53 +0100 Subject: [Python-Dev] API design: where to add async variants of existing stdlib APIs? In-Reply-To: References: Message-ID: Please don't put code using asyncio in Python stdlib yet. The Python language is still changing rapidly to get new async features (async/await keywords, async generators, etc.), and asyncio also evolved quickly. I suggest to create 3rd party modules on PyPI. It became easy to pull dependencies using pip and virtualenv. It seems like https://github.com/aio-libs is the home of many asyncio libraries. Victor From yselivanov.ml at gmail.com Wed Mar 1 10:34:04 2017 From: yselivanov.ml at gmail.com (Yury Selivanov) Date: Wed, 1 Mar 2017 10:34:04 -0500 Subject: [Python-Dev] API design: where to add async variants of existing stdlib APIs? In-Reply-To: References: Message-ID: <2c226953-b2ed-eb56-039b-aa3c2e20080e@gmail.com> On 2017-03-01 12:42 AM, Nick Coghlan wrote: > Short version: > > - there are some reasonable requests for async variants of contextlib APIs > for 3.7 > - prompted by Raymond, I'm thinking it actually makes more sense to add > these in a new `asyncio.contextlib` module than it does to add them > directly to the existing module > - would anyone object strongly to my asking authors of the affected PRs to > take their changes in that direction? Both asynccontextmanager and AsyncExitStack do not require asyncio is their implementations. Using asyncio as a helper to write tests is totally OK. For example, I use asyncio to test asynchronous generators (PEP 525). async/await is a generic language feature; asyncio is a framework that uses it. Things like asynccontextmanager are framework agnostic, they can be used in programs built with asyncio, Twisted, Tornado, etc. +1 to put both in contextlib. Yury From steve.dower at python.org Wed Mar 1 10:43:31 2017 From: steve.dower at python.org (Steve Dower) Date: Wed, 1 Mar 2017 07:43:31 -0800 Subject: [Python-Dev] API design: where to add async variants ofexisting stdlib APIs? In-Reply-To: References: Message-ID: Big +1 here, and an implicit -1 on the other suggestions. asyncio != async/await Cheers, Steve Top-posted from my Windows Phone -----Original Message----- From: "Nathaniel Smith" Sent: ?2/?28/?2017 23:19 To: "Nick Coghlan" Cc: "python-dev at python.org" Subject: Re: [Python-Dev] API design: where to add async variants ofexisting stdlib APIs? 
On Tue, Feb 28, 2017 at 9:42 PM, Nick Coghlan wrote: > Short version: > > - there are some reasonable requests for async variants of contextlib APIs > for 3.7 > - prompted by Raymond, I'm thinking it actually makes more sense to add > these in a new `asyncio.contextlib` module than it does to add them directly > to the existing module > - would anyone object strongly to my asking authors of the affected PRs to > take their changes in that direction? IMHO this is a good idea *iff* the new APIs really are bound to asyncio, rather than being generic across all uses of async/await. It sounds like that's not the case, though? There are definitely use cases for acontextmanager in programs that don't use asyncio at all (but rather use twisted, curio, ...). Guido's even suggested that he'd like to see a PEP for an "asyncio2" within the 3.7/3.8 timeframe: https://mail.python.org/pipermail/async-sig/2016-November/000175.html asyncio is an important use case for async/await, but it's definitely not the only one. In cases where it's possible to write generic machinery in terms of async/await semantics, without assuming any particular coroutine runner's semantics, then I strongly urge you to do so. > Longer version: > > There are a couple of open issues requesting async variants of some > contextlib APIs (asynccontextmanager and AsyncExitStack). I'm inclined to > accept both of them, but Raymond raised a good question regarding our > general design philosophy for these kinds of additions: would it make more > sense to put these in an "asyncio.contextlib" module than it would to add > them directly to contextlib itself? > > The main advantage I see to the idea is that if someone proposed adding an > "asyncio" dependency to contextlib, I'd say no. For the existing > asynccontextmanager PR, I even said no to adding that dependency to the > standard contextlib test suite, and instead asked that the new tests be > moved out to a separate file, so the existing tests could continue to run > even if asyncio was unavailable for some reason. asyncio is a stable, non-provisional part of the standard library; it's not going anywhere. Personally I wouldn't be bothered about depending on it for tests. (My async_generator library is in a similar position: it isn't tied to any particular framework, and I don't even use asyncio myself, but the test suite depends on asyncio because hey, whatever, everyone already has it and it plays the role of generic coroutine runner as well as anything else does.) OTOH if you don't need to do any I/O then it's actually pretty easy to write a trivial coroutine runner. I think something like this should be sufficient to write any test you might want: @types.coroutine def send_me(value): return yield ("value", value) @types.coroutine def throw_me(exc): yield ("error", exc) async def yield_briefly(): await send_me(None) def run(async_fn, *args, **kwargs): coro = async_fn(*args, **kwargs) next_msg = ("value", None) try: while True: if next_msg[0] == "value": next_msg = coro.send(next_msg[1]) else: next_msg = coro.throw(next_msg[1]) except StopIteration as exc: return exc.value > While rejecting the idea of an asyncio dependency isn't a problem for > asyncontextmanager specifically (it's low level enough for it not to > matter), it's likely to be more of a concern for the AsyncExitStack API, > where the "asyncio.iscoroutinefunction" introspection API is likely to be > quite helpful, as are other APIs like `asyncio.ensure_future()`. 
FYI FWIW, every time I've tried to use iscoroutinefunction so far I've ended up regretting it and ripping it out again :-). The problem is that people will do things like apply a decorator to a coroutine function, and get a wrapped function that returns a coroutine object and which is interchangeable with a real coroutine function in every way except that iscoroutinefunction returns False. And there's no collections.abc.CoroutineFunction (not sure how that would even work). Better to just call the thing and then do an isinstance(..., collections.abc.Coroutine) on the return value. I haven't had a reason to try porting ExitStack to handle async context managers yet, so I can't speak to it beyond that :-). -n -- Nathaniel J. Smith -- https://vorpus.org _______________________________________________ Python-Dev mailing list Python-Dev at python.org https://mail.python.org/mailman/listinfo/python-dev Unsubscribe: https://mail.python.org/mailman/options/python-dev/steve.dower%40python.org -------------- next part -------------- An HTML attachment was scrubbed... URL: From yselivanov.ml at gmail.com Wed Mar 1 10:47:55 2017 From: yselivanov.ml at gmail.com (Yury Selivanov) Date: Wed, 1 Mar 2017 10:47:55 -0500 Subject: [Python-Dev] API design: where to add async variants of existing stdlib APIs? In-Reply-To: References: Message-ID: <6d7d77b5-c008-c94a-348c-34e41a82c97e@gmail.com> On 2017-03-01 2:16 AM, Nathaniel Smith wrote: > On Tue, Feb 28, 2017 at 9:42 PM, Nick Coghlan wrote: >> Short version: >> >> - there are some reasonable requests for async variants of contextlib APIs >> for 3.7 >> - prompted by Raymond, I'm thinking it actually makes more sense to add >> these in a new `asyncio.contextlib` module than it does to add them directly >> to the existing module >> - would anyone object strongly to my asking authors of the affected PRs to >> take their changes in that direction? > IMHO this is a good idea*iff* the new APIs really are bound to > asyncio, rather than being generic across all uses of async/await. I agree. There is no need to make asynccontextmanager and AsyncExitStack dependent on asyncio or specific to asyncio. They should both stay framework agnostic (use only protocols defined by PEP 492 and PEP 525) and both shouldn't be put into asyncio package. Yury From p.f.moore at gmail.com Wed Mar 1 10:51:07 2017 From: p.f.moore at gmail.com (Paul Moore) Date: Wed, 1 Mar 2017 15:51:07 +0000 Subject: [Python-Dev] API design: where to add async variants of existing stdlib APIs? In-Reply-To: <2c226953-b2ed-eb56-039b-aa3c2e20080e@gmail.com> References: <2c226953-b2ed-eb56-039b-aa3c2e20080e@gmail.com> Message-ID: On 1 March 2017 at 15:34, Yury Selivanov wrote: > +1 to put both in contextlib. With the proviso that the implementation shouldn't depend on asyncio. As Yury says, it should be framework agnostic, let's be careful to make that the case and not rely on helpers from asyncio, either deliberately or accidentally. If writing framework-agnostic versions is difficult, maybe that implies that some framework-agnostic helpers need to be moved out of asyncio? Paul From sporel at tronc.com Wed Mar 1 07:24:44 2017 From: sporel at tronc.com (Porel, Subrata) Date: Wed, 1 Mar 2017 12:24:44 +0000 Subject: [Python-Dev] Python-3.6.0 compilation error Message-ID: Hi Team, While trying to compile the Python-3.6.0 in our Solaris 10 server to install this beside python 2.6.4 we have got below error: Please help on this. 
root at xxyyzz:/opt/Python-3.6.0#python Python 2.6.4 (r264:75706, Jun 27 2012, 05:45:50) [C] on sunos5 Type "help", "copyright", "credits" or "license" for more information. >>> ^D root at xxyyzz:/opt/Python-3.6.0#make test gcc -o Programs/_freeze_importlib Programs/_freeze_importlib.o Modules/getbuildinfo.o Parser/acceler.o Parser/grammar1.o Parser/listnode.o Parser/node.o Parser/parser.o Parser/bitset.o Parser/metagrammar.o Parser/firstsets.o Parser/grammar.o Parser/pgen.o Parser/myreadline.o Parser/parsetok.o Parser/tokenizer.o Objects/abstract.o Objects/accu.o Objects/boolobject.o Objects/bytes_methods.o Objects/bytearrayobject.o Objects/bytesobject.o Objects/cellobject.o Objects/classobject.o Objects/codeobject.o Objects/complexobject.o Objects/descrobject.o Objects/enumobject.o Objects/exceptions.o Objects/genobject.o Objects/fileobject.o Objects/floatobject.o Objects/frameobject.o Objects/funcobject.o Objects/iterobject.o Objects/listobject.o Objects/longobject.o Objects/dictobject.o Objects/odictobject.o Objects/memoryobject.o Objects/methodobject.o Objects/moduleobject.o Objects/namespaceobject.o Objects/object.o Objects/obmalloc.o Objects/capsule.o Objects/rangeobject.o Objects/setobject.o Objects/sliceobject.o Objects/structseq.o Objects/tupleobject.o Objects/typeobject.o Objects/unicodeobject.o Objects/unicodectype.o Objects/weakrefobject.o Python/_warnings.o Python/Python-ast.o Python/asdl.o Python/ast.o Python/bltinmodule.o Python/ceval.o Python/compile.o Python/codecs.o Python/dynamic_annotations.o Python/errors.o Python/frozenmain.o Python/future.o Python/getargs.o Python/getcompiler.o Python/getcopyright.o Python/getplatform.o Python/getversion.o Python/graminit.o Python/import.o Python/importdl.o Python/marshal.o Python/modsupport.o Python/mystrtoul.o Python/mysnprintf.o Python/peephole.o Python/pyarena.o Python/pyctype.o Python/pyfpe.o Python/pyhash.o Python/pylifecycle.o Python/pymath.o Python/pystate.o Python/pythonrun.o Python/pytime.o Python/random.o Python/structmember.o Python/symtable.o Python/sysmodule.o Python/traceback.o Python/getopt.o Python/pystrcmp.o Python/pystrtod.o Python/pystrhex.o Python/dtoa.o Python/formatter_unicode.o Python/fileutils.o Python/dynload_shlib.o Python/thread.o Modules/config.o Modules/getpath.o Modules/main.o Modules/gcmodule.o Modules/_threadmodule.o Modules/posixmodule.o Modules/errnomodule.o Modules/pwdmodule.o Modules/_sre.o Modules/_codecsmodule.o Modules/_weakref.o Modules/_functoolsmodule.o Modules/_operator.o Modules/_collectionsmodule.o Modules/itertoolsmodule.o Modules/atexitmodule.o Modules/signalmodule.o Modules/_stat.o Modules/timemodule.o Modules/_localemodule.o Modules/_iomodule.o Modules/iobase.o Modules/fileio.o Modules/bytesio.o Modules/bufferedio.o Modules/textio.o Modules/stringio.o Modules/zipimport.o Modules/faulthandler.o Modules/_tracemalloc.o Modules/hashtable.o Modules/symtablemodule.o Modules/xxsubtype.o -lsocket -lnsl -lintl -lrt -ldl -lsendfile -lm Undefined first referenced symbol in file libintl_bind_textdomain_codeset Modules/_localemodule.o libintl_gettext Modules/_localemodule.o libintl_textdomain Modules/_localemodule.o libintl_dcgettext Modules/_localemodule.o libintl_bindtextdomain Modules/_localemodule.o libintl_dgettext Modules/_localemodule.o ld: fatal: symbol referencing errors. 
No output written to Programs/_freeze_importlib collect2: ld returned 1 exit status *** Error code 1 make: Fatal error: Command failed for target `Programs/_freeze_importlib' root at xxyyzz:/opt/Python-3.6.0#gcc gcc: no input files root at xxyyzz:/opt/Python-3.6.0#which gcc /usr/sfw/bin/gcc root at fcudrpt01:/opt/Python-3.6.0#gcc --version gcc (GCC) 3.4.3 (csl-sol210-3_4-branch+sol_rpath) Copyright (C) 2004 Free Software Foundation, Inc. This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. Regards, Subrata Porel Tronc TCS UNIX Admin TCS Gitanjali Park, Kolkata, India Email: sporel at tronc.com Mobile: +91 798 036 7240 Work Phone: +1 321 270 7685 -------------- next part -------------- An HTML attachment was scrubbed... URL: From barry at python.org Wed Mar 1 11:29:33 2017 From: barry at python.org (Barry Warsaw) Date: Wed, 1 Mar 2017 11:29:33 -0500 Subject: [Python-Dev] API design: where to add async variants of existing stdlib APIs? In-Reply-To: References: Message-ID: <20170301112933.273342e9@subdivisions.wooz.org> On Mar 01, 2017, at 10:55 AM, Victor Stinner wrote: >I suggest to create 3rd party modules on PyPI. It became easy to pull >dependencies using pip and virtualenv. > >It seems like https://github.com/aio-libs is the home of many asyncio >libraries. This is what we did for aiosmtpd, an asyncio-based replacement for smtpd. It's worked out great on all fronts so far (good community contributions, rapid development, API flexibility as we move toward 1.0, good visibility inside the more general aio-libs umbrella). Cheers, -Barry From barry at python.org Wed Mar 1 12:28:24 2017 From: barry at python.org (Barry Warsaw) Date: Wed, 1 Mar 2017 12:28:24 -0500 Subject: [Python-Dev] Help requested with Python 2.7 performance regression Message-ID: <20170301122824.3f1f3689@subdivisions.wooz.org> Hello all, Over in Ubuntu, we've gotten reports about some performance regressions in Python 2.7 when moving from Trusty (14.04 LTS) to Xenial (16.04 LTS). Trusty's version is based on 2.7.6 while Xenial's version is based on 2.7.12 with bits of .13 cherry picked. We've not been able to identify any change in Python itself (or the Debian/Ubuntu deltas) which could account for this, so the investigation has led to various gcc compiler options and version differences. In particular disabling LTO (link-time optimization) seems to have a positive impact, but doesn't completely regain the loss. Louis (Cc'd here) has done a ton of work to measure and analyze the problem, but we've more or less hit a roadblock, so we're taking the issue public to see if anybody on this mailing list has further ideas. A detailed analysis is available in this Google doc: https://docs.google.com/document/d/1zrV3OIRSo99fd2Ty4YdGk_scmTRDmVauBprKL8eij6w/edit The document should be public for comments and editing. If you have any thoughts, or other lines of investigation you think are worthwhile pursuing, please add your comments to the document. Cheers, -Barry -------------- next part -------------- A non-text attachment was scrubbed... 
Name: not available Type: application/pgp-signature Size: 801 bytes Desc: OpenPGP digital signature URL: From solipsis at pitrou.net Wed Mar 1 12:51:13 2017 From: solipsis at pitrou.net (Antoine Pitrou) Date: Wed, 1 Mar 2017 18:51:13 +0100 Subject: [Python-Dev] Help requested with Python 2.7 performance regression References: <20170301122824.3f1f3689@subdivisions.wooz.org> Message-ID: <20170301185113.4e557c1f@fsol> On Wed, 1 Mar 2017 12:28:24 -0500 Barry Warsaw wrote: > > Louis (Cc'd here) has done a ton of work to measure and analyze the problem, > but we've more or less hit a roadblock, so we're taking the issue public to > see if anybody on this mailing list has further ideas. A detailed analysis is > available in this Google doc: > > https://docs.google.com/document/d/1zrV3OIRSo99fd2Ty4YdGk_scmTRDmVauBprKL8eij6w/edit > > The document should be public for comments and editing. I may be misunderstanding the document, but this lacks at least a comparison of the *same* interpreter version with different compiler versions. As for the high level: what if the training set used for PGO in Xenial has become skewed or inadequate? Just a thought, as it would imply that PGO+LTO uses wrong information for code placement and other optimizations. Regards Antoine. From brett at python.org Wed Mar 1 13:33:07 2017 From: brett at python.org (Brett Cannon) Date: Wed, 01 Mar 2017 18:33:07 +0000 Subject: [Python-Dev] Python-3.6.0 compilation error In-Reply-To: References: Message-ID: This mailing list is for the development **of** Python, not **with** it. For build failures like this it's best to ask on python-list (and if you search for the missing libintl_* symbols you will find they come from gettext). On Wed, 1 Mar 2017 at 08:10 Porel, Subrata wrote: > Hi Team, > > > > While trying to compile the Python-3.6.0 in our Solaris 10 server to > install this beside python 2.6.4 we have got below error: Please help on > this. > > > > root at xxyyzz:/opt/Python-3.6.0#python > > Python 2.6.4 (r264:75706, Jun 27 2012, 05:45:50) [C] on sunos5 > > Type "help", "copyright", "credits" or "license" for more information. 
> > >>> ^D > > root at xxyyzz:/opt/Python-3.6.0#make test > > gcc -o Programs/_freeze_importlib Programs/_freeze_importlib.o > Modules/getbuildinfo.o Parser/acceler.o Parser/grammar1.o > Parser/listnode.o Parser/node.o Parser/parser.o Parser/bitset.o > Parser/metagrammar.o Parser/firstsets.o Parser/grammar.o Parser/pgen.o > Parser/myreadline.o Parser/parsetok.o Parser/tokenizer.o > Objects/abstract.o Objects/accu.o Objects/boolobject.o > Objects/bytes_methods.o Objects/bytearrayobject.o Objects/bytesobject.o > Objects/cellobject.o Objects/classobject.o Objects/codeobject.o > Objects/complexobject.o Objects/descrobject.o Objects/enumobject.o > Objects/exceptions.o Objects/genobject.o Objects/fileobject.o > Objects/floatobject.o Objects/frameobject.o Objects/funcobject.o > Objects/iterobject.o Objects/listobject.o Objects/longobject.o > Objects/dictobject.o Objects/odictobject.o Objects/memoryobject.o > Objects/methodobject.o Objects/moduleobject.o Objects/namespaceobject.o > Objects/object.o Objects/obmalloc.o Objects/capsule.o > Objects/rangeobject.o Objects/setobject.o Objects/sliceobject.o > Objects/structseq.o Objects/tupleobject.o Objects/typeobject.o > Objects/unicodeobject.o Objects/unicodectype.o Objects/weakrefobject.o > Python/_warnings.o Python/Python-ast.o Python/asdl.o Python/ast.o > Python/bltinmodule.o Python/ceval.o Python/compile.o Python/codecs.o > Python/dynamic_annotations.o Python/errors.o Python/frozenmain.o > Python/future.o Python/getargs.o Python/getcompiler.o > Python/getcopyright.o Python/getplatform.o Python/getversion.o > Python/graminit.o Python/import.o Python/importdl.o Python/marshal.o > Python/modsupport.o Python/mystrtoul.o Python/mysnprintf.o > Python/peephole.o Python/pyarena.o Python/pyctype.o Python/pyfpe.o > Python/pyhash.o Python/pylifecycle.o Python/pymath.o Python/pystate.o > Python/pythonrun.o Python/pytime.o Python/random.o > Python/structmember.o Python/symtable.o Python/sysmodule.o > Python/traceback.o Python/getopt.o Python/pystrcmp.o Python/pystrtod.o > Python/pystrhex.o Python/dtoa.o Python/formatter_unicode.o > Python/fileutils.o Python/dynload_shlib.o Python/thread.o > Modules/config.o Modules/getpath.o Modules/main.o Modules/gcmodule.o > Modules/_threadmodule.o Modules/posixmodule.o Modules/errnomodule.o > Modules/pwdmodule.o Modules/_sre.o Modules/_codecsmodule.o > Modules/_weakref.o Modules/_functoolsmodule.o Modules/_operator.o > Modules/_collectionsmodule.o Modules/itertoolsmodule.o > Modules/atexitmodule.o Modules/signalmodule.o Modules/_stat.o > Modules/timemodule.o Modules/_localemodule.o Modules/_iomodule.o > Modules/iobase.o Modules/fileio.o Modules/bytesio.o Modules/bufferedio.o > Modules/textio.o Modules/stringio.o Modules/zipimport.o > Modules/faulthandler.o Modules/_tracemalloc.o Modules/hashtable.o > Modules/symtablemodule.o Modules/xxsubtype.o -lsocket -lnsl -lintl -lrt > -ldl -lsendfile -lm > > Undefined first referenced > > symbol in file > > libintl_bind_textdomain_codeset Modules/_localemodule.o > > libintl_gettext Modules/_localemodule.o > > libintl_textdomain Modules/_localemodule.o > > libintl_dcgettext Modules/_localemodule.o > > libintl_bindtextdomain Modules/_localemodule.o > > libintl_dgettext Modules/_localemodule.o > > ld: fatal: symbol referencing errors. 
No output written to > Programs/_freeze_importlib > > collect2: ld returned 1 exit status > > *** Error code 1 > > make: Fatal error: Command failed for target `Programs/_freeze_importlib' > > root at xxyyzz:/opt/Python-3.6.0#gcc > > gcc: no input files > > root at xxyyzz:/opt/Python-3.6.0#which gcc > > /usr/sfw/bin/gcc > > root at fcudrpt01:/opt/Python-3.6.0#gcc --version > > gcc (GCC) 3.4.3 (csl-sol210-3_4-branch+sol_rpath) > > Copyright (C) 2004 Free Software Foundation, Inc. > > This is free software; see the source for copying conditions. There is NO > > warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. > > > > > > Regards, > > Subrata Porel > > Tronc TCS UNIX Admin > > TCS Gitanjali Park, Kolkata, India > Email: sporel at tronc.com > Mobile: +91 798 036 7240 <+91%2079803%2067240> > Work Phone: +1 321 270 7685 <(321)%20270-7685> > > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/brett%40python.org > -------------- next part -------------- An HTML attachment was scrubbed... URL: From doko at ubuntu.com Wed Mar 1 13:58:14 2017 From: doko at ubuntu.com (Matthias Klose) Date: Wed, 1 Mar 2017 19:58:14 +0100 Subject: [Python-Dev] Help requested with Python 2.7 performance regression In-Reply-To: <20170301185113.4e557c1f@fsol> References: <20170301122824.3f1f3689@subdivisions.wooz.org> <20170301185113.4e557c1f@fsol> Message-ID: <9339e18d-90d4-4b84-5584-d043f19a686d@ubuntu.com> On 01.03.2017 18:51, Antoine Pitrou wrote: > As for the high level: what if the training set used for PGO in Xenial > has become skewed or inadequate? running the testsuite From solipsis at pitrou.net Wed Mar 1 14:07:14 2017 From: solipsis at pitrou.net (Antoine Pitrou) Date: Wed, 1 Mar 2017 20:07:14 +0100 Subject: [Python-Dev] Help requested with Python 2.7 performance regression References: <20170301122824.3f1f3689@subdivisions.wooz.org> <20170301185113.4e557c1f@fsol> <9339e18d-90d4-4b84-5584-d043f19a686d@ubuntu.com> Message-ID: <20170301200714.687009ba@fsol> On Wed, 1 Mar 2017 19:58:14 +0100 Matthias Klose wrote: > On 01.03.2017 18:51, Antoine Pitrou wrote: > > As for the high level: what if the training set used for PGO in Xenial > > has become skewed or inadequate? > > running the testsuite I did some tests a year or two ago, and running the whole test suite is not a good idea, as coverage varies wildly from one functionality to the other, so PGO will not infer the right information from it. You don't get very good benchmark results from it. (for example, decimal has an extensive test suite which might lead PGO to believe that code paths exercised by the decimal module are the hottest ones) Regards Antoine. From louis.bouchard at canonical.com Wed Mar 1 14:24:03 2017 From: louis.bouchard at canonical.com (Louis Bouchard) Date: Wed, 1 Mar 2017 20:24:03 +0100 Subject: [Python-Dev] Help requested with Python 2.7 performance regression In-Reply-To: <20170301185113.4e557c1f@fsol> References: <20170301122824.3f1f3689@subdivisions.wooz.org> <20170301185113.4e557c1f@fsol> Message-ID: Hello, Le 01/03/2017 ? 18:51, Antoine Pitrou a ?crit : > On Wed, 1 Mar 2017 12:28:24 -0500 > Barry Warsaw wrote: >> >> Louis (Cc'd here) has done a ton of work to measure and analyze the problem, >> but we've more or less hit a roadblock, so we're taking the issue public to >> see if anybody on this mailing list has further ideas. 
A detailed analysis is >> available in this Google doc: >> >> https://docs.google.com/document/d/1zrV3OIRSo99fd2Ty4YdGk_scmTRDmVauBprKL8eij6w/edit >> >> The document should be public for comments and editing. > > I may be misunderstanding the document, but this lacks at least a > comparison of the *same* interpreter version with different compiler > versions. > > As for the high level: what if the training set used for PGO in Xenial > has become skewed or inadequate? Just a thought, as it would imply > that PGO+LTO uses wrong information for code placement and other > optimizations. > > Regards > > Antoine. > Indeed, this is something that is in the history of the LP bug so here is the URL where those comparison can be found : https://docs.google.com/spreadsheets/d/1MyNBPVZlBeic1OLqVKe_bcPk2deO_pQs9trIfOFefM0/edit#gid=2034603487 Hope it can help, Kind regards, ...Louis -- Louis Bouchard Software engineer, Cloud & Sustaining eng. Canonical Ltd Ubuntu developer Debian Maintainer GPG : 429D 7A3B DD05 B6F8 AF63 B9C4 8B3D 867C 823E 7A61 From solipsis at pitrou.net Wed Mar 1 14:40:01 2017 From: solipsis at pitrou.net (Antoine Pitrou) Date: Wed, 1 Mar 2017 20:40:01 +0100 Subject: [Python-Dev] Help requested with Python 2.7 performance regression In-Reply-To: References: <20170301122824.3f1f3689@subdivisions.wooz.org> <20170301185113.4e557c1f@fsol> Message-ID: <20170301204001.580e8870@fsol> On Wed, 1 Mar 2017 20:24:03 +0100 Louis Bouchard wrote: > > Indeed, this is something that is in the history of the LP bug so here is the > URL where those comparison can be found : > > https://docs.google.com/spreadsheets/d/1MyNBPVZlBeic1OLqVKe_bcPk2deO_pQs9trIfOFefM0/edit#gid=2034603487 Some more questions: * what does "faster" or "slower" mean (that is, which one is faster)? * is it possible to have actual performance differences in percent? being 2% slower is not the same as being 30% slower... Regards Antoine. From victor.stinner at gmail.com Wed Mar 1 16:00:46 2017 From: victor.stinner at gmail.com (Victor Stinner) Date: Wed, 1 Mar 2017 22:00:46 +0100 Subject: [Python-Dev] Help requested with Python 2.7 performance regression In-Reply-To: <20170301122824.3f1f3689@subdivisions.wooz.org> References: <20170301122824.3f1f3689@subdivisions.wooz.org> Message-ID: Hi, Your document doesn't explain how you configured the host to run benchmarks. Maybe you didn't tune Linux or anything else? Be careful with modern hardware which can make funny (or not) surprises. See my recent talk at FOSDEM (last month): "How to run a stable benchmark" https://fosdem.org/2017/schedule/event/python_stable_benchmark/ Factors impacting Python benchmarks: * Linux Address Space Layout Randomization (ASRL), /proc/sys/kernel/randomize_va_space * Python random hash function: PYTHONHASHSEED * Command line arguments and environmnet variables: enabling ASLR helps here (?) * CPU power saving and performance features: disable Intel Turbo Boost and/or use a fixed CPU frequency. * Temperature: temperature has a limited impact on benchmarks. If the CPU is below 95?C, Intel CPUs still run at full speed. With a correct cooling system, temperature is not an issue. * Linux perf probes: /proc/sys/kernel/perf_event_max_sample_rate * Code locality, CPU L1 instruction cache (L1c): Profiled Guided Optimization (PGO) helps here * Other processes and the kernel, CPU isolation (CPU pinning) helps here: use isolcpus=cpu_list and rcu_nocbs=cpu_list on the * Linux kernel command line * ... Reboot? Sadly, other unknown factors may still impact benchmarks. 
Sometimes, it helps to reboot to restore standard performances. https://haypo-notes.readthedocs.io/microbenchmark.html#factors-impacting-benchmarks Victor From louis.bouchard at canonical.com Wed Mar 1 16:04:07 2017 From: louis.bouchard at canonical.com (Louis Bouchard) Date: Wed, 1 Mar 2017 22:04:07 +0100 Subject: [Python-Dev] Help requested with Python 2.7 performance regression In-Reply-To: <20170301204001.580e8870@fsol> References: <20170301122824.3f1f3689@subdivisions.wooz.org> <20170301185113.4e557c1f@fsol> <20170301204001.580e8870@fsol> Message-ID: <10ecc00b-ac2d-1206-4a58-496e32a09a6e@canonical.com> Hello, Le 01/03/2017 ? 20:40, Antoine Pitrou a ?crit : > On Wed, 1 Mar 2017 20:24:03 +0100 > Louis Bouchard wrote: >> >> Indeed, this is something that is in the history of the LP bug so here is the >> URL where those comparison can be found : >> >> https://docs.google.com/spreadsheets/d/1MyNBPVZlBeic1OLqVKe_bcPk2deO_pQs9trIfOFefM0/edit#gid=2034603487 > > Some more questions: > * what does "faster" or "slower" mean (that is, which one is faster)? > * is it possible to have actual performance differences in percent? > being 2% slower is not the same as being 30% slower... > This means that the second element of the test is slower than the first. For instance if the test is Trusty stock .vs. Xenial stock and it shows slower, it means that Xenial stock is slower than Trusty stock. This is directly taken from the output of "pyperformance compare". The third column of each comparison (1.x) gives the proportion figure of the test. A test that shows slower 1.14 is 14% slower. HTH, Kind regards, ...Louis -- Louis Bouchard Software engineer, Cloud & Sustaining eng. Canonical Ltd Ubuntu developer Debian Maintainer GPG : 429D 7A3B DD05 B6F8 AF63 B9C4 8B3D 867C 823E 7A61 From ncoghlan at gmail.com Wed Mar 1 20:20:08 2017 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 2 Mar 2017 11:20:08 +1000 Subject: [Python-Dev] API design: where to add async variants of existing stdlib APIs? In-Reply-To: <20170301112933.273342e9@subdivisions.wooz.org> References: <20170301112933.273342e9@subdivisions.wooz.org> Message-ID: On 2 March 2017 at 02:29, Barry Warsaw wrote: > On Mar 01, 2017, at 10:55 AM, Victor Stinner wrote: > > >I suggest to create 3rd party modules on PyPI. It became easy to pull > >dependencies using pip and virtualenv. > > > >It seems like https://github.com/aio-libs is the home of many asyncio > >libraries. > > This is what we did for aiosmtpd, an asyncio-based replacement for smtpd. > It's worked out great on all fronts so far (good community contributions, > rapid development, API flexibility as we move toward 1.0, good visibility > inside the more general aio-libs umbrella). > While I agree with this approach for higher level stuff, it's specifically the lower level pieces that just interact with the async/await language features rather than the event loop itself where I needed some discussion to clarify my own thoughts :) My conclusion from the thread is: - if it needs to depend on asyncio, it should either go in asyncio, or be published as a third party aio-lib - if it *doesn't* need to depend on asyncio, then it's a candidate for stdlib inclusion (e.g. 
the coroutine support in inspect) - both asynccontextmanager and AsyncExitStack actually fall into the latter category - other contextlib APIs like closing() should be able to transparently support both the sync and async variants of the CM protocol without negatively affecting the synchronous version - so for the specific case of contextlib, supporting both synchronous and asynchronous contexts in the one module makes sense - I still plan to keep the test cases separate, since the async test cases need more infrastructure than the synchronous ones What we shouldn't do is take this design decision as setting a binding precedent for any other modules like itertools - the trade-offs there are going to be different, and there are already third party modules like https://github.com/asyncdef/aitertools that provide equivalent APIs for the asynchronous programming model. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia -------------- next part -------------- An HTML attachment was scrubbed... URL: From songofacandy at gmail.com Thu Mar 2 04:11:27 2017 From: songofacandy at gmail.com (INADA Naoki) Date: Thu, 2 Mar 2017 18:11:27 +0900 Subject: [Python-Dev] Help requested with Python 2.7 performance regression In-Reply-To: <20170301200714.687009ba@fsol> References: <20170301122824.3f1f3689@subdivisions.wooz.org> <20170301185113.4e557c1f@fsol> <9339e18d-90d4-4b84-5584-d043f19a686d@ubuntu.com> <20170301200714.687009ba@fsol> Message-ID: On Thu, Mar 2, 2017 at 4:07 AM, Antoine Pitrou wrote: > On Wed, 1 Mar 2017 19:58:14 +0100 > Matthias Klose wrote: >> On 01.03.2017 18:51, Antoine Pitrou wrote: >> > As for the high level: what if the training set used for PGO in Xenial >> > has become skewed or inadequate? >> >> running the testsuite > > I did some tests a year or two ago, and running the whole test suite is > not a good idea, as coverage varies wildly from one functionality to the > other, so PGO will not infer the right information from it. You don't > get very good benchmark results from it. > > (for example, decimal has an extensive test suite which might lead PGO > to believe that code paths exercised by the decimal module are the > hottest ones) > > Regards > > Antoine. > FYI, there are "profile-opt" make target. It uses subset of regrtest. https://github.com/python/cpython/blob/2.7/Makefile.pre.in#L211-L214 Does Ubuntu (and Debian) use it? From greg at krypto.org Thu Mar 2 15:59:52 2017 From: greg at krypto.org (Gregory P. Smith) Date: Thu, 02 Mar 2017 20:59:52 +0000 Subject: [Python-Dev] Help requested with Python 2.7 performance regression In-Reply-To: References: <20170301122824.3f1f3689@subdivisions.wooz.org> <20170301185113.4e557c1f@fsol> <9339e18d-90d4-4b84-5584-d043f19a686d@ubuntu.com> <20170301200714.687009ba@fsol> Message-ID: We updated profile-opt to use the testsuite subset based on what distros had already been using for their training runs. As for the comment about the test suite not being good for training.... Mostly a myth. The test suite exercises the ceval loop well as well as things like re and json sufficiently to be a lot better than stupid workloads such as pybench (the previous default training run). Room for improvement in training? Likely in some corners. But I have yet to see anyone propose any evidence based patch as a workload that reliably improves on anything for PGO over what we train with today. 
-gpshead On Thu, Mar 2, 2017, 1:12 AM INADA Naoki wrote: > On Thu, Mar 2, 2017 at 4:07 AM, Antoine Pitrou > wrote: > > On Wed, 1 Mar 2017 19:58:14 +0100 > > Matthias Klose wrote: > >> On 01.03.2017 18:51, Antoine Pitrou wrote: > >> > As for the high level: what if the training set used for PGO in Xenial > >> > has become skewed or inadequate? > >> > >> running the testsuite > > > > I did some tests a year or two ago, and running the whole test suite is > > not a good idea, as coverage varies wildly from one functionality to the > > other, so PGO will not infer the right information from it. You don't > > get very good benchmark results from it. > > > > (for example, decimal has an extensive test suite which might lead PGO > > to believe that code paths exercised by the decimal module are the > > hottest ones) > > > > Regards > > > > Antoine. > > > > FYI, there are "profile-opt" make target. It uses subset of regrtest. > https://github.com/python/cpython/blob/2.7/Makefile.pre.in#L211-L214 > > Does Ubuntu (and Debian) use it? > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/greg%40krypto.org > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ethan at stoneleaf.us Thu Mar 2 19:13:17 2017 From: ethan at stoneleaf.us (Ethan Furman) Date: Thu, 02 Mar 2017 16:13:17 -0800 Subject: [Python-Dev] Enum conversions in the stdlib Message-ID: <58B8B51D.1030700@stoneleaf.us> There are a few modules that have had their constants redefined as Enums, such as signal, which has revealed a minor nit: >>> pp(list(signal.Signals)) [, , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , ] The resulting enumeration is neither in alpha nor value order. While this has no bearing on programmatic usage I would like these Enums to be ordered, preferably by value. Would anyone prefer lexicographical ordering, and if so, why? -- ~Ethan~ From ethan at stoneleaf.us Thu Mar 2 21:36:38 2017 From: ethan at stoneleaf.us (Ethan Furman) Date: Thu, 02 Mar 2017 18:36:38 -0800 Subject: [Python-Dev] Fwd: Re: Enum conversions in the stdlib In-Reply-To: References: Message-ID: <58B8D6B6.4020908@stoneleaf.us> I strongly prefer numeric order for signals. --Guido (mobile) From alexander.belopolsky at gmail.com Thu Mar 2 21:49:54 2017 From: alexander.belopolsky at gmail.com (Alexander Belopolsky) Date: Thu, 2 Mar 2017 21:49:54 -0500 Subject: [Python-Dev] Fwd: Re: Enum conversions in the stdlib In-Reply-To: <58B8D6B6.4020908@stoneleaf.us> References: <58B8D6B6.4020908@stoneleaf.us> Message-ID: > I strongly prefer numeric order for signals. > > --Guido (mobile) +1 Numerical values of UNIX signals are often more widely known than their names. For example, every UNIX user knows what signal 9 does. From qingyun.tao at tophant.com Thu Mar 2 23:51:59 2017 From: qingyun.tao at tophant.com (Tao Qingyun) Date: Fri, 3 Mar 2017 12:51:59 +0800 Subject: [Python-Dev] why multiprocessing use os._exit Message-ID: <20170303045159.GA9348@taoqy-P> in multiprocessing/forking.py#129, `os._exit` cause child process don't close open file. For example: ``` from multiprocessing import Process def f(): global log # prevent gc close the file log = open("info.log", "w") log.write("***hello world***\n") p = Process(target=f) p.start() p.join() ``` and the `info.log` will be empty. why not use sys.exit ? 
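For what it's worth, the data does reach the file if the child closes (or at least flushes) it before the target function returns: `os._exit` terminates the forked child without running the interpreter's normal shutdown, so anything still sitting in the file object's buffer is dropped. A sketch of that workaround, reusing the example above with a `with` block:

```
from multiprocessing import Process

def f():
    # closing the file before f() returns flushes the buffer, so the
    # write survives even though the child later exits via os._exit()
    with open("info.log", "w") as log:
        log.write("***hello world***\n")

p = Process(target=f)
p.start()
p.join()
```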
Thanks From ncoghlan at gmail.com Fri Mar 3 02:27:08 2017 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 3 Mar 2017 17:27:08 +1000 Subject: [Python-Dev] Help requested with Python 2.7 performance regression In-Reply-To: References: <20170301122824.3f1f3689@subdivisions.wooz.org> Message-ID: On 2 March 2017 at 07:00, Victor Stinner wrote: > Hi, > > Your document doesn't explain how you configured the host to run > benchmarks. Maybe you didn't tune Linux or anything else? Be careful > with modern hardware which can make funny (or not) surprises. Victor, do you know if you or anyone else has compared the RHEL/CentOS 7.x binaries (Python 2.7.5 + patches, built with GCC 4.8.x) with the Fedora 25 binaries (Python 2.7.13 + patches, built with GCC 6.3.x)? I know you've been using perf to look for differences between *Python* major versions, but this would be more about using Python's benchmark suite to investigate the performance of *gcc*, since it appears that may be the culprit here. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia -------------- next part -------------- An HTML attachment was scrubbed... URL: From louis.bouchard at canonical.com Fri Mar 3 03:21:11 2017 From: louis.bouchard at canonical.com (Louis Bouchard) Date: Fri, 3 Mar 2017 09:21:11 +0100 Subject: [Python-Dev] Help requested with Python 2.7 performance regression In-Reply-To: References: <20170301122824.3f1f3689@subdivisions.wooz.org> Message-ID: <7fd6e4c0-6ace-1cfa-75e9-6668278902b1@canonical.com> Hello, Le 03/03/2017 ? 08:27, Nick Coghlan a ?crit : > On 2 March 2017 at 07:00, Victor Stinner > wrote: > > Hi, > > Your document doesn't explain how you configured the host to run > benchmarks. Maybe you didn't tune Linux or anything else? Be careful > with modern hardware which can make funny (or not) surprises. > > This was 'almost' intentional, as no specific O/S tuning was done. The intent is to compare performance between two specific versions of the interperter, not to target any gain in performance. Such tuning would suposedly have a linear impact on both version. If not, then the compiler definitively does some funky things that I want to be aware of. > Victor, do you know if you or anyone else has compared the RHEL/CentOS 7.x > binaries (Python 2.7.5 + patches, built with GCC 4.8.x) with the Fedora 25 > binaries (Python 2.7.13 + patches, built with GCC 6.3.x)? > > I know you've been using perf to look for differences between *Python* major > versions, but this would be more about using Python's benchmark suite to > investigate the performance of *gcc*, since it appears that may be the culprit here. > Now this is an interesting test that I can probably do myself to a certain extent using containers and/or VM on the same hardware. While it will be no mean a strong validation of the performances, I may be able to confirm a similar trend in the results before going forward with tests on baremetal. > Cheers, > Nick. > Thanks, ...Louis -- Louis Bouchard Software engineer, Cloud & Sustaining eng. 
Canonical Ltd Ubuntu developer Debian Maintainer GPG : 429D 7A3B DD05 B6F8 AF63 B9C4 8B3D 867C 823E 7A61 From victor.stinner at gmail.com Fri Mar 3 05:21:49 2017 From: victor.stinner at gmail.com (Victor Stinner) Date: Fri, 3 Mar 2017 11:21:49 +0100 Subject: [Python-Dev] Help requested with Python 2.7 performance regression In-Reply-To: References: <20170301122824.3f1f3689@subdivisions.wooz.org> Message-ID: 2017-03-03 8:27 GMT+01:00 Nick Coghlan : > Victor, do you know if you or anyone else has compared the RHEL/CentOS 7.x > binaries (Python 2.7.5 + patches, built with GCC 4.8.x) with the Fedora 25 > binaries (Python 2.7.13 + patches, built with GCC 6.3.x)? I didn't and I'm not aware of anyone who did that. It would be nice to run performance since this benchmark suite should now be much more reliable. By the way, I always forget to check if Fedora and RHEL compile Python using PGO. Victor From cstratak at redhat.com Fri Mar 3 06:18:14 2017 From: cstratak at redhat.com (Charalampos Stratakis) Date: Fri, 3 Mar 2017 06:18:14 -0500 (EST) Subject: [Python-Dev] Help requested with Python 2.7 performance regression In-Reply-To: References: <20170301122824.3f1f3689@subdivisions.wooz.org> Message-ID: <705879099.113681155.1488539894643.JavaMail.zimbra@redhat.com> PGO is not enabled in RHEL and Fedora. I did some initial testing for Fedora, however it increased the compilation time of the RPM by approximately two hours, so for the time being I left it out. Regards, Charalampos Stratakis Associate Software Engineer Python Maintenance Team, Red Hat ----- Original Message ----- From: "Victor Stinner" To: "Nick Coghlan" Cc: "Barry Warsaw" , "Python-Dev" Sent: Friday, March 3, 2017 11:21:49 AM Subject: Re: [Python-Dev] Help requested with Python 2.7 performance regression 2017-03-03 8:27 GMT+01:00 Nick Coghlan : > Victor, do you know if you or anyone else has compared the RHEL/CentOS 7.x > binaries (Python 2.7.5 + patches, built with GCC 4.8.x) with the Fedora 25 > binaries (Python 2.7.13 + patches, built with GCC 6.3.x)? I didn't and I'm not aware of anyone who did that. It would be nice to run performance since this benchmark suite should now be much more reliable. By the way, I always forget to check if Fedora and RHEL compile Python using PGO. Victor _______________________________________________ Python-Dev mailing list Python-Dev at python.org https://mail.python.org/mailman/listinfo/python-dev Unsubscribe: https://mail.python.org/mailman/options/python-dev/cstratak%40redhat.com From victor.stinner at gmail.com Fri Mar 3 06:53:00 2017 From: victor.stinner at gmail.com (Victor Stinner) Date: Fri, 3 Mar 2017 12:53:00 +0100 Subject: [Python-Dev] Help requested with Python 2.7 performance regression In-Reply-To: <705879099.113681155.1488539894643.JavaMail.zimbra@redhat.com> References: <20170301122824.3f1f3689@subdivisions.wooz.org> <705879099.113681155.1488539894643.JavaMail.zimbra@redhat.com> Message-ID: 2017-03-03 12:18 GMT+01:00 Charalampos Stratakis : > PGO is not enabled in RHEL and Fedora. > > I did some initial testing for Fedora, however it increased the compilation time of the RPM by approximately two hours, so for the time being I left it out. Two hours in a *single* build server is very cheap compared to the 10-20% speedup on *all* computers using this PGO build, no? 
Victor From cstratak at redhat.com Fri Mar 3 07:10:11 2017 From: cstratak at redhat.com (Charalampos Stratakis) Date: Fri, 3 Mar 2017 07:10:11 -0500 (EST) Subject: [Python-Dev] Help requested with Python 2.7 performance regression In-Reply-To: References: <20170301122824.3f1f3689@subdivisions.wooz.org> <705879099.113681155.1488539894643.JavaMail.zimbra@redhat.com> Message-ID: <962710057.113691749.1488543011054.JavaMail.zimbra@redhat.com> And that is the reason I wanted to test this a bit more. However it adds a maintenance burden when fast fixes need to be applied (aka now that Fedora 26 alpha is being prepared). The build now due to the arm architecture and the huge test suite takes approximately 3 hours and 30 minutes. Increasing that by two hours is not something I would do during the development phase. On another note, RHEL's python does not have the PGO functionality backported to it. Regards, Charalampos Stratakis Associate Software Engineer Python Maintenance Team, Red Hat ----- Original Message ----- From: "Victor Stinner" To: "Charalampos Stratakis" Cc: "Nick Coghlan" , "Barry Warsaw" , "Python-Dev" Sent: Friday, March 3, 2017 12:53:00 PM Subject: Re: [Python-Dev] Help requested with Python 2.7 performance regression 2017-03-03 12:18 GMT+01:00 Charalampos Stratakis : > PGO is not enabled in RHEL and Fedora. > > I did some initial testing for Fedora, however it increased the compilation time of the RPM by approximately two hours, so for the time being I left it out. Two hours in a *single* build server is very cheap compared to the 10-20% speedup on *all* computers using this PGO build, no? Victor From phd at phdru.name Fri Mar 3 07:24:11 2017 From: phd at phdru.name (Oleg Broytman) Date: Fri, 3 Mar 2017 13:24:11 +0100 Subject: [Python-Dev] why multiprocessing use os._exit In-Reply-To: <20170303045159.GA9348@taoqy-P> References: <20170303045159.GA9348@taoqy-P> Message-ID: <20170303122411.GA10730@phdru.name> Hello. This mailing list is to work on developing Python (adding new features to Python itself and fixing bugs); if you're having problems learning, understanding or using Python, please find another forum. Probably python-list/comp.lang.python mailing list/news group is the best place; there are Python developers who participate in it; you may get a faster, and probably more complete, answer there. See http://www.python.org/community/ for other lists/news groups/fora. Thank you for understanding. Using os._exit() after fork is documented: https://docs.python.org/3/library/os.html#os._exit and this is exactly what multiprocessing does. On Fri, Mar 03, 2017 at 12:51:59PM +0800, Tao Qingyun wrote: > in multiprocessing/forking.py#129, `os._exit` cause child process don't close open > file. For example: > > ``` > from multiprocessing import Process > > def f(): > global log # prevent gc close the file > log = open("info.log", "w") > log.write("***hello world***\n") > > p = Process(target=f) > p.start() > p.join() > > ``` > and the `info.log` will be empty. why not use sys.exit ? > > > Thanks Oleg. -- Oleg Broytman http://phdru.name/ phd at phdru.name Programmers don't die, they just GOSUB without RETURN. From louis.bouchard at canonical.com Fri Mar 3 09:09:31 2017 From: louis.bouchard at canonical.com (Louis Bouchard) Date: Fri, 3 Mar 2017 15:09:31 +0100 Subject: [Python-Dev] Help requested with Python 2.7 performance regression In-Reply-To: References: <20170301122824.3f1f3689@subdivisions.wooz.org> Message-ID: Hello, Le 03/03/2017 ? 
08:27, Nick Coghlan a ?crit : > On 2 March 2017 at 07:00, Victor Stinner > wrote: > > Hi, > > Your document doesn't explain how you configured the host to run > benchmarks. Maybe you didn't tune Linux or anything else? Be careful > with modern hardware which can make funny (or not) surprises. > > > Victor, do you know if you or anyone else has compared the RHEL/CentOS 7.x > binaries (Python 2.7.5 + patches, built with GCC 4.8.x) with the Fedora 25 > binaries (Python 2.7.13 + patches, built with GCC 6.3.x)? > > I know you've been using perf to look for differences between *Python* major > versions, but this would be more about using Python's benchmark suite to > investigate the performance of *gcc*, since it appears that may be the culprit here. > > Cheers, > Nick. Out of curiosity, I ran the set of benchmarks in two LXC containers running centos7 (2.7.5 + gcc 4.8.5) and Fedora 25 (2.7.13 + gcc 6.3.x). The benchmarks do run faster in 18 benchmarks, slower on 12 and insignificant for the rest (~33 from memory). Do take into account that this is run on baremetal system running an Ubuntu kernel (4.4.0-59) so this is by no mean a reference value but just for a quick test. Results were appended to the spreadsheet referred to in the analysis document. It is somewhat coherent with a previous test I ran where I disabled PGO on 2.7.6+gcc4.8 (Trusty). This made the 2.7.6+gcc4.8 (Trusty) interpreter to become slower than the Xenial reference. Unfortunately, I cannot redeploy my server on RHEL or Fedora at the moment so this is as far as I can go. Kind regards, ...Louis -- Louis Bouchard Software engineer, Cloud & Sustaining eng. Canonical Ltd Ubuntu developer Debian Maintainer GPG : 429D 7A3B DD05 B6F8 AF63 B9C4 8B3D 867C 823E 7A61 From victor.stinner at gmail.com Fri Mar 3 09:31:55 2017 From: victor.stinner at gmail.com (Victor Stinner) Date: Fri, 3 Mar 2017 15:31:55 +0100 Subject: [Python-Dev] Help requested with Python 2.7 performance regression In-Reply-To: References: <20170301122824.3f1f3689@subdivisions.wooz.org> Message-ID: > Out of curiosity, I ran the set of benchmarks in two LXC containers running > centos7 (2.7.5 + gcc 4.8.5) and Fedora 25 (2.7.13 + gcc 6.3.x). The benchmarks > do run faster in 18 benchmarks, slower on 12 and insignificant for the rest (~33 > from memory). "faster" or "slower" is relative: I would like to see the ?.??x faster/slower or percent value. Can you please share the result? I don't know what is the best output: python3 -m performance compare centos.json fedora.json or the new: python3 -m perf compare_to centos.json fedora.json --table --quiet Victor From louis.bouchard at canonical.com Fri Mar 3 09:37:56 2017 From: louis.bouchard at canonical.com (Louis Bouchard) Date: Fri, 3 Mar 2017 15:37:56 +0100 Subject: [Python-Dev] Help requested with Python 2.7 performance regression In-Reply-To: References: <20170301122824.3f1f3689@subdivisions.wooz.org> Message-ID: Hello, Le 03/03/2017 ? 15:31, Victor Stinner a ?crit : >> Out of curiosity, I ran the set of benchmarks in two LXC containers running >> centos7 (2.7.5 + gcc 4.8.5) and Fedora 25 (2.7.13 + gcc 6.3.x). The benchmarks >> do run faster in 18 benchmarks, slower on 12 and insignificant for the rest (~33 >> from memory). > > "faster" or "slower" is relative: I would like to see the ?.??x > faster/slower or percent value. Can you please share the result? 
I > don't know what is the best output: > python3 -m performance compare centos.json fedora.json > or the new: > python3 -m perf compare_to centos.json fedora.json --table --quiet > > Victor > All the results, including the latest are in the spreadsheet here (cited in the analysis document) : https://docs.google.com/spreadsheets/d/1pKCOpyu4HUyw9YtJugn6jzVGa_zeDmBVNzqmXHtM6gM/edit#gid=1548436297 Third column is the ?.??x value that you are looking for, taken directly out of the 'pyperformance analyze' results. I didn't know about the new options, I'll give it a spin & see if I can get a better format. Kind regards, ...Louis -- Louis Bouchard Software engineer, Cloud & Sustaining eng. Canonical Ltd Ubuntu developer Debian Maintainer GPG : 429D 7A3B DD05 B6F8 AF63 B9C4 8B3D 867C 823E 7A61 From ethan at stoneleaf.us Fri Mar 3 10:25:22 2017 From: ethan at stoneleaf.us (Ethan Furman) Date: Fri, 03 Mar 2017 07:25:22 -0800 Subject: [Python-Dev] Enum conversions in the stdlib In-Reply-To: <20170303103558.x2hvlg7ntofdq3z7@BuGz.eclipse.m0g.net> References: <58B8B51D.1030700@stoneleaf.us> <20170303103558.x2hvlg7ntofdq3z7@BuGz.eclipse.m0g.net> Message-ID: <58B98AE2.3020204@stoneleaf.us> On 03/03/2017 02:35 AM, Guyzmo wrote: > On Thu, Mar 02, 2017 at 04:13:17PM -0800, Ethan Furman wrote: >> The resulting enumeration is neither in alpha nor value order. While this >> has no bearing on programmatic usage I would like these Enums to be ordered, >> preferably by value. >> >> Would anyone prefer lexicographical ordering, and if so, why? > > I just tried on my system with python 3.6: > > ``` >>>> pprint(list(signal.Signals)) > [, > , > , > , > , > , > , > , > , > , > , > , > , > , > , > , > , > , > , > , > , > , > , > , > , > , > , > , > , > , > , > ] > ``` > > so I'm not sure what the issue is, but #worksforme. Ah, I see I tried it on 3.5 -- oops. Thanks for the clarification! -- ~Ethan~ From louis.bouchard at canonical.com Fri Mar 3 10:27:24 2017 From: louis.bouchard at canonical.com (Louis Bouchard) Date: Fri, 3 Mar 2017 16:27:24 +0100 Subject: [Python-Dev] Help requested with Python 2.7 performance regression In-Reply-To: References: <20170301122824.3f1f3689@subdivisions.wooz.org> Message-ID: Hello, Le 03/03/2017 ? 15:37, Louis Bouchard a ?crit : > Hello, > > Le 03/03/2017 ? 15:31, Victor Stinner a ?crit : >>> Out of curiosity, I ran the set of benchmarks in two LXC containers running >>> centos7 (2.7.5 + gcc 4.8.5) and Fedora 25 (2.7.13 + gcc 6.3.x). The benchmarks >>> do run faster in 18 benchmarks, slower on 12 and insignificant for the rest (~33 >>> from memory). >> >> "faster" or "slower" is relative: I would like to see the ?.??x >> faster/slower or percent value. Can you please share the result? I >> don't know what is the best output: >> python3 -m performance compare centos.json fedora.json >> or the new: >> python3 -m perf compare_to centos.json fedora.json --table --quiet >> >> Victor >> > > All the results, including the latest are in the spreadsheet here (cited in the > analysis document) : > > https://docs.google.com/spreadsheets/d/1pKCOpyu4HUyw9YtJugn6jzVGa_zeDmBVNzqmXHtM6gM/edit#gid=1548436297 > > Third column is the ?.??x value that you are looking for, taken directly out of > the 'pyperformance analyze' results. > > I didn't know about the new options, I'll give it a spin & see if I can get a > better format. All the benchmark data using the new format have been uploaded to the spreadsheet. Each sheet is prefixed with pct_. 
HTH, Kind regards, ...Louis -- Louis Bouchard Software engineer, Cloud & Sustaining eng. Canonical Ltd Ubuntu developer Debian Maintainer GPG : 429D 7A3B DD05 B6F8 AF63 B9C4 8B3D 867C 823E 7A61 From z+py+pydev at m0g.net Fri Mar 3 05:35:58 2017 From: z+py+pydev at m0g.net (Guyzmo) Date: Fri, 3 Mar 2017 11:35:58 +0100 Subject: [Python-Dev] Enum conversions in the stdlib In-Reply-To: <58B8B51D.1030700@stoneleaf.us> References: <58B8B51D.1030700@stoneleaf.us> Message-ID: <20170303103558.x2hvlg7ntofdq3z7@BuGz.eclipse.m0g.net> On Thu, Mar 02, 2017 at 04:13:17PM -0800, Ethan Furman wrote: > The resulting enumeration is neither in alpha nor value order. While this > has no bearing on programmatic usage I would like these Enums to be ordered, > preferably by value. > > Would anyone prefer lexicographical ordering, and if so, why? I just tried on my system with python 3.6: ``` >>> pprint(list(signal.Signals)) [, , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , ] ``` so I'm not sure what the issue is, but #worksforme. -- zmo From status at bugs.python.org Fri Mar 3 12:09:03 2017 From: status at bugs.python.org (Python tracker) Date: Fri, 3 Mar 2017 18:09:03 +0100 (CET) Subject: [Python-Dev] Summary of Python tracker Issues Message-ID: <20170303170903.E6DAE56389@psf.upfronthosting.co.za> ACTIVITY SUMMARY (2017-02-24 - 2017-03-03) Python tracker at http://bugs.python.org/ To view or respond to any of the issues listed below, click on the issue. Do NOT respond to this message. Issues counts and deltas: open 5851 (+18) closed 35618 (+53) total 41469 (+71) Open issues with patches: 2466 Issues opened (46) ================== #29626: Issue with spacing in argparse module while using help http://bugs.python.org/issue29626 reopened by paul.j3 #29643: --enable-optimizations compiler flag has no effect http://bugs.python.org/issue29643 opened by awang #29645: webbrowser module import has heavy side effects http://bugs.python.org/issue29645 opened by serhiy.storchaka #29649: struct.pack_into check boundary error message didn't respect o http://bugs.python.org/issue29649 opened by louielu #29651: Inconsistent/undocumented urlsplit/urlparse behavior on invali http://bugs.python.org/issue29651 opened by vfaronov #29652: Fix evaluation order of keys/values in dict comprehensions http://bugs.python.org/issue29652 opened by Jim Fasarakis-Hilliard #29654: SimpleHTTPRequestHandler should support browser cache http://bugs.python.org/issue29654 opened by quentel #29655: Certain errors during IMPORT_STAR can leak a reference http://bugs.python.org/issue29655 opened by Peter Cawley #29656: Change "make patchcheck" to be branch aware http://bugs.python.org/issue29656 opened by ncoghlan #29657: os.symlink: FileExistsError shows wrong message http://bugs.python.org/issue29657 opened by wrohdewald #29659: Expose the `length` arg from shutil.copyfileobj for public use http://bugs.python.org/issue29659 opened by goodboy #29660: Document that print/format_exception ignore etype http://bugs.python.org/issue29660 opened by mbussonn #29667: socket module sometimes loses response packets http://bugs.python.org/issue29667 opened by bkline #29670: argparse: does not respect required args pre-populated into na http://bugs.python.org/issue29670 opened by mhcptg #29672: `catch_warnings` context manager should reset warning registry http://bugs.python.org/issue29672 opened by Gerrit.Holl #29673: Some gdb macros are broken in 3.6 http://bugs.python.org/issue29673 opened by belopolsky #29674: Use GCC 
__attribute__((alloc_size(x,y))) on PyMem_Malloc() fun http://bugs.python.org/issue29674 opened by haypo #29676: verbose output of test_cprofile http://bugs.python.org/issue29676 opened by xiang.zhang #29677: clarify docs about 'round()' accepting a negative integer for http://bugs.python.org/issue29677 opened by ChrisRands #29679: Add @contextlib.asynccontextmanager http://bugs.python.org/issue29679 opened by Jelle Zijlstra #29680: gdb/libpython.py does not work with gdb 7.2 http://bugs.python.org/issue29680 opened by belopolsky #29682: Possible missing NULL check in pyexpat http://bugs.python.org/issue29682 opened by alexc #29685: test_gdb failed http://bugs.python.org/issue29685 opened by MarcoC #29687: smtplib does not support proxy http://bugs.python.org/issue29687 opened by rares #29688: Document Path.absolute http://bugs.python.org/issue29688 opened by Jim Fasarakis-Hilliard #29689: Asyncio-namespace helpers for async_generators http://bugs.python.org/issue29689 opened by Codey Oxley #29691: Some tests fail in coverage Travis check http://bugs.python.org/issue29691 opened by Jelle Zijlstra #29692: contextlib.contextmanager may incorrectly unchain RuntimeError http://bugs.python.org/issue29692 opened by ncoghlan #29694: race condition in pathlib mkdir with flags parents=True http://bugs.python.org/issue29694 opened by whitespacer #29695: Weird keyword parameter names in builtins http://bugs.python.org/issue29695 opened by serhiy.storchaka #29696: Use namedtuple in string.Formatter.parse iterator response http://bugs.python.org/issue29696 opened by facundobatista #29697: Wrong ECDH configuration with OpenSSL 1.1 http://bugs.python.org/issue29697 opened by christian.heimes #29699: shutil.rmtree should not fail with FileNotFoundError (race con http://bugs.python.org/issue29699 opened by dkg #29700: readline memory corruption when sys.stdin fd >= FD_SETSIZE for http://bugs.python.org/issue29700 opened by gregory.p.smith #29701: Add close method to queue.Queue http://bugs.python.org/issue29701 opened by mwf #29702: Error 0x80070003: Failed to launch elevated child process http://bugs.python.org/issue29702 opened by alevonian #29703: Fix asyncio to support instantiation of new event loops in sub http://bugs.python.org/issue29703 opened by yselivanov #29704: Can't read data from Transport after asyncio.SubprocessStreamP http://bugs.python.org/issue29704 opened by SethMichaelLarson #29706: IDLE needs syntax highlighting for async and await http://bugs.python.org/issue29706 opened by David E. Franco G. 
#29707: os.path.ismount() always returns false for mount --bind on sam http://bugs.python.org/issue29707 opened by Oliver Smith #29708: support reproducible Python builds http://bugs.python.org/issue29708 opened by bmwiedemann #29709: Short-circuiting not only on False and True http://bugs.python.org/issue29709 opened by Stefan Pochmann #29710: Incorrect representation caveat on bitwise operation docs http://bugs.python.org/issue29710 opened by ncoghlan #29711: When you use stop_serving in proactor loop it's kill all liste http://bugs.python.org/issue29711 opened by noplay #29712: --enable-optimizations does not work with --enable-shared http://bugs.python.org/issue29712 opened by halfcoder #29713: String changes whether or not '\x81' is present http://bugs.python.org/issue29713 opened by dagnam Most recent 15 issues with no replies (15) ========================================== #29713: String changes whether or not '\x81' is present http://bugs.python.org/issue29713 #29712: --enable-optimizations does not work with --enable-shared http://bugs.python.org/issue29712 #29709: Short-circuiting not only on False and True http://bugs.python.org/issue29709 #29707: os.path.ismount() always returns false for mount --bind on sam http://bugs.python.org/issue29707 #29704: Can't read data from Transport after asyncio.SubprocessStreamP http://bugs.python.org/issue29704 #29694: race condition in pathlib mkdir with flags parents=True http://bugs.python.org/issue29694 #29692: contextlib.contextmanager may incorrectly unchain RuntimeError http://bugs.python.org/issue29692 #29691: Some tests fail in coverage Travis check http://bugs.python.org/issue29691 #29689: Asyncio-namespace helpers for async_generators http://bugs.python.org/issue29689 #29688: Document Path.absolute http://bugs.python.org/issue29688 #29687: smtplib does not support proxy http://bugs.python.org/issue29687 #29685: test_gdb failed http://bugs.python.org/issue29685 #29682: Possible missing NULL check in pyexpat http://bugs.python.org/issue29682 #29674: Use GCC __attribute__((alloc_size(x,y))) on PyMem_Malloc() fun http://bugs.python.org/issue29674 #29670: argparse: does not respect required args pre-populated into na http://bugs.python.org/issue29670 Most recent 15 issues waiting for review (15) ============================================= #29706: IDLE needs syntax highlighting for async and await http://bugs.python.org/issue29706 #29695: Weird keyword parameter names in builtins http://bugs.python.org/issue29695 #29680: gdb/libpython.py does not work with gdb 7.2 http://bugs.python.org/issue29680 #29655: Certain errors during IMPORT_STAR can leak a reference http://bugs.python.org/issue29655 #29645: webbrowser module import has heavy side effects http://bugs.python.org/issue29645 #29642: Why does unittest.TestLoader.discover still rely on existence http://bugs.python.org/issue29642 #29640: _PyThreadState_Init and fork race leads to inconsistent key li http://bugs.python.org/issue29640 #29623: configparser.ConfigParser.read() does not accept Pathlib path http://bugs.python.org/issue29623 #29615: SimpleXMLRPCDispatcher._dispatch mangles tracebacks when invok http://bugs.python.org/issue29615 #29613: Support for SameSite Cookies http://bugs.python.org/issue29613 #29568: undefined parsing behavior with the old style string formattin http://bugs.python.org/issue29568 #29557: binhex documentation claims unknown bug http://bugs.python.org/issue29557 #29555: Update Python Software Foundation Copyright Year http://bugs.python.org/issue29555 
#29553: Argparser does not display closing parentheses in nested mutex http://bugs.python.org/issue29553 #29549: Improve docstring for str.index http://bugs.python.org/issue29549 Top 10 most discussed issues (10) ================================= #29679: Add @contextlib.asynccontextmanager http://bugs.python.org/issue29679 10 msgs #26389: Expand traceback module API to accept just an exception as an http://bugs.python.org/issue26389 9 msgs #29642: Why does unittest.TestLoader.discover still rely on existence http://bugs.python.org/issue29642 8 msgs #28231: zipfile does not support pathlib http://bugs.python.org/issue28231 7 msgs #29645: webbrowser module import has heavy side effects http://bugs.python.org/issue29645 7 msgs #29695: Weird keyword parameter names in builtins http://bugs.python.org/issue29695 7 msgs #29626: Issue with spacing in argparse module while using help http://bugs.python.org/issue29626 5 msgs #29639: test suite intentionally avoids referring to localhost, destro http://bugs.python.org/issue29639 5 msgs #29649: struct.pack_into check boundary error message didn't respect o http://bugs.python.org/issue29649 5 msgs #29677: clarify docs about 'round()' accepting a negative integer for http://bugs.python.org/issue29677 5 msgs Issues closed (52) ================== #7769: SimpleXMLRPCServer.SimpleXMLRPCServer.register_function as dec http://bugs.python.org/issue7769 closed by xiang.zhang #9303: Migrate sqlite3 module to _v2 API to enhance performance http://bugs.python.org/issue9303 closed by berker.peksag #16285: Update urllib quoting to RFC 3986 http://bugs.python.org/issue16285 closed by ncoghlan #22594: Add a link to the regex module in re documentation http://bugs.python.org/issue22594 closed by berker.peksag #24241: webbrowser default browser detection and/or public API for _tr http://bugs.python.org/issue24241 closed by ncoghlan #25008: Deprecate smtpd (based on deprecated asyncore/asynchat) http://bugs.python.org/issue25008 closed by barry #25452: Add __bool__() method to subprocess.CompletedProcess http://bugs.python.org/issue25452 closed by haypo #26128: Let the subprocess.STARTUPINFO constructor take arguments http://bugs.python.org/issue26128 closed by ncoghlan #26184: Add versionchanged note for error when create_module() is not http://bugs.python.org/issue26184 closed by Mariatta #26867: test_ssl test_options fails on ubuntu 16.04 http://bugs.python.org/issue26867 closed by xiang.zhang #27298: redundant iteration over digits in _PyLong_AsUnsignedLongMask http://bugs.python.org/issue27298 closed by haypo #28272: a redundant check in maybe_small_long http://bugs.python.org/issue28272 closed by mark.dickinson #28279: setuptools failing to read from setup.cfg only in Python 3.6 http://bugs.python.org/issue28279 closed by brett.cannon #28518: execute("begin immediate") throwing OperationalError http://bugs.python.org/issue28518 closed by berker.peksag #28598: RHS not consulted in `str % subclass_of_str` case. 
http://bugs.python.org/issue28598 closed by berker.peksag #28624: Make the `cwd` argument to `subprocess.Popen` accept a `PathLi http://bugs.python.org/issue28624 closed by berker.peksag #28663: Higher virtual memory usage on recent Linux versions http://bugs.python.org/issue28663 closed by inada.naoki #28911: Clarify the behaviour of assert_called_once_with http://bugs.python.org/issue28911 closed by berker.peksag #28961: unittest.mock._Call ignores `name` parameter http://bugs.python.org/issue28961 closed by berker.peksag #29098: document minimum sqlite version http://bugs.python.org/issue29098 closed by berker.peksag #29110: [patch] Fix file object leak in `aifc.open` when given invalid http://bugs.python.org/issue29110 closed by inada.naoki #29121: sqlite3 Controlling Transactions documentation not updated http://bugs.python.org/issue29121 closed by berker.peksag #29355: sqlite3: remove sqlite3_stmt_readonly() http://bugs.python.org/issue29355 closed by berker.peksag #29376: threading._DummyThread.__repr__ raises AssertionError http://bugs.python.org/issue29376 closed by xiang.zhang #29594: implementation of __or__ in enum.auto http://bugs.python.org/issue29594 closed by ethan.furman #29637: ast.get_docstring(): AttributeError: 'NoneType' object has no http://bugs.python.org/issue29637 closed by inada.naoki #29644: Importing webbrowser outputs a message on stderr http://bugs.python.org/issue29644 closed by ncoghlan #29646: ast.parse parses string literals as docstrings http://bugs.python.org/issue29646 closed by Valentin.Lorentz #29647: Python 3.6.0 http://bugs.python.org/issue29647 closed by r.david.murray #29648: Missed reference to create_module() in versionadded (import.rs http://bugs.python.org/issue29648 closed by Mariatta #29650: abstractmethod does not work when deriving from Exception http://bugs.python.org/issue29650 closed by serhiy.storchaka #29653: IDLE - call tips show wrapper's argpsec instead of wrapped http://bugs.python.org/issue29653 closed by terry.reedy #29658: Combining thread and process, process hangs (sometimes) http://bugs.python.org/issue29658 closed by djstrong #29661: Typo in the docstring of timeit.Timer.autorange http://bugs.python.org/issue29661 closed by xiang.zhang #29662: Fix wrong indentation of timeit.Timer's documenation http://bugs.python.org/issue29662 closed by xiang.zhang #29663: Make collections.deque json serializable http://bugs.python.org/issue29663 closed by serhiy.storchaka #29664: zip() python 3 documentation not working at all ! http://bugs.python.org/issue29664 closed by serhiy.storchaka #29665: how to edit or delete issues created by me in bug tracker ?! http://bugs.python.org/issue29665 closed by berker.peksag #29666: Issue in enum documentation http://bugs.python.org/issue29666 closed by haypo #29668: f-strings don't change the values as expected. 
http://bugs.python.org/issue29668 closed by serhiy.storchaka #29669: Missing import of bisect in the documentation examples http://bugs.python.org/issue29669 closed by rhettinger #29671: Add function to gc module to check if any reference cycles hav http://bugs.python.org/issue29671 closed by mark.dickinson #29675: SysLogHandler does not seem to always expand %(loglevel)s prop http://bugs.python.org/issue29675 closed by vinay.sajip #29678: email.Message.get_params decodes only first one header value http://bugs.python.org/issue29678 closed by r.david.murray #29681: getopt fails to handle option with missing value in middle of http://bugs.python.org/issue29681 closed by serhiy.storchaka #29683: _PyCode_SetExtra behaviour wrong on allocation failure and aft http://bugs.python.org/issue29683 closed by serhiy.storchaka #29684: Minor regression in PyEval_CallObjectWithKeywords() http://bugs.python.org/issue29684 closed by haypo #29686: Unittest - Return empty string instead of None object on short http://bugs.python.org/issue29686 closed by rhettinger #29690: no %z directive for strptime in python2, doc says nothing abou http://bugs.python.org/issue29690 closed by r.david.murray #29693: DeprecationWarning/SyntaxError in test_import http://bugs.python.org/issue29693 closed by serhiy.storchaka #29698: _collectionsmodule.c: Replace `n++; while (--n)` with `for (; http://bugs.python.org/issue29698 closed by rhettinger #29705: socket.gethostbyname, getaddrinfo etc broken on MacOS 10.12 http://bugs.python.org/issue29705 closed by ned.deily From THauk at copperleaf.com Fri Mar 3 16:56:28 2017 From: THauk at copperleaf.com (Thomas Hauk) Date: Fri, 3 Mar 2017 21:56:28 +0000 Subject: [Python-Dev] Type annotations and metaclasses Message-ID: I've read through PEPs 483, 484, and 526, but I don't see any discussion of how type hints should work when the type of a class member differs from the type of an instance member, like when metaclasses are used to create instances. e.g.: >>> from django.db import models >>> class MyModel(models.Model): ... name = models.CharField() ... class Meta: ... app_label = "myapp" ... >>> type(MyModel.name) >>> m = MyModel() >>> type(m.name) In this case, I would like to be able to specify an instance type of str for MyModel.name. Can someone point me to any existing relevant discussion? Or if not, where should a new discussion start? T -------------- next part -------------- An HTML attachment was scrubbed... URL: From levkivskyi at gmail.com Fri Mar 3 17:29:21 2017 From: levkivskyi at gmail.com (Ivan Levkivskyi) Date: Fri, 3 Mar 2017 23:29:21 +0100 Subject: [Python-Dev] Type annotations and metaclasses In-Reply-To: References: Message-ID: Hi Thomas, I think this question is more appropriate for mypy or typing trackers: https://github.com/python/mypy/issues https://github.com/python/typing/issues -- Ivan On 3 March 2017 at 22:56, Thomas Hauk wrote: > I?ve read through PEPs 483, 484, and 526, but I don?t see any discussion > of how type hints should work when the type of a class member differs from > the type of an instance member, like when metaclasses are used to create > instances. > > > > e.g.: > > > > >>> from django.db import models > > >>> class MyModel(models.Model): > > ... name = models.CharField() > > ... class Meta: > > ... app_label = "myapp" > > ... > > >>> type(MyModel.name) > > > > >>> m = MyModel() > > >>> type(m.name) > > > > > > In this case, I would like to be able to specify an instance type of str > for MyModel.name. 
> > > > Can someone point me to any existing relevant discussion? Or if not, where > should a new discussion start? > > > > T > > > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/ > levkivskyi%40gmail.com > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From peter.xihong.wang at intel.com Fri Mar 3 19:13:02 2017 From: peter.xihong.wang at intel.com (Wang, Peter Xihong) Date: Sat, 4 Mar 2017 00:13:02 +0000 Subject: [Python-Dev] Help requested with Python 2.7 performance regression In-Reply-To: References: <20170301122824.3f1f3689@subdivisions.wooz.org> Message-ID: <371EBC7881C7844EAAF5556BFF21BCCC583F0A0C@ORSMSX105.amr.corp.intel.com> Hello, All, We have been tracking Python performance over the last 1.5 years, and results (along with other languages) are published daily at this site: https://languagesperformance.intel.com/ This general regression trend discussed is same as we observed. The Python source codes are being pulled, built, and results published daily following exactly the same process, on exactly the same hardware running with exactly the same operating system image. Take Django_v2 as an example, with 2.7 Default build: comparing 2/10/2017 commitID 54c93e0fe79b0ec7c9acccc35dabae2ffa4d563a, with 8/27/2015 commitID 514f5d6101752f10758c5b89e20941bc3d13008a, the regression is 2.5% PGO build: comparing 2/10/2017 commitID 54c93e0fe79b0ec7c9acccc35dabae2ffa4d563a, with 8/27/2015 commitID 514f5d6101752f10758c5b89e20941bc3d13008a, the regression is 0.47% We turned off hyperthreading, turbo, and ASLR, and set CPU frequency at a constant value to mitigate run to run variation. Currently we are only running limited number of micro-benchmarks, but planning to run a more broad range of benchmark/workload. The one that's under consideration to start with is the Python benchmark suite (all): https://github.com/python/performance We'd love to hear feedback on how to best monitor Python code changes and performance, how to present (look and feel, charts etc) and communicate the results. Thanks, Peter ? -----Original Message----- From: Python-Dev [mailto:python-dev-bounces+peter.xihong.wang=intel.com at python.org] On Behalf Of Louis Bouchard Sent: Friday, March 03, 2017 7:27 AM To: Victor Stinner Cc: Barry Warsaw ; Nick Coghlan ; Python-Dev Subject: Re: [Python-Dev] Help requested with Python 2.7 performance regression Hello, Le 03/03/2017 ? 15:37, Louis Bouchard a ?crit : > Hello, > > Le 03/03/2017 ? 15:31, Victor Stinner a ?crit : >>> Out of curiosity, I ran the set of benchmarks in two LXC containers >>> running >>> centos7 (2.7.5 + gcc 4.8.5) and Fedora 25 (2.7.13 + gcc 6.3.x). The >>> benchmarks do run faster in 18 benchmarks, slower on 12 and >>> insignificant for the rest (~33 from memory). >> >> "faster" or "slower" is relative: I would like to see the ?.??x >> faster/slower or percent value. Can you please share the result? 
I >> don't know what is the best output: >> python3 -m performance compare centos.json fedora.json or the new: >> python3 -m perf compare_to centos.json fedora.json --table --quiet >> >> Victor >> > > All the results, including the latest are in the spreadsheet here > (cited in the analysis document) : > > https://docs.google.com/spreadsheets/d/1pKCOpyu4HUyw9YtJugn6jzVGa_zeDm > BVNzqmXHtM6gM/edit#gid=1548436297 > > Third column is the ?.??x value that you are looking for, taken > directly out of the 'pyperformance analyze' results. > > I didn't know about the new options, I'll give it a spin & see if I > can get a better format. All the benchmark data using the new format have been uploaded to the spreadsheet. Each sheet is prefixed with pct_. HTH, Kind regards, ...Louis -- Louis Bouchard Software engineer, Cloud & Sustaining eng. Canonical Ltd Ubuntu developer Debian Maintainer GPG : 429D 7A3B DD05 B6F8 AF63 B9C4 8B3D 867C 823E 7A61 _______________________________________________ Python-Dev mailing list Python-Dev at python.org https://mail.python.org/mailman/listinfo/python-dev Unsubscribe: https://mail.python.org/mailman/options/python-dev/peter.xihong.wang%40intel.com From ncoghlan at gmail.com Sun Mar 5 02:50:38 2017 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sun, 5 Mar 2017 17:50:38 +1000 Subject: [Python-Dev] PEP 538: Coercing the legacy C locale to a UTF-8 based locale Message-ID: Hi folks, Late last year I started working on a change to the CPython CLI (*not* the shared library) to get it to coerce the legacy C locale to something based on UTF-8 when a suitable locale is available. After a couple of rounds of iteration on linux-sig and python-ideas, I'm now bringing it to python-dev as a concrete proposal for Python 3.7. For most folks, reading the Abstract plus the draft docs updates in the reference implementation will tell you everything you need to know (if the C.UTF-8, C.utf8 or UTF-8 locales are available, the CLI will automatically attempt to coerce the legacy C locale to one of those rather than persisting with the latter's default assumption of ASCII as the preferred text encoding). However, the full PEP goes into a lot more detail on: * exactly what's broken about CPython's behaviour in the legacy C locale * why I'm in favour of this particular approach to fixing it (i.e. it integrates better with other C/C++ components, as well as being amenable to redistributor backports for 3.6, and environment based configuration for 3.5 and earlier) * why I think implementing both this change *and* Victor's more comprehensive "PYTHONUTF8 mode" proposal in PEP 540 will be better than implementing just one or the other (in some situations, ignoring the platform locale subsystem entirely really is the right approach, and that's the aspect PEP 540 tackles, while this PEP tackles the situations where the C locale behaviour is broken, but you still need to be consistent with the platform settings). Cheers, Nick. 
================================== PEP: 538 Title: Coercing the legacy C locale to a UTF-8 based locale Version: $Revision$ Last-Modified: $Date$ Author: Nick Coghlan Status: Draft Type: Standards Track Content-Type: text/x-rst Created: 28-Dec-2016 Python-Version: 3.7 Post-History: 03-Jan-2017 (linux-sig), 07-Jan-2017 (python-ideas), 05-Mar-2017 (python-dev) Abstract ======== An ongoing challenge with Python 3 on \*nix systems is the conflict between needing to use the configured locale encoding by default for consistency with other C/C++ components in the same process and those invoked in subprocesses, and the fact that the standard C locale (as defined in POSIX:2001) typically implies a default text encoding of ASCII, which is entirely inadequate for the development of networked services and client applications in a multilingual world. PEP 540 proposes a change to CPython's handling of the legacy C locale such that CPython will assume the use of UTF-8 in such environments, rather than persisting with the demonstrably problematic assumption of ASCII as an appropriate encoding for communicating with operating system interfaces. This is a good approach for cases where network encoding interoperability is a more important concern than local encoding interoperability. However, it comes at the cost of making CPython's encoding assumptions diverge from those of other C and C++ components in the same process, as well as those of components running in subprocesses that share the same environment. It also requires changes to the internals of how CPython itself works, rather than using existing configuration settings that are supported by Python versions prior to Python 3.7. Accordingly, this PEP proposes that independently of the UTF-8 mode proposed in PEP 540, the way the CPython implementation handles the default C locale be changed such that: * unless the new ``PYTHONCOERCECLOCALE`` environment variable is set to ``0``, the standalone CPython binary will automatically attempt to coerce the ``C`` locale to the first available locale out of ``C.UTF-8``, ``C.utf8``, or ``UTF-8`` * if the locale is successfully coerced, and PEP 540 is not accepted, then ``PYTHONIOENCODING`` (if not otherwise set) will be set to ``utf-8:surrogateescape``. * if the locale is successfully coerced, and PEP 540 *is* accepted, then ``PYTHONUTF8`` (if not otherwise set) will be set to ``1`` * if the subsequent runtime initialization process detects that the legacy ``C`` locale remains active (e.g. none of ``C.UTF-8``, ``C.utf8`` or ``UTF-8`` are available, locale coercion is disabled, or the runtime is embedded in an application other than the main CPython binary), and the ``PYTHONUTF8`` feature defined in PEP 540 is also disabled (or not implemented), it will emit a warning on stderr that use of the legacy ``C`` locale's default ASCII text encoding may cause various Unicode compatibility issues With this change, any \*nix platform that does *not* offer at least one of the ``C.UTF-8``, ``C.utf8`` or ``UTF-8`` locales as part of its standard configuration would only be considered a fully supported platform for CPython 3.7+ deployments when either the new ``PYTHONUTF8`` mode defined in PEP 540 is used, or else a suitable locale other than the default ``C`` locale is configured explicitly (e.g. `en_AU.UTF-8`, ``zh_CN.gb18030``). 
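For readers who want to check which of those candidate locales a particular platform
actually provides, the following illustrative snippet (not part of the proposal itself,
just a convenience for experimentation) probes them with the standard ``locale`` module::

    import locale

    # Candidate coercion targets named in this PEP, in the order they would be tried
    CANDIDATES = ("C.UTF-8", "C.utf8", "UTF-8")

    def available_coercion_targets():
        """Return the candidate locales this platform accepts."""
        saved = locale.setlocale(locale.LC_CTYPE)  # query without changing anything
        found = []
        try:
            for name in CANDIDATES:
                try:
                    locale.setlocale(locale.LC_CTYPE, name)
                except locale.Error:
                    continue
                found.append(name)
        finally:
            locale.setlocale(locale.LC_CTYPE, saved)  # restore the prior setting
        return found

    if __name__ == "__main__":
        print(available_coercion_targets())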
Redistributors (such as Linux distributions) with a narrower target audience than the upstream CPython development team may also choose to opt in to this locale coercion behaviour for the Python 3.6.x series by applying the necessary changes as a downstream patch when first introducing Python 3.6.0. Background ========== While the CPython interpreter is starting up, it may need to convert from the ``char *`` format to the ``wchar_t *`` format, or from one of those formats to ``PyUnicodeObject *``, in a way that's consistent with the locale settings of the overall system. It handles these cases by relying on the operating system to do the conversion and then ensuring that the text encoding name reported by ``sys.getfilesystemencoding()`` matches the encoding used during this early bootstrapping process. On Apple platforms (including both Mac OS X and iOS), this is straightforward, as Apple guarantees that these operations will always use UTF-8 to do the conversion. On Windows, the limitations of the ``mbcs`` format used by default in these conversions proved sufficiently problematic that PEP 528 and PEP 529 were implemented to bypass the operating system supplied interfaces for binary data handling and force the use of UTF-8 instead. On Android, many components, including CPython, already assume the use of UTF-8 as the system encoding, regardless of the locale setting. However, this isn't the case for all components, and the discrepancy can cause problems in some situations (for example, when using the GNU readline module [16_]). On non-Apple and non-Android \*nix systems, these operations are handled using the C locale system in glibc, which has the following characteristics [4_]: * by default, all processes start in the ``C`` locale, which uses ``ASCII`` for these conversions. This is almost never what anyone doing multilingual text processing actually wants (including CPython and C/C++ GUI frameworks). * calling ``setlocale(LC_ALL, "")`` reconfigures the active locale based on the locale categories configured in the current process environment * if the locale requested by the current environment is unknown, or no specific locale is configured, then the default ``C`` locale will remain active The specific locale category that covers the APIs that CPython depends on is ``LC_CTYPE``, which applies to "classification and conversion of characters, and to multibyte and wide characters" [5_]. Accordingly, CPython includes the following key calls to ``setlocale``: * in the main ``python`` binary, CPython calls ``setlocale(LC_ALL, "")`` to configure the entire C locale subsystem according to the process environment. It does this prior to making any calls into the shared CPython library * in ``Py_Initialize``, CPython calls ``setlocale(LC_CTYPE, "")``, such that the configured locale settings for that category *always* match those set in the environment. 
It does this unconditionally, and it *doesn't* revert the process state change in ``Py_Finalize``
(This summary of the locale handling omits several technical details related to exactly where and when
the text encoding declared as part of the locale settings is used - see PEP 540 for further discussion,
as these particular details matter more when decoupling CPython from the declared C locale than they do
when overriding the locale with one based on UTF-8). These calls are usually sufficient to provide sensible
behaviour, but they can still fail in the following cases: * SSH environment forwarding means that SSH
clients may sometimes forward client locale settings to servers that don't have that locale installed.
This leads to CPython running in the default ASCII-based C locale * some process environments (such as
Linux containers) may not have any explicit locale configured at all. As with unknown locales, this leads
to CPython running in the default ASCII-based C locale The simplest way to deal with this problem for
currently released versions of CPython is to explicitly set a more sensible locale when launching the
application. For example:: LC_ALL=C.UTF-8 LANG=C.UTF-8 python3 ... The ``C.UTF-8`` locale is a full locale
definition that uses ``UTF-8`` for the ``LC_CTYPE`` category, and the same settings as the ``C`` locale for
all other categories (including ``LC_COLLATE``). It is offered by a number of Linux distributions (including
Debian, Ubuntu, Fedora, Alpine and Android) as an alternative to the ASCII-based C locale. Mac OS X and other
\*BSD systems have taken a different approach: instead of offering a ``C.UTF-8`` locale, they offer a partial
``UTF-8`` locale that only defines the ``LC_CTYPE`` category. On such systems, the preferred environmental
locale adjustment is to set ``LC_CTYPE=UTF-8`` rather than to set ``LC_ALL`` or ``LANG``. [17_] In the
specific case of Docker containers and similar technologies, the appropriate locale setting can be specified
directly in the container image definition. Another common failure case is developers specifying ``LANG=C``
in order to see otherwise translated user interface messages in English, rather than the more narrowly
scoped ``LC_MESSAGES=C``. Relationship with other PEPs ============================ This PEP shares a common
problem statement with PEP 540 (improving Python 3's behaviour in the default C locale), but diverges
markedly in the proposed solution: * PEP 540 proposes to entirely decouple CPython's default text encoding
from the C locale system in that case, allowing text handling inconsistencies to arise between CPython and
other C/C++ components running in the same process and in subprocesses. This approach aims to make CPython
behave less like a locale-aware C/C++ application, and more like C/C++ independent language runtimes like
the JVM, .NET CLR, Go, Node.js, and Rust * this PEP proposes to override the legacy C locale with a more
recently defined locale that uses UTF-8 as its default text encoding. This means that the text encoding
override will apply not only to CPython, but also to any locale aware extension modules loaded into the
current process, as well as to locale aware C/C++ applications invoked in subprocesses that inherit their
environment from the parent process.
This approach aims to retain CPython's traditional strong support for integration with other components written in C and C++, while actively helping to push forward the adoption and standardisation of the C.UTF-8 locale as a Unicode-aware replacement for the legacy C locale in the wider C/C++ ecosystem After reviewing both PEPs, it became clear that they didn't actually conflict at a technical level, and the proposal in PEP 540 offered a superior option in cases where no suitable locale was available, as well as offering a better reference behaviour for platforms where the notion of a "locale encoding" doesn't make sense (for example, embedded systems running MicroPython rather than the CPython reference interpreter). Meanwhile, this PEP offered improved compatibility with other C/C++ components, and an approach more amenable to being backported to Python 3.6 by downstream redistributors. As a result, this PEP was amended to refer to PEP 540 as a complementary solution that offered improved behaviour both when locale coercion triggered, as well as when none of the standard UTF-8 based locales were available. The availability of PEP 540 also meant that the ``LC_CTYPE=en_US.UTF-8`` legacy fallback was removed from the list of UTF-8 locales tried as a coercion target, with CPython instead relying solely on the proposed PYTHONUTF8 mode in such cases. Motivation ========== While Linux container technologies like Docker, Kubernetes, and OpenShift are best known for their use in web service development, the related container formats and execution models are also being adopted for Linux command line application development. Technologies like Gnome Flatpak [7_] and Ubunty Snappy [8_] further aim to bring these same techniques to Linux GUI application development. When using Python 3 for application development in these contexts, it isn't uncommon to see text encoding related errors akin to the following:: $ docker run --rm fedora:25 python3 -c 'print("??????")' Unable to decode the command from the command line: UnicodeEncodeError: 'utf-8' codec can't encode character '\udce2' in position 7: surrogates not allowed $ docker run --rm ncoghlan/debian-python python3 -c 'print("??????")' Unable to decode the command from the command line: UnicodeEncodeError: 'utf-8' codec can't encode character '\udce2' in position 7: surrogates not allowed Even though the same command is likely to work fine when run locally:: $ python3 -c 'print("??????")' ?????? The source of the problem can be seen by instead running the ``locale`` command in the three environments:: $ locale | grep -E 'LC_ALL|LC_CTYPE|LANG' LANG=en_AU.UTF-8 LC_CTYPE="en_AU.UTF-8" LC_ALL= $ docker run --rm fedora:25 locale | grep -E 'LC_ALL|LC_CTYPE|LANG' LANG= LC_CTYPE="POSIX" LC_ALL= $ docker run --rm ncoghlan/debian-python locale | grep -E 'LC_ALL|LC_CTYPE|LANG' LANG= LANGUAGE= LC_CTYPE="POSIX" LC_ALL= In this particular example, we can see that the host system locale is set to "en_AU.UTF-8", so CPython uses UTF-8 as the default text encoding. By contrast, the base Docker images for Fedora and Debian don't have any specific locale set, so they use the POSIX locale by default, which is an alias for the ASCII-based default C locale. The simplest way to get Python 3 (regardless of the exact version) to behave sensibly in Fedora and Debian based containers is to run it in the ``C.UTF-8`` locale that both distros provide:: $ docker run --rm -e LANG=C.UTF-8 fedora:25 python3 -c 'print("??????")' ?????? 
$ docker run --rm -e LANG=C.UTF-8 ncoghlan/debian-python python3 -c 'print("??????")' ?????? $ docker run --rm -e LANG=C.UTF-8 fedora:25 locale | grep -E 'LC_ALL|LC_CTYPE|LANG' LANG=C.UTF-8 LC_CTYPE="C.UTF-8" LC_ALL= $ docker run --rm -e LANG=C.UTF-8 ncoghlan/debian-python locale | grep -E 'LC_ALL|LC_CTYPE|LANG' LANG=C.UTF-8 LANGUAGE= LC_CTYPE="C.UTF-8" LC_ALL= The Alpine Linux based Python images provided by Docker, Inc, already use the C.UTF-8 locale by default:: $ docker run --rm python:3 python3 -c 'print("??????")' ?????? $ docker run --rm python:3 locale | grep -E 'LC_ALL|LC_CTYPE|LANG' LANG=C.UTF-8 LANGUAGE= LC_CTYPE="C.UTF-8" LC_ALL= Similarly, for custom container images (i.e. those adding additional content on top of a base distro image), a more suitable locale can be set in the image definition so everything just works by default. However, it would provide a much nicer and more consistent user experience if CPython were able to just deal with this problem automatically rather than relying on redistributors or end users to handle it through system configuration changes. While the glibc developers are working towards making the C.UTF-8 locale universally available for use by glibc based applications like CPython [6_], this unfortunately doesn't help on platforms that ship older versions of glibc without that feature, and also don't provide C.UTF-8 as an on-disk locale the way Debian and Fedora do. For these platforms, the mechanism proposed in PEP 540 at least allows CPython itself to behave sensibly, albeit without any mechanism to get other C/C++ components that decode binary streams as text to do the same. Design Principles ================= The above motivation leads to the following core design principles for the proposed solution: * if a locale other than the default C locale is explicitly configured, we'll continue to respect it * if we're changing the locale setting without an explicit config option, we'll emit a warning on stderr that we're doing so rather than silently changing the process configuration. This will alert application and system integrators to the change, even if they don't closely follow the PEP process or Python release announcements. However, to minimize the chance of introducing new problems for end users, we'll do this *without* using the warnings system, so even running with ``-Werror`` won't turn it into a runtime exception * any changes made will use *existing* configuration options To minimize the negative impact on systems currently correctly configured to use GB-18030 or another partially ASCII compatible universal encoding leads to an additional design principle: * if a UTF-8 based Linux container is run on a host that is explicitly configured to use a non-UTF-8 encoding, and tries to exchange locally encoded data with that host rather than exchanging explicitly UTF-8 encoded data, CPython will endeavour to correctly round-trip host provided data that is concatenated or split solely at common ASCII compatible code points, but may otherwise emit nonsensical results. Specification ============= To better handle the cases where CPython would otherwise end up attempting to operate in the ``C`` locale, this PEP proposes that CPython automatically attempt to coerce the legacy ``C`` locale to a UTF-8 based locale when it is run as a standalone command line application. 
It further proposes to emit a warning on stderr if the legacy ``C`` locale is in effect at the point where the language runtime itself is initialized, and the PEP 540 UTF-8 encoding override is also disabled, in order to warn system and application integrators that they're running CPython in an unsupported configuration. Legacy C locale coercion in the standalone Python interpreter binary -------------------------------------------------------------------- When run as a standalone application, CPython has the opportunity to reconfigure the C locale before any locale dependent operations are executed in the process. This means that it can change the locale settings not only for the CPython runtime, but also for any other C/C++ components running in the current process (e.g. as part of extension modules), as well as in subprocesses that inherit their environment from the current process. After calling ``setlocale(LC_ALL, "")`` to initialize the locale settings in the current process, the main interpreter binary will be updated to include the following call:: const char *ctype_loc = setlocale(LC_CTYPE, NULL); This cryptic invocation is the API that C provides to query the current locale setting without changing it. Given that query, it is possible to check for exactly the ``C`` locale with ``strcmp``:: ctype_loc != NULL && strcmp(ctype_loc, "C") == 0 # true only in the C locale This call also returns ``"C"`` when either no particular locale is set, or the nominal locale is set to an alias for the ``C`` locale (such as ``POSIX``). Given this information, CPython can then attempt to coerce the locale to one that uses UTF-8 rather than ASCII as the default encoding. Three such locales will be tried: * ``C.UTF-8`` (available at least in Debian, Ubuntu, and Fedora 25+, and expected to be available by default in a future version of glibc) * ``C.utf8`` (available at least in HP-UX) * ``UTF-8`` (available in at least some \*BSD variants) For ``C.UTF-8`` and ``C.utf8``, the coercion will be implemented by actually setting the ``LANG`` and ``LC_ALL`` environment variables to the candidate locale name, such that future calls to ``setlocale()`` will see them, as will other components looking for those settings (such as GUI development frameworks). For the platforms where it is defined, ``UTF-8`` is a partial locale that only defines the ``LC_CTYPE`` category. Accordingly, only the ``LC_CTYPE`` environment variable would be set when using this fallback option. To adjust automatically to future changes in locale availability, these checks will be implemented at runtime on all platforms other than Mac OS X and Windows, rather than attempting to determine which locales to try at compile time. If the locale settings are changed successfully, and the ``PYTHONIOENCODING`` environment variable is currently unset, then it will be forced to ``PYTHONIOENCODING=utf-8:surrogateescape``. When this locale coercion is activated, the following warning will be printed on stderr, with the warning containing whichever locale was successfully configured:: Python detected LC_CTYPE=C, LC_ALL & LANG set to C.UTF-8 (set another locale or PYTHONCOERCECLOCALE=0 to disable this locale coercion behaviour). When falling back to the ``UTF-8`` locale, the message would be slightly different:: Python detected LC_CTYPE=C, LC_CTYPE set to UTF-8 (set another locale or PYTHONCOERCECLOCALE=0 to disable this locale coercion behaviour). 
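To make the overall sequence easier to follow, here is a rough, purely illustrative Python-level
rendering of the environment changes described above for a target locale that has already been found
to be available (the actual change is C code in the interpreter's ``main()`` function, and the helper
name below is hypothetical)::

    import locale
    import os
    import sys

    def coerce_c_locale(target="C.UTF-8"):
        """Approximate the proposed coercion, for illustration only."""
        if os.environ.get("PYTHONCOERCECLOCALE") == "0":
            return  # explicit opt-out: leave the legacy C locale alone
        if locale.setlocale(locale.LC_CTYPE) not in ("C", "POSIX"):
            return  # some other locale is configured: respect it
        if target == "UTF-8":
            # partial locale: only the LC_CTYPE category is overridden
            os.environ["LC_CTYPE"] = target
            details = "LC_CTYPE set to UTF-8"
        else:
            # full locale: export it so subprocesses and other locale aware
            # C/C++ components see it as well
            os.environ["LC_ALL"] = os.environ["LANG"] = target
            details = "LC_ALL & LANG set to " + target
        os.environ.setdefault("PYTHONIOENCODING", "utf-8:surrogateescape")
        print("Python detected LC_CTYPE=C,", details,
              "(set another locale or PYTHONCOERCECLOCALE=0 to disable "
              "this locale coercion behaviour).", file=sys.stderr)

Running ``python3 -c "import sys; print(sys.stdout.encoding, sys.stdout.errors)"`` with and without
``PYTHONIOENCODING`` set in the environment is a quick way to observe the effect of that last step.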
In combination with PEP 540, this locale coercion will mean that the standard Python binary *and* locale
aware C/C++ extensions should once again "just work" in the three main failure cases we're aware of
(missing locale settings, SSH forwarding of unknown locales, and developers explicitly requesting
``LANG=C``), as long as the target platform provides at least one of the candidate UTF-8 based
environments. If ``PYTHONCOERCECLOCALE=0`` is set, or none of the candidate locales is successfully
configured, then initialization will continue as usual in the C locale and the Unicode compatibility
warning described in the next section will be emitted just as it would for any other application.
The interpreter will always check for the ``PYTHONCOERCECLOCALE`` environment variable (even when running
under the ``-E`` or ``-I`` switches), as the locale coercion check necessarily takes place before any
command line argument processing. Changes to the runtime initialization process
--------------------------------------------- By the time that ``Py_Initialize`` is called, arbitrary
locale-dependent operations may have taken place in the current process. This means that by the time it
is called, it is *too late* to switch to a different locale - doing so would introduce inconsistencies in
decoded text, even in the context of the standalone Python interpreter binary. Accordingly, when
``Py_Initialize`` is called and CPython detects that the configured locale is still the default ``C``
locale *and* the ``PYTHONUTF8`` feature from PEP 540 is disabled, the following warning will be issued::
Python runtime initialized with LC_CTYPE=C (a locale with default ASCII encoding), which may cause Unicode
compatibility problems. Using C.UTF-8, C.utf8, or UTF-8 (if available) as alternative Unicode-compatible
locales is recommended. In this case, no actual change will be made to the locale settings. Instead, the
warning informs both system and application integrators that they're running Python 3 in a configuration
that we don't expect to work properly. The second sentence providing recommendations would be conditionally
compiled based on the operating system (e.g. recommending ``LC_CTYPE=UTF-8`` on \*BSD systems).
New build-time configuration options ------------------------------------ While both of the above
behaviours would be enabled by default, they would also have new associated configuration options and
preprocessor definitions for the benefit of redistributors that want to override those default settings.
The locale coercion behaviour would be controlled by the flag ``--with[out]-c-locale-coercion``, which
would set the ``PY_COERCE_C_LOCALE`` preprocessor definition. The locale warning behaviour would be
controlled by the flag ``--with[out]-c-locale-warning``, which would set the ``PY_WARN_ON_C_LOCALE``
preprocessor definition. On platforms where they would have no effect (e.g. Mac OS X, iOS, Android,
Windows) these preprocessor variables would always be undefined. Platform Support Changes
======================== A new "Legacy C Locale" section will be added to PEP 11 that states: * as of
CPython 3.7, the legacy C locale is only supported when operating in "UTF-8" mode.
Any Unicode handling issues that occur only in that locale and cannot be reproduced in an appropriately configured non-ASCII locale will be closed as "won't fix" * as of CPython 3.7, \*nix platforms are expected to provide at least one of ``C.UTF-8`` (full locale), ``C.utf8`` (full locale) or ``UTF-8`` ( ``LC_CTYPE``-only locale) as an alternative to the legacy ``C`` locale. Any Unicode related integration problems with C/C++ extensions that occur only in that locale and cannot be reproduced in an appropriately configured non-ASCII locale will be closed as "won't fix". Rationale ========= Improving the handling of the C locale -------------------------------------- It has been clear for some time that the C locale's default encoding of ``ASCII`` is entirely the wrong choice for development of modern networked services. Newer languages like Rust and Go have eschewed that default entirely, and instead made it a deployment requirement that systems be configured to use UTF-8 as the text encoding for operating system interfaces. Similarly, Node.js assumes UTF-8 by default (a behaviour inherited from the V8 JavaScript engine) and requires custom build settings to indicate it should use the system locale settings for locale-aware operations. Both the JVM and the .NET CLR use UTF-16-LE as their primary encoding for passing text between applications and the underlying platform. The challenge for CPython has been the fact that in addition to being used for network service development, it is also extensively used as an embedded scripting language in larger applications, and as a desktop application development language, where it is more important to be consistent with other C/C++ components sharing the same process, as well as with the user's desktop locale settings, than it is with the emergent conventions of modern network service development. The core premise of this PEP is that for *all* of these use cases, the assumption of ASCII implied by the default "C" locale is the wrong choice, and furthermore that the following assumptions are valid: * in desktop application use cases, the process locale will *already* be configured appropriately, and if it isn't, then that is an operating system or embedding application level problem that needs to be reported to and resolved by the operating system provider or application developer * in network service development use cases (especially those based on Linux containers), the process locale may not be configured *at all*, and if it isn't, then the expectation is that components will impose their own default encoding the way Rust, Go and Node.js do, rather than trusting the legacy C default encoding of ASCII the way CPython currently does Defaulting to "surrogateescape" error handling on the standard IO streams ------------------------------------------------------------------------- By coercing the locale away from the legacy C default and its assumption of ASCII as the preferred text encoding, this PEP also disables the implicit use of the "surrogateescape" error handler on the standard IO streams that was introduced in Python 3.5 ([15_]), as well as the automatic use of ``surrogateescape`` when operating in PEP 540's UTF-8 mode. Rather than introducing yet another configuration option to address that, this PEP proposes to use the existing ``PYTHONIOENCODING`` setting to ensure that the ``surrogateescape`` handler is enabled when the interpreter is required to make assumptions regarding the expected filesystem encoding. 
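To make the effect of that error handler concrete, here is a short illustrative snippet that reuses the
GB-18030 byte sequence shown in the example below: bytes that aren't valid UTF-8 survive a decode/re-encode
round trip, and splitting on an ASCII code point leaves them untouched::

    # GB-18030 bytes for the sample string used later in this section;
    # they are not valid UTF-8, so a strict decode would fail
    gb_bytes = b"\x816\xbd6\x810\x9d0\x817\xa29\x816\xbc4\x810\x8b3\x816\x8d6"

    text = gb_bytes.decode("utf-8", errors="surrogateescape")
    print(ascii(text))  # undecodable bytes appear as lone surrogates (\udc81 etc.)

    # Splitting and rejoining on an ASCII separator leaves the escaped bytes
    # alone, and re-encoding recovers the original data exactly
    value = gb_bytes + b":" + gb_bytes
    parts = value.decode("utf-8", "surrogateescape").split(":")
    assert ":".join(parts).encode("utf-8", "surrogateescape") == value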
The aim of this behaviour is to attempt to ensure that operating system provided text values are typically able to be transparently passed through a Python 3 application even if it is incorrect in assuming that that text has been encoded as UTF-8. In particular, GB 18030 [12_] is a Chinese national text encoding standard that handles all Unicode code points, that is formally incompatible with both ASCII and UTF-8, but will nevertheless often tolerate processing as surrogate escaped data - the points where GB 18030 reuses ASCII byte values in an incompatible way are likely to be invalid in UTF-8, and will therefore be escaped and opaque to string processing operations that split on or search for the relevant ASCII code points. Operations that don't involve splitting on or searching for particular ASCII or Unicode code point values are almost certain to work correctly. Similarly, Shift-JIS [13_] and ISO-2022-JP [14_] remain in widespread use in Japan, and are incompatible with both ASCII and UTF-8, but will tolerate text processing operations that don't involve splitting on or searching for particular ASCII or Unicode code point values. As an example, consider two files, one encoded with UTF-8 (the default encoding for ``en_AU.UTF-8``), and one encoded with GB-18030 (the default encoding for ``zh_CN.gb18030``):: $ python3 -c 'open("utf8.txt", "wb").write("??????\n".encode("utf-8"))' $ python3 -c 'open("gb18030.txt", "wb").write("??????\n".encode("gb18030"))' On disk, we can see that these are two very different files:: $ python3 -c 'print("UTF-8: ", open("utf8.txt", "rb").read().strip()); \ print("GB18030:", open("gb18030.txt", "rb").read().strip())' UTF-8: b'\xe2\x84\x99\xc6\xb4\xe2\x98\x82\xe2\x84\x8c\xc3\xb8\xe1\xbc\xa4\n' GB18030: b'\x816\xbd6\x810\x9d0\x817\xa29\x816\xbc4\x810\x8b3\x816\x8d6\n' That nevertheless can both be rendered correctly to the terminal as long as they're decoded prior to printing:: $ python3 -c 'print("UTF-8: ", open("utf8.txt", "r", encoding="utf-8").read().strip()); \ print("GB18030:", open("gb18030.txt", "r", encoding="gb18030").read().strip())' UTF-8: ?????? GB18030: ?????? By contrast, if we just pass along the raw bytes, as ``cat`` and similar C/C++ utilities will tend to do:: $ LANG=en_AU.UTF-8 cat utf8.txt gb18030.txt ?????? ?6?6?0?0?7?9?6?4?0?3?6?6 Even setting a specifically Chinese locale won't help in getting the GB-18030 encoded file rendered correctly:: $ LANG=zh_CN.gb18030 cat utf8.txt gb18030.txt ?????? ?6?6?0?0?7?9?6?4?0?3?6?6 The problem is that the *terminal* encoding setting remains UTF-8, regardless of the nominal locale. A GB18030 terminal can be emulated using the ``iconv`` utility:: $ cat utf8.txt gb18030.txt | iconv -f GB18030 -t UTF-8 ???????? ?????? This reverses the problem, such that the GB18030 file is rendered correctly, but the UTF-8 file has been converted to unrelated hanzi characters, rather than the expected rendering of "Python" as non-ASCII characters. With the emulated GB18030 terminal encoding, assuming UTF-8 in Python results in *both* files being displayed incorrectly:: $ python3 -c 'print("UTF-8: ", open("utf8.txt", "r", encoding="utf-8").read().strip()); \ print("GB18030:", open("gb18030.txt", "r", encoding="gb18030").read().strip())' \ | iconv -f GB18030 -t UTF-8 UTF-8: ???????? GB18030: ???????? 
However, setting the locale correctly means that the emulated GB18030 terminal now displays both files as originally intended:: $ LANG=zh_CN.gb18030 \ python3 -c 'print("UTF-8: ", open("utf8.txt", "r", encoding="utf-8").read().strip()); \ print("GB18030:", open("gb18030.txt", "r", encoding="gb18030").read().strip())' \ | iconv -f GB18030 -t UTF-8 UTF-8: ℙƴ☂ℌøἤ GB18030: ℙƴ☂ℌøἤ The rationale for retaining ``surrogateescape`` as the default IO encoding is that it will preserve the following helpful behaviour in the C locale:: $ cat gb18030.txt \ | LANG=C python3 -c "import sys; print(sys.stdin.read())" \ | iconv -f GB18030 -t UTF-8 ℙƴ☂ℌøἤ Rather than reverting to the exception seen when a UTF-8 based locale is explicitly configured:: $ cat gb18030.txt \ | python3 -c "import sys; print(sys.stdin.read())" \ | iconv -f GB18030 -t UTF-8 Traceback (most recent call last): File "<string>", line 1, in <module> File "/usr/lib64/python3.5/codecs.py", line 321, in decode (result, consumed) = self._buffer_decode(data, self.errors, final) UnicodeDecodeError: 'utf-8' codec can't decode byte 0x81 in position 0: invalid start byte Note: an alternative to setting ``PYTHONIOENCODING`` as the PEP currently proposes would be to instead *always* default to ``surrogateescape`` on the standard streams, and require the use of ``PYTHONIOENCODING=:strict`` to request text encoding validation during stream processing. Adopting such an approach would bring Python 3 more into line with typical C/C++ tools that pass along the raw bytes without checking them for conformance to their nominal encoding, and would hence also make the last example display the desired output:: $ cat gb18030.txt \ | PYTHONIOENCODING=:surrogateescape python3 -c "import sys; print(sys.stdin.read())" \ | iconv -f GB18030 -t UTF-8 ℙƴ☂ℌøἤ Dropping official support for ASCII based text handling in the legacy C locale ------------------------------------------------------------------------------ We've been trying to get strict bytes/text separation to work reliably in the legacy C locale for over a decade at this point. Not only haven't we been able to get it to work, neither has anyone else - the only viable alternatives identified have been to pass the bytes along verbatim without eagerly decoding them to text (C/C++, Python 2.x, Ruby, etc), or else to ignore the nominal C/C++ locale encoding entirely and assume the use of either UTF-8 (PEP 540, Rust, Go, Node.js, etc) or UTF-16-LE (JVM, .NET CLR). While this PEP ensures that developers that need to do so can still opt-in to running their Python code in the legacy C locale, it also makes clear that we *don't* expect Python 3's Unicode handling to be reliable in that configuration, and the recommended alternative is to use a more appropriate locale setting. Providing implicit locale coercion only when running standalone --------------------------------------------------------------- Over the course of Python 3.x development, multiple attempts have been made to improve the handling of incorrect locale settings at the point where the Python interpreter is initialised. The problem that emerged is that this is ultimately *too late* in the interpreter startup process - data such as command line arguments and the contents of environment variables may have already been retrieved from the operating system and processed under the incorrect ASCII text encoding assumption well before ``Py_Initialize`` is called.
The problems created by those inconsistencies were then even harder to diagnose and debug than those created by believing the operating system's claim that ASCII was a suitable encoding to use for operating system interfaces. This was the case even for the default CPython binary, let alone larger C/C++ applications that embed CPython as a scripting engine. The approach proposed in this PEP handles that problem by moving the locale coercion as early as possible in the interpreter startup sequence when running standalone: it takes place directly in the C-level ``main()`` function, even before calling in to the `Py_Main()`` library function that implements the features of the CPython interpreter CLI. The ``Py_Initialize`` API then only gains an explicit warning (emitted on ``stderr``) when it detects use of the ``C`` locale, and relies on the embedding application to specify something more reasonable. Querying LC_CTYPE for C locale detection ---------------------------------------- ``LC_CTYPE`` is the actual locale category that CPython relies on to drive the implicit decoding of environment variables, command line arguments, and other text values received from the operating system. As such, it makes sense to check it specifically when attempting to determine whether or not the current locale configuration is likely to cause Unicode handling problems. Setting both LANG & LC_ALL for C.UTF-8 locale coercion ------------------------------------------------------ Python is often used as a glue language, integrating other C/C++ ABI compatible components in the current process, and components written in arbitrary languages in subprocesses. Setting ``LC_ALL`` to ``C.UTF-8`` imposes a locale setting override on all C/C++ components in the current process and in any subprocesses that inherit the current environment. This is important to handle cases where the problem has arisen from a setting like ``LC_CTYPE=UTF-8`` being provided on a system where no ``UTF-8`` locale is defined (e.g. when a Mac OS X ssh client is configured to forward locale settings, and the user logs into a Linux server). Setting ``LANG`` to ``C.UTF-8`` ensures that even components that only check the ``LANG`` fallback for their locale settings will still use ``C.UTF-8``. Together, these should ensure that when the locale coercion is activated, the switch to the C.UTF-8 locale will be applied consistently across the current process and any subprocesses that inherit the current environment. Allowing restoration of the legacy behaviour -------------------------------------------- The CPython command line interpreter is often used to investigate faults that occur in other applications that embed CPython, and those applications may still be using the C locale even after this PEP is implemented. Providing a simple on/off switch for the locale coercion behaviour makes it much easier to reproduce the behaviour of such applications for debugging purposes, as well as making it easier to reproduce the behaviour of older 3.x runtimes even when running a version with this change applied. Implementation ============== A draft implementation of the change (including test cases and documentation) is linked from issue 28180 [1_], which is an end user request that ``sys.getfilesystemencoding()`` default to ``utf-8`` rather than ``ascii``. This patch is now being maintained as the ``pep538-coerce-c-locale`` feature branch [18_] in Nick Coghlan's fork of the CPython repository on GitHub. 
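As a rough Python-level sketch of the coercion logic described above (an approximation only: the actual implementation is written in C and runs in the CPython ``main()`` function before ``Py_Main()`` is called, and the helper name here is purely illustrative)::

    import locale
    import os

    _COERCION_TARGETS = ("C.UTF-8", "C.utf8", "UTF-8")

    def _coerce_c_locale_sketch():
        # Only coerce when the environment-selected LC_CTYPE is the legacy C locale
        if locale.setlocale(locale.LC_CTYPE, "") in ("C", "POSIX"):
            for target in _COERCION_TARGETS:
                try:
                    locale.setlocale(locale.LC_CTYPE, target)
                except locale.Error:
                    continue  # this candidate locale isn't available here
                # Set both LC_ALL and LANG so other C/C++ components in this
                # process and in subprocesses inherit the same configuration
                os.environ["LC_ALL"] = target
                os.environ["LANG"] = target
                return target
        return None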
NOTE: As discussed in [1_], the currently posted draft implementation has some known issues on Android. Backporting to earlier Python 3 releases ======================================== Backporting to Python 3.6.0 --------------------------- If this PEP is accepted for Python 3.7, redistributors backporting the change specifically to their initial Python 3.6.0 release will be both allowed and encouraged. However, such backports should only be undertaken either in conjunction with the changes needed to also provide a suitable locale by default, or else specifically for platforms where such a locale is already consistently available. Backporting to other 3.x releases --------------------------------- While the proposed behavioural change is seen primarily as a bug fix addressing Python 3's current misbehaviour in the default ASCII-based C locale, it still represents a reasonably significant change in the way CPython interacts with the C locale system. As such, while some redistributors may still choose to backport it to even earlier Python 3.x releases based on the needs and interests of their particular user base, this wouldn't be encouraged as a general practice. However, configuring Python 3 *environments* (such as base container images) to use these configuration settings by default is both allowed and recommended. Acknowledgements ================ The locale coercion approach proposed in this PEP is inspired directly by Armin Ronacher's handling of this problem in the ``click`` command line utility development framework [2_]:: $ LANG=C python3 -c 'import click; cli = click.command()(lambda:None); cli()' Traceback (most recent call last): ... RuntimeError: Click will abort further execution because Python 3 was configured to use ASCII as encoding for the environment. Either run this under Python 2 or consult http://click.pocoo.org/python3/ for mitigation steps. This system supports the C.UTF-8 locale which is recommended. You might be able to resolve your issue by exporting the following environment variables: export LC_ALL=C.UTF-8 export LANG=C.UTF-8 The change was originally proposed as a downstream patch for Fedora's system Python 3.6 package [3_], and then reformulated as a PEP for Python 3.7 with a section allowing for backports to earlier versions by redistributors. The initial draft was posted to the Python Linux SIG for discussion [10_] and then amended based on both that discussion and Victor Stinner's work in PEP 540 [11_]. The "ℙƴ☂ℌøἤ" string used in the Unicode handling examples throughout this PEP is taken from Ned Batchelder's excellent "Pragmatic Unicode" presentation [9_]. Stephen Turnbull has long provided valuable insight into the text encoding handling challenges he regularly encounters at the University of Tsukuba (筑波大学). References ========== .. [1] CPython: sys.getfilesystemencoding() should default to utf-8 (http://bugs.python.org/issue28180) .. [2] Locale configuration required for click applications under Python 3 (http://click.pocoo.org/5/python3/#python-3-surrogate-handling) .. [3] Fedora: force C.UTF-8 when Python 3 is run under the C locale (https://bugzilla.redhat.com/show_bug.cgi?id=1404918) .. [4] GNU C: How Programs Set the Locale ( https://www.gnu.org/software/libc/manual/html_node/Setting-the-Locale.html) .. [5] GNU C: Locale Categories ( https://www.gnu.org/software/libc/manual/html_node/Locale-Categories.html) .. [6] glibc C.UTF-8 locale proposal (https://sourceware.org/glibc/wiki/Proposals/C.UTF-8) .. [7] GNOME Flatpak (http://flatpak.org/) ..
[8] Ubuntu Snappy (https://www.ubuntu.com/desktop/snappy) .. [9] Pragmatic Unicode (http://nedbatchelder.com/text/unipain.html) .. [10] linux-sig discussion of initial PEP draft (https://mail.python.org/pipermail/linux-sig/2017-January/000014.html) .. [11] Feedback notes from linux-sig discussion and PEP 540 (https://github.com/python/peps/issues/171) .. [12] GB 18030 (https://en.wikipedia.org/wiki/GB_18030) .. [13] Shift-JIS (https://en.wikipedia.org/wiki/Shift_JIS) .. [14] ISO-2022 (https://en.wikipedia.org/wiki/ISO/IEC_2022) .. [15] Use "surrogateescape" error handler for sys.stdin and sys.stdout on UNIX for the C locale (https://bugs.python.org/issue19977) .. [16] test_readline.test_nonascii fails on Android (http://bugs.python.org/issue28997) .. [17] UTF-8 locale discussion on "locale.getdefaultlocale() fails on Mac OS X with default language set to English" (http://bugs.python.org/issue18378#msg215215) .. [18] GitHub branch diff for ``ncoghlan:pep538-coerce-c-locale`` ( https://github.com/python/cpython/compare/master...ncoghlan:pep538-coerce-c-locale ) Copyright ========= This document has been placed in the public domain under the terms of the CC0 1.0 license: https://creativecommons.org/publicdomain/zero/1.0/ -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia -------------- next part -------------- An HTML attachment was scrubbed... URL: From nad at python.org Sun Mar 5 07:01:30 2017 From: nad at python.org (Ned Deily) Date: Sun, 5 Mar 2017 07:01:30 -0500 Subject: [Python-Dev] [RELEASE] Python 3.6.1rc1 is now available Message-ID: <50484B0A-BA03-42D0-B5EC-4D8E1CF1BC26@python.org> On behalf of the Python development community and the Python 3.6 release team, I would like to announce the availability of Python 3.6.1rc1. 3.6.1rc1 is the first release candidate for Python 3.6.1, the first maintenance release of Python 3.6. 3.6.0 was released on 2017-12-22 to great interest and now, three months later, we are providing the first set of bugfixes and documentation updates for it. While 3.6.1rc1 is a preview release and, thus, not intended for production environments, we encourage you to explore it and provide feedback via the Python bug tracker (https://bugs.python.org). Although it should be transparent to users of Python, 3.6.1 is the first release after some major changes to our development process so we ask users who build Python from source to be on the lookout for any unexpected differences. 3.6.1 is planned for final release on 2017-03-20 with the next maintenance release expected to follow in about 3 months. Please see "What?s New In Python 3.6" for more information: https://docs.python.org/3.6/whatsnew/3.6.html You can find Python 3.6.1rc1 here: https://www.python.org/downloads/release/python-361rc1/ More information about the 3.6 release schedule can be found here: https://www.python.org/dev/peps/pep-0494/ -- Ned Deily nad at python.org -- [] From steve.dower at python.org Sun Mar 5 09:12:19 2017 From: steve.dower at python.org (Steve Dower) Date: Sun, 5 Mar 2017 06:12:19 -0800 Subject: [Python-Dev] [RELEASE] Python 3.6.1rc1 is now available In-Reply-To: <50484B0A-BA03-42D0-B5EC-4D8E1CF1BC26@python.org> References: <50484B0A-BA03-42D0-B5EC-4D8E1CF1BC26@python.org> Message-ID: I just want to emphasize that this is a *very* important release to test, as it is the first one made after migrating the project to github. Please spend a bit of time running it through your normal build/installation steps and let us know at https://bugs.python.org/ if anything seems off. 
Top-posted from my Windows Phone -----Original Message----- From: "Ned Deily" Sent: ?3/?5/?2017 4:08 To: "python-announce at python.org" ; "python-list at python.org" ; "Python-Dev" ; "python-committers" Subject: [Python-Dev] [RELEASE] Python 3.6.1rc1 is now available On behalf of the Python development community and the Python 3.6 release team, I would like to announce the availability of Python 3.6.1rc1. 3.6.1rc1 is the first release candidate for Python 3.6.1, the first maintenance release of Python 3.6. 3.6.0 was released on 2017-12-22 to great interest and now, three months later, we are providing the first set of bugfixes and documentation updates for it. While 3.6.1rc1 is a preview release and, thus, not intended for production environments, we encourage you to explore it and provide feedback via the Python bug tracker (https://bugs.python.org). Although it should be transparent to users of Python, 3.6.1 is the first release after some major changes to our development process so we ask users who build Python from source to be on the lookout for any unexpected differences. 3.6.1 is planned for final release on 2017-03-20 with the next maintenance release expected to follow in about 3 months. Please see "What?s New In Python 3.6" for more information: https://docs.python.org/3.6/whatsnew/3.6.html You can find Python 3.6.1rc1 here: https://www.python.org/downloads/release/python-361rc1/ More information about the 3.6 release schedule can be found here: https://www.python.org/dev/peps/pep-0494/ -- Ned Deily nad at python.org -- [] _______________________________________________ Python-Dev mailing list Python-Dev at python.org https://mail.python.org/mailman/listinfo/python-dev Unsubscribe: https://mail.python.org/mailman/options/python-dev/steve.dower%40python.org -------------- next part -------------- An HTML attachment was scrubbed... URL: From songofacandy at gmail.com Sun Mar 5 09:39:20 2017 From: songofacandy at gmail.com (INADA Naoki) Date: Sun, 5 Mar 2017 23:39:20 +0900 Subject: [Python-Dev] PEP 538: Coercing the legacy C locale to a UTF-8 based locale In-Reply-To: References: Message-ID: LGTM and I love this PEP and PEP 540. Some comments: ... > * PEP 540 proposes to entirely decouple CPython's default text encoding from > the C locale system in that case, allowing text handling inconsistencies > to > arise between CPython and other C/C++ components running in the same > process > and in subprocesses. This approach aims to make CPython behave less like a > locale-aware C/C++ application, and more like C/C++ independent language > runtimes like the JVM, .NET CLR, Go, Node.js, and Rust I prefer just "locale-aware" / "locale-independent" (application | library | function) to "locale-aware C/C++ application" / "C/C++ independent" here. Both of Rust and Node.JS are linked with libc. And Node.JS (v8) is written in C++. They just demonstrates many people prefer "always UTF-8" to "LC_CTYPE aware encoding" in real world application. And C/C++ can be used for locale-aware and locale-independent application. I can print "????????" in C locale, because stdio is byte transparent. There are many locale independent libraries written in C (zlib, libjpeg, etc..), and some functions in libc are locale-independent or LC_CTYPE independent (printf is locale-aware, but it uses LC_NUMERIC, not LC_CTYPE). ... 
> Backporting to Python 3.6.0 > --------------------------- > > If this PEP is accepted for Python 3.7, redistributors backporting the > change > specifically to their initial Python 3.6.0 release will be both allowed and > encouraged. However, such backports should only be undertaken either in > conjunction with the changes needed to also provide a suitable locale by > default, or else specifically for platforms where such a locale is already > consistently available. > If it's really encouraged, how about providing patch officially, or backport it in 3.6.2 but disabled by default? Some Python users (including my company) uses pyenv or pythonz to build Python from source. This PEP and PEP 540 are important for them too. From darcy at vex.net Sun Mar 5 10:41:07 2017 From: darcy at vex.net (D'Arcy Cain) Date: Sun, 5 Mar 2017 10:41:07 -0500 Subject: [Python-Dev] [RELEASE] Python 3.6.1rc1 is now available In-Reply-To: <50484B0A-BA03-42D0-B5EC-4D8E1CF1BC26@python.org> References: <50484B0A-BA03-42D0-B5EC-4D8E1CF1BC26@python.org> Message-ID: <9cdc9d93-9015-826f-fa32-cd4c2811f67a@vex.net> On 2017-03-05 07:01 AM, Ned Deily wrote: > On behalf of the Python development community and the Python 3.6 release > team, I would like to announce the availability of Python 3.6.1rc1. > 3.6.1rc1 is the first release candidate for Python 3.6.1, the first > maintenance release of Python 3.6. 3.6.0 was released on 2017-12-22 from __future__ import 3.6.0 Did Guido finally get that time machine working? -- D'Arcy J.M. Cain System Administrator, Vex.Net http://www.Vex.Net/ IM:darcy at Vex.Net VoIP: sip:darcy at Vex.Net From senthil at uthcode.com Sun Mar 5 13:14:05 2017 From: senthil at uthcode.com (Senthil Kumaran) Date: Sun, 5 Mar 2017 10:14:05 -0800 Subject: [Python-Dev] Reports on my CPython contributions In-Reply-To: References: Message-ID: On Fri, Feb 24, 2017 at 3:50 PM, Victor Stinner wrote: > Recently, I wrote reports of my CPython contributions since 1 year > 1/2. Some people on this list might be interested, so here is the > list. They are prolific! Thanks for keeping a log and sharing. -- Senthil From storchaka at gmail.com Sun Mar 5 15:47:36 2017 From: storchaka at gmail.com (Serhiy Storchaka) Date: Sun, 5 Mar 2017 22:47:36 +0200 Subject: [Python-Dev] Reports on my CPython contributions In-Reply-To: References: Message-ID: On 25.02.17 01:50, Victor Stinner wrote: > Recently, I wrote reports of my CPython contributions since 1 year > 1/2. Some people on this list might be interested, so here is the > list. Nice reading!
From victor.stinner at gmail.com Sun Mar 5 16:39:20 2017 From: victor.stinner at gmail.com (Victor Stinner) Date: Sun, 5 Mar 2017 22:39:20 +0100 Subject: [Python-Dev] Reports on my CPython contributions In-Reply-To: References: Message-ID: Le 5 mars 2017 19:14, "Senthil Kumaran" a ?crit : On Fri, Feb 24, 2017 at 3:50 PM, Victor Stinner wrote: > Recently, I wrote reports of my CPython contributions since 1 year > 1/2. Some people on this list might be interested, so here is the > list. They are prolific! Thanks for keeping a log and sharing. I fear that these articles are just boring, so don't hesitate to send me a private email if you have suggestions ;-) My static blog has no comment. Victor -------------- next part -------------- An HTML attachment was scrubbed... URL: From yaroslav.lehenchuk at djangostars.com Mon Mar 6 10:53:56 2017 From: yaroslav.lehenchuk at djangostars.com (Yaroslav Lehenchuk) Date: Mon, 6 Mar 2017 17:53:56 +0200 Subject: [Python-Dev] we would like to share python articles with you Message-ID: Hi! I like your resource. We in Django Stars writing a lot about Python/Django and we would like to share our articles with other professionals and geeks. Here are two examples of our blog content: http://djangostars.com/blog/continuous-integration-circleci- vs-travisci-vs-jenkins/ http://djangostars.com/blog/how-to-create-and-deploy-a-telegram-bot/ And we also have an account on git hub where we sharing our libraries and open source projects. Tell me please, are you interested in such cooperation? I am talking about submitting our post to your tips-digest. Waiting for your response. Thank you in advance. -- Best Regards, Yaroslav Lehenchuk Marketer at Django Stars Cell: +380730903748 Skype: yaroslav_le Email: yaroslav.lehenchuk at djangostars.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Mon Mar 6 11:25:00 2017 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 7 Mar 2017 02:25:00 +1000 Subject: [Python-Dev] PEP 538: Coercing the legacy C locale to a UTF-8 based locale In-Reply-To: References: Message-ID: On 6 March 2017 at 00:39, INADA Naoki wrote: > I prefer just "locale-aware" / "locale-independent" (application | > library | function) > to "locale-aware C/C++ application" / "C/C++ independent" here. Good point, I'll fix that in the next update. > Backporting to Python 3.6.0 > > --------------------------- > > > > If this PEP is accepted for Python 3.7, redistributors backporting the > > change > > specifically to their initial Python 3.6.0 release will be both allowed > and > > encouraged. However, such backports should only be undertaken either in > > conjunction with the changes needed to also provide a suitable locale by > > default, or else specifically for platforms where such a locale is > already > > consistently available. > > > > If it's really encouraged, how about providing patch officially, or > backport it in 3.6.2 > but disabled by default? > Some Python users (including my company) uses pyenv or pythonz to > build Python from source. This PEP and PEP 540 are important for them too. > For PEP 540, the changes are too intrusive to consider it a reasonable candidate for backporting to an earlier feature release, so for that aspect, we'll *all* be waiting for 3.7. 
For this PEP, while it's deliberately unobtrusive to make it more backporting friendly, 3.7 isn't *that* far away, and I didn't think to seriously pursue this approach until well after the 3.6 beta deadline for new features had passed. With it being clearly outside the normal bounds of what's appropriate for a cross-platform maintenance release, that means the only folks that can consider it for earlier releases are those building their own binaries for more constrained target environments. I can definitely make sure the patch is readily available for anyone that wants to apply it to their own builds, though (I'll upload it to both the Python tracker issue and the downstream Fedora Bugzilla entry). I also wouldn't completely close the door on the idea of classifying the change as a bug fix in CPython's handling of the C locale (and hence adding to a latter 3.6.x feature release), but I think the time to pursue that would be *after* we've had a chance to see how folks react to the redistributor customizations. I *think* it will be universally positive (because the status quo really is broken), but it also wouldn't be the first time I've learned something new and confusing about the locale subsystem only after releasing software that relied on an incorrect assumption about it :) Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia -------------- next part -------------- An HTML attachment was scrubbed... URL: From desmoulinmichel at gmail.com Mon Mar 6 11:43:46 2017 From: desmoulinmichel at gmail.com (Michel Desmoulin) Date: Mon, 6 Mar 2017 17:43:46 +0100 Subject: [Python-Dev] we would like to share python articles with you In-Reply-To: References: Message-ID: This mailling list is for coordinating the development of the Python programming language, not to be used for marketing. Share your articles on a social network or a forum such as reddit.com/r/python. Le 06/03/2017 ? 16:53, Yaroslav Lehenchuk a ?crit : > Hi! > > I like your resource. We in Django Stars writing a lot about > Python/Django and we would like to share our articles with other > professionals and geeks. > Here are two examples of our blog content: > http://djangostars.com/blog/continuous-integration-circleci-vs-travisci-vs-jenkins/ > > http://djangostars.com/blog/how-to-create-and-deploy-a-telegram-bot/ > > > And we also have an account on git hub where we sharing our libraries > and open source projects. > > Tell me please, are you interested in such cooperation? I am talking > about submitting our post to your tips-digest. > > Waiting for your response. Thank you in advance. > > -- > Best Regards, > Yaroslav Lehenchuk > Marketer at Django Stars > > Cell: +380730903748 > Skype: yaroslav_le > Email: yaroslav.lehenchuk at djangostars.com > > > > > > > > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/desmoulinmichel%40gmail.com > From raymond.hettinger at gmail.com Mon Mar 6 20:57:39 2017 From: raymond.hettinger at gmail.com (Raymond Hettinger) Date: Mon, 6 Mar 2017 18:57:39 -0700 Subject: [Python-Dev] API design: where to add async variants of existing stdlib APIs? 
In-Reply-To: <6d7d77b5-c008-c94a-348c-34e41a82c97e@gmail.com> References: <6d7d77b5-c008-c94a-348c-34e41a82c97e@gmail.com> Message-ID: <38F293AD-288A-4CEB-9D2E-5463E83F88C9@gmail.com> > On Mar 1, 2017, at 8:47 AM, Yury Selivanov wrote: > >> IMHO this is a good idea*iff* the new APIs really are bound to >> asyncio, rather than being generic across all uses of async/await. > > I agree. There is no need to make asynccontextmanager and > AsyncExitStack dependent on asyncio or specific to asyncio. > > They should both stay framework agnostic (use only protocols > defined by PEP 492 and PEP 525) and both shouldn't be put > into asyncio package. Of course, it makes sense that anything not specific to asyncio should go outside of asyncio. What I'm more concerned about is what the other places actually are. Rather than putting async variants of everything sprinkled all over the standard library, I suggest collecting them all together, perhaps in a new asynctools module. Raymond From guido at python.org Mon Mar 6 21:08:51 2017 From: guido at python.org (Guido van Rossum) Date: Mon, 6 Mar 2017 18:08:51 -0800 Subject: [Python-Dev] API design: where to add async variants of existing stdlib APIs? In-Reply-To: <38F293AD-288A-4CEB-9D2E-5463E83F88C9@gmail.com> References: <6d7d77b5-c008-c94a-348c-34e41a82c97e@gmail.com> <38F293AD-288A-4CEB-9D2E-5463E83F88C9@gmail.com> Message-ID: On Mon, Mar 6, 2017 at 5:57 PM, Raymond Hettinger < raymond.hettinger at gmail.com> wrote: > Of course, it makes sense that anything not specific to asyncio should go > outside of asyncio. > > What I'm more concerned about is what the other places actually are. > Rather than putting async variants of everything sprinkled all over the > standard library, I suggest collecting them all together, perhaps in a new > asynctools module. > That's a tough design choice. I think neither extreme is particularly attractive -- having everything in an asynctools package might also bundle together thing that are entirely unrelated. In the extreme it would be like proposing that all metaclasses should go in a new "metaclasstools" package. I think we did a reasonable job with ABCs: core support goes in abc.py, support for collections ABCs goes into the collections package (in a submodule), and other packages and modules sometimes define ABCs for their own users. Also, in some cases I expect we'll have to create a whole new module instead of updating some ancient piece of code with newfangled async variants to its outdated APIs. -- --Guido van Rossum (python.org/~guido) -------------- next part -------------- An HTML attachment was scrubbed... URL: From desmoulinmichel at gmail.com Tue Mar 7 05:22:38 2017 From: desmoulinmichel at gmail.com (Michel Desmoulin) Date: Tue, 7 Mar 2017 11:22:38 +0100 Subject: [Python-Dev] API design: where to add async variants of existing stdlib APIs? In-Reply-To: References: <6d7d77b5-c008-c94a-348c-34e41a82c97e@gmail.com> <38F293AD-288A-4CEB-9D2E-5463E83F88C9@gmail.com> Message-ID: <62b6e453-b23a-f767-e630-5a6682f3cc72@gmail.com> Last week I had to download a CSV from an FTP and push any update on it using websocket so asyncio was a natural fit and the network part went well. The surprise was that the CSV part would not work as expected. Usually I read csv doing: import csv file_like_object = csv_crawler.get_file() for row in csv.DictReader(file_like_object) But it didn't work because file_like_object.read() was a coroutine which the csv module doesn't handle. 
So I had to do: import csv import io raw_bytes = await stream.read(10000000) wrapped_bytes = io.BytesIO(raw_bytes) text = io.TextIOWrapper(wrapped_bytes, encoding=encoding, errors='replace') for i, row in enumerate(csv.DictReader(text)): Turns out I used asyncio a bit, and I know the stdlib, the io API, etc. But for somebody that doesn't, it's not very easy to figure out. Plus it's not as elegant as traditional Python. Not to mention it loads the entire CSV in memory. So I wondered if I could fix the csv module so it accepts async. But the question arose. Where should I put it ? - Create AsyncDictReader and AsyncReader ? - Add inspect.iscoroutine calls with it in the regular Readers and some __aiter__ and __aenter__ ? - add a csv.async namespace ? What API design are we recommending for exposing both sync and async behaviors ? Le 07/03/2017 à 03:08, Guido van Rossum a écrit : > On Mon, Mar 6, 2017 at 5:57 PM, Raymond Hettinger > > wrote: > > Of course, it makes sense that anything not specific to asyncio > should go outside of asyncio. > > What I'm more concerned about is what the other places actually > are. Rather than putting async variants of everything sprinkled > all over the standard library, I suggest collecting them all > together, perhaps in a new asynctools module. > > > That's a tough design choice. I think neither extreme is particularly > attractive -- having everything in an asynctools package might also > bundle together thing that are entirely unrelated. In the extreme it > would be like proposing that all metaclasses should go in a new > "metaclasstools" package. I think we did a reasonable job with ABCs: > core support goes in abc.py, support for collections ABCs goes into the > collections package (in a submodule), and other packages and modules > sometimes define ABCs for their own users. > > Also, in some cases I expect we'll have to create a whole new module > instead of updating some ancient piece of code with newfangled async > variants to its outdated APIs. > > -- > --Guido van Rossum (python.org/~guido ) > > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/desmoulinmichel%40gmail.com > From brett at python.org Tue Mar 7 12:41:05 2017 From: brett at python.org (Brett Cannon) Date: Tue, 07 Mar 2017 17:41:05 +0000 Subject: [Python-Dev] API design: where to add async variants of existing stdlib APIs? In-Reply-To: <62b6e453-b23a-f767-e630-5a6682f3cc72@gmail.com> References: <6d7d77b5-c008-c94a-348c-34e41a82c97e@gmail.com> <38F293AD-288A-4CEB-9D2E-5463E83F88C9@gmail.com> <62b6e453-b23a-f767-e630-5a6682f3cc72@gmail.com> Message-ID: I don't think a common practice has bubbled up yet for when there's both synchronous and asynchronous versions of an API (closest I have seen is appending an "a" to the async version but that just looks like a spelling mistake to me most of the time). This is why the question of whether separate modules are a better idea is coming up. On Tue, 7 Mar 2017 at 02:24 Michel Desmoulin wrote: > Last week I had to download a CSV from an FTP and push any update on it > using websocket so asyncio was a natural fit and the network part went > well. > > The surprise was that the CSV part would not work as expected.
Usually I > read csv doing: > > import csv > > file_like_object = csv_crawler.get_file() > for row in csv.DictReader(file_like_object) > > But it didn't work because file_like_object.read() was a coroutine which > the csv module doesn't handle. > > So I had to do: > > import csv > import io > > raw_bytes = await stream.read(10000000) > wrapped_bytes = io.BytesIO(raw_bytes) > text = io.TextIOWrapper(wrapped_bytes, encoding=encoding, > errors='replace') > > for i, row in enumerate(csv.DictReader(text)): > > Turns out I used asyncio a bit, and I now the stdlib, the io AIP, etc. > But for somebody that doesn't, it's not very easy to figure out. Plus > it's not as elegant as traditional Python. Not to mention it loads the > entire CSV in memory. > > So I wondered if I could fix the csv module so it accept async. But the > question arised. Where should I put it ? > > - Create AsyncDictReader and AsyncReader ? > - Add inspect.iscoroutine calls widh it in the regular Readers and some > __aiter__ and __aenter__ ? > - add a csv.async namespace ? > > What API design are we recommanding for expose both sync and async > behaviors ? > > > Le 07/03/2017 ? 03:08, Guido van Rossum a ?crit : > > On Mon, Mar 6, 2017 at 5:57 PM, Raymond Hettinger > > > > wrote: > > > > Of course, it makes sense that anything not specific to asyncio > > should go outside of asyncio. > > > > What I'm more concerned about is what the other places actually > > are. Rather than putting async variants of everything sprinkled > > all over the standard library, I suggest collecting them all > > together, perhaps in a new asynctools module. > > > > > > That's a tough design choice. I think neither extreme is particularly > > attractive -- having everything in an asynctools package might also > > bundle together thing that are entirely unrelated. In the extreme it > > would be like proposing that all metaclasses should go in a new > > "metaclasstools" package. I think we did a reasonable job with ABCs: > > core support goes in abc.py, support for collections ABCs goes into the > > collections package (in a submodule), and other packages and modules > > sometimes define ABCs for their own users. > > > > Also, in some cases I expect we'll have to create a whole new module > > instead of updating some ancient piece of code with newfangled async > > variants to its outdated APIs. > > > > -- > > --Guido van Rossum (python.org/~guido ) > > > > > > _______________________________________________ > > Python-Dev mailing list > > Python-Dev at python.org > > https://mail.python.org/mailman/listinfo/python-dev > > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/desmoulinmichel%40gmail.com > > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/brett%40python.org > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ethan at stoneleaf.us Tue Mar 7 13:15:07 2017 From: ethan at stoneleaf.us (Ethan Furman) Date: Tue, 07 Mar 2017 10:15:07 -0800 Subject: [Python-Dev] API design: where to add async variants of existing stdlib APIs? 
In-Reply-To: References: <6d7d77b5-c008-c94a-348c-34e41a82c97e@gmail.com> <38F293AD-288A-4CEB-9D2E-5463E83F88C9@gmail.com> <62b6e453-b23a-f767-e630-5a6682f3cc72@gmail.com> Message-ID: <58BEF8AB.50400@stoneleaf.us> On 03/07/2017 09:41 AM, Brett Cannon wrote: > I don't think a common practice has bubbled up yet for when there's both synchronous and asynchronous versions of an API > (closest I have seen is appending an "a" to the async version but that just looks like a spelling mistake to me most of > the time). This is why the question of whether separate modules are a better idea is coming up. I'm undoubtedly going to show my ignorance with this question, but is it feasible to have both sync and async support in the same object? -- ~Ethan~ From jelle.zijlstra at gmail.com Tue Mar 7 13:37:41 2017 From: jelle.zijlstra at gmail.com (Jelle Zijlstra) Date: Tue, 7 Mar 2017 10:37:41 -0800 Subject: [Python-Dev] API design: where to add async variants of existing stdlib APIs? In-Reply-To: <58BEF8AB.50400@stoneleaf.us> References: <6d7d77b5-c008-c94a-348c-34e41a82c97e@gmail.com> <38F293AD-288A-4CEB-9D2E-5463E83F88C9@gmail.com> <62b6e453-b23a-f767-e630-5a6682f3cc72@gmail.com> <58BEF8AB.50400@stoneleaf.us> Message-ID: 2017-03-07 10:15 GMT-08:00 Ethan Furman : > On 03/07/2017 09:41 AM, Brett Cannon wrote: > > I don't think a common practice has bubbled up yet for when there's both >> synchronous and asynchronous versions of an API >> (closest I have seen is appending an "a" to the async version but that >> just looks like a spelling mistake to me most of >> the time). This is why the question of whether separate modules are a >> better idea is coming up. >> > > I'm undoubtedly going to show my ignorance with this question, but is it > feasible to have both sync and async support in the same object? > > It's possible, but it quickly gets awkward and will require a lot of code duplication. For example, we could make @contextmanager work for async functions by making the _GeneratorContextManager class implement both enter/exit and aenter/aexit, but then you'd get an obscure error if you used with on an async contextmanager or async with on a non-async contextmanager. > -- > ~Ethan~ > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/jelle. > zijlstra%40gmail.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: From srkunze at mail.de Tue Mar 7 14:25:42 2017 From: srkunze at mail.de (Sven R. Kunze) Date: Tue, 7 Mar 2017 20:25:42 +0100 Subject: [Python-Dev] API design: where to add async variants of existing stdlib APIs? In-Reply-To: References: <6d7d77b5-c008-c94a-348c-34e41a82c97e@gmail.com> <38F293AD-288A-4CEB-9D2E-5463E83F88C9@gmail.com> <62b6e453-b23a-f767-e630-5a6682f3cc72@gmail.com> <58BEF8AB.50400@stoneleaf.us> Message-ID: <43ebb6a8-8663-05d3-4a31-33afb0a4e739@mail.de> On 07.03.2017 19:37, Jelle Zijlstra wrote: > > > 2017-03-07 10:15 GMT-08:00 Ethan Furman >: > > On 03/07/2017 09:41 AM, Brett Cannon wrote: > > I don't think a common practice has bubbled up yet for when > there's both synchronous and asynchronous versions of an API > (closest I have seen is appending an "a" to the async version > but that just looks like a spelling mistake to me most of > the time). This is why the question of whether separate > modules are a better idea is coming up. 
> > > I'm undoubtedly going to show my ignorance with this question, but > is it feasible to have both sync and async support in the same object? > > It's possible, but it quickly gets awkward and will require a lot of > code duplication. Correct me if I'm wrong, but we would get the code duplication anyway. async intrinsically does the same thing (just a little bit different) as its sync counterpart. Otherwise, you wouldn't use it. > For example, we could make @contextmanager work for async functions by > making the _GeneratorContextManager class implement both enter/exit > and aenter/aexit, but then you'd get an obscure error if you used with > on an async contextmanager or async with on a non-async contextmanager. -------------- next part -------------- An HTML attachment was scrubbed... URL: From njs at pobox.com Tue Mar 7 19:17:03 2017 From: njs at pobox.com (Nathaniel Smith) Date: Tue, 7 Mar 2017 16:17:03 -0800 Subject: [Python-Dev] API design: where to add async variants of existing stdlib APIs? In-Reply-To: References: <6d7d77b5-c008-c94a-348c-34e41a82c97e@gmail.com> <38F293AD-288A-4CEB-9D2E-5463E83F88C9@gmail.com> <62b6e453-b23a-f767-e630-5a6682f3cc72@gmail.com> Message-ID: On Tue, Mar 7, 2017 at 9:41 AM, Brett Cannon wrote: > I don't think a common practice has bubbled up yet for when there's both > synchronous and asynchronous versions of an API (closest I have seen is > appending an "a" to the async version but that just looks like a spelling > mistake to me most of the time). This is why the question of whether > separate modules are a better idea is coming up. For the CSV case, it might be sensible to factor out the io. Like, provide an API that looks like: pushdictreader = csv.PushDictReader() while pushdictreader: chunk = read_some(...) pushdictreader.push(chunk) for row in pushdictreader: ... This API can now straightforwardly be used with sync and async code. Of course you'd want to wrap it up in a nicer interface, somewhere in the ballpark of: def sync_rows(read_some): pushdictreader = csv.PushDictReader() while pushdictreader: chunk = read_some(...) pushdictreader.push(chunk) for row in pushdictreader: yield row async def async_rows(read_some): pushdictreader = csv.PushDictReader() while pushdictreader: chunk = await read_some(...) pushdictreader.push(chunk) for row in pushdictreader: yield row So there'd still be a bit of code duplication, but much much less. Essentially the idea here is to convert the csv module to sans-io style (http://sans-io.readthedocs.io/). Another option is to make it all-async internally, and then offer a sync facade around it. So like start with the natural all-async interface: class AsyncFileLike(ABC): async def async_read(...): ... class AsyncDictReader: def __init__(self, async_file_like): self._async_file_like = async_file_like async def __anext__(self): ... And (crucially!) let's assume that the only way AsyncDictReader interacts with the coroutine runner is by calls to self._async_file_like.async_read. 
Now we can pass in a secretly-actually-synchronous AsyncFileLike and make a synchronous facade around the whole thing: class AsyncSyncAdapter(AsyncFileLike): def __init__(self, sync_file_like): self._sync_file_like = sync_file_like # Technically an async function, but guaranteed to never yield async def read(self, *args, **kwargs): return self._sync_file_like.read(*args, **kwargs) # Minimal coroutine supervisor: runs async_fn(*args, **kwargs), which must never yield def syncify(async_fn, *args, **kwargs): coro = async_fn(*args, **kwargs) it = coro.__await__() return next(it) class DictReader: def __init__(self, sync_file_like): # Technically an AsyncDictReader, but guaranteed to never yield self._async_dict_reader = AsyncDictReader(AsyncSyncAdapter(sync_file_like)) def __next__(self): return syncify(self._async_dict_reader.__anext__) So here we still have some goo around the edges of the module, but the actual CSV logic only has to be written once, and can still be written in a "pull" style where it does its own I/O, just like it is now. This is basically another approach to writing sans-io protocols, with the annoying trade-off that it means even your synchronous version requires Python 3.5+. But for a stdlib module that's no big deal... -n > On Tue, 7 Mar 2017 at 02:24 Michel Desmoulin > wrote: >> >> Last week I had to download a CSV from an FTP and push any update on it >> using websocket so asyncio was a natural fit and the network part went >> well. >> >> The surprise was that the CSV part would not work as expected. Usually I >> read csv doing: >> >> import csv >> >> file_like_object = csv_crawler.get_file() >> for row in csv.DictReader(file_like_object) >> >> But it didn't work because file_like_object.read() was a coroutine which >> the csv module doesn't handle. >> >> So I had to do: >> >> import csv >> import io >> >> raw_bytes = await stream.read(10000000) >> wrapped_bytes = io.BytesIO(raw_bytes) >> text = io.TextIOWrapper(wrapped_bytes, encoding=encoding, >> errors='replace') >> >> for i, row in enumerate(csv.DictReader(text)): >> >> Turns out I used asyncio a bit, and I now the stdlib, the io AIP, etc. >> But for somebody that doesn't, it's not very easy to figure out. Plus >> it's not as elegant as traditional Python. Not to mention it loads the >> entire CSV in memory. >> >> So I wondered if I could fix the csv module so it accept async. But the >> question arised. Where should I put it ? >> >> - Create AsyncDictReader and AsyncReader ? >> - Add inspect.iscoroutine calls widh it in the regular Readers and some >> __aiter__ and __aenter__ ? >> - add a csv.async namespace ? >> >> What API design are we recommanding for expose both sync and async >> behaviors ? >> >> >> Le 07/03/2017 ? 03:08, Guido van Rossum a ?crit : >> > On Mon, Mar 6, 2017 at 5:57 PM, Raymond Hettinger >> > > >> > wrote: >> > >> > Of course, it makes sense that anything not specific to asyncio >> > should go outside of asyncio. >> > >> > What I'm more concerned about is what the other places actually >> > are. Rather than putting async variants of everything sprinkled >> > all over the standard library, I suggest collecting them all >> > together, perhaps in a new asynctools module. >> > >> > >> > That's a tough design choice. I think neither extreme is particularly >> > attractive -- having everything in an asynctools package might also >> > bundle together thing that are entirely unrelated. 
In the extreme it >> > would be like proposing that all metaclasses should go in a new >> > "metaclasstools" package. I think we did a reasonable job with ABCs: >> > core support goes in abc.py, support for collections ABCs goes into the >> > collections package (in a submodule), and other packages and modules >> > sometimes define ABCs for their own users. >> > >> > Also, in some cases I expect we'll have to create a whole new module >> > instead of updating some ancient piece of code with newfangled async >> > variants to its outdated APIs. >> > >> > -- >> > --Guido van Rossum (python.org/~guido ) >> > >> > >> > _______________________________________________ >> > Python-Dev mailing list >> > Python-Dev at python.org >> > https://mail.python.org/mailman/listinfo/python-dev >> > Unsubscribe: >> > https://mail.python.org/mailman/options/python-dev/desmoulinmichel%40gmail.com >> > >> _______________________________________________ >> Python-Dev mailing list >> Python-Dev at python.org >> https://mail.python.org/mailman/listinfo/python-dev >> Unsubscribe: >> https://mail.python.org/mailman/options/python-dev/brett%40python.org > > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/njs%40pobox.com > -- Nathaniel J. Smith -- https://vorpus.org From nad at python.org Tue Mar 7 20:59:06 2017 From: nad at python.org (Ned Deily) Date: Tue, 7 Mar 2017 20:59:06 -0500 Subject: [Python-Dev] 3.6.1 release status and plans Message-ID: An update on the 3.6.1 release: As you probably noticed, 3.6.1 release candidate 1 was made available (finally!) two days ago. Thank you for your patience as we worked though the details of producing a release using our new GitHub-based development workflow. As we've noted, it's really important for all segments of the community to try using 3.6.1rc1 to help make sure something didn't break along the way. Please report any potential problems via the bugs.python.org tracker and mark them as "release blocker". Because rc1 was delayed a week, I've moved the planned release date for 3.6.1 final back a week as well, now 2017-03-20. That gives two weeks of exposure for rc1. The plan is to, if at all possible, not ship any additional changes in the final beyond what is already in rc1 unless we discover any release-blocking critical problems in rc1. The 3.6 branch remains open for new cherry-pick PRs etc but you should expect that any PRs that are merged into the 3.6 branch since the v3.6.1rc1 tag will first be released in 3.6.2, expected before the end of June (about 3 months). Again, if something critical comes up that you feel needs to be in 3.6.1, you need to make sure the issue is marked as a "release blocker" and you should make sure I am aware of it. If any such release blockers do arise, we will discuss them and decide whether they should go into 3.6.1 and whether a second release candidate is needed. Thanks for your support! --Ned -- Ned Deily nad at python.org -- [] From ncoghlan at gmail.com Wed Mar 8 00:42:40 2017 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 8 Mar 2017 15:42:40 +1000 Subject: [Python-Dev] API design: where to add async variants of existing stdlib APIs? 
In-Reply-To: <58BEF8AB.50400@stoneleaf.us> References: <6d7d77b5-c008-c94a-348c-34e41a82c97e@gmail.com> <38F293AD-288A-4CEB-9D2E-5463E83F88C9@gmail.com> <62b6e453-b23a-f767-e630-5a6682f3cc72@gmail.com> <58BEF8AB.50400@stoneleaf.us> Message-ID: On 8 March 2017 at 04:15, Ethan Furman wrote: > On 03/07/2017 09:41 AM, Brett Cannon wrote: > > I don't think a common practice has bubbled up yet for when there's both >> synchronous and asynchronous versions of an API >> (closest I have seen is appending an "a" to the async version but that >> just looks like a spelling mistake to me most of >> the time). This is why the question of whether separate modules are a >> better idea is coming up. >> > > I'm undoubtedly going to show my ignorance with this question, but is it > feasible to have both sync and async support in the same object? > As Jelle says, it depends on the API. For contextlib, we've already decided that 'asynccontextmanager' and 'AsyncExitStack' are going to be parallel APIs, as even though they *could* be the same object, they're much easier to document if they're separate, and you get a form of "verb agreement" at both definition time and at runtime that lets us be confident of the developer's intent. For simpler APIs like "closing" though, I'm leaning more towards the "just make it work everywhere" approach, where the async protocol methods use "await obj.aclose()" if the latter is defined, and a synchronous "obj.close()" otherwise. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Wed Mar 8 02:57:33 2017 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 8 Mar 2017 17:57:33 +1000 Subject: [Python-Dev] 3.6.1 release status and plans In-Reply-To: References: Message-ID: On 8 March 2017 at 11:59, Ned Deily wrote: > An update on the 3.6.1 release: As you probably noticed, 3.6.1 release > candidate 1 was made available (finally!) two days ago. Thank you for your > patience as we worked though the details of producing a release using our > new GitHub-based development workflow. As we've noted, it's really > important for all segments of the community to try using 3.6.1rc1 to help > make sure something didn't break along the way. Please report any > potential problems via the bugs.python.org tracker and mark them as > "release blocker". > > Because rc1 was delayed a week, I've moved the planned release date for > 3.6.1 final back a week as well, now 2017-03-20. That gives two weeks of > exposure for rc1. The plan is to, if at all possible, not ship any > additional changes in the final beyond what is already in rc1 unless we > discover any release-blocking critical problems in rc1. The 3.6 branch > remains open for new cherry-pick PRs etc but you should expect that any PRs > that are merged into the 3.6 branch since the v3.6.1rc1 tag will first be > released in 3.6.2, expected before the end of June (about 3 months). > And if anyone notices some oddities with sys.path initialisation, we're aware of them and are looking into it: http://bugs.python.org/issue29723 (it currently appears to be a case where a fix that worked as intended in Windows is unexpectedly leaving the parent directory of the given directory or archive on sys.path when running directories and zip archives on other platforms). Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From phd at phdru.name Wed Mar 8 03:33:11 2017 From: phd at phdru.name (Oleg Broytman) Date: Wed, 8 Mar 2017 09:33:11 +0100 Subject: [Python-Dev] Can I revoke PEP 103 (info about git)? Message-ID: <20170308083311.GA8562@phdru.name> Hello! When I was writing PEP 103 I wanted to help to start using git. There were a few proponents and a few opponents: people expressed concerns that the PEP is too generic and isn't really related to Python development so I promised to revoke the PEP after the switch to git and Github. Now I think is the time. I hope revocation of the PEP wouldn't cause any problem? I'm gonna publish it at wiki.p.o. Oleg. -- Oleg Broytman http://phdru.name/ phd at phdru.name Programmers don't die, they just GOSUB without RETURN. From ncoghlan at gmail.com Wed Mar 8 07:35:24 2017 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 8 Mar 2017 22:35:24 +1000 Subject: [Python-Dev] PEP 538: Coercing the legacy C locale to a UTF-8 based locale In-Reply-To: References: Message-ID: On 5 March 2017 at 17:50, Nick Coghlan wrote: > Hi folks, > > Late last year I started working on a change to the CPython CLI (*not* the > shared library) to get it to coerce the legacy C locale to something based > on UTF-8 when a suitable locale is available. > > After a couple of rounds of iteration on linux-sig and python-ideas, I'm > now bringing it to python-dev as a concrete proposal for Python 3.7. > In terms of resolving this PEP, if Guido doesn't feel inclined to wade into the intricacies of legacy C locale handling, Barry has indicated he'd be happy to act as BDFL-Delegate :) Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Wed Mar 8 07:38:08 2017 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 8 Mar 2017 22:38:08 +1000 Subject: [Python-Dev] Can I revoke PEP 103 (info about git)? In-Reply-To: <20170308083311.GA8562@phdru.name> References: <20170308083311.GA8562@phdru.name> Message-ID: On 8 March 2017 at 18:33, Oleg Broytman wrote: > Hello! When I was writing PEP 103 I wanted to help to start using git. > There were a few proponents and a few opponents: people expressed > concerns that the PEP is too generic and isn't really related to Python > development so I promised to revoke the PEP after the switch to git and > Github. > Now I think is the time. I hope revocation of the PEP wouldn't cause > any problem? I'm gonna publish it at wiki.p.o. > Withdrawing the PEP is just a matter of submitting a PR to change the state to Withdrawn, so it doesn't actually break any links. It's helpful to add a short "PEP Withdrawal" section to say why it's withdrawn though, and you'd be able to link to the wiki.python.org page from there. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia -------------- next part -------------- An HTML attachment was scrubbed... URL: From phd at phdru.name Wed Mar 8 08:12:24 2017 From: phd at phdru.name (Oleg Broytman) Date: Wed, 8 Mar 2017 14:12:24 +0100 Subject: [Python-Dev] Can I revoke PEP 103 (info about git)? In-Reply-To: References: <20170308083311.GA8562@phdru.name> Message-ID: <20170308131224.GA22397@phdru.name> On Wed, Mar 08, 2017 at 10:38:08PM +1000, Nick Coghlan wrote: > On 8 March 2017 at 18:33, Oleg Broytman wrote: > > > Hello! When I was writing PEP 103 I wanted to help to start using git. 
> > There were a few proponents and a few opponents: people expressed > > concerns that the PEP is too generic and isn't really related to Python > > development so I promised to revoke the PEP after the switch to git and > > Github. > > Now I think is the time. I hope revocation of the PEP wouldn't cause > > any problem? I'm gonna publish it at wiki.p.o. > > Withdrawing the PEP is just a matter of submitting a PR to change the state > to Withdrawn, so it doesn't actually break any links. It's helpful to add a > short "PEP Withdrawal" section to say why it's withdrawn though, and you'd > be able to link to the wiki.python.org page from there. Thanks! So the plan for me is to get editor rights for wiki, publish the text there and submit a PR. > Cheers, > Nick. > > -- > Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia Oleg. -- Oleg Broytman http://phdru.name/ phd at phdru.name Programmers don't die, they just GOSUB without RETURN. From barry at python.org Wed Mar 8 09:50:06 2017 From: barry at python.org (Barry Warsaw) Date: Wed, 8 Mar 2017 09:50:06 -0500 Subject: [Python-Dev] Can I revoke PEP 103 (info about git)? In-Reply-To: References: <20170308083311.GA8562@phdru.name> Message-ID: <20170308095006.3ebe4229@subdivisions.wooz.org> On Mar 08, 2017, at 10:38 PM, Nick Coghlan wrote: >Withdrawing the PEP is just a matter of submitting a PR to change the state >to Withdrawn, so it doesn't actually break any links. It's helpful to add a >short "PEP Withdrawal" section to say why it's withdrawn though, and you'd >be able to link to the wiki.python.org page from there. We don't have a great Status for obsolete informational PEPs, so Withdrawn is about as good as it gets. I've heard little parrots ask whether the whole PEP can just be deleted, implying the number could be reused in the future. I *really* don't want for that to ever be possible. Maybe change the status to "This is an Ex-PEP" or "This PEP is pining for the fiords". It's also okay to remove much of the content and just leave a placeholder. The historical record would of course always be available in the vcs. Cheers, -Barry From phd at phdru.name Wed Mar 8 10:30:41 2017 From: phd at phdru.name (Oleg Broytman) Date: Wed, 8 Mar 2017 16:30:41 +0100 Subject: [Python-Dev] Can I revoke PEP 103 (info about git)? In-Reply-To: <20170308095006.3ebe4229@subdivisions.wooz.org> References: <20170308083311.GA8562@phdru.name> <20170308095006.3ebe4229@subdivisions.wooz.org> Message-ID: <20170308153041.GA6842@phdru.name> On Wed, Mar 08, 2017 at 09:50:06AM -0500, Barry Warsaw wrote: > On Mar 08, 2017, at 10:38 PM, Nick Coghlan wrote: > > >Withdrawing the PEP is just a matter of submitting a PR to change the state > >to Withdrawn, so it doesn't actually break any links. It's helpful to add a > >short "PEP Withdrawal" section to say why it's withdrawn though, and you'd > >be able to link to the wiki.python.org page from there. > > We don't have a great Status for obsolete informational PEPs, so Withdrawn is > about as good as it gets. I've heard little parrots ask whether the whole PEP > can just be deleted, implying the number could be reused in the future. I > *really* don't want for that to ever be possible. > > Maybe change the status to "This is an Ex-PEP" or "This PEP is pining for the > fiords". > > It's also okay to remove much of the content and just leave a placeholder. > The historical record would of course always be available in the vcs. Thanks! That's what I've planned to do in case we don't remove PEPs. > Cheers, > -Barry Oleg. 
-- Oleg Broytman http://phdru.name/ phd at phdru.name Programmers don't die, they just GOSUB without RETURN. From y_ravi00 at yahoo.com Wed Mar 8 07:38:14 2017 From: y_ravi00 at yahoo.com (ravi y) Date: Wed, 08 Mar 2017 12:38:14 -0000 Subject: [Python-Dev] Python Design issue with print() function References: <107835727.890521.1485620476244.ref@mail.yahoo.com> Message-ID: <107835727.890521.1485620476244@mail.yahoo.com> Hi Python Developers, print() function has a slight design issue, when user gives start and end positions of character array.Issue: >>> str_ary="abcdef" >>> print(str_ary[1]) b >>> print(str_ary[4]) e >>> print(str_ary[1:4]) bcd >>>? In the above scenario, user is expecting that output of print function will be bcde (not bcd). Analysis: I kind of figured out what could be the issue.? To get the string slice, "between" (or equivalent) was used. i.e. ?str_ary array position >=1 and < 4 Solution:? ?User experience will be better if the code is updated to get last character.? ?i.e str_ary array position >=1 and <= 4 Note:?To keep the code as backward compatibility, you may come up with different name like printf()? ThanksRavi Yarlagadda ?? -------------- next part -------------- An HTML attachment was scrubbed... URL: From chris.barker at noaa.gov Wed Mar 8 12:18:44 2017 From: chris.barker at noaa.gov (Chris Barker) Date: Wed, 8 Mar 2017 09:18:44 -0800 Subject: [Python-Dev] Python Design issue with print() function In-Reply-To: <107835727.890521.1485620476244@mail.yahoo.com> References: <107835727.890521.1485620476244.ref@mail.yahoo.com> <107835727.890521.1485620476244@mail.yahoo.com> Message-ID: This is a list for python interpreter development, not new ideas -- that list is python-ideas. However, sorry to be blunt, but this post shows great ignorance of Python -- please study up more in the future before posting suggestions on any list. Specifics: 1) this has nothing to do with the print function -- it is simply printing what you are asking it to print. 2) This does have to do with how slicing is done in Python, and that is very well justified and is not going to change. Don't be discouraged, though -- keep learning about Python, but while you are, be sure to ask questions on python-tutor and the like before proposing changes! -CHB On Sat, Jan 28, 2017 at 8:21 AM, ravi y via Python-Dev < python-dev at python.org> wrote: > Hi Python Developers, > > print() function has a slight design issue, when user gives start and end > positions of character array. > Issue: > >>> str_ary="abcdef" > >>> print(str_ary[1]) > b > >>> print(str_ary[4]) > e > >>> print(str_ary[1:4]) > bcd > >>> > > In the above scenario, user is expecting that output of print function > will be bcde (not bcd). > > Analysis: > I kind of figured out what could be the issue. > To get the string slice, "between" (or equivalent) was used. > i.e. str_ary array position >=1 and < 4 > Solution: > User experience will be better if the code is updated to get last > character. > i.e str_ary array position >=1 and <= 4 > > Note: > To keep the code as backward compatibility, you may come up with > different name like printf() > > > Thanks > Ravi Yarlagadda > > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/ > chris.barker%40noaa.gov > > -- Christopher Barker, Ph.D. 
Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov -------------- next part -------------- An HTML attachment was scrubbed... URL: From steve at pearwood.info Wed Mar 8 15:53:04 2017 From: steve at pearwood.info (Steven D'Aprano) Date: Thu, 9 Mar 2017 07:53:04 +1100 Subject: [Python-Dev] Can I revoke PEP 103 (info about git)? In-Reply-To: <20170308153041.GA6842@phdru.name> References: <20170308083311.GA8562@phdru.name> <20170308095006.3ebe4229@subdivisions.wooz.org> <20170308153041.GA6842@phdru.name> Message-ID: <20170308205303.GP5689@ando.pearwood.info> On Wed, Mar 08, 2017 at 04:30:41PM +0100, Oleg Broytman wrote: > On Wed, Mar 08, 2017 at 09:50:06AM -0500, Barry Warsaw wrote: > > It's also okay to remove much of the content and just leave a placeholder. > > The historical record would of course always be available in the vcs. > > Thanks! That's what I've planned to do in case we don't remove PEPs. Why remove the content? In fact, since its just an informational PEP, why withdraw it? Some people find it too generic and not enough about Python -- okay. So what? Is PEP 103 actively harmful? -- Steve From phd at phdru.name Wed Mar 8 16:07:54 2017 From: phd at phdru.name (Oleg Broytman) Date: Wed, 8 Mar 2017 22:07:54 +0100 Subject: [Python-Dev] Can I revoke PEP 103 (info about git)? In-Reply-To: <20170308205303.GP5689@ando.pearwood.info> References: <20170308083311.GA8562@phdru.name> <20170308095006.3ebe4229@subdivisions.wooz.org> <20170308153041.GA6842@phdru.name> <20170308205303.GP5689@ando.pearwood.info> Message-ID: <20170308210754.GA24728@phdru.name> On Thu, Mar 09, 2017 at 07:53:04AM +1100, Steven D'Aprano wrote: > On Wed, Mar 08, 2017 at 04:30:41PM +0100, Oleg Broytman wrote: > > On Wed, Mar 08, 2017 at 09:50:06AM -0500, Barry Warsaw wrote: > > > > It's also okay to remove much of the content and just leave a placeholder. > > > The historical record would of course always be available in the vcs. > > > > Thanks! That's what I've planned to do in case we don't remove PEPs. > > Why remove the content? > > In fact, since its just an informational PEP, why withdraw it? Some > people find it too generic and not enough about Python -- okay. So what? > > Is PEP 103 actively harmful? Certainly not! > -- > Steve Oleg. -- Oleg Broytman http://phdru.name/ phd at phdru.name Programmers don't die, they just GOSUB without RETURN. From guido at python.org Wed Mar 8 16:58:29 2017 From: guido at python.org (Guido van Rossum) Date: Wed, 8 Mar 2017 13:58:29 -0800 Subject: [Python-Dev] PEP 538: Coercing the legacy C locale to a UTF-8 based locale In-Reply-To: References: Message-ID: On Wed, Mar 8, 2017 at 4:35 AM, Nick Coghlan wrote: > > On 5 March 2017 at 17:50, Nick Coghlan wrote: > >> Late last year I started working on a change to the CPython CLI (*not* >> the shared library) to get it to coerce the legacy C locale to something >> based on UTF-8 when a suitable locale is available. >> >> After a couple of rounds of iteration on linux-sig and python-ideas, I'm >> now bringing it to python-dev as a concrete proposal for Python 3.7. >> > > In terms of resolving this PEP, if Guido doesn't feel inclined to wade > into the intricacies of legacy C locale handling, Barry has indicated he'd > be happy to act as BDFL-Delegate :) > Hi Nick and Barry, I'd very much appreciate if you two could resolve this without involving me. Godspeed! 
-- --Guido van Rossum (python.org/~guido) -------------- next part -------------- An HTML attachment was scrubbed... URL: From tjreedy at udel.edu Wed Mar 8 18:18:51 2017 From: tjreedy at udel.edu (Terry Reedy) Date: Wed, 8 Mar 2017 18:18:51 -0500 Subject: [Python-Dev] Can I revoke PEP 103 (info about git)? In-Reply-To: <20170308210754.GA24728@phdru.name> References: <20170308083311.GA8562@phdru.name> <20170308095006.3ebe4229@subdivisions.wooz.org> <20170308153041.GA6842@phdru.name> <20170308205303.GP5689@ando.pearwood.info> <20170308210754.GA24728@phdru.name> Message-ID: On 3/8/2017 4:07 PM, Oleg Broytman wrote: > On Thu, Mar 09, 2017 at 07:53:04AM +1100, Steven D'Aprano wrote: >> On Wed, Mar 08, 2017 at 04:30:41PM +0100, Oleg Broytman wrote: >>> On Wed, Mar 08, 2017 at 09:50:06AM -0500, Barry Warsaw wrote: >> >>>> It's also okay to remove much of the content and just leave a placeholder. >>>> The historical record would of course always be available in the vcs. >>> >>> Thanks! That's what I've planned to do in case we don't remove PEPs. >> >> Why remove the content? >> >> In fact, since its just an informational PEP, why withdraw it? Some >> people find it too generic and not enough about Python -- okay. So what? >> >> Is PEP 103 actively harmful? > > Certainly not! I recommend adding a note to the top that the info, which correct, is somewhat obsolescent (or whatever) with the new workflow. We have PEPs which are not 'wrong' in that they have been replaced by later PEPs, but we do not delete them, either in whole or in part. -- Terry Jan Reedy From phd at phdru.name Wed Mar 8 18:27:19 2017 From: phd at phdru.name (Oleg Broytman) Date: Thu, 9 Mar 2017 00:27:19 +0100 Subject: [Python-Dev] Can I revoke PEP 103 (info about git)? In-Reply-To: References: <20170308083311.GA8562@phdru.name> <20170308095006.3ebe4229@subdivisions.wooz.org> <20170308153041.GA6842@phdru.name> <20170308205303.GP5689@ando.pearwood.info> <20170308210754.GA24728@phdru.name> Message-ID: <20170308232719.GA1590@phdru.name> On Wed, Mar 08, 2017 at 06:18:51PM -0500, Terry Reedy wrote: > On 3/8/2017 4:07 PM, Oleg Broytman wrote: > >On Thu, Mar 09, 2017 at 07:53:04AM +1100, Steven D'Aprano wrote: > >>On Wed, Mar 08, 2017 at 04:30:41PM +0100, Oleg Broytman wrote: > >>>On Wed, Mar 08, 2017 at 09:50:06AM -0500, Barry Warsaw wrote: > >> > >>>>It's also okay to remove much of the content and just leave a placeholder. > >>>>The historical record would of course always be available in the vcs. > >>> > >>> Thanks! That's what I've planned to do in case we don't remove PEPs. > >> > >>Why remove the content? > >> > >>In fact, since its just an informational PEP, why withdraw it? Some > >>people find it too generic and not enough about Python -- okay. So what? > >> > >>Is PEP 103 actively harmful? > > > > Certainly not! > > I recommend adding a note to the top that the info, which correct, is > somewhat obsolescent (or whatever) with the new workflow. We have PEPs > which are not 'wrong' in that they have been replaced by later PEPs, but we > do not delete them, either in whole or in part. I see. Thanks! > -- > Terry Jan Reedy Oleg. -- Oleg Broytman http://phdru.name/ phd at phdru.name Programmers don't die, they just GOSUB without RETURN. From smurf at noris.de Thu Mar 9 06:04:40 2017 From: smurf at noris.de (Matthias Urlichs) Date: Thu, 9 Mar 2017 12:04:40 +0100 Subject: [Python-Dev] iscoroutinefunction vs. 
coroutines Message-ID: <20170309110438.GA15933@smurf.noris.de> Hi, Is this pattern def foo(): return bar() async def bar(): await async def async_main(): await foo() considered to be valid? The reason I'm asking is that some code out there likes to accept a might-be-a-coroutine-function argument, using def run_callback(fn): if iscoroutinefunction(fn): res = await fn() else: res = fn() instead of def run_callback(fn): res = fn() if iscoroutine(res): res = await res() The former obviously breaks when somebody combines these idioms and calls run_callback(foo) but I can't help but wonder whether the latter use might be deprecated, or even warned about, in the future and/or with non-CPython implementations. -- -- Matthias Urlichs -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: Digital signature URL: From guido at python.org Thu Mar 9 12:39:24 2017 From: guido at python.org (Guido van Rossum) Date: Thu, 9 Mar 2017 09:39:24 -0800 Subject: [Python-Dev] iscoroutinefunction vs. coroutines In-Reply-To: <20170309110438.GA15933@smurf.noris.de> References: <20170309110438.GA15933@smurf.noris.de> Message-ID: On Thu, Mar 9, 2017 at 3:04 AM, Matthias Urlichs wrote: > Is this pattern > > def foo(): > return bar() > async def bar(): > await > > async def async_main(): > await foo() > > considered to be valid? > Yes, it is valid. > The reason I'm asking is that some code out there likes to accept a > might-be-a-coroutine-function argument, using > > def run_callback(fn): > if iscoroutinefunction(fn): > res = await fn() > else: > res = fn() > > instead of > > def run_callback(fn): > res = fn() > if iscoroutine(res): > res = await res() > > The former obviously breaks when somebody combines these idioms and calls > > run_callback(foo) > > but I can't help but wonder whether the latter use might be deprecated, or > even warned about, in the future and/or with non-CPython implementations. > In general I would recommend against patterns that support either awaitables or non-awaitables. The recommended solution is for run_callback() to require an awaitable, and if you have a function that just returns the value, you should wrap it in an async def that doesn't use await. The difference between the two versions of run_callback() is merely the difference you pointed out -- iscoroutinefunction(f) is not entirely equivalent to iscoroutine(f()). If you're thinking in terms of static types (e.g. PEP 484 and mypy), in the latter version the type of `res` is problematic (it's Union[Awaitable[T], T]), but there's an easy way to rewrite it to avoid that, while still calling iscoroutine(). If there's something else you worry about with the latter please clarify. But in general I would stay far away from this kind of "do what I mean" API -- they are hard to reason about and difficult to debug. -- --Guido van Rossum (python.org/~guido) -------------- next part -------------- An HTML attachment was scrubbed... URL: From nad at python.org Thu Mar 9 18:35:55 2017 From: nad at python.org (Ned Deily) Date: Thu, 9 Mar 2017 18:35:55 -0500 Subject: [Python-Dev] Translated Python documentation In-Reply-To: References: <20170222164058.7b0333ef@fsol> <1b5001d28e8d$6c1e6570$445b3050$@hotmail.com> Message-ID: [catching up on an older thread] On Feb 27, 2017, at 05:31, Victor Stinner wrote: > 2017-02-25 19:19 GMT+01:00 Brett Cannon : >> It's getting a little hard to tease out what exactly is being asked at this >> point. 
Perhaps it's time to move the discussion over to a translation SIG >> (which probably needs to be created unless the old >> https://mail.python.org/mailman/listinfo/i18n-sig makes sense)? That way >> active translators can figure out exactly what they want to ask of >> python-dev in terms of support and we can have a more focused discussion. > > Things are already happening in the background on other lists and > other Python projects, but the problem is that the translation project > seems "blocked" for some reasons. That's why I started the thread. > > Example of a recent CPython PR, blocked: > https://github.com/python/cpython/pull/195 > "bpo-28331: fix "CPython implementation detail:" label is removed when > content is translated." opened 7 days ago by INADA Naoki (JP > translation) > > Example of a docsbuild PR: > https://github.com/python/docsbuild-scripts/pull/8 > "[WIP] Add french, japanese, and chinese", opened at 12 Dec 2016 by > Julien Palard (FR translation) > > See also Julien Palard's threads on python-ideas: no decision was > taken, so the project is blocked. > > According to this thread, there is an official GO for official > translations, so these PR should be merged, right? I don't know exactly what you mean by an "official GO" but I don't think there has been any agreement yet since there hasn't been a specific proposal yet to review. I think what *was* agreed is that, in principle, translation *sounds* like a good idea to follow up on elsewhere, i.e. on one of the existing sigs, and then come back with a specific proposal for review. Thinking about that a little more, I think the appropriate output of those discussions should be a process PEP. Then we can review the proposal properly and also have the process clearly documented for the future. -- Ned Deily nad at python.org -- [] From python at lucidity.plus.com Thu Mar 9 18:43:20 2017 From: python at lucidity.plus.com (Erik) Date: Thu, 9 Mar 2017 23:43:20 +0000 Subject: [Python-Dev] Impoverished compare ... Message-ID: <7487d6aa-345c-b3f6-8068-a5d7da4b5649@lucidity.plus.com> Hi. I'm looking at stuff proposed over on Python-Ideas, and I'd appreciate some pointers as to the basics of how C-level objects are generally compared in Python 3. The issue is related to the performance of PyObject_RichCompare. I got to the point where I was trying to work out what was the _alternative_ to RichCompare. If a comparison is not "rich", then what is it? There's a tp_richcompare slot in the type structure, but I haven't noticed anything else obvious for simple comparison (In 2.x days - which I have more experience with - __cmp__ was a thing which now seems to be gone. I understand the Python-level changes with sort(..., key=foo) but I've not looked at the underlying C implementation until now). Anyway, I followed things as far as Objects/typeobject.c and then I got bitten by a macro dialect that I don't immediately grok, so anything that spells it out a bit more plainly would be nice (I can follow the code if I need to - but a shortcut from those who know this off the top of their head would be helpful). Thanks, E. 
From victor.stinner at gmail.com Thu Mar 9 19:03:08 2017 From: victor.stinner at gmail.com (Victor Stinner) Date: Fri, 10 Mar 2017 01:03:08 +0100 Subject: [Python-Dev] Translated Python documentation In-Reply-To: References: <20170222164058.7b0333ef@fsol> <1b5001d28e8d$6c1e6570$445b3050$@hotmail.com> Message-ID: 2017-03-10 0:35 GMT+01:00 Ned Deily : > I don't know exactly what you mean by an "official GO" but I don't think there has been any agreement yet since there hasn't been a specific proposal yet to review. I think what *was* agreed is that, in principle, translation *sounds* like a good idea to follow up on elsewhere, i.e. on one of the existing sigs, and then come back with a specific proposal for review. Thinking about that a little more, I think the appropriate output of those discussions should be a process PEP. Then we can review the proposal properly and also have the process clearly documented for the future. FYI we are already working on a PEP with Julien Palard (FR) and INADA Naoki (JP). We will post it when it will be ready ;-) Victor From random832 at fastmail.com Thu Mar 9 20:41:00 2017 From: random832 at fastmail.com (Random832) Date: Thu, 09 Mar 2017 20:41:00 -0500 Subject: [Python-Dev] Impoverished compare ... In-Reply-To: <7487d6aa-345c-b3f6-8068-a5d7da4b5649@lucidity.plus.com> References: <7487d6aa-345c-b3f6-8068-a5d7da4b5649@lucidity.plus.com> Message-ID: <1489110060.1152654.906502176.6E012F29@webmail.messagingengine.com> On Thu, Mar 9, 2017, at 18:43, Erik wrote: > Hi. > > I'm looking at stuff proposed over on Python-Ideas, and I'd appreciate > some pointers as to the basics of how C-level objects are generally > compared in Python 3. > > The issue is related to the performance of PyObject_RichCompare. I got > to the point where I was trying to work out what was the _alternative_ > to RichCompare. If a comparison is not "rich", then what is it? There's > a tp_richcompare slot in the type structure, but I haven't noticed > anything else obvious for simple comparison (In 2.x days - which I have > more experience with - __cmp__ was a thing which now seems to be gone. I > understand the Python-level changes with sort(..., key=foo) but I've not > looked at the underlying C implementation until now). In 2.x the C version of __cmp__ was tp_compare (and it existed even in python 0.9.1, which had neither dunder methods nor tp_richcompare). I assume that the "rich compare" name was retained in Python 3 for the same reason that other names like PyLongObject, PyUnicodeObject were (not PyStringObject, though, presumably because they don't want people to unintentionally create a bytes-only API as a result of a lazy porting process). Maybe Python 4000 can alias it to tp_compare (PyIntObject, PyStringObject) and Python 5000 can get rid of the current names. It's "rich" because it knows which of the object-level methods (less than, greater than, etc) is being called, whereas tp_compare/__cmp__ did not. 
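For anyone following along, the dispatch that PyObject_RichCompare performs
for a single operation such as "a < b" can be modelled in pure Python roughly
as follows (a simplified model only: it leaves out the rule that lets a
subclass's reflected method run first, and the identity shortcuts used for
== and !=):

def rich_compare_lt(a, b):
    # Rough Python-level model of PyObject_RichCompare(a, b, Py_LT).
    result = type(a).__lt__(a, b)          # try the forward method first
    if result is NotImplemented:
        result = type(b).__gt__(b, a)      # then the reflected method
    if result is NotImplemented:
        # Unlike Python 2's cmp(), there is no default ordering to fall
        # back on in Python 3, so the comparison fails outright.
        raise TypeError("'<' not supported between instances of %r and %r"
                        % (type(a).__name__, type(b).__name__))
    return result

Classes that used to rely on __cmp__ typically now define __eq__ plus one
ordering method and let functools.total_ordering fill in the rest.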
From ncoghlan at gmail.com Thu Mar 9 22:54:22 2017 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 10 Mar 2017 13:54:22 +1000 Subject: [Python-Dev] PEP 538: Coercing the legacy C locale to a UTF-8 based locale In-Reply-To: References: Message-ID: On 9 March 2017 at 07:58, Guido van Rossum wrote: > On Wed, Mar 8, 2017 at 4:35 AM, Nick Coghlan wrote: > >> >> On 5 March 2017 at 17:50, Nick Coghlan wrote: >> >>> Late last year I started working on a change to the CPython CLI (*not* >>> the shared library) to get it to coerce the legacy C locale to something >>> based on UTF-8 when a suitable locale is available. >>> >>> After a couple of rounds of iteration on linux-sig and python-ideas, I'm >>> now bringing it to python-dev as a concrete proposal for Python 3.7. >>> >> >> In terms of resolving this PEP, if Guido doesn't feel inclined to wade >> into the intricacies of legacy C locale handling, Barry has indicated he'd >> be happy to act as BDFL-Delegate :) >> > > Hi Nick and Barry, I'd very much appreciate if you two could resolve this > without involving me. > OK, I've added Barry to the PEP as BDFL-Delegate: https://github.com/python/peps/commit/4c46c5710031cac03a8d1ab7639272957998a1cc Thanks for the quick response! Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Thu Mar 9 22:56:57 2017 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 10 Mar 2017 13:56:57 +1000 Subject: [Python-Dev] Can I revoke PEP 103 (info about git)? In-Reply-To: <20170308205303.GP5689@ando.pearwood.info> References: <20170308083311.GA8562@phdru.name> <20170308095006.3ebe4229@subdivisions.wooz.org> <20170308153041.GA6842@phdru.name> <20170308205303.GP5689@ando.pearwood.info> Message-ID: On 9 March 2017 at 06:53, Steven D'Aprano wrote: > On Wed, Mar 08, 2017 at 04:30:41PM +0100, Oleg Broytman wrote: > > On Wed, Mar 08, 2017 at 09:50:06AM -0500, Barry Warsaw > wrote: > > > > It's also okay to remove much of the content and just leave a > placeholder. > > > The historical record would of course always be available in the vcs. > > > > Thanks! That's what I've planned to do in case we don't remove PEPs. > > Why remove the content? > > In fact, since its just an informational PEP, why withdraw it? The PEP Index organises itself by status, so withdrawing it moves it down into the historical PEPs section, and out of the active section. (We're probably due for a PEP "spring clean" in general, but doing that is about as exciting as actual spring cleaning, so it's easy to keep putting it off) Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia -------------- next part -------------- An HTML attachment was scrubbed... URL: From yaroslav.lehenchuk at djangostars.com Fri Mar 10 07:14:06 2017 From: yaroslav.lehenchuk at djangostars.com (Yaroslav Lehenchuk) Date: Fri, 10 Mar 2017 14:14:06 +0200 Subject: [Python-Dev] Sharing our python articles with python community Message-ID: Hi! My name is Yaroslav, I am a member of geeks team - Django Stars. We just released 2 good posts and I'd like to propose you to share them with your digest readers. 1. I hope that two articles is not too much for one digest. In case that is ? we can separate those between two digests. 2. I hope that you are ok with me marking urls with utm-tags ? it's pretty important for us to monitor the website's incoming traffic and understand the visitors` behavior. 1. Docker. 
http://djangostars.com/blog/heres-how-you-start-using- docker?utm_source=dbader&utm_campaign=Pythondigest&utm_medium=post 2. django-classifier http://djangostars.com/blog/django-classifier-or-what- have-i-done?utm_source=dbader&utm_campaign=Pythondigest&utm_medium=post 2.1 Library https://github.com/django-stars/django-classifier 2.2 Two examples how to use it https://github.com/django-stars/django-classifier-profile https://github.com/django-stars/django-classifier-shop Thanks in advance and have a nice day! -- Best Regards, Yaroslav Lehenchuk Marketer at Django Stars Cell: +380730903748 Skype: yaroslav_le Email: yaroslav.lehenchuk at djangostars.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From k7hoven at gmail.com Fri Mar 10 11:44:37 2017 From: k7hoven at gmail.com (Koos Zevenhoven) Date: Fri, 10 Mar 2017 18:44:37 +0200 Subject: [Python-Dev] API design: where to add async variants of existing stdlib APIs? In-Reply-To: References: Message-ID: On Wed, Mar 1, 2017 at 7:42 AM, Nick Coghlan wrote: > Short version: > > - there are some reasonable requests for async variants of contextlib APIs > for 3.7 > - prompted by Raymond, I'm thinking it actually makes more sense to add > these in a new `asyncio.contextlib` module than it does to add them directly > to the existing module > - would anyone object strongly to my asking authors of the affected PRs to > take their changes in that direction? > Related to this, here's a post from two years ago in attempt to tackle the cause of this problem (of needing async and non-async variants) and solve it in the long term. https://mail.python.org/pipermail/python-ideas/2015-May/033267.html You can read the details in that thread, but in short, the idea is that all functionality that may have to wait for something (IO etc.) should be explicitly awaited, regardless of whether the code takes advantage of concurrency or not. This solution is an attempt to do this without enforcing a specific async framework. In the post, I made up the terms "Y end" and "L end", because I did not know what to call them. This was when the draft PEP492 was being discussed. L is the end that 'drives' the (chain of) coroutines, usually an event loop. Y is the other end, the most inner co-routine in the calling/awaiting chain that does the yields. The L and Y end together could hide the need of two variants, as explained in the above link. ?Koos > Longer version: > > There are a couple of open issues requesting async variants of some > contextlib APIs (asynccontextmanager and AsyncExitStack). I'm inclined to > accept both of them, but Raymond raised a good question regarding our > general design philosophy for these kinds of additions: would it make more > sense to put these in an "asyncio.contextlib" module than it would to add > them directly to contextlib itself? > > The main advantage I see to the idea is that if someone proposed adding an > "asyncio" dependency to contextlib, I'd say no. For the existing > asynccontextmanager PR, I even said no to adding that dependency to the > standard contextlib test suite, and instead asked that the new tests be > moved out to a separate file, so the existing tests could continue to run > even if asyncio was unavailable for some reason. 
> > While rejecting the idea of an asyncio dependency isn't a problem for > asyncontextmanager specifically (it's low level enough for it not to > matter), it's likely to be more of a concern for the AsyncExitStack API, > where the "asyncio.iscoroutinefunction" introspection API is likely to be > quite helpful, as are other APIs like `asyncio.ensure_future()`. > > So would folks be OK with my asking the author of the PR for > https://bugs.python.org/issue29679 (adding asynccontextmanager) to rewrite > the patch to add it as asyncio.contextlib.asyncontextmanager (with a > cross-reference from the synchronous contextlib docs), rather than the > current approach of adding it directly to contextlib? > > Cheers, > Nick. > > -- > Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/k7hoven%40gmail.com > -- + Koos Zevenhoven + http://twitter.com/k7hoven + From status at bugs.python.org Fri Mar 10 12:09:09 2017 From: status at bugs.python.org (Python tracker) Date: Fri, 10 Mar 2017 18:09:09 +0100 (CET) Subject: [Python-Dev] Summary of Python tracker Issues Message-ID: <20170310170909.81C215672D@psf.upfronthosting.co.za> ACTIVITY SUMMARY (2017-03-03 - 2017-03-10) Python tracker at http://bugs.python.org/ To view or respond to any of the issues listed below, click on the issue. Do NOT respond to this message. Issues counts and deltas: open 5850 ( -1) closed 35695 (+77) total 41545 (+76) Open issues with patches: 2444 Issues opened (62) ================== #27362: json.dumps to check for obj.__json__ before raising TypeError http://bugs.python.org/issue27362 reopened by serhiy.storchaka #28685: Optimizing list.sort() by performing safety checks in advance http://bugs.python.org/issue28685 reopened by serhiy.storchaka #29571: test_re is failing when local is set for `en_IN` http://bugs.python.org/issue29571 reopened by serhiy.storchaka #29715: Arparse improperly handles "-_" http://bugs.python.org/issue29715 opened by Max Rothman #29717: `loop.add_reader` and `< References: Message-ID: <20170311223641.v5jzc4rklvuy3il7@jwilk.net> This is a very bad idea. It seems to based on an assumption that the C locale is always some kind of pathology. Admittedly, it sometimes is a result of misconfiguration or a mistake. (But I don't see why it's the interpreter's job to correct such mistakes.) However, in some cases the C locale is a normal environment for system services, cron scripts, distro package builds and whatnot. It's possible to write Python programs that are locale-agnostic. It's also possible to write programs that are locale-dependent, but handle ASCII as locale encoding gracefully. Or you might want to write a program that intentionally aborts with an explanatory error message when the locale encoding doesn't have sufficient Unicode coverage. ("Errors should never pass silently" anyone?) With this proposal, none of the above seems possible to correctly implement in Python. * Nick Coghlan , 2017-03-05, 17:50: >Another common failure case is developers specifying ``LANG=C`` in order to >see otherwise translated user interface messages in English, rather than the >more narrowly scoped ``LC_MESSAGES=C``. Setting LANGUAGE=en might be better, because it doesn't affect locale encoding either, and it works even when LC_ALL is set. 
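For what it's worth, the "abort early with an explanatory message" pattern
mentioned above is only a few lines; this sketch treats an ASCII locale
encoding as the only problem case, which is an assumption on my part:

import codecs
import locale
import sys

def require_unicode_capable_locale():
    # Encoding implied by the LC_CTYPE setting the interpreter started with.
    enc = locale.getpreferredencoding(False)
    if codecs.lookup(enc).name == "ascii":
        sys.exit("this program needs a locale whose encoding can represent "
                 "arbitrary Unicode text (e.g. a UTF-8 locale); the current "
                 "locale encoding is %r" % enc)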
>Three such locales will be tried: > >* ``C.UTF-8`` (available at least in Debian, Ubuntu, and Fedora 25+, and >expected to be available by default in a future version of glibc) >* ``C.utf8`` (available at least in HP-UX) >* ``UTF-8`` (available in at least some \*BSD variants) Calling the C locale "legacy" is a bit unfair, when there's even no agreement what the name of the successor is supposed to be... NB, both "C.UTF-8" and "C.utf8" work on Fedora, thanks to glibc normalizing the encoding part. Only "C.UTF-8" works on Debian, though, for whatever reason. >For ``C.UTF-8`` and ``C.utf8``, the coercion will be implemented by actually >setting the ``LANG`` and ``LC_ALL`` environment variables to the candidate >locale name, Sounds wrong. This will override all LC_*, even if they were originally set to something different that C. >Python detected LC_CTYPE=C, LC_ALL & LANG set to C.UTF-8 (set another locale >or PYTHONCOERCECLOCALE=0 to disable this locale coercion behaviour). Comma splice. s/set/was set/ would probably make it clearer. >Python detected LC_CTYPE=C, LC_CTYPE set to UTF-8 (set another locale or >PYTHONCOERCECLOCALE=0 to disable this locale coercion behaviour). Ditto. >The second sentence providing recommendations would be conditionally compiled >based on the operating system (e.g. recommending ``LC_CTYPE=UTF-8`` on \*BSD >systems. Note that at least OpenBSD supports both "C.UTF-8" and "UTF-8" locales. >While this PEP ensures that developers that need to do so can still opt-in to >running their Python code in the legacy C locale, Yeah, no, it doesn't. It's impossible do disable coercion from Python code, because it happens to early. The best you can do is to write a wrapper script in a different language that sets PYTHONCOERCECLOCALE=0; but then you still get a spurious warning. -- Jakub Wilk From ncoghlan at gmail.com Sun Mar 12 08:57:05 2017 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sun, 12 Mar 2017 22:57:05 +1000 Subject: [Python-Dev] PEP 538: Coercing the legacy C locale to a UTF-8 based locale In-Reply-To: <20170311223641.v5jzc4rklvuy3il7@jwilk.net> References: <20170311223641.v5jzc4rklvuy3il7@jwilk.net> Message-ID: On 12 March 2017 at 08:36, Jakub Wilk wrote: > This is a very bad idea. > > It seems to based on an assumption that the C locale is always some kind > of pathology. Admittedly, it sometimes is a result of misconfiguration or a > mistake. (But I don't see why it's the interpreter's job to correct such > mistakes.) However, in some cases the C locale is a normal environment for > system services, cron scripts, distro package builds and whatnot. An environment in which Python 3's eager decoding of operating system provided values to Unicode fails. > It's possible to write Python programs that are locale-agnostic. > If a program is genuinely locale-agnostic, it will be unaffected by this PEP. > It's also possible to write programs that are locale-dependent, but handle > ASCII as locale encoding gracefully. > No, it is not generally feasible to write such programs in Python 3. That's the essence of the problem, and why the PEP deprecates support for the legacy C locale in Python 3. > Or you might want to write a program that intentionally aborts with an > explanatory error message when the locale encoding doesn't have sufficient > Unicode coverage. ("Errors should never pass silently" anyone?) > This is what click does, but it only does it because that isn't possible for click to do the right thing given Python 3's eager decoding of various values as ASCII. 
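As a concrete illustration of what that eager decoding means for non-ASCII
data, here is what I would expect from a typical Linux build (the exact
results under LANG=C are an assumption about that configuration, not
something guaranteed by the language):

import os
import sys

raw = b"caf\xc3\xa9"                 # UTF-8 bytes for "café", as received from the OS
print(sys.getfilesystemencoding())   # 'ascii' under LANG=C, 'utf-8' under a UTF-8 locale
name = os.fsdecode(raw)              # LANG=C: 'caf\udcc3\udca9' via surrogateescape
print(os.fsencode(name) == raw)      # True either way - the raw bytes round-trip
try:
    name.encode("utf-8")             # fine under a UTF-8 locale
except UnicodeEncodeError as exc:
    # This is the branch hit under LANG=C: the escaped surrogates cannot be
    # encoded again for display or transcoding without special handling.
    print("re-encoding failed:", exc)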
> With this proposal, none of the above seems possible to correctly > implement in Python. > The first case remains unchanged, the other two will need to use Python 2.7 or Tauthon. I'm fine with that. > * Nick Coghlan , 2017-03-05, 17:50: > > While this PEP ensures that developers that need to do so can still opt-in >> to running their Python code in the legacy C locale, >> > > Yeah, no, it doesn't. > > It's impossible do disable coercion from Python code, because it happens > to early. The best you can do is to write a wrapper script in a different > language that sets PYTHONCOERCECLOCALE=0; but then you still get a spurious > warning. > It's not a spurious warning, as Python 3's Unicode handling for environmental interactions genuinely doesn't work properly in the legacy C locale (unless you're genuinely promising to only ever feed it ASCII values, but that isn't a realistic guarantee to make). However, I'm also open to having that particular setting also disable the runtime warning from the shared library. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia -------------- next part -------------- An HTML attachment was scrubbed... URL: From ryanjkmurray at gmail.com Sun Mar 12 13:49:13 2017 From: ryanjkmurray at gmail.com (Ryan James Kenneth Murray) Date: Sun, 12 Mar 2017 10:49:13 -0700 Subject: [Python-Dev] Application support In-Reply-To: References: Message-ID: To whom it may concern, I was about to use Markdown to verify or indicate any changes to the files when I was directed to c python and pep, however I have reached my abilities in understanding or contributing. I will have to broden my knowledge and insight into python. That is why I am asking for help in making sure that everything is satisfactory. As you can see I am concerned about entering anymore data Sincerely yours Ryan Murray -------------- next part -------------- An HTML attachment was scrubbed... URL: From steve at pearwood.info Sun Mar 12 21:44:26 2017 From: steve at pearwood.info (Steven D'Aprano) Date: Mon, 13 Mar 2017 12:44:26 +1100 Subject: [Python-Dev] Application support In-Reply-To: References: Message-ID: <20170313014425.GU5689@ando.pearwood.info> Hello Ryan, Welcome! My response is below. On Sun, Mar 12, 2017 at 10:49:13AM -0700, Ryan James Kenneth Murray wrote: > To whom it may concern, > > I was about to use Markdown to verify or indicate any changes to the files > when I was directed to c python and pep, however I have reached my > abilities in understanding or contributing. I will have to broden my > knowledge and insight into python. That is why I am asking for help in > making sure that everything is satisfactory. As you can see I am concerned > about entering anymore data I'm afraid I cannot make head or tail of what you are talking about here. To be perfectly honest, your post sounds like something generated by a quite clever bot using a Markov chain to generate random text. It *almost* is meaningful, but not quite: there's a bunch of tech buzzwords (Markdown, C, Python, PEP) in some sentences which are grammatically correct but don't seem to mean anything. What files are you changing, what data are you talking about, and what is "everything" that needs to be satisfactory? How is this relevant to developing Python? If you are a bot, then you aren't welcome and somebody will soon remove you from the mailing list. (And I will feel silly for talking to you as if you were a person.) 
But in case you actually are a human being, I'm giving you the benefit of the doubt and allowing you the opportunity to say something that proves you are a human. Thank you. -- Steve From ncoghlan at gmail.com Sun Mar 12 22:50:22 2017 From: ncoghlan at gmail.com (Nick Coghlan) Date: Mon, 13 Mar 2017 12:50:22 +1000 Subject: [Python-Dev] PEP 538: Coercing the legacy C locale to a UTF-8 based locale In-Reply-To: References: <20170311223641.v5jzc4rklvuy3il7@jwilk.net> Message-ID: On 12 March 2017 at 22:57, Nick Coghlan wrote: > However, I'm also open to having [PYTHONCOERCECLOCALE=0] also disable the > runtime warning from the shared library. > Considering this a little further, I think this is going to be necessary in order to sensibly handle the build time "--with[out]-c-locale-warning" flag in the test suite. Currently, there are a number of tests beyond the new ones in Lib/test/test_locale_coercion.py that would need to know whether or not to expect to see a warning in subprocesses in order to correctly handle the "--without-c-locale-warning" case: https://github.com/ncoghlan/cpython/commit/78c17a7cea04aed7cd1fce8ae5afb085a544a89c If PYTHONCOERCECLOCALE=0 turned off the runtime warning as well, then the behaviour of those tests would remain independent of the build flag as long as they set the new environment variable in the child process - the warning would be disabled either at build time via "--without-c-locale-warning" or at runtime with "PYTHONCOERCECLOCALE=0". The check for the runtime C locale warning would then be added to _testembed rather than going through a normal Python subprocess, and that test would be the only one that needed to know whether or not the locale warning had been disabled at build time (which we could indicate simply by compiling the embedding part of the test differently in that case). Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia -------------- next part -------------- An HTML attachment was scrubbed... URL: From songofacandy at gmail.com Mon Mar 13 04:37:25 2017 From: songofacandy at gmail.com (INADA Naoki) Date: Mon, 13 Mar 2017 17:37:25 +0900 Subject: [Python-Dev] PEP 538: Coercing the legacy C locale to a UTF-8 based locale In-Reply-To: <20170311223641.v5jzc4rklvuy3il7@jwilk.net> References: <20170311223641.v5jzc4rklvuy3il7@jwilk.net> Message-ID: > It seems to based on an assumption that the C locale is always some kind of > pathology. Admittedly, it sometimes is a result of misconfiguration or a > mistake. (But I don't see why it's the interpreter's job to correct such > mistakes.) However, in some cases the C locale is a normal environment for > system services, cron scripts, distro package builds and whatnot. I think "C locale + use UTF-8 for stdio + fs" is common setup, especially for servers. It's not mistake or misconfiguration. Perl, Ruby, Rust, Node.JS and Go can use UTF-8 without any pain on C locale. And current Python is painful for such cases. So I strongly +1 for PEP 540 (UTF-8 mode). On the other hand, PEP 538 is for for locale-dependent libraries (like curses) and subprocesses. I agree C locale is misconfiguration if user want to use UTF-8 in locale-dependent libraries. And I agree current PEP 538 seems carrying it a bit too far. But locale coercing works nice on platforms like android. So how about simplified version of PEP 538? Just adding configure option for locale coercing which is disabled by default. No envvar options and no warnings. 
Regards, From ncoghlan at gmail.com Mon Mar 13 07:01:33 2017 From: ncoghlan at gmail.com (Nick Coghlan) Date: Mon, 13 Mar 2017 21:01:33 +1000 Subject: [Python-Dev] PEP 538: Coercing the legacy C locale to a UTF-8 based locale In-Reply-To: References: <20170311223641.v5jzc4rklvuy3il7@jwilk.net> Message-ID: On 13 March 2017 at 18:37, INADA Naoki wrote: > But locale coercing works nice on platforms like android. > So how about simplified version of PEP 538? Just adding configure > option for locale coercing > which is disabled by default. No envvar options and no warnings. > That doesn't solve my original Linux distro problem, where locale misconfiguration problems show up as "Python 2 works, Python 3 doesn't work" behaviour and bug reports. The problem is that where Python 2 was largely locale-independent by default (just passing raw bytes through) such that you'd only get immediate encoding or decoding errors if you had a Unicode literal or a decode() call somewhere in your code and would otherwise pass data corruption problems further down the chain, Python 3 is locale-*aware* by default, and eagerly decodes: - command line parameters - environment variables - responses from operating system API calls - standard stream input - file contents You *can* still write locale-independent Python 3 applications, but they involve sprinkling liberal doses of "b" prefixes and suffixes and mode settings and "surrogateescape" error handler declarations in various places - you can't just run python-modernize over a pre-existing Python 2 application and expect it to behave the same way in the C locale as it did before. Once implemented, PEP 540 will partially solve the problem by introducing a locale independent UTF-8 mode, but that still leaves the inconsistency with other locale-aware components that are needing to deal with Python 3 API calls that accept or return Unicode objects where Python 2 allowed the use of 8-bit strings. Folks that really want the old behaviour back will be able to set PYTHONCOERCECLOCALE=0 (as that no longer emits any warnings), or else build their own CPython from source using `--without-c-locale-coercion` and ``--without-c-locale-warning`. However, they'll also get the explicit support notification from PEP 11 that any Unicode handling bugs they run into in those configurations are entirely their own problem - we won't fix them, because we consider those configurations unsupportable in the general case. That puts the additional self-support burden on folks doing something unusual (i.e. insisting on running an ASCII-only environment in 2017), rather than on those with a more conventional use case (i.e. running an up to date \*nix OS using UTF-8 or another universal encoding for both local and remote interfaces). Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia -------------- next part -------------- An HTML attachment was scrubbed... URL: From songofacandy at gmail.com Mon Mar 13 08:22:32 2017 From: songofacandy at gmail.com (INADA Naoki) Date: Mon, 13 Mar 2017 21:22:32 +0900 Subject: [Python-Dev] PEP 538: Coercing the legacy C locale to a UTF-8 based locale In-Reply-To: References: <20170311223641.v5jzc4rklvuy3il7@jwilk.net> Message-ID: On Mon, Mar 13, 2017 at 8:01 PM, Nick Coghlan wrote: > On 13 March 2017 at 18:37, INADA Naoki wrote: >> >> But locale coercing works nice on platforms like android. >> So how about simplified version of PEP 538? Just adding configure >> option for locale coercing >> which is disabled by default. 
No envvar options and no warnings. > > > That doesn't solve my original Linux distro problem, where locale > misconfiguration problems show up as "Python 2 works, Python 3 doesn't work" > behaviour and bug reports. Sorry, I meant "PEP 540 + Simplified PEP 538 (coercing by configure option)". distros can enable the configure option, off course. > > The problem is that where Python 2 was largely locale-independent by default > (just passing raw bytes through) such that you'd only get immediate encoding > or decoding errors if you had a Unicode literal or a decode() call somewhere > in your code and would otherwise pass data corruption problems further down > the chain, Python 3 is locale-*aware* by default, and eagerly decodes: > > - command line parameters > - environment variables > - responses from operating system API calls > - standard stream input > - file contents > > You *can* still write locale-independent Python 3 applications, but they > involve sprinkling liberal doses of "b" prefixes and suffixes and mode > settings and "surrogateescape" error handler declarations in various places > - you can't just run python-modernize over a pre-existing Python 2 > application and expect it to behave the same way in the C locale as it did > before. > > Once implemented, PEP 540 will partially solve the problem by introducing a > locale independent UTF-8 mode, but that still leaves the inconsistency with > other locale-aware components that are needing to deal with Python 3 API > calls that accept or return Unicode objects where Python 2 allowed the use > of 8-bit strings. I feel problems PEP 538 solves, but PEP 540 doesn't solve are relatively small compared with complexity introduced PEP 538. As my understanding, PEP 538 solves problems only when: * python executable is used. (GUI applications linking Python for plugin is not affected) * One of C.UTF-8, C.utf8 or UTF8 is accepted for LC_CTYPE. * The "locale aware components" uses something other than ASCII or UTF-8 on C locale, but uses UTF-8 on UTF-8 locale. Can't we reduce options from 3 (2 configure, 1 envvar) when PEP 540 is accepted too? > > Folks that really want the old behaviour back will be able to set > PYTHONCOERCECLOCALE=0 (as that no longer emits any warnings), or else build > their own CPython from source using `--without-c-locale-coercion` and > ``--without-c-locale-warning`. However, they'll also get the explicit > support notification from PEP 11 that any Unicode handling bugs they run > into in those configurations are entirely their own problem - we won't fix > them, because we consider those configurations unsupportable in the general > case. > > That puts the additional self-support burden on folks doing something > unusual (i.e. insisting on running an ASCII-only environment in 2017), > rather than on those with a more conventional use case (i.e. running an up > to date \*nix OS using UTF-8 or another universal encoding for both local > and remote interfaces). > > Cheers, > Nick. > > -- > Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From random832 at fastmail.com Mon Mar 13 09:31:19 2017 From: random832 at fastmail.com (Random832) Date: Mon, 13 Mar 2017 09:31:19 -0400 Subject: [Python-Dev] PEP 538: Coercing the legacy C locale to a UTF-8 based locale In-Reply-To: References: <20170311223641.v5jzc4rklvuy3il7@jwilk.net> Message-ID: <1489411879.1845573.909524576.5F46A560@webmail.messagingengine.com> On Mon, Mar 13, 2017, at 04:37, INADA Naoki wrote: > But locale coercing works nice on platforms like android. 
> So how about simplified version of PEP 538? Just adding configure > option for locale coercing > which is disabled by default. No envvar options and no warnings. A configure option just kicks the decision to packagers - either no-one uses it (and thus it solves nothing) or people do use it (and any problems it causes won't be mitigated at all) From songofacandy at gmail.com Mon Mar 13 10:00:11 2017 From: songofacandy at gmail.com (INADA Naoki) Date: Mon, 13 Mar 2017 23:00:11 +0900 Subject: [Python-Dev] PEP 538: Coercing the legacy C locale to a UTF-8 based locale In-Reply-To: <1489411879.1845573.909524576.5F46A560@webmail.messagingengine.com> References: <20170311223641.v5jzc4rklvuy3il7@jwilk.net> <1489411879.1845573.909524576.5F46A560@webmail.messagingengine.com> Message-ID: On Mon, Mar 13, 2017 at 10:31 PM, Random832 wrote: > On Mon, Mar 13, 2017, at 04:37, INADA Naoki wrote: >> But locale coercing works nice on platforms like android. >> So how about simplified version of PEP 538? Just adding configure >> option for locale coercing >> which is disabled by default. No envvar options and no warnings. > > A configure option just kicks the decision to packagers - either no-one > uses it (and thus it solves nothing) or people do use it (and any > problems it causes won't be mitigated at all) Yes. people who building Python understand about the platform than users in most cases. For android build, they know coercing is works well on android. For Linux distros, they know the system supports locales like C.UTF-8 or not, and there are any python-xxxx packages which may cause the problem and coercing solve it. For people who building Python themselves (in docker, pyenv, etc...) They knows how they use the Python. From jaysinhp at gmail.com Mon Mar 13 07:56:44 2017 From: jaysinhp at gmail.com (Jaysinh Shukla) Date: Mon, 13 Mar 2017 17:26:44 +0530 Subject: [Python-Dev] Regarding writing tests for module tabnanny Message-ID: Respected Members, I identified the standard module 'tabnanny' is having 16.66% of code coverage (Source: https://codecov.io/gh/python/cpython/src/master/Lib/tabnanny.py). I am interested to write tests for this module. Before starting, I would like to get help from any core developer on this. 1. Is community expecting to have tests for this module? 2. Any module specific guidelines? I am waiting for green signal from any core developer. Thanks! From brett at python.org Mon Mar 13 12:41:20 2017 From: brett at python.org (Brett Cannon) Date: Mon, 13 Mar 2017 16:41:20 +0000 Subject: [Python-Dev] Regarding writing tests for module tabnanny In-Reply-To: References: Message-ID: These questions are best asked on the core-mentorship mailing list, Jaysinh, but to quickly answer your question: 1. Yes, tests would be appreciated. 2. Nothing from me On Mon, 13 Mar 2017 at 08:20 Jaysinh Shukla wrote: > Respected Members, > > I identified the standard module 'tabnanny' is having 16.66% of > code coverage (Source: > https://codecov.io/gh/python/cpython/src/master/Lib/tabnanny.py). I am > interested to write tests for this module. Before starting, I would like > to get help from any core developer on this. > > > 1. Is community expecting to have tests for this module? > 2. Any module specific guidelines? > > I am waiting for green signal from any core developer. Thanks! 
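To make the question above concrete, a first test file for tabnanny could
look something like this sketch (the helper name and the sample sources are
placeholders I made up; real tests would also want to cover tabnanny.check()
on directories and the command line interface):

import os
import tempfile
import unittest
from contextlib import redirect_stdout
from io import StringIO

import tabnanny


class TabNannyCheckTests(unittest.TestCase):

    def _make_source_file(self, source):
        fd, path = tempfile.mkstemp(suffix=".py")
        with os.fdopen(fd, "w") as f:
            f.write(source)
        self.addCleanup(os.unlink, path)
        return path

    def test_clean_file_is_silent(self):
        path = self._make_source_file("if True:\n    pass\n")
        out = StringIO()
        with redirect_stdout(out):
            tabnanny.check(path)          # problems are reported on stdout
        self.assertEqual(out.getvalue(), "")

    def test_ambiguous_indentation_is_reported(self):
        # A tab-indented line followed by a space-indented line at the same
        # block level is ambiguous, so tabnanny should complain about it.
        path = self._make_source_file("if True:\n\tpass\n        pass\n")
        out = StringIO()
        with redirect_stdout(out):
            tabnanny.check(path)
        self.assertIn(path, out.getvalue())


if __name__ == "__main__":
    unittest.main()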
> > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/brett%40python.org > -------------- next part -------------- An HTML attachment was scrubbed... URL: From vadmium+py at gmail.com Mon Mar 13 17:32:24 2017 From: vadmium+py at gmail.com (Martin Panter) Date: Mon, 13 Mar 2017 21:32:24 +0000 Subject: [Python-Dev] Regarding writing tests for module tabnanny In-Reply-To: References: Message-ID: On 13 March 2017 at 11:56, Jaysinh Shukla wrote: > Respected Members, > > I identified the standard module 'tabnanny' is having 16.66% of code > coverage (Source: > https://codecov.io/gh/python/cpython/src/master/Lib/tabnanny.py). I am > interested to write tests for this module. Before starting, I would like to > get help from any core developer on this. > > > 1. Is community expecting to have tests for this module? > 2. Any module specific guidelines? > > I am waiting for green signal from any core developer. Thanks! Try the people involved in the existing patches at and . From ncoghlan at gmail.com Tue Mar 14 10:17:09 2017 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 15 Mar 2017 00:17:09 +1000 Subject: [Python-Dev] PEP 538: Coercing the legacy C locale to a UTF-8 based locale In-Reply-To: <1489411879.1845573.909524576.5F46A560@webmail.messagingengine.com> References: <20170311223641.v5jzc4rklvuy3il7@jwilk.net> <1489411879.1845573.909524576.5F46A560@webmail.messagingengine.com> Message-ID: On 13 March 2017 at 23:31, Random832 wrote: > On Mon, Mar 13, 2017, at 04:37, INADA Naoki wrote: > > But locale coercing works nice on platforms like android. > > So how about simplified version of PEP 538? Just adding configure > > option for locale coercing > > which is disabled by default. No envvar options and no warnings. > > A configure option just kicks the decision to packagers - either no-one > uses it (and thus it solves nothing) or people do use it (and any > problems it causes won't be mitigated at all) > Distro packagers have narrower user bases and a better known set of compatibility constraints than upstream, so kicking platform integration related config decisions downstream to us(/them) is actually a pretty reasonable thing for upstream to do :) For example, while I've been iterating on the reference implementation for 3.7, Charalampos Stratakis has been iterating on the backport patch for Fedora 26, and he's found that we really need the PEP's "disable the C locale warning" config option to turn off the CLI's coercion warning in addition to the warning in the shared library, as leaving it visible breaks build processes for other packages that check that there aren't any messages being emitted to stderr (or otherwise care about the exact output from build tools that rely on the system Python 3 runtime). However, when it comes to choosing the upstream config defaults, it's important to keep in mind that one of the explicit goals of the PEP is to modify PEP 11 to *formally drop upstream support* for running Python 3 in the legacy C locale without using PEP 538, PEP 540 or a combination of the two to assume UTF-8 instead of ASCII for system interfaces. It's not that you *can't* run Python 3 in that kind of environment, and it's not that there are never any valid reasons to do so. 
It's that lots of things that you'd typically expect to work are going to misbehave (one I discovered myself yesterday is that the GNU readline problems reported in interactive mode on Android also show up when you do either "LANG=C python2" or "LANG=C python3" on traditional Linux and attempt to *edit* lines containing multi-byte characters), so you really need to know what you're doing in order to operate under those constraints. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia -------------- next part -------------- An HTML attachment was scrubbed... URL: From rosuav at gmail.com Tue Mar 14 10:22:15 2017 From: rosuav at gmail.com (Chris Angelico) Date: Wed, 15 Mar 2017 01:22:15 +1100 Subject: [Python-Dev] A big THANK YOU to the maintainers of What's New Message-ID: A bit of a thankless job, updating What's New for a bugfix release, but can be so important. Today I was trying to figure out why a Python script behaved differently on my dev system and my server, even when I used Python 3.4 on both ends - but it was 3.4.4 on one and 3.4.2 on the other. My first port of call: https://docs.python.org/3.4/whatsnew/changelog.html#python-3-4-4 Search for 'argparse'. Find this: Issue #9351: Defaults set with set_defaults on an argparse subparser are no longer ignored when also set on the parent parser. Bingo. That's where the difference came from. So, thank you to the release managers and those who volunteer to trawl the tracker and put stuff into What's New! It is very much appreciated. ChrisA From jaysinhp at gmail.com Tue Mar 14 03:30:00 2017 From: jaysinhp at gmail.com (Jaysinh Shukla) Date: Tue, 14 Mar 2017 13:00:00 +0530 Subject: [Python-Dev] Regarding writing tests for module tabnanny In-Reply-To: References: Message-ID: <665d98f0-1d7b-390c-5e94-6efa44d10c7c@gmail.com> On Monday 13 March 2017 10:11 PM, Brett Cannon wrote: > These questions are best asked on the core-mentorship mailing list, > Jaysinh, but to quickly answer your question: Thanks Brett for replaying. I will take care of this next time. From jorge.conrado at cptec.inpe.br Tue Mar 14 10:32:36 2017 From: jorge.conrado at cptec.inpe.br (jorge.conrado at cptec.inpe.br) Date: Tue, 14 Mar 2017 11:32:36 -0300 Subject: [Python-Dev] Python 2.7.13 Message-ID: <3ac095aeb02c0a6df99d698aa320d255@cptec.inpe.br> Hi, I dowloaded the Python 2.7.13 and install it as root . it installed in my directory: /usr/src/Python-2.7.13 Then I typed: python and I had: Python 2.7.13 (default, Mar 14 2017, 09:30:46) [GCC 4.9.2 20150212 (Red Hat 4.9.2-6)] on linux2 Type "help", "copyright", "credits" or "license" for more information. Then I did: import numpy Traceback (most recent call last): File "", line 1, in ImportError: No module named numpy import scpy Traceback (most recent call last): File "", line 1, in ImportError: No module named scpy import basemap Traceback (most recent call last): File "", line 1, in ImportError: No module named basemap Please, what can I do to solve these. Conrado From phd at phdru.name Tue Mar 14 11:17:04 2017 From: phd at phdru.name (Oleg Broytman) Date: Tue, 14 Mar 2017 16:17:04 +0100 Subject: [Python-Dev] Python 2.7.13 In-Reply-To: <3ac095aeb02c0a6df99d698aa320d255@cptec.inpe.br> References: <3ac095aeb02c0a6df99d698aa320d255@cptec.inpe.br> Message-ID: <20170314151704.GA2467@phdru.name> Hello. This mailing list is to work on developing Python (adding new features to Python itself and fixing bugs); if you're having problems learning, understanding or using Python, please find another forum. 
Probably python-list/comp.lang.python mailing list/news group is the best place; there are Python developers who participate in it; you may get a faster, and probably more complete, answer there. See http://www.python.org/community/ for other lists/news groups/fora. Thank you for understanding. On Tue, Mar 14, 2017 at 11:32:36AM -0300, jorge.conrado at cptec.inpe.br wrote: > I dowloaded the Python 2.7.13 and install it as root . it installed in my > directory: > > /usr/src/Python-2.7.13 > > Then I typed: > > python and I had: > > Python 2.7.13 (default, Mar 14 2017, 09:30:46) > [GCC 4.9.2 20150212 (Red Hat 4.9.2-6)] on linux2 > Type "help", "copyright", "credits" or "license" for more information. > > Then I did: > > import numpy > Traceback (most recent call last): > File "", line 1, in > ImportError: No module named numpy > > import scpy > Traceback (most recent call last): > File "", line 1, in > ImportError: No module named scpy > > import basemap > Traceback (most recent call last): > File "", line 1, in > ImportError: No module named basemap > > Please, what can I do to solve these. These modules are not in the standard library, you have to download and install them separately. I recommend you to learn what is PyPI and how to use `pip install`. > Conrado Oleg. -- Oleg Broytman http://phdru.name/ phd at phdru.name Programmers don't die, they just GOSUB without RETURN. From raulcumplido at gmail.com Tue Mar 14 11:17:30 2017 From: raulcumplido at gmail.com (=?UTF-8?Q?Ra=C3=BAl_Cumplido?=) Date: Tue, 14 Mar 2017 15:17:30 +0000 Subject: [Python-Dev] Python 2.7.13 In-Reply-To: <3ac095aeb02c0a6df99d698aa320d255@cptec.inpe.br> References: <3ac095aeb02c0a6df99d698aa320d255@cptec.inpe.br> Message-ID: Hi Jorge, This is the mailing list for the Python language development itself. It's used for discussion of features and topics on the development of Python. You can use other mailing lists for questions as: python-help at python.org --> ask for help to other people on the python community tutor at python.org --> if you are learning Kind Regards, Raul On Tue, Mar 14, 2017 at 2:32 PM, wrote: > > > Hi, > > I dowloaded the Python 2.7.13 and install it as root . it installed in my > directory: > > /usr/src/Python-2.7.13 > > Then I typed: > > python and I had: > > Python 2.7.13 (default, Mar 14 2017, 09:30:46) > [GCC 4.9.2 20150212 (Red Hat 4.9.2-6)] on linux2 > Type "help", "copyright", "credits" or "license" for more information. > > > Then I did: > > import numpy > Traceback (most recent call last): > File "", line 1, in > ImportError: No module named numpy > > > import scpy > Traceback (most recent call last): > File "", line 1, in > ImportError: No module named scpy > > > import basemap > Traceback (most recent call last): > File "", line 1, in > ImportError: No module named basemap > > > Please, what can I do to solve these. > > > Conrado > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/raulcumpl > ido%40gmail.com > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From random832 at fastmail.com Tue Mar 14 15:06:53 2017 From: random832 at fastmail.com (Random832) Date: Tue, 14 Mar 2017 15:06:53 -0400 Subject: [Python-Dev] PEP 538: Coercing the legacy C locale to a UTF-8 based locale In-Reply-To: References: <20170311223641.v5jzc4rklvuy3il7@jwilk.net> <1489411879.1845573.909524576.5F46A560@webmail.messagingengine.com> Message-ID: <1489518413.3197254.911233544.2BFDEFE4@webmail.messagingengine.com> On Tue, Mar 14, 2017, at 10:17, Nick Coghlan wrote: > It's not that you *can't* run Python 3 in that kind of environment, and > it's not that there are never any valid reasons to do so. It's that lots > of > things that you'd typically expect to work are going to misbehave (one I > discovered myself yesterday is that the GNU readline problems reported in > interactive mode on Android also show up when you do either "LANG=C > python2" or "LANG=C python3" on traditional Linux and attempt to *edit* > lines containing multi-byte characters) It occurs to me that (at least for readline... and maybe also as a general proxy for whether the rest should be done) detecting the IUTF8 terminal flag (which, properly, controls basic non-readline-based line editing such as backspace) may be worthwhile. (And maybe Readline itself should be doing this, more or less independent of Python. But that's a discussion for elsewhere) From chris.barker at noaa.gov Tue Mar 14 16:22:17 2017 From: chris.barker at noaa.gov (Chris Barker) Date: Tue, 14 Mar 2017 13:22:17 -0700 Subject: [Python-Dev] PEP 538: Coercing the legacy C locale to a UTF-8 based locale In-Reply-To: <20170311223641.v5jzc4rklvuy3il7@jwilk.net> References: <20170311223641.v5jzc4rklvuy3il7@jwilk.net> Message-ID: There was a bunch of discussion about all this a while back, in which I think these points were addressed: > However, in some cases the C locale is a normal environment for system > services, cron scripts, distro package builds and whatnot. Indeed it is. But: if you run a Python (or any) program that is expecting an ASCII-only locale, then it will work just fine with any ascii-compatible locale. -- so no problem there. On the other hand, if you run a program that is expecting a unicode-aware locale, then it might barf unexpectedly if run on an ASCII-only locale. A lot of people do in fact have these issues (which are due to mis-configuration of the host system, which is indeed not properly Python's problem). So if we do all this, then: A) mis-configured systems will magically work (sometimes) This is a Good Thing. and B) If someone runs a python program that is expecting Unicode support on a properly configured ASCII-only system, then it will mostly "just work" -- after all a lot of C APIs are simply char*, who cares what the encoding is? It would not, however, fail when a non-ASCII value is used somewhere it shouldn't be. So the question is -- is anyone counting on errors in this case? i.e., is a sysadmin thinking: "I want an ASCII-only system, so I'll set the locale, and now I can expect any program running on this system that is not ascii compatible to fail." I honestly don't know if this is common -- but I would argue that trying to run a unicode-aware program on an ASCII-only system could be considered a mis-configuration as well. Also -- many programs will just be writing bytes to the system without checking encoding anyway. So this would simply let Python3 programs behave like most others... -CHB -- Christopher Barker, Ph.D.
Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Tue Mar 14 22:13:11 2017 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 15 Mar 2017 12:13:11 +1000 Subject: [Python-Dev] PEP 538: Coercing the legacy C locale to a UTF-8 based locale In-Reply-To: References: <20170311223641.v5jzc4rklvuy3il7@jwilk.net> <1489411879.1845573.909524576.5F46A560@webmail.messagingengine.com> Message-ID: On 15 March 2017 at 00:17, Nick Coghlan wrote: > On 13 March 2017 at 23:31, Random832 wrote: > >> On Mon, Mar 13, 2017, at 04:37, INADA Naoki wrote: >> > But locale coercing works nice on platforms like android. >> > So how about simplified version of PEP 538? Just adding configure >> > option for locale coercing >> > which is disabled by default. No envvar options and no warnings. >> >> A configure option just kicks the decision to packagers - either no-one >> uses it (and thus it solves nothing) or people do use it (and any >> problems it causes won't be mitigated at all) >> > > Distro packagers have narrower user bases and a better known set of > compatibility constraints than upstream, so kicking platform integration > related config decisions downstream to us(/them) is actually a pretty > reasonable thing for upstream to do :) > > For example, while I've been iterating on the reference implementation for > 3.7, Charalampos Stratakis has been iterating on the backport patch for > Fedora 26, and he's found that we really need the PEP's "disable the C > locale warning" config option to turn off the CLI's coercion warning in > addition to the warning in the shared library, as leaving it visible breaks > build processes for other packages that check that there aren't any > messages being emitted to stderr (or otherwise care about the exact output > from build tools that rely on the system Python 3 runtime). > The build processes that broke due to the warning were judged to be a bug in autoconf rather than a problem with the warning itself: http://git.savannah.gnu.org/gitweb/?p=autoconf-archive.git;a=commit;h=883a2abd5af5c96be894d5ef7ee6e9a2b8e64307 So we're going to leave this as it is in the PEP for now (i.e. the locale coercion warning always happens unless you preconfigure a locale other than C), but keep an eye on it to see if it causes any other problems. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Tue Mar 14 22:29:17 2017 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 15 Mar 2017 12:29:17 +1000 Subject: [Python-Dev] PEP 538: Coercing the legacy C locale to a UTF-8 based locale In-Reply-To: References: <20170311223641.v5jzc4rklvuy3il7@jwilk.net> Message-ID: On 15 March 2017 at 06:22, Chris Barker wrote: > So the question nis -- is anyone counting on errors in this case? i.e., is > a sysadmin thinking: > > "I want an ASCII-only system, so I'll set the locale, and now I can expect > any program running on this system that is not ascii compatible to fail." > > I honestly don't know if this is common -- but I would argue that trying > to run a unicode-aware program on an ASCII-only system could be considered > a mis-configuration as well. 
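(For concreteness, the small check below is roughly what a stock 3.6 interpreter reports when started in that kind of environment on a glibc system; the exact strings vary by platform and libc, and the script name is illustrative.)

# Run as:  LANG=C python3 show_locale.py
import locale
import sys

print(locale.getpreferredencoding(False))  # typically 'ANSI_X3.4-1968', i.e. ASCII
print(sys.getfilesystemencoding())         # 'ascii' in the legacy C locale on 3.6
print(sys.stdout.encoding)                 # also ASCII, so printing non-ASCII text typically fails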
> >From a mainstream Linux point of view, it's not common - on systemd-managed systems, for example, the only way to get the C locale these days is to either specify it in /etc/locale.conf, or to set it specifically in the environment. Upstart was a little less reliable about that, and sysvinit was less reliable still, but the trend is definitely towards making C.UTF-8 the assumed default, rather than "C". Even glibc itself would quite like to get to a point where you only get the C locale if you explicitly ask for it: https://sourceware.org/glibc/wiki/Proposals/C.UTF-8 The main practical objection that comes up in relation to "UTF-8 everywhere" isn't to do with UTF-8 per se, but rather with the size of the collation tables needed to do "proper" sorting of Unicode code points. However, there's a neat hack in the design of UTF-8 where sorting the encoded bytes by byte value is equivalent to sorting the decoded text by the Unicode code point values, which means that "LC_COLLATE=C" sorting by byte value, and "LC_COLLATE=C.UTF-8" sorting by "Unicode code point value" give the same results. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia -------------- next part -------------- An HTML attachment was scrubbed... URL: From barry at python.org Wed Mar 15 10:30:08 2017 From: barry at python.org (Barry Warsaw) Date: Wed, 15 Mar 2017 10:30:08 -0400 Subject: [Python-Dev] PEP 538: Coercing the legacy C locale to a UTF-8 based locale In-Reply-To: References: <20170311223641.v5jzc4rklvuy3il7@jwilk.net> Message-ID: <20170315103008.37a93299@subdivisions.wooz.org> On Mar 15, 2017, at 12:29 PM, Nick Coghlan wrote: >From a mainstream Linux point of view, it's not common - on systemd-managed >systems, for example, the only way to get the C locale these days is to >either specify it in /etc/locale.conf, or to set it specifically in the >environment. I think it's still the case that some isolation environments (e.g. Debian chroots) default to bare C locales. Often it doesn't matter, but sometimes tests or other applications run inside those environments will fail in ways they don't in a normal execution environment. The answer is almost always to explicitly coerce those environments to C.UTF-8 for Linuxes that support that. -Barry From Meiling.Ge at synopsys.com Wed Mar 15 06:28:50 2017 From: Meiling.Ge at synopsys.com (Meiling Ge) Date: Wed, 15 Mar 2017 10:28:50 +0000 Subject: [Python-Dev] python(_hashlib.so) compiled with libssl.so.1.0.1e cannot work with libssl.so.0.9.8e Message-ID: <9DD6A94F511AF94DB2B699311770C0885622635D@US01wembx3.internal.synopsys.com> Hi, I just want to confirm that _hashlib.so in python references something new in libssl.so.1.0.1e(hmac related?). And if we want to work on platforms with libssl.so.0.9.8e, we should compile python with this lower version, right? Thanks. Regards, -Meiling -------------- next part -------------- An HTML attachment was scrubbed... URL: From christian at python.org Wed Mar 15 11:57:14 2017 From: christian at python.org (Christian Heimes) Date: Wed, 15 Mar 2017 16:57:14 +0100 Subject: [Python-Dev] python(_hashlib.so) compiled with libssl.so.1.0.1e cannot work with libssl.so.0.9.8e In-Reply-To: <9DD6A94F511AF94DB2B699311770C0885622635D@US01wembx3.internal.synopsys.com> References: <9DD6A94F511AF94DB2B699311770C0885622635D@US01wembx3.internal.synopsys.com> Message-ID: On 2017-03-15 11:28, Meiling Ge wrote: > Hi, > > I just want to confirm that _hashlib.so in python references something > new in libssl.so.1.0.1e(hmac related?). 
> > And if we want to work on platforms with libssl.so.0.9.8e, we should > compile python with this lower version, right? OpenSSL 0.9.8 and 1.0.1 have an incompatible ABI. You cannot use 0.9.8 builds with 1.0.1 or the other way around. You have to re-compile _hashlib.so. By the way, you should neither use 0.9.8 nor 1.0.1. Both versions are no longer supported by upstream and receive no security fixes. Some vendors (RH, Ubuntu) still maintain 1.0.1, though. Christian From ncoghlan at gmail.com Thu Mar 16 01:06:35 2017 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 16 Mar 2017 15:06:35 +1000 Subject: [Python-Dev] PEP 538: Coercing the legacy C locale to a UTF-8 based locale In-Reply-To: <20170315103008.37a93299@subdivisions.wooz.org> References: <20170311223641.v5jzc4rklvuy3il7@jwilk.net> <20170315103008.37a93299@subdivisions.wooz.org> Message-ID: On 16 March 2017 at 00:30, Barry Warsaw wrote: > On Mar 15, 2017, at 12:29 PM, Nick Coghlan wrote: > > >From a mainstream Linux point of view, it's not common - on > systemd-managed > >systems, for example, the only way to get the C locale these days is to > >either specify it in /etc/locale.conf, or to set it specifically in the > >environment. > > I think it's still the case that some isolation environments (e.g. Debian > chroots) default to bare C locales. Often it doesn't matter, but sometimes > tests or other applications run inside those environments will fail in ways > they don't in a normal execution environment. Yeah, I think mock (the Fedora/RHEL/CentOS build environment for RPMs) still defaults to a bare C locale, and Docker environments usually aren't systemd-managed in the first place (since PID 1 inside a container typically isn't an init system at all). The general trend for all of those seems to be "they don't use C.UTF-8... yet", though (even though some of them may not shift until the default changes at the level of the given distro's libc implementation). The answer is almost always to > explicitly coerce those environments to C.UTF-8 for Linuxes that support > that. > I also double checked that "LANG=C ./python -m test" still worked with the reference implementation. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia -------------- next part -------------- An HTML attachment was scrubbed... URL: From jaysinhp at gmail.com Thu Mar 16 08:24:01 2017 From: jaysinhp at gmail.com (Jaysinh Shukla) Date: Thu, 16 Mar 2017 17:54:01 +0530 Subject: [Python-Dev] we would like to share python articles with you In-Reply-To: References: Message-ID: <7e64ae6d-f00c-27bd-7f74-99a24b56211f@gmail.com> On Monday 06 March 2017 09:23 PM, Yaroslav Lehenchuk wrote: > > Hi! > > I like your resource. We in Django Stars writing a lot about > Python/Django and we would like to share our articles with other > professionals and geeks. > Here are two examples of our blog content: > http://djangostars.com/blog/continuous-integration-circleci-vs-travisci-vs-jenkins/ > > > http://djangostars.com/blog/how-to-create-and-deploy-a-telegram-bot/ > > > And we also have an account on git hub where we sharing our libraries > and open source projects. > > Tell me please, are you interested in such cooperation? I am talking > about submitting our post to your tips-digest. > > Waiting for your response. Thank you in advance. 
> > -- > Best Regards, > Yaroslav Lehenchuk > Marketer at Django Stars > > Cell: +380730903748 > Skype: yaroslav_le > Email: yaroslav.lehenchuk at djangostars.com > > > Hello, http:// planetpython.org/ is the url where community blogs on Python are collected. You can add ATOM, RSS feed url of your blog to Planet and then all your blogs will be published at Planet. Please visit https://github.com/python/planet for adding your feed url to Planet. Thanks! From status at bugs.python.org Fri Mar 17 13:09:12 2017 From: status at bugs.python.org (Python tracker) Date: Fri, 17 Mar 2017 18:09:12 +0100 (CET) Subject: [Python-Dev] Summary of Python tracker Issues Message-ID: <20170317170912.920B756A24@psf.upfronthosting.co.za> ACTIVITY SUMMARY (2017-03-10 - 2017-03-17) Python tracker at http://bugs.python.org/ To view or respond to any of the issues listed below, click on the issue. Do NOT respond to this message. Issues counts and deltas: open 5841 ( -9) closed 35752 (+57) total 41593 (+48) Open issues with patches: 2436 Issues opened (27) ================== #29791: print documentation: flush is also a keyword argument http://bugs.python.org/issue29791 opened by Lucio Ricardo Montero Valenzuela #29793: Convert some builtin types constructors to Argument Clinic http://bugs.python.org/issue29793 opened by serhiy.storchaka #29796: test_weakref hangs on AppVeyor (2.7) http://bugs.python.org/issue29796 opened by zach.ware #29798: Handle "git worktree" in "make patchcheck" http://bugs.python.org/issue29798 opened by ncoghlan #29799: Add tests for header API of 'urllib.request.Request' class http://bugs.python.org/issue29799 opened by jaysinh.shukla #29802: A possible null-pointer dereference in struct.s_unpack_interna http://bugs.python.org/issue29802 opened by artem.smotrakov #29803: Remove some redandunt ops in unicodeobject.c http://bugs.python.org/issue29803 opened by xiang.zhang #29804: test_ctypes test_pass_by_value fails on arm64 (aarch64) archit http://bugs.python.org/issue29804 opened by ishcherb #29805: Pathlib.replace cannot move file to a different drive on Windo http://bugs.python.org/issue29805 opened by Laurent.Mazuel #29807: ArgParse page in library reference rewrite http://bugs.python.org/issue29807 opened by sweavo #29810: Rename ssl.Purpose.{CLIENT,SERVER}_AUTH http://bugs.python.org/issue29810 opened by alex #29811: Avoid temporary method object in PyObject_CallMethod() and PyO http://bugs.python.org/issue29811 opened by haypo #29812: test for token.py, and consistency tests for tokenize.py http://bugs.python.org/issue29812 opened by r.david.murray #29816: Get rid of C limitation for shift count in right shift http://bugs.python.org/issue29816 opened by serhiy.storchaka #29818: Py_SetStandardStreamEncoding leads to a memory error in debug http://bugs.python.org/issue29818 opened by ncoghlan #29819: Avoid raising OverflowError in truncate() if possible http://bugs.python.org/issue29819 opened by serhiy.storchaka #29822: inspect.isabstract does not work on abstract base classes duri http://bugs.python.org/issue29822 opened by So8res #29823: mimetypes guesses XSL mimetype when passed an XML file http://bugs.python.org/issue29823 opened by Aleksey Bilogur #29824: Hostname validation in SSL match_hostname() http://bugs.python.org/issue29824 opened by ssivakorn #29828: Allow registering after-fork initializers in multiprocessing http://bugs.python.org/issue29828 opened by pitrou #29829: Documentation lacks clear warning of subprocess issue with pyt http://bugs.python.org/issue29829 opened by 
Steve Barnes #29830: pyexpat.errors doesn't have __spec__ and __loader__ set http://bugs.python.org/issue29830 opened by mjacob #29832: Don't refer to getsockaddrarg in error messages http://bugs.python.org/issue29832 opened by serhiy.storchaka #29833: Avoid raising OverflowError if possible http://bugs.python.org/issue29833 opened by serhiy.storchaka #29834: Raise ValueError rather of OverflowError in PyLong_AsUnsignedL http://bugs.python.org/issue29834 opened by serhiy.storchaka #29835: py_blake2*_new_impl produces inconsistent error messages, and http://bugs.python.org/issue29835 opened by Oren Milman #29836: Remove nturl2path from test_sundry and amend its docstring http://bugs.python.org/issue29836 opened by Jim Fasarakis-Hilliard Most recent 15 issues with no replies (15) ========================================== #29833: Avoid raising OverflowError if possible http://bugs.python.org/issue29833 #29832: Don't refer to getsockaddrarg in error messages http://bugs.python.org/issue29832 #29830: pyexpat.errors doesn't have __spec__ and __loader__ set http://bugs.python.org/issue29830 #29822: inspect.isabstract does not work on abstract base classes duri http://bugs.python.org/issue29822 #29819: Avoid raising OverflowError in truncate() if possible http://bugs.python.org/issue29819 #29818: Py_SetStandardStreamEncoding leads to a memory error in debug http://bugs.python.org/issue29818 #29803: Remove some redandunt ops in unicodeobject.c http://bugs.python.org/issue29803 #29802: A possible null-pointer dereference in struct.s_unpack_interna http://bugs.python.org/issue29802 #29799: Add tests for header API of 'urllib.request.Request' class http://bugs.python.org/issue29799 #29796: test_weakref hangs on AppVeyor (2.7) http://bugs.python.org/issue29796 #29791: print documentation: flush is also a keyword argument http://bugs.python.org/issue29791 #29780: Interpreter hang on self._epoll.poll(timeout, max_ev) http://bugs.python.org/issue29780 #29766: --with-lto still implied by --enable-optimizations in Python 2 http://bugs.python.org/issue29766 #29755: python3 gettext.lgettext sometimes returns bytes, not string http://bugs.python.org/issue29755 #29748: Argument Clinic: slice index converter http://bugs.python.org/issue29748 Most recent 15 issues waiting for review (15) ============================================= #29816: Get rid of C limitation for shift count in right shift http://bugs.python.org/issue29816 #29803: Remove some redandunt ops in unicodeobject.c http://bugs.python.org/issue29803 #29802: A possible null-pointer dereference in struct.s_unpack_interna http://bugs.python.org/issue29802 #29793: Convert some builtin types constructors to Argument Clinic http://bugs.python.org/issue29793 #29776: Modernize properties http://bugs.python.org/issue29776 #29769: pkgutil._iter_file_finder_modules should not be fooled by *.py http://bugs.python.org/issue29769 #29762: Use "raise from None" http://bugs.python.org/issue29762 #29760: tarfile chokes on reading .tar file with no entries (but does http://bugs.python.org/issue29760 #29748: Argument Clinic: slice index converter http://bugs.python.org/issue29748 #29741: BytesIO methods don't accept integer types, while StringIO cou http://bugs.python.org/issue29741 #29737: Optimize concatenating empty tuples http://bugs.python.org/issue29737 #29728: Expose TCP_NOTSENT_LOWAT http://bugs.python.org/issue29728 #29718: Fixed compile on cygwin. 
http://bugs.python.org/issue29718 #29706: IDLE needs syntax highlighting for async and await http://bugs.python.org/issue29706 #29694: race condition in pathlib mkdir with flags parents=True http://bugs.python.org/issue29694 Top 10 most discussed issues (10) ================================= #28685: Optimizing list.sort() by performing safety checks in advance http://bugs.python.org/issue28685 25 msgs #29816: Get rid of C limitation for shift count in right shift http://bugs.python.org/issue29816 10 msgs #29688: Add support for Path.absolute() http://bugs.python.org/issue29688 8 msgs #3353: make built-in tokenizer available via Python C API http://bugs.python.org/issue3353 7 msgs #15988: Inconsistency in overflow error messages of integer argument http://bugs.python.org/issue15988 6 msgs #25478: Consider adding a normalize() method to collections.Counter() http://bugs.python.org/issue25478 6 msgs #29828: Allow registering after-fork initializers in multiprocessing http://bugs.python.org/issue29828 6 msgs #29517: "Can't pickle local object" when uses functools.partial with m http://bugs.python.org/issue29517 5 msgs #29636: Specifying indent in the json.tool command http://bugs.python.org/issue29636 5 msgs #29715: Argparse improperly handles "-_" http://bugs.python.org/issue29715 5 msgs Issues closed (53) ================== #4851: xml.dom.minidom.Element.cloneNode fails with AttributeError http://bugs.python.org/issue4851 closed by berker.peksag #12284: argparse.ArgumentParser: usage example option http://bugs.python.org/issue12284 closed by martin.panter #15695: Correct __sizeof__ support for StgDict http://bugs.python.org/issue15695 closed by serhiy.storchaka #24622: tokenize.py: missing EXACT_TOKEN_TYPES http://bugs.python.org/issue24622 closed by skrah #26121: Use C99 functions in math if available http://bugs.python.org/issue26121 closed by serhiy.storchaka #27880: cPickle fails on large objects (still - 2011 and counting) http://bugs.python.org/issue27880 closed by serhiy.storchaka #28230: tarfile does not support pathlib http://bugs.python.org/issue28230 closed by berker.peksag #28667: FD_SETSIZE is unsigned on FreeBSD http://bugs.python.org/issue28667 closed by serhiy.storchaka #28739: PEP 498: docstrings as f-strings http://bugs.python.org/issue28739 closed by Mariatta #28856: %b format for bytes does not support objects that follow the b http://bugs.python.org/issue28856 closed by xiang.zhang #29026: time.time() documentation should mention UTC timezone http://bugs.python.org/issue29026 closed by haypo #29319: Embedded 3.6.0 distribution cannot run pyz files http://bugs.python.org/issue29319 closed by ncoghlan #29507: Use FASTCALL in typeobject.c call_method() to avoid temporary http://bugs.python.org/issue29507 closed by haypo #29540: Add compact=True flag to json.dump/dumps http://bugs.python.org/issue29540 closed by rhettinger #29548: Recommend PyObject_Call* APIs over PyEval_Call*() APIs http://bugs.python.org/issue29548 closed by inada.naoki #29592: abs_paths() in site.py is slow http://bugs.python.org/issue29592 closed by inada.naoki #29600: Returning an exception object from a coroutine triggers implic http://bugs.python.org/issue29600 closed by yselivanov #29656: Change "make patchcheck" to be branch aware http://bugs.python.org/issue29656 closed by ncoghlan #29722: heapq.merge docs are misleading with the "reversed" flag http://bugs.python.org/issue29722 closed by rhettinger #29723: 3.6.1rc1 adds the current directory to sys.path when running a 
http://bugs.python.org/issue29723 closed by ncoghlan #29726: test_xmlrpc raises DeprecationWarnings http://bugs.python.org/issue29726 closed by berker.peksag #29730: unoptimal calls to PyNumber_Check http://bugs.python.org/issue29730 closed by serhiy.storchaka #29735: Optimize functools.partial() for positional arguments http://bugs.python.org/issue29735 closed by haypo #29742: asyncio get_extra_info() throws exception http://bugs.python.org/issue29742 closed by yselivanov #29746: Update marshal docs to Python 3 http://bugs.python.org/issue29746 closed by serhiy.storchaka #29763: test_site failing on AppVeyor http://bugs.python.org/issue29763 closed by zach.ware #29765: 2.7.12 compile error from ssl related http://bugs.python.org/issue29765 closed by christian.heimes #29770: Executable help output (--help) at commandline is wrong for op http://bugs.python.org/issue29770 closed by xiang.zhang #29784: Erroneous link in shutil.copy description http://bugs.python.org/issue29784 closed by Mariatta #29785: Registration link sent via email by the tracker is http http://bugs.python.org/issue29785 closed by ezio.melotti #29786: asyncio.wrap_future() is not documented http://bugs.python.org/issue29786 closed by martin.panter #29787: Internal importlib frames visible when module imported by impo http://bugs.python.org/issue29787 closed by brett.cannon #29790: Optional use of /dev/random on linux http://bugs.python.org/issue29790 closed by ncoghlan #29792: "Fatal Python error: Cannot recover from stack overflow." from http://bugs.python.org/issue29792 closed by David MacIver #29794: Incorrect error message on invalid __class__ assignments http://bugs.python.org/issue29794 closed by steven.daprano #29795: Clarify how to share multiprocessing primitives http://bugs.python.org/issue29795 closed by davin #29797: Deadlock with multiprocessing.Queue() http://bugs.python.org/issue29797 closed by tim.peters #29800: functools.partial segfaults in repr when keywords attribute is http://bugs.python.org/issue29800 closed by serhiy.storchaka #29801: Spam http://bugs.python.org/issue29801 closed by zach.ware #29806: Requesting version info with lowercase -v or -vv causes an imp http://bugs.python.org/issue29806 closed by ned.deily #29808: SyslogHandler: should not raise exception in constructor if co http://bugs.python.org/issue29808 closed by vinay.sajip #29809: TypeError in traceback.print_exc - unicode does not have the b http://bugs.python.org/issue29809 closed by jason.coombs #29813: PyTuple_GetSlice does not always return a new tuple http://bugs.python.org/issue29813 closed by rhettinger #29814: parsing f-strings -- opening brace of expression gets duplicat http://bugs.python.org/issue29814 closed by serhiy.storchaka #29815: Fail at divide a negative integer number for a positive intege http://bugs.python.org/issue29815 closed by martin.panter #29817: File IO r+ read, write, read causes garbage data write. 
http://bugs.python.org/issue29817 closed by paul.moore #29820: Broken link to "GUI Programming with Python: QT Edition" book http://bugs.python.org/issue29820 closed by Mariatta #29821: importing module shutil executes file 'copy.py' http://bugs.python.org/issue29821 closed by ammar2 #29825: PyFunction_New() not validate code object http://bugs.python.org/issue29825 closed by serhiy.storchaka #29826: " don't work on Mac under IDLE http://bugs.python.org/issue29826 closed by ned.deily #29827: os.path.exists() returns False for certain file name http://bugs.python.org/issue29827 closed by eryksun #29831: os.path.exists seems can not recgnize "~" http://bugs.python.org/issue29831 closed by martin.panter #29837: python3 pycopg2 import issue on solaris 10 http://bugs.python.org/issue29837 closed by eric.smith From freddyrietdijk at fridh.nl Sat Mar 18 09:15:12 2017 From: freddyrietdijk at fridh.nl (Freddy Rietdijk) Date: Sat, 18 Mar 2017 14:15:12 +0100 Subject: [Python-Dev] Set program name through exec -a or environment variable Message-ID: Hi, I would like to know if you're open to supporting `exec -a` or an environment variable for setting `argv[0]`, and have some pointers as to where that should be implemented. On Nixpkgs we typically use wrappers to set environment variables like PATH or PYTHONPATH for individual programs. Consider a program named `prog`. We move the original program `prog` to `.prog-wrapped` and then create a wrapper `prog` that does `exec -a prog .prog-wrapped`. Unfortunately `exec -a` does not work with Python. The process is still named `.prog-wrapped` (although that's not really a problem) but worse, `sys.argv[0]` is also `.prog-wrapped`. Currently we inject some code in programs that sets `sys.argv=[0] = "prog" but this is fragile and I would prefer to get rid of this. Kind regards, Frederik -------------- next part -------------- An HTML attachment was scrubbed... URL: From rymg19 at gmail.com Sat Mar 18 11:27:58 2017 From: rymg19 at gmail.com (Ryan Gonzalez) Date: Sat, 18 Mar 2017 10:27:58 -0500 Subject: [Python-Dev] Set program name through exec -a or environment variable In-Reply-To: References: Message-ID: exec -a would seem to end up setting argv[0] on the CPython interpreter itself, which I don't think is the desired effect... -- Ryan (????) Yoko Shimomura > ryo (supercell/EGOIST) > Hiroyuki Sawano >> everyone else http://refi64.com On Mar 18, 2017 10:11 AM, "Freddy Rietdijk" wrote: > Hi, > > I would like to know if you're open to supporting `exec -a` or an > environment variable for setting `argv[0]`, and have some pointers as to > where that should be implemented. > > On Nixpkgs we typically use wrappers to set environment variables like > PATH or PYTHONPATH for individual programs. Consider a program named > `prog`. We move the original program `prog` to `.prog-wrapped` and then > create a wrapper `prog` that does `exec -a prog .prog-wrapped`. > > Unfortunately `exec -a` does not work with Python. The process is still > named `.prog-wrapped` (although that's not really a problem) but worse, > `sys.argv[0]` is also `.prog-wrapped`. Currently we inject some code in > programs that sets `sys.argv=[0] = "prog" but this is fragile and I would > prefer to get rid of this. 
> > Kind regards, > > Frederik > > > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/ > rymg19%40gmail.com > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From phd at phdru.name Sat Mar 18 11:42:46 2017 From: phd at phdru.name (Oleg Broytman) Date: Sat, 18 Mar 2017 16:42:46 +0100 Subject: [Python-Dev] Set program name through exec -a In-Reply-To: References: Message-ID: <20170318154246.GA4970@phdru.name> Hi! On Sat, Mar 18, 2017 at 02:15:12PM +0100, Freddy Rietdijk wrote: > I would like to know if you're open to supporting `exec -a` or an Not everyone here knows what `exec -a` is so let me say that it's a bashism that sets program's name. `exec prog` is interpreted as a system call `exec('prog', 'prog')` and `exec -a name prog` is interpreted as `exec('prog', 'name')`. Currently sys.argv[0] is the name of the script and it should stay that way. But it would be interesting to preserve argv[0] from C and expose it via sys in addition to sys.executable. Something like sys.original_prog_name. Then the OP can do anything application-specific -- set sys.argv[0], call setproctitle, whatever. > Kind regards, > > Frederik Oleg. -- Oleg Broytman http://phdru.name/ phd at phdru.name Programmers don't die, they just GOSUB without RETURN. From phd at phdru.name Sat Mar 18 11:43:47 2017 From: phd at phdru.name (Oleg Broytman) Date: Sat, 18 Mar 2017 16:43:47 +0100 Subject: [Python-Dev] Set program name through exec -a or environment variable In-Reply-To: References: Message-ID: <20170318154347.GB4970@phdru.name> On Sat, Mar 18, 2017 at 10:27:58AM -0500, Ryan Gonzalez wrote: > exec -a would seem to end up setting argv[0] on the CPython interpreter > itself, which I don't think is the desired effect... That's exactly what OP asked -- how to change that? > -- > Ryan (????????????) > Yoko Shimomura > ryo (supercell/EGOIST) > Hiroyuki Sawano >> everyone else > http://refi64.com > > On Mar 18, 2017 10:11 AM, "Freddy Rietdijk" wrote: > > I would like to know if you're open to supporting `exec -a` or an > > environment variable for setting `argv[0]`, and have some pointers as to > > where that should be implemented. > > > > On Nixpkgs we typically use wrappers to set environment variables like > > PATH or PYTHONPATH for individual programs. Consider a program named > > `prog`. We move the original program `prog` to `.prog-wrapped` and then > > create a wrapper `prog` that does `exec -a prog .prog-wrapped`. > > > > Unfortunately `exec -a` does not work with Python. The process is still > > named `.prog-wrapped` (although that's not really a problem) but worse, > > `sys.argv[0]` is also `.prog-wrapped`. Currently we inject some code in > > programs that sets `sys.argv=[0] = "prog" but this is fragile and I would > > prefer to get rid of this. > > > > Kind regards, > > > > Frederik Oleg. -- Oleg Broytman http://phdru.name/ phd at phdru.name Programmers don't die, they just GOSUB without RETURN. 
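For anyone who wants to see the difference being discussed here, the C-level argv can be read back on Linux via /proc. This is purely illustrative (and Linux-specific); `sys.original_prog_name` above is only a floated name and does not exist:

import sys

with open("/proc/self/cmdline", "rb") as f:
    c_argv = f.read().split(b"\0")[:-1]   # NUL-separated, with a trailing NUL

print("C argv[0]:  ", c_argv[0].decode(errors="replace"))
print("sys.argv[0]:", sys.argv[0])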
From greg.ewing at canterbury.ac.nz Sat Mar 18 19:12:07 2017 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Sun, 19 Mar 2017 12:12:07 +1300 Subject: [Python-Dev] Set program name through exec -a or environment variable In-Reply-To: <20170318154347.GB4970@phdru.name> References: <20170318154347.GB4970@phdru.name> Message-ID: <58CDBEC7.9090108@canterbury.ac.nz> Oleg Broytman wrote: > On Sat, Mar 18, 2017 at 10:27:58AM -0500, Ryan Gonzalez wrote: > >>exec -a would seem to end up setting argv[0] on the CPython interpreter >>itself, which I don't think is the desired effect... > > That's exactly what OP asked -- how to change that? Maybe python itself should have an -a option, so that python -a blarg foo.py would run foo.py with sys.argv[0] == 'blarg'. -- Greg From tritium-list at sdamon.com Sat Mar 18 22:48:40 2017 From: tritium-list at sdamon.com (tritium-list at sdamon.com) Date: Sat, 18 Mar 2017 22:48:40 -0400 Subject: [Python-Dev] Set program name through exec -a or environment variable In-Reply-To: References: Message-ID: <085501d2a05b$51d446a0$f57cd3e0$@hotmail.com> https://pypi.python.org/pypi/setproctitle From: Python-Dev [mailto:python-dev-bounces+tritium-list=sdamon.com at python.org] On Behalf Of Freddy Rietdijk Sent: Saturday, March 18, 2017 9:15 AM To: Python-Dev Subject: [Python-Dev] Set program name through exec -a or environment variable Hi, I would like to know if you're open to supporting `exec -a` or an environment variable for setting `argv[0]`, and have some pointers as to where that should be implemented. On Nixpkgs we typically use wrappers to set environment variables like PATH or PYTHONPATH for individual programs. Consider a program named `prog`. We move the original program `prog` to `.prog-wrapped` and then create a wrapper `prog` that does `exec -a prog .prog-wrapped`. Unfortunately `exec -a` does not work with Python. The process is still named `.prog-wrapped` (although that's not really a problem) but worse, `sys.argv[0]` is also `.prog-wrapped`. Currently we inject some code in programs that sets `sys.argv=[0] = "prog" but this is fragile and I would prefer to get rid of this. Kind regards, Frederik -------------- next part -------------- An HTML attachment was scrubbed... URL: From storchaka at gmail.com Sun Mar 19 01:51:56 2017 From: storchaka at gmail.com (Serhiy Storchaka) Date: Sun, 19 Mar 2017 07:51:56 +0200 Subject: [Python-Dev] Set program name through exec -a or environment variable In-Reply-To: References: Message-ID: On 18.03.17 15:15, Freddy Rietdijk wrote: > I would like to know if you're open to supporting `exec -a` or an > environment variable for setting `argv[0]`, and have some pointers as to > where that should be implemented. > > On Nixpkgs we typically use wrappers to set environment variables like > PATH or PYTHONPATH for individual programs. Consider a program named > `prog`. We move the original program `prog` to `.prog-wrapped` and then > create a wrapper `prog` that does `exec -a prog .prog-wrapped`. > > Unfortunately `exec -a` does not work with Python. The process is still > named `.prog-wrapped` (although that's not really a problem) but worse, > `sys.argv[0]` is also `.prog-wrapped`. Currently we inject some code in > programs that sets `sys.argv=[0] = "prog" but this is fragile and I > would prefer to get rid of this. You can move the original program `prog` into the subdirectory `.wrapped` and then create a wrapper `prog` that does `exec .wrapped/prog` or `exec python3 .wrapped/prog`. 
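A minimal sketch of that layout, with the wrapper itself written in Python rather than shell (the `prog`/`.wrapped` names, the PYTHONPATH value, and the assumption that `.wrapped/prog` is an executable script with its own shebang line are all illustrative):

#!/usr/bin/env python3
# bin/prog           <- this wrapper, what users invoke
# bin/.wrapped/prog  <- the original entry point, same basename
import os
import sys

here = os.path.dirname(os.path.abspath(__file__))
target = os.path.join(here, ".wrapped", "prog")

os.environ["PYTHONPATH"] = "/example/site-packages"  # whatever the wrapper must set

# The interpreter named in .wrapped/prog's shebang receives that path as its
# script argument, so sys.argv[0] in the wrapped program ends in "prog"
# without needing anything like `exec -a`.
os.execv(target, [target] + sys.argv[1:])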
From ncoghlan at gmail.com Sun Mar 19 07:30:53 2017 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sun, 19 Mar 2017 21:30:53 +1000 Subject: [Python-Dev] Set program name through exec -a In-Reply-To: <20170318154246.GA4970@phdru.name> References: <20170318154246.GA4970@phdru.name> Message-ID: On 19 March 2017 at 01:42, Oleg Broytman wrote: > Hi! > > On Sat, Mar 18, 2017 at 02:15:12PM +0100, Freddy Rietdijk < > freddyrietdijk at fridh.nl> wrote: > > I would like to know if you're open to supporting `exec -a` or an > > Not everyone here knows what `exec -a` is so let me say that it's a > bashism that sets program's name. `exec prog` is interpreted as a system > call `exec('prog', 'prog')` and `exec -a name prog` is interpreted as > `exec('prog', 'name')`. > > Currently sys.argv[0] is the name of the script and it should stay > that way. But it would be interesting to preserve argv[0] from C and > expose it via sys in addition to sys.executable. Something like > sys.original_prog_name. Then the OP can do anything application-specific > -- set sys.argv[0], call setproctitle, whatever. > There are a lot of other ways that the C level argv contents can differ from what's published in "sys.argv" (especially when things are run with -m, -c, or by executing a zipfile or directory rather than a Python script directly). https://bugs.python.org/issue14208 proposes offering an attribute like "sys._raw_argv" to get the full details of how Python was invoked. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia -------------- next part -------------- An HTML attachment was scrubbed... URL: From storchaka at gmail.com Mon Mar 20 07:26:34 2017 From: storchaka at gmail.com (Serhiy Storchaka) Date: Mon, 20 Mar 2017 13:26:34 +0200 Subject: [Python-Dev] Py_SIZE vs PyXXX_GET_SIZE Message-ID: What is the preferable way of getting the size of tuple, list, bytes, bytearray: Py_SIZE or PyTuple_GET_SIZE, PyList_GET_SIZE, PyBytes_GET_SIZE, PyByteArray_GET_SIZE? Are macros for concrete types more preferable or they are outdated? On one hand concrete type macros are longer than Py_SIZE, and since concrete type macros are defined not for all PyVarObject types we need to use Py_SIZE for them in any case (for example for PyLongObject and PyTypeObject). On other hand we can add asserts for checking that concrete type macros are used with correct types. When I wrote a patch that replaces Py_SIZE with concrete type macros I found two cases of misusing Py_SIZE with dict object: one in _json.c (already fixed in 3023ebb43f7607584c3e123aff56e867cb04a418) and other in dictobject.c (still not fixed). If prefer using concrete type macros this would unlikely happen. From levkivskyi at gmail.com Mon Mar 20 08:00:06 2017 From: levkivskyi at gmail.com (Ivan Levkivskyi) Date: Mon, 20 Mar 2017 13:00:06 +0100 Subject: [Python-Dev] PEP 544: Protocols Message-ID: Hi all, PEP 484 specifies semantics for type hints. These type hints are used by various tools, including static type checkers. However, PEP 484 only specifies the semantics for nominal subtyping (subtyping based on subclassing). Here we propose a specification for semantics of structural subtyping (static duck typing). 
Previous discussions on this PEP happened at: https://mail.python.org/pipermail/python-ideas/2015-September/thread.html#35859 https://github.com/python/typing/issues/11 https://github.com/python/peps/pull/224 -- Ivan =========================================== PEP: 544 Title: Protocols Version: $Revision$ Last-Modified: $Date$ Author: Ivan Levkivskyi , Jukka Lehtosalo < jukka.lehtosalo at iki.fi>, ?ukasz Langa Discussions-To: Python-Dev Status: Draft Type: Standards Track Content-Type: text/x-rst Created: 05-Mar-2017 Python-Version: 3.7 Abstract ======== Type hints introduced in PEP 484 can be used to specify type metadata for static type checkers and other third party tools. However, PEP 484 only specifies the semantics of *nominal* subtyping. In this PEP we specify static and runtime semantics of protocol classes that will provide a support for *structural* subtyping (static duck typing). .. _rationale: Rationale and Goals =================== Currently, PEP 484 and the ``typing`` module [typing]_ define abstract base classes for several common Python protocols such as ``Iterable`` and ``Sized``. The problem with them is that a class has to be explicitly marked to support them, which is unpythonic and unlike what one would normally do in idiomatic dynamically typed Python code. For example, this conforms to PEP 484:: from typing import Sized, Iterable, Iterator class Bucket(Sized, Iterable[int]): ... def __len__(self) -> int: ... def __iter__(self) -> Iterator[int]: ... The same problem appears with user-defined ABCs: they must be explicitly subclassed or registered. This is particularly difficult to do with library types as the type objects may be hidden deep in the implementation of the library. Also, extensive use of ABCs might impose additional runtime costs. The intention of this PEP is to solve all these problems by allowing users to write the above code without explicit base classes in the class definition, allowing ``Bucket`` to be implicitly considered a subtype of both ``Sized`` and ``Iterable[int]`` by static type checkers using structural [wiki-structural]_ subtyping:: from typing import Iterator, Iterable class Bucket: ... def __len__(self) -> int: ... def __iter__(self) -> Iterator[int]: ... def collect(items: Iterable[int]) -> int: ... result: int = collect(Bucket()) # Passes type check Note that ABCs in ``typing`` module already provide structural behavior at runtime, ``isinstance(Bucket(), Iterable)`` returns ``True``. The main goal of this proposal is to support such behavior statically. The same functionality will be provided for user-defined protocols, as specified below. The above code with a protocol class matches common Python conventions much better. It is also automatically extensible and works with additional, unrelated classes that happen to implement the required protocol. Nominal vs structural subtyping ------------------------------- Structural subtyping is natural for Python programmers since it matches the runtime semantics of duck typing: an object that has certain properties is treated independently of its actual runtime class. However, as discussed in PEP 483, both nominal and structural subtyping have their strengths and weaknesses. Therefore, in this PEP we *do not propose* to replace the nominal subtyping described by PEP 484 with structural subtyping completely. Instead, protocol classes as specified in this PEP complement normal classes, and users are free to choose where to apply a particular solution. 
See section on `rejected`_ ideas at the end of this PEP for additional motivation. Non-goals --------- At runtime, protocol classes will be simple ABCs. There is no intent to provide sophisticated runtime instance and class checks against protocol classes. This would be difficult and error-prone and will contradict the logic of PEP 484. As well, following PEP 484 and PEP 526 we state that protocols are **completely optional**: * No runtime semantics will be imposed for variables or parameters annotated with a protocol class. * Any checks will be performed only by third-party type checkers and other tools. * Programmers are free to not use them even if they use type annotations. * There is no intent to make protocols non-optional in the future. Existing Approaches to Structural Subtyping =========================================== Before describing the actual specification, we review and comment on existing approaches related to structural subtyping in Python and other languages: * ``zope.interface`` [zope-interfaces]_ was one of the first widely used approaches to structural subtyping in Python. It is implemented by providing special classes to distinguish interface classes from normal classes, to mark interface attributes, and to explicitly declare implementation. For example:: from zope.interface import Interface, Attribute, implements class IEmployee(Interface): name = Attribute("Name of employee") def do(work): """Do some work""" class Employee(object): implements(IEmployee) name = 'Anonymous' def do(self, work): return work.start() Zope interfaces support various contracts and constraints for interface classes. For example:: from zope.interface import invariant def required_contact(obj): if not (obj.email or obj.phone): raise Exception("At least one contact info is required") class IPerson(Interface): name = Attribute("Name") email = Attribute("Email Address") phone = Attribute("Phone Number") invariant(required_contact) Even more detailed invariants are supported. However, Zope interfaces rely entirely on runtime validation. Such focus on runtime properties goes beyond the scope of the current proposal, and static support for invariants might be difficult to implement. However, the idea of marking an interface class with a special base class is reasonable and easy to implement both statically and at runtime. * Python abstract base classes [abstract-classes]_ are the standard library tool to provide some functionality similar to structural subtyping. The drawback of this approach is the necessity to either subclass the abstract class or register an implementation explicitly:: from abc import ABC class MyTuple(ABC): pass MyTuple.register(tuple) assert issubclass(tuple, MyTuple) assert isinstance((), MyTuple) As mentioned in the `rationale`_, we want to avoid such necessity, especially in static context. However, in a runtime context, ABCs are good candidates for protocol classes and they are already used extensively in the ``typing`` module. * Abstract classes defined in ``collections.abc`` module [collections-abc]_ are slightly more advanced since they implement a custom ``__subclasshook__()`` method that allows runtime structural checks without explicit registration:: from collections.abc import Iterable class MyIterable: def __iter__(self): return [] assert isinstance(MyIterable(), Iterable) Such behavior seems to be a perfect fit for both runtime and static behavior of protocols. As discussed in `rationale`_, we propose to add static support for such behavior. 
In addition, to allow users to achieve such runtime behavior for *user defined* protocols a special ``@runtime`` decorator will be provided, see detailed `discussion`_ below. * TypeScript [typescript]_ provides support for user defined classes and interfaces. Explicit implementation declaration is not required and structural subtyping is verified statically. For example:: interface LabeledItem { label: string; size?: int; } function printLabel(obj: LabeledValue) { console.log(obj.label); } let myObj = {size: 10, label: "Size 10 Object"}; printLabel(myObj); Note that optional interface members are supported. Also, TypeScript prohibits redundant members in implementations. While the idea of optional members looks interesting, it would complicate this proposal and it is not clear how useful it will be. Therefore it is proposed to postpone this; see `rejected`_ ideas. In general, the idea of static protocol checking without runtime implications looks reasonable, and basically this proposal follows the same line. * Go [golang]_ uses a more radical approach and makes interfaces the primary way to provide type information. Also, assignments are used to explicitly ensure implementation:: type SomeInterface interface { SomeMethod() ([]byte, error) } if _, ok := someval.(SomeInterface); ok { fmt.Printf("value implements some interface") } Both these ideas are questionable in the context of this proposal. See the section on `rejected`_ ideas. .. _specification: Specification ============= Terminology ----------- We propose to use the term *protocols* for types supporting structural subtyping. The reason is that the term *iterator protocol*, for example, is widely understood in the community, and coming up with a new term for this concept in a statically typed context would just create confusion. This has the drawback that the term *protocol* becomes overloaded with two subtly different meanings: the first is the traditional, well-known but slightly fuzzy concept of protocols such as iterator; the second is the more explicitly defined concept of protocols in statically typed code. The distinction is not important most of the time, and in other cases we propose to just add a qualifier such as *protocol classes* when referring to the static type concept. If a class includes a protocol in its MRO, the class is called an *explicit* subclass of the protocol. If a class is a structural subtype of a protocol, it is said to implement the protocol and to be compatible with a protocol. If a class is compatible with a protocol but the protocol is not included in the MRO, the class is an *implicit* subtype of the protocol. The attributes (variables and methods) of a protocol that are mandatory for other class in order to be considered a structural subtype are called protocol members. .. _definition: Defining a protocol ------------------- Protocols are defined by including a special new class ``typing.Protocol`` (an instance of ``abc.ABCMeta``) in the base classes list, preferably at the end of the list. Here is a simple example:: from typing import Protocol class SupportsClose(Protocol): def close(self) -> None: ... Now if one defines a class ``Resource`` with a ``close()`` method that has a compatible signature, it would implicitly be a subtype of ``SupportsClose``, since the structural subtyping is used for protocol types:: class Resource: ... 
        def close(self) -> None:
            self.file.close()
            self.lock.release()

Apart from a few restrictions explicitly mentioned below, protocol types can be used in every context where normal types can::

    def close_all(things: Iterable[SupportsClose]) -> None:
        for t in things:
            t.close()

    f = open('foo.txt')
    r = Resource()
    close_all([f, r])  # OK!
    close_all([1])     # Error: 'int' has no 'close' method

Note that both the user-defined class ``Resource`` and the built-in ``IO`` type (the return type of ``open()``) are considered subtypes of ``SupportsClose``, because they provide a ``close()`` method with a compatible type signature.


Protocol members
----------------

All methods defined in the protocol class body are protocol members, both normal and decorated with ``@abstractmethod``. If some or all parameters of a protocol method are not annotated, then their types are assumed to be ``Any`` (see PEP 484). Bodies of protocol methods are type checked, except for methods decorated with ``@abstractmethod`` with trivial bodies. A trivial body can contain a docstring. Example::

    from typing import Protocol
    from abc import abstractmethod

    class Example(Protocol):
        def first(self) -> int:     # This is a protocol member
            return 42

        @abstractmethod
        def second(self) -> int:    # Method without a default implementation
            """Some method."""

Note that although formally the implicit return type of a method with a trivial body is ``None``, a type checker will not warn about the above example; this convention is similar to how methods are defined in stub files. Static methods, class methods, and properties are equally allowed in protocols.

To define a protocol variable, one must use PEP 526 variable annotations in the class body. Additional attributes *only* defined in the body of a method by assignment via ``self`` are not allowed. The rationale for this is that the protocol class implementation is often not shared by subtypes, so the interface should not depend on the default implementation. Examples::

    from typing import Protocol, List

    class Template(Protocol):
        name: str        # This is a protocol member
        value: int = 0   # This one too (with default)

        def method(self) -> None:
            self.temp: List[int] = []  # Error in type checker

To distinguish between protocol class variables and protocol instance variables, the special ``ClassVar`` annotation should be used as specified by PEP 526.
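For illustration, a protocol mixing both kinds of members could look like the following minimal sketch (the class and member names here are purely illustrative, not part of the proposal)::

    from typing import ClassVar, Protocol

    class Named(Protocol):
        species: ClassVar[str]      # protocol class variable
        name: str                   # protocol instance variable

        def rename(self, name: str) -> None:
            ...

An implicit subtype would then need a class-level ``species`` attribute, an instance-level ``name`` attribute, and a compatible ``rename()`` method.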
Explicitly declaring implementation
-----------------------------------

To explicitly declare that a certain class implements the given protocols, they can be used as regular base classes. In this case a class could use default implementations of protocol members. ``typing.Sequence`` is a good example of a protocol with useful default methods. Abstract methods with trivial bodies are recognized by type checkers as having no default implementation and can't be used via ``super()`` in explicit subclasses. The default implementations cannot be used if the subtype relationship is implicit and exists only via structural subtyping -- the semantics of inheritance is not changed. Examples::

    class PColor(Protocol):
        @abstractmethod
        def draw(self) -> str:
            ...
        def complex_method(self) -> int:
            # some complex code here
            ...

    class NiceColor(PColor):
        def draw(self) -> str:
            return "deep blue"

    class BadColor(PColor):
        def draw(self) -> str:
            return super().draw()  # Error, no default implementation

    class ImplicitColor:   # Note no 'PColor' base here
        def draw(self) -> str:
            return "probably gray"
        def complex_method(self) -> int:
            # class needs to implement this
            ...

    nice: NiceColor
    another: ImplicitColor

    def represent(c: PColor) -> None:
        print(c.draw(), c.complex_method())

    represent(nice)     # OK
    represent(another)  # Also OK

Note that there is no conceptual difference between explicit and implicit subtypes, the main benefit of explicit subclassing is to get some protocol methods "for free". In addition, type checkers can statically verify that the class actually implements the protocol correctly::

    class RGB(Protocol):
        rgb: Tuple[int, int, int]

        @abstractmethod
        def intensity(self) -> int:
            return 0

    class Point(RGB):
        def __init__(self, red: int, green: int, blue: str) -> None:
            self.rgb = red, green, blue  # Error, 'blue' must be 'int'

    # Type checker might warn that 'intensity' is not defined

A class can explicitly inherit from multiple protocols and also from normal classes. In this case methods are resolved using the normal MRO and a type checker verifies that all subtyping is correct. The semantics of ``@abstractmethod`` is not changed; all abstract methods must be implemented by an explicit subclass before it can be instantiated.


Merging and extending protocols
-------------------------------

The general philosophy is that protocols are mostly like regular ABCs, but a static type checker will handle them specially. Subclassing a protocol class would not turn the subclass into a protocol unless it also has ``typing.Protocol`` as an explicit base class. Without this base, the class is "downgraded" to a regular ABC that cannot be used with structural subtyping.

A subprotocol can be defined by having *both* one or more protocols as immediate base classes and also having ``typing.Protocol`` as an immediate base class::

    from typing import Sized, Protocol

    class SizedAndCloseable(Sized, Protocol):
        def close(self) -> None:
            ...

Now the protocol ``SizedAndCloseable`` is a protocol with two methods, ``__len__`` and ``close``. If one omits ``Protocol`` in the base class list, this would be a regular (non-protocol) class that must implement ``Sized``. If ``Protocol`` is included in the base class list, all the other base classes must be protocols. A protocol can't extend a regular class.

Alternatively, one can implement ``SizedAndCloseable`` like this, assuming the existence of ``SupportsClose`` from the example in the `definition`_ section::

    from typing import Sized

    class SupportsClose(...):
        ...  # Like above

    class SizedAndCloseable(Sized, SupportsClose, Protocol):
        pass

The two definitions of ``SizedAndCloseable`` are equivalent. Subclass relationships between protocols are not meaningful when considering subtyping, since structural compatibility is the criterion, not the MRO.

Note that rules around explicit subclassing are different from regular ABCs, where abstractness is simply defined by having at least one abstract method being unimplemented. Protocol classes must be marked *explicitly*.


Generic and recursive protocols
-------------------------------

Generic protocols are important. For example, ``SupportsAbs``, ``Iterable`` and ``Iterator`` are generic protocols.
They are defined similarly to normal non-protocol generic types::

    T = TypeVar('T', covariant=True)

    class Iterable(Protocol[T]):
        @abstractmethod
        def __iter__(self) -> Iterator[T]:
            ...

Note that ``Protocol[T, S, ...]`` is allowed as a shorthand for ``Protocol, Generic[T, S, ...]``.

Recursive protocols are also supported. Forward references to the protocol class names can be given as strings as specified by PEP 484. Recursive protocols will be useful for representing self-referential data structures like trees in an abstract fashion::

    class Traversable(Protocol):
        leaves: Iterable['Traversable']


Using Protocols
===============

Subtyping relationships with other types
----------------------------------------

Protocols cannot be instantiated, so there are no values with protocol types. For variables and parameters with protocol types, subtyping relationships are subject to the following rules:

* A protocol is never a subtype of a concrete type.
* A concrete type or a protocol ``X`` is a subtype of another protocol ``P`` if and only if ``X`` implements all protocol members of ``P``. In other words, subtyping with respect to a protocol is always structural.
* Edge case: for recursive protocols, a class is considered a subtype of the protocol in situations where such a decision depends on itself. Continuing the previous example::

      class Tree(Generic[T]):
          def __init__(self, value: T, leaves: 'List[Tree[T]]') -> None:
              self.value = value
              self.leaves = leaves

      def walk(graph: Traversable) -> None:
          ...

      tree: Tree[float] = Tree(0, [])
      walk(tree)  # OK, 'Tree[float]' is a subtype of 'Traversable'

Generic protocol types follow the same rules of variance as non-protocol types. Protocol types can be used in all contexts where any other types can be used, such as in ``Union``, ``ClassVar``, type variable bounds, etc. Generic protocols follow the rules for generic abstract classes, except for using structural compatibility instead of compatibility defined by inheritance relationships.


Unions and intersections of protocols
-------------------------------------

A ``Union`` of protocol classes behaves the same way as for non-protocol classes. For example::

    from typing import Union, Optional, Protocol

    class Exitable(Protocol):
        def exit(self) -> int:
            ...

    class Quitable(Protocol):
        def quit(self) -> Optional[int]:
            ...

    def finish(task: Union[Exitable, Quitable]) -> int:
        ...

    class GoodJob:
        ...
        def quit(self) -> int:
            return 0

    finish(GoodJob())  # OK

One can use multiple inheritance to define an intersection of protocols. Example::

    from typing import Sequence, Hashable

    class HashableFloats(Sequence[float], Hashable, Protocol):
        pass

    def cached_func(args: HashableFloats) -> float:
        ...

    cached_func((1, 2, 3))  # OK, tuple is both hashable and a sequence

If this proves to be a widely used scenario, then a special intersection type construct may be added in the future as specified by PEP 483, see `rejected`_ ideas for more details.


``Type[]`` with protocols
-------------------------

Variables and parameters annotated with ``Type[Proto]`` accept only concrete (non-protocol) subtypes of ``Proto``. The main reason for this is to allow instantiation of parameters with such a type. For example::

    class Proto(Protocol):
        @abstractmethod
        def meth(self) -> int:
            ...
    class Concrete:
        def meth(self) -> int:
            return 42

    def fun(cls: Type[Proto]) -> int:
        return cls().meth()  # OK

    fun(Proto)     # Error
    fun(Concrete)  # OK

The same rule applies to variables::

    var: Type[Proto]
    var = Proto     # Error
    var = Concrete  # OK
    var().meth()    # OK

Assigning an ABC or a protocol class to a variable is allowed if it is not explicitly typed, and such an assignment creates a type alias. For normal (non-abstract) classes, the behavior of ``Type[]`` is not changed.


``NewType()`` and type aliases
------------------------------

Protocols are essentially anonymous. To emphasize this point, static type checkers might refuse protocol classes inside ``NewType()`` to avoid an illusion that a distinct type is provided::

    from typing import NewType, Protocol, Iterator

    class Id(Protocol):
        code: int
        secrets: Iterator[bytes]

    UserId = NewType('UserId', Id)  # Error, can't provide distinct type

On the contrary, type aliases are fully supported, including generic type aliases::

    from typing import TypeVar, Reversible, Iterable, Sized

    T = TypeVar('T')

    class SizedIterable(Iterable[T], Sized, Protocol):
        pass

    CompatReversible = Union[Reversible[T], SizedIterable[T]]


.. _discussion:

``@runtime`` decorator and narrowing types by ``isinstance()``
--------------------------------------------------------------

The default semantics is that ``isinstance()`` and ``issubclass()`` fail for protocol types. This is in the spirit of duck typing -- protocols basically would be used to model duck typing statically, not explicitly at runtime. However, it should be possible for protocol types to implement custom instance and class checks when this makes sense, similar to how ``Iterable`` and other ABCs in ``collections.abc`` and ``typing`` already do it, but this is limited to non-generic and unsubscripted generic protocols (``Iterable`` is statically equivalent to ``Iterable[Any]``). The ``typing`` module will define a special ``@runtime`` class decorator that provides the same semantics for class and instance checks as for ``collections.abc`` classes, essentially making them "runtime protocols"::

    from typing import runtime, Protocol

    @runtime
    class Closeable(Protocol):
        def close(self):
            ...

    assert isinstance(open('some/file'), Closeable)

Static type checkers will understand ``isinstance(x, Proto)`` and ``issubclass(C, Proto)`` for protocols defined with this decorator (as they already do for ``Iterable`` etc.). Static type checkers will narrow types after such checks by the type-erased ``Proto`` (i.e. with all variables having type ``Any`` and all methods having type ``Callable[..., Any]``). Note that ``isinstance(x, Proto[int])`` etc. will always fail in agreement with PEP 484. Examples::

    from typing import Iterable, Iterator, Sequence

    def process(items: Iterable[int]) -> None:
        if isinstance(items, Iterator):
            # 'items' has type 'Iterator[int]' here
            ...
        elif isinstance(items, Sequence[int]):
            # Error! Can't use 'isinstance()' with subscripted protocols
            ...

Note that instance checks are not 100% reliable statically; this is why this behavior is opt-in, see the section on `rejected`_ ideas for examples.


Using Protocols in Python 2.7 - 3.5
===================================

Variable annotation syntax was added in Python 3.6, so the syntax for defining protocol variables proposed in the `specification`_ section can't be used in earlier versions. To define these in earlier versions of Python one can use properties::

    class Foo(Protocol):
        @property
        def c(self) -> int:
            return 42   # Default value can be provided for property...
        @abstractproperty
        def d(self) -> int:   # ... or it can be abstract
            return 0

In Python 2.7 function type comments should be used as per PEP 484. The ``typing`` module changes proposed in this PEP will also be backported to earlier versions via the backport currently available on PyPI.


Runtime Implementation of Protocol Classes
==========================================

Implementation details
----------------------

The runtime implementation could be done in pure Python without any effects on the core interpreter and standard library except in the ``typing`` module:

* Define class ``typing.Protocol`` similar to ``typing.Generic``.
* Implement metaclass functionality to detect whether a class is a protocol or not. Add a class attribute ``__protocol__ = True`` if that is the case. Verify that a protocol class only has protocol base classes in the MRO (except for ``object``).
* Implement ``@runtime`` that adds all attributes to ``__subclasshook__()`` (an illustrative sketch is given below).
* All structural subtyping checks will be performed by static type checkers, such as ``mypy`` [mypy]_. No additional support for protocol validation will be provided at runtime.
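To make the ``@runtime`` bullet above concrete, here is a rough sketch of how such a decorator could generate a structural ``__subclasshook__()`` from the protocol members. This is for illustration only and is not the actual reference implementation; the helper ``_get_protocol_members()`` is hypothetical::

    def _get_protocol_members(proto):
        # Illustrative only: method names from the class body plus PEP 526
        # annotated attributes; a real implementation would need to be more
        # careful (inherited protocol bases, dunder filtering, etc.).
        skip = {'__dict__', '__weakref__', '__module__', '__qualname__',
                '__doc__', '__annotations__', '__abstractmethods__',
                '__protocol__'}
        members = set(getattr(proto, '__annotations__', {}))
        members.update(name for name in vars(proto)
                       if name not in skip and not name.startswith('_abc_'))
        return members

    def runtime(proto):
        # Install a __subclasshook__() that checks for the presence of all
        # protocol members, mirroring what collections.abc classes do.
        members = _get_protocol_members(proto)

        def _proto_hook(cls, other):
            if cls is not proto:
                return NotImplemented
            if all(any(member in B.__dict__ for B in other.__mro__)
                   for member in members):
                return True
            return NotImplemented

        proto.__subclasshook__ = classmethod(_proto_hook)
        return proto

Note that such a hook can only see attributes defined on classes in the MRO; attributes that are set on instances (for example in ``__init__()``) are invisible to it, which is one more reason why this behavior is opt-in.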
Changes in the typing module
----------------------------

The following classes in the ``typing`` module will be protocols:

* ``Hashable``
* ``SupportsAbs`` (and other ``Supports*`` classes)
* ``Iterable``, ``Iterator``
* ``Sized``
* ``Container``
* ``Collection``
* ``Reversible``
* ``Sequence``, ``MutableSequence``
* ``AbstractSet``, ``MutableSet``
* ``Mapping``, ``MutableMapping``
* ``ItemsView`` (and other ``*View`` classes)
* ``AsyncIterable``, ``AsyncIterator``
* ``Awaitable``
* ``Callable``
* ``ContextManager``, ``AsyncContextManager``

Most of these classes are small and conceptually simple. It is easy to see which methods these protocols implement, and to immediately recognize the corresponding runtime protocol counterpart. Practically, few changes will be needed in ``typing`` since some of these classes already behave the necessary way at runtime. Most of these will need to be updated only in the corresponding ``typeshed`` stubs [typeshed]_.

All other concrete generic classes such as ``List``, ``Set``, ``IO``, ``Deque``, etc. are sufficiently complex that it makes sense to keep them non-protocols (i.e. require code to be explicit about them). Also, it is too easy to leave some methods unimplemented by accident, and explicitly marking the subclass relationship allows type checkers to pinpoint the missing implementations.


Introspection
-------------

The existing class introspection machinery (``dir``, ``__annotations__``, etc.) can be used with protocols. In addition, all introspection tools implemented in the ``typing`` module will support protocols. Since all attributes need to be defined in the class body based on this proposal, protocol classes are even more introspection-friendly than regular classes, where attributes can be defined implicitly -- protocol attributes can't be initialized in ways that are not visible to introspection (using ``setattr()``, assignment via ``self``, etc.). Still, some things like types of attributes will not be visible at runtime in Python 3.5 and earlier, but this looks like a reasonable limitation.

There will be only limited support for ``isinstance()`` and ``issubclass()`` as discussed above (these will *always* fail with ``TypeError`` for subscripted generic protocols, since a reliable answer could not be given at runtime in this case). But together with other introspection tools this gives a reasonable perspective for runtime type checking tools.


.. _rejected:

Rejected/Postponed Ideas
========================

The ideas in this section were previously discussed in [several]_ [discussions]_ [elsewhere]_.

Make every class a protocol by default
--------------------------------------

Some languages such as Go make structural subtyping the only or the primary form of subtyping. We could achieve a similar result by making all classes protocols by default (or even always). However, we believe that it is better to require classes to be explicitly marked as protocols, for the following reasons:

* Protocols don't have some properties of regular classes. In particular, ``isinstance()``, as defined for normal classes, is based on the nominal hierarchy. Making everything a protocol by default and having ``isinstance()`` work would require changing its semantics, which won't happen.
* Protocol classes should generally not have many method implementations, as they describe an interface, not an implementation. Most classes have many method implementations, making them bad protocol classes.
* Experience suggests that many classes are not practical as protocols anyway, mainly because their interfaces are too large, complex or implementation-oriented (for example, they may include de facto private attributes and methods without a ``__`` prefix).
* Most actually useful protocols in existing Python code seem to be implicit. The ABCs in ``typing`` and ``collections.abc`` are rather an exception, but even they are recent additions to Python and most programmers do not use them yet.
* Many built-in functions only accept concrete instances of ``int`` (and subclass instances), and similarly for other built-in classes. Making ``int`` a structural type wouldn't be safe without major changes to the Python runtime, which won't happen.

Support optional protocol members
---------------------------------

We can come up with examples where it would be handy to be able to say that a method or data attribute does not need to be present in a class implementing a protocol, but if it is present, it must conform to a specific signature or type. One could use a ``hasattr()`` check to determine whether the attribute can be used on a particular instance.

Languages such as TypeScript have similar features and apparently they are pretty commonly used. The current realistic potential use cases for protocols in Python don't require these. In the interest of simplicity, we propose to not support optional methods or attributes. We can always revisit this later if there is an actual need.

Make protocols interoperable with other approaches
--------------------------------------------------

The protocols as described here are basically a minimal extension to the existing concept of ABCs. We argue that this is the way they should be understood, instead of as something that *replaces* Zope interfaces, for example. Attempting such interoperability would significantly complicate both the concept and the implementation. On the other hand, Zope interfaces are conceptually a superset of the protocols defined here, but they use an incompatible syntax to define them, because before PEP 526 there was no straightforward way to annotate attributes. In the 3.6+ world, ``zope.interface`` might potentially adopt the ``Protocol`` syntax. In this case, type checkers could be taught to recognize interfaces as protocols and make simple structural checks with respect to them.
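Purely as a speculative illustration (this is not part of the proposal), the ``IEmployee`` interface shown earlier could then be spelled as a protocol::

    from typing import Protocol

    class IEmployee(Protocol):
        name: str

        def do(self, work):
            """Do some work"""

Runtime-checked invariants, however, have no static counterpart in this proposal and would remain a ``zope.interface`` feature.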
Use assignments to check explicitly that a class implements a protocol
-----------------------------------------------------------------------

In the Go language the explicit checks for implementation are performed via dummy assignments [golang]_. Such a check is also possible with the current proposal. Example::

    class A:
        def __len__(self) -> float:
            return ...

    _: Sized = A()  # Error: A.__len__ doesn't conform to 'Sized'
                    # (Incompatible return type 'float')

This approach moves the check away from the class definition and it almost requires a comment, as otherwise the code probably would not make any sense to an average reader -- it looks like dead code. Besides, in the simplest form it requires one to construct an instance of ``A``, which could be problematic if this requires accessing or allocating some resources such as files or sockets. We could work around the latter by using a cast, for example, but then the code would be ugly. Therefore we discourage the use of this pattern.

Support ``isinstance()`` checks by default
------------------------------------------

The problem with this is that instance checks could be unreliable, except for situations where there is a common signature convention such as ``Iterable``. For example::

    class P(Protocol):
        def common_method_name(self, x: int) -> int:
            ...

    class X:
        def common_method_name(self) -> None:  # Note different signature
            ...

    def do_stuff(o: Union[P, X]) -> int:
        if isinstance(o, P):
            return o.common_method_name(1)  # oops, what if it's an X instance?

Another potentially problematic case is assignment of attributes *after* instantiation::

    class P(Protocol):
        x: int

    class C:
        def initialize(self) -> None:
            self.x = 0

    c = C()
    isinstance(c, P)  # False
    c.initialize()
    isinstance(c, P)  # True

    def f(x: Union[P, int]) -> None:
        if isinstance(x, P):
            # static type of x is P here
            ...
        else:
            # type of x is "int" here?
            print(x + 1)

    f(C())  # oops

We argue that requiring an explicit class decorator would be better, since one can then attach warnings about problems like this in the documentation. The user would be able to evaluate whether the benefits outweigh the potential for confusion for each protocol and explicitly opt in -- but the default behavior would be safer. Finally, it will be easy to make this behavior the default later if necessary, while it might be problematic to make it opt-in after being the default.

Provide a special intersection type construct
---------------------------------------------

There was an idea to allow ``Proto = All[Proto1, Proto2, ...]`` as a shorthand for::

    class Proto(Proto1, Proto2, ..., Protocol):
        pass

However, it is not yet clear how popular/useful it will be and implementing this in type checkers for non-protocol classes could be difficult. Finally, it will be very easy to add this later if needed.


References
==========

.. [typing] https://docs.python.org/3/library/typing.html
.. [wiki-structural] https://en.wikipedia.org/wiki/Structural_type_system
.. [zope-interfaces] https://zopeinterface.readthedocs.io/en/latest/
.. [abstract-classes] https://docs.python.org/3/library/abc.html
.. [collections-abc] https://docs.python.org/3/library/collections.abc.html
.. [typescript] https://www.typescriptlang.org/docs/handbook/interfaces.html
.. [golang] https://golang.org/doc/effective_go.html#interfaces_and_types
.. [typeshed] https://github.com/python/typeshed/
.. [mypy] http://github.com/python/mypy/
.. [several] https://mail.python.org/pipermail/python-ideas/2015-September/thread.html#35859
.. [discussions] https://github.com/python/typing/issues/11
..
[elsewhere] https://github.com/python/peps/pull/224 Copyright ========= This document has been placed in the public domain. .. Local Variables: mode: indented-text indent-tabs-mode: nil sentence-end-double-space: t fill-column: 70 coding: utf-8 End: -------------- next part -------------- An HTML attachment was scrubbed... URL: From brett at python.org Mon Mar 20 13:04:52 2017 From: brett at python.org (Brett Cannon) Date: Mon, 20 Mar 2017 17:04:52 +0000 Subject: [Python-Dev] Py_SIZE vs PyXXX_GET_SIZE In-Reply-To: References: Message-ID: On Mon, 20 Mar 2017 at 04:28 Serhiy Storchaka wrote: > What is the preferable way of getting the size of tuple, list, bytes, > bytearray: Py_SIZE or PyTuple_GET_SIZE, PyList_GET_SIZE, > PyBytes_GET_SIZE, PyByteArray_GET_SIZE? Are macros for concrete types > more preferable or they are outdated? > > On one hand concrete type macros are longer than Py_SIZE, and since > concrete type macros are defined not for all PyVarObject types we need > to use Py_SIZE for them in any case (for example for PyLongObject and > PyTypeObject). > > On other hand we can add asserts for checking that concrete type macros > are used with correct types. When I wrote a patch that replaces Py_SIZE > with concrete type macros I found two cases of misusing Py_SIZE with > dict object: one in _json.c (already fixed in > 3023ebb43f7607584c3e123aff56e867cb04a418) and other in dictobject.c > (still not fixed). If prefer using concrete type macros this would > unlikely happen. > Personally I have always used the concrete versions when available when it doesn't forcibly constrain the input to the function. In other words I wouldn't force a function to only take a list so I could use PyList_GET_SIZE, but if I'm constructing some internal list object or a function is defined to return a list already then I would just use the concrete versions. But I also wouldn't worry about changing uses of Py_SIZE unless I was already changing the surrounding code. I guess we could clarify this in PEP 7 if it doesn't say when to care about this and once we reach consensus on what we all prefer. -------------- next part -------------- An HTML attachment was scrubbed... URL: From oleg at redhat.com Mon Mar 20 13:28:29 2017 From: oleg at redhat.com (Oleg Nesterov) Date: Mon, 20 Mar 2017 18:28:29 +0100 Subject: [Python-Dev] __del__ is not called after creating a new reference Message-ID: <20170320172829.GA28448@redhat.com> Hello, I already tried to ask on python-list, see https://mail.python.org/pipermail/python-list/2017-March/720037.html but it seems that this list is not for technical questions. Let me resend my question to python-dev. Please tell me if I should not spam this list with newbiesh questions, and thanks in advance. --------------------------------------------------------------------------- I started to learn python a few days ago and I am trying to understand what __del__() actually does. https://docs.python.org/3/reference/datamodel.html says: object.__del__(self) ... Note that it is possible (though not recommended!) for the __del__() method to postpone destruction of the instance by creating a new reference to it. It may then be called at a later time when this new reference is deleted. 
However, this trivial test-case class C: def __del__(self): print("DEL") global X X = self C() print(X) X = 0 print(X) shows that __del__ is called only once, it is not called again after "X = 0": DEL <__main__.C object at 0x7f067695f4a8> 0 (Just in case, I verified later that this object actually goes away and its memory is freed, so the problem is not that it still has a reference). I've cloned https://github.com/python/cpython.git and everything looks clear at first glance (but let me repeat that I am very new to python): PyObject_CallFinalizerFromDealloc() calls PyObject_CallFinalizer() which finally calls "__del__" method in slot_tp_finalize(), then it notices that "X = self" creates the new reference and does: /* tp_finalize resurrected it! Make it look like the original Py_DECREF * never happened. */ refcnt = self->ob_refcnt; _Py_NewReference(self); self->ob_refcnt = refcnt; However, PyObject_CallFinalizer() also does _PyGC_SET_FINALIZED(self, 1) and that is why __del__ is not called again after "X = 0": /* tp_finalize should only be called once. */ if (PyType_IS_GC(tp) && _PyGC_FINALIZED(self)) return; The comment and the code are very explicit, so this does nt look like a bug in cpython. Probably the docs should be fixed? Or this code is actually wrong? The test-case works as documented if I remove _PyGC_SET_FINALIZED() in PyObject_CallFinalizer() or add another _PyGC_SET_FINALIZED(self, 0) into PyObject_CallFinalizerFromDealloc() after _Py_NewReference(self), but yes, yes, I understand that this is not correct and won't really help. Oleg. From oleg at redhat.com Mon Mar 20 13:30:26 2017 From: oleg at redhat.com (Oleg Nesterov) Date: Mon, 20 Mar 2017 18:30:26 +0100 Subject: [Python-Dev] why _PyGen_Finalize(gen) propagates close() to _PyGen_yf() ? Message-ID: <20170320173026.GA28483@redhat.com> Hello, Let me first clarify, I do not claim this is a bug, I am trying to learn python and now I trying to understand yield-from. This simple test-case g = (x for x in range(10)) def fg(): for x in g: yield x print(next(fg())) print(next(g)) works as expected and prints: 0 1 However, if I change fg() to use yield-from g = (x for x in range(10)) def fg(): yield from g print(next(fg())) print(next(g)) then next(g) raises StopIteration: 0 Traceback (most recent call last): File "/tmp/T.py", line 10, in print(next(g)) StopIteration because g.close() is called by destructor of the object returned by fg(). To me this looks strange and confusing. I tried to google, and found https://docs.python.org/3/whatsnew/3.3.html#pep-380 but it doesn't document this behaviour. I understand that yield-from should propagate .close(), but why _PyGen_Finalize() should send close() to the gi_yieldfrom object? I applied the patch below just to verify that I actually understand what's going on, and with this patch the 2nd test-case works as I'd expect. But since I am very new to python I'd suspect that the code is fine and I simply do not understand why it works this way. So. could someone please explain the rationale behind this behaviour? And probably update the docs should be updated? Thanks in advance. Oleg. 
--- diff --git a/Objects/genobject.c b/Objects/genobject.c index 24a1da6..d5152eb 100644 --- a/Objects/genobject.c +++ b/Objects/genobject.c @@ -6,6 +6,7 @@ #include "opcode.h" static PyObject *gen_close(PyGenObject *, PyObject *); +static PyObject *do_gen_close(PyGenObject *, PyObject *); static PyObject *async_gen_asend_new(PyAsyncGenObject *, PyObject *); static PyObject *async_gen_athrow_new(PyAsyncGenObject *, PyObject *); @@ -71,7 +72,7 @@ _PyGen_Finalize(PyObject *self) } } else { - res = gen_close(gen, NULL); + res = do_gen_close(gen, NULL); } if (res == NULL) { @@ -373,10 +374,9 @@ _PyGen_yf(PyGenObject *gen) } static PyObject * -gen_close(PyGenObject *gen, PyObject *args) +do_gen_close(PyGenObject *gen, PyObject *yf) { PyObject *retval; - PyObject *yf = _PyGen_yf(gen); int err = 0; if (yf) { @@ -407,6 +407,11 @@ gen_close(PyGenObject *gen, PyObject *args) return NULL; } +static PyObject * +gen_close(PyGenObject *gen, PyObject *args) +{ + return do_gen_close(gen, _PyGen_yf(gen)); +} PyDoc_STRVAR(throw_doc, "throw(typ[,val[,tb]]) -> raise exception in generator,\n\ From brett at python.org Mon Mar 20 14:07:32 2017 From: brett at python.org (Brett Cannon) Date: Mon, 20 Mar 2017 18:07:32 +0000 Subject: [Python-Dev] PEP 544: Protocols In-Reply-To: References: Message-ID: I'm overall very supportive of seeing something like this make it into Python to further strengthen duck typing in the language. I know I've wanted something something like this since ABCs were introduced. I personally only have one issue/clarification for the PEP. On Mon, 20 Mar 2017 at 05:02 Ivan Levkivskyi wrote: > [SNIP] > Protocol members > ---------------- > > All methods defined in the protocol class body are protocol members, both > normal and decorated with ``@abstractmethod``. If some or all parameters of > protocol method are not annotated, then their types are assumed to be > ``Any`` > (see PEP 484). Bodies of protocol methods are type checked, except for > methods > decorated with ``@abstractmethod`` with trivial bodies. A trivial body can > contain a docstring. > What is a "trivial body"? I don't know of any such definition anywhere in Python so this is too loosely defined. You also don't say what happens if the body isn't trivial. Are tools expected to raise an error? > Example:: > > from typing import Protocol > from abc import abstractmethod > > class Example(Protocol): > def first(self) -> int: # This is a protocol member > return 42 > > @abstractmethod > def second(self) -> int: # Method without a default implementation > """Some method.""" > > Note that although formally the implicit return type of a method with > a trivial body is ``None``, > This seems to suggest a trivial body is anything lacking a return statement. > type checker will not warn about above example, > such convention is similar to how methods are defined in stub files. > Static methods, class methods, and properties are equally allowed > in protocols. > Personally, I think even an abstract method should be properly typed. So in the example above, second() should either return a reasonable default value or raise NotImplementedError. My argument is "explicit is better than implicit" and you make errors when people call super() on an abstract method that doesn't return None when it doesn't make sense. I would also argue that you can't expect an abstract method to always be simple. For instance, I might define an abstract method that has horrible complexity characteristics (e.g. 
O(n**2)), but which might be acceptable in select cases. By making the method abstract you force subclasses to explicitly opt-in to using the potentially horrible implementation. -Brett > > To define a protocol variable, one must use PEP 526 variable > annotations in the class body. Additional attributes *only* defined in > the body of a method by assignment via ``self`` are not allowed. The > rationale > for this is that the protocol class implementation is often not shared by > subtypes, so the interface should not depend on the default implementation. > Examples:: > > from typing import Protocol, List > > class Template(Protocol): > name: str # This is a protocol member > value: int = 0 # This one too (with default) > > def method(self) -> None: > self.temp: List[int] = [] # Error in type checker > > To distinguish between protocol class variables and protocol instance > variables, the special ``ClassVar`` annotation should be used as specified > by PEP 526. > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From levkivskyi at gmail.com Mon Mar 20 14:19:52 2017 From: levkivskyi at gmail.com (Ivan Levkivskyi) Date: Mon, 20 Mar 2017 19:19:52 +0100 Subject: [Python-Dev] PEP 544: Protocols In-Reply-To: References: Message-ID: On 20 March 2017 at 19:07, Brett Cannon wrote: > I'm overall very supportive of seeing something like this make it into > Python to further strengthen duck typing in the language. > Thanks! > Personally, I think even an abstract method should be properly typed. > [SNIP] > or raise NotImplementedError. > Yes, I think this is a reasonable requirement. (Also assuming unconditional raise is a bottom type, raising body is properly typed). Initially I thought a type checker could warn about invalid calls to super(), but this complicates things, and indeed "explicit is better than implicit". -- Ivan -------------- next part -------------- An HTML attachment was scrubbed... URL: From random832 at fastmail.com Mon Mar 20 14:39:36 2017 From: random832 at fastmail.com (Random832) Date: Mon, 20 Mar 2017 14:39:36 -0400 Subject: [Python-Dev] PEP 544: Protocols In-Reply-To: References: Message-ID: <1490035176.1189060.917511696.617E88DC@webmail.messagingengine.com> On Mon, Mar 20, 2017, at 14:07, Brett Cannon wrote: > What is a "trivial body"? I don't know of any such definition anywhere in > Python so this is too loosely defined. You also don't say what happens if > the body isn't trivial. Are tools expected to raise an error? My assumption would be that a trivial body is any body consisting of only a docstring and/or a "pass" statement. From solipsis at pitrou.net Mon Mar 20 16:18:30 2017 From: solipsis at pitrou.net (Antoine Pitrou) Date: Mon, 20 Mar 2017 21:18:30 +0100 Subject: [Python-Dev] Py_SIZE vs PyXXX_GET_SIZE References: Message-ID: <20170320211830.68aafed4@fsol> On Mon, 20 Mar 2017 13:26:34 +0200 Serhiy Storchaka wrote: > What is the preferable way of getting the size of tuple, list, bytes, > bytearray: Py_SIZE or PyTuple_GET_SIZE, PyList_GET_SIZE, > PyBytes_GET_SIZE, PyByteArray_GET_SIZE? Are macros for concrete types > more preferable or they are outdated? +1 for using concrete macros. Py_SIZE is a low-level internal thing (e.g. it will return a negative size on negative ints). Regards Antoine. 
From solipsis at pitrou.net Mon Mar 20 16:23:59 2017 From: solipsis at pitrou.net (Antoine Pitrou) Date: Mon, 20 Mar 2017 21:23:59 +0100 Subject: [Python-Dev] __del__ is not called after creating a new reference References: <20170320172829.GA28448@redhat.com> Message-ID: <20170320212359.64c7e112@fsol> Hello Oleg, On Mon, 20 Mar 2017 18:28:29 +0100 Oleg Nesterov wrote: > I started to learn python a few days ago and I am trying to understand what > __del__() actually does. https://docs.python.org/3/reference/datamodel.html > says: > > object.__del__(self) > ... > Note that it is possible (though not recommended!) for the __del__() > method to postpone destruction of the instance by creating a new > reference to it. It may then be called at a later time when this new > reference is deleted. This sentence is not technically wrong, but it can easily be misleading. It says "it *may* then be called at a later time" and probably it should say "it may or may not be called at a later time, depending on the Python implementation you are using". Indeed CPython, the reference implementation, only calls __del__ once and doesn't call it again on resurrected objects. It is an implementation detail, though, and other implementations are free to behave otherwise, as garbage collectors are delicate beasts, difficult to tame. Regards Antoine. From python at mrabarnett.plus.com Mon Mar 20 16:36:12 2017 From: python at mrabarnett.plus.com (MRAB) Date: Mon, 20 Mar 2017 20:36:12 +0000 Subject: [Python-Dev] __del__ is not called after creating a new reference In-Reply-To: <20170320212359.64c7e112@fsol> References: <20170320172829.GA28448@redhat.com> <20170320212359.64c7e112@fsol> Message-ID: <1c52a853-c224-cdc9-bb2d-6178ed118b2c@mrabarnett.plus.com> On 2017-03-20 20:23, Antoine Pitrou wrote: > > Hello Oleg, > > On Mon, 20 Mar 2017 18:28:29 +0100 > Oleg Nesterov wrote: >> I started to learn python a few days ago and I am trying to understand what >> __del__() actually does. https://docs.python.org/3/reference/datamodel.html >> says: >> >> object.__del__(self) >> ... >> Note that it is possible (though not recommended!) for the __del__() >> method to postpone destruction of the instance by creating a new >> reference to it. It may then be called at a later time when this new >> reference is deleted. > > This sentence is not technically wrong, but it can easily be > misleading. It says "it *may* then be called at a later time" and > probably it should say "it may or may not be called at a later time, > depending on the Python implementation you are using". > [snip] I don't think I'd say it's misleading, but only that it might be misunderstood. From kramm at google.com Mon Mar 20 17:11:54 2017 From: kramm at google.com (Matthias Kramm) Date: Mon, 20 Mar 2017 14:11:54 -0700 Subject: [Python-Dev] PEP 544: Protocols In-Reply-To: References: Message-ID: I'm a big fan of this. I really want structural subtyping for http://github.com/google/pytype. On Mon, Mar 20, 2017 at 5:00 AM, Ivan Levkivskyi wrote: > Explicitly declaring implementation > ----------------------------------- > > To explicitly declare that a certain class implements the given protocols, > Why is this necessary? The whole point of ducktyping is that you *don't* have to declare what you implement. I get that it looks convenient to have your protocol A also supply some of the methods you'd expect classes of type A to have. 
But completing an implementation in that way should be done explicitly (via including a utility class or using a decorator like functools.total_ordering), not as side-effect of an (unnecessary) protocol declaration. -------------- next part -------------- An HTML attachment was scrubbed... URL: From njs at pobox.com Mon Mar 20 17:28:42 2017 From: njs at pobox.com (Nathaniel Smith) Date: Mon, 20 Mar 2017 14:28:42 -0700 Subject: [Python-Dev] __del__ is not called after creating a new reference In-Reply-To: <20170320212359.64c7e112@fsol> References: <20170320172829.GA28448@redhat.com> <20170320212359.64c7e112@fsol> Message-ID: On Mar 20, 2017 1:26 PM, "Antoine Pitrou" wrote: Hello Oleg, On Mon, 20 Mar 2017 18:28:29 +0100 Oleg Nesterov wrote: > I started to learn python a few days ago and I am trying to understand what > __del__() actually does. https://docs.python.org/3/ reference/datamodel.html > says: > > object.__del__(self) > ... > Note that it is possible (though not recommended!) for the __del__() > method to postpone destruction of the instance by creating a new > reference to it. It may then be called at a later time when this new > reference is deleted. This sentence is not technically wrong, but it can easily be misleading. It says "it *may* then be called at a later time" and probably it should say "it may or may not be called at a later time, depending on the Python implementation you are using". Modern CPython, and all extant versions of PyPy and Jython, guarantee that __del__ is called at most once. MicroPython doesn't support user-defined __del__ methods. It's fine if the text wants to leave that open, but the current phrasing is pretty misleading IMO. I also read it as saying that __del__ would be called again if the object is collected again (which may or may not happen). But AFAICT there are actually zero implementations where this is true. Probably worth a small edit :-) -n -------------- next part -------------- An HTML attachment was scrubbed... URL: From levkivskyi at gmail.com Mon Mar 20 17:42:54 2017 From: levkivskyi at gmail.com (Ivan Levkivskyi) Date: Mon, 20 Mar 2017 22:42:54 +0100 Subject: [Python-Dev] PEP 544: Protocols In-Reply-To: References: Message-ID: On 20 March 2017 at 22:11, Matthias Kramm wrote: > I'm a big fan of this. I really want structural subtyping for > http://github.com/google/pytype. > > I am glad you like it. > > On Mon, Mar 20, 2017 at 5:00 AM, Ivan Levkivskyi > wrote: > >> Explicitly declaring implementation >> ----------------------------------- >> >> To explicitly declare that a certain class implements the given protocols, >> > > Why is this necessary? The whole point of ducktyping is that you *don't* > have to declare what you implement. > > I get that it looks convenient to have your protocol A also supply some of > the methods you'd expect classes of type A to have. But completing an > implementation in that way should be done explicitly (via including a > utility class or using a decorator like functools.total_ordering), not as > side-effect of an (unnecessary) protocol declaration. > I would put the question backwards: do we need to *prohibit* explicit subclassing? I think we shouldn't. Mostly for two reasons: 1. Backward compatibility: People are already using ABCs, including generic ABCs from typing module. If we prohibit explicit subclassing of these ABCs, then quite a lot of code will break. 2. 
Convenience: There are existing protocol-like ABCs (that will be turned into protocols) that have many useful "mix-in" (non-abstract) methods. For example in case of Sequence one only needs to implement __getitem__ and __len__ in an explicit subclass, and one gets __iter__, __contains__, __reversed__, index, and count for free. Another example is Mapping, one needs to implement only __getitem__, __len__, and __iter__, and one gets __contains__, keys, items, values, get, __eq__, and __ne__ for free. If you think it makes sense to add a note that implicit subtyping is preferred (especially for user defined protocols), then this certainly could be done. -- Ivan -------------- next part -------------- An HTML attachment was scrubbed... URL: From barry at python.org Mon Mar 20 19:23:57 2017 From: barry at python.org (Barry Warsaw) Date: Mon, 20 Mar 2017 19:23:57 -0400 Subject: [Python-Dev] PEP 544: Protocols References: Message-ID: <20170320192357.682b0385@subdivisions.wooz.org> On Mar 20, 2017, at 01:00 PM, Ivan Levkivskyi wrote: > from zope.interface import Interface, Attribute, implements > > class IEmployee(Interface): > > name = Attribute("Name of employee") > > def do(work): > """Do some work""" > > class Employee(object): > implements(IEmployee) IIUC, the Python 3 way to spell this is with a decorator. from zope.interface import implementer @implementer(IEmployee) class Employee: (also, since this is Python 3, do you really need to inherit from object?) Cheers, -Barry -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 801 bytes Desc: OpenPGP digital signature URL: From levkivskyi at gmail.com Mon Mar 20 20:28:45 2017 From: levkivskyi at gmail.com (Ivan Levkivskyi) Date: Tue, 21 Mar 2017 01:28:45 +0100 Subject: [Python-Dev] PEP 544: Protocols In-Reply-To: <20170320192357.682b0385@subdivisions.wooz.org> References: <20170320192357.682b0385@subdivisions.wooz.org> Message-ID: On 21 March 2017 at 00:23, Barry Warsaw wrote: > On Mar 20, 2017, at 01:00 PM, Ivan Levkivskyi wrote: > > [SNIP] > > IIUC, the Python 3 way to spell this is with a decorator. > Thanks, I will update this. > [SNIP] > (also, since this is Python 3, do you really need to inherit from object?) > Indeed. -- Ivan -------------- next part -------------- An HTML attachment was scrubbed... URL: From benjamin at python.org Tue Mar 21 02:22:46 2017 From: benjamin at python.org (Benjamin Peterson) Date: Mon, 20 Mar 2017 23:22:46 -0700 Subject: [Python-Dev] Py_SIZE vs PyXXX_GET_SIZE In-Reply-To: <20170320211830.68aafed4@fsol> References: <20170320211830.68aafed4@fsol> Message-ID: <1490077366.3786250.918073528.12B2F55B@webmail.messagingengine.com> On Mon, Mar 20, 2017, at 13:18, Antoine Pitrou wrote: > On Mon, 20 Mar 2017 13:26:34 +0200 > Serhiy Storchaka wrote: > > What is the preferable way of getting the size of tuple, list, bytes, > > bytearray: Py_SIZE or PyTuple_GET_SIZE, PyList_GET_SIZE, > > PyBytes_GET_SIZE, PyByteArray_GET_SIZE? Are macros for concrete types > > more preferable or they are outdated? > > +1 for using concrete macros. Py_SIZE is a low-level internal thing > (e.g. it will return a negative size on negative ints). +1 Py_SIZE is an implementation detail of varsize types. Using the concrete macros also the implementation to change without altering API consumers. 
From victor.stinner at gmail.com Tue Mar 21 02:48:06 2017 From: victor.stinner at gmail.com (Victor Stinner) Date: Tue, 21 Mar 2017 07:48:06 +0100 Subject: [Python-Dev] Py_SIZE vs PyXXX_GET_SIZE In-Reply-To: References: Message-ID: We may modify PyXXX_GET_SIZE() to add assert(PyXXX_Check()) to help to detect bugs and misuses of these macros in debug mode. The problem is that I expect a compilation error on PyXXX_GET_SIZE()=size. The new PyDict_GET_SIZE() macro has the assertion. Use Py_SIZE() to set the size. Victor Le 20 mars 2017 12:28, "Serhiy Storchaka" a ?crit : > What is the preferable way of getting the size of tuple, list, bytes, > bytearray: Py_SIZE or PyTuple_GET_SIZE, PyList_GET_SIZE, PyBytes_GET_SIZE, > PyByteArray_GET_SIZE? Are macros for concrete types more preferable or they > are outdated? > > On one hand concrete type macros are longer than Py_SIZE, and since > concrete type macros are defined not for all PyVarObject types we need to > use Py_SIZE for them in any case (for example for PyLongObject and > PyTypeObject). > > On other hand we can add asserts for checking that concrete type macros > are used with correct types. When I wrote a patch that replaces Py_SIZE > with concrete type macros I found two cases of misusing Py_SIZE with dict > object: one in _json.c (already fixed in 3023ebb43f7607584c3e123aff56e867cb04a418) > and other in dictobject.c (still not fixed). If prefer using concrete type > macros this would unlikely happen. > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/victor. > stinner%40gmail.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kramm at google.com Tue Mar 21 12:09:31 2017 From: kramm at google.com (Matthias Kramm) Date: Tue, 21 Mar 2017 09:09:31 -0700 Subject: [Python-Dev] PEP 544: Protocols In-Reply-To: References: Message-ID: On Mon, Mar 20, 2017 at 2:42 PM, Ivan Levkivskyi wrote: > 1. Backward compatibility: People are already using ABCs, including > generic ABCs from typing module. > If we prohibit explicit subclassing of these ABCs, then quite a lot of > code will break. > Fair enough. Backwards compatibility is a valid point, and both abc.py and typing.py have classes that lend itself to becoming protocols. The one thing that isn't clear to me is how type checkers will distinguish between 1.) Protocol methods in A that need to implemented in B so that B is considered a structural subclass of A. 2.) Extra methods you get for free when you explicitly inherit from A. To provide a more concrete example: Since Mapping implements __eq__, do I also have to implement __eq__ if I want my class to be (structurally) compatible with Mapping? > If you think it makes sense to add a note that implicit subtyping is > preferred (especially for user defined protocols), > then this certainly could be done. > Yes, I believe it would be good to mention that. -------------- next part -------------- An HTML attachment was scrubbed... URL: From levkivskyi at gmail.com Tue Mar 21 12:36:45 2017 From: levkivskyi at gmail.com (Ivan Levkivskyi) Date: Tue, 21 Mar 2017 17:36:45 +0100 Subject: [Python-Dev] PEP 544: Protocols In-Reply-To: References: Message-ID: On 21 March 2017 at 17:09, Matthias Kramm wrote: > > The one thing that isn't clear to me is how type checkers will distinguish > between > 1.) 
Protocol methods in A that need to implemented in B so that B is > considered a structural subclass of A. > 2.) Extra methods you get for free when you explicitly inherit from A. > > To provide a more concrete example: Since Mapping implements __eq__, do I > also have to implement __eq__ if I want my class to be (structurally) > compatible with Mapping? > An implicit subtype should implement all methods, so that yes, in this case __eq__ should be implemented for Mapping. There was an idea to make some methods "non-protocol" (i.e. not necessary to implement), but it was rejected, since this complicates things. Briefly, consider this function: def fun(m: Mapping): m.keys() The question is should this be an error? I think most people would expect this to be valid. The same applies to most other methods in Mapping, people expect that they are provided my Mapping. Therefore, to be on the safe side, we need to require these methods to be implemented. If you look at definitions in collections.abc, there are very few methods that could be considered "non-protocol". Therefore, it was decided to not introduce "non-protocol" methods. There is only one downside for this: it will require some boilerplate for implicit subtypes of Mapping etc. But, this applies to few "built-in" protocols (like Mapping and Sequence) and people already subclass them. Also, such style will be discouraged for user defined protocols. It will be recommended to create compact protocols and combine them. (This was discussed, but it looks like we forgot to add an explicit statement about this.) I will add a section on non-protocol methods to rejected/postponed ideas. -- Ivan -------------- next part -------------- An HTML attachment was scrubbed... URL: From guido at python.org Tue Mar 21 12:57:05 2017 From: guido at python.org (Guido van Rossum) Date: Tue, 21 Mar 2017 09:57:05 -0700 Subject: [Python-Dev] PEP 544: Protocols In-Reply-To: References: Message-ID: Technically, `__eq__` is implemented by `object` so a `Mapping` implementation that didn't implement it would still be considered valid. But probably not very useful (since the default implementation in this case is implemented by comparing object identity). On Tue, Mar 21, 2017 at 9:36 AM, Ivan Levkivskyi wrote: > On 21 March 2017 at 17:09, Matthias Kramm wrote: > >> >> The one thing that isn't clear to me is how type checkers will >> distinguish between >> 1.) Protocol methods in A that need to implemented in B so that B is >> considered a structural subclass of A. >> 2.) Extra methods you get for free when you explicitly inherit from A. >> >> To provide a more concrete example: Since Mapping implements __eq__, do I >> also have to implement __eq__ if I want my class to be (structurally) >> compatible with Mapping? >> > > An implicit subtype should implement all methods, so that yes, in this > case __eq__ should be implemented for Mapping. > > There was an idea to make some methods "non-protocol" (i.e. not necessary > to implement), but it was rejected, > since this complicates things. Briefly, consider this function: > > def fun(m: Mapping): > m.keys() > > The question is should this be an error? I think most people would expect > this to be valid. > The same applies to most other methods in Mapping, people expect that > they are provided my Mapping. Therefore, to be on the safe side, we need > to require these methods to be implemented. If you look at definitions in > collections.abc, > there are very few methods that could be considered "non-protocol". 
> Therefore, it was decided > to not introduce "non-protocol" methods. > > There is only one downside for this: it will require some boilerplate for > implicit subtypes of Mapping etc. > But, this applies to few "built-in" protocols (like Mapping and Sequence) > and people already subclass them. > Also, such style will be discouraged for user defined protocols. It will > be recommended to create compact > protocols and combine them. (This was discussed, but it looks like we > forgot to add an explicit statement about this.) > > I will add a section on non-protocol methods to rejected/postponed ideas. > > -- > Ivan > > > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/ > guido%40python.org > > -- --Guido van Rossum (python.org/~guido) -------------- next part -------------- An HTML attachment was scrubbed... URL: From brett at python.org Tue Mar 21 13:03:17 2017 From: brett at python.org (Brett Cannon) Date: Tue, 21 Mar 2017 17:03:17 +0000 Subject: [Python-Dev] PEP 544: Protocols In-Reply-To: References: Message-ID: On Tue, 21 Mar 2017 at 09:17 Matthias Kramm via Python-Dev < python-dev at python.org> wrote: > On Mon, Mar 20, 2017 at 2:42 PM, Ivan Levkivskyi > wrote: > > 1. Backward compatibility: People are already using ABCs, including > generic ABCs from typing module. > If we prohibit explicit subclassing of these ABCs, then quite a lot of > code will break. > > > Fair enough. Backwards compatibility is a valid point, and both abc.py and > typing.py have classes that lend itself to becoming protocols. > Another key point is that if you block subclassing then this stops being useful to anyone not using a type checker. While the idea of protocols is to support structural typing, there is nothing wrong with having nominal typing through ABCs help enforce the structural typing of a subclass at the same time. You could argue that if you want that you define the base ABC first and then have a class that literally does nothing but inherit from that base ABC and Protocol, but that's unnecessary duplication in an API to have the structural type and nominal type separate when we have a mechanism that can support both. > > The one thing that isn't clear to me is how type checkers will distinguish > between > 1.) Protocol methods in A that need to implemented in B so that B is > considered a structural subclass of A. > 2.) Extra methods you get for free when you explicitly inherit from A. > > To provide a more concrete example: Since Mapping implements __eq__, do I > also have to implement __eq__ if I want my class to be (structurally) > compatible with Mapping? > > > If you think it makes sense to add a note that implicit subtyping is > preferred (especially for user defined protocols), > then this certainly could be done. > > > Yes, I believe it would be good to mention that. > I don't think it needs to be explicitly discouraged if you want to make sure you implement the abstract methods (ABCs are useful for a reason). I do think it's fine, though, to make it very clear that whether you subclass or not makes absolutely no difference to tools validating the type soundness of the code. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From levkivskyi at gmail.com Tue Mar 21 14:05:41 2017 From: levkivskyi at gmail.com (Ivan Levkivskyi) Date: Tue, 21 Mar 2017 19:05:41 +0100 Subject: [Python-Dev] PEP 544: Protocols In-Reply-To: References: Message-ID: On 21 March 2017 at 18:03, Brett Cannon wrote: > I do think it's fine, though, to make it very clear that whether you > subclass or not makes absolutely no difference to tools validating the type > soundness of the code. > There are two places where PEP draft says: "Note that there is no conceptual difference between explicit and implicit subtypes" and "The general philosophy is that protocols are mostly like regular ABCs, but a static type checker will handle them specially." Do you want to propose alternative wording for these, or would you rather like an additional statement? -- Ivan -------------- next part -------------- An HTML attachment was scrubbed... URL: From oleg at redhat.com Tue Mar 21 14:22:23 2017 From: oleg at redhat.com (Oleg Nesterov) Date: Tue, 21 Mar 2017 19:22:23 +0100 Subject: [Python-Dev] __del__ is not called after creating a new reference In-Reply-To: References: <20170320172829.GA28448@redhat.com> <20170320212359.64c7e112@fsol> Message-ID: <20170321182223.GA32729@redhat.com> On 03/20, Nathaniel Smith wrote: > > Modern CPython, and all extant versions of PyPy and Jython, guarantee that > __del__ is called at most once. MicroPython doesn't support user-defined > __del__ methods. > > It's fine if the text wants to leave that open, but the current phrasing is > pretty misleading IMO. I also read it as saying that __del__ would be > called again if the object is collected again (which may or may not > happen). Yes, that is why I was confused. Just I could not believe nobody else noticed this "bug" so I decided to check the sources and yes, the code looks very clear. > But AFAICT there are actually zero implementations where this is > true. Probably this was mostly true until the commit 796564c2 ("Issue #18112: PEP 442 implementation (safe object finalization)."), python2 calls __del__ again. > Probably worth a small edit :-) Agreed. And it seems that not only me was confused, http://doc.pypy.org/en/latest/cpython_differences.html says: There are a few extra implications from the difference in the GC. Most notably, if an object has a __del__, the __del__ is never called more than once in PyPy; but CPython will call the same __del__ several times if the object is resurrected and dies again. Thanks to all! Oleg. From kramm at google.com Tue Mar 21 19:50:39 2017 From: kramm at google.com (Matthias Kramm) Date: Tue, 21 Mar 2017 16:50:39 -0700 Subject: [Python-Dev] PEP 544: Protocols In-Reply-To: References: Message-ID: On Tue, Mar 21, 2017 at 11:05 AM, Ivan Levkivskyi wrote: > There are two places where PEP draft says: > > "Note that there is no conceptual difference between explicit and implicit > subtypes" > > and > > "The general philosophy is that protocols are mostly like regular ABCs, > but a static type checker will handle them specially." > > Do you want to propose alternative wording for these, or would you rather > like an additional statement? > Let's do an additional statement. Something like "Static analysis tools are expected to automatically detect that a class implements a given protocol. So while it's possible to subclass a protocol explicitly, it's not necessary to do so for the sake of type-checking." -------------- next part -------------- An HTML attachment was scrubbed... 
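On the __del__ sub-thread above: the at-most-once finalization that PEP
442 introduced in CPython 3.4 can be seen with a small resurrection
experiment (an illustrative sketch, not code from the thread):

class Phoenix:
    def __del__(self):
        print("__del__ called")
        global keeper
        keeper = self  # resurrect the instance from inside the finalizer

keeper = None
Phoenix()       # the temporary dies immediately; __del__ runs and resurrects it
keeper = None   # the resurrected object dies again...

On CPython 3.4 and later the finalizer is not run a second time, so
exactly one "__del__ called" line is printed; Python 2 prints it twice,
which is the difference the documentation wording glossed over.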
URL: From victor.stinner at gmail.com Tue Mar 21 20:46:39 2017 From: victor.stinner at gmail.com (Victor Stinner) Date: Wed, 22 Mar 2017 01:46:39 +0100 Subject: [Python-Dev] Translated Python documentation In-Reply-To: References: <20170222164058.7b0333ef@fsol> <1b5001d28e8d$6c1e6570$445b3050$@hotmail.com> Message-ID: 2017-03-10 1:03 GMT+01:00 Victor Stinner : > FYI we are already working on a PEP with Julien Palard (FR) and INADA > Naoki (JP). We will post it when it will be ready ;-) Ok, Julien wrote the PEP with the help of Naoki and myself. He posted it on python-ideas for a first review: https://mail.python.org/pipermail/python-ideas/2017-March/045226.html The PEP is now very complete and lists all requested changes on the Python side. Let's discuss that on the python-ideas list ;-) Victor From nad at python.org Tue Mar 21 23:16:50 2017 From: nad at python.org (Ned Deily) Date: Tue, 21 Mar 2017 23:16:50 -0400 Subject: [Python-Dev] [RELEASE] Python 3.6.1 is now available Message-ID: <4AA9F522-CAC7-4ACC-991C-495F17E1045A@python.org> On behalf of the Python development community and the Python 3.6 release team, I would like to announce the availability of Python 3.6.1, the first maintenance release of Python 3.6. 3.6.0 was released on 2016-12-22 to great interest and now, three months later, we are providing the first set of bugfixes and documentation updates for it. Although it should be transparent to users of Python, 3.6.1 is the first release after some major changes to our development process so we ask users who build Python from source to be on the lookout for any unexpected differences. Please see "What?s New In Python 3.6" for more information: https://docs.python.org/3.6/whatsnew/3.6.html You can find Python 3.6.1 here: https://www.python.org/downloads/release/python-361/ and its change log here: https://docs.python.org/3.6/whatsnew/changelog.html#python-3-6-1 The next maintenance release of Python 3.6 is expected to follow in about 3 months by the end of 2017-06. More information about the 3.6 release schedule can be found here: https://www.python.org/dev/peps/pep-0494/ -- Ned Deily nad at python.org -- [] From victor.stinner at gmail.com Thu Mar 23 04:41:11 2017 From: victor.stinner at gmail.com (Victor Stinner) Date: Thu, 23 Mar 2017 09:41:11 +0100 Subject: [Python-Dev] Exact date of Python 2 EOL? Message-ID: Hi, I looked at the "Status of Python branches" to check if it was up to date. It's the case, thanks :-) But it recalled me that no exact date was decided for the official end of line of the the Python 2 branch (2.7 EOL). https://docs.python.org/devguide/#status-of-python-branches says January 1st, 2020 https://pythonclock.org/ picked April 12th, 2020 (Pycon US) "Python 2.7 will not be maintained past 2020. No official date has been given, so this clock counts down until April 12th, 2020, which will be roughly the time of the 2020 PyCon. I am hereby suggesting we make PyCon 2020 the official end-of-life date, and we throw a massive party to celebrate all that Python 2 has done for us. (If this sounds interesting to you, email pythonclockorg at gmail.com)." Can we pick an official date? By the way, maybe we can also start to list vendors (Linux vendors?) who plan to offer commercial extended support? 
For example, you can expect longer support than 2020 from RHEL: https://access.redhat.com/support/policy/updates/errata Ubuntu 12.04 reached its end of life after 5 years, but it seems like Canonical also starts to offer extended support to customers: http://www.omgubuntu.co.uk/2017/03/ubuntu-12-04-esm-support You can expect longer Python 2 support indirectely ;-) Victor From ilya.kazakevich at jetbrains.com Thu Mar 23 10:04:45 2017 From: ilya.kazakevich at jetbrains.com (Ilya Kazakevich) Date: Thu, 23 Mar 2017 17:04:45 +0300 Subject: [Python-Dev] 3.5 unittest does not support namespace packages for discovering Message-ID: Hello. I have following layout: \---tests | test_module.py | __init__.py When I launch "python.exe" -m unittest discover -t . -s tests" it works perfectly. But when I remove " __init__.py" it says Start directory is not importable: "tests'" ``loader.py``: if start_dir != top_level_dir: is_not_importable = not os.path.isfile(os.path.join(start_dir, '__init__.py')) I believe ``__init__.py`` does not play any role since Python 3.3, am I right? If so, it seems to be a bug. Should I create an issue? Ilya. -------------- next part -------------- An HTML attachment was scrubbed... URL: From songofacandy at gmail.com Thu Mar 23 11:53:31 2017 From: songofacandy at gmail.com (INADA Naoki) Date: Fri, 24 Mar 2017 00:53:31 +0900 Subject: [Python-Dev] 3.5 unittest does not support namespace packages for discovering In-Reply-To: References: Message-ID: There is already http://bugs.python.org/issue29642 On Thu, Mar 23, 2017 at 11:04 PM, Ilya Kazakevich wrote: > Hello. > I have following layout: > > \---tests > | test_module.py > | __init__.py > > > When I launch "python.exe" -m unittest discover -t . -s tests" it works > perfectly. > But when I remove " __init__.py" it says > > Start directory is not importable: "tests'" > > ``loader.py``: > if start_dir != top_level_dir: > is_not_importable = not os.path.isfile(os.path.join(start_dir, > '__init__.py')) > > > I believe ``__init__.py`` does not play any role since Python 3.3, am I > right? > If so, it seems to be a bug. Should I create an issue? > > Ilya. > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/songofacandy%40gmail.com > From songofacandy at gmail.com Thu Mar 23 11:59:23 2017 From: songofacandy at gmail.com (INADA Naoki) Date: Fri, 24 Mar 2017 00:59:23 +0900 Subject: [Python-Dev] 3.5 unittest does not support namespace packages for discovering In-Reply-To: References: Message-ID: And this issue is relating to it too: http://bugs.python.org/issue29716 In short, "namespace package" is for make it possible to `pip install foo_bar foo_baz`, when foo_bar provides `foo.bar` and foo_baz provides `foo.baz` package. (foo is namespace package). If unittests searches normal directly, it may walk deep into very large tree containing millions of directories. I don't like it. On Fri, Mar 24, 2017 at 12:53 AM, INADA Naoki wrote: > There is already http://bugs.python.org/issue29642 > > On Thu, Mar 23, 2017 at 11:04 PM, Ilya Kazakevich > wrote: >> Hello. >> I have following layout: >> >> \---tests >> | test_module.py >> | __init__.py >> >> >> When I launch "python.exe" -m unittest discover -t . -s tests" it works >> perfectly. 
>> But when I remove " __init__.py" it says >> >> Start directory is not importable: "tests'" >> >> ``loader.py``: >> if start_dir != top_level_dir: >> is_not_importable = not os.path.isfile(os.path.join(start_dir, >> '__init__.py')) >> >> >> I believe ``__init__.py`` does not play any role since Python 3.3, am I >> right? >> If so, it seems to be a bug. Should I create an issue? >> >> Ilya. >> >> _______________________________________________ >> Python-Dev mailing list >> Python-Dev at python.org >> https://mail.python.org/mailman/listinfo/python-dev >> Unsubscribe: >> https://mail.python.org/mailman/options/python-dev/songofacandy%40gmail.com >> From skip.montanaro at gmail.com Thu Mar 23 12:12:59 2017 From: skip.montanaro at gmail.com (Skip Montanaro) Date: Thu, 23 Mar 2017 11:12:59 -0500 Subject: [Python-Dev] Exact date of Python 2 EOL? In-Reply-To: References: Message-ID: By the way, maybe we can also start to list vendors (Linux vendors?) who plan to offer commercial extended ... Delurking ever so briefly... Might be worthwhile to list published vendor EOL dates no matter if they are before or after the 2020 EOL date. Different Linux distros have different focuses. This could very well be a useful gauge for potential users. Skip -------------- next part -------------- An HTML attachment was scrubbed... URL: From barry at python.org Thu Mar 23 12:19:15 2017 From: barry at python.org (Barry Warsaw) Date: Thu, 23 Mar 2017 12:19:15 -0400 Subject: [Python-Dev] Exact date of Python 2 EOL? In-Reply-To: References: Message-ID: <20170323121915.4085bcde@subdivisions.wooz.org> On Mar 23, 2017, at 09:41 AM, Victor Stinner wrote: >Can we pick an official date? Benjamin should pick the date and update PEP 373. >Ubuntu 12.04 reached its end of life after 5 years, but it seems like >Canonical also starts to offer extended support to customers: >http://www.omgubuntu.co.uk/2017/03/ubuntu-12-04-esm-support Ubuntu 14.04 and 16.04 both have Python 2.7 so those will be supported until 2019 and 2021 respectively. I'm still hoping to relegate 2.7 to universe for 18.04 LTS. I don't think we'll be able to drop it entirely (nor should we). If it stays in main, it will get official support until 2023. https://www.ubuntu.com/info/release-end-of-life Cheers, -Barry From brian at python.org Thu Mar 23 12:37:11 2017 From: brian at python.org (Brian Curtin) Date: Thu, 23 Mar 2017 12:37:11 -0400 Subject: [Python-Dev] Exact date of Python 2 EOL? In-Reply-To: <20170323121915.4085bcde@subdivisions.wooz.org> References: <20170323121915.4085bcde@subdivisions.wooz.org> Message-ID: On Thu, Mar 23, 2017 at 12:19 PM, Barry Warsaw wrote: > On Mar 23, 2017, at 09:41 AM, Victor Stinner wrote: > >>Can we pick an official date? > > Benjamin should pick the date and update PEP 373. Not to start a bikeshed (calendarshed?), but how about 8 February 2020, or 2/8 as some in the US would write it? From steve.dower at python.org Thu Mar 23 12:56:21 2017 From: steve.dower at python.org (Steve Dower) Date: Thu, 23 Mar 2017 09:56:21 -0700 Subject: [Python-Dev] Exact date of Python 2 EOL? In-Reply-To: References: <20170323121915.4085bcde@subdivisions.wooz.org> Message-ID: On 23Mar2017 0937, Brian Curtin wrote: > On Thu, Mar 23, 2017 at 12:19 PM, Barry Warsaw wrote: >> On Mar 23, 2017, at 09:41 AM, Victor Stinner wrote: >> >>> Can we pick an official date? >> >> Benjamin should pick the date and update PEP 373. > > Not to start a bikeshed (calendarshed?), but how about 8 February > 2020, or 2/8 as some in the US would write it? 
Whatever date is it will have to have a release on it. There's no point in taking fixes after the final release, and I'd suggest that given it is the last one that we'll want a longer RC period with RM approval required for any changes after that. (I predict a slight chance of a last minute rush to get contribution made - possibly it's worth locking things down from the second-to-last release?) All of which to say, there are more important concerns than choosing a cute date :) I've got no doubt that Benjamin will come up with a plan that leads to a stable and reliable release. Cheers, Steve From nad at python.org Thu Mar 23 12:59:27 2017 From: nad at python.org (Ned Deily) Date: Thu, 23 Mar 2017 12:59:27 -0400 Subject: [Python-Dev] Exact date of Python 2 EOL? In-Reply-To: References: Message-ID: <0EF5AC3F-A551-4C1C-9448-F92B139E540C@python.org> On Mar 23, 2017, at 04:41, Victor Stinner wrote: > By the way, maybe we can also start to list vendors (Linux vendors?) > who plan to offer commercial extended support? IMO, we should definitely not list 2.7 vendor and their plans on a python.org web page. One, it would be a maintenance headache for the web site / devguide. Two, it puts us and the PSF in the position of curating such a list and deciding who get to be on it and in what order and with keeping up with vendors' changing plans etc etc. Third, most likely they are already using a particular vendor so they are not likely to change just because of Python 2 support. And, fourth and most important, we don't really want to be encouraging people to stay on 2.7 anyway! If it is important enough for them, they can do their research and find out directly from the vendors. -- Ned Deily nad at python.org -- [] From tjreedy at udel.edu Thu Mar 23 14:54:23 2017 From: tjreedy at udel.edu (Terry Reedy) Date: Thu, 23 Mar 2017 14:54:23 -0400 Subject: [Python-Dev] Exact date of Python 2 EOL? In-Reply-To: References: Message-ID: On 3/23/2017 4:41 AM, Victor Stinner wrote: > Hi, > > I looked at the "Status of Python branches" to check if it was up to > date. It's the case, thanks :-) But it recalled me that no exact date > was decided for the official end of line of the the Python 2 branch > (2.7 EOL). > https://docs.python.org/devguide/#status-of-python-branches > says January 1st, 2020 > Can we pick an official date? The devguide list is effectively the official list. No one should plan on getting anything from pydev after 1/1/20. As far as ''we' are concerned, 2.7 EOL will be when the final 2.7.n release is made. If Benjamin continues twice-a-year releases, n could be 20 in mid 2020, about 10 years after 2.7.0 in 2010-7-3. Or perhaps he will push a possible 2.7.19 'Dec 2019' release to 2020-01-01. Especially if the rate of commits decreases, he might lengthen the interval, thereby reducing the final ''n'. In the meanwhile, I think we should let him continue planning one release ahead. 
https://www.python.org/dev/peps/pep-0373/ Planned future release dates: 2.7.14 in mid 2017 -- Terry Jan Reedy From robertc at robertcollins.net Thu Mar 23 15:14:28 2017 From: robertc at robertcollins.net (Robert Collins) Date: Fri, 24 Mar 2017 08:14:28 +1300 Subject: [Python-Dev] 3.5 unittest does not support namespace packages for discovering In-Reply-To: References: Message-ID: On 24 March 2017 at 04:59, INADA Naoki wrote: > And this issue is relating to it too: http://bugs.python.org/issue29716 > > In short, "namespace package" is for make it possible to `pip install > foo_bar foo_baz`, > when foo_bar provides `foo.bar` and foo_baz provides `foo.baz` > package. (foo is namespace package). > > If unittests searches normal directly, it may walk deep into very > large tree containing > millions of directories. I don't like it. That is a risk, OTOH I think the failure to do what folk expect is a bigger risk. -Rob From python at mrabarnett.plus.com Thu Mar 23 16:02:02 2017 From: python at mrabarnett.plus.com (MRAB) Date: Thu, 23 Mar 2017 20:02:02 +0000 Subject: [Python-Dev] Exact date of Python 2 EOL? In-Reply-To: References: <20170323121915.4085bcde@subdivisions.wooz.org> Message-ID: On 2017-03-23 16:37, Brian Curtin wrote: > On Thu, Mar 23, 2017 at 12:19 PM, Barry Warsaw wrote: >> On Mar 23, 2017, at 09:41 AM, Victor Stinner wrote: >> >>>Can we pick an official date? >> >> Benjamin should pick the date and update PEP 373. > > Not to start a bikeshed (calendarshed?), but how about 8 February > 2020, or 2/8 as some in the US would write it? > If you see 2/8, is that 2 August or February 8? We could avoid ambiguity if day_of_month == month or day_of_month > 12. From ned at nedbatchelder.com Thu Mar 23 16:41:01 2017 From: ned at nedbatchelder.com (Ned Batchelder) Date: Thu, 23 Mar 2017 16:41:01 -0400 Subject: [Python-Dev] 3.5 unittest does not support namespace packages for discovering In-Reply-To: References: Message-ID: <79e9ad62-6976-8fd7-c683-882916349f12@nedbatchelder.com> On 3/23/17 3:14 PM, Robert Collins wrote: > On 24 March 2017 at 04:59, INADA Naoki wrote: >> And this issue is relating to it too: http://bugs.python.org/issue29716 >> >> In short, "namespace package" is for make it possible to `pip install >> foo_bar foo_baz`, >> when foo_bar provides `foo.bar` and foo_baz provides `foo.baz` >> package. (foo is namespace package). >> >> If unittests searches normal directly, it may walk deep into very >> large tree containing >> millions of directories. I don't like it. > That is a risk, OTOH I think the failure to do what folk expect is a > bigger risk. The issue here is, what do folks expect? PEP 420 is pretty clear on its purpose. The first sentence of the abstract: > Namespace packages are a mechanism for splitting a single Python package across multiple directories on disk. And the first sentence of the specification: > Regular packages will continue to have an __init__.py and will reside in a single directory. PEP 420 is not meant to make all __init__.py files optional. It has a specific purpose. These proposed changes are not in support of that purpose. We should not bend over backwards to support getting rid of __init__.py files just because people don't like empty __init__.py files. That's not what PEP 420 is for. --Ned. -------------- next part -------------- An HTML attachment was scrubbed... 
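Back on the discovery question: the tension is that a directory without
an __init__.py can be perfectly importable under PEP 420 while still
failing the isfile() check quoted earlier. A self-contained sketch (the
throwaway package name is made up for the example):

import importlib
import os
import sys
import tempfile

root = tempfile.mkdtemp()
pkg_dir = os.path.join(root, "nsdemo_tests")   # deliberately no __init__.py
os.makedirs(pkg_dir)
with open(os.path.join(pkg_dir, "test_module.py"), "w") as f:
    f.write("x = 1\n")

sys.path.insert(0, root)
pkg = importlib.import_module("nsdemo_tests")  # imports as a PEP 420 namespace package
print(list(pkg.__path__))                      # the directory, despite the missing __init__.py
print(os.path.isfile(os.path.join(pkg_dir, "__init__.py")))  # False: the loader's test rejects it

Whether discovery *should* treat such a directory as a package is exactly
the policy question debated above; the sketch only shows that importability
and the __init__.py check genuinely diverge.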
URL: From solipsis at pitrou.net Thu Mar 23 17:42:13 2017 From: solipsis at pitrou.net (Antoine Pitrou) Date: Thu, 23 Mar 2017 22:42:13 +0100 Subject: [Python-Dev] Exact date of Python 2 EOL? References: Message-ID: <20170323224213.620f7afb@fsol> On Thu, 23 Mar 2017 09:41:11 +0100 Victor Stinner wrote: > Hi, > > I looked at the "Status of Python branches" to check if it was up to > date. It's the case, thanks :-) But it recalled me that no exact date > was decided for the official end of line of the the Python 2 branch > (2.7 EOL). > > https://docs.python.org/devguide/#status-of-python-branches > says January 1st, 2020 > > https://pythonclock.org/ picked April 12th, 2020 (Pycon US) I suggest we pick April 1st, 2020, so as to extra confuse users who are still on Python 2 ;-) Regards Antoine. From p.f.moore at gmail.com Thu Mar 23 18:15:04 2017 From: p.f.moore at gmail.com (Paul Moore) Date: Thu, 23 Mar 2017 22:15:04 +0000 Subject: [Python-Dev] 3.5 unittest does not support namespace packages for discovering In-Reply-To: <79e9ad62-6976-8fd7-c683-882916349f12@nedbatchelder.com> References: <79e9ad62-6976-8fd7-c683-882916349f12@nedbatchelder.com> Message-ID: On 23 March 2017 at 20:41, Ned Batchelder wrote: >>> If unittests searches normal directly, it may walk deep into very >>> large tree containing >>> millions of directories. I don't like it. > >> That is a risk, OTOH I think the failure to do what folk expect is a >> bigger risk. > > The issue here is, what do folks expect? PEP 420 is pretty clear on its > purpose. The unittest docs say (https://docs.python.org/3.6/library/unittest.html#test-discovery) """ Unittest supports simple test discovery. In order to be compatible with test discovery, all of the test files must be modules or packages (including namespace packages) importable from the top-level directory of the project (this means that their filenames must be valid identifiers). """ Personally, that to me pretty clearly implies that namespace packages *should* work. They are explicitly mentioned, after all. I agree that the implication that any random directory will get scanned is unexpected - after all *I* know which directories I intended to be namespace packages, it's just the computer that doesn't. Either way, someone is likely to get surprised. But I'd argue that if we retain the current __init__.py check, we need to amend the docs - and that's then a change in the documented behaviour. I do think it's a relatively minor point either way, and practicality may well imply that we're better avoiding the risk of discovery being impractical in projects with large directories full of resource files or similar. But on the expectation side of things, I'd consider the current behaviour as unexpected. Paul From barry at python.org Thu Mar 23 18:47:09 2017 From: barry at python.org (Barry Warsaw) Date: Thu, 23 Mar 2017 18:47:09 -0400 Subject: [Python-Dev] Exact date of Python 2 EOL? In-Reply-To: References: <20170323121915.4085bcde@subdivisions.wooz.org> Message-ID: <20170323184709.5c89af40@subdivisions.wooz.org> On Mar 23, 2017, at 08:02 PM, MRAB wrote: >If you see 2/8, is that 2 August or February 8? I think that's 0.25 which doesn't look like a date to me . ISO 8601 dates please: 2020-02-08 is unambiguous. -Barry From jsbueno at python.org.br Thu Mar 23 20:08:24 2017 From: jsbueno at python.org.br (Joao S. O. Bueno) Date: Thu, 23 Mar 2017 21:08:24 -0300 Subject: [Python-Dev] Exact date of Python 2 EOL? 
In-Reply-To: <20170323184709.5c89af40@subdivisions.wooz.org> References: <20170323121915.4085bcde@subdivisions.wooz.org> <20170323184709.5c89af40@subdivisions.wooz.org> Message-ID: On 23 March 2017 at 19:47, Barry Warsaw wrote: > On Mar 23, 2017, at 08:02 PM, MRAB wrote: > >>If you see 2/8, is that 2 August or February 8? > > I think that's 0.25 which doesn't look like a date to me . ISO 8601 > dates please: 2020-02-08 is unambiguous. In Python 2, 2/8 is just 0. > > -Barry > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/jsbueno%40python.org.br From victor.stinner at gmail.com Thu Mar 23 20:28:16 2017 From: victor.stinner at gmail.com (Victor Stinner) Date: Fri, 24 Mar 2017 01:28:16 +0100 Subject: [Python-Dev] Exact date of Python 2 EOL? In-Reply-To: References: Message-ID: Le 23 mars 2017 19:56, "Terry Reedy" a ?crit : > https://docs.python.org/devguide/#status-of-python-branches > says January 1st, 2020 Can we pick an official date? > The devguide list is effectively the official list. No one should plan on getting anything from pydev after 1/1/20. Oh sorry, I forgot to mention that *I* wrote this table. I chose this date just to fill quickly the table. But I now would like a precise date to prevent future confusion. Victor -------------- next part -------------- An HTML attachment was scrubbed... URL: From schesis at gmail.com Thu Mar 23 20:26:50 2017 From: schesis at gmail.com (Zero Piraeus) Date: Thu, 23 Mar 2017 21:26:50 -0300 Subject: [Python-Dev] Exact date of Python 2 EOL? In-Reply-To: References: <20170323121915.4085bcde@subdivisions.wooz.org> <20170323184709.5c89af40@subdivisions.wooz.org> Message-ID: <1490315210.1910.7.camel@gmail.com> : On Thu, 2017-03-23 at 21:08 -0300, Joao S. O. Bueno wrote: > In Python 2, 2/8 is just 0. 27/7 is 3 in Python 2, and between 3.8 and 3.9 in Python 3 (which is probably about where 3.x will be). ?-[]z. From ncoghlan at gmail.com Fri Mar 24 02:03:38 2017 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 24 Mar 2017 16:03:38 +1000 Subject: [Python-Dev] Exact date of Python 2 EOL? In-Reply-To: <20170323121915.4085bcde@subdivisions.wooz.org> References: <20170323121915.4085bcde@subdivisions.wooz.org> Message-ID: On 24 March 2017 at 02:19, Barry Warsaw wrote: > On Mar 23, 2017, at 09:41 AM, Victor Stinner wrote: > > >Can we pick an official date? > > Benjamin should pick the date and update PEP 373. > +1. The other detail to be clarified is that we know 2020-MM (whatever MM ends up being) will be the last upstream 2.7.x binary release (so the security-updates-for-bundled-OpenSSL problem will go away), but we haven't decided yet how other security updates after that point will be handled. If It's just RHEL/CentOS (and maybe Software Collections) keeping the branch alive past the upstream end-of-community-support date it's not a big deal, but if there are other redistributors or folks running self-supported binaries still monitoring it for security updates, then it may make sense to continue putting any required security patches in a common location, even if we stop doing any new upstream releases. > I'm still hoping to relegate 2.7 to universe for 18.04 LTS. I don't think > we'll be able to drop it entirely (nor should we). If it stays in main, it > will get official support until 2023. 
> > https://www.ubuntu.com/info/release-end-of-life > As far as I'm aware, Samba is the main remaining challenge for Fedora Server on that front, but at least all of the libraries it depends on have received the necessary updates to make them Python 2/3 compatible: http://fedora.portingdb.xyz/pkg/samba/ Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Fri Mar 24 02:30:28 2017 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 24 Mar 2017 16:30:28 +1000 Subject: [Python-Dev] Exact date of Python 2 EOL? In-Reply-To: <0EF5AC3F-A551-4C1C-9448-F92B139E540C@python.org> References: <0EF5AC3F-A551-4C1C-9448-F92B139E540C@python.org> Message-ID: On 24 March 2017 at 02:59, Ned Deily wrote: > On Mar 23, 2017, at 04:41, Victor Stinner > wrote: > > By the way, maybe we can also start to list vendors (Linux vendors?) > > who plan to offer commercial extended support? > > IMO, we should definitely not list 2.7 vendor and their plans on a > python.org web page. One, it would be a maintenance headache for the web > site / devguide. Two, it puts us and the PSF in the position of curating > such a list and deciding who get to be on it and in what order and with > keeping up with vendors' changing plans etc etc. Third, most likely they > are already using a particular vendor so they are not likely to change just > because of Python 2 support. And, fourth and most important, we don't > really want to be encouraging people to stay on 2.7 anyway! If it is > important enough for them, they can do their research and find out directly > from the vendors. > Agreed, although I'd be open to maintaining such a list in my Python 3 Q&A at http://python-notes.curiousefficiency.org/en/latest/python3/questions_and_answers.html That's explicitly caveated as being both from my personal perspective and only intermittently updated. It also already mentions the 2024 date, since that comes from RHEL 7's 2014 release date and 10 year support cycle. All the other commercial support end dates we know for certain at this point finish before 2020, with Canonical being a "maybe" on 2023 (depending on what happens in Ubuntu 18.04). Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia -------------- next part -------------- An HTML attachment was scrubbed... URL: From barry at python.org Fri Mar 24 09:49:17 2017 From: barry at python.org (Barry Warsaw) Date: Fri, 24 Mar 2017 09:49:17 -0400 Subject: [Python-Dev] Exact date of Python 2 EOL? In-Reply-To: References: <20170323121915.4085bcde@subdivisions.wooz.org> Message-ID: <20170324094917.08d41066@subdivisions.wooz.org> On Mar 24, 2017, at 04:03 PM, Nick Coghlan wrote: >As far as I'm aware, Samba is the main remaining challenge for Fedora >Server on that front, but at least all of the libraries it depends on have >received the necessary updates to make them Python 2/3 compatible: >http://fedora.portingdb.xyz/pkg/samba/ Samba is the last big one keeping 2.7 on the Ubuntu desktop edition as well. -Barry -------------- next part -------------- A non-text attachment was scrubbed... 
Name: not available Type: application/pgp-signature Size: 801 bytes Desc: OpenPGP digital signature URL: From aymeric.fromherz at ens.fr Fri Mar 24 06:57:55 2017 From: aymeric.fromherz at ens.fr (Aymeric Fromherz) Date: Fri, 24 Mar 2017 11:57:55 +0100 Subject: [Python-Dev] Await and Async keywords Message-ID: <58D4FBB3.1040704@ens.fr> Hi, I'm currently looking into how Python3 is parsed, and I'm wondering why await and async aren't considered as keywords? Are there programs actually using await and async as variable names? Is there another behaviour where it is interesting to use async for something different? Cheers, Aymeric From jelle.zijlstra at gmail.com Fri Mar 24 11:17:51 2017 From: jelle.zijlstra at gmail.com (Jelle Zijlstra) Date: Fri, 24 Mar 2017 08:17:51 -0700 Subject: [Python-Dev] Await and Async keywords In-Reply-To: <58D4FBB3.1040704@ens.fr> References: <58D4FBB3.1040704@ens.fr> Message-ID: 2017-03-24 3:57 GMT-07:00 Aymeric Fromherz : > Hi, > > I'm currently looking into how Python3 is parsed, and I'm wondering why > await and async aren't considered as keywords? Are there programs > actually using await and async as variable names? Is there another > behaviour where it is interesting to use async for something different? > > They are not keywords to prevent breaking backwards compatibility, but they will be full keywords in 3.7. async/await was introduced in 3.5, and Python generally avoids introducing backwards-incompatible in minor versions. Usually, that's done with __future__ imports; if I recall correctly, when "with" statements were introduced (making "with" a keyword), Python first released one or two versions where you had to do "from __future__ import with_statement" to use them, and then this flag was turned on by default. For async/await, instead the parser was hacked to recognize "async def" as a special token, and to add special parsing rules within "async def" function to recognize other uses of async and await. However, this is temporary and async and await will be full keywords in Python 3.7. See https://www.python.org/dev/peps/pep-0492/#transition-plan. And yes, real code uses async and await as identifiers. asyncio itself had a function called asyncio.async() (now renamed to ensure_future()). Making async and await full keywords would have immediately broken any such code for people who were upgrading to Python 3.5. > Cheers, > Aymeric > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/ > jelle.zijlstra%40gmail.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: From aymeric.fromherz at ens.fr Fri Mar 24 11:31:48 2017 From: aymeric.fromherz at ens.fr (Aymeric Fromherz) Date: Fri, 24 Mar 2017 16:31:48 +0100 Subject: [Python-Dev] Await and Async keywords In-Reply-To: References: <58D4FBB3.1040704@ens.fr> Message-ID: <58D53BE4.1030301@ens.fr> Thanks for the quick answer! I'll have a look at this PEP. Cheers On 24/03/2017 16:17, Jelle Zijlstra wrote: > > > 2017-03-24 3:57 GMT-07:00 Aymeric Fromherz >: > > Hi, > > I'm currently looking into how Python3 is parsed, and I'm wondering why > await and async aren't considered as keywords? Are there programs > actually using await and async as variable names? Is there another > behaviour where it is interesting to use async for something different? 
> > They are not keywords to prevent breaking backwards compatibility, but > they will be full keywords in 3.7. async/await was introduced in 3.5, > and Python generally avoids introducing backwards-incompatible in minor > versions. Usually, that's done with __future__ imports; if I recall > correctly, when "with" statements were introduced (making "with" a > keyword), Python first released one or two versions where you had to do > "from __future__ import with_statement" to use them, and then this flag > was turned on by default. For async/await, instead the parser was hacked > to recognize "async def" as a special token, and to add special parsing > rules within "async def" function to recognize other uses of async and > await. However, this is temporary and async and await will be full > keywords in Python 3.7. > See https://www.python.org/dev/peps/pep-0492/#transition-plan. > > And yes, real code uses async and await as identifiers. asyncio itself > had a function called asyncio.async() (now renamed to ensure_future()). > Making async and await full keywords would have immediately broken any > such code for people who were upgrading to Python 3.5. > > > > Cheers, > Aymeric > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/jelle.zijlstra%40gmail.com > > > From status at bugs.python.org Fri Mar 24 13:09:17 2017 From: status at bugs.python.org (Python tracker) Date: Fri, 24 Mar 2017 18:09:17 +0100 (CET) Subject: [Python-Dev] Summary of Python tracker Issues Message-ID: <20170324170917.1C33456263@psf.upfronthosting.co.za> ACTIVITY SUMMARY (2017-03-17 - 2017-03-24) Python tracker at http://bugs.python.org/ To view or respond to any of the issues listed below, click on the issue. Do NOT respond to this message. 
Issues counts and deltas: open 5860 (+19) closed 35792 (+40) total 41652 (+59) Open issues with patches: 2431 Issues opened (40) ================== #29838: Check that sq_length and mq_length return non-negative result http://bugs.python.org/issue29838 opened by serhiy.storchaka #29839: Avoid raising OverflowError in len() when __len__() returns ne http://bugs.python.org/issue29839 opened by serhiy.storchaka #29840: Avoid raising OverflowError in bool() http://bugs.python.org/issue29840 opened by serhiy.storchaka #29841: errors raised by bytes and bytearray constructors for invalid http://bugs.python.org/issue29841 opened by Oren Milman #29842: Make Executor.map work with infinite/large inputs correctly http://bugs.python.org/issue29842 opened by josh.r #29843: errors raised by ctypes.Array for invalid _length_ attribute http://bugs.python.org/issue29843 opened by Oren Milman #29844: Windows Python installers not installing DLL to System32/SysWO http://bugs.python.org/issue29844 opened by TBBle #29846: ImportError: Symbol not found: __PyCodecInfo_GetIncrementalDec http://bugs.python.org/issue29846 opened by ajstewart #29847: Path takes and ignores **kwargs http://bugs.python.org/issue29847 opened by Jelle Zijlstra #29851: Have importlib.reload() raise ImportError when a spec can't be http://bugs.python.org/issue29851 opened by Richard Cooper #29852: Argument Clinic: add common converter to Py_ssize_t that accep http://bugs.python.org/issue29852 opened by serhiy.storchaka #29854: Segfault when readline history is more then 2 * history size http://bugs.python.org/issue29854 opened by nirs #29857: Provide `sys.executable_argv` for host application's command l http://bugs.python.org/issue29857 opened by ncoghlan #29858: inspect.signature includes bound argument for wrappers around http://bugs.python.org/issue29858 opened by anton-ryzhov #29860: smtplib.py doesn't capitalize EHLO. http://bugs.python.org/issue29860 opened by Lord Anton Hvornum #29862: Fix grammar typo in importlib.reload() exception http://bugs.python.org/issue29862 opened by brett.cannon #29863: Add a COMPACT constant to the json module http://bugs.python.org/issue29863 opened by brett.cannon #29867: Add asserts in PyXXX_GET_SIZE macros http://bugs.python.org/issue29867 opened by serhiy.storchaka #29868: multiprocessing.dummy missing cpu_count http://bugs.python.org/issue29868 opened by johnwiseman #29869: Underscores in numeric literals not supported in lib2to3. http://bugs.python.org/issue29869 opened by nevsan #29870: ssl socket leak http://bugs.python.org/issue29870 opened by thehesiod #29871: Enable optimized locks on Windows http://bugs.python.org/issue29871 opened by josh.r #29877: compileall hangs when accessing urandom even if number of work http://bugs.python.org/issue29877 opened by virtuald #29878: Add global instances of int 0 and 1 http://bugs.python.org/issue29878 opened by serhiy.storchaka #29879: typing.Text not available in python 3.5.1 http://bugs.python.org/issue29879 opened by Charles Bouchard-L??gar?? #29881: Add a new private API clear private variables, which are initi http://bugs.python.org/issue29881 opened by haypo #29882: Add an efficient popcount method for integers http://bugs.python.org/issue29882 opened by niklasf #29883: asyncio: Windows Proactor Event Loop UDP Support http://bugs.python.org/issue29883 opened by Adam Meily #29885: Allow GMT timezones to be used in datetime. 
http://bugs.python.org/issue29885 opened by Decorater #29886: links between binascii.{un,}hexlify / bytes.{,to}hex http://bugs.python.org/issue29886 opened by chrysn #29887: test_normalization doesn't work http://bugs.python.org/issue29887 opened by serhiy.storchaka #29888: The link referring to "Python download page" is broken http://bugs.python.org/issue29888 opened by cocoatomo #29889: test_asyncio fails always http://bugs.python.org/issue29889 opened by Thomas Knox #29890: Constructor of ipaddress.IPv*Interface does not follow documen http://bugs.python.org/issue29890 opened by Ilya.Kulakov #29891: urllib.request.Request accepts but doesn't check bytes headers http://bugs.python.org/issue29891 opened by ezio.melotti #29892: change statement for open() is splited into two part in middle http://bugs.python.org/issue29892 opened by OSAMU.NAKAMURA #29893: create_subprocess_exec doc doesn't match software http://bugs.python.org/issue29893 opened by Torrin Jones #29894: Deprecate returning a subclass of complex from __complex__ http://bugs.python.org/issue29894 opened by serhiy.storchaka #29895: Distutils blows up with an incorrect pypirc, should be caught http://bugs.python.org/issue29895 opened by Tommy Carpenter #29896: ElementTree.fromstring raises undocumented UnicodeError http://bugs.python.org/issue29896 opened by vfaronov Most recent 15 issues with no replies (15) ========================================== #29896: ElementTree.fromstring raises undocumented UnicodeError http://bugs.python.org/issue29896 #29895: Distutils blows up with an incorrect pypirc, should be caught http://bugs.python.org/issue29895 #29894: Deprecate returning a subclass of complex from __complex__ http://bugs.python.org/issue29894 #29893: create_subprocess_exec doc doesn't match software http://bugs.python.org/issue29893 #29891: urllib.request.Request accepts but doesn't check bytes headers http://bugs.python.org/issue29891 #29888: The link referring to "Python download page" is broken http://bugs.python.org/issue29888 #29887: test_normalization doesn't work http://bugs.python.org/issue29887 #29886: links between binascii.{un,}hexlify / bytes.{,to}hex http://bugs.python.org/issue29886 #29883: asyncio: Windows Proactor Event Loop UDP Support http://bugs.python.org/issue29883 #29877: compileall hangs when accessing urandom even if number of work http://bugs.python.org/issue29877 #29868: multiprocessing.dummy missing cpu_count http://bugs.python.org/issue29868 #29852: Argument Clinic: add common converter to Py_ssize_t that accep http://bugs.python.org/issue29852 #29840: Avoid raising OverflowError in bool() http://bugs.python.org/issue29840 #29838: Check that sq_length and mq_length return non-negative result http://bugs.python.org/issue29838 #29818: Py_SetStandardStreamEncoding leads to a memory error in debug http://bugs.python.org/issue29818 Most recent 15 issues waiting for review (15) ============================================= #29894: Deprecate returning a subclass of complex from __complex__ http://bugs.python.org/issue29894 #29892: change statement for open() is splited into two part in middle http://bugs.python.org/issue29892 #29878: Add global instances of int 0 and 1 http://bugs.python.org/issue29878 #29869: Underscores in numeric literals not supported in lib2to3. 
http://bugs.python.org/issue29869 #29867: Add asserts in PyXXX_GET_SIZE macros http://bugs.python.org/issue29867 #29858: inspect.signature includes bound argument for wrappers around http://bugs.python.org/issue29858 #29854: Segfault when readline history is more then 2 * history size http://bugs.python.org/issue29854 #29852: Argument Clinic: add common converter to Py_ssize_t that accep http://bugs.python.org/issue29852 #29843: errors raised by ctypes.Array for invalid _length_ attribute http://bugs.python.org/issue29843 #29840: Avoid raising OverflowError in bool() http://bugs.python.org/issue29840 #29839: Avoid raising OverflowError in len() when __len__() returns ne http://bugs.python.org/issue29839 #29838: Check that sq_length and mq_length return non-negative result http://bugs.python.org/issue29838 #29822: inspect.isabstract does not work on abstract base classes duri http://bugs.python.org/issue29822 #29816: Get rid of C limitation for shift count in right shift http://bugs.python.org/issue29816 #29803: Remove some redandunt ops in unicodeobject.c http://bugs.python.org/issue29803 Top 10 most discussed issues (10) ================================= #29881: Add a new private API clear private variables, which are initi http://bugs.python.org/issue29881 21 msgs #21895: signal.pause() and signal handlers don't react to SIGCHLD in n http://bugs.python.org/issue21895 14 msgs #29857: Provide `sys.executable_argv` for host application's command l http://bugs.python.org/issue29857 12 msgs #15988: Inconsistency in overflow error messages of integer argument http://bugs.python.org/issue15988 9 msgs #29843: errors raised by ctypes.Array for invalid _length_ attribute http://bugs.python.org/issue29843 8 msgs #29846: ImportError: Symbol not found: __PyCodecInfo_GetIncrementalDec http://bugs.python.org/issue29846 8 msgs #29847: Path takes and ignores **kwargs http://bugs.python.org/issue29847 7 msgs #29844: Windows Python installers not installing DLL to System32/SysWO http://bugs.python.org/issue29844 6 msgs #29863: Add a COMPACT constant to the json module http://bugs.python.org/issue29863 6 msgs #29869: Underscores in numeric literals not supported in lib2to3. 
http://bugs.python.org/issue29869 6 msgs Issues closed (39) ================== #8256: input() doesn't catch _PyUnicode_AsString() exception; io.Stri http://bugs.python.org/issue8256 closed by serhiy.storchaka #14208: No way to recover original argv with python -m http://bugs.python.org/issue14208 closed by ncoghlan #19930: os.makedirs('dir1/dir2', 0) always fails http://bugs.python.org/issue19930 closed by serhiy.storchaka #22744: os.mkdir on Windows silently strips trailing blanks from direc http://bugs.python.org/issue22744 closed by serhiy.storchaka #24037: Argument Clinic: add the boolint converter http://bugs.python.org/issue24037 closed by serhiy.storchaka #24796: Deleting names referencing from enclosed and enclosing scopes http://bugs.python.org/issue24796 closed by Mariatta #25455: Some repr implementations don't check for self-referential str http://bugs.python.org/issue25455 closed by serhiy.storchaka #26418: multiprocessing.pool.ThreadPool eats up memories http://bugs.python.org/issue26418 closed by pitrou #28331: "CPython implementation detail:" removed when content translat http://bugs.python.org/issue28331 closed by inada.naoki #28749: Fixed the documentation of the mapping codec APIs http://bugs.python.org/issue28749 closed by serhiy.storchaka #28876: bool of large range raises OverflowError http://bugs.python.org/issue28876 closed by serhiy.storchaka #29574: python-3.6.0.tgz permissions borked http://bugs.python.org/issue29574 closed by ned.deily #29615: SimpleXMLRPCDispatcher._dispatch mangles tracebacks when invok http://bugs.python.org/issue29615 closed by serhiy.storchaka #29638: Spurious failures in test_collections in releak hunting mode a http://bugs.python.org/issue29638 closed by serhiy.storchaka #29728: Expose TCP_NOTSENT_LOWAT http://bugs.python.org/issue29728 closed by Mariatta #29748: Argument Clinic: slice index converter http://bugs.python.org/issue29748 closed by serhiy.storchaka #29776: Modernize properties http://bugs.python.org/issue29776 closed by serhiy.storchaka #29793: Convert some builtin types constructors to Argument Clinic http://bugs.python.org/issue29793 closed by serhiy.storchaka #29836: Remove nturl2path from test_sundry and amend its docstring http://bugs.python.org/issue29836 closed by brett.cannon #29845: Mark tests that use _testcapi as CPython-only http://bugs.python.org/issue29845 closed by serhiy.storchaka #29848: Cannot use Decorators of the same class that requires an insta http://bugs.python.org/issue29848 closed by r.david.murray #29849: fix memory leak in import_from http://bugs.python.org/issue29849 closed by xiang.zhang #29850: file access, other drives http://bugs.python.org/issue29850 closed by Gabriel POTTER #29853: Improve exception messages for remove and index methods http://bugs.python.org/issue29853 closed by serhiy.storchaka #29855: The traceback compounding of RecursionError fails to work with http://bugs.python.org/issue29855 closed by ncoghlan #29856: curses online documentation typo http://bugs.python.org/issue29856 closed by Mariatta #29859: Return code of pthread_* in thread_pthread.h is not used for p http://bugs.python.org/issue29859 closed by Birne94 #29861: multiprocessing Pool keeps objects (tasks, args, results) aliv http://bugs.python.org/issue29861 closed by pitrou #29864: Misuse of Py_SIZE in dict.fromkey() http://bugs.python.org/issue29864 closed by serhiy.storchaka #29865: Use PyXXX_GET_SIZE macros rather than Py_SIZE for concrete typ http://bugs.python.org/issue29865 closed by serhiy.storchaka #29866: 
Added datetime_diff to datetime.py. http://bugs.python.org/issue29866 closed by serhiy.storchaka #29872: spam http://bugs.python.org/issue29872 closed by xiang.zhang #29873: Need a look for return value checking [_elementtree.c] http://bugs.python.org/issue29873 closed by xiang.zhang #29874: Need a look for return value checking [selectmodule.c] http://bugs.python.org/issue29874 closed by xiang.zhang #29875: IDLE quit unexpectedly http://bugs.python.org/issue29875 closed by ned.deily #29876: Check for null return value [_elementtree.c : subelement] http://bugs.python.org/issue29876 closed by xiang.zhang #29880: python3.6 install readline ,and then cpython exit http://bugs.python.org/issue29880 closed by zach.ware #29884: faulthandler does not properly restore sigaltstack during tear http://bugs.python.org/issue29884 closed by Mariatta #1117601: os.path.exists returns false negatives in MAC environments. http://bugs.python.org/issue1117601 closed by serhiy.storchaka From victor.stinner at gmail.com Sat Mar 25 06:04:02 2017 From: victor.stinner at gmail.com (Victor Stinner) Date: Sat, 25 Mar 2017 11:04:02 +0100 Subject: [Python-Dev] PyCharm debugger became 40x faster on Python 3.6 thanks to PEP 523 Message-ID: https://blog.jetbrains.com/pycharm/2017/03/inside-the-debugger-interview-with-elizaveta-shashkova/ "What changed in Python 3.6 to allow this? The new frame evaluation API was introduced to CPython in PEP 523 and it allows to specify a per-interpreter function pointer to handle the evaluation of frames." Nice! Victor -------------- next part -------------- An HTML attachment was scrubbed... URL: From storchaka at gmail.com Sat Mar 25 08:56:26 2017 From: storchaka at gmail.com (Serhiy Storchaka) Date: Sat, 25 Mar 2017 14:56:26 +0200 Subject: [Python-Dev] PyCharm debugger became 40x faster on Python 3.6 thanks to PEP 523 In-Reply-To: References: Message-ID: On 25.03.17 12:04, Victor Stinner wrote: > https://blog.jetbrains.com/pycharm/2017/03/inside-the-debugger-interview-with-elizaveta-shashkova/ > > "What changed in Python 3.6 to allow this? > > The new frame evaluation API was introduced to CPython in PEP 523 and it > allows to specify a per-interpreter function pointer to handle the > evaluation of frames." > > Nice! Awesome! Any chance that pdb can utilize similar technique? Or this doesn't make sense for pdb? From brett at python.org Sat Mar 25 15:37:52 2017 From: brett at python.org (Brett Cannon) Date: Sat, 25 Mar 2017 19:37:52 +0000 Subject: [Python-Dev] PyCharm debugger became 40x faster on Python 3.6 thanks to PEP 523 In-Reply-To: References: Message-ID: On Sat, 25 Mar 2017 at 03:05 Victor Stinner wrote: > > https://blog.jetbrains.com/pycharm/2017/03/inside-the-debugger-interview-with-elizaveta-shashkova/ > > "What changed in Python 3.6 to allow this? > > The new frame evaluation API was introduced to CPython in PEP 523 and it > allows to specify a per-interpreter function pointer to handle the > evaluation of frames." > > Nice! > I just wanted to publicly thank Microsoft for paying for that PEP. :) It came out of the Pyjion work that Dino and I got to spend work time on. The hook is also used by the Python workload in Visual Studio 2017 Preview, so even if no JITs ever use the hook at least debuggers are finding it useful. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From brett at python.org Sat Mar 25 15:41:07 2017 From: brett at python.org (Brett Cannon) Date: Sat, 25 Mar 2017 19:41:07 +0000 Subject: [Python-Dev] PyCharm debugger became 40x faster on Python 3.6 thanks to PEP 523 In-Reply-To: References: Message-ID: On Sat, 25 Mar 2017 at 05:58 Serhiy Storchaka wrote: > On 25.03.17 12:04, Victor Stinner wrote: > > > https://blog.jetbrains.com/pycharm/2017/03/inside-the-debugger-interview-with-elizaveta-shashkova/ > > > > "What changed in Python 3.6 to allow this? > > > > The new frame evaluation API was introduced to CPython in PEP 523 and it > > allows to specify a per-interpreter function pointer to handle the > > evaluation of frames." > > > > Nice! > > Awesome! Any chance that pdb can utilize similar technique? Or this > doesn't make sense for pdb? > I guess it's possible. It probably depends on how you're using the debugger. It sounds like PyCharm is injecting bytecode for specified breakpoints and so I suspect the speed is only there when you press "debug" and are not stepping through line-by-line. Getting gdb to have the same level of sophistication might not be too bad as long as you keep the hook simple and you're okay injected new bytecode just before a frame begins execution. -------------- next part -------------- An HTML attachment was scrubbed... URL: From tjreedy at udel.edu Sat Mar 25 15:57:38 2017 From: tjreedy at udel.edu (Terry Reedy) Date: Sat, 25 Mar 2017 15:57:38 -0400 Subject: [Python-Dev] PyCharm debugger became 40x faster on Python 3.6 thanks to PEP 523 In-Reply-To: References: Message-ID: On 3/25/2017 8:56 AM, Serhiy Storchaka wrote: > On 25.03.17 12:04, Victor Stinner wrote: >> https://blog.jetbrains.com/pycharm/2017/03/inside-the-debugger-interview-with-elizaveta-shashkova/ >> >> >> "What changed in Python 3.6 to allow this? >> >> The new frame evaluation API was introduced to CPython in PEP 523 and it >> allows to specify a per-interpreter function pointer to handle the >> evaluation of frames." >> >> Nice! > > Awesome! Any chance that pdb can utilize similar technique? Or this > doesn't make sense for pdb? According to the bdb.Bdb docstring, pdb implements a command-line user interface on top of bdb, while bdb.Bdb "takes care of the details of the trace facility". idlelib.debugger similarly implements a GUI user interface on top of bdb. I am sure that there are other debuggers that build directly or indirectly (via pdb) on bdb. So the question is whether bdb can be enhanced or augmented with a C-coded _bdb or other new module. As I understand it, sys.settrace results in an execution break and function call at each point in the bytecode corresponding to the beginning of a (logical?) line. This add much overhead. In return, a trace-based debugger allows one to flexibly control stop and go execution either with preset breakpoints* or with interactive commands: step (one line), step into (a function frame), step over (a function frame), or go to next breakpoint. The last is implemented by the debugger automatically stepping at each break call unless the line is in the existing breakpoint list. * Breakpoints can be defined either in an associated editor or with breakpoint commands in the debugger when execution is stopped. PEP 523 envisioned an alternate non-trace implementation of 'go to next breakpoint' by a debugger going "as far as to dynamically rewrite bytecode prior to execution to inject e.g. breakpoints in the bytecode." 
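To see the cost Terry describes, here is a minimal trace function that
does nothing but count 'line' events; it is a sketch for illustration,
not something taken from bdb:

import sys

line_events = 0

def tracer(frame, event, arg):
    # The global trace function fires on 'call'; returning it installs it
    # as the frame's local trace function, which then fires on every
    # 'line', 'return' and 'exception' event in that frame.
    global line_events
    if event == "line":
        line_events += 1
    return tracer

def work(n=1000):
    total = 0
    for i in range(n):
        total += i
    return total

sys.settrace(tracer)
work()
sys.settrace(None)
print("line events:", line_events)   # one Python-level callback per executed line

Every one of those callbacks is pure overhead when no breakpoint is set in
the frame, which is the gap the bytecode-injection approach closes.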
A debugger rewriting bytecode this way (see
https://www.python.org/dev/peps/pep-0523/#debugging) could either eliminate
the other 'go' commands (easiest) or implement them by either setting
temporary breakpoints or temporarily turning tracing on.

I presume it should be possible to make bdb.Bdb use bytecode breakpoints
or add a new class with a similar API. Then any bdb-based debugger could
be modified to make the speedup available.

-- 
Terry Jan Reedy

From storchaka at gmail.com  Mon Mar 27 06:22:01 2017
From: storchaka at gmail.com (Serhiy Storchaka)
Date: Mon, 27 Mar 2017 13:22:01 +0300
Subject: [Python-Dev] Not all public names in C API have the "Py" prefix
Message-ID: 

A number of public typedef names without the "Py" prefix survived the 
Grand Renaming [1]. A couple of new names without the "Py" prefix were 
added after the Grand Renaming (e.g. getter and setter [2]).

Those names were included in the Stable ABI. The long list of such names 
can be found in PEP 384 [3]:

unaryfunc binaryfunc ternaryfunc inquiry lenfunc ssizeargfunc 
ssizessizeargfunc ssizeobjargproc ssizessizeobjargproc objobjargproc 
objobjproc visitproc traverseproc destructor getattrfunc getattrofunc 
setattrfunc setattrofunc reprfunc hashfunc richcmpfunc getiterfunc 
iternextfunc descrgetfunc descrsetfunc initproc newfunc allocfunc getter 
setter

And I suppose new names were added since Python 3.2.

A couple of underscored names without the "_Py" prefix (e.g. _object, 
_typeobject) are defined when including "Python.h".

Should we do something about this? Maybe add Py-prefixed aliases and 
temporarily keep the old names for compatibility (but allow them to be 
hidden if a special macro is defined)?

[1] https://python-history.blogspot.com/2009/03/great-or-grand-renaming.html
[2] https://github.com/python/cpython/commit/6d6c1a35e08b95a83dbe47dbd9e6474daff00354
[3] https://www.python.org/dev/peps/pep-0384/

From victor.stinner at gmail.com  Mon Mar 27 06:43:22 2017
From: victor.stinner at gmail.com (Victor Stinner)
Date: Mon, 27 Mar 2017 12:43:22 +0200
Subject: [Python-Dev] Not all public names in C API have the "Py" prefix
In-Reply-To: 
References: 
Message-ID: 

2017-03-27 12:22 GMT+02:00 Serhiy Storchaka :
> Should we do something about this? Maybe add Py-prefixed aliases and
> temporarily keep the old names for compatibility (but allow them to be
> hidden if a special macro is defined)?

Is it possible to keep backward compatibility if an older version of
the stable ABI is explicitly requested?

Something like:

#if !defined(Py_LIMITED_API) || Py_LIMITED_API+0 < 0x03070000
#define getter _Py_getter
...
#endif

Victor

From victor.stinner at gmail.com  Mon Mar 27 08:12:21 2017
From: victor.stinner at gmail.com (Victor Stinner)
Date: Mon, 27 Mar 2017 14:12:21 +0200
Subject: [Python-Dev] Issue #21071: change struct.Struct.format type from bytes to str
Message-ID: 

Hi,

I would like to change the struct.Struct.format type from bytes to str.
I don't expect that anyone uses this attribute, and the struct.Struct()
constructor accepts both bytes and str.

http://bugs.python.org/issue21071

It's just more convenient: more functions accept str than bytes in
Python 3. Example: print() (python3 -bb raises an exception if you pass
bytes to print).

Is anyone opposed to breaking backward compatibility here?
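For context, this is what the attribute looks like today and what the
change would mean in practice (illustrative only; the decode step is the
inconvenience a str-valued attribute would remove):

import struct

s = struct.Struct(">HH")
print(type(s.format), s.format)   # on Python <= 3.6: <class 'bytes'> b'>HH'

# With a bytes-valued attribute, passing it to str-based APIs needs a decode:
fmt = s.format.decode("ascii") if isinstance(s.format, bytes) else s.format
print("packing with format {!r}".format(fmt))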
Victor From storchaka at gmail.com Mon Mar 27 11:27:30 2017 From: storchaka at gmail.com (Serhiy Storchaka) Date: Mon, 27 Mar 2017 18:27:30 +0300 Subject: [Python-Dev] Not all public names in C API have the "Py" prefix In-Reply-To: References: Message-ID: On 27.03.17 13:43, Victor Stinner wrote: > 2017-03-27 12:22 GMT+02:00 Serhiy Storchaka : >> Should we to do something with this? Maybe add Py-prefixed aliases and >> temporary keep old names for compatibility (but allow to hide them if define >> a special macro)? > > Is is possible to keep backward compatibility if an older version of > the stable ABI is explicitly requested? > > Something like: > > #if !defined(Py_LIMITED_API) || Py_LIMITED_API+0 < 0x03070000 > #define getter _Py_getter > .... > #endif I think it is better to use typedef than #define. From rymg19 at gmail.com Mon Mar 27 13:32:09 2017 From: rymg19 at gmail.com (Ryan Gonzalez) Date: Mon, 27 Mar 2017 12:32:09 -0500 Subject: [Python-Dev] Distutils frozen? Message-ID: So, I had opened up a PR (#563) to add README.rst to the distutils readme list. Turns out, I didn't read the devguide correctly, and there needed to be an open issue first. Oops. Then I found bpo-11913 (https://bugs.python.org/issue11913), which said: This would be easy to fix, but as it would be considered a new feature, it can?t go into distutils, which is frozen. I interpreted this as being "frozen to new features"...but I can't find any info about that. Anywhere. The devguide doesn't even remotely mention this. A Google search only gives me information on how to freeze modules *using* distutils, which is hardly helpful. FWIW, no one on the PR seemed to mention that, either. If distutils is indeed frozen, shouldn't it be documented somewhere in the devguide? -- Ryan (????) Yoko Shimomura > ryo (supercell/EGOIST) > Hiroyuki Sawano >> everyone else http://refi64.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From solipsis at pitrou.net Mon Mar 27 13:54:55 2017 From: solipsis at pitrou.net (Antoine Pitrou) Date: Mon, 27 Mar 2017 19:54:55 +0200 Subject: [Python-Dev] Distutils frozen? References: Message-ID: <20170327195455.3d0a4afb@fsol> On Mon, 27 Mar 2017 12:32:09 -0500 Ryan Gonzalez wrote: > So, I had opened up a PR (#563) to add README.rst to the distutils readme > list. Turns out, I didn't read the devguide correctly, and there needed to > be an open issue first. Oops. > > Then I found bpo-11913 (https://bugs.python.org/issue11913), which said: > > > This would be easy to fix, but as it would be considered a new feature, it > can?t go into distutils, which is frozen. I've reopened the issue. Feel free to propose a patch! Regards Antoine. From larry at hastings.org Mon Mar 27 14:26:45 2017 From: larry at hastings.org (Larry Hastings) Date: Mon, 27 Mar 2017 11:26:45 -0700 Subject: [Python-Dev] Signups for 2017 Python Language Summit are now open Message-ID: (reposting, cc'ing python-dev) It?s that time again: time to start thinking about the Python Language Summit! The 2017 summit will be held on Wednesday, May 17, from 10am to 4pm, at the Oregon Convention Center in Portland, Oregon, USA. Your befezzled hosts Larry and Barry will once again be at the helm. The summit?s purpose is to disseminate information and spark conversation among core Python developers. It?s our yearly opportunity to get together for an in-person discussion, to review interesting developments of the previous year and hash out where we?re going next. And we have lots to talk about! 
Since our last summit, Python 3.6 was released, and the main CPython development process has been moved to GitHub. Naturally Python 3.7 development continues apace. Speaking of changes, we?re continuing to evolve the summit. Everyone seemed to like the lightning talks, so we?ll keep those. Everyone seemed to hate us keeping the schedule secret -sorry!- so we?ll make that available beforehand, with the understanding that it?ll be fluid as the day progresses. Due to room size limitations and the yearly increase in participation, we?re limiting summit invitations to just core developers and invited speakers. As usual, we?ll have whiteboards and a projector. But this year we?re adding roaming microphones, so everybody in the room will be able to hear your question! With the help of the ever awesome Ewa, this year we?ll have badge ribbons for Language Summit participants, which we?ll hand out at the summit room in the morning. As with last year, we?re using Google Forms to collect signups. The form will let you request an invitation to the summit and optionally propose a talk. Signups are open now, and will remain open until Wednesday April 12th, 2017. You can find the link to the signup form from the summit?s official web page, here: https://us.pycon.org/2017/events/language-summit/ But never forget: you don?t need to be registered for PyCon in order to attend the summit! One final note. We?re re-inviting Jake Edge from Linux Weekly News to attend the summit and provide press coverage. Jake?s done a phenomenal job of covering the previous two years? summits, providing valuable information not just for summit attendees, but also for the Python community at large. Jake?s coverage goes a long way toward demystifying the summit, while remaining respectful of confidential information that?s deemed ?off the record? ahead of time by participants. We hope to see you at the summit! [BL]arry -------------- next part -------------- An HTML attachment was scrubbed... URL: From brett at python.org Mon Mar 27 14:48:48 2017 From: brett at python.org (Brett Cannon) Date: Mon, 27 Mar 2017 18:48:48 +0000 Subject: [Python-Dev] Not all public names in C API have the "Py" prefix In-Reply-To: References: Message-ID: On Mon, 27 Mar 2017 at 03:23 Serhiy Storchaka wrote: > A number of public typedef names without the "Py" prefix survived the > Grand Renaming [1]. A couple of new names without the "Py" prefix were > added after the Grand Renaming (e.g. getter and setter [2]). > > That names were included in the Stable ABI. The long list of such names > can be found in PEP 384 [3]: > > unaryfunc binaryfunc ternaryfunc inquiry lenfunc ssizeargfunc > ssizessizeargfunc ssizeobjargproc ssizessizeobjargproc objobjargproc > objobjproc visitproc traverseproc destructor getattrfunc getattrofunc > setattrfunc setattrofunc reprfunc hashfunc richcmpfunc getiterfunc > iternextfunc descrgetfunc descrsetfunc initproc newfunc allocfunc getter > setter > > And I suppose new names were added since Python 3.2. > > A couple of underscored name without the "_Py" prefix (e.g. _object, > _typeobject) are defined when include "Python.h". > > Should we to do something with this? Maybe add Py-prefixed aliases and > temporary keep old names for compatibility (but allow to hide them if > define a special macro)? > I think we should at least add aliases somehow. Maybe in a Py4k world we can update the stable ABI and drop names that re not properly prefixed. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From xdegaye at gmail.com Mon Mar 27 15:48:08 2017 From: xdegaye at gmail.com (Xavier de Gaye) Date: Mon, 27 Mar 2017 21:48:08 +0200 Subject: [Python-Dev] PyCharm debugger became 40x faster on Python 3.6 thanks to PEP 523 In-Reply-To: References: Message-ID: <591781cc-3e51-48b7-cb8f-be4a246ad18e@gmail.com> On 03/25/2017 08:57 PM, Terry Reedy wrote: > On 3/25/2017 8:56 AM, Serhiy Storchaka wrote: >> On 25.03.17 12:04, Victor Stinner wrote: >>> https://blog.jetbrains.com/pycharm/2017/03/inside-the-debugger-interview-with-elizaveta-shashkova/ >>> >>> >>> "What changed in Python 3.6 to allow this? >>> >>> The new frame evaluation API was introduced to CPython in PEP 523 and it >>> allows to specify a per-interpreter function pointer to handle the >>> evaluation of frames." >>> >>> Nice! >> >> Awesome! Any chance that pdb can utilize similar technique? Or this >> doesn't make sense for pdb? > > According to the bdb.Bdb docstring, pdb implements a command-line user interface on top of bdb, while bdb.Bdb "takes care of the details of the trace facility". idlelib.debugger similarly implements > a GUI user interface on top of bdb. I am sure that there are other debuggers that build directly or indirectly (via pdb) on bdb. So the question is whether bdb can be enhanced or augmented with a > C-coded _bdb or other new module. > > As I understand it, sys.settrace results in an execution break and function call at each point in the bytecode corresponding to the beginning of a (logical?) line. This add much overhead. In return, > a trace-based debugger allows one to flexibly control stop and go execution either with preset breakpoints* or with interactive commands: step (one line), step into (a function frame), step over (a > function frame), or go to next breakpoint. The last is implemented by the debugger automatically stepping at each break call unless the line is in the existing breakpoint list. > > * Breakpoints can be defined either in an associated editor or with breakpoint commands in the debugger when execution is stopped. > > PEP 523 envisioned an alternate non-trace implementation of 'go to next breakpoint' by a debugger going "as far as to dynamically rewrite bytecode prior to execution to inject e.g. breakpoints in the > bytecode." > https://www.python.org/dev/peps/pep-0523/#debugging > > A debugger doing this could either eliminate the other 'go' commands (easiest) or implement them by either setting temporary breakpoints or temporarily turning tracing on. > > I presume it should be possible to make bdb.Bdb use bytecode breakpoints or add a new class with a similar API. Then any bdb-based debugger to be modified to make the speedup available. pdb-clone, an extension to pdb, gets about those same performance gains over pdb while still using sys.settrace(). pdb-clone runs at a speed of less than twice the speed of the interpreter when pdb runs at about 80 times the speed of the interpreter. See some performance measurements at https://bitbucket.org/xdegaye/pdb-clone/wiki/Performances.md Given those results, it is not clear how one would get a boost of a factor 40 by implementing PEP 523 for the pdb debugger as pdb could already be very close to the speed of the interpreter mostly by implementing in a C extension module the bdb.Bdb methods that check whether the debugger should take control. Setting a trace function with sys.settrace() adds the following incompressible overhead: * 15-20 % overhead: computed goto are not used in the ceval loop when tracing is active. 
* The trace function receives all the PyTrace_LINE events (even when the frame f_trace is NULL :(). The interpreter calls _PyCode_CheckLineNumber() for each of these events and the processing in this function is the one that is costly. An optimization is done in pdb-clone that swaps the trace function with a profiler function whenever possible (i.e. when there is no need to trace the lines of the function) to avoid calling _PyCode_CheckLineNumber() (the profiler still gets PyTrace_C_CALL events but there is not such overhead with these events). The performance gain obtained with this scheme is about 30%. I think that the main point here is not whether to switch from sys.settrace() to PEP 523, but first to implement the stop_here() bdb.Bdb method in a C extension module. Xavier From brett at python.org Mon Mar 27 16:13:20 2017 From: brett at python.org (Brett Cannon) Date: Mon, 27 Mar 2017 20:13:20 +0000 Subject: [Python-Dev] I will be deleting the cpython-mirror repo on April 10 Message-ID: On the two-month anniversary of the GitHub migration I'm going to delete the old git mirror: https://github.com/python/cpython-mirror. If you have a old PR that got closed with comments or something, now is the time to get those comments off. -------------- next part -------------- An HTML attachment was scrubbed... URL: From victor.stinner at gmail.com Mon Mar 27 17:58:40 2017 From: victor.stinner at gmail.com (Victor Stinner) Date: Mon, 27 Mar 2017 23:58:40 +0200 Subject: [Python-Dev] Distutils frozen? In-Reply-To: References: Message-ID: 2017-03-27 19:32 GMT+02:00 Ryan Gonzalez : > Then I found bpo-11913 (https://bugs.python.org/issue11913), which said: > > This would be easy to fix, but as it would be considered a new feature, it > can?t go into distutils, which is frozen. Oh, that painful story. There was a huge "distutils2 project" which failed for some reasons. While distutils2 was developed, it was decided to revert recent changes in distutils to prevent *any kind* of regression and block further changes. But that was in 2011. Today, pip is very popular, released often, take care on backward compatibility on top of setuptools and distutils. Hopefully, parts of distutils2 became distlib which is now a core library of pip. Victor From victor.stinner at gmail.com Mon Mar 27 18:38:45 2017 From: victor.stinner at gmail.com (Victor Stinner) Date: Tue, 28 Mar 2017 00:38:45 +0200 Subject: [Python-Dev] I will be deleting the cpython-mirror repo on April 10 In-Reply-To: References: Message-ID: Oops, thanks for the reminder! I found two old pull requests that I forgot to rebase and republish on the new CPython Git repository. Victor 2017-03-27 22:13 GMT+02:00 Brett Cannon : > On the two-month anniversary of the GitHub migration I'm going to delete the > old git mirror: https://github.com/python/cpython-mirror. If you have a old > PR that got closed with comments or something, now is the time to get those > comments off. 
> > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/victor.stinner%40gmail.com > From victor.stinner at gmail.com Mon Mar 27 19:23:01 2017 From: victor.stinner at gmail.com (Victor Stinner) Date: Tue, 28 Mar 2017 01:23:01 +0200 Subject: [Python-Dev] OpenIndiana and Solaris support In-Reply-To: References: Message-ID: 2017-02-08 15:14 GMT+01:00 Jesus Cea : > On 08/02/17 11:24, Victor Stinner wrote: >> So I suggest to drop official Solaris support, but I don't propose to >> remove the C code specific to Solaris. In practice, I suggest to >> remove Solaris and OpenIndiana buildbots since they are broken for >> months and are more annoying than useful. > > Give me a week to move this forward. Last hope. Any update? "One week" was one month ago :-) Victor From victor.stinner at gmail.com Mon Mar 27 19:31:44 2017 From: victor.stinner at gmail.com (Victor Stinner) Date: Tue, 28 Mar 2017 01:31:44 +0200 Subject: [Python-Dev] Reminder: buildbots are alive :-) Message-ID: Hi, Don't forget our brave buildbots which compile Python to run the full test suite, every day, every night, even if it's raining or worse! Python 3.7: http://buildbot.python.org/all/waterfall?category=3.x.stable&category=3.x.unstable Python 3.6: http://buildbot.python.org/all/waterfall?category=3.6.stable&category=3.6.unstable Python 3.5: http://buildbot.python.org/all/waterfall?category=3.5.stable&category=3.5.unstable Python 2.7: http://buildbot.python.org/all/waterfall?category=2.7.stable&category=2.7.unstable It seems like most buildbots are back to normal (green). I backported ".gitattributes" to 3.5 today to fix the last major known buildbot issue on Windows. I proposed to drop OS X Tiger support since the OS is old, tests are failing, and I don't know how to get access to such old OS nowadays: http://bugs.python.org/issue29915 But Ned Deily wrote that right now, it's the last online macOS buildbot! Moreover, only one test fails and it's a minor regression of test_uuid: http://bugs.python.org/issue29925 For OpenIndiana, I just sent a new ping on the "OpenIndiana and Solaris support" thread on this list. I'm still in favor of removing completely the buildbot. Maybe we can add a new Illumos buildbot slave later? That's all Folks! Victor From tjreedy at udel.edu Mon Mar 27 22:33:44 2017 From: tjreedy at udel.edu (Terry Reedy) Date: Mon, 27 Mar 2017 22:33:44 -0400 Subject: [Python-Dev] Issue with _thread.interrupt_main (29926) Message-ID: https://bugs.python.org/issue29926 was opened as an IDLE issue, which means that most watching the new issues list would ignore it. But I think it is an issue with _thread.interrupt_main (which IDLE calls in respond to ^C) not interrupting time.sleep(n) in main thread*. I tested on Windows, don't know yet about OP. Since there is no Expert's Index listing for _thread (or threading), I am asking here for someone who knows anything to take a look. * >>> time.sleep(10) <... remainder of 10 seconds pass> KeyboardInterrupt >>> I don't know if this is a bug or inherent limitation. 
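For anyone who wants to reproduce this outside IDLE, here is a minimal script (the one-second delay is arbitrary); the interrupt is only delivered once the full sleep has elapsed:

import _thread
import threading
import time

def interrupter():
    time.sleep(1)
    _thread.interrupt_main()   # only sets the interrupt flag; no real signal is sent

threading.Thread(target=interrupter).start()
start = time.monotonic()
try:
    time.sleep(10)             # in the main thread
except KeyboardInterrupt:
    # prints roughly 10.0, not 1.0, because sleep() is not woken up early
    print('interrupted after', round(time.monotonic() - start, 1), 'seconds')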
-- Terry Jan Reedy From steve at pearwood.info Mon Mar 27 23:11:17 2017 From: steve at pearwood.info (Steven D'Aprano) Date: Tue, 28 Mar 2017 14:11:17 +1100 Subject: [Python-Dev] Issue with _thread.interrupt_main (29926) In-Reply-To: References: Message-ID: <20170328031116.GF27969@ando.pearwood.info> On Mon, Mar 27, 2017 at 10:33:44PM -0400, Terry Reedy wrote: > https://bugs.python.org/issue29926 was opened as an IDLE issue, which > means that most watching the new issues list would ignore it. But I > think it is an issue with _thread.interrupt_main (which IDLE calls in > respond to ^C) not interrupting time.sleep(n) in main thread*. I tested > on Windows, don't know yet about OP. Since there is no Expert's Index > listing for _thread (or threading), I am asking here for someone who > knows anything to take a look. > > * > >>> time.sleep(10) > > > <... remainder of 10 seconds pass> > KeyboardInterrupt I get similar behaviour under Linux. I don't have the debug print, but the KeyboardInterrupt doesn't interrupt the sleep until the 10 seconds are up. -- Steve From vadmium+py at gmail.com Tue Mar 28 02:00:11 2017 From: vadmium+py at gmail.com (Martin Panter) Date: Tue, 28 Mar 2017 06:00:11 +0000 Subject: [Python-Dev] Issue with _thread.interrupt_main (29926) In-Reply-To: <20170328031116.GF27969@ando.pearwood.info> References: <20170328031116.GF27969@ando.pearwood.info> Message-ID: On 28 March 2017 at 03:11, Steven D'Aprano wrote: > On Mon, Mar 27, 2017 at 10:33:44PM -0400, Terry Reedy wrote: >> https://bugs.python.org/issue29926 was opened as an IDLE issue, which >> means that most watching the new issues list would ignore it. But I >> think it is an issue with _thread.interrupt_main (which IDLE calls in >> respond to ^C) not interrupting time.sleep(n) in main thread*. I tested >> on Windows, don't know yet about OP. Since there is no Expert's Index >> listing for _thread (or threading), I am asking here for someone who >> knows anything to take a look. >> >> * >> >>> time.sleep(10) >> >> >> <... remainder of 10 seconds pass> >> KeyboardInterrupt > > > I get similar behaviour under Linux. I don't have the debug print, but > the KeyboardInterrupt doesn't interrupt the sleep until the 10 seconds > are up. Looking at the implementation, _thread.interrupt_main just calls PyErr_SetInterrupt. It doesn?t appear to send a signal. I played with ?strace? and couldn?t see any evidence of a signal. I guess it just sets a flag that will be polled. To actually interrupt the ?sleep? call, you might need to use ?pthread_kill? or similar (at least on Unix). From songofacandy at gmail.com Tue Mar 28 08:49:41 2017 From: songofacandy at gmail.com (INADA Naoki) Date: Tue, 28 Mar 2017 21:49:41 +0900 Subject: [Python-Dev] Misc/NEWS entries for Python 3.7a1 Message-ID: Hi. Currently, changelog of Python 3.7a1 [1] contains changes between 3.6b1 and 3.7a1. So lot's of bugfixes are listed twice or more in changelog. For example, "bpo-28258: Fixed build with Estonian locale..." are listed under 3.5.3rc1, 3.6.0b2 and 3.7.0a1. [1] https://docs.python.org/3.7/whatsnew/changelog.html#changelog This has two problems: * The changelog is longer than necessary. * If bpo-xxxx is fixed in 3.7a1, people may think the bug exists in 3.6.0, even if the bug is fixed in 3.6b2 too. Since we stopped merging 3.6 -> master, I suggest to remove such duplicates. There are two ways: # A: 3.7a1 -> 3.6.0 -> 3.6.0rc2 ... -> 3.6a1 -> 3.5.0 ... This may be what people expect. 
In this case, we will remove: * 3.6.1(rc*) from changelog * duplicated entries in 3.7.0 (which fixed before 3.6.0) # B: 3.7a1 -> 3.6b1 -> ... 3.6a1 -> 3.5b1 ... This reflects our branch model. In this case, we will remove: * All 3.6 versions after 3.6b1 How do you think? Regards, From tjreedy at udel.edu Tue Mar 28 09:40:55 2017 From: tjreedy at udel.edu (Terry Reedy) Date: Tue, 28 Mar 2017 09:40:55 -0400 Subject: [Python-Dev] Issue with _thread.interrupt_main (29926) In-Reply-To: References: <20170328031116.GF27969@ando.pearwood.info> Message-ID: Steven, thanks for verifying bug on *nix. On 3/28/2017 2:00 AM, Martin Panter wrote: > On 28 March 2017 at 03:11, Steven D'Aprano wrote: >> On Mon, Mar 27, 2017 at 10:33:44PM -0400, Terry Reedy wrote: >>> https://bugs.python.org/issue29926 was opened as an IDLE issue, which >>> means that most watching the new issues list would ignore it. But I >>> think it is an issue with _thread.interrupt_main (which IDLE calls in >>> respond to ^C) not interrupting time.sleep(n) in main thread*. I tested >>> on Windows, don't know yet about OP. Since there is no Expert's Index >>> listing for _thread (or threading), I am asking here for someone who >>> knows anything to take a look. >>> >>> * >>>>>> time.sleep(10) >>> >>> >>> <... remainder of 10 seconds pass> >>> KeyboardInterrupt >> >> >> I get similar behaviour under Linux. I don't have the debug print, but >> the KeyboardInterrupt doesn't interrupt the sleep until the 10 seconds >> are up. > > Looking at the implementation, _thread.interrupt_main just calls > PyErr_SetInterrupt. It doesn?t appear to send a signal. I played with > ?strace? and couldn?t see any evidence of a signal. I guess it just > sets a flag that will be polled. To actually interrupt the ?sleep? > call, you might need to use ?pthread_kill? or similar (at least on > Unix). I copied this to the issue. Eryk Sun suggested a patch for Windows, (and the possibility of using pthread_kill). Can you possibly do one for *nix? This is out of my ballpark, but the bug (relative to console behavior) is a nuisance. -- Terry Jan Reedy From nad at python.org Tue Mar 28 10:07:57 2017 From: nad at python.org (Ned Deily) Date: Tue, 28 Mar 2017 10:07:57 -0400 Subject: [Python-Dev] Misc/NEWS entries for Python 3.7a1 In-Reply-To: References: Message-ID: <062460EC-C3DA-4681-BF33-E6E935900B8C@python.org> On Mar 28, 2017, at 08:49, INADA Naoki wrote: > Currently, changelog of Python 3.7a1 [1] contains changes between > 3.6b1 and 3.7a1. > So lot's of bugfixes are listed twice or more in changelog. > For example, "bpo-28258: Fixed build with Estonian locale..." are > listed under 3.5.3rc1, > 3.6.0b2 and 3.7.0a1. > > [1] https://docs.python.org/3.7/whatsnew/changelog.html#changelog [...] Thanks for noticing. Misc/NEWS is always somewhat problematic. As you probably know, the Core Workflow SIG, led by Brett, is working on a long-term solution to generating Misc/NEWS, a solution that should be available soon. One of the duties of the release manager is to "edit" Misc/NEWS; I was planning to wait for the new Misc/NEWS solution and for more of the conversion to Git/GitHub to settle to do anything major to the master (i.e. 3.7) version. There have already been some major merge mistakes for the 3.6.x Misc/NEWS. I would recommend not to worry too much about master's Misc/NEWS right now. I may do some cleaning up before the new Misc/NEWS process is introduced but I will also be reviewing it prior to each of the preview releases, which start later this year. 
-- Ned Deily nad at python.org -- [] From tjreedy at udel.edu Tue Mar 28 10:24:33 2017 From: tjreedy at udel.edu (Terry Reedy) Date: Tue, 28 Mar 2017 10:24:33 -0400 Subject: [Python-Dev] Misc/NEWS entries for Python 3.7a1 In-Reply-To: References: Message-ID: On 3/28/2017 8:49 AM, INADA Naoki wrote: > Currently, changelog of Python 3.7a1 [1] contains changes between > 3.6b1 and 3.7a1. I think the changelog for x.(y+1).0a1 should start with the released x.y.0. This used to be the case (with perhaps a few exceptions) when x.y.0 was not branched off until the release candidate (or maybe not until the releas), and people were asked to stop pushing enhancements between beta1 and the release. -- Terry Jan Reedy From mhroncok at redhat.com Tue Mar 28 07:24:55 2017 From: mhroncok at redhat.com (=?UTF-8?Q?Miro_Hron=c4=8dok?=) Date: Tue, 28 Mar 2017 13:24:55 +0200 Subject: [Python-Dev] What version is an extension module binary compatible with Message-ID: Hi, as per [0], ABI of the C API is generally not stable and the binary compatibility may break between versions. It is hard from the text to know whether it talks about minor versions (such as 3.6 vs 3.5) or patch versions (such as 3.6.1 vs 3.6.0). In Fedora we currently only keep track about the minor version dependency. I.e. an RPM package with a Python module depends on Python 3.6, not specifically on Python 3.6.1. However, recently we found an issue with this approach [1]: an extension module built against Python 3.6.1 cannot be run on Python 3.6.0, because it uses a macro that, in 3.6.1, uses the new PySlice_AdjustIndices function. I'd like some clarification on what ABI compatibility we can expect. * Should the ABI be stable across patch releases (so calling PySlice_AdjustIndices from an existing macro would be a bug)? * Should the ABI be forward-compatible within a minor release (so modules built for 3.6.0 should be usable with 3.6.1, but not vice versa)? * Or should we expect the ABI to change even across patch releases? It would be nice to say this explicitly in the docs ([0] or another suitable place). [0] https://docs.python.org/3/c-api/stable.html [1] https://bugzilla.redhat.com/show_bug.cgi?id=1435135 Thanks for clarification, On behalf of the Fedora Python SIG, Miro Hron?ok -- Phone: +420777974800 IRC: mhroncok From storchaka at gmail.com Tue Mar 28 11:27:15 2017 From: storchaka at gmail.com (Serhiy Storchaka) Date: Tue, 28 Mar 2017 18:27:15 +0300 Subject: [Python-Dev] What version is an extension module binary compatible with In-Reply-To: References: Message-ID: On 28.03.17 14:24, Miro Hron?ok wrote: > However, recently we found an issue with this approach [1]: an extension > module built against Python 3.6.1 cannot be run on Python 3.6.0, because > it uses a macro that, in 3.6.1, uses the new PySlice_AdjustIndices > function. The macro expanding to PySlice_AdjustIndices is used only when Py_LIMITED_API is not defined or is defined to the version equal or greater the version in which PySlice_AdjustIndices was added. From p.f.moore at gmail.com Tue Mar 28 12:18:41 2017 From: p.f.moore at gmail.com (Paul Moore) Date: Tue, 28 Mar 2017 17:18:41 +0100 Subject: [Python-Dev] What version is an extension module binary compatible with In-Reply-To: References: Message-ID: On 28 March 2017 at 12:24, Miro Hron?ok wrote: > I'd like some clarification on what ABI compatibility we can expect. > * Should the ABI be stable across patch releases (so calling > PySlice_AdjustIndices from an existing macro would be a bug)? 
> * Should the ABI be forward-compatible within a minor release (so modules
> built for 3.6.0 should be usable with 3.6.1, but not vice versa)?
> * Or should we expect the ABI to change even across patch releases?

Given that binary wheels are built against a specific minor version
(3.6, 3.5, ...) I would expect the ABI to be consistent over a minor
release. That would fit with my expectations of the compatibility
guarantees on patch releases.

So from what you describe, I'd consider this a bug. Certainly, if
someone built a C extension as a wheel using Python 3.6.1, it would be
tagged as compatible with cp36, and pip would happily use it when
installing to a Python 3.6.0 system, where it would fail.

Paul

From njs at pobox.com  Tue Mar 28 12:31:58 2017
From: njs at pobox.com (Nathaniel Smith)
Date: Tue, 28 Mar 2017 09:31:58 -0700
Subject: [Python-Dev] What version is an extension module binary compatible with
In-Reply-To: References: Message-ID:

On Mar 28, 2017 8:29 AM, "Serhiy Storchaka" wrote:

On 28.03.17 14:24, Miro Hrončok wrote:

> However, recently we found an issue with this approach [1]: an extension
> module built against Python 3.6.1 cannot be run on Python 3.6.0, because
> it uses a macro that, in 3.6.1, uses the new PySlice_AdjustIndices
> function.
>

The macro expanding to PySlice_AdjustIndices is used only when
Py_LIMITED_API is not defined or is defined to the version equal or greater
the version in which PySlice_AdjustIndices was added.

That's nice, but not sufficient. Py_LIMITED_API is cool, but the vast
majority of distributed packages don't use it, and instead rely on the
"unlimited" ABI being forward and backwards compatible within each minor
release. For example, this assumption is hard coded in the wheel format,
which has no way to even describe a wheel that needs 3.6.x with x >= 1.
People uploading packages to pypi use whatever version of 3.6 they have
lying around and assume it will work for everyone downloading.

IMO this is a bug, and depending on how many packages are affected it might
even call for an emergency 3.6.2. The worst case is that we start getting
large numbers of packages uploaded to pypi that claim to be 3.6.0 compatible
but that crash with an obscure error when people download them.

-n
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From v+python at g.nevcal.com  Tue Mar 28 13:05:53 2017
From: v+python at g.nevcal.com (Glenn Linderman)
Date: Tue, 28 Mar 2017 10:05:53 -0700
Subject: [Python-Dev] What version is an extension module binary compatible with
In-Reply-To: References: Message-ID: <2864abee-bcc1-c82f-05ff-ed8ddf46900c@g.nevcal.com>

On 3/28/2017 9:18 AM, Paul Moore wrote:
> On 28 March 2017 at 12:24, Miro Hrončok wrote:
>> I'd like some clarification on what ABI compatibility we can expect.
>> * Should the ABI be stable across patch releases (so calling
>> PySlice_AdjustIndices from an existing macro would be a bug)?
>> * Should the ABI be forward-compatible within a minor release (so modules
>> built for 3.6.0 should be usable with 3.6.1, but not vice versa)?
>> * Or should we expect the ABI to change even across patch releases?
> Given that binary wheels are built against a specific minor version
> (3.6, 3.5, ...) I would expect the ABI to be consistent over a minor
> release. That would fit with my expectations of the compatibility
> guarantees on patch releases.
>
> So from what you describe, I'd consider this a bug.
Certainly, if > someone built a C extension as a wheel using Python 3.6.1, it would be > tagged as compatible with cp36, and pip would happily use it when > installing to a Python 3.6.0 system, where it would fail. > Somewhere I got the idea that extension authors were supposed to build against the n.m.0 releases, expressly so that the extensions would then be compatible with the whole n.m.x series of releases. Did I dream that? -------------- next part -------------- An HTML attachment was scrubbed... URL: From ericsnowcurrently at gmail.com Tue Mar 28 13:28:44 2017 From: ericsnowcurrently at gmail.com (Eric Snow) Date: Tue, 28 Mar 2017 11:28:44 -0600 Subject: [Python-Dev] why _PyGen_Finalize(gen) propagates close() to _PyGen_yf() ? In-Reply-To: <20170320173026.GA28483@redhat.com> References: <20170320173026.GA28483@redhat.com> Message-ID: On Mon, Mar 20, 2017 at 11:30 AM, Oleg Nesterov wrote: > Hello, > > Let me first clarify, I do not claim this is a bug, I am trying to learn > python and now I trying to understand yield-from. Given that you haven't gotten a response here, you may want to take this over to the core-mentorship at python.org list. -eric From p.f.moore at gmail.com Tue Mar 28 13:35:08 2017 From: p.f.moore at gmail.com (Paul Moore) Date: Tue, 28 Mar 2017 18:35:08 +0100 Subject: [Python-Dev] What version is an extension module binary compatible with In-Reply-To: <2864abee-bcc1-c82f-05ff-ed8ddf46900c@g.nevcal.com> References: <2864abee-bcc1-c82f-05ff-ed8ddf46900c@g.nevcal.com> Message-ID: On 28 March 2017 at 18:05, Glenn Linderman wrote: > Somewhere I got the idea that extension authors were supposed to build > against the n.m.0 releases, expressly so that the extensions would then be > compatible with the whole n.m.x series of releases. Did I dream that? I've certainly never heard it stated - and I'd think it's a pretty annoying requirement to make of extension builders. Paul From steve.dower at python.org Tue Mar 28 13:52:44 2017 From: steve.dower at python.org (Steve Dower) Date: Tue, 28 Mar 2017 10:52:44 -0700 Subject: [Python-Dev] What version is an extension module binary compatible with In-Reply-To: References: <2864abee-bcc1-c82f-05ff-ed8ddf46900c@g.nevcal.com> Message-ID: <49d90f72-47f2-0aba-9d27-d9b861310a76@python.org> On 28Mar2017 1035, Paul Moore wrote: > On 28 March 2017 at 18:05, Glenn Linderman wrote: >> Somewhere I got the idea that extension authors were supposed to build >> against the n.m.0 releases, expressly so that the extensions would then be >> compatible with the whole n.m.x series of releases. Did I dream that? > > I've certainly never heard it stated - and I'd think it's a pretty > annoying requirement to make of extension builders. > Paul Agreed, we should avoid both additive and subtractive breaking changes to the binary API within micro versions completely. Additive ones like this are difficult to catch, unfortunately, and so building against the .0 release is not bad advice. Building with the .0 headers is probably sufficient. I wonder if there's a way we can preprocess the headers to define a baseline? Then we could automatically compare against that and force explicit decisions leading to public API changes (probably the process of finishing off the limited API validation could include this task fairly easily as well). 
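For what it's worth, a first cut at that baseline does not even need the headers: snapshot the dynamic symbols of the shared library at each X.Y.0 release and fail when a later micro release adds one. A rough sketch (Linux/ELF only; the library name, the baseline file and the reliance on nm are assumptions, not a worked-out design):

import subprocess

def exported_symbols(libpath):
    # 'nm -D --defined-only' lists the symbols the shared library exports;
    # each line of output looks like '<address> <type> <name>'.
    out = subprocess.check_output(['nm', '-D', '--defined-only', libpath],
                                  universal_newlines=True)
    return {line.split()[-1] for line in out.splitlines() if line.strip()}

baseline = set(open('abi-baseline-3.6.0.txt').read().split())
current = exported_symbols('libpython3.6m.so.1.0')
added = current - baseline
if added:
    raise SystemExit('symbols added since 3.6.0: ' + ', '.join(sorted(added)))

Such a check would not catch every kind of break (struct layout changes, or a macro growing a dependency on a new function), but it should have flagged PySlice_AdjustIndices showing up in 3.6.1.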
Cheers, Steve From njs at pobox.com Tue Mar 28 15:52:39 2017 From: njs at pobox.com (Nathaniel Smith) Date: Tue, 28 Mar 2017 12:52:39 -0700 Subject: [Python-Dev] What version is an extension module binary compatible with In-Reply-To: <49d90f72-47f2-0aba-9d27-d9b861310a76@python.org> References: <2864abee-bcc1-c82f-05ff-ed8ddf46900c@g.nevcal.com> <49d90f72-47f2-0aba-9d27-d9b861310a76@python.org> Message-ID: On Mar 28, 2017 10:54 AM, "Steve Dower" wrote: On 28Mar2017 1035, Paul Moore wrote: > On 28 March 2017 at 18:05, Glenn Linderman wrote: > >> Somewhere I got the idea that extension authors were supposed to build >> against the n.m.0 releases, expressly so that the extensions would then be >> compatible with the whole n.m.x series of releases. Did I dream that? >> > > I've certainly never heard it stated - and I'd think it's a pretty > annoying requirement to make of extension builders. > Paul > Agreed, we should avoid both additive and subtractive breaking changes to the binary API within micro versions completely. Additive ones like this are difficult to catch, unfortunately, and so building against the .0 release is not bad advice. Building with the .0 headers is probably sufficient. I wonder if there's a way we can preprocess the headers to define a baseline? Then we could automatically compare against that and force explicit decisions leading to public API changes (probably the process of finishing off the limited API validation could include this task fairly easily as well). It wouldn't be as fancy as analyzing the headers, but a much easier and still useful way to get started would be a test to check no new exported symbols appear in the shared library during a stable release cycle. If you want to get quite fancy, libabigail provides a toolkit that can read debug information and check that all your structs remain the same size etc. ELF only, but still would catch a lot: https://sourceware.org/libabigail/manual/libabigail-overview.html -n -------------- next part -------------- An HTML attachment was scrubbed... URL: From songofacandy at gmail.com Wed Mar 29 02:35:03 2017 From: songofacandy at gmail.com (INADA Naoki) Date: Wed, 29 Mar 2017 15:35:03 +0900 Subject: [Python-Dev] Misc/NEWS entries for Python 3.7a1 In-Reply-To: <062460EC-C3DA-4681-BF33-E6E935900B8C@python.org> References: <062460EC-C3DA-4681-BF33-E6E935900B8C@python.org> Message-ID: On Tue, Mar 28, 2017 at 11:07 PM, Ned Deily wrote: > On Mar 28, 2017, at 08:49, INADA Naoki wrote: >> Currently, changelog of Python 3.7a1 [1] contains changes between >> 3.6b1 and 3.7a1. >> So lot's of bugfixes are listed twice or more in changelog. >> For example, "bpo-28258: Fixed build with Estonian locale..." are >> listed under 3.5.3rc1, >> 3.6.0b2 and 3.7.0a1. >> >> [1] https://docs.python.org/3.7/whatsnew/changelog.html#changelog > [...] > > Thanks for noticing. Misc/NEWS is always somewhat problematic. As you probably know, the Core Workflow SIG, led by Brett, is working on a long-term solution to generating Misc/NEWS, a solution that should be available soon. One of the duties of the release manager is to "edit" Misc/NEWS; I was planning to wait for the new Misc/NEWS solution and for more of the conversion to Git/GitHub to settle to do anything major to the master (i.e. 3.7) version. There have already been some major merge mistakes for the 3.6.x Misc/NEWS. I would recommend not to worry too much about master's Misc/NEWS right now. 
I may do some cleaning up before the new Misc/NEWS process is introduced but I will also be reviewing it prior to each of the preview releases, which start later this year. > > -- > Ned Deily > nad at python.org -- [] > I forgot to mention about it. I have seen three preview pull requests about new NEWS handling. All of them are quite large. That's one reason why I suggest removing some sections / entries from NEWS now. I thought reducing changelog size may help the transition. If I was wrong, let's stop discussion until transition. I want to help workflow team. I don't want to disturb them. Thanks, From julien at palard.fr Wed Mar 29 07:47:31 2017 From: julien at palard.fr (Julien Palard) Date: Wed, 29 Mar 2017 07:47:31 -0400 Subject: [Python-Dev] PEP 545: Python Documentation Translations Message-ID: Hi. Here's PEP 545, ready to be reviewed! This is the follow-up to the "PEP: Python Documentation Translations" thread on python-ideas [1]_, itself a follow-up of the "Translated Python documentation" thread on python-dev [2]_. This PEP describes the steps to make existing and future translations of the Python documentation official and accessible on docs.python.org. You can read it rendered here https://www.python.org/dev/peps/pep-0545/ or inline here, I'll happily get your feedback. ======================================== PEP: 545 Title: Python Documentation Translations Version: $Revision$ Last-Modified: $Date$ Author: Victor Stinner , Inada Naoki , Julien Palard Status: Draft Type: Process Content-Type: text/x-rst Created: 04-Mar-2017 Abstract ======== The intent of this PEP is to make existing translations of the Python Documentation more accessible and discoverable. By doing so, we hope to attract and motivate new translators and new translations. Translated documentation will be hosted on python.org. Examples of two active translation teams: * http://docs.python.org/fr/: French * http://docs.python.org/ja/: Japanese http://docs.python.org/en/ will redirect to http://docs.python.org/. Sources of translated documentation will be hosted in the Python organization on GitHub: https://github.com/python/. Contributors will have to sign the Python Contributor Agreement (CLA) and the license will be the PSF License. Motivation ========== On the French ``#python-fr`` IRC channel on freenode, it's not rare to meet people who don't speak English and so are unable to read the Python official documentation. Python wants to be widely available to all users in any language: this is also why Python 3 supports any non-ASCII identifiers: https://www.python.org/dev/peps/pep-3131/#rationale There are a least 3 groups of people who are translating the Python documentation to their native language (French [16]_ [17]_ [18]_, Japanese [19]_ [20]_, Spanish [21]_) even though their translations are not visible on d.p.o. Other, less visible and less organized groups, are also translating the documentation, we've heard of Russian [26]_, Chinese and Korean. Others we haven't found yet might also exist. This PEP defines rules describing how to move translations on docs.python.org so they can easily be found by developers, newcomers and potential translators. The Japanese team has (as of March 2017) translated ~80% of the documentation, the French team ~20%. French translation went from 6% to 23% in 2016 [13]_ with 7 contributors [14]_, proving a translation team can be faster than the rate the documentation mutates. 
Quoting Xiang Zhang about Chinese translations: I have seen several groups trying to translate part of our official doc. But their efforts are disperse and quickly become lost because they are not organized to work towards a single common result and their results are hold anywhere on the Web and hard to find. An official one could help ease the pain. Rationale ========= Translation ----------- Issue tracker ''''''''''''' Considering that issues opened about translations may be written in the translation language, which can be considered noise but at least is inconsistent, issues should be placed outside `bugs.python.org `_ (b.p.o). As all translation must have their own github project (see `Repository for Po Files`_), they must use the associated github issue tracker. Considering the noise induced by translation issues redacted in any languages which may beyond every warnings land in b.p.o, triage will have to be done. Considering that translations already exist and are not actually a source of noise in b.p.o, an unmanageable amount of work is not to be expected. Considering that Xiang Zhang and Victor Stinner are already triaging, and Julien Palard is willing to help on this task, noise on b.p.o is not to be expected. Also, language team coordinators (see `Language Team`_) should help with triaging b.p.o by properly indicating, in the language of the issue author if required, the right issue tracker. Branches '''''''' Translation teams should focus on last stable versions, and use tools (scripts, translation memory, ?) to automatically translate what is done in one branch to other branches. .. note:: Translation memories are a kind of database of previously translated paragraphs, even removed ones. See also `Sphinx Internationalization `_. The three currently stable branches that will be translated are [12]_: 2.7, 3.5, and 3.6. The scripts to build the documentation of older branches needs to be modified to support translation [12]_, whereas these branches now only accept security-only fixes. The development branch (master) should have a lower translation priority than stable branches. But docsbuild-scripts should build it anyway so it is possible for a team to work on it to be ready for the next release. Hosting ------- Domain Name, Content negotiation and URL '''''''''''''''''''''''''''''''''''''''' Different translations can be identified by changing one of the following: Country Code Top Level Domain (CCTLD), path segment, subdomain or content negotiation. Buying a CCTLD for each translations is expensive, time-consuming, and sometimes almost impossible when already registered, this solution should be avoided. Using subdomains like "es.docs.python.org" or "docs.es.python.org" is possible but confusing ("is it `es.docs.python.org` or `docs.es.python.org`?"). Hyphens in subdomains like `pt-br.doc.python.org` is uncommon and SEOMoz [23]_ correlated the presence of hyphens as a negative factor. Usage of underscores in subdomain is prohibited by the RFC1123 [24]_, section 2.1. Finally, using subdomains means creating TLS certificates for each language. This not only requires more maintenance but will also cause issues in language switcher if, as for version switcher, we want a preflight to check if the translation exists in the given version: preflight will probably be blocked by same-origin-policy. Wildcard TLS certificates are very expensive. 
Using content negotiation (HTTP headers ``Accept-Language`` in the request and ``Vary: Accept-Language``) leads to a bad user experience where they can't easily change the language. According to Mozilla: "This header is a hint to be used when the server has no way of determining the language via another way, like a specific URL, that is controlled by an explicit user decision." [25]_. As we want to be able to easily change the language, we should not use the content negotiation as a main language determination, so we need something else. Last solution is to use the URL path, which looks readable, allows for an easy switch from a language to another, and nicely accepts hyphens. Typically something like: "docs.python.org/de/" or, by using a hyphen: "docs.python.org/pt-BR/". As for the version, sphinx-doc does not support compiling for multiple languages, so we'll have full builds rooted under a path, exactly like we're already doing with versions. So we can have "docs.python.org/de/3.6/" or "docs.python.org/3.6/de/". A question that arises is: "Does the language contain multiple versions or does the version contain multiple languages?". As versions exist in any case and translations for a given version may or may not exist, we may prefer "docs.python.org/3.6/de/", but doing so scatters languages everywhere. Having "/de/3.6/" is clearer, meaning: "everything under /de/ is written in German". Having the version at the end is also a habit taken by readers of the documentation: they like to easily change the version by changing the end of the path. So we should use the following pattern: "docs.python.org/LANGUAGE_TAG/VERSION/". The current documentation is not moved to "/en/", insted "docs.python.org/en/" will redirect to "docs.python.org". Language Tag '''''''''''' A common notation for language tags is the IETF Language Tag [3]_ [4]_ based on ISO 639, although gettext uses ISO 639 tags with underscores (ex: ``pt_BR``) instead of dashes to join tags [5]_ (ex: ``pt-BR``). Examples of IETF Language Tags: ``fr`` (French), ``ja`` (Japanese), ``pt-BR`` (Orthographic formulation of 1943 - Official in Brazil). It is more common to see dashes instead of underscores in URLs [6]_, so we should use IETF language tags, even if sphinx uses gettext internally: URLs are not meant to leak the underlying implementation. It's uncommon to see capitalized letters in URLs, and docs.python.org doesn't use any, so it may hurt readability by attracting the eye on it, like in: "https://docs.python.org/pt-BR/3.6/library/stdtypes.html". RFC 5646 (Tags for Identifying Languages (IETF)) section-2.1 [7]_ states that tags are not case sensitive. As the RFC allows lower case, and it enhances readability, we should use lowercased tags like ``pt-br``. It's redundant to display both language and country code if they're the same, for example: "de-DE" or "fr-FR". Although it might make sense, respectively meaning "German as spoken in Germany" and "French as spoken in France", it's not useful information for the reader. So we may drop these redundancies and only keep the country code for cases where it makes sense, for example, "pt-BR" for "Portuguese as spoken in Brazil". So we should use IETF language tags, lowercased, like ``/fr/``, ``/pt-br/``, ``/de/`` and so on. Fetching And Building Translations '''''''''''''''''''''''''''''''''' Currently docsbuild-scripts are building the documentation [8]_. These scripts should be modified to fetch and build translations. 
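As an illustration only (the two Sphinx options are real, but the paths,
repository layout and helper name are assumptions, not part of this
PEP), building one language for one version boils down to invoking
sphinx-build with the right gettext locale::

    import subprocess

    def build_translation(tag, source_dir, locale_dir, out_dir):
        # *tag* is the lowercased IETF tag used in URLs (e.g. 'pt-br');
        # Sphinx and gettext expect the underscore form (e.g. 'pt_BR').
        language, _, region = tag.partition('-')
        gettext_name = language + ('_' + region.upper() if region else '')
        subprocess.check_call([
            'sphinx-build', '-b', 'html',
            '-D', 'language=' + gettext_name,
            # Sphinx looks for <locale_dir>/<gettext_name>/LC_MESSAGES/ catalogs:
            '-D', 'locale_dirs=' + locale_dir,
            source_dir, out_dir,
        ])

    # e.g. build_translation('pt-br', 'cpython/Doc', 'locale', 'www/pt-br/3.6')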
Building new translations is like building new versions so, while we're adding complexity it is not that much. Two steps should be configurable distinctively: Building a new language, and adding it to the language switcher. This allows a transition step between "we accepted the language" and "it is translated enough to be made public". During this step, translators can review their modifications on d.p.o without having to build the documentation locally. From the translation repositories, only the ``.po`` files should be opened by the docsbuild-script to keep the attack surface and probable bug sources at a minimum. This means no translation can patch sphinx to advertise their translation tool. (This specific feature should be handled by sphinx anyway [9]_). Community --------- Mailing List '''''''''''' The `doc-sig`_ mailing list will be used to discuss cross-language changes on translated documentation. There is also the i18n-sig list but it's more oriented towards i18n APIs [1]_ than translating the Python documentation. .. _i18n-sig: https://mail.python.org/mailman/listinfo/i18n-sig .. _doc-sig: https://mail.python.org/mailman/listinfo/doc-sig Chat '''' Due to the Python community being highly active on IRC, we should create a new IRC channel on freenode, typically #python-doc for consistency with the mailing list name. Each language coordinator can organize their own team, even by choosing another chat system if the local usage asks for it. As local teams will write in their native languages, we don't want each team in a single channel. It's also natural for the local teams to reuse their local channels like "#python-fr" for French translators. Repository for PO Files ''''''''''''''''''''''' Considering that each translation team may want to use different translation tools, and that those tools should easily be synchronized with git, all translations should expose their ``.po`` files via a git repository. Considering that each translation will be exposed via git repositories, and that Python has migrated to GitHub, translations will be hosted on github. For consistency and discoverability, all translations should be in the same github organization and named according to a common pattern. Given that we want translations to be official, and that Python already has a github organization, translations should be hosted as projects of the `Python GitHub organization`_. For consistency, translation repositories should be called ``python-docs-LANGUAGE_TAG`` [22]_, using the language tag used in paths: without region subtag if redundent, and lowercased. The docsbuild-scripts may enforce this rule by refusing to fetch outside of the Python organization or a wrongly named repository. The CLA bot may be used on the translation repositories, but with a limited effect as local coordinators may synchronize themselves with translations from an external tool, like transifex, and loose track of who translated what in the process. Versions can be hosted on different repositories, different directories or different branches. Storing them on different repositories will probably pollute the Python github organization. As it is typical and natural to use branches to separate versions, branches should be used to do so. .. _Python GitHub organization: https://github.com/python/ Translation tools ''''''''''''''''' Most of the translation work is actually done on Transifex [15]_. 
Other tools may be used later https://pontoon.mozilla.org/ and http://zanata.org/ Contributor Agreement ''''''''''''''''''''' Contributions to translated documentation will be requested to sign the Python Contributor Agreement (CLA): https://www.python.org/psf/contrib/contrib-form/ Language Team ''''''''''''' Each language team should have one coordinator responsible for: - Managing the team. - Choosing and managing the tools the team will use (chat, mailing list, ?). - Ensure contributors understand and agree with the CLA. - Ensure quality (grammar, vocabulary, consistency, filtering spam, ads, ?). - Redirect issues posted on b.p.o to the correct GitHub issue tracker for the language. The license will be the `PSF License `_, and copyright should be transferable to PSF later. Alternatives ------------ Simplified English '''''''''''''''''' It would be possible to introduce a "simplified English" version like wikipedia did [10]_, as discussed on python-dev [11]_, targeting English learners and children. Pros: It yields a single translation, theoretically readable by everyone and reviewable by current maintainers. Cons: Subtle details may be lost, and translators from English to English may be hard to find as stated by Wikipedia: > The main English Wikipedia has 5 million articles, written by nearly 140K active users; the Swedish Wikipedia is almost as big, 3M articles from only 3K active users; but the Simple English Wikipedia has just 123K articles and 871 active users. That's fewer articles than Esperanto! Changes ======= Migrate GitHub Repositories --------------------------- We (authors of this PEP) already own French and Japanese Git repositories, so moving them to the Python documentation organization will not be a problem. We'll however be following the `New Translation Procedure`_. Patch docsbuild-scripts to Compile Translations ----------------------------------------------- Docsbuild-script must be patched to: - List the language tags to build along with the branches to build. - List the language tags to display in the language switcher. - Find translation repositories by formatting ``github.com:python/python-docs-{language_tag}.git`` (See `Repository for Po Files`_) - Build translations for each branch and each language. Patched docsbuild-scripts must only open ``.po`` files from translation repositories. List coordinators in the devguide --------------------------------- Add a page or a section with an empty list of coordinators to the devguide, each new coordinator will be added to this list. Create sphinx-doc Language Switcher ----------------------------------- Highly similar to the version switcher, a language switcher must be implemented. This language switcher must be configurable to hide or show a given language. The language switcher will only have to update or add the language segment to the path like the current version switcher does. Unlike the version switcher, no preflight are required as destination page always exists (translations does not add or remove pages). Untranslated (but existing) pages still exists, they should however be rendered as so, see `Enhance Rendering of Untranslated and Fuzzy Translations`_. Update sphinx-doc Version Switcher ---------------------------------- The ``patch_url`` function of the version switcher in ``version_switch.js`` have to be updated to understand and allow the presence of the language segment in the path. 
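For illustration, the path manipulation both switchers need is sketched
here in Python (the real code lives in JavaScript in
``version_switch.js``; the helper name, the regular expression and
treating English as the tag-less default are assumptions)::

    import re

    # Documentation paths look like /<language>/<version>/<page>, with the
    # language segment omitted for English.
    PATH_RE = re.compile(
        r'^/(?:(?P<lang>[a-z]{2}(?:-[a-z]{2})?)/)?(?P<version>\d\.\d+|dev)/(?P<rest>.*)$')

    def patch_path(path, new_lang=None, new_version=None):
        match = PATH_RE.match(path)
        if match is None:
            return path
        lang = new_lang or match.group('lang') or 'en'
        version = new_version or match.group('version')
        prefix = '' if lang == 'en' else '/' + lang
        return '%s/%s/%s' % (prefix, version, match.group('rest'))

    assert patch_path('/ja/3.6/library/os.html', new_lang='fr') == '/fr/3.6/library/os.html'
    assert patch_path('/3.6/library/os.html', new_lang='pt-br') == '/pt-br/3.6/library/os.html'
    assert patch_path('/fr/3.6/library/os.html', new_lang='en') == '/3.6/library/os.html'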
Enhance Rendering of Untranslated and Fuzzy Translations -------------------------------------------------------- It's an opened sphinx issue [9]_, but we'll need it so we'll have to work on it. Translated, fuzzy, and untranslated paragraphs should be differentiated. (Fuzzy paragraphs have to warn the reader what he's reading may be out of date.) New Translation Procedure ========================= Designate a Coordinator ----------------------- The first step is to designate a coordinator, see `Language Team`_, The coordinator must sign the CLA. The coordinator should be added to the list of translation coordinators on the devguide. Create Github Repository ------------------------ Create a repository named "python-docs-{LANGUAGE_TAG}" (IETF language tag, without redundent region subtag, with a dash, and lowercased.) on the Python github organization (See `Repository For Po Files`_.), and grant the language coordinator push rights to this repository. Add support for translations in docsbuild-scripts ------------------------------------------------- As soon as the translation hits its first commits, update the docsbuild-scripts configuration to build the translation (but not displaying it in the language switcher). Add Translation to the Language Switcher ---------------------------------------- As soon as the translation hits: - 100% of bugs.html with proper links to the language repository issue tracker. - 100% of tutorial. - 100% of library/functions (builtins). the translation can be added to the language switcher. Previous Discussions ==================== - `[Python-ideas] Cross link documentation translations (January, 2016)`_ - `[Python-ideas] Cross link documentation translations (January, 2016)`_ - `[Python-ideas] https://docs.python.org/fr/ ? (March 2016)`_ .. _[Python-ideas] Cross link documentation translations (January, 2016): https://mail.python.org/pipermail/python-ideas/2016-January/038010.html .. _[Python-Dev] Translated Python documentation (Febrary 2016): https://mail.python.org/pipermail/python-dev/2017-February/147416.html .. _[Python-ideas] https://docs.python.org/fr/ ? (March 2016): https://mail.python.org/pipermail/python-ideas/2016-March/038879.html References ========== .. [1] [I18n-sig] Hello Python members, Do you have any idea about Python documents? (https://mail.python.org/pipermail/i18n-sig/2013-September/002130.html) .. [2] [Doc-SIG] Localization of Python docs (https://mail.python.org/pipermail/doc-sig/2013-September/003948.html) .. [3] Tags for Identifying Languages (http://tools.ietf.org/html/rfc5646) .. [4] IETF language tag (https://en.wikipedia.org/wiki/IETF_language_tag) .. [5] GNU Gettext manual, section 2.3.1: Locale Names (https://www.gnu.org/software/gettext/manual/html_node/Locale-Names.html) .. [6] Semantic URL: Slug (https://en.wikipedia.org/wiki/Semantic_URL#Slug) .. [7] Tags for Identifying Languages: Formatting of Language Tags (https://tools.ietf.org/html/rfc5646#section-2.1.1) .. [8] Docsbuild-scripts github repository (https://github.com/python/docsbuild-scripts/) .. [9] i18n: Highlight untranslated paragraphs (https://github.com/sphinx-doc/sphinx/issues/1246) .. [10] Wikipedia: Simple English (https://simple.wikipedia.org/wiki/Main_Page) .. [11] Python-dev discussion about simplified english (https://mail.python.org/pipermail/python-dev/2017-February/147446.html) .. [12] Passing options to sphinx from Doc/Makefile (https://github.com/python/cpython/commit/57acb82d275ace9d9d854b156611e641f68e9e7c) .. 
[13] French translation progression (https://mdk.fr/pycon2016/#/11) .. [14] French translation contributors (https://github.com/AFPy/python_doc_fr/graphs/contributors?from=2016-01-01&to=2016-12-31&type=c) .. [15] Python-doc on Transifex (https://www.transifex.com/python-doc/) .. [16] French translation (https://www.afpy.org/doc/python/) .. [17] French translation github (https://github.com/AFPy/python_doc_fr) .. [18] French mailing list (http://lists.afpy.org/mailman/listinfo/traductions) .. [19] Japanese translation (http://docs.python.jp/3/) .. [20] Japanese github (https://github.com/python-doc-ja/python-doc-ja) .. [21] Spanish translation (http://docs.python.org.ar/tutorial/3/index.html) .. [22] [Python-Dev] Translated Python documentation: doc vs docs (https://mail.python.org/pipermail/python-dev/2017-February/147472.html) .. [23] Domains - SEO Best Practices | Moz (https://moz.com/learn/seo/domain) .. [24] Requirements for Internet Hosts -- Application and Support (https://www.ietf.org/rfc/rfc1123.txt) .. [25] Accept-Language (https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Accept-Language) .. [26] ???????????? Python 2.7! (http://python-lab.ru/documentation/index.html) Copyright ========= This document has been placed in the public domain. .. Local Variables: mode: indented-text indent-tabs-mode: nil sentence-end-double-space: t fill-column: 70 coding: utf-8 End: ======================================== .. [1] [Python-ideas] PEP: Python Documentation Translations (https://mail.python.org/pipermail/python-ideas/2017-March/045226.html) .. [2] [Python-Dev] Translated Python documentation (https://mail.python.org/pipermail/python-dev/2017-February/147416.html) Bests, -- Julien Palard https://mdk.fr -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.f.moore at gmail.com Wed Mar 29 11:22:45 2017 From: p.f.moore at gmail.com (Paul Moore) Date: Wed, 29 Mar 2017 16:22:45 +0100 Subject: [Python-Dev] What version is an extension module binary compatible with In-Reply-To: References: Message-ID: On 28 March 2017 at 17:31, Nathaniel Smith wrote: > IMO this is a bug, and depending on how many packages are affected it might > even call for an emergency 3.6.2. The worst case is that we start getting > large numbers of packages uploaded to pypi that claim to be 3.6.0 compatible > but that crash like crash with an obscure error when people download them. Has anyone logged this on bugs.python.org? There's nothing in the Fedora bug referenced by the OP that indicates they've done so. Paul From brett at python.org Wed Mar 29 12:26:22 2017 From: brett at python.org (Brett Cannon) Date: Wed, 29 Mar 2017 16:26:22 +0000 Subject: [Python-Dev] Misc/NEWS entries for Python 3.7a1 In-Reply-To: References: <062460EC-C3DA-4681-BF33-E6E935900B8C@python.org> Message-ID: On Tue, 28 Mar 2017 at 23:36 INADA Naoki wrote: > On Tue, Mar 28, 2017 at 11:07 PM, Ned Deily wrote: > > On Mar 28, 2017, at 08:49, INADA Naoki wrote: > >> Currently, changelog of Python 3.7a1 [1] contains changes between > >> 3.6b1 and 3.7a1. > >> So lot's of bugfixes are listed twice or more in changelog. > >> For example, "bpo-28258: Fixed build with Estonian locale..." are > >> listed under 3.5.3rc1, > >> 3.6.0b2 and 3.7.0a1. > >> > >> [1] https://docs.python.org/3.7/whatsnew/changelog.html#changelog > > [...] > > > > Thanks for noticing. Misc/NEWS is always somewhat problematic. 
As you > probably know, the Core Workflow SIG, led by Brett, is working on a > long-term solution to generating Misc/NEWS, a solution that should be > available soon. One of the duties of the release manager is to "edit" > Misc/NEWS; I was planning to wait for the new Misc/NEWS solution and for > more of the conversion to Git/GitHub to settle to do anything major to the > master (i.e. 3.7) version. There have already been some major merge > mistakes for the 3.6.x Misc/NEWS. I would recommend not to worry too much > about master's Misc/NEWS right now. I may do some cleaning up before the > new Misc/NEWS process is introduced but I will also be reviewing it prior > to each of the preview releases, which start later this year. > > > > -- > > Ned Deily > > nad at python.org -- [] > > > > I forgot to mention about it. > I have seen three preview pull requests about new NEWS handling. All > of them are quite large. > That's one reason why I suggest removing some sections / entries from NEWS > now. > I thought reducing changelog size may help the transition. > > If I was wrong, let's stop discussion until transition. > I want to help workflow team. I don't want to disturb them. > I'm planning on make a decision on the tooling this week with the hope we can start the transition by the end of April so that maybe everything can be cleaned up by PyCon US. -------------- next part -------------- An HTML attachment was scrubbed... URL: From brett at python.org Wed Mar 29 12:47:50 2017 From: brett at python.org (Brett Cannon) Date: Wed, 29 Mar 2017 16:47:50 +0000 Subject: [Python-Dev] PEP 545: Python Documentation Translations In-Reply-To: References: Message-ID: Sounds good overall! I only have one thing to suggest in regards to the license, two grammar tweaks, and a question. On Wed, 29 Mar 2017 at 08:10 Julien Palard via Python-Dev < python-dev at python.org> wrote: > [SNIP] > Sources of translated documentation will be hosted in the Python > organization on GitHub: https://github.com/python/. Contributors will > have to sign the Python Contributor Agreement (CLA) and the license > will be the PSF License. > Please check with the PSF that this is what we really want. In the past the suggestion has been to **not** use the PSF license with all of its historical baggage but instead use something like Apache. But since IANAL we really should ask the PSF what the best license for new code is. > [SNIP] > The CLA bot may be used on the translation repositories, but with a > limited effect as local coordinators may synchronize themselves with > translations from an external tool, like transifex, and loose track > of who translated what in the process. > "loose" -> "lose" > > [SNIP] > Other tools may be used later https://pontoon.mozilla.org/ > and http://zanata.org/ > "be used later like". > [SNIP] > > Create sphinx-doc Language Switcher > ----------------------------------- > > Highly similar to the version switcher, a language switcher must be > implemented. This language switcher must be configurable to hide or > show a given language. > > The language switcher will only have to update or add the language > segment to the path like the current version switcher does. Unlike > the version switcher, no preflight are required as destination page > always exists (translations does not add or remove pages). > Untranslated (but existing) pages still exists, they should however be > rendered as so, see `Enhance Rendering of Untranslated and Fuzzy > Translations`_. 
> What kind of support does Read the Docs have for translations? I have no active plans to push for this but it has been idea in the back of my head for a while so it would be good to know if such a move would make this easier or harder. -Brett -------------- next part -------------- An HTML attachment was scrubbed... URL: From julien at palard.fr Wed Mar 29 16:13:05 2017 From: julien at palard.fr (Julien Palard) Date: Wed, 29 Mar 2017 16:13:05 -0400 Subject: [Python-Dev] PEP 545: Python Documentation Translations In-Reply-To: References: Message-ID: <6-6zandAUD_HqCXBfrlg0-lw5H9zwipt2GRfqW2ZptmVLdHkeFVyc7lvfIawnnm0qo3pV-CKdvXTmWenV-uA3IvxjGFdHMMSzdPYCcautMU=@palard.fr> Hi Brett, thanks for the feedback! Please check with the PSF that this is what we really want. Gladly, but ? how? I'm very new to all those process and have now idea on how I can get in touch with PSF lawyers. What kind of support does Read the Docs have for translations? I have no active plans to push for this but it has been idea in the back of my head for a while so it would be good to know if such a move would make this easier or harder. Read the Docs support translations [1]_, quoting them: > To support this, you will have one parent project and a number > of projects marked as translations of that parent. Let?s use > phpmyadmin as an example. > The main phpmyadmin project is the parent for all translations. > Then you must create a project for each translation, for > example phpmyadmin-spanish. You will set the Language for > phpmyadmin-spanish to Spanish. In the parent projects > Translations page, you will say that phpmyadmin-spanish is a > translation for your project. > This has the results of serving: > - phpmyadmin at http://phpmyadmin.readthedocs.io/en/latest/ > - phpmyadmin-spanish at http://phpmyadmin.readthedocs.io/es/latest/ Which is nice as it's almost the same syntax the PEP proposes for paths: /{language_tag}/{version_tag}. Their language tags are simplified too (redundency removed (fr-FR ? fr)) but not lowercased, and they use underscore "instead of" dashes as a separator, see for example: - https://docs.phpmyadmin.net/fr/latest/ - https://docs.phpmyadmin.net/pt_BR/latest/ while the PEP proposes /pt-br/ instead. .. [1] Project with multiple translations (https://docs.readthedocs.io/en/latest/localization.html#project-with-multiple-translations) -- Julien Palard [https://mdk.fr](https://mdk.fr/) -------------- next part -------------- An HTML attachment was scrubbed... URL: From jwilk at jwilk.net Wed Mar 29 15:35:44 2017 From: jwilk at jwilk.net (Jakub Wilk) Date: Wed, 29 Mar 2017 21:35:44 +0200 Subject: [Python-Dev] PEP 545: Python Documentation Translations In-Reply-To: References: Message-ID: <20170329193544.7lvbky67wccr2d6d@jwilk.net> Typos: english -> English Febrary -> February insted -> instead redundent -> redundant * Julien Palard , 2017-03-29, 07:47: >It's redundant to display both language and country code if they're the same, >for example: "de-DE" or "fr-FR". This wording is unfortunate. It seems to imply that you can meaningfully compare a language code and a territory code for equality. This is not the case. For example, Belarusian (language code "be") is mainly spoken in Belarus (country code "by"), not in Belgium (country code "be"). 
-- Jakub Wilk From julien at palard.fr Wed Mar 29 16:45:07 2017 From: julien at palard.fr (Julien Palard) Date: Wed, 29 Mar 2017 16:45:07 -0400 Subject: [Python-Dev] PEP 545: Python Documentation Translations In-Reply-To: <20170329193544.7lvbky67wccr2d6d@jwilk.net> References: <20170329193544.7lvbky67wccr2d6d@jwilk.net> Message-ID: Hi Jakub, Typos Fixed, thanks. * Julien Palard , 2017-03-29, 07:47: >It's redundant to display both language and country code if they're the same, >for example: "de-DE" or "fr-FR". This wording is unfortunate. It seems to imply that you can meaningfully compare a language code and a territory code for equality. This is not the case. For example, Belarusian (language code "be") is mainly spoken in Belarus (country code "by"), not in Belgium (country code "be"). Thanks for noticing, would the intented meaning is "when they add no distinguishing information", is it better like: ====== We may drop the region subtag when it does does not add distinguishing information, for example: "de-DE" or "fr-FR". (Although it might make sense, respectively meaning "German as spoken in Germany" and "French as spoken in France"). But when the region subtag actually adds information, for example "pt-BR" for "Portuguese as spoken in Brazil", it should be kept. ====== ? Bests, -- Julien Palard https://mdk.fr -------------- next part -------------- An HTML attachment was scrubbed... URL: From vadmium+py at gmail.com Wed Mar 29 17:54:52 2017 From: vadmium+py at gmail.com (Martin Panter) Date: Thu, 30 Mar 2017 08:54:52 +1100 Subject: [Python-Dev] Issue with _thread.interrupt_main (29926) In-Reply-To: References: <20170328031116.GF27969@ando.pearwood.info> Message-ID: On 29 March 2017 at 00:40, Terry Reedy wrote: > [. . .] Eryk Sun suggested a patch for Windows, (and > the possibility of using pthread_kill). Can you possibly do one for *nix? > This is out of my ballpark, but the bug (relative to console behavior) is a > nuisance. I'll try to find time, but no promises. From brett at python.org Wed Mar 29 18:04:14 2017 From: brett at python.org (Brett Cannon) Date: Wed, 29 Mar 2017 22:04:14 +0000 Subject: [Python-Dev] PEP 545: Python Documentation Translations In-Reply-To: <6-6zandAUD_HqCXBfrlg0-lw5H9zwipt2GRfqW2ZptmVLdHkeFVyc7lvfIawnnm0qo3pV-CKdvXTmWenV-uA3IvxjGFdHMMSzdPYCcautMU=@palard.fr> References: <6-6zandAUD_HqCXBfrlg0-lw5H9zwipt2GRfqW2ZptmVLdHkeFVyc7lvfIawnnm0qo3pV-CKdvXTmWenV-uA3IvxjGFdHMMSzdPYCcautMU=@palard.fr> Message-ID: On Wed, 29 Mar 2017 at 13:13 Julien Palard wrote: > Hi Brett, thanks for the feedback! > > > > > > > > > > Please check with the PSF that this is what we really want. > > > Gladly, but ? how? I'm very new to all those process and have now idea on > how I can get in touch with PSF lawyers. > > What kind of support does Read the Docs have for translations? I have no > active plans to push for this but it has been idea in the back of my head > for a while so it would be good to know if such a move would make this > easier or harder. > > Read the Docs support translations [1]_, quoting them: > > > To support this, you will have one parent project and a number > > of projects marked as translations of that parent. Let?s use > > phpmyadmin as an example. > > > The main phpmyadmin project is the parent for all translations. > > Then you must create a project for each translation, for > > example phpmyadmin-spanish. You will set the Language for > > phpmyadmin-spanish to Spanish. 
In the parent projects > > Translations page, you will say that phpmyadmin-spanish is a > > translation for your project. > > > This has the results of serving: > > - phpmyadmin at http://phpmyadmin.readthedocs.io/en/latest/ > > - phpmyadmin-spanish at http://phpmyadmin.readthedocs.io/es/latest/ > > Which is nice as it's almost the same syntax the PEP proposes for paths: > /{language_tag}/{version_tag}. > Their language tags are simplified too (redundency removed (fr-FR ? fr)) > but not lowercased, and they > use underscore "instead of" dashes as a separator, see for example: > > - https://docs.phpmyadmin.net/fr/latest/ > - https://docs.phpmyadmin.net/pt_BR/latest/ > > while the PEP proposes /pt-br/ instead. > > .. [1] Project with multiple translations > ( > https://docs.readthedocs.io/en/latest/localization.html#project-with-multiple-translations > ) > Should we just match what Read the Docs does then? -------------- next part -------------- An HTML attachment was scrubbed... URL: From njs at pobox.com Wed Mar 29 19:36:57 2017 From: njs at pobox.com (Nathaniel Smith) Date: Wed, 29 Mar 2017 16:36:57 -0700 Subject: [Python-Dev] What version is an extension module binary compatible with In-Reply-To: References: Message-ID: On Wed, Mar 29, 2017 at 8:22 AM, Paul Moore wrote: > On 28 March 2017 at 17:31, Nathaniel Smith wrote: >> IMO this is a bug, and depending on how many packages are affected it might >> even call for an emergency 3.6.2. The worst case is that we start getting >> large numbers of packages uploaded to pypi that claim to be 3.6.0 compatible >> but that crash like crash with an obscure error when people download them. > > Has anyone logged this on bugs.python.org? There's nothing in the > Fedora bug referenced by the OP that indicates they've done so. I didn't see one, so: https://bugs.python.org/issue29943 -n -- Nathaniel J. Smith -- https://vorpus.org From tjreedy at udel.edu Wed Mar 29 21:14:20 2017 From: tjreedy at udel.edu (Terry Reedy) Date: Wed, 29 Mar 2017 21:14:20 -0400 Subject: [Python-Dev] PEP 545: Python Documentation Translations In-Reply-To: <6-6zandAUD_HqCXBfrlg0-lw5H9zwipt2GRfqW2ZptmVLdHkeFVyc7lvfIawnnm0qo3pV-CKdvXTmWenV-uA3IvxjGFdHMMSzdPYCcautMU=@palard.fr> References: <6-6zandAUD_HqCXBfrlg0-lw5H9zwipt2GRfqW2ZptmVLdHkeFVyc7lvfIawnnm0qo3pV-CKdvXTmWenV-uA3IvxjGFdHMMSzdPYCcautMU=@palard.fr> Message-ID: On 3/29/2017 4:13 PM, Julien Palard via Python-Dev wrote: > Gladly, but ? how? I'm very new to all those process and have now idea > on how I can get in touch with PSF lawyers. https://www.python.org/about/legal/ "If you have any questions, please send them to the legal mailing list at: legal at python.org." -- Terry Jan Reedy From ncoghlan at gmail.com Thu Mar 30 00:31:42 2017 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 30 Mar 2017 14:31:42 +1000 Subject: [Python-Dev] What version is an extension module binary compatible with In-Reply-To: References: Message-ID: On 29 March 2017 at 02:18, Paul Moore wrote: > On 28 March 2017 at 12:24, Miro Hron?ok wrote: >> I'd like some clarification on what ABI compatibility we can expect. >> * Should the ABI be stable across patch releases (so calling >> PySlice_AdjustIndices from an existing macro would be a bug)? >> * Should the ABI be forward-compatible within a minor release (so modules >> built for 3.6.0 should be usable with 3.6.1, but not vice versa)? >> * Or should we expect the ABI to change even across patch releases? > > Given that binary wheels are built against a specific minor version > (3.6, 3.5, ...) 
I would expect the ABI to be consistent over a minor > release. That would fit with my expectations of the compatibility > guarantees on patch releases. > > So I from what you describe, I'd consider this as a bug. Certainly, if > someone built a C extension as a wheel using Python 3.6.1, it would be > tagged as compatible with cp36, and pip would happily use it when > installing to a Python 3.6.0 system, where it would fail. Right, this is the main problem - while "build against the X.Y.0 headers" is useful advice, it's not something we've ever explicitly stated, and it's not something we can reasonably expect all providers of pre-built binary modules to do. Instead, it makes sense to explicitly strengthen the ABI guarantees within CPython maintenance releases, and add some automated testing to avoid accidental changes and oversights (similar to the pending test to ensure magic number stability for cached bytecode files) Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From vadmium+py at gmail.com Thu Mar 30 01:44:32 2017 From: vadmium+py at gmail.com (Martin Panter) Date: Thu, 30 Mar 2017 16:44:32 +1100 Subject: [Python-Dev] What version is an extension module binary compatible with In-Reply-To: References: Message-ID: On 30 March 2017 at 15:31, Nick Coghlan wrote: > On 29 March 2017 at 02:18, Paul Moore wrote: >> On 28 March 2017 at 12:24, Miro Hron?ok wrote: >>> I'd like some clarification on what ABI compatibility we can expect. >>> * Should the ABI be stable across patch releases (so calling >>> PySlice_AdjustIndices from an existing macro would be a bug)? >>> * Should the ABI be forward-compatible within a minor release (so modules >>> built for 3.6.0 should be usable with 3.6.1, but not vice versa)? >>> * Or should we expect the ABI to change even across patch releases? >> >> Given that binary wheels are built against a specific minor version >> (3.6, 3.5, ...) I would expect the ABI to be consistent over a minor >> release. That would fit with my expectations of the compatibility >> guarantees on patch releases. >> >> So I from what you describe, I'd consider this as a bug. Certainly, if >> someone built a C extension as a wheel using Python 3.6.1, it would be >> tagged as compatible with cp36, and pip would happily use it when >> installing to a Python 3.6.0 system, where it would fail. > > Right, this is the main problem - while "build against the X.Y.0 > headers" is useful advice, it's not something we've ever explicitly > stated, and it's not something we can reasonably expect all providers > of pre-built binary modules to do. > > Instead, it makes sense to explicitly strengthen the ABI guarantees > within CPython maintenance releases, and add some automated testing to > avoid accidental changes and oversights (similar to the pending test > to ensure magic number stability for cached bytecode files) There's a website that has nice reports of ABI compatibility of various packages and might useful for testing. It shows up the added PySlice_AdjustIndices function in 3.6.1, along with PySlice_Unpack (and some changes to internal names, so probably not such a problem). 
https://abi-laboratory.pro/tracker/compat_report/python/3.6.0/3.6.1/496a4/abi_compat_report.html From encukou at gmail.com Thu Mar 30 03:49:33 2017 From: encukou at gmail.com (Petr Viktorin) Date: Thu, 30 Mar 2017 09:49:33 +0200 Subject: [Python-Dev] What version is an extension module binary compatible with In-Reply-To: References: Message-ID: <021ca898-42e2-e222-3abb-adb5eec5b4ed@gmail.com> On 03/30/2017 06:31 AM, Nick Coghlan wrote: > On 29 March 2017 at 02:18, Paul Moore wrote: >> On 28 March 2017 at 12:24, Miro Hron?ok wrote: >>> I'd like some clarification on what ABI compatibility we can expect. >>> * Should the ABI be stable across patch releases (so calling >>> PySlice_AdjustIndices from an existing macro would be a bug)? >>> * Should the ABI be forward-compatible within a minor release (so modules >>> built for 3.6.0 should be usable with 3.6.1, but not vice versa)? >>> * Or should we expect the ABI to change even across patch releases? >> >> Given that binary wheels are built against a specific minor version >> (3.6, 3.5, ...) I would expect the ABI to be consistent over a minor >> release. That would fit with my expectations of the compatibility >> guarantees on patch releases. >> >> So I from what you describe, I'd consider this as a bug. Certainly, if >> someone built a C extension as a wheel using Python 3.6.1, it would be >> tagged as compatible with cp36, and pip would happily use it when >> installing to a Python 3.6.0 system, where it would fail. > > Right, this is the main problem - while "build against the X.Y.0 > headers" is useful advice, it's not something we've ever explicitly > stated, and it's not something we can reasonably expect all providers > of pre-built binary modules to do. Also, while building against 3.6.0 headers will ensure compatibility, it will also restore the original bug that PySlice_AdjustIndices fixes. Expecting extension authors to build against x.y.0 would lock them out of such bug fixes in later releases. > Instead, it makes sense to explicitly strengthen the ABI guarantees > within CPython maintenance releases, and add some automated testing to > avoid accidental changes and oversights (similar to the pending test > to ensure magic number stability for cached bytecode files) > > Cheers, > Nick. > From oleg at redhat.com Thu Mar 30 14:05:56 2017 From: oleg at redhat.com (Oleg Nesterov) Date: Thu, 30 Mar 2017 20:05:56 +0200 Subject: [Python-Dev] why _PyGen_Finalize(gen) propagates close() to _PyGen_yf() ? In-Reply-To: References: <20170320173026.GA28483@redhat.com> Message-ID: <20170330180556.GA29318@redhat.com> On 03/28, Eric Snow wrote: > > On Mon, Mar 20, 2017 at 11:30 AM, Oleg Nesterov wrote: > > Hello, > > > > Let me first clarify, I do not claim this is a bug, I am trying to learn > > python and now I trying to understand yield-from. > > Given that you haven't gotten a response here, and this looks a bit strange. My question is simple, the implementation looks clear and straightforward, I am a bit surprised none of cpython devs bothered to reply. > you may want to take > this over to the core-mentorship at python.org list. Well, I'm afraid to contact this closed and not-for-mortals list, not sure this very basic question should go there ;) perhaps you are already a member, feel free to forward. I downloaded micropython and it doesn't propagate .close() in this case. but it seem to differ very much from cpython, not sure this matters at all. Thanks, Oleg. 
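(For readers following the thread: the delegation Oleg is asking about is observable from pure Python. The following is a minimal illustrative sketch, with made-up function names, of close() being propagated through a yield from expression; it is not CPython's C implementation.)

    def inner():
        try:
            yield "inner value"
        except GeneratorExit:
            print("inner: close() was propagated here")
            raise                       # re-raise so the close completes normally

    def outer():
        yield from inner()              # delegation via yield from

    gen = outer()
    print(next(gen))                    # "inner value"; both generators are now suspended
    gen.close()                         # prints "inner: close() was propagated here"
    # Dropping the last reference to gen instead of calling close() has the
    # same effect, because generator finalization calls close().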
From njs at pobox.com Thu Mar 30 14:22:34 2017 From: njs at pobox.com (Nathaniel Smith) Date: Thu, 30 Mar 2017 11:22:34 -0700 Subject: [Python-Dev] why _PyGen_Finalize(gen) propagates close() to _PyGen_yf() ? In-Reply-To: <20170330180556.GA29318@redhat.com> References: <20170320173026.GA28483@redhat.com> <20170330180556.GA29318@redhat.com> Message-ID: On Thu, Mar 30, 2017 at 11:05 AM, Oleg Nesterov wrote: > On 03/28, Eric Snow wrote: >> >> On Mon, Mar 20, 2017 at 11:30 AM, Oleg Nesterov wrote: >> > Hello, >> > >> > Let me first clarify, I do not claim this is a bug, I am trying to learn >> > python and now I trying to understand yield-from. >> >> Given that you haven't gotten a response here, > > and this looks a bit strange. My question is simple, the implementation looks > clear and straightforward, I am a bit surprised none of cpython devs bothered > to reply. > >> you may want to take >> this over to the core-mentorship at python.org list. > > Well, I'm afraid to contact this closed and not-for-mortals list, not sure > this very basic question should go there ;) perhaps you are already a member, > feel free to forward. core-mentorship is intended as a friendly place for folks who are starting to study CPython internals. I'm not sure where you got the impression that it's not-for-mortals but I suspect the people running it would like to know so they can fix it :-). In any case the short answer to your original question is that PEP 342 says that generator finalization calls the generator's close() method, which throws a GeneratorExit into the generator, and PEP 380 says that as a special case, when a GeneratorExit is thrown into a yield from, then this is propagated by calling .close() on the yielded-from iterator (if such a method exists) and then re-raised in the original generator. -n -- Nathaniel J. Smith -- https://vorpus.org

From zouyuheng1998 at gmail.com Fri Mar 31 00:38:30 2017 From: zouyuheng1998 at gmail.com (Yuheng Zou) Date: Fri, 31 Mar 2017 12:38:30 +0800 Subject: [Python-Dev] Developing a Python JIT and have troubld Message-ID: I am building a Python JIT, so I want to change the interp->eval_frame to my own function. I built a C++ library which contains EvalFrame function, and then use dlopen and dlsym to use it. It looks like this:

    extern "C" PyObject *EvalFrame(PyFrameObject *f, int throwflag) {
        return _PyEval_EvalFrameDefault(f, throwflag);
    }

I added following code to Python/pylifecycle.c at function _Py_InitializeEx_Private (Python version is 3.6.1):

    void *pyjit = NULL;
    pyjit = dlopen("../cmake-build-debug/libPubbon.dylib", 0);
    if (pyjit != NULL) {
        interp->eval_frame = (_PyFrameEvalFunction)dlsym(pyjit, "EvalFrame");
        //interp->eval_frame = _PyEval_EvalFrameDefault;
    }

Then something strange happened. I used LLDB to trace the variables. When it ran at EvalFrame, the address of f pointer didn't change, but f->f_lineno changed. Why the address of the pointer didn't change, but the context change? I am working on Mac OS X and Python 3.6.1. I want to know how to replace _PyEval_EvalFrameDefault in interp->eval_frame with my own function. -------------- next part -------------- An HTML attachment was scrubbed...
URL: From siu at continuum.io Fri Mar 31 11:15:04 2017 From: siu at continuum.io (Siu Kwan Lam) Date: Fri, 31 Mar 2017 15:15:04 +0000 Subject: [Python-Dev] [Python-compilers] Developing a Python JIT and have troubld In-Reply-To: References: Message-ID: I have never tried PEP0523 before so I have just did a quick look and pushed what I got to https://github.com/sklam/etude_py36_custom_jit. If you run https://github.com/sklam/etude_py36_custom_jit/blob/master/test.py, you should get the following printouts: Hello Hey Yes ** myjit is evaluating frame=0x10c623048 lasti=-1 lineno=10 Enter apple() ** myjit is evaluating frame=0x7f9a74e02178 lasti=-1 lineno=16 Enter orange() Exit orange() Exit apple() ** myjit is evaluating frame=0x10c460d48 lasti=-1 lineno=27 The frame is different for each method. Can you try your implementation with my test so we can compare? On Thu, Mar 30, 2017 at 11:46 PM Yuheng Zou wrote: I am building a Python JIT, so I want to change the interp->eval_frame to my own function. I built a C++ library which contains EvalFrame function, and then use dlopen and dlsym to use it. It looks like this: extern "C" PyObject *EvalFrame(PyFrameObject *f, int throwflag) { return _PyEval_EvalFrameDefault(f, throwflag);} I added following code to Python/pylifecycle.c at function _Py_InitializeEx_Private(Python version is 3.6.1): void *pyjit = NULL; pyjit = dlopen("../cmake-build-debug/libPubbon.dylib", 0);if (pyjit != NULL) { interp->eval_frame = (_PyFrameEvalFunction)dlsym(pyjit, "EvalFrame"); //interp->eval_frame = _PyEval_EvalFrameDefault;} Then something strange happened. I used LLDB to trace the variables. When it ran at EvalFrame, the address of f pointer didn't change, but f->f_lineno changed. Why the address of the pointer didn't change, but the context change? I am working on Mac OS X and Python 3.6.1. I want to know how to replace _PyEval_EvalFrameDefault in interp->eval_frame with my own function. _______________________________________________ Python-compilers mailing list Python-compilers at python.org https://mail.python.org/mailman/listinfo/python-compilers -- Siu Kwan Lam Software Engineer Continuum Analytics -------------- next part -------------- An HTML attachment was scrubbed... URL: From victor.stinner at gmail.com Fri Mar 31 11:46:31 2017 From: victor.stinner at gmail.com (Victor Stinner) Date: Fri, 31 Mar 2017 17:46:31 +0200 Subject: [Python-Dev] Questions on the CPython Git master branch: how to exclude commits of 3.x branches? Message-ID: Hi, The CPython repository was converted from Mercurial to Git. Before with Mercurial, we used extensively merges. For example, a bug was fixed in branche 3.5, merged into 3.6 and then merged into master. With the conversion to Git, some merges commit are removed, some others are kept. My question is how to list commits which are only part of the "master" branch, as "hg log default" in Mercurial. "git log origin/master" lists also commits coming from 3.x branches and their merges. The problem is that if you pick a commit from a different branch, you compile Python 3.x, instead of compiling Python for the master branch. Right now, my need is to find the first commit in the "master" branch after a specific date. For example, find the first commit after 2016-01-01 00:00. 
Naive solution: --- $ git log --since="2016-01-01 00:00" origin/master --reverse|head commit 75e3630c6071819d3674d956ea754ccb4fed5271 Author: Benjamin Peterson Date: Fri Jan 1 10:23:45 2016 -0600 2016 will be another year of writing copyrighted code --- If you compile the revision 75e3630c6071819d3674d956ea754ccb4fed5271, you get Python 3.3: --- $ grep PY_M Include/patchlevel.h #define PY_MAJOR_VERSION 3 #define PY_MINOR_VERSION 3 #define PY_MICRO_VERSION 6 --- But if you exclude manually commits which are in branches 3.x, you get the commit 71db903563906cedfc098418659d1200043cd14c which gives a different Python version: --- $ grep PY_M Include/patchlevel.h #define PY_MAJOR_VERSION 3 #define PY_MINOR_VERSION 6 #define PY_MICRO_VERSION 0 --- In fact, I wrote a tool to manually exclude commits of branches 3.x: https://github.com/haypo/misc/blob/master/misc/find_git_revisions_by_date.py But it's super slow! Are there builtin options to only show Git commits which are in master branch but not in 3.x branches? Asked differently: how can I only see two commits on the following range? What are [options]? git rev-list 288cb25f1a208fe09b9e06ba479e11c1157da4b5..71db903563906cedfc098418659d1200043cd14c [options] Commits after 2016-01-01: --- $ git checkout 71db903563906cedfc098418659d1200043cd14c $ git log --graph * commit 71db903563906cedfc098418659d1200043cd14c |\ Merge: 288cb25 4c70293 | | Author: Benjamin Peterson | | Date: Fri Jan 1 10:25:22 2016 -0600 | | | | merge 3.5 | | | * commit 4c70293755ce8ea0adc5b224c714da2b7625d232 | |\ Merge: 42bf8fc e8c2a95 | | | Author: Benjamin Peterson | | | Date: Fri Jan 1 10:25:12 2016 -0600 | | | | | | merge 3.4 | | | | | * commit e8c2a957c87980a1fd79c39597d40e5c5aeb7048 | | |\ Merge: 52d6c2c 75e3630 | | | | Author: Benjamin Peterson | | | | Date: Fri Jan 1 10:24:21 2016 -0600 | | | | | | | | merge 3.3 | | | | | | | * commit 75e3630c6071819d3674d956ea754ccb4fed5271 | | | | Author: Benjamin Peterson | | | | Date: Fri Jan 1 10:23:45 2016 -0600 | | | | | | | | 2016 will be another year of writing copyrighted code | | | | * | | | commit 288cb25f1a208fe09b9e06ba479e11c1157da4b5 |\ \ \ \ Merge: 58f8833 42bf8fc | |/ / / Author: Serhiy Storchaka | | | | Date: Wed Dec 30 21:41:53 2015 +0200 | | | | | | | | Issue #25961: Disallowed null characters in the type name. | | | | Simplified testing for null characters in __name__ setter. | | | | --- Victor From mariatta.wijaya at gmail.com Fri Mar 31 11:56:12 2017 From: mariatta.wijaya at gmail.com (Mariatta Wijaya) Date: Fri, 31 Mar 2017 08:56:12 -0700 Subject: [Python-Dev] Questions on the CPython Git master branch: how to exclude commits of 3.x branches? In-Reply-To: References: Message-ID: Can you try git log master ^3.6 I think it will give what's on master and not in 3.6 On Mar 31, 2017 8:47 AM, "Victor Stinner" wrote: > Hi, > > The CPython repository was converted from Mercurial to Git. Before > with Mercurial, we used extensively merges. For example, a bug was > fixed in branche 3.5, merged into 3.6 and then merged into master. > With the conversion to Git, some merges commit are removed, some > others are kept. > > My question is how to list commits which are only part of the "master" > branch, as "hg log default" in Mercurial. "git log origin/master" > lists also commits coming from 3.x branches and their merges. The > problem is that if you pick a commit from a different branch, you > compile Python 3.x, instead of compiling Python for the master branch. 
> > Right now, my need is to find the first commit in the "master" branch > after a specific date. For example, find the first commit after > 2016-01-01 00:00. > > Naive solution: > --- > $ git log --since="2016-01-01 00:00" origin/master --reverse|head > commit 75e3630c6071819d3674d956ea754ccb4fed5271 > Author: Benjamin Peterson > Date: Fri Jan 1 10:23:45 2016 -0600 > > 2016 will be another year of writing copyrighted code > --- > > If you compile the revision 75e3630c6071819d3674d956ea754ccb4fed5271, > you get Python 3.3: > --- > $ grep PY_M Include/patchlevel.h > #define PY_MAJOR_VERSION 3 > #define PY_MINOR_VERSION 3 > #define PY_MICRO_VERSION 6 > --- > > But if you exclude manually commits which are in branches 3.x, you get > the commit 71db903563906cedfc098418659d1200043cd14c which gives a > different Python version: > --- > $ grep PY_M Include/patchlevel.h > #define PY_MAJOR_VERSION 3 > #define PY_MINOR_VERSION 6 > #define PY_MICRO_VERSION 0 > --- > > In fact, I wrote a tool to manually exclude commits of branches 3.x: > > https://github.com/haypo/misc/blob/master/misc/find_git_ > revisions_by_date.py > > But it's super slow! Are there builtin options to only show Git > commits which are in master branch but not in 3.x branches? > > Asked differently: how can I only see two commits on the following > range? What are [options]? > > git rev-list 288cb25f1a208fe09b9e06ba479e11c1157da4b5.. > 71db903563906cedfc098418659d1200043cd14c > [options] > > Commits after 2016-01-01: > --- > $ git checkout 71db903563906cedfc098418659d1200043cd14c > $ git log --graph > * commit 71db903563906cedfc098418659d1200043cd14c > |\ Merge: 288cb25 4c70293 > | | Author: Benjamin Peterson > | | Date: Fri Jan 1 10:25:22 2016 -0600 > | | > | | merge 3.5 > | | > | * commit 4c70293755ce8ea0adc5b224c714da2b7625d232 > | |\ Merge: 42bf8fc e8c2a95 > | | | Author: Benjamin Peterson > | | | Date: Fri Jan 1 10:25:12 2016 -0600 > | | | > | | | merge 3.4 > | | | > | | * commit e8c2a957c87980a1fd79c39597d40e5c5aeb7048 > | | |\ Merge: 52d6c2c 75e3630 > | | | | Author: Benjamin Peterson > | | | | Date: Fri Jan 1 10:24:21 2016 -0600 > | | | | > | | | | merge 3.3 > | | | | > | | | * commit 75e3630c6071819d3674d956ea754ccb4fed5271 > | | | | Author: Benjamin Peterson > | | | | Date: Fri Jan 1 10:23:45 2016 -0600 > | | | | > | | | | 2016 will be another year of writing copyrighted code > | | | | > * | | | commit 288cb25f1a208fe09b9e06ba479e11c1157da4b5 > |\ \ \ \ Merge: 58f8833 42bf8fc > | |/ / / Author: Serhiy Storchaka > | | | | Date: Wed Dec 30 21:41:53 2015 +0200 > | | | | > | | | | Issue #25961: Disallowed null characters in the type name. > | | | | Simplified testing for null characters in __name__ setter. > | | | | > --- > > Victor > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/ > mariatta.wijaya%40gmail.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: From status at bugs.python.org Fri Mar 31 12:09:11 2017 From: status at bugs.python.org (Python tracker) Date: Fri, 31 Mar 2017 18:09:11 +0200 (CEST) Subject: [Python-Dev] Summary of Python tracker Issues Message-ID: <20170331160911.C7B1856A92@psf.upfronthosting.co.za> ACTIVITY SUMMARY (2017-03-24 - 2017-03-31) Python tracker at http://bugs.python.org/ To view or respond to any of the issues listed below, click on the issue. Do NOT respond to this message. 
Issues counts and deltas: open 5855 ( -8) closed 35854 (+65) total 41709 (+57) Open issues with patches: 2419 Issues opened (36) ================== #11913: sdist refuses README.rst http://bugs.python.org/issue11913 reopened by pitrou #29880: python3.6 install readline ,and then cpython exit http://bugs.python.org/issue29880 reopened by pz #29897: itertools.chain behaves strangly when copied with copy.copy http://bugs.python.org/issue29897 opened by MSeifert #29898: PYTHONLEGACYWINDOWSIOENCODING isn't implemented http://bugs.python.org/issue29898 opened by eryksun #29899: zlib missing when --enable--optimizations option appended http://bugs.python.org/issue29899 opened by kyren ????????? #29902: copy breaks staticmethod http://bugs.python.org/issue29902 opened by dangyogi #29903: struct.Struct Addition http://bugs.python.org/issue29903 opened by palaviv #29905: TypeErrors not formatting values correctly http://bugs.python.org/issue29905 opened by Jim Fasarakis-Hilliard #29906: Add callback parameter to concurrent.futures.Executor.map http://bugs.python.org/issue29906 opened by aron.bordin #29909: types.coroutine monkey patches original function http://bugs.python.org/issue29909 opened by Omnifarious #29910: Ctrl-D eats a character on IDLE http://bugs.python.org/issue29910 opened by serhiy.storchaka #29911: Uninstall command line in Windows registry does not uninstall http://bugs.python.org/issue29911 opened by Christian.Ullrich #29914: Incorrect signatures of object.__reduce__() and object.__reduc http://bugs.python.org/issue29914 opened by serhiy.storchaka #29915: Drop Mac OS X Tiger support in Python 3.7? http://bugs.python.org/issue29915 opened by haypo #29916: No explicit documentation for PyGetSetDef and getter and sette http://bugs.python.org/issue29916 opened by MSeifert #29920: Document cgitb.text and cgitb.html http://bugs.python.org/issue29920 opened by xmorel #29922: error message when __aexit__ is not async http://bugs.python.org/issue29922 opened by Tadhg McDonald-Jensen #29925: test_uuid fails on OS X Tiger http://bugs.python.org/issue29925 opened by haypo #29926: time.sleep ignores _thread.interrupt_main() http://bugs.python.org/issue29926 opened by Mark #29929: Eliminate implicit __main__ relative imports http://bugs.python.org/issue29929 opened by ncoghlan #29930: Waiting for asyncio.StreamWriter.drain() twice in parallel rai http://bugs.python.org/issue29930 opened by metathink #29931: ipaddress.ip_interface __lt__ check seems to be broken http://bugs.python.org/issue29931 opened by Sanjay #29933: asyncio: set_write_buffer_limits() doc doesn't specify unit of http://bugs.python.org/issue29933 opened by haypo #29937: argparse mutex group should allow mandatory parameters http://bugs.python.org/issue29937 opened by Mark Nolan #29939: Compiler warning in _ctypes_test.c http://bugs.python.org/issue29939 opened by serhiy.storchaka #29940: Add follow_wrapped=True option to help() http://bugs.python.org/issue29940 opened by samwyse #29941: Confusion between asserts and Py_DEBUG http://bugs.python.org/issue29941 opened by Thomas Wouters #29943: PySlice_GetIndicesEx change broke ABI in 3.5 and 3.6 branches http://bugs.python.org/issue29943 opened by njs #29944: Argumentless super() calls do not work in classes constructed http://bugs.python.org/issue29944 opened by assume_away #29947: In SocketServer, why not passing a factory instance for the Re http://bugs.python.org/issue29947 opened by dominic108 #29948: DeprecationWarning when parse ElementTree with a doctype in 2. 
http://bugs.python.org/issue29948 opened by serhiy.storchaka #29949: sizeof set after set_merge() is doubled from 3.5 http://bugs.python.org/issue29949 opened by inada.naoki #29950: Rename SlotWrapperType to WrapperDescriptorType http://bugs.python.org/issue29950 opened by Jim Fasarakis-Hilliard #29951: PyArg_ParseTupleAndKeywords exception messages containing "fun http://bugs.python.org/issue29951 opened by MSeifert #29952: "keys and values" is preferred to "keys and elements" for name http://bugs.python.org/issue29952 opened by cocoatomo #29953: Memory leak in the replace() method of datetime and time objec http://bugs.python.org/issue29953 opened by serhiy.storchaka Most recent 15 issues with no replies (15) ========================================== #29953: Memory leak in the replace() method of datetime and time objec http://bugs.python.org/issue29953 #29950: Rename SlotWrapperType to WrapperDescriptorType http://bugs.python.org/issue29950 #29948: DeprecationWarning when parse ElementTree with a doctype in 2. http://bugs.python.org/issue29948 #29940: Add follow_wrapped=True option to help() http://bugs.python.org/issue29940 #29937: argparse mutex group should allow mandatory parameters http://bugs.python.org/issue29937 #29925: test_uuid fails on OS X Tiger http://bugs.python.org/issue29925 #29916: No explicit documentation for PyGetSetDef and getter and sette http://bugs.python.org/issue29916 #29914: Incorrect signatures of object.__reduce__() and object.__reduc http://bugs.python.org/issue29914 #29906: Add callback parameter to concurrent.futures.Executor.map http://bugs.python.org/issue29906 #29905: TypeErrors not formatting values correctly http://bugs.python.org/issue29905 #29895: Distutils blows up with an incorrect pypirc, should be caught http://bugs.python.org/issue29895 #29886: links between binascii.{un,}hexlify / bytes.{,to}hex http://bugs.python.org/issue29886 #29883: asyncio: Windows Proactor Event Loop UDP Support http://bugs.python.org/issue29883 #29877: compileall hangs when accessing urandom even if number of work http://bugs.python.org/issue29877 #29868: multiprocessing.dummy missing cpu_count http://bugs.python.org/issue29868 Most recent 15 issues waiting for review (15) ============================================= #29953: Memory leak in the replace() method of datetime and time objec http://bugs.python.org/issue29953 #29951: PyArg_ParseTupleAndKeywords exception messages containing "fun http://bugs.python.org/issue29951 #29930: Waiting for asyncio.StreamWriter.drain() twice in parallel rai http://bugs.python.org/issue29930 #29914: Incorrect signatures of object.__reduce__() and object.__reduc http://bugs.python.org/issue29914 #29897: itertools.chain behaves strangly when copied with copy.copy http://bugs.python.org/issue29897 #29869: Underscores in numeric literals not supported in lib2to3. 
http://bugs.python.org/issue29869 #29867: Add asserts in PyXXX_GET_SIZE macros http://bugs.python.org/issue29867 #29858: inspect.signature includes bound argument for wrappers around http://bugs.python.org/issue29858 #29854: Segfault when readline history is more then 2 * history size http://bugs.python.org/issue29854 #29843: errors raised by ctypes.Array for invalid _length_ attribute http://bugs.python.org/issue29843 #29840: Avoid raising OverflowError in bool() http://bugs.python.org/issue29840 #29839: Avoid raising OverflowError in len() when __len__() returns ne http://bugs.python.org/issue29839 #29838: Check that sq_length and mq_length return non-negative result http://bugs.python.org/issue29838 #29822: inspect.isabstract does not work on abstract base classes duri http://bugs.python.org/issue29822 #29803: Remove some redandunt ops in unicodeobject.c http://bugs.python.org/issue29803 Top 10 most discussed issues (10) ================================= #27593: Deprecate sys._mercurial and create sys._git http://bugs.python.org/issue27593 14 msgs #29926: time.sleep ignores _thread.interrupt_main() http://bugs.python.org/issue29926 10 msgs #29572: Upgrade installers to OpenSSL 1.0.2k http://bugs.python.org/issue29572 8 msgs #29573: NamedTemporaryFile with delete=True should not fail if file al http://bugs.python.org/issue29573 8 msgs #29887: test_normalization doesn't work http://bugs.python.org/issue29887 7 msgs #29899: zlib missing when --enable--optimizations option appended http://bugs.python.org/issue29899 7 msgs #29930: Waiting for asyncio.StreamWriter.drain() twice in parallel rai http://bugs.python.org/issue29930 7 msgs #29949: sizeof set after set_merge() is doubled from 3.5 http://bugs.python.org/issue29949 7 msgs #29951: PyArg_ParseTupleAndKeywords exception messages containing "fun http://bugs.python.org/issue29951 7 msgs #28556: typing.py upgrades http://bugs.python.org/issue28556 6 msgs Issues closed (67) ================== #6532: thread.get_ident() should return unsigned value http://bugs.python.org/issue6532 closed by serhiy.storchaka #10030: Patch for zip decryption speedup http://bugs.python.org/issue10030 closed by serhiy.storchaka #12518: In string.Template it's impossible to transform delimiter in t http://bugs.python.org/issue12518 closed by barry #16011: "in" should be consistent with return value of __contains__ http://bugs.python.org/issue16011 closed by Mariatta #19791: test_pathlib should use can_symlink or skip_unless_symlink fro http://bugs.python.org/issue19791 closed by brett.cannon #19824: string.Template: Rewrite docs to emphasize i18n use case http://bugs.python.org/issue19824 closed by serhiy.storchaka #20314: Potentially confusing formulation in 6.1.4. 
Template strings http://bugs.python.org/issue20314 closed by barry #20548: Use specific asserts in warnings and exceptions tests http://bugs.python.org/issue20548 closed by serhiy.storchaka #20552: Use specific asserts in bytes tests http://bugs.python.org/issue20552 closed by terry.reedy #22049: argparse: type= doesn't honor nargs > 1 http://bugs.python.org/issue22049 closed by paul.j3 #22392: Clarify documentation of __getinitargs__ http://bugs.python.org/issue22392 closed by Mariatta #22744: os.mkdir on Windows silently strips trailing blanks from direc http://bugs.python.org/issue22744 closed by serhiy.storchaka #22962: ipaddress: Add optional prefixlen argument to ip_interface and http://bugs.python.org/issue22962 closed by Gary.van.der.Merwe #23241: shutil should accept pathlib types http://bugs.python.org/issue23241 closed by serhiy.storchaka #23487: argparse: add_subparsers 'action' broken http://bugs.python.org/issue23487 closed by paul.j3 #23901: Force console stdout to use UTF8 on Windows http://bugs.python.org/issue23901 closed by martin.panter #24154: pathlib.Path.rename moves file to Path.cwd() when argument is http://bugs.python.org/issue24154 closed by serhiy.storchaka #24251: Different behavior for argparse between 2.7.8 and 2.7.9 when a http://bugs.python.org/issue24251 closed by paul.j3 #24821: The optimization of string search can cause pessimization http://bugs.python.org/issue24821 closed by serhiy.storchaka #25803: pathlib.Path('/').mkdir() raises wrong error type http://bugs.python.org/issue25803 closed by serhiy.storchaka #25996: Add support of file descriptor in os.scandir() http://bugs.python.org/issue25996 closed by serhiy.storchaka #27446: struct: allow per-item byte order http://bugs.python.org/issue27446 closed by rhettinger #28692: gettext: deprecate selecting plural form by fractional numbers http://bugs.python.org/issue28692 closed by serhiy.storchaka #28699: Imap from ThreadPool behaves unexpectedly http://bugs.python.org/issue28699 closed by xiang.zhang #28810: Document bytecode changes in 3.6 http://bugs.python.org/issue28810 closed by levkivskyi #29176: /tmp does not exist on Android and is used by curses.window.pu http://bugs.python.org/issue29176 closed by haypo #29204: Add code deprecations in ElementTree http://bugs.python.org/issue29204 closed by serhiy.storchaka #29557: binhex documentation claims unknown bug http://bugs.python.org/issue29557 closed by serhiy.storchaka #29619: st_ino (unsigned long long) is casted to long long in posixmod http://bugs.python.org/issue29619 closed by xiang.zhang #29632: argparse values for action in add_argument() should be flags i http://bugs.python.org/issue29632 closed by pgacv2 #29643: --enable-optimizations compiler flag has no effect http://bugs.python.org/issue29643 closed by inada.naoki #29677: clarify docs about 'round()' accepting a negative integer for http://bugs.python.org/issue29677 closed by Mariatta #29720: potential silent truncation in PyLong_AsVoidPtr http://bugs.python.org/issue29720 closed by haypo #29737: Optimize concatenating empty tuples http://bugs.python.org/issue29737 closed by serhiy.storchaka #29816: Get rid of C limitation for shift count in right shift http://bugs.python.org/issue29816 closed by serhiy.storchaka #29852: Argument Clinic: add common converter to Py_ssize_t that accep http://bugs.python.org/issue29852 closed by serhiy.storchaka #29862: Fix grammar typo in importlib.reload() exception http://bugs.python.org/issue29862 closed by Mariatta #29878: Add global instances of 
int 0 and 1 http://bugs.python.org/issue29878 closed by serhiy.storchaka #29884: faulthandler does not properly restore sigaltstack during tear http://bugs.python.org/issue29884 closed by Mariatta #29888: The link referring to "Python download page" is broken http://bugs.python.org/issue29888 closed by ned.deily #29892: change statement for open() is splited into two part in middle http://bugs.python.org/issue29892 closed by Mariatta #29894: Deprecate returning a subclass of complex from __complex__ http://bugs.python.org/issue29894 closed by serhiy.storchaka #29896: ElementTree.fromstring raises undocumented UnicodeError http://bugs.python.org/issue29896 closed by terry.reedy #29900: Remove unneeded wrappers in pathlib http://bugs.python.org/issue29900 closed by serhiy.storchaka #29901: Support path-like objects in zipapp http://bugs.python.org/issue29901 closed by serhiy.storchaka #29904: Fix a number of error message typos http://bugs.python.org/issue29904 closed by Jim Fasarakis-Hilliard #29907: Unicode encoding failure http://bugs.python.org/issue29907 closed by eryksun #29908: Inconsistent crashing with an access violation http://bugs.python.org/issue29908 closed by Cameron Mckain #29912: Overlapping tests between list_tests and seq_tests http://bugs.python.org/issue29912 closed by brett.cannon #29913: ipadress compare_networks does not work according to documenta http://bugs.python.org/issue29913 closed by xiang.zhang #29917: Wrong link target in PyMethodDef documentation http://bugs.python.org/issue29917 closed by orsenthil #29918: Missed "const" modifiers in C API documentation http://bugs.python.org/issue29918 closed by serhiy.storchaka #29919: Remove unused imports found by pyflakes http://bugs.python.org/issue29919 closed by haypo #29921: datetime validation is stricter in 3.6.1 than previous version http://bugs.python.org/issue29921 closed by haypo #29923: PEP487 __init_subclass__ incompatible with abc.ABCMeta http://bugs.python.org/issue29923 closed by xiang.zhang #29924: Useless argument in call to PyErr_Format http://bugs.python.org/issue29924 closed by haypo #29927: Unnecessary code in the c-api/exceptions.c http://bugs.python.org/issue29927 closed by xiang.zhang #29928: Add f-strings to Glossary http://bugs.python.org/issue29928 closed by Mariatta #29932: Missing word ("be") in error message ("first argument must a t http://bugs.python.org/issue29932 closed by brett.cannon #29934: % formatting fails to find formatting code in bytes type after http://bugs.python.org/issue29934 closed by xiang.zhang #29935: list and tuple index methods should accept None parameters http://bugs.python.org/issue29935 closed by serhiy.storchaka #29936: Typo in __GNU*C*_MINOR__ guard affecting gcc 3.x http://bugs.python.org/issue29936 closed by benjamin.peterson #29938: subprocess.run calling bash on windows10 cause 0x80070057 erro http://bugs.python.org/issue29938 closed by eryksun #29942: Stack overflow in itertools.chain.from_iterable. http://bugs.python.org/issue29942 closed by twouters #29945: decode string:u"\ufffd" UnicodeEncodeError http://bugs.python.org/issue29945 closed by eryksun #29946: compiler warning "sqrtpi defined but not used" http://bugs.python.org/issue29946 closed by xiang.zhang #1117601: os.path.exists returns false negatives in MAC environments. 
http://bugs.python.org/issue1117601 closed by serhiy.storchaka From rymg19 at gmail.com Fri Mar 31 12:36:27 2017 From: rymg19 at gmail.com (Ryan Gonzalez) Date: Fri, 31 Mar 2017 11:36:27 -0500 Subject: [Python-Dev] Questions on the CPython Git master branch: how to exclude commits of 3.x branches? In-Reply-To: References: Message-ID: On Mar 31, 2017 10:48 AM, "Victor Stinner" wrote: Hi, The CPython repository was converted from Mercurial to Git. Before with Mercurial, we used extensively merges. For example, a bug was fixed in branche 3.5, merged into 3.6 and then merged into master. With the conversion to Git, some merges commit are removed, some others are kept. My question is how to list commits which are only part of the "master" branch, as "hg log default" in Mercurial. "git log origin/master" lists also commits coming from 3.x branches and their merges. The problem is that if you pick a commit from a different branch, you compile Python 3.x, instead of compiling Python for the master branch. I think you want: git log --no-merges --first-parent Right now, my need is to find the first commit in the "master" branch after a specific date. For example, find the first commit after 2016-01-01 00:00. Naive solution: --- $ git log --since="2016-01-01 00:00" origin/master --reverse|head commit 75e3630c6071819d3674d956ea754ccb4fed5271 Author: Benjamin Peterson Date: Fri Jan 1 10:23:45 2016 -0600 2016 will be another year of writing copyrighted code --- If you compile the revision 75e3630c6071819d3674d956ea754ccb4fed5271, you get Python 3.3: --- $ grep PY_M Include/patchlevel.h #define PY_MAJOR_VERSION 3 #define PY_MINOR_VERSION 3 #define PY_MICRO_VERSION 6 --- But if you exclude manually commits which are in branches 3.x, you get the commit 71db903563906cedfc098418659d1200043cd14c which gives a different Python version: --- $ grep PY_M Include/patchlevel.h #define PY_MAJOR_VERSION 3 #define PY_MINOR_VERSION 6 #define PY_MICRO_VERSION 0 --- In fact, I wrote a tool to manually exclude commits of branches 3.x: https://github.com/haypo/misc/blob/master/misc/find_git_revisions_by_date.py But it's super slow! Are there builtin options to only show Git commits which are in master branch but not in 3.x branches? Asked differently: how can I only see two commits on the following range? What are [options]? git rev-list 288cb25f1a208fe09b9e06ba479e11c1157da4b5.. 71db903563906cedfc098418659d1200043cd14c [options] Commits after 2016-01-01: --- $ git checkout 71db903563906cedfc098418659d1200043cd14c $ git log --graph * commit 71db903563906cedfc098418659d1200043cd14c |\ Merge: 288cb25 4c70293 | | Author: Benjamin Peterson | | Date: Fri Jan 1 10:25:22 2016 -0600 | | | | merge 3.5 | | | * commit 4c70293755ce8ea0adc5b224c714da2b7625d232 | |\ Merge: 42bf8fc e8c2a95 | | | Author: Benjamin Peterson | | | Date: Fri Jan 1 10:25:12 2016 -0600 | | | | | | merge 3.4 | | | | | * commit e8c2a957c87980a1fd79c39597d40e5c5aeb7048 | | |\ Merge: 52d6c2c 75e3630 | | | | Author: Benjamin Peterson | | | | Date: Fri Jan 1 10:24:21 2016 -0600 | | | | | | | | merge 3.3 | | | | | | | * commit 75e3630c6071819d3674d956ea754ccb4fed5271 | | | | Author: Benjamin Peterson | | | | Date: Fri Jan 1 10:23:45 2016 -0600 | | | | | | | | 2016 will be another year of writing copyrighted code | | | | * | | | commit 288cb25f1a208fe09b9e06ba479e11c1157da4b5 |\ \ \ \ Merge: 58f8833 42bf8fc | |/ / / Author: Serhiy Storchaka | | | | Date: Wed Dec 30 21:41:53 2015 +0200 | | | | | | | | Issue #25961: Disallowed null characters in the type name. 
| | | | Simplified testing for null characters in __name__ setter. | | | | --- Victor _______________________________________________ Python-Dev mailing list Python-Dev at python.org https://mail.python.org/mailman/listinfo/python-dev Unsubscribe: https://mail.python.org/mailman/options/python-dev/ rymg19%40gmail.com -- Ryan (????) Yoko Shimomura > ryo (supercell/EGOIST) > Hiroyuki Sawano >> everyone else http://refi64.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From victor.stinner at gmail.com Fri Mar 31 19:29:36 2017 From: victor.stinner at gmail.com (Victor Stinner) Date: Sat, 1 Apr 2017 01:29:36 +0200 Subject: [Python-Dev] Questions on the CPython Git master branch: how to exclude commits of 3.x branches? In-Reply-To: References: Message-ID: 2017-03-31 18:36 GMT+02:00 Ryan Gonzalez : > I think you want: > > git log --no-merges --first-parent Oh, I mised --first-parent: it seems like it fixed my issue, thanks! But --no-merges is not what I want. I want to see merge commits which are only in the master branch. Victor From vadmium+py at gmail.com Fri Mar 31 21:35:00 2017 From: vadmium+py at gmail.com (Martin Panter) Date: Sat, 1 Apr 2017 01:35:00 +0000 Subject: [Python-Dev] why _PyGen_Finalize(gen) propagates close() to _PyGen_yf() ? In-Reply-To: References: <20170320173026.GA28483@redhat.com> <20170330180556.GA29318@redhat.com> Message-ID: On 31 March 2017 at 05:22, Nathaniel Smith wrote: >>> On Mon, Mar 20, 2017 at 11:30 AM, Oleg Nesterov wrote: >>> > [Aborting "yield" in a "for" loop leaves a sub-generator open, but aborting "yield from" cleans up the sub-generator] > > In any case the short answer to your original question is that PEP 342 > says that generator finalization calls the generator's close() method, > which throws a GeneratorExit into the generator, and PEP 380 says that > as a special case, when a GeneratorExit is thrown into a yield from, > then this is propagated by calling .close() on the yielded-from > iterator (if such a method exists) and then re-raised in the original > generator. I think the Python documentation could be improved regarding this. When I wrote the documentation for coroutine methods , I included details about the "close" and "throw" methods delegating to inner iterators. I thought I was going to propose similar updates to the generator documentation , but it seems I never got around to it. (In the mean time, was added, to which this may also be relevant, but that is too complicated for me.) There is a parallel with another annoyance with Python generator cleanup: . There are two ways you can require generators to be used. With a simple generator, you can partially iterate it and then throw it away without any special cleaning up. But with more complex generators that "own" expensive resources, it would be nice to produce a ResourceWarning I you forget to clean them up. With the "for / yield" case, the sub-generator is not cleaned up, so if a resource-intensive sub-generator has to be cleaned up you have to do it yourself. With "yield from", the cleanup is implicit and unavoidable, which means you can't use it if you want keep the sub-generator alive for later. But the automatic cleanup may be useful in other cases.
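(A small illustrative sketch of the difference described above, using inspect.getgeneratorstate to show what happens to the sub-generator in each case; the wrapper names are made up for the example.)

    from inspect import getgeneratorstate

    def sub():
        yield 1
        yield 2

    def wrap_with_for(it):
        for value in it:       # plain re-yield: closing the wrapper does not
            yield value        # delegate close() to it

    def wrap_with_yield_from(it):
        yield from it          # closing the wrapper also closes it

    inner = sub()
    outer = wrap_with_for(inner)
    next(outer)
    outer.close()
    print(getgeneratorstate(inner))   # GEN_SUSPENDED: still open, caller must clean it up

    inner = sub()
    outer = wrap_with_yield_from(inner)
    next(outer)
    outer.close()
    print(getgeneratorstate(inner))   # GEN_CLOSED: cleanup was propagated implicitly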