From stephen at xemacs.org Sat Aug 1 06:03:40 2015 From: stephen at xemacs.org (Stephen J. Turnbull) Date: Sat, 01 Aug 2015 13:03:40 +0900 Subject: [Python-Dev] Issues not responded to. In-Reply-To: <55BB5C54.9000707@gmail.com> References: <55BB5C54.9000707@gmail.com> Message-ID: <87d1z7d59f.fsf@uwakimon.sk.tsukuba.ac.jp> Xavier de Gaye writes: > > Here's a query: > > > > https://bugs.python.org/issue?@action=search&@columns=title,id,creator,activity,actor,status&@sort=activity&status=-1,1,3,4&message_count=1 > > > > This is nice, thanks. > Note that this is missing the cases where more than one message was > required, for example to send two attachments (a script as the use > case, and a patch). If this picks up more than 100 (I bet it's the kind of thing Mark used, so 400 is probably a reasonable estimate), we can clean these up and worry about the ones that fall through the cracks later. It's arbitrary but as far as I can see not unfair. Steve From Nikolaus at rath.org Sun Aug 2 06:38:26 2015 From: Nikolaus at rath.org (Nikolaus Rath) Date: Sat, 01 Aug 2015 21:38:26 -0700 Subject: [Python-Dev] PEP 492 documentation Message-ID: <87egjme24d.fsf@vostro.rath.org> Hello, Looking at the language reference for 3.5.0b4, I noticed that it mentions neither async nor await. Is this still going to get updated, or will the only documentation consist of the PEP itself? I think having a Python release recognize keywords that are not mentioned in the language reference would be quite unfortunate (even if they're treated specially to preserve backwards compatibility). Best, -Nikolaus -- GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F ?Time flies like an arrow, fruit flies like a Banana.? From yselivanov.ml at gmail.com Sun Aug 2 16:21:38 2015 From: yselivanov.ml at gmail.com (Yury Selivanov) Date: Sun, 02 Aug 2015 10:21:38 -0400 Subject: [Python-Dev] PEP 492 documentation In-Reply-To: <87egjme24d.fsf@vostro.rath.org> References: <87egjme24d.fsf@vostro.rath.org> Message-ID: <55BE2772.70805@gmail.com> Nikolaus, Strange. PEP 492 changes are fully documented since b3. Here are just few examples: https://docs.python.org/3.5/whatsnew/3.5.html#pep-492-coroutines-with-async-and-await-syntax https://docs.python.org/3.5/reference/datamodel.html#coroutines https://docs.python.org/3.5/reference/compound_stmts.html#coroutines Perhaps, it's a browser cache issue? Yury On 2015-08-02 12:38 AM, Nikolaus Rath wrote: > Hello, > > Looking at the language reference for 3.5.0b4, I noticed that it > mentions neither async nor await. > > Is this still going to get updated, or will the only documentation > consist of the PEP itself? I think having a Python release recognize > keywords that are not mentioned in the language reference would be quite > unfortunate (even if they're treated specially to preserve backwards > compatibility). > > Best, > -Nikolaus > From rustompmody at gmail.com Sun Aug 2 18:17:23 2015 From: rustompmody at gmail.com (Rustom Mody) Date: Sun, 2 Aug 2015 21:47:23 +0530 Subject: [Python-Dev] Issues not responded to. Message-ID: On Fri, Jul 31, 2015 at 9:37 AM, Carl Meyer wrote: > > I'm a Django core developer. For the last half-year or so, the Django > Software Foundation has (for the first time) paid a "Django Fellow" or > two (currently Tim Graham) to work on core Django. For me the experience > has been excellent. 
>
> So based on my experience with the transition to having a DSF-paid
> Fellow on the Django core team, and having watched important python-dev
> work (e.g. the core workflow stuff) linger due to lack of available
> volunteer time, I'd recommend that python-dev run, not walk, to ask the
> PSF board to fund a similar position for Python core.
>
> Of course there may be differences between the culture of python-dev and
> Django core
>

A view from the other side.

Yeah, I guess it's a good idea for the PSF to spend some money to clear
'ugly' bugs.
Dunno about the pros and cons of this, so I won't get into it.

Instead I'd like to draw attention to the free side of the equation --
what would it take to have more hands with sleeves rolled up and doing the
housecleaning?

Context: We had a bunch of college students (2nd year Engineering) doing
some projects with us.
One was inside the CPython sources:
https://github.com/rusimody/l10Python

Their final presentation was last Thursday.
Q: Is there anything in there that can reasonably be a patch for Python?
A: Please don't be embarrassing!

However, as a student project it was enough for us to say: "Good work!"

Here's a REPL session to demo:
[Note १२३४५६७८९० are the devanagari equivalents of 1234567890]
--------------------------------------------------
Python 3.5.0b2 (default, Jul 30 2015, 19:32:42)
[GCC 4.9.2] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> १२
12
>>> 23 == २३
True
>>> १२ + ३४
46
>>> १२ + 34
46
>>> "12" == "१२"
False
>>> 2 ? 3
True
>>> 2 ? 3
True
>>> (λ x: x+3)(4)
7
>>> # as a result of which this doesn't work... I did say they are kids!
...
>>> λ = 3
  File "<stdin>", line 1
    λ = 3
      ^
SyntaxError: invalid syntax
>>> {1,2,3} ∩ {2,3,4}
{2, 3}
>>> {1,2,3} ∪ {2,3,4}
{1, 2, 3, 4}
>>> ¬ True
False
>>> ∑([1,2,3,4])
10
>>>
----------------------------------------------

The last is actually more an embarrassment than the λ breaking, since
they've *changed the lexer* to read the ∑ when all that was required was
∑ = sum !!

In short... Kids!
However, as kids go, they are farther along toward being programmers than
they were before this -- opening something of the scale of CPython,
finding one's way around and adding/modifying even the tiniest bit of
functionality is a big growing-up step.

Which brings me to the point of this mail:
Surely me + my students is not a unique configuration -- there must be
zillions like us across the world.
And if inexperienced people/kids like us had more help from people like
the members of this list, we would get farther, and at least some subset
of these students may go on to become actual devs/contributors.

So the request is that some of you give a tiny fraction of your time to
teams just mucking around in the CPython codebase, as a long-term
investment in producing more devs, even when it is not directly connected
to a possible contribution/patch.
[Yeah, I am a lurker on the mentors list, but I don't see much *technical*
discussion happening there]

We could actually submit patches.
It's just that the priorities of the 3 parties -- teachers, students,
devs -- are clearly different:
- Teachers need to give/create a good learning experience
- Students need to shine, do well, excel... ("show off" is not an
  inaccurate description)
- Devs need the language to progress and bugs to be fixed

Though these priorities are different, I believe a symbiosis is possible.
In particular, at least some of the -- for a dev -- 'ugly-bugs' could be a
challenge in an academic context.
I will be teaching again to more advanced students this time If I could find a path through bugs of different challenge-levels we may get some bugs fixed... Thanks Rusi -- http://blog.languager.org -------------- next part -------------- An HTML attachment was scrubbed... URL: From xavier.combelle at gmail.com Sun Aug 2 22:04:06 2015 From: xavier.combelle at gmail.com (Xavier Combelle) Date: Sun, 2 Aug 2015 22:04:06 +0200 Subject: [Python-Dev] PEP 492 documentation In-Reply-To: <55BE2772.70805@gmail.com> References: <87egjme24d.fsf@vostro.rath.org> <55BE2772.70805@gmail.com> Message-ID: Shouldn't at least ayncio doc https://docs.python.org/3.5/library/asyncio.html be updated accordingly ? for example https://docs.python.org/3.5/search.html?q=await&check_keywords=yes&area=default doesn't mention https://docs.python.org/3.5/library/asyncio.html 2015-08-02 16:21 GMT+02:00 Yury Selivanov : > Nikolaus, > > Strange. PEP 492 changes are fully documented since b3. > > Here are just few examples: > > > https://docs.python.org/3.5/whatsnew/3.5.html#pep-492-coroutines-with-async-and-await-syntax > https://docs.python.org/3.5/reference/datamodel.html#coroutines > https://docs.python.org/3.5/reference/compound_stmts.html#coroutines > > Perhaps, it's a browser cache issue? > > Yury > > On 2015-08-02 12:38 AM, Nikolaus Rath wrote: > >> Hello, >> >> Looking at the language reference for 3.5.0b4, I noticed that it >> mentions neither async nor await. >> >> Is this still going to get updated, or will the only documentation >> consist of the PEP itself? I think having a Python release recognize >> keywords that are not mentioned in the language reference would be quite >> unfortunate (even if they're treated specially to preserve backwards >> compatibility). >> >> Best, >> -Nikolaus >> >> > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/xavier.combelle%40gmail.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: From yselivanov.ml at gmail.com Sun Aug 2 22:47:02 2015 From: yselivanov.ml at gmail.com (Yury Selivanov) Date: Sun, 02 Aug 2015 16:47:02 -0400 Subject: [Python-Dev] PEP 492 documentation In-Reply-To: References: <87egjme24d.fsf@vostro.rath.org> <55BE2772.70805@gmail.com> Message-ID: <55BE81C6.4030109@gmail.com> On 2015-08-02 4:04 PM, Xavier Combelle wrote: > Shouldn't at least ayncio doc > https://docs.python.org/3.5/library/asyncio.html be updated accordingly ? > for example > https://docs.python.org/3.5/search.html?q=await&check_keywords=yes&area=default > doesn't mention https://docs.python.org/3.5/library/asyncio.html Yes, it was updated:https://docs.python.org/3.5/library/asyncio-task.html (search for 'async def' on the page -- it's mentioned about 13 times) Yury From Nikolaus at rath.org Mon Aug 3 00:42:57 2015 From: Nikolaus at rath.org (Nikolaus Rath) Date: Sun, 02 Aug 2015 15:42:57 -0700 Subject: [Python-Dev] PEP 492 documentation In-Reply-To: <55BE2772.70805@gmail.com> (Yury Selivanov's message of "Sun, 02 Aug 2015 10:21:38 -0400") References: <87egjme24d.fsf@vostro.rath.org> <55BE2772.70805@gmail.com> Message-ID: <8738015n2m.fsf@vostro.rath.org> Hi, No, not a browser cache issue. I was looking for "async" or "await" in the table of contents, so I didn't notice the new "coroutines" sections. Sorry for the noise. -Nikolaus On Aug 02 2015, Yury Selivanov wrote: > Nikolaus, > > Strange. 
PEP 492 changes are fully documented since b3. > > Here are just few examples: > > https://docs.python.org/3.5/whatsnew/3.5.html#pep-492-coroutines-with-async-and-await-syntax > https://docs.python.org/3.5/reference/datamodel.html#coroutines > https://docs.python.org/3.5/reference/compound_stmts.html#coroutines > > Perhaps, it's a browser cache issue? > > Yury > > On 2015-08-02 12:38 AM, Nikolaus Rath wrote: >> Hello, >> >> Looking at the language reference for 3.5.0b4, I noticed that it >> mentions neither async nor await. >> >> Is this still going to get updated, or will the only documentation >> consist of the PEP itself? I think having a Python release recognize >> keywords that are not mentioned in the language reference would be quite >> unfortunate (even if they're treated specially to preserve backwards >> compatibility). >> >> Best, >> -Nikolaus >> > -- GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F ?Time flies like an arrow, fruit flies like a Banana.? From robertc at robertcollins.net Mon Aug 3 02:46:05 2015 From: robertc at robertcollins.net (Robert Collins) Date: Mon, 3 Aug 2015 12:46:05 +1200 Subject: [Python-Dev] updating ensurepip to include wheel Message-ID: So, pip 7.0 depends on the wheel module for its automatic wheel building, and installing pip from get-pip.py, or the bundled copy in virtualenvs will automatically install wheel. But ensurepip doesn't bundle wheel, so we're actually installing a slightly crippled pip 7.1, which will lead to folk having a poorer experience. Is this a simple bug, or do we need to update the PEP? -Rob -- Robert Collins Distinguished Technologist HP Converged Cloud From donald at stufft.io Mon Aug 3 03:06:28 2015 From: donald at stufft.io (Donald Stufft) Date: Sun, 2 Aug 2015 21:06:28 -0400 Subject: [Python-Dev] updating ensurepip to include wheel In-Reply-To: References: Message-ID: On August 2, 2015 at 8:47:46 PM, Robert Collins (robertc at robertcollins.net) wrote: > So, pip 7.0 depends on the wheel module for its automatic wheel > building, and installing pip from get-pip.py, or the bundled copy in > virtualenvs will automatically install wheel. > > But ensurepip doesn't bundle wheel, so we're actually installing a > slightly crippled pip 7.1, which will lead to folk having a poorer > experience. > > Is this a simple bug, or do we need to update the PEP? > Personally, I think it's not going to be worth the pain to add wheel to ensurepip. We (pip) already have a somewhat rocky relationship with some downstream vendors because of the bundling of pip and setuptools that I'm not sure that wheel makes sense. Especially given that I want the optional dependency on Wheel to be a temporary measure until we can just implicitly install wheel as a build time dependency within pip and no longer need to install it implicitly in get-pip.py or virtualenv. In the future I expect setuptools to be removed as well at a similar time when we can implicitly install setuptools as a build time dependency of an sdist and do not require end users to install it explicitly. That being said, I think the PEP would need to be updated (and possibly a new? PEP?) since we explicitly called out the fact that setuptools would currently be included until pip no longer needed it to be installed seperately. --- Donald Stufft PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA From rdmurray at bitdance.com Mon Aug 3 16:42:37 2015 From: rdmurray at bitdance.com (R. 
David Murray) Date: Mon, 03 Aug 2015 10:42:37 -0400 Subject: [Python-Dev] Issues not responded to. In-Reply-To: References: Message-ID: <20150803144238.22355B20093@webabinitio.net> On Sun, 02 Aug 2015 21:47:23 +0530, Rustom Mody wrote: > [Yeah I am a lurker on the mentors list but I dont see much *technical* > discussion happening there] Yes, it's a mentoring list for how to contribute, not for technical issues, though we happily get in to technical issues when they arise. > We could actually submit patches. > Just that the priorities of the 3 parties -- teachers, students, devs -- > is clearly different: > - Teachers need to give/create a good learning experience > - Students need to shine, do well, excel...("show off" is not an > inaccurate description) > - Devs need the language to progress and bugs to be fixed > > Though these priorities are different I believe a symbiosis is possible. > In particular, at least some of the -- for a dev -- 'ugly-bugs' could be a > challenge in an academic context. The issues that haven't been responded to are *an* issue, but not in fact our most pressing one. The bigger problem, that a Fellow would solve, is not fixing bugs at all (although in Django's case apparently the Fellow does handle security issues...we have an active group of committers who address those, so I don't think a Python Fellow would need to write patches for those, just possibly shepherd them through). The need is to do the "ugly" *job* of making sure that issues that have patches get reviewed, the patches improved, and *get applied*, and, yes, that all issues get a reply. (This is actually a job I enjoy doing, but all I've been managing in my unpaid time lately is trying to keep up with tracker triage and making technical/procedural comments on some issues.) If we had the kind of support a Fellow would provide then your students could actually get valuable feedback by submitting patches (as long as they were willing to take patch criticism!). I'm not *sure* that would be a good thing, as it would increase the review load of patches from less experienced developers, but personally I'd encourage it anyway to help kids (and other less experienced developers) *become* good developers. Python has always had an educational mission, after all :) Rob's suggestion of core devs trying to review one 'commit review' patch a day (or whatever rhythm works for them) could move us toward this goal without a Fellow. I'm going to try to get back to doing that. But realistically, we can't count on busy people being able to make that kind of time consistently available for free, or being interested on working on parts of Python that, well, they aren't interested in. > I will be teaching again to more advanced students this time > If I could find a path through bugs of different challenge-levels we may > get some bugs fixed... Like I said, our problem isn't getting the bugs fixed, it is getting the fixes *reviewed* and *applied*. (Yes, there are some bugs that languish because no one is interested in doing the grunt work to fix them, but I bet that problem would take care of itself if people were more confident that patches would actually get applied when completed.) Having your students *review* and improve existing patches that haven't been moved to 'commit review' and aren't being actively worked on would be just as useful as finding bugs without patches to work on (and easier to do). 
I think it would just as valuable educationally, or perhaps more so, because they have a starting point, learn about code quality, and figure out how *improve* code (thus learning about better coding practices). Unfortunately there is no good path to finding "appropriate" bugs. My own technique, used for the pycon sprints, is just to hit 'random issue' and evaluate the doability (ie: it's not hung up waiting for a decision from core) and difficulty level and putting it on a list accordingly. That does in some cases require enough experience with the codebase to make those judgements, but there are a lot of issues that are fairly obvious on their face as to how difficult they are. --David From drekin at gmail.com Tue Aug 4 12:04:30 2015 From: drekin at gmail.com (=?UTF-8?B?QWRhbSBCYXJ0b8Wh?=) Date: Tue, 4 Aug 2015 12:04:30 +0200 Subject: [Python-Dev] How to call PyOS_Readline from ctypes? Message-ID: Hello, I'd like to call PyOS_Readline or a particular readline hook function via ctypes. The problem is that the functions accepts stdin and stdout file pointers as first two arguments. These are usually pointers to the standard stdin and stdout files. But how to get the file pointers in Python 3? In Python 2, one could do PyFile_AsFile(py_object(sys.stdin)). I'm asking here since it is quite specific Python implementation related question. Actually, no one answered me on python-list ( https://mail.python.org/pipermail/python-list/2015-July/694633.html). Regards, Adam Barto? -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Wed Aug 5 16:01:47 2015 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 6 Aug 2015 00:01:47 +1000 Subject: [Python-Dev] updating ensurepip to include wheel In-Reply-To: References: Message-ID: On 3 August 2015 at 11:06, Donald Stufft wrote: > > On August 2, 2015 at 8:47:46 PM, Robert Collins (robertc at robertcollins.net) wrote: >> So, pip 7.0 depends on the wheel module for its automatic wheel >> building, and installing pip from get-pip.py, or the bundled copy in >> virtualenvs will automatically install wheel. >> >> But ensurepip doesn't bundle wheel, so we're actually installing a >> slightly crippled pip 7.1, which will lead to folk having a poorer >> experience. >> >> Is this a simple bug, or do we need to update the PEP? >> > > Personally, I think it's not going to be worth the pain to add wheel to > ensurepip. We (pip) already have a somewhat rocky relationship with some > downstream vendors because of the bundling of pip and setuptools that I'm not > sure that wheel makes sense. Especially given that I want the optional > dependency on Wheel to be a temporary measure until we can just implicitly > install wheel as a build time dependency within pip and no longer need to > install it implicitly in get-pip.py or virtualenv. In the future I expect > setuptools to be removed as well at a similar time when we can implicitly > install setuptools as a build time dependency of an sdist and do not require > end users to install it explicitly. > > That being said, I think the PEP would need to be updated (and possibly a new > PEP?) since we explicitly called out the fact that setuptools would currently > be included until pip no longer needed it to be installed seperately. I'm going to contradict what I said to Robert at the PyCon AU sprints earlier this week, and agree with Donald here. 
setuptools is in the situation where because it also includes pkg_resources, it blurs the line between "build time" and "run time" dependency. While it would be nice to split that and have a "just pkg_resources" runtime dependency distinct from the build time dependency, that isn't likely to happen any time soon. wheel, by contrast, is already a pure build time dependency for bdist_wheel, and thus should be getting brought in as an implied "build requires" by pip itself when building from source. This does pose an interesting challenge from the perspective of the "offline installation" use case for ensurepip, where wheels are used as a local build caching mechanism, but we don't assume PyPI access, but it isn't one we really considered in the original ensurepip PEP. So actually doing this would probably require a PEP to update ensurepip with some additional options related to whether the build dependencies should be installed or not, and give downstream vendors a free pass to exclude the build dependencies from the default installation set. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From donald at stufft.io Wed Aug 5 16:10:43 2015 From: donald at stufft.io (Donald Stufft) Date: Wed, 5 Aug 2015 10:10:43 -0400 Subject: [Python-Dev] updating ensurepip to include wheel In-Reply-To: References: Message-ID: On August 5, 2015 at 10:01:50 AM, Nick Coghlan (ncoghlan at gmail.com) wrote: > > setuptools is in the situation where because it also includes > pkg_resources, it blurs the line between "build time" and "run time" > dependency. While it would be nice to split that and have a "just > pkg_resources" runtime dependency distinct from the build time > dependency, that isn't likely to happen any time soon. > > wheel, by contrast, is already a pure build time dependency for > bdist_wheel, and thus should be getting brought in as an implied > "build requires" by pip itself when building from source. This does > pose an interesting challenge from the perspective of the "offline > installation" use case for ensurepip, where wheels are used as a local > build caching mechanism, but we don't assume PyPI access, but it isn't > one we really considered in the original ensurepip PEP. >? Just a small correction, in general setuptools does blur that line, but for pip itself setuptools is completely a build time dependency which isn?t *technically* any different than our dependency on wheel. We work perfectly fine without it installed you just don?t get certain features available to you if you don?t have it installed. However we left setuptools installing because the feature you lose if you don?t have it pre-installed is the ability to install from sdists entirely. It was determined that not being able to install from sdists was a large enough ?breakage? that considering setuptools a dependency of pip in the terms of ensurepip was considered better than minimizing the things we bundled. On the flip side, the thing you lose if you don?t have wheel installed is more like a ?nice to have? than something that breaks functionality that most people would consider mandatory in the current landscape. 
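As a concrete illustration of that "nice to have" degradation, the check
amounts to a feature probe along these lines (a minimal sketch of the
pattern only -- the function name is illustrative and this is not pip's
actual implementation):

# Sketch of the optional-dependency probe described above.  pip's real
# code differs; this only illustrates "degrade gracefully when the
# 'wheel' project is absent".
def wheel_support_available():
    """Return True if the 'wheel' project is importable."""
    try:
        import wheel  # noqa: F401  -- only needed to *build* wheels
    except ImportError:
        return False
    return True


if wheel_support_available():
    print("wheel present: sdist installs can be built and cached as wheels")
else:
    print("wheel missing: installs still work, but no 'pip wheel' or wheel cache")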
---
Donald Stufft
PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA

From ncoghlan at gmail.com  Wed Aug  5 17:11:48 2015
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Thu, 6 Aug 2015 01:11:48 +1000
Subject: [Python-Dev] updating ensurepip to include wheel
In-Reply-To:
References:
Message-ID:

On 6 August 2015 at 00:10, Donald Stufft wrote:
> Just a small correction, in general setuptools does blur that line, but
> for pip itself setuptools is completely a build time dependency which
> isn't *technically* any different than our dependency on wheel. We work
> perfectly fine without it installed you just don't get certain features
> available to you if you don't have it installed. However we left
> setuptools installing because the feature you lose if you don't have it
> pre-installed is the ability to install from sdists entirely. It was
> determined that not being able to install from sdists was a large enough
> "breakage" that considering setuptools a dependency of pip in the terms
> of ensurepip was considered better than minimizing the things we bundled.

Sorry, I omitted a downstream-related step in my thought process there.

We currently have a hard dependency from python to pip and setuptools
in Fedora, so we can ensure ensurepip works properly inside virtual
environments. Enough things require setuptools at runtime for
pkg_resources that that falls into the category of "annoying runtime
dependency we'd like to see go away, but we can live with it since a
lot of production systems are still going to end up with it installed
regardless of what's in the base image".

A hard dependency on wheel wouldn't fit into the same category - when
folks are using a build pipeline to minimise the installation
footprint on production systems, the wheel package itself has no
business being installed anywhere other than developer systems and
build servers.

Cheers,
Nick.

--
Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From z2911 at bk.ru  Wed Aug  5 17:25:07 2015
From: z2911 at bk.ru (John Doe)
Date: Wed, 05 Aug 2015 18:25:07 +0300
Subject: [Python-Dev] who must makes FOR loop quicker
Message-ID: <55C22AD3.3070501@bk.ru>

To pass by reference or by copy of - that is the question from hamlet.
("hamlet" - a community of people smaller than a village; python3.4-linux64)

xlist = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
i = 0
for x in xlist:
    print(xlist)
    print("\txlist[%d] = %d" % (i, x))
    if x%2 == 0 :
        xlist.remove(x)
        print(xlist, "\n\n")
    i = i + 1

So, look at the output and help me, PLEASE, improve the answer:
Is it appropriate to ALWAYS re-evaluate the terms of the expression list
in the FOR scope on each iteration?
And if I want to pass a copy to FOR ONCE instead of a reference (as seen
from the output) and avoid the unnecessary re-evaluation, what must I do
for that?

From jjevnik at quantopian.com  Wed Aug  5 17:53:58 2015
From: jjevnik at quantopian.com (Joe Jevnik)
Date: Wed, 5 Aug 2015 11:53:58 -0400
Subject: [Python-Dev] who must makes FOR loop quicker
In-Reply-To: <55C22AD3.3070501@bk.ru>
References: <55C22AD3.3070501@bk.ru>
Message-ID:

The iterator is not re-evaluated; instead, the expression constructs a
single iterator, in this case a list_iterator. The list_iterator looks at
the underlying list to know how to iterate, so when you mutate the
underlying list, the list_iterator sees that. This does not mean the
expression used to generate the iterator was re-evaluated.

On Wed, Aug 5, 2015 at 11:25 AM, John Doe wrote:

> To pass by reference or by copy of - that is the question from hamlet.
> ("hamlet" - a community of people smaller than a village python3.4-linux64) > > xlist = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10] > i = 0 > for x in xlist: > print(xlist) > print("\txlist[%d] = %d" % (i, x)) > if x%2 == 0 : > xlist.remove(x) > print(xlist, "\n\n") > i = i + 1 > > So, catch the output and help me, PLEASE, improve the answer: > Does it appropriate ALWAYS REevaluate the terms of the expression list in > FOR-scope on each iteration? > But if I want to pass ONCE a copy to FOR instead of a reference (as seen > from an output) and reduce unreasonable reevaluation, what I must to do for > that? > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/joe%40quantopian.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rosuav at gmail.com Wed Aug 5 17:56:50 2015 From: rosuav at gmail.com (Chris Angelico) Date: Thu, 6 Aug 2015 01:56:50 +1000 Subject: [Python-Dev] who must makes FOR loop quicker In-Reply-To: <55C22AD3.3070501@bk.ru> References: <55C22AD3.3070501@bk.ru> Message-ID: On Thu, Aug 6, 2015 at 1:25 AM, John Doe wrote: > To pass by reference or by copy of - that is the question from hamlet. > ("hamlet" - a community of people smaller than a village python3.4-linux64) > > xlist = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10] > i = 0 > for x in xlist: > print(xlist) > print("\txlist[%d] = %d" % (i, x)) > if x%2 == 0 : > xlist.remove(x) > print(xlist, "\n\n") > i = i + 1 > > So, catch the output and help me, PLEASE, improve the answer: > Does it appropriate ALWAYS REevaluate the terms of the expression list in > FOR-scope on each iteration? > But if I want to pass ONCE a copy to FOR instead of a reference (as seen > from an output) and reduce unreasonable reevaluation, what I must to do for > that? This list is for the development *of* Python, rather than development *with* Python. If you repost your question to python-list at python.org (the main user list), I'll be happy to explain over there what's going on and how to sort this out! But the simple answer is: Don't mutate the thing you're iterating over. You can take a copy with xlist[:] and iterate over that, if you like. ChrisA From steve at pearwood.info Wed Aug 5 18:14:04 2015 From: steve at pearwood.info (Steven D'Aprano) Date: Thu, 6 Aug 2015 02:14:04 +1000 Subject: [Python-Dev] who must makes FOR loop quicker In-Reply-To: <55C22AD3.3070501@bk.ru> References: <55C22AD3.3070501@bk.ru> Message-ID: <20150805161403.GQ3737@ando.pearwood.info> On Wed, Aug 05, 2015 at 06:25:07PM +0300, John Doe wrote: > To pass by reference or by copy of - that is the question from hamlet. > ("hamlet" - a community of people smaller than a village python3.4-linux64) [snip question] John, you have already posted this same question to the tutor list, where you have been given an answer. If the response doesn't answer your question, please discuss it there on the tutor list. This question is not suitable for this list. 
-- Steve From ronaldoussoren at mac.com Wed Aug 5 09:52:18 2015 From: ronaldoussoren at mac.com (Ronald Oussoren) Date: Wed, 05 Aug 2015 09:52:18 +0200 Subject: [Python-Dev] PEP 447 (type.__getdescriptor__) In-Reply-To: <1900153295.283384.1437913100285.JavaMail.open-xchange@oxbsltgw11.schlund.de> References: <00AA7433-C853-4101-9718-060468EBAC54@mac.com> <128643133.270510.1437838774416.JavaMail.open-xchange@oxbsltgw13.schlund.de> <1900153295.283384.1437913100285.JavaMail.open-xchange@oxbsltgw11.schlund.de> Message-ID: <3CDBA080-83F9-4130-936F-E32B79E304D0@mac.com> > On 26 Jul 2015, at 14:18, Mark Shannon wrote: > >> On 26 July 2015 at 10:41 Ronald Oussoren wrote: >> >> >> >>> On 26 Jul 2015, at 09:14, Ronald Oussoren wrote: >>> >>> >>>> On 25 Jul 2015, at 17:39, Mark Shannon >>> > wrote: >>>> >>>> Hi, >>>> >>>> On 22/07/15 09:25, Ronald Oussoren wrote:> Hi, >>>>> >>>>> Another summer with another EuroPython, which means its time again to >>>>> try to revive PEP 447? >>>>> >>>> >>>> IMO, there are two main issues with the PEP and implementation. >>>> >>>> 1. The implementation as outlined in the PEP is infinitely recursive, since >>>> the >>>> lookup of "__getdescriptor__" on type must necessarily call >>>> type.__getdescriptor__. >>>> The implementation (in C) special cases classes that inherit >>>> "__getdescriptor__" >>>> from type. This special casing should be mentioned in the PEP. >>> >>> Sure. An alternative is to slightly change the the PEP: use >>> __getdescriptor__ when >>> present and directly peek into __dict__ when it is not, and then remove the >>> default >>> __getdescriptor__. >>> >>> The reason I didn?t do this in the PEP is that I prefer a programming model >>> where >>> I can explicitly call the default behaviour. >> >> I?m not sure there is a problem after all (but am willing to use the >> alternative I describe above), >> although that might be because I?m too much focussed on CPython semantics. >> >> The __getdescriptor__ method is a slot in the type object and because of that >> the >> normal attribute lookup mechanism is side-stepped for methods implemented in >> C. A >> __getdescriptor__ that is implemented on Python is looked up the normal way by >> the >> C function that gets added to the type struct for such methods, but that?s not >> a problem for >> type itself. >> >> That?s not new for __getdescriptor__ but happens for most other special >> methods as well, >> as I noted in my previous mail, and also happens for the __dict__ lookup >> that?s currently >> used (t.__dict__ is an attribute and should be lookup up using >> __getattribute__, ?) > > > "__getdescriptor__" is fundamentally different from "__getattribute__" in that > is defined in terms of itself. > > object.__getattribute__ is defined in terms of type.__getattribute__, but > type.__getattribute__ just does > dictionary lookups. object.__getattribute__ is actually defined in terms of type.__dict__ and object.__dict__. Type.__getattribute__ is at best used to to find type.__dict__. > However defining type.__getattribute__ in terms of > __descriptor__ causes a circularity as > __descriptor__ has to be looked up on a type. > > So, not only must the cycle be broken by special casing "type", but that > "__getdescriptor__" can be defined > not only by a subclass, but also a metaclass that uses "__getdescriptor__" to > define "__getdescriptor__" on the class. > (and so on for meta-meta classes, etc.) 
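For reference, the hook being debated here is the one PEP 447 proposes for
metaclasses; its default behaviour, expressed in Python, is roughly the
sketch below. Current CPython never calls it (the equivalent lookup happens
in C), so this only illustrates the proposed semantics and has to be
invoked by hand:

class Meta(type):
    def __getdescriptor__(cls, name):
        # Proposed default: look in the class __dict__ and raise
        # AttributeError when the name is absent.
        try:
            return cls.__dict__[name]
        except KeyError:
            raise AttributeError(name) from None

class Example(metaclass=Meta):
    attr = 42

# Nothing in today's interpreter invokes the hook, so call it by hand:
print(type(Example).__getdescriptor__(Example, "attr"))   # -> 42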
Are the semantics of special methods backed by a member in PyTypeObject part of Python?s semantics, or are those CPython implementation details/warts? In particular that such methods are access directly without using __getattribute__ at all (or at least only indirectly when the method is implemented in Python). That is: >>> class Dict (dict): ... def __getattribute__(self, nm): ... print("Get", nm) ... return dict.__getattribute__(self, nm) ... >>> >>> d = Dict(a=4) >>> d.__getitem__('a') Get __getitem__ 4 >>> d['a'] 4 >>> (And likewise for other special methods, which amongst others means that neither __getattribute__ nor __getdescriptor__ can be used to dynamicly add such methods to a class) In my proposed patch I do special case ?type?, but that?s only intended as a (for now unbenchmarked) speed hack. The code would work just as well without the hack because the metatype?s __getdescriptor__ is looked up directly in the PyTypeObject on the C level, without using __getattribute__ and hence without having to use recursion. BTW. I wouldn?t mind dropping the default ?type.__getdescriptor__? completely and reword my proposal to state that __getdescriptor__ is used when present, and otherwise __dict__ is accessed directly. That would remove the infinite recursion, as all metaclass chains should at some end up at ?type? which then wouldn?t have a ?__getdescriptor__?. The reason I added ?type.__getdescriptor__? is that IMHO gives a nicer programming model where you can call the superclass implementation in the implementation of __getdescriptor__ in a subclass. Given the minimal semantics of ?type.__getdescriptor__? loosing that wouldn?t be too bad to get a better object model. Ronald > > Cheers, > Mark > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/ronaldoussoren%40mac.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From victor.stinner at gmail.com Thu Aug 6 01:29:49 2015 From: victor.stinner at gmail.com (Victor Stinner) Date: Thu, 6 Aug 2015 01:29:49 +0200 Subject: [Python-Dev] updating ensurepip to include wheel In-Reply-To: References: Message-ID: Le 5 ao?t 2015 17:12, "Nick Coghlan" a ?crit : > A hard dependency on wheel wouldn't fit into the same category - when > folks are using a build pipeline to minimise the installation > footprint on production systems, the wheel package itself has no > business being installed anywhere other than developer systems and > build servers. I'm quite sure that virtualenv is used to deploy python on production. Pip 7 automatically creates wheel packages when no build wheel package is available on PyPI. Examples numpy and any pure python package only providing a tarball. For me it makes sense to embed wheel in ensurepip and to install wheel on production systems (to install pacakes, not to build them). Victor -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ncoghlan at gmail.com Thu Aug 6 05:04:18 2015 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 6 Aug 2015 13:04:18 +1000 Subject: [Python-Dev] updating ensurepip to include wheel In-Reply-To: References: Message-ID: On 6 August 2015 at 09:29, Victor Stinner wrote: > Le 5 ao?t 2015 17:12, "Nick Coghlan" a ?crit : >> A hard dependency on wheel wouldn't fit into the same category - when >> folks are using a build pipeline to minimise the installation >> footprint on production systems, the wheel package itself has no >> business being installed anywhere other than developer systems and >> build servers. > > I'm quite sure that virtualenv is used to deploy python on production. > > Pip 7 automatically creates wheel packages when no build wheel package is > available on PyPI. Examples numpy and any pure python package only providing > a tarball. > > For me it makes sense to embed wheel in ensurepip and to install wheel on > production systems (to install pacakes, not to build them). pip can install from wheels just fine without the wheel package being present - that's how ensurepip already works. The wheel package itself is only needed in order to support the setuptools "bdist_wheel" command, which then allows pip to implicitly cache wheel files when installing from an sdist. Installing from sdist in production is a *fundamentally bad idea*, because it means you have to have a build toolchain on your production servers. One of the benefits of the wheel format and projects like devpi is that it makes it easier to discourage people from doing that. Even without getting into Linux containers and tools like pyp2rpm, it's also possible to create an entire virtualenv on a build server, bundle that up as an RPM or DEB file, and use the system package manager to do the production deployment. However, production Linux servers aren't the only case we need to care about, and there's a strong user experience argument to be made for providing wheel by default upstream, and telling downstream redistributors that care about the distinction to do the necessary disentangling to make it easy to have "build dependency free" production images. We've learned from experience that things go far more smoothly if we thrash out those kinds of platform dependent behavioural differences *before* we inflict them on end users, rather than having downstream redistributors tackle foreseeable problems independently of both each other and upstream :) Hence my request for a PEP - I can see why adding wheel to the ensurepip bundle would be a good idea for upstream, but I can also see why it's a near certainty downstream Linux distros (including Fedora) would take it out again in at least some situations to better meet the needs of *our* user base. (Since RPM has weak dependency support now, we'd likely make python-wheel a "Recommends:" dependency, rather than a "Requires:" dependency - still installed by default, but easy to omit if not wanted or needed) Cheers, Nick. 
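A hypothetical sketch of the kind of ensurepip knob discussed above --
none of this exists today; the option semantics, the project grouping and
the function are purely illustrative, not the module's real API:

# Illustrative only: modelling an opt-out for build tools in an
# ensurepip-style bootstrap.  ensurepip's real bootstrap() has no such
# parameter; listing "wheel" at all is exactly what is being debated.
RUNTIME_PROJECTS = ["pip"]
BUILD_PROJECTS = ["setuptools", "wheel"]

def projects_to_bootstrap(include_build_tools=True):
    """Return the projects an ensurepip-style bootstrap would install."""
    projects = list(RUNTIME_PROJECTS)
    if include_build_tools:
        projects += BUILD_PROJECTS
    return projects

print(projects_to_bootstrap())                           # ['pip', 'setuptools', 'wheel']
print(projects_to_bootstrap(include_build_tools=False))  # ['pip']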
-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From robertc at robertcollins.net Thu Aug 6 11:04:12 2015 From: robertc at robertcollins.net (Robert Collins) Date: Thu, 6 Aug 2015 21:04:12 +1200 Subject: [Python-Dev] updating ensurepip to include wheel In-Reply-To: References: Message-ID: On 6 August 2015 at 15:04, Nick Coghlan wrote: > On 6 August 2015 at 09:29, Victor Stinner wrote: >> Le 5 ao?t 2015 17:12, "Nick Coghlan" a ?crit : >>> A hard dependency on wheel wouldn't fit into the same category - when >>> folks are using a build pipeline to minimise the installation >>> footprint on production systems, the wheel package itself has no >>> business being installed anywhere other than developer systems and >>> build servers. >> >> I'm quite sure that virtualenv is used to deploy python on production. >> >> Pip 7 automatically creates wheel packages when no build wheel package is >> available on PyPI. Examples numpy and any pure python package only providing >> a tarball. >> >> For me it makes sense to embed wheel in ensurepip and to install wheel on >> production systems (to install pacakes, not to build them). > > pip can install from wheels just fine without the wheel package being > present - that's how ensurepip already works. pip can also do this without setuptools being installed; yet we bundle setuptools with pip in ensurepip. I am thus confused :). When I consider the harm to a production pipeline that using setuptools can cause (in that it triggers easy_install, and easy_install has AFAIK none of the security improvements pip has added over the last couple years....), I find the acceptance of setuptools, but non-acceptance of wheel flummoxing. > The wheel package itself is only needed in order to support the > setuptools "bdist_wheel" command, which then allows pip to implicitly > cache wheel files when installing from an sdist. > > Installing from sdist in production is a *fundamentally bad idea*, > because it means you have to have a build toolchain on your production > servers. One of the benefits of the wheel format and projects like > devpi is that it makes it easier to discourage people from doing that. > Even without getting into Linux containers and tools like pyp2rpm, > it's also possible to create an entire virtualenv on a build server, > bundle that up as an RPM or DEB file, and use the system package > manager to do the production deployment. Yes: but the logic chain from 'its a bad idea' to 'we don't include wheel but we do include setuptools' is the bit I'm having a hard time with. > However, production Linux servers aren't the only case we need to care > about, and there's a strong user experience argument to be made for > providing wheel by default upstream, and telling downstream > redistributors that care about the distinction to do the necessary > disentangling to make it easy to have "build dependency free" > production images. 
> > We've learned from experience that things go far more smoothly if we > thrash out those kinds of platform dependent behavioural differences > *before* we inflict them on end users, rather than having downstream > redistributors tackle foreseeable problems independently of both each > other and upstream :) > > Hence my request for a PEP - I can see why adding wheel to the > ensurepip bundle would be a good idea for upstream, but I can also see > why it's a near certainty downstream Linux distros (including Fedora) > would take it out again in at least some situations to better meet the Does Fedora also take out setuptools? If not, why not? > needs of *our* user base. (Since RPM has weak dependency support now, > we'd likely make python-wheel a "Recommends:" dependency, rather than > a "Requires:" dependency - still installed by default, but easy to > omit if not wanted or needed) So, a new PEP? -Rob -- Robert Collins Distinguished Technologist HP Converged Cloud From ncoghlan at gmail.com Thu Aug 6 14:47:11 2015 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 6 Aug 2015 22:47:11 +1000 Subject: [Python-Dev] updating ensurepip to include wheel In-Reply-To: References: Message-ID: On 6 August 2015 at 19:04, Robert Collins wrote: > On 6 August 2015 at 15:04, Nick Coghlan wrote: > When I consider the harm to a production pipeline that using > setuptools can cause (in that it triggers easy_install, and > easy_install has AFAIK none of the security improvements pip has added > over the last couple years....), I find the acceptance of setuptools, > but non-acceptance of wheel flummoxing. When ensurepip was implemented, pip couldn't install from wheel files without setuptools yet, and the level of adoption of wheel files in general was lower than it is today. >> The wheel package itself is only needed in order to support the >> setuptools "bdist_wheel" command, which then allows pip to implicitly >> cache wheel files when installing from an sdist. >> >> Installing from sdist in production is a *fundamentally bad idea*, >> because it means you have to have a build toolchain on your production >> servers. One of the benefits of the wheel format and projects like >> devpi is that it makes it easier to discourage people from doing that. >> Even without getting into Linux containers and tools like pyp2rpm, >> it's also possible to create an entire virtualenv on a build server, >> bundle that up as an RPM or DEB file, and use the system package >> manager to do the production deployment. > > Yes: but the logic chain from 'its a bad idea' to 'we don't include > wheel but we do include setuptools' is the bit I'm having a hard time > with. Just an accident of history due to the relative timing of ensurepip's introduction, pip gaining the ability to install wheel files without setuptools, and high levels of adoption of the wheel format on PyPI. If PEP 453 was redone today, it's entirely possible setuptools wouldn't have been bundled, but it wasn't a viable option at the time. Accepting the bundling was a nice piece of technical debt that bought several additional months of feature availability :) >> Hence my request for a PEP - I can see why adding wheel to the >> ensurepip bundle would be a good idea for upstream, but I can also see >> why it's a near certainty downstream Linux distros (including Fedora) >> would take it out again in at least some situations to better meet the > > Does Fedora also take out setuptools? If not, why not? 
Not at the moment - while I'd like to see the dependency go away eventually, there are plenty of other things in the world that bother me more, especially since it comes back the moment someone has an "import pkg_resources" anywhere in their application. >> needs of *our* user base. (Since RPM has weak dependency support now, >> we'd likely make python-wheel a "Recommends:" dependency, rather than >> a "Requires:" dependency - still installed by default, but easy to >> omit if not wanted or needed) > > So, a new PEP? Yeah. I don't think it needs to be too fancy, just provide a way to indicate whether or not ensurepip should install the wheel package, and make it clear that if folks want to ensure pip can build wheels, they should install it explicitly (at the command line or as a dependency), rather than assuming it will always be there by default. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From donald at stufft.io Thu Aug 6 17:28:34 2015 From: donald at stufft.io (Donald Stufft) Date: Thu, 6 Aug 2015 11:28:34 -0400 Subject: [Python-Dev] updating ensurepip to include wheel In-Reply-To: References: Message-ID: <98EA1177-50C3-4E6C-83BB-CF21690CDA10@stufft.io> > On Aug 6, 2015, at 5:04 AM, Robert Collins wrote: > > Yes: but the logic chain from 'its a bad idea' to 'we don't include > wheel but we do include setuptools' is the bit I'm having a hard time > with. In my opinion, it?s the severity of how crippled their experience is without that particular thing installed. In the case of wheel not being installed they lose the ability to have an implicit wheel cache and to run ``pip wheel``. This makes pip less good but, unless they are running ``pip wheel`` everything is still fully functioning. In the case of setuptools they lose the ability to ``pip install`` when there isn?t a wheel available and the ability to run ``pip wheel``. This is making pip completely unusable for a lot of people, and if we did not pre-install setup tools the number one thing people would do is to ``pip install setuptools``, most likely while bitching under their breath about the command that just failed because they tried to install from sdist. So it?s really just ?how bad are we going to break people?s expectations?. From robertc at robertcollins.net Fri Aug 7 00:50:47 2015 From: robertc at robertcollins.net (Robert Collins) Date: Fri, 7 Aug 2015 10:50:47 +1200 Subject: [Python-Dev] updating ensurepip to include wheel In-Reply-To: <98EA1177-50C3-4E6C-83BB-CF21690CDA10@stufft.io> References: <98EA1177-50C3-4E6C-83BB-CF21690CDA10@stufft.io> Message-ID: On 7 August 2015 at 03:28, Donald Stufft wrote: > >> On Aug 6, 2015, at 5:04 AM, Robert Collins wrote: >> >> Yes: but the logic chain from 'its a bad idea' to 'we don't include >> wheel but we do include setuptools' is the bit I'm having a hard time >> with. > > > In my opinion, it?s the severity of how crippled their experience is without that particular thing installed. > > In the case of wheel not being installed they lose the ability to have an implicit wheel cache and to run ``pip wheel``. This makes pip less good but, unless they are running ``pip wheel`` everything is still fully functioning. > > In the case of setuptools they lose the ability to ``pip install`` when there isn?t a wheel available and the ability to run ``pip wheel``. 
This is making pip completely unusable for a lot of people, and if we did not pre-install setup tools the number one thing people would do is to ``pip install setuptools``, most likely while bitching under their breath about the command that just failed because they tried to install from sdist. > > So it?s really just ?how bad are we going to break people?s expectations?. So - I was in a talk at PyCon AU about conda[*], and the author believed they were using the latest pip with all the latest caching features, but their experience (16 minute installs) wasn't that. I dug into that with them after the talk, and it was due to Conda not installing wheel by default. Certainly the framing of ensurepip as 'this installs pip' is going to be confusing and misleading if it doesn't install pip the way get-pip.py (or virtualenv) install pip, leading to confusion such as that. Given the inconsequential impact of installing wheel, I see only harm in holding it back, and only benefits in adding it. All the harm from having source builds comes in with setuptools ;). -Rob *) https://www.youtube.com/watch?v=Fqknoni5aX0 -- Robert Collins Distinguished Technologist HP Converged Cloud From larry at hastings.org Fri Aug 7 03:29:45 2015 From: larry at hastings.org (Larry Hastings) Date: Thu, 06 Aug 2015 18:29:45 -0700 Subject: [Python-Dev] Bitbucket mirror is out-of-date Message-ID: <55C40A09.5060709@hastings.org> Bitbucket has a mirror of cpython, here: https://bitbucket.org/mirror/cpython It was last updated on May 7 and still says it's Python 3.5.0a4. It's not clear to me who owns the "mirror" account--is it Atlassian themselves? Anyway it'd be nice if it were, y'know, fresher. //arry/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Fri Aug 7 09:02:35 2015 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 7 Aug 2015 17:02:35 +1000 Subject: [Python-Dev] updating ensurepip to include wheel In-Reply-To: References: <98EA1177-50C3-4E6C-83BB-CF21690CDA10@stufft.io> Message-ID: On 7 August 2015 at 08:50, Robert Collins wrote: > Certainly the framing of ensurepip as 'this installs pip' is going to > be confusing and misleading if it doesn't install pip the way > get-pip.py (or virtualenv) install pip, leading to confusion such as > that. > > Given the inconsequential impact of installing wheel, I see only harm > in holding it back, and only benefits in adding it. All the harm from > having source builds comes in with setuptools ;). Right, this is the main reason I'm actually *in favour* of adding wheel to the ensurepip bundle upstream - it significantly improves the "out of the box" experience of pyvenv by implicitly caching builds. (I'm also in favour because it will lead to redistributors providing "pip wheel" support by default, and having to make an explicit design decision *not* to provide it if we want to do something different). The only reason I'm asking for a PEP is because I'm confident we're going to want a "support prebuilt wheels only" installation option downstream in the Linux distro world - shipping setuptools by default is a pragmatic concession to practical reality rather than something we *want* to be doing. As such, I do think Robert raises a good point that any new ensurepip option should probably prevent installation of both wheel *and* setuptools, since pip can install from wheel files without setuptools these days. 
The CLI option name might be something like "--no-build-tools", and could also be added to the public pyvenv and virtualenv interfaces. Downstream in Fedora, now that we have weak dependency support, I'd advocate for switching the python->setuptools dependency over to Recommends, and adding wheel as a Recommends dependency from the start. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From donald at stufft.io Fri Aug 7 09:20:04 2015 From: donald at stufft.io (Donald Stufft) Date: Fri, 7 Aug 2015 03:20:04 -0400 Subject: [Python-Dev] updating ensurepip to include wheel In-Reply-To: References: <98EA1177-50C3-4E6C-83BB-CF21690CDA10@stufft.io> Message-ID: > On Aug 7, 2015, at 3:02 AM, Nick Coghlan wrote: > > On 7 August 2015 at 08:50, Robert Collins wrote: >> Certainly the framing of ensurepip as 'this installs pip' is going to >> be confusing and misleading if it doesn't install pip the way >> get-pip.py (or virtualenv) install pip, leading to confusion such as >> that. >> >> Given the inconsequential impact of installing wheel, I see only harm >> in holding it back, and only benefits in adding it. All the harm from >> having source builds comes in with setuptools ;). > > Right, this is the main reason I'm actually *in favour* of adding > wheel to the ensurepip bundle upstream - it significantly improves the > "out of the box" experience of pyvenv by implicitly caching builds. > (I'm also in favour because it will lead to redistributors providing > "pip wheel" support by default, and having to make an explicit design > decision *not* to provide it if we want to do something different). > > The only reason I'm asking for a PEP is because I'm confident we're > going to want a "support prebuilt wheels only" installation option > downstream in the Linux distro world - shipping setuptools by default > is a pragmatic concession to practical reality rather than something > we *want* to be doing. > > As such, I do think Robert raises a good point that any new ensurepip > option should probably prevent installation of both wheel *and* > setuptools, since pip can install from wheel files without setuptools > these days. The CLI option name might be something like > "--no-build-tools", and could also be added to the public pyvenv and > virtualenv interfaces. > > Downstream in Fedora, now that we have weak dependency support, I'd > advocate for switching the python->setuptools dependency over to > Recommends, and adding wheel as a Recommends dependency from the > start. > > Cheers, > Nick. > > -- > Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia I?m not sure if ?no-build-tools make sense, since I plan on removing setuptools from ensurepip completely once pip can implicitly install it. PEP 453 explicitly called out the fact that setuptools was installed as an implementation detail with an eye to remove it in the future. Adding flags that deal with it specifically doesn?t seem like the right path to go down. From ncoghlan at gmail.com Fri Aug 7 09:56:27 2015 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 7 Aug 2015 17:56:27 +1000 Subject: [Python-Dev] updating ensurepip to include wheel In-Reply-To: References: <98EA1177-50C3-4E6C-83BB-CF21690CDA10@stufft.io> Message-ID: On 7 August 2015 at 17:20, Donald Stufft wrote: > I?m not sure if ?no-build-tools make sense, since I plan on removing setuptools from ensurepip completely once pip can implicitly install it. 
PEP 453 explicitly called out the fact that setuptools was installed as an implementation detail with an eye to remove it in the future. Adding flags that deal with it specifically doesn?t seem like the right path to go down. I'd be happy for the flag to go the other way, and have to supply "--build-tools" in order to opt in to having ensurepip install setuptools and wheel in addition to pip itself. If we did that, the downstream setup would likely be the even weaker "Suggests" dependency. My use case here is the "offline Python installation" one - having the build tools bundled with CPython and readily available is still useful for cases where you have your own code you want to build, but can't go to the internet to get a build toolchain. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From victor.stinner at gmail.com Fri Aug 7 10:31:37 2015 From: victor.stinner at gmail.com (Victor Stinner) Date: Fri, 7 Aug 2015 10:31:37 +0200 Subject: [Python-Dev] updating ensurepip to include wheel In-Reply-To: References: <98EA1177-50C3-4E6C-83BB-CF21690CDA10@stufft.io> Message-ID: Le 7 ao?t 2015 00:51, "Robert Collins" a ?crit : > So - I was in a talk at PyCon AU about conda[*], and the author > believed they were using the latest pip with all the latest caching > features, but their experience (16 minute installs) wasn't that. If an expert user is unaware of having to explicitly install wheel, what about other users? Packaging is the most hated feature of Python. Please don't add extra pain for purity and make sure that ensurepip installs "pip" and not "slow pip until you install wheel in the venv". To develop on OpenStack, I have more than 20 virtual environment on my PC. I recreate them regulary because I like downgarding or upgrading a package, or edit the code of a package directly in the venv (usually to debug). Since pip7, the creation of venv is much faster. Please don't make pip slower. Victor -------------- next part -------------- An HTML attachment was scrubbed... URL: From chris.barker at noaa.gov Fri Aug 7 17:05:50 2015 From: chris.barker at noaa.gov (Chris Barker - NOAA Federal) Date: Fri, 7 Aug 2015 08:05:50 -0700 Subject: [Python-Dev] updating ensurepip to include wheel In-Reply-To: References: <98EA1177-50C3-4E6C-83BB-CF21690CDA10@stufft.io> Message-ID: <4510497611540453253@unknownmsgid> > I'm confident we're > going to want a "support prebuilt wheels only" installation option > downstream in the Linux distro world - Interesting-- so move to a Python specific binary distribution option -- rather than using rm or deb packages? Doesn't lead to a dependency heck? I.e no way to express non-python dependencies? And while we are moving forward, can we please deprecate dependency management and installation from setuptools? Is there a philosophy of intended separation of concerns articulated somewhere? -Chris > shipping setuptools by default > is a pragmatic concession to practical reality rather than something > we *want* to be doing. > > As such, I do think Robert raises a good point that any new ensurepip > option should probably prevent installation of both wheel *and* > setuptools, since pip can install from wheel files without setuptools > these days. The CLI option name might be something like > "--no-build-tools", and could also be added to the public pyvenv and > virtualenv interfaces. 
> > Downstream in Fedora, now that we have weak dependency support, I'd > advocate for switching the python->setuptools dependency over to > Recommends, and adding wheel as a Recommends dependency from the > start. > > Cheers, > Nick. > > -- > Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/chris.barker%40noaa.gov From chris.barker at noaa.gov Fri Aug 7 17:13:33 2015 From: chris.barker at noaa.gov (Chris Barker - NOAA Federal) Date: Fri, 7 Aug 2015 08:13:33 -0700 Subject: [Python-Dev] updating ensurepip to include wheel In-Reply-To: References: <98EA1177-50C3-4E6C-83BB-CF21690CDA10@stufft.io> Message-ID: <-7895006158186750053@unknownmsgid> >Please don't add extra pain for purity and >make sure that ensurepip installs "pip" and >not "slow pip until you install wheel in the venv". This is a really good point -- other than purity, what is the downside? Arguably, the only reason setuptools, pip, and wheel are not in the standard library are because the need a more rapid development/release cycle. Ensurepip is the way to get the best of both worlds -- why not make it complete? -Chris From status at bugs.python.org Fri Aug 7 18:08:29 2015 From: status at bugs.python.org (Python tracker) Date: Fri, 7 Aug 2015 18:08:29 +0200 (CEST) Subject: [Python-Dev] Summary of Python tracker Issues Message-ID: <20150807160829.257DB56797@psf.upfronthosting.co.za> ACTIVITY SUMMARY (2015-07-31 - 2015-08-07) Python tracker at http://bugs.python.org/ To view or respond to any of the issues listed below, click on the issue. Do NOT respond to this message. 
Issues counts and deltas: open 4992 (+31) closed 31595 (+32) total 36587 (+63) Open issues with patches: 2245 Issues opened (55) ================== #24759: Idle: add ttk widgets as an option http://bugs.python.org/issue24759 reopened by terry.reedy #24764: cgi.FieldStorage can't parse multipart part headers with Conte http://bugs.python.org/issue24764 opened by Peter Landry #24765: Move .idlerc to %APPDATA%\IDLE on Windows http://bugs.python.org/issue24765 opened by jan parowka #24766: Subclass of property doesn't preserve instance __doc__ when us http://bugs.python.org/issue24766 opened by erik.bray #24767: can concurrent.futures len(Executor) return the number of task http://bugs.python.org/issue24767 opened by Pat Riehecky #24769: Interpreter doesn't start when dynamic loading is disabled http://bugs.python.org/issue24769 opened by Jeffrey.Armstrong #24770: Py_Finalize not cleaning up all threads http://bugs.python.org/issue24770 opened by Alex Budovski #24772: Smaller viewport shifts the "expand left menu" character into http://bugs.python.org/issue24772 opened by karlcow #24773: Add local time disambiguation flag to datetime http://bugs.python.org/issue24773 opened by belopolsky #24774: inconsistency in http.server.test http://bugs.python.org/issue24774 opened by wdv4758h #24775: Python client failing to connect to server but completing as i http://bugs.python.org/issue24775 opened by Se??n Kelleher #24776: Improve Fonts/Tabs UX for IDLE http://bugs.python.org/issue24776 opened by rhettinger #24778: mailcap.findmatch: document shell command Injection danger in http://bugs.python.org/issue24778 opened by TheRegRunner #24779: Python/ast.c: decode_unicode is never called with rawmode=True http://bugs.python.org/issue24779 opened by eric.smith #24780: unittest assertEqual difference output foiled by newlines http://bugs.python.org/issue24780 opened by chris.jerdonek #24781: Improve UX of IDLE Highlighting configuration tab http://bugs.python.org/issue24781 opened by markroseman #24782: Merge 'configure extensions' into main IDLE config dialog http://bugs.python.org/issue24782 opened by markroseman #24783: Import Error (undefined symbol: PyFloat_Type) when Importing m http://bugs.python.org/issue24783 opened by david-narvaez #24784: Build fails --without-threads http://bugs.python.org/issue24784 opened by louis.dassy #24786: Changes in the devguide repository are not published online in http://bugs.python.org/issue24786 opened by jcea #24787: csv.Sniffer guesses "M" instead of \t or , as the delimiter http://bugs.python.org/issue24787 opened by Tiago Wright #24788: HTTPException is derived from Exception instead of IOError http://bugs.python.org/issue24788 opened by Pastafarianist #24789: ctypes doc string http://bugs.python.org/issue24789 opened by LambertDW #24790: Idle: improve stack viewer http://bugs.python.org/issue24790 opened by terry.reedy #24792: zipimporter masks import errors http://bugs.python.org/issue24792 opened by Amund Hov #24794: PyZipFile mixes compiled files from different python versions. 
http://bugs.python.org/issue24794 opened by Amund Hov #24795: Make event loops with statement context managers http://bugs.python.org/issue24795 opened by Mathias Fr??jdman #24796: Deleting names referencing from enclosed and enclosing scopes http://bugs.python.org/issue24796 opened by ncoghlan #24798: _msvccompiler.py doesn't properly support manifests http://bugs.python.org/issue24798 opened by gladman #24799: IDLE should detect changes to open files by other processes http://bugs.python.org/issue24799 opened by Al.Sweigart #24800: exec docs should note that the no argument form in a local sco http://bugs.python.org/issue24800 opened by Peter Eastman #24801: right-mouse click in IDLE on Mac doesn't work http://bugs.python.org/issue24801 opened by markroseman #24802: PyFloat_FromString Buffer Over-read http://bugs.python.org/issue24802 opened by JohnLeitch #24803: PyNumber_Long Buffer Over-read.patch http://bugs.python.org/issue24803 opened by JohnLeitch #24805: Python installer having problem in installing Python for all u http://bugs.python.org/issue24805 opened by Debarshi.Goswami #24806: Inheriting from NoneType does not fail consistently http://bugs.python.org/issue24806 opened by brechtm #24807: compileall can cause Python installation to fail http://bugs.python.org/issue24807 opened by Jon Ribbens #24808: PyTypeObject fields have incorrectly documented types http://bugs.python.org/issue24808 opened by Joseph Weston #24809: Add getprotobynumber to socket module http://bugs.python.org/issue24809 opened by wbooth #24810: UX mode for IDLE targeted to 'new learners' http://bugs.python.org/issue24810 opened by markroseman #24812: All standard keystrokes not recognized in IDLE dialogs on Mac http://bugs.python.org/issue24812 opened by markroseman #24813: About IDLE dialog shouldn't be modal http://bugs.python.org/issue24813 opened by markroseman #24814: Disable Undo/Redo menu items when not applicable http://bugs.python.org/issue24814 opened by markroseman #24815: IDLE can lose menubar on OS X http://bugs.python.org/issue24815 opened by markroseman #24816: don't allow selecting IDLE debugger menu item when running http://bugs.python.org/issue24816 opened by markroseman #24817: disable format menu items when not applicable http://bugs.python.org/issue24817 opened by markroseman #24818: no way to run program in debugger from edit window http://bugs.python.org/issue24818 opened by markroseman #24819: replace window size preference with just use last window size http://bugs.python.org/issue24819 opened by markroseman #24820: IDLE themes for light on dark http://bugs.python.org/issue24820 opened by markroseman #24821: The optimization of string search can cause pessimization http://bugs.python.org/issue24821 opened by serhiy.storchaka #24822: IDLE: Accelerator key doesn't work for Options http://bugs.python.org/issue24822 opened by serhiy.storchaka #24823: ctypes.create_string_buffer does not add NUL if len(init) == s http://bugs.python.org/issue24823 opened by tom.pohl #24824: Pydoc fails with codecs http://bugs.python.org/issue24824 opened by serhiy.storchaka #24825: visual margin indicator for breakpoints in IDLE http://bugs.python.org/issue24825 opened by markroseman #24826: ability to integrate editor, shell, debugger in one window http://bugs.python.org/issue24826 opened by markroseman Most recent 15 issues with no replies (15) ========================================== #24826: ability to integrate editor, shell, debugger in one window http://bugs.python.org/issue24826 #24825: visual 
margin indicator for breakpoints in IDLE http://bugs.python.org/issue24825 #24824: Pydoc fails with codecs http://bugs.python.org/issue24824 #24823: ctypes.create_string_buffer does not add NUL if len(init) == s http://bugs.python.org/issue24823 #24822: IDLE: Accelerator key doesn't work for Options http://bugs.python.org/issue24822 #24821: The optimization of string search can cause pessimization http://bugs.python.org/issue24821 #24820: IDLE themes for light on dark http://bugs.python.org/issue24820 #24819: replace window size preference with just use last window size http://bugs.python.org/issue24819 #24818: no way to run program in debugger from edit window http://bugs.python.org/issue24818 #24817: disable format menu items when not applicable http://bugs.python.org/issue24817 #24816: don't allow selecting IDLE debugger menu item when running http://bugs.python.org/issue24816 #24815: IDLE can lose menubar on OS X http://bugs.python.org/issue24815 #24814: Disable Undo/Redo menu items when not applicable http://bugs.python.org/issue24814 #24813: About IDLE dialog shouldn't be modal http://bugs.python.org/issue24813 #24812: All standard keystrokes not recognized in IDLE dialogs on Mac http://bugs.python.org/issue24812 Most recent 15 issues waiting for review (15) ============================================= #24809: Add getprotobynumber to socket module http://bugs.python.org/issue24809 #24808: PyTypeObject fields have incorrectly documented types http://bugs.python.org/issue24808 #24803: PyNumber_Long Buffer Over-read.patch http://bugs.python.org/issue24803 #24802: PyFloat_FromString Buffer Over-read http://bugs.python.org/issue24802 #24798: _msvccompiler.py doesn't properly support manifests http://bugs.python.org/issue24798 #24784: Build fails --without-threads http://bugs.python.org/issue24784 #24782: Merge 'configure extensions' into main IDLE config dialog http://bugs.python.org/issue24782 #24774: inconsistency in http.server.test http://bugs.python.org/issue24774 #24773: Add local time disambiguation flag to datetime http://bugs.python.org/issue24773 #24766: Subclass of property doesn't preserve instance __doc__ when us http://bugs.python.org/issue24766 #24764: cgi.FieldStorage can't parse multipart part headers with Conte http://bugs.python.org/issue24764 #24756: doctest run_docstring_examples does have an obvious utility http://bugs.python.org/issue24756 #24750: IDLE: Cosmetic improvements for main window http://bugs.python.org/issue24750 #24746: doctest 'fancy diff' formats incorrectly strip trailing whites http://bugs.python.org/issue24746 #24733: Logically Dead Code http://bugs.python.org/issue24733 Top 10 most discussed issues (10) ================================= #24383: consider implementing __await__ on concurrent.futures.Future http://bugs.python.org/issue24383 26 msgs #24750: IDLE: Cosmetic improvements for main window http://bugs.python.org/issue24750 15 msgs #24787: csv.Sniffer guesses "M" instead of \t or , as the delimiter http://bugs.python.org/issue24787 13 msgs #15944: memoryviews and ctypes http://bugs.python.org/issue15944 12 msgs #24759: Idle: add ttk widgets as an option http://bugs.python.org/issue24759 12 msgs #24778: mailcap.findmatch: document shell command Injection danger in http://bugs.python.org/issue24778 11 msgs #24272: PEP 484 docs http://bugs.python.org/issue24272 10 msgs #22329: Windows installer can't recover partially installed state http://bugs.python.org/issue22329 9 msgs #24667: OrderedDict.popitem()/__str__() raises KeyError 
http://bugs.python.org/issue24667 8 msgs #23672: IDLE can crash if file name contains non-BMP Unicode character http://bugs.python.org/issue23672 7 msgs Issues closed (32) ================== #4395: Document auto __ne__ generation; provide a use case for non-tr http://bugs.python.org/issue4395 closed by rbcollins #20557: Use specific asserts in io tests http://bugs.python.org/issue20557 closed by serhiy.storchaka #20769: Reload() description is unclear http://bugs.python.org/issue20769 closed by rbcollins #21192: Idle: Print filename when running a file from editor http://bugs.python.org/issue21192 closed by terry.reedy #21279: str.translate documentation incomplete http://bugs.python.org/issue21279 closed by python-dev #22397: test_socket failure on AIX http://bugs.python.org/issue22397 closed by r.david.murray #22932: email.utils.formatdate uses unreliable time.timezone constant http://bugs.python.org/issue22932 closed by rbcollins #23004: mock_open() should allow reading binary data http://bugs.python.org/issue23004 closed by berker.peksag #23182: Update grammar tests to use new style for annotated function d http://bugs.python.org/issue23182 closed by python-dev #23524: Use _set_thread_local_invalid_parameter_handler in posixmodule http://bugs.python.org/issue23524 closed by rbcollins #23652: ifdef uses of EPOLLxxx macros so we can compile on systems tha http://bugs.python.org/issue23652 closed by python-dev #23812: asyncio.Queue.put_nowait(), followed get() task cancellation l http://bugs.python.org/issue23812 closed by yselivanov #23888: Fixing fractional expiry time bug in cookiejar http://bugs.python.org/issue23888 closed by rbcollins #24021: Add docstring to urllib.urlretrieve http://bugs.python.org/issue24021 closed by rbcollins #24129: Incorrect (misleading) statement in the execution model docume http://bugs.python.org/issue24129 closed by ncoghlan #24217: O_RDWR undefined in mmapmodule.c http://bugs.python.org/issue24217 closed by python-dev #24370: OrderedDict behavior is unclear with misbehaving keys. 
http://bugs.python.org/issue24370 closed by eric.snow
#24531: please document that no code preceding encoding declaration is
http://bugs.python.org/issue24531 closed by rbcollins
#24720: Python install help
http://bugs.python.org/issue24720 closed by zach.ware
#24745: Better default font for editor
http://bugs.python.org/issue24745 closed by terry.reedy
#24751: regrtest/buildbot: test run marked as failure even when re-run
http://bugs.python.org/issue24751 closed by python-dev
#24754: argparse add_argument with action="store_true", type=bool shou
http://bugs.python.org/issue24754 closed by dbagnall
#24762: Branchless, vectorizable frozen set hash
http://bugs.python.org/issue24762 closed by rhettinger
#24768: Bytearray double free or corruption
http://bugs.python.org/issue24768 closed by pitrou
#24771: Cannot import _tkinter in Python 3.5 on Windows
http://bugs.python.org/issue24771 closed by steve.dower
#24777: sys.getrefcount takes no arguments
http://bugs.python.org/issue24777 closed by berker.peksag
#24785: Document asyncio.futures.wrap_future()
http://bugs.python.org/issue24785 closed by berker.peksag
#24791: *args regression
http://bugs.python.org/issue24791 closed by yselivanov
#24793: Calling 'python' via subprocess.call ignoring passed %PATH%
http://bugs.python.org/issue24793 closed by paul.moore
#24797: email.header.decode_header return type is not consistent
http://bugs.python.org/issue24797 closed by r.david.murray
#24804: https://www.python.org/ftp/python/2.7.4/python-2.7.4.msi actua
http://bugs.python.org/issue24804 closed by zach.ware
#24811: Unicode character in history breaks history under Windows
http://bugs.python.org/issue24811 closed by r.david.murray

From donald at stufft.io Fri Aug 7 18:12:41 2015
From: donald at stufft.io (Donald Stufft)
Date: Fri, 7 Aug 2015 12:12:41 -0400
Subject: [Python-Dev] updating ensurepip to include wheel
In-Reply-To: <-7895006158186750053@unknownmsgid>
References: <98EA1177-50C3-4E6C-83BB-CF21690CDA10@stufft.io> <-7895006158186750053@unknownmsgid>
Message-ID: <92D52E13-F529-4274-860E-AAF87D92357C@stufft.io>

> On Aug 7, 2015, at 11:13 AM, Chris Barker - NOAA Federal wrote:
>
>> Please don't add extra pain for purity and make sure that ensurepip installs "pip" and not "slow pip until you install wheel in the venv".
>
> This is a really good point -- other than purity, what is the downside?
>
> Arguably, the only reason setuptools, pip, and wheel are not in the standard library are because the need a more rapid development/release cycle.
>
> Ensurepip is the way to get the best of both worlds -- why not make it complete?

So my opinion is basically that in a vacuum I would absolutely add wheel to ensurepip (and I did add wheel to get-pip.py and to virtualenv). However, this does not exist in a vacuum and there is still animosity about PEP 453, and downstreams are still trying to figure out how they are going to handle it for real. During the 3.4 release there were downstream redistributors who completely removed ensurepip and were talking about possibly removing pip entirely from their archives.

So my hesitation is basically that I consider this a short (or medium) term need until pip can implicitly install wheel and setuptools as part of the build process for a particular project, and I worry that it will reopen some wounds and cause more strife.

I do however think it would make ensurepip itself better, so I'm not dead set against it, mostly just worried about ramifications.
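The exchange above is easier to follow with the concrete step spelled out. Below is a minimal sketch of what a user has to do today in a fresh environment, and what bundling wheel would make unnecessary - assuming Python 3.4+, the standard venv and ensurepip machinery, a POSIX-style bin/ layout, and network access for the install call; the environment name "demo-env" is just an example:

    import subprocess
    import venv

    # with_pip=True invokes ensurepip while the environment is created; at the
    # time of this thread that bootstraps pip and setuptools, but not wheel.
    venv.EnvBuilder(with_pip=True).create("demo-env")

    # Without the wheel project installed, "pip install <sdist>" falls back to
    # a plain setup.py install and nothing gets cached, so today every new
    # environment needs this extra step before builds are cached as wheels:
    subprocess.check_call(["demo-env/bin/pip", "install", "wheel"])

Whether that last call should become implicit (bundle wheel in ensurepip), opt-in via a flag such as the proposed "--build-tools", or be left to redistributors is exactly the disagreement in the messages above.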
From eric at trueblade.com Sat Aug 8 03:39:15 2015 From: eric at trueblade.com (Eric V. Smith) Date: Fri, 7 Aug 2015 21:39:15 -0400 Subject: [Python-Dev] PEP-498: Literal String Formatting Message-ID: <55C55DC3.8040605@trueblade.com> Following a long discussion on python-ideas, I've posted my draft of PEP-498. It describes the "f-string" approach that was the subject of the "Briefer string format" thread. I'm open to a better title than "Literal String Formatting". I need to add some text to the discussion section, but I think it's in reasonable shape. I have a fully working implementation that I'll get around to posting somewhere this weekend. >>> def how_awesome(): return 'very' ... >>> f'f-strings are {how_awesome()} awesome!' 'f-strings are very awesome!' I'm open to any suggestions to improve the PEP. Thanks for your feedback. -- Eric. From ncoghlan at gmail.com Sat Aug 8 05:53:00 2015 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sat, 8 Aug 2015 13:53:00 +1000 Subject: [Python-Dev] updating ensurepip to include wheel In-Reply-To: <92D52E13-F529-4274-860E-AAF87D92357C@stufft.io> References: <98EA1177-50C3-4E6C-83BB-CF21690CDA10@stufft.io> <-7895006158186750053@unknownmsgid> <92D52E13-F529-4274-860E-AAF87D92357C@stufft.io> Message-ID: On 8 August 2015 at 02:12, Donald Stufft wrote: > >> On Aug 7, 2015, at 11:13 AM, Chris Barker - NOAA Federal wrote: >> >>> Please don't add extra pain for purity and >>> make sure that ensurepip installs "pip" and >>> not "slow pip until you install wheel in the venv". >> >> >> This is a really good point -- other than purity, what is the downside? >> >> Arguably, the only reason setuptools, pip, and wheel are not in the >> standard library are because the need a more rapid development/release >> cycle. >> >> Ensurepip is the way to get the best of both worlds -- why not make it complete? >> > > So my opinion is basically that in a vacuum I would absolutely add wheel to ensurepip (and I did add wheel to get-pip.py and to virtualenv). However, this does not exist in a vacuum and there is still animosity about PEP 453 and downstream?s are still trying to figure out how they are going to handle it for real. During the 3.4 release there were downstream redisttibutors who completely removed ensurepip and were talking about possibly removing pip entirely from their archives. [I'm wearing my professional Fedora Environments & Stack WG and RHEL Developer Experience hats in this post, moreso than my CPython core developer one] It seems to me that most modern open source developers (especially those using dynamic languages) perceive Linux distros more as impediments to be worked around, rather than as allies to collaborate with, and that's *our* UX issue to figure out downstream (hence design concepts like https://fedoraproject.org/wiki/Env_and_Stacks/Projects/UserLevelPackageManagement and https://fedoraproject.org/wiki/Env_and_Stacks/Projects/PackageReviewProcessRedesign in the Fedora space) It's not CPython's problem to resolve, and it's only CPython's responsibility to work around to the extent that it makes things easier for *end users* developing in Python. If a distro is being unreasonably intransigent about developer experience concerns, then that's the distro's problem, and we can advise people to download and use a cross-platform distro like ActivePython, EPD/Canopy or Anaconda instead of the system Python. 
> So my hesitation is basically that I consider is a short (or medium) term need until pip can implicitly install wheel and setuptools as part of the build process for a particular project and I worry that it will reopen some wounds and cause more strife. I don't believe it's a good idea to avoid strife for the sake of avoiding strife - many Linux distros are in the wrong here, and we need to get with the program in suitably meeting the needs of open source developers, not just folks running Linux on production servers. Fedora started that process with the launch of the Fedora.next initiatives a couple of years ago, but there's still a lot of work to be done in retooling our online and desktop experience to make it more developer friendly. > I do however think it would make ensurepip itself better, so I?m not dead set against it, mostly just worried about ramifications. I'd advise against letting concerns about Linux distro politics hold you back from making ensurepip as good as you can make it - if nothing else, the developer experience folks at commercial Linux vendors are specifically paid to advocate for the interests of software developers when it comes to the Linux user experience (that's part of my own day job in the Fedora/RHEL/CentOS case - I moved over to the software management team in RHEL Developer Experience at the start of June). That means that while I will have some *requests* to make certain things easier downstream (like going through the PEP process to figure out an upstream supported way to omit the build-only dependencies when running ensurepip), I also wholeheartedly endorse the idea of having the default upstream behaviour focus on making the initial experience for folks downloading Windows or Mac OS X binaries from python.org as compelling as we can make it. python-dev needs to put the needs of Python first, and those of Linux second. This does mean that any Linux distro that can't figure out how to provide a better open source developer experience for Pythonistas than Windows or Mac OS X is at risk of falling by the wayside in the Python community, but if those of us that care specifically about the viability of desktop Linux as a platform for open source development stand by and let that happen, then we'll *deserve* the consequences. Regards, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From Steve.Dower at microsoft.com Sat Aug 8 05:19:16 2015 From: Steve.Dower at microsoft.com (Steve Dower) Date: Sat, 8 Aug 2015 03:19:16 +0000 Subject: [Python-Dev] SSH to hg.p.o okay? Message-ID: Is hg.python.org okay for others? I'm getting the following output from all hg commands: sending hello command sending between command abort: no suitable response from remote hg! I don't know of any more verbose or debugging options than that (--debug, -v's added nothing). I can SSH normally into hg.p.o, not that there's anything I can do without a terminal, but I guess it's not completely down... -------------- next part -------------- An HTML attachment was scrubbed... URL: From eric at trueblade.com Sat Aug 8 10:34:24 2015 From: eric at trueblade.com (Eric V. Smith) Date: Sat, 8 Aug 2015 04:34:24 -0400 Subject: [Python-Dev] SSH to hg.p.o okay? In-Reply-To: References: Message-ID: I was intermittently getting that earlier. I didn't change anything on my side and it started working, maybe 5 minutes later. -- Eric. > On Aug 7, 2015, at 11:19 PM, Steve Dower wrote: > > Is hg.python.org okay for others? 
I'm getting the following output from all hg commands:
>
> sending hello command
> sending between command
> abort: no suitable response from remote hg!
>
> I don't know of any more verbose or debugging options than that (--debug, -v's added nothing). I can SSH normally into hg.p.o, not that there's anything I can do without a terminal, but I guess it's not completely down...
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: https://mail.python.org/mailman/options/python-dev/eric%2Ba-python-dev%40trueblade.com

From ncoghlan at gmail.com Sat Aug 8 11:34:57 2015
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Sat, 8 Aug 2015 19:34:57 +1000
Subject: [Python-Dev] PEP-498: Literal String Formatting
In-Reply-To: <55C55DC3.8040605@trueblade.com>
References: <55C55DC3.8040605@trueblade.com>
Message-ID:

On 8 August 2015 at 11:39, Eric V. Smith wrote:
> Following a long discussion on python-ideas, I've posted my draft of PEP-498. It describes the "f-string" approach that was the subject of the "Briefer string format" thread. I'm open to a better title than "Literal String Formatting".

Thank you for your work on this - it's a very cool concept!

I've also now written and posted an initial draft of PEP 500, based directly on PEP 498, which formalises the "__interpolate__" builtin idea I raised in those threads, along with a PEP 292 based syntax proposal that aims to be as simple as possible for the simple case of interpolating existing variables, while still allowing the use of braces to permit embedding of arbitrary expressions and formatting directives.

It turned out this approach provided an unanticipated benefit that I only discovered while writing the PEP: by defining a separate "__interpolateb__" builtin, it's straightforward to define binary interpolation in terms of bytes.__mod__, while still defining text interpolation in terms of str.format.

The previously-redundant-in-python-3 'u' prefix also finds new life as a way of always requesting the default string interpolation, even if __interpolate__ has been overridden in the current namespace to mean something else (like i18n string translation).

Cheers,
Nick.

--
Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From Steve.Dower at microsoft.com Sat Aug 8 15:15:52 2015
From: Steve.Dower at microsoft.com (Steve Dower)
Date: Sat, 8 Aug 2015 13:15:52 +0000
Subject: [Python-Dev] SSH to hg.p.o okay?
In-Reply-To:
References: ,
Message-ID:

Eventually I updated Mercurial and then it worked, but that didn't make a whole lot of sense. Maybe I caught it during some maintenance?

Seems to be okay now though.

Cheers,
Steve

Top-posted from my Windows Phone

________________________________
From: Eric V. Smith
Sent: 8/8/2015 1:34
To: Steve Dower
Cc: python-dev at python.org
Subject: Re: [Python-Dev] SSH to hg.p.o okay?

I was intermittently getting that earlier. I didn't change anything on my side and it started working, maybe 5 minutes later.

-- Eric.

On Aug 7, 2015, at 11:19 PM, Steve Dower wrote:

Is hg.python.org okay for others? I'm getting the following output from all hg commands:

sending hello command
sending between command
abort: no suitable response from remote hg!

I don't know of any more verbose or debugging options than that (--debug, -v's added nothing).
I can SSH normally into hg.p.o, not that there's anything I can do without a terminal, but I guess it's not completely down... _______________________________________________ Python-Dev mailing list Python-Dev at python.org https://mail.python.org/mailman/listinfo/python-dev Unsubscribe: https://mail.python.org/mailman/options/python-dev/eric%2Ba-python-dev%40trueblade.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From eric at trueblade.com Sat Aug 8 16:00:32 2015 From: eric at trueblade.com (Eric V. Smith) Date: Sat, 8 Aug 2015 10:00:32 -0400 Subject: [Python-Dev] SSH to hg.p.o okay? In-Reply-To: References: Message-ID: <55C60B80.30104@trueblade.com> On 8/8/2015 9:15 AM, Steve Dower wrote: > Eventually I updated Mercurial and then it worked, but that didn't make > a whole lot of sense. Maybe I caught it during some maintenance? > > Seems to be okay now though. I didn't make any changes on my end when it started working. I'm guessing maintenance. Eric. From tritium-list at sdamon.com Sat Aug 8 16:05:02 2015 From: tritium-list at sdamon.com (Alexander Walters) Date: Sat, 08 Aug 2015 10:05:02 -0400 Subject: [Python-Dev] PEP-498: Literal String Formatting In-Reply-To: References: <55C55DC3.8040605@trueblade.com> Message-ID: <55C60C8E.8000305@sdamon.com> Please do not change the meaning of the vestigial U''. It was re-added to the language to fix a problem, rebinding it to another meaning introduces new problems. We have plenty of other letters in the alphabet to use. On 8/8/2015 05:34, Nick Coghlan wrote: > On 8 August 2015 at 11:39, Eric V. Smith wrote: >> Following a long discussion on python-ideas, I've posted my draft of >> PEP-498. It describes the "f-string" approach that was the subject of >> the "Briefer string format" thread. I'm open to a better title than >> "Literal String Formatting". > Thanks you for your work on this - it's a very cool concept! > > I've also now written and posted an initial draft of PEP 500, based > directly on PEP 498, which formalises the "__interpolate__" builtin > idea I raised in those threads, along with a PEP 292 based syntax > proposal that aims to be as simple as possible for the simple case of > interpolating existing variables, while still allowing the use of > braces to permit embedding of arbitrary expressions and formatting > directives. > > it turned out this approach provided an unanticipated benefit that I > only discovered while writing the PEP: by defining a separate > "__interpolateb__" builtin, it's straightforward to define binary > interpolation in terms of bytes.__mod__, while still defining text > interpolation in terms of str.format. > > The previously-redundant-in-python-3 'u' prefix also finds new life as > a way of always requesting the default string interpolation, even if > __interpolate__ has been overridden in the current namespace to mean > something else (like il8n string translation). > > Cheers, > Nick. > From ncoghlan at gmail.com Sat Aug 8 17:07:45 2015 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sun, 9 Aug 2015 01:07:45 +1000 Subject: [Python-Dev] PEP-498: Literal String Formatting In-Reply-To: <55C60C8E.8000305@sdamon.com> References: <55C55DC3.8040605@trueblade.com> <55C60C8E.8000305@sdamon.com> Message-ID: On 9 August 2015 at 00:05, Alexander Walters wrote: > Please do not change the meaning of the vestigial U''. It was re-added to > the language to fix a problem, rebinding it to another meaning introduces > new problems. 
We have plenty of other letters in the alphabet to use. It's actually being used in the same sense we already use it - I'm just adding a new compile time use case where the distinction matters again, which we haven't previously had in Python 3. (The usage in this PEP is fairly closely analogous to WSGI's distinction between native strings, text strings and binary strings, which matters for hybrid Python 2/3 code, but not for pure Python 3 code) It would certainly be *possible* to use a different character for that aspect of the PEP, but it would be additional work without any obvious gain. Cheers, Nick. P.S. I hop on the plane for the US in a few hours, so I'll be aiming to be bad at responding to emails until the 17th or so. We'll see how well I stick to that plan :) -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From ncoghlan at gmail.com Sat Aug 8 17:10:14 2015 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sun, 9 Aug 2015 01:10:14 +1000 Subject: [Python-Dev] PEP-498: Literal String Formatting In-Reply-To: References: <55C55DC3.8040605@trueblade.com> Message-ID: On 8 August 2015 at 19:34, Nick Coghlan wrote: > On 8 August 2015 at 11:39, Eric V. Smith wrote: >> Following a long discussion on python-ideas, I've posted my draft of >> PEP-498. It describes the "f-string" approach that was the subject of >> the "Briefer string format" thread. I'm open to a better title than >> "Literal String Formatting". > > Thanks you for your work on this - it's a very cool concept! > > I've also now written and posted an initial draft of PEP 500, I've actually moved this to PEP 501, for reasons of liking a proposed alternate use of PEP 500 :) Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From tritium-list at sdamon.com Sat Aug 8 17:12:51 2015 From: tritium-list at sdamon.com (Alexander Walters) Date: Sat, 08 Aug 2015 11:12:51 -0400 Subject: [Python-Dev] PEP-498: Literal String Formatting In-Reply-To: References: <55C55DC3.8040605@trueblade.com> <55C60C8E.8000305@sdamon.com> Message-ID: <55C61C73.6010609@sdamon.com> ... Its adding meaning to something that was intentionally meaningless. Not using u'' has the obvious, immediate benefit of not caring what u'' means in python 3, so one can continue to write polyglot code. Since you are adding new semantics to python 3, use a different letter so that it just breaks in python 2, instead of having different meanings between versions. Python 2 is still the dominant python. On 8/8/2015 11:07, Nick Coghlan wrote: > On 9 August 2015 at 00:05, Alexander Walters wrote: >> Please do not change the meaning of the vestigial U''. It was re-added to >> the language to fix a problem, rebinding it to another meaning introduces >> new problems. We have plenty of other letters in the alphabet to use. > It's actually being used in the same sense we already use it - I'm > just adding a new compile time use case where the distinction matters > again, which we haven't previously had in Python 3. (The usage in this > PEP is fairly closely analogous to WSGI's distinction between native > strings, text strings and binary strings, which matters for hybrid > Python 2/3 code, but not for pure Python 3 code) > > It would certainly be *possible* to use a different character for that > aspect of the PEP, but it would be additional work without any obvious > gain. > > Cheers, > Nick. > > P.S. I hop on the plane for the US in a few hours, so I'll be aiming > to be bad at responding to emails until the 17th or so. 
We'll see how > well I stick to that plan :) > From tritium-list at sdamon.com Sat Aug 8 17:16:44 2015 From: tritium-list at sdamon.com (Alexander Walters) Date: Sat, 08 Aug 2015 11:16:44 -0400 Subject: [Python-Dev] PEP-498: Literal String Formatting In-Reply-To: References: <55C55DC3.8040605@trueblade.com> <55C60C8E.8000305@sdamon.com> Message-ID: <55C61D5C.4090600@sdamon.com> Wait a second, the pep itself does not use the vestigial u''... it uses i''. where did u'' come from? On 8/8/2015 11:07, Nick Coghlan wrote: > On 9 August 2015 at 00:05, Alexander Walters wrote: >> Please do not change the meaning of the vestigial U''. It was re-added to >> the language to fix a problem, rebinding it to another meaning introduces >> new problems. We have plenty of other letters in the alphabet to use. > It's actually being used in the same sense we already use it - I'm > just adding a new compile time use case where the distinction matters > again, which we haven't previously had in Python 3. (The usage in this > PEP is fairly closely analogous to WSGI's distinction between native > strings, text strings and binary strings, which matters for hybrid > Python 2/3 code, but not for pure Python 3 code) > > It would certainly be *possible* to use a different character for that > aspect of the PEP, but it would be additional work without any obvious > gain. > > Cheers, > Nick. > > P.S. I hop on the plane for the US in a few hours, so I'll be aiming > to be bad at responding to emails until the 17th or so. We'll see how > well I stick to that plan :) > From ncoghlan at gmail.com Sat Aug 8 17:24:08 2015 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sun, 9 Aug 2015 01:24:08 +1000 Subject: [Python-Dev] PEP-498: Literal String Formatting In-Reply-To: <55C61D5C.4090600@sdamon.com> References: <55C55DC3.8040605@trueblade.com> <55C60C8E.8000305@sdamon.com> <55C61D5C.4090600@sdamon.com> Message-ID: On 9 August 2015 at 01:16, Alexander Walters wrote: > Wait a second, the pep itself does not use the vestigial u''... it uses i''. > where did u'' come from? The only difference in the PEP is the fact that the iu"" variant calls a different builtin (__interpolateu__ instead of __interpolate__). There's no change to the semantics of u"" - those remain identical to "". Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From tritium-list at sdamon.com Sat Aug 8 17:23:25 2015 From: tritium-list at sdamon.com (Alexander Walters) Date: Sat, 08 Aug 2015 11:23:25 -0400 Subject: [Python-Dev] PEP-498: Literal String Formatting In-Reply-To: References: <55C55DC3.8040605@trueblade.com> <55C60C8E.8000305@sdamon.com> Message-ID: <55C61EED.1030309@sdamon.com> As written in the pep, where i'' means 'I have the __interpolate__' method, and iu'' means 'i have the __interpolateu__' method (or that translators should call these methods), is fine, as the meaning of u ('I am unicode, yeah you already knew that') isn't changed. On 8/8/2015 11:07, Nick Coghlan wrote: > On 9 August 2015 at 00:05, Alexander Walters wrote: >> Please do not change the meaning of the vestigial U''. It was re-added to >> the language to fix a problem, rebinding it to another meaning introduces >> new problems. We have plenty of other letters in the alphabet to use. > It's actually being used in the same sense we already use it - I'm > just adding a new compile time use case where the distinction matters > again, which we haven't previously had in Python 3. 
(The usage in this > PEP is fairly closely analogous to WSGI's distinction between native > strings, text strings and binary strings, which matters for hybrid > Python 2/3 code, but not for pure Python 3 code) > > It would certainly be *possible* to use a different character for that > aspect of the PEP, but it would be additional work without any obvious > gain. > > Cheers, > Nick. > > P.S. I hop on the plane for the US in a few hours, so I'll be aiming > to be bad at responding to emails until the 17th or so. We'll see how > well I stick to that plan :) > From brett at python.org Sat Aug 8 22:49:52 2015 From: brett at python.org (Brett Cannon) Date: Sat, 08 Aug 2015 20:49:52 +0000 Subject: [Python-Dev] PEP-498: Literal String Formatting In-Reply-To: <55C61EED.1030309@sdamon.com> References: <55C55DC3.8040605@trueblade.com> <55C60C8E.8000305@sdamon.com> <55C61EED.1030309@sdamon.com> Message-ID: Can the discussion of PEP 501 be done in a separate thread? As of right now this thread has not been about PEP 498 beyond Eric's initial email. On Sat, Aug 8, 2015 at 8:56 AM Alexander Walters wrote: > As written in the pep, where i'' means 'I have the __interpolate__' > method, and iu'' means 'i have the __interpolateu__' method (or that > translators should call these methods), is fine, as the meaning of u ('I > am unicode, yeah you already knew that') isn't changed. > > On 8/8/2015 11:07, Nick Coghlan wrote: > > On 9 August 2015 at 00:05, Alexander Walters > wrote: > >> Please do not change the meaning of the vestigial U''. It was re-added > to > >> the language to fix a problem, rebinding it to another meaning > introduces > >> new problems. We have plenty of other letters in the alphabet to use. > > It's actually being used in the same sense we already use it - I'm > > just adding a new compile time use case where the distinction matters > > again, which we haven't previously had in Python 3. (The usage in this > > PEP is fairly closely analogous to WSGI's distinction between native > > strings, text strings and binary strings, which matters for hybrid > > Python 2/3 code, but not for pure Python 3 code) > > > > It would certainly be *possible* to use a different character for that > > aspect of the PEP, but it would be additional work without any obvious > > gain. > > > > Cheers, > > Nick. > > > > P.S. I hop on the plane for the US in a few hours, so I'll be aiming > > to be bad at responding to emails until the 17th or so. We'll see how > > well I stick to that plan :) > > > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/brett%40python.org > -------------- next part -------------- An HTML attachment was scrubbed... URL: From brett at python.org Sat Aug 8 23:05:08 2015 From: brett at python.org (Brett Cannon) Date: Sat, 08 Aug 2015 21:05:08 +0000 Subject: [Python-Dev] PEP-498: Literal String Formatting In-Reply-To: <55C55DC3.8040605@trueblade.com> References: <55C55DC3.8040605@trueblade.com> Message-ID: On Fri, Aug 7, 2015 at 6:39 PM Eric V. Smith wrote: > Following a long discussion on python-ideas, I've posted my draft of > PEP-498. It describes the "f-string" approach that was the subject of > the "Briefer string format" thread. I'm open to a better title than > "Literal String Formatting". > > I need to add some text to the discussion section, but I think it's in > reasonable shape. 
I have a fully working implementation that I'll get > around to posting somewhere this weekend. > > >>> def how_awesome(): return 'very' > ... > >>> f'f-strings are {how_awesome()} awesome!' > 'f-strings are very awesome!' > > I'm open to any suggestions to improve the PEP. Thanks for your feedback. > I fixed a grammar nit directly in the PEP, but otherwise I'm +1 on the proposal. -------------- next part -------------- An HTML attachment was scrubbed... URL: From timothy.c.delaney at gmail.com Sun Aug 9 03:08:59 2015 From: timothy.c.delaney at gmail.com (Tim Delaney) Date: Sun, 9 Aug 2015 11:08:59 +1000 Subject: [Python-Dev] PEP-498: Literal String Formatting In-Reply-To: <55C55DC3.8040605@trueblade.com> References: <55C55DC3.8040605@trueblade.com> Message-ID: On 8 August 2015 at 11:39, Eric V. Smith wrote: > Following a long discussion on python-ideas, I've posted my draft of > PEP-498. It describes the "f-string" approach that was the subject of > the "Briefer string format" thread. I'm open to a better title than > "Literal String Formatting". > > I need to add some text to the discussion section, but I think it's in > reasonable shape. I have a fully working implementation that I'll get > around to posting somewhere this weekend. > > >>> def how_awesome(): return 'very' > ... > >>> f'f-strings are {how_awesome()} awesome!' > 'f-strings are very awesome!' > > I'm open to any suggestions to improve the PEP. Thanks for your feedback. > I'd like to see an alternatives section, in particular listing alternative prefixes and why they weren't chosen over f. Off the top of my head, ones I've seen listed are: ! $ Tim Delaney -------------- next part -------------- An HTML attachment was scrubbed... URL: From raymond.hettinger at gmail.com Sun Aug 9 03:19:49 2015 From: raymond.hettinger at gmail.com (Raymond Hettinger) Date: Sat, 8 Aug 2015 18:19:49 -0700 Subject: [Python-Dev] PEP-498: Literal String Formatting In-Reply-To: <55C55DC3.8040605@trueblade.com> References: <55C55DC3.8040605@trueblade.com> Message-ID: <087DED27-68CE-4593-8105-2F680001904C@gmail.com> > On Aug 7, 2015, at 6:39 PM, Eric V. Smith wrote: > > I'm open to any suggestions to improve the PEP. Thanks for your feedback. Here's are few thoughts: * I really like the reduction in verbosity for passing in the variable names. * Because of my C background, I experience a little mental hiccup when using the f-prefix with the print() function: print(f"The answer is {answer}") wants to come out of my fingers as: printf("The answer is {answer}") * It's unclear whether the string-to-expression-expansion should be arbitrarily limited to locals() and globals() or whether it should include __builtins__ and cell variables (closures and nested scopes). Making it behave just like normal expressions means that there won't be new special cases to remember and that many existing calls to format() can be converted automatically: w = 10 def f(x): def g(y): print(f'{len.__name__}{w}{x}{y}') * Will this proposal complicate linters, analysis tools, highlighters, etc.? In a way, this isn't a small language extension, it is a whole new way to write expressions. * Does it complicate situations where we would otherwise pass around templates as first class class objects (internationalization for example)? 
def welcome(name, title):
    print(_("Good morning {title} {name}"))   # expect gettext() substitution

* A related thought is that we normally like templates to live outside the functions where they are used (separation of business logic and presentation logic). Use of f-strings may impact our ability to refactor (move code up or down a chain of nested function calls), the ability to pass in templates as arguments, storing templates in globals or thread locals so that they are shareable, or moving them out of our scripts and into files editable by non-programmers.

* With respect to learnability, the downside is that it becomes yet another thing to have to cover in a Python class (I'm already not looking forward to teaching star-unpacking generalizations and the restraint to not overuse them, and covering await, and single dispatch, etc, etc). The upside is that templates themselves aren't being changed. The only incremental learning task is that the invocation becomes automatic, saving us a little typing.

The above are random thoughts based on a first quick read. Don't take them too seriously. Some are just shooting from the hip and are listed as food for thought.

Raymond

From Nikolaus at rath.org Sun Aug 9 04:28:25 2015
From: Nikolaus at rath.org (Nikolaus Rath)
Date: Sat, 08 Aug 2015 19:28:25 -0700
Subject: [Python-Dev] PEP-498: Literal String Formatting
In-Reply-To: (Nick Coghlan's message of "Sat, 8 Aug 2015 19:34:57 +1000")
References: <55C55DC3.8040605@trueblade.com>
Message-ID: <87wpx5nqk6.fsf@vostro.rath.org>

On Aug 08 2015, Nick Coghlan wrote:
> On 8 August 2015 at 11:39, Eric V. Smith wrote:
>> Following a long discussion on python-ideas, I've posted my draft of PEP-498. It describes the "f-string" approach that was the subject of the "Briefer string format" thread. I'm open to a better title than "Literal String Formatting".
>
> Thanks you for your work on this - it's a very cool concept!
>
> I've also now written and posted an initial draft of PEP 500,
[...]

I think what that PEP really needs is a concise summary of the *differences* to PEP 498.

Best,
-Nikolaus

--
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

"Time flies like an arrow, fruit flies like a Banana."

From Nikolaus at rath.org Sun Aug 9 04:37:38 2015
From: Nikolaus at rath.org (Nikolaus Rath)
Date: Sat, 08 Aug 2015 19:37:38 -0700
Subject: [Python-Dev] PEP-498: Literal String Formatting
In-Reply-To: <87wpx5nqk6.fsf@vostro.rath.org> (Nikolaus Rath's message of "Sat, 08 Aug 2015 19:28:25 -0700")
References: <55C55DC3.8040605@trueblade.com> <87wpx5nqk6.fsf@vostro.rath.org>
Message-ID: <87tws9nq4t.fsf@vostro.rath.org>

On Aug 08 2015, Nikolaus Rath wrote:
> On Aug 08 2015, Nick Coghlan wrote:
>> On 8 August 2015 at 11:39, Eric V. Smith wrote:
>>> Following a long discussion on python-ideas, I've posted my draft of PEP-498. It describes the "f-string" approach that was the subject of the "Briefer string format" thread. I'm open to a better title than "Literal String Formatting".
>>
>> Thanks you for your work on this - it's a very cool concept!
>>
>> I've also now written and posted an initial draft of PEP 500,
> [...]
>
> I think what that PEP really needs is a concise summary of the *differences* to PEP 498.

I should probably elaborate on that.
After reading both PEPs, it seems to me that the only difference is that you want to use a different prefix (i instead of f), use ${} instead of {}, and call a builtin function to perform the interpolation (instead of always using format). But is that really it? The PEP appears rather long, so I'm not sure if I'm missing other differences in the parts that seemed identical to PEP 498 to me. Best, -Nikolaus -- GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F ?Time flies like an arrow, fruit flies like a Banana.? From stefan_ml at behnel.de Sun Aug 9 10:06:47 2015 From: stefan_ml at behnel.de (Stefan Behnel) Date: Sun, 09 Aug 2015 10:06:47 +0200 Subject: [Python-Dev] PEP-498: Literal String Formatting In-Reply-To: <55C55DC3.8040605@trueblade.com> References: <55C55DC3.8040605@trueblade.com> Message-ID: Eric V. Smith schrieb am 08.08.2015 um 03:39: > Following a long discussion on python-ideas, I've posted my draft of > PEP-498. It describes the "f-string" approach that was the subject of > the "Briefer string format" thread. I'm open to a better title than > "Literal String Formatting". > > I need to add some text to the discussion section, but I think it's in > reasonable shape. I have a fully working implementation that I'll get > around to posting somewhere this weekend. > > >>> def how_awesome(): return 'very' > ... > >>> f'f-strings are {how_awesome()} awesome!' > 'f-strings are very awesome!' > > I'm open to any suggestions to improve the PEP. Thanks for your feedback. [copying my comment from python-ideas here] How common is this use case, really? Almost all of the string formatting that I've used lately is either for logging (no help from this proposal here) or requires some kind of translation/i18n *before* the formatting, which is not helped by this proposal either. Meaning, in almost all cases, the formatting will use some more or less simple variant of this pattern: result = process("string with {a} and {b}").format(a=1, b=2) which commonly collapses into result = translate("string with {a} and {b}", a=1, b=2) by wrapping the concrete use cases in appropriate helper functions. I've seen Nick Coghlan's proposal for an implementation backed by a global function, which would at least catch some of these use cases. But it otherwise seems to me that this is a huge sledge hammer solution for a niche problem. Stefan From stefan_ml at behnel.de Sun Aug 9 11:53:28 2015 From: stefan_ml at behnel.de (Stefan Behnel) Date: Sun, 09 Aug 2015 11:53:28 +0200 Subject: [Python-Dev] PEP-498: Literal String Formatting In-Reply-To: References: <55C55DC3.8040605@trueblade.com> Message-ID: Stefan Behnel schrieb am 09.08.2015 um 10:06: > Eric V. Smith schrieb am 08.08.2015 um 03:39: >> Following a long discussion on python-ideas, I've posted my draft of >> PEP-498. It describes the "f-string" approach that was the subject of >> the "Briefer string format" thread. I'm open to a better title than >> "Literal String Formatting". >> >> I need to add some text to the discussion section, but I think it's in >> reasonable shape. I have a fully working implementation that I'll get >> around to posting somewhere this weekend. >> >> >>> def how_awesome(): return 'very' >> ... >> >>> f'f-strings are {how_awesome()} awesome!' >> 'f-strings are very awesome!' >> >> I'm open to any suggestions to improve the PEP. Thanks for your feedback. > > [copying my comment from python-ideas here] > > How common is this use case, really? 
Almost all of the string formatting > that I've used lately is either for logging (no help from this proposal > here) or requires some kind of translation/i18n *before* the formatting, > which is not helped by this proposal either. Thinking about this some more, the "almost all" is actually wrong. This only applies to one kind of application that I'm working on. In fact, "almost all" of the string formatting that I use is not in those applications but in Cython's code generator. And there's a *lot* of string formatting in there, even though we use real templating for bigger things already. However, looking through the code, I cannot see this proposal being of much help for that use case either. Many of the values that get formatted into the strings use some kind of non-trivial expression (function calls, object attributes, also local variables, sometimes variables with lengthy names) that is best written out in actual code. Here are some real example snippets: code.putln( 'static char %s[] = "%s";' % ( entry.doc_cname, split_string_literal(escape_byte_string(docstr)))) if entry.is_special: code.putln('#if CYTHON_COMPILING_IN_CPYTHON') code.putln( "struct wrapperbase %s;" % entry.wrapperbase_cname) code.putln('#endif') temp = ... code.putln("for (%s=0; %s < PyTuple_GET_SIZE(%s); %s++) {" % ( temp, temp, Naming.args_cname, temp)) code.putln("PyObject* item = PyTuple_GET_ITEM(%s, %s);" % ( Naming.args_cname, temp)) code.put("%s = (%s) ? PyDict_Copy(%s) : PyDict_New(); " % ( self.starstar_arg.entry.cname, Naming.kwds_cname, Naming.kwds_cname)) code.putln("if (unlikely(!%s)) return %s;" % ( self.starstar_arg.entry.cname, self.error_value())) We use %-formatting for historical reasons (that's all there was 15 years ago), but I wouldn't switch to .format() because there is nothing to win here. The "%s" etc. place holders are *very* short and do not get in the way (as "{}" would in C code templates). Named formatting would require a lot more space in the templates, so positional, unnamed formatting helps readability a lot. And the value expressions used for the interpolation tend to be expressions rather than simple variables, so keeping those outside of the formatting strings simplifies both editing and reading. That's the third major real-world use case for string formatting now where this proposal doesn't help. The niche is getting smaller. Stefan From larry at hastings.org Sun Aug 9 12:37:12 2015 From: larry at hastings.org (Larry Hastings) Date: Sun, 09 Aug 2015 03:37:12 -0700 Subject: [Python-Dev] Reminder: the "3.5" branch in CPython trunk is now 3.5.1 Message-ID: <55C72D58.6040704@hastings.org> As I write this email I'm tagging Python 3.5.0 release candidate 1. This is the moment that we switch over to our new experimental workflow, where we use Bitbucket and pull requests for all future changesets that will get applied to 3.5.0. The Bitbucket repository isn't ready yet, and I'm still putting the final touches on the documentation for the process. I'll have everything ready no later than a day from now. It'll be posted here, and will also go into the Python Dev Guide. For now changes for 3.5.0rc2 will just have to wait. Again: any revisions you check in to the "3.5" branch on hg.python.org/cpython right now will go automatically into 3.5.1. They will *not* automatically go in to 3.5.0. The only way to get changes into 3.5.0 from this moment forward is by creating a pull request on Bitbucket which you convince me to accept. //arry/ p.s. 
In case you're curious, the last revision to make it in to 3.5.0rc1 (apart from the revisions I generate as part of my build engineering work) was 202a7aabd4fe. -------------- next part -------------- An HTML attachment was scrubbed... URL: From eric at trueblade.com Sun Aug 9 19:22:25 2015 From: eric at trueblade.com (Eric V. Smith) Date: Sun, 9 Aug 2015 13:22:25 -0400 Subject: [Python-Dev] PEP-498: Literal String Formatting In-Reply-To: References: <55C55DC3.8040605@trueblade.com> Message-ID: <55C78C51.8030008@trueblade.com> On 8/8/2015 9:08 PM, Tim Delaney wrote: > On 8 August 2015 at 11:39, Eric V. Smith > wrote: > > Following a long discussion on python-ideas, I've posted my draft of > PEP-498. It describes the "f-string" approach that was the subject of > the "Briefer string format" thread. I'm open to a better title than > "Literal String Formatting". > > I need to add some text to the discussion section, but I think it's in > reasonable shape. I have a fully working implementation that I'll get > around to posting somewhere this weekend. > > >>> def how_awesome(): return 'very' > ... > >>> f'f-strings are {how_awesome()} awesome!' > 'f-strings are very awesome!' > > I'm open to any suggestions to improve the PEP. Thanks for your > feedback. > > > I'd like to see an alternatives section, in particular listing > alternative prefixes and why they weren't chosen over f. Off the top of > my head, ones I've seen listed are: > > ! > $ I'll add something, but there's no particular reason. "f" for formatted, along the lines of 'r' raw, 'b' bytes, and 'u' unicode. Especially when you want to combine them, I think a letter looks better: fr'{x} a formatted raw string' $r'{x} a formatted raw string' Eric. From brett at python.org Sun Aug 9 19:38:55 2015 From: brett at python.org (Brett Cannon) Date: Sun, 09 Aug 2015 17:38:55 +0000 Subject: [Python-Dev] PEP-498: Literal String Formatting In-Reply-To: References: <55C55DC3.8040605@trueblade.com> Message-ID: On Sun, 9 Aug 2015 at 01:07 Stefan Behnel wrote: > Eric V. Smith schrieb am 08.08.2015 um 03:39: > > Following a long discussion on python-ideas, I've posted my draft of > > PEP-498. It describes the "f-string" approach that was the subject of > > the "Briefer string format" thread. I'm open to a better title than > > "Literal String Formatting". > > > > I need to add some text to the discussion section, but I think it's in > > reasonable shape. I have a fully working implementation that I'll get > > around to posting somewhere this weekend. > > > > >>> def how_awesome(): return 'very' > > ... > > >>> f'f-strings are {how_awesome()} awesome!' > > 'f-strings are very awesome!' > > > > I'm open to any suggestions to improve the PEP. Thanks for your feedback. > > [copying my comment from python-ideas here] > > How common is this use case, really? Almost all of the string formatting > that I've used lately is either for logging (no help from this proposal > here) or requires some kind of translation/i18n *before* the formatting, > which is not helped by this proposal either. Meaning, in almost all cases, > the formatting will use some more or less simple variant of this pattern: > > result = process("string with {a} and {b}").format(a=1, b=2) > > which commonly collapses into > > result = translate("string with {a} and {b}", a=1, b=2) > > by wrapping the concrete use cases in appropriate helper functions. > > I've seen Nick Coghlan's proposal for an implementation backed by a global > function, which would at least catch some of these use cases. 
But it > otherwise seems to me that this is a huge sledge hammer solution for a > niche problem. > So in my case the vast majority of calls to str.format could be replaced with an f-string. I would also like to believe that other languages that have adopted this approach to string interpolation did so with knowledge that it would be worth it (but then again I don't really know how other languages are developed so this might just be a hope that other languages fret as much as we do about stuff). -------------- next part -------------- An HTML attachment was scrubbed... URL: From eric at trueblade.com Sun Aug 9 20:22:43 2015 From: eric at trueblade.com (Eric V. Smith) Date: Sun, 9 Aug 2015 14:22:43 -0400 Subject: [Python-Dev] PEP-498: Literal String Formatting In-Reply-To: References: <55C55DC3.8040605@trueblade.com> Message-ID: <55C79A73.1030901@trueblade.com> On 8/9/2015 1:38 PM, Brett Cannon wrote: > > > On Sun, 9 Aug 2015 at 01:07 Stefan Behnel > wrote: > > Eric V. Smith schrieb am 08.08.2015 um 03:39: > > Following a long discussion on python-ideas, I've posted my draft of > > PEP-498. It describes the "f-string" approach that was the subject of > > the "Briefer string format" thread. I'm open to a better title than > > "Literal String Formatting". > > > > I need to add some text to the discussion section, but I think it's in > > reasonable shape. I have a fully working implementation that I'll get > > around to posting somewhere this weekend. > > > > >>> def how_awesome(): return 'very' > > ... > > >>> f'f-strings are {how_awesome()} awesome!' > > 'f-strings are very awesome!' > > > > I'm open to any suggestions to improve the PEP. Thanks for your > feedback. > > [copying my comment from python-ideas here] > > How common is this use case, really? Almost all of the string formatting > that I've used lately is either for logging (no help from this proposal > here) or requires some kind of translation/i18n *before* the formatting, > which is not helped by this proposal either. Meaning, in almost all > cases, > the formatting will use some more or less simple variant of this > pattern: > > result = process("string with {a} and {b}").format(a=1, b=2) > > which commonly collapses into > > result = translate("string with {a} and {b}", a=1, b=2) > > by wrapping the concrete use cases in appropriate helper functions. > > I've seen Nick Coghlan's proposal for an implementation backed by a > global > function, which would at least catch some of these use cases. But it > otherwise seems to me that this is a huge sledge hammer solution for a > niche problem. > > > So in my case the vast majority of calls to str.format could be replaced > with an f-string. I would also like to believe that other languages that > have adopted this approach to string interpolation did so with knowledge > that it would be worth it (but then again I don't really know how other > languages are developed so this might just be a hope that other > languages fret as much as we do about stuff). I think it has to do with the nature of the programs that people write. I write software for internal use in a large company. In the last 13 years there, I've written literally hundreds of individual programs, large and small. I just checked: literally 100% of my calls to %-formatting (older code) or str.format (in newer code) could be replaced with f-strings. And I think every such use would be an improvement. 
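To make that concrete, here's a made-up example of the kind of line I'm talking about (the
variable names are invented for illustration, not taken from any real program):

count, path, elapsed = 1240, '/var/tmp/input.csv', 3.14159  # placeholder values

# the two spellings I keep writing today:
msg_old = 'processed %d records from %s in %.2fs' % (count, path, elapsed)
msg_new = 'processed {0} records from {1} in {2:.2f}s'.format(count, path, elapsed)

# the PEP 498 spelling; the names are simply looked up in the enclosing scope
msg_f = f'processed {count} records from {path} in {elapsed:.2f}s'

assert msg_old == msg_new == msg_f

The information content is identical, but the third version keeps the values next to the text
they belong to.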
I firmly believe that the majority of software written in Python does not show up on PyPi, but is used internally in corporations. It's not internationalized or localized: it just exists to get a job done quickly. This is the code that would benefit from f-strings. This isn't to say that there's not plenty of code where f-strings would not help. But I think it's as big a mistake to generalize from my experience as it is from Stefan's. Eric. From pludemann at google.com Sun Aug 9 21:58:16 2015 From: pludemann at google.com (Peter Ludemann) Date: Sun, 9 Aug 2015 12:58:16 -0700 Subject: [Python-Dev] PEP-498: Literal String Formatting In-Reply-To: <55C79A73.1030901@trueblade.com> References: <55C55DC3.8040605@trueblade.com> <55C79A73.1030901@trueblade.com> Message-ID: Most of my outputs are log messages, so this proposal won't help me because (I presume) it does eager evaluation of the format string and the logging methods are designed to do lazy evaluation. Python doesn't have anything like Lisp's "special forms", so there doesn't seem to be a way to implicitly put a lambda on the string to delay evaluation. It would be nice to be able to mark the formatting as lazy ... maybe another string prefix character to indicate that? (And would the 2nd expression in an assert statement be lazy or eager?) PS: As to Brett's comment about the history of string interpolation ... my recollection/understanding is that it started with Unix shells and the "$variable" notation, with the "$variable" being evaluated within "..." and not within '...'. Perl, PHP, Make (and others) picked this up. There seems to be a trend to avoid the bare "$variable" form and instead use "${variable}" everywhere, mainly because "${...}" is sometimes required to avoid ambiguities (e.g. "There were $NUMBER ${THING}s.") PPS: For anyone wishing to improve the existing format options, Common Lisp's FORMAT and Prolog's format/2 have some capabilities that I miss from time to time in Python. On 9 August 2015 at 11:22, Eric V. Smith wrote: > On 8/9/2015 1:38 PM, Brett Cannon wrote: > > > > > > On Sun, 9 Aug 2015 at 01:07 Stefan Behnel > > wrote: > > > > Eric V. Smith schrieb am 08.08.2015 um 03:39: > > > Following a long discussion on python-ideas, I've posted my draft > of > > > PEP-498. It describes the "f-string" approach that was the subject > of > > > the "Briefer string format" thread. I'm open to a better title than > > > "Literal String Formatting". > > > > > > I need to add some text to the discussion section, but I think > it's in > > > reasonable shape. I have a fully working implementation that I'll > get > > > around to posting somewhere this weekend. > > > > > > >>> def how_awesome(): return 'very' > > > ... > > > >>> f'f-strings are {how_awesome()} awesome!' > > > 'f-strings are very awesome!' > > > > > > I'm open to any suggestions to improve the PEP. Thanks for your > > feedback. > > > > [copying my comment from python-ideas here] > > > > How common is this use case, really? Almost all of the string > formatting > > that I've used lately is either for logging (no help from this > proposal > > here) or requires some kind of translation/i18n *before* the > formatting, > > which is not helped by this proposal either. 
Meaning, in almost all > > cases, > > the formatting will use some more or less simple variant of this > > pattern: > > > > result = process("string with {a} and {b}").format(a=1, b=2) > > > > which commonly collapses into > > > > result = translate("string with {a} and {b}", a=1, b=2) > > > > by wrapping the concrete use cases in appropriate helper functions. > > > > I've seen Nick Coghlan's proposal for an implementation backed by a > > global > > function, which would at least catch some of these use cases. But it > > otherwise seems to me that this is a huge sledge hammer solution for > a > > niche problem. > > > > > > So in my case the vast majority of calls to str.format could be replaced > > with an f-string. I would also like to believe that other languages that > > have adopted this approach to string interpolation did so with knowledge > > that it would be worth it (but then again I don't really know how other > > languages are developed so this might just be a hope that other > > languages fret as much as we do about stuff). > > I think it has to do with the nature of the programs that people write. > I write software for internal use in a large company. In the last 13 > years there, I've written literally hundreds of individual programs, > large and small. I just checked: literally 100% of my calls to > %-formatting (older code) or str.format (in newer code) could be > replaced with f-strings. And I think every such use would be an > improvement. > > I firmly believe that the majority of software written in Python does > not show up on PyPi, but is used internally in corporations. It's not > internationalized or localized: it just exists to get a job done > quickly. This is the code that would benefit from f-strings. > > This isn't to say that there's not plenty of code where f-strings would > not help. But I think it's as big a mistake to generalize from my > experience as it is from Stefan's. > > Eric. > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/pludemann%40google.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: From brett at python.org Mon Aug 10 00:25:20 2015 From: brett at python.org (Brett Cannon) Date: Sun, 09 Aug 2015 22:25:20 +0000 Subject: [Python-Dev] PEP-498: Literal String Formatting In-Reply-To: References: <55C55DC3.8040605@trueblade.com> <55C79A73.1030901@trueblade.com> Message-ID: On Sun, Aug 9, 2015, 13:51 Peter Ludemann via Python-Dev < python-dev at python.org> wrote: Most of my outputs are log messages, so this proposal won't help me because (I presume) it does eager evaluation of the format string and the logging methods are designed to do lazy evaluation. Python doesn't have anything like Lisp's "special forms", so there doesn't seem to be a way to implicitly put a lambda on the string to delay evaluation. It would be nice to be able to mark the formatting as lazy ... maybe another string prefix character to indicate that? (And would the 2nd expression in an assert statement be lazy or eager?) That would require a lazy string type which is beyond the scope of this PEP as proposed since it would require its own design choices, how much code would not like the different type, etc. -Brett PS: As to Brett's comment about the history of string interpolation ... 
my recollection/understanding is that it started with Unix shells and the "$variable" notation, with the "$variable" being evaluated within "..." and not within '...'. Perl, PHP, Make (and others) picked this up. There seems to be a trend to avoid the bare "$variable" form and instead use "${variable}" everywhere, mainly because "${...}" is sometimes required to avoid ambiguities (e.g. "There were $NUMBER ${THING}s.") PPS: For anyone wishing to improve the existing format options, Common Lisp's FORMAT and Prolog's format/2 have some capabilities that I miss from time to time in Python. On 9 August 2015 at 11:22, Eric V. Smith wrote: On 8/9/2015 1:38 PM, Brett Cannon wrote: > > > On Sun, 9 Aug 2015 at 01:07 Stefan Behnel > wrote: > > Eric V. Smith schrieb am 08.08.2015 um 03:39: > > Following a long discussion on python-ideas, I've posted my draft of > > PEP-498. It describes the "f-string" approach that was the subject of > > the "Briefer string format" thread. I'm open to a better title than > > "Literal String Formatting". > > > > I need to add some text to the discussion section, but I think it's in > > reasonable shape. I have a fully working implementation that I'll get > > around to posting somewhere this weekend. > > > > >>> def how_awesome(): return 'very' > > ... > > >>> f'f-strings are {how_awesome()} awesome!' > > 'f-strings are very awesome!' > > > > I'm open to any suggestions to improve the PEP. Thanks for your > feedback. > > [copying my comment from python-ideas here] > > How common is this use case, really? Almost all of the string formatting > that I've used lately is either for logging (no help from this proposal > here) or requires some kind of translation/i18n *before* the formatting, > which is not helped by this proposal either. Meaning, in almost all > cases, > the formatting will use some more or less simple variant of this > pattern: > > result = process("string with {a} and {b}").format(a=1, b=2) > > which commonly collapses into > > result = translate("string with {a} and {b}", a=1, b=2) > > by wrapping the concrete use cases in appropriate helper functions. > > I've seen Nick Coghlan's proposal for an implementation backed by a > global > function, which would at least catch some of these use cases. But it > otherwise seems to me that this is a huge sledge hammer solution for a > niche problem. > > > So in my case the vast majority of calls to str.format could be replaced > with an f-string. I would also like to believe that other languages that > have adopted this approach to string interpolation did so with knowledge > that it would be worth it (but then again I don't really know how other > languages are developed so this might just be a hope that other > languages fret as much as we do about stuff). I think it has to do with the nature of the programs that people write. I write software for internal use in a large company. In the last 13 years there, I've written literally hundreds of individual programs, large and small. I just checked: literally 100% of my calls to %-formatting (older code) or str.format (in newer code) could be replaced with f-strings. And I think every such use would be an improvement. I firmly believe that the majority of software written in Python does not show up on PyPi, but is used internally in corporations. It's not internationalized or localized: it just exists to get a job done quickly. This is the code that would benefit from f-strings. This isn't to say that there's not plenty of code where f-strings would not help. 
But I think it's as big a mistake to generalize from my experience as it is from Stefan's. Eric. _______________________________________________ Python-Dev mailing list Python-Dev at python.org https://mail.python.org/mailman/listinfo/python-dev Unsubscribe: https://mail.python.org/mailman/options/python-dev/pludemann%40google.com _______________________________________________ Python-Dev mailing list Python-Dev at python.org https://mail.python.org/mailman/listinfo/python-dev Unsubscribe: https://mail.python.org/mailman/options/python-dev/brett%40python.org -------------- next part -------------- An HTML attachment was scrubbed... URL: From pludemann at google.com Mon Aug 10 02:24:16 2015 From: pludemann at google.com (Peter Ludemann) Date: Sun, 9 Aug 2015 17:24:16 -0700 Subject: [Python-Dev] PEP-498: Literal String Formatting In-Reply-To: References: <55C55DC3.8040605@trueblade.com> <55C79A73.1030901@trueblade.com> Message-ID: What if logging understood lambda? (By testing for types.FunctionType). This is outside PEP 498, but there might be some recommendations on how "lazy" evaluation should be done and understood by some functions. e.g.: log.info(lambda: f'{foo} just did a {bar} thing') It's not pretty, but it's not too verbose. As far as I can tell, PEP 498 would work with this because it implicitly supports closures ? that is, it's defined as equivalent to log.info(lambda: ''.join([foo.__format__(), ' just did a ', bar.__format__(), ' thing'])) On 9 August 2015 at 15:25, Brett Cannon wrote: > > On Sun, Aug 9, 2015, 13:51 Peter Ludemann via Python-Dev < > python-dev at python.org> wrote: > > Most of my outputs are log messages, so this proposal won't help me > because (I presume) it does eager evaluation of the format string and the > logging methods are designed to do lazy evaluation. Python doesn't have > anything like Lisp's "special forms", so there doesn't seem to be a way to > implicitly put a lambda on the string to delay evaluation. > > It would be nice to be able to mark the formatting as lazy ... maybe > another string prefix character to indicate that? (And would the 2nd > expression in an assert statement be lazy or eager?) > > > That would require a lazy string type which is beyond the scope of this > PEP as proposed since it would require its own design choices, how much > code would not like the different type, etc. > > -Brett > > > PS: As to Brett's comment about the history of string interpolation ... my > recollection/understanding is that it started with Unix shells and the > "$variable" notation, with the "$variable" being evaluated within "..." and > not within '...'. Perl, PHP, Make (and others) picked this up. There seems > to be a trend to avoid the bare "$variable" form and instead use > "${variable}" everywhere, mainly because "${...}" is sometimes required to > avoid ambiguities (e.g. "There were $NUMBER ${THING}s.") > > PPS: For anyone wishing to improve the existing format options, Common > Lisp's FORMAT > and Prolog's format/2 > > have some capabilities that I miss from time to time in Python. > > On 9 August 2015 at 11:22, Eric V. Smith wrote: > > On 8/9/2015 1:38 PM, Brett Cannon wrote: > > > > > > On Sun, 9 Aug 2015 at 01:07 Stefan Behnel > > > wrote: > > > > Eric V. Smith schrieb am 08.08.2015 um 03:39: > > > Following a long discussion on python-ideas, I've posted my draft > of > > > PEP-498. It describes the "f-string" approach that was the subject > of > > > the "Briefer string format" thread. 
I'm open to a better title than > > > "Literal String Formatting". > > > > > > I need to add some text to the discussion section, but I think > it's in > > > reasonable shape. I have a fully working implementation that I'll > get > > > around to posting somewhere this weekend. > > > > > > >>> def how_awesome(): return 'very' > > > ... > > > >>> f'f-strings are {how_awesome()} awesome!' > > > 'f-strings are very awesome!' > > > > > > I'm open to any suggestions to improve the PEP. Thanks for your > > feedback. > > > > [copying my comment from python-ideas here] > > > > How common is this use case, really? Almost all of the string > formatting > > that I've used lately is either for logging (no help from this > proposal > > here) or requires some kind of translation/i18n *before* the > formatting, > > which is not helped by this proposal either. Meaning, in almost all > > cases, > > the formatting will use some more or less simple variant of this > > pattern: > > > > result = process("string with {a} and {b}").format(a=1, b=2) > > > > which commonly collapses into > > > > result = translate("string with {a} and {b}", a=1, b=2) > > > > by wrapping the concrete use cases in appropriate helper functions. > > > > I've seen Nick Coghlan's proposal for an implementation backed by a > > global > > function, which would at least catch some of these use cases. But it > > otherwise seems to me that this is a huge sledge hammer solution for > a > > niche problem. > > > > > > So in my case the vast majority of calls to str.format could be replaced > > with an f-string. I would also like to believe that other languages that > > have adopted this approach to string interpolation did so with knowledge > > that it would be worth it (but then again I don't really know how other > > languages are developed so this might just be a hope that other > > languages fret as much as we do about stuff). > > I think it has to do with the nature of the programs that people write. > I write software for internal use in a large company. In the last 13 > years there, I've written literally hundreds of individual programs, > large and small. I just checked: literally 100% of my calls to > %-formatting (older code) or str.format (in newer code) could be > replaced with f-strings. And I think every such use would be an > improvement. > > I firmly believe that the majority of software written in Python does > not show up on PyPi, but is used internally in corporations. It's not > internationalized or localized: it just exists to get a job done > quickly. This is the code that would benefit from f-strings. > > This isn't to say that there's not plenty of code where f-strings would > not help. But I think it's as big a mistake to generalize from my > experience as it is from Stefan's. > > Eric. > > > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > > > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/pludemann%40google.com > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/brett%40python.org > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From python at mrabarnett.plus.com Mon Aug 10 02:33:09 2015 From: python at mrabarnett.plus.com (MRAB) Date: Mon, 10 Aug 2015 01:33:09 +0100 Subject: [Python-Dev] PEP-498: Literal String Formatting In-Reply-To: References: <55C55DC3.8040605@trueblade.com> <55C79A73.1030901@trueblade.com> Message-ID: <55C7F145.3030004@mrabarnett.plus.com> On 2015-08-10 01:24, Peter Ludemann via Python-Dev wrote: > What if logging understood lambda? (By testing for types.FunctionType). [snip] Why not use 'callable'? From eric at trueblade.com Mon Aug 10 03:02:09 2015 From: eric at trueblade.com (Eric V. Smith) Date: Sun, 9 Aug 2015 21:02:09 -0400 Subject: [Python-Dev] PEP-498: Literal String Formatting In-Reply-To: References: <55C55DC3.8040605@trueblade.com> <55C79A73.1030901@trueblade.com> Message-ID: <55C7F811.6000808@trueblade.com> On 8/9/2015 8:24 PM, Peter Ludemann wrote: > What if logging understood lambda? (By testing for types.FunctionType). > This is outside PEP 498, but there might be some recommendations on how > "lazy" evaluation should be done and understood by some functions. > > e.g.: > log.info (lambda: f'{foo} just did a {bar} thing') > > It's not pretty, but it's not too verbose. As far as I can tell, PEP 498 > would work with this because it implicitly supports closures ? that is, > it's defined as equivalent to > log.info (lambda: ''.join([foo.__format__(), ' just did > a ', bar.__format__(), ' thing'])) > That basically works: class Foo: def __init__(self, name): self.name = name def __format__(self, fmt): print(f'__format__: {self.name}') return f'{self.name}' class Logger: # accumulate log messages until flush is called def __init__(self): self.values = [] def log(self, value): self.values.append(value) def flush(self): for value in self.values: if callable(value): value = value() print(f'log: {value}') logger = Logger() f1 = Foo('one') f2 = Foo('two') print('before log calls') logger.log('first log message') logger.log(lambda:f'f: {f1} {f2}') logger.log('last log message') print('after log calls') f1 = Foo('three') logger.flush() produces: before log calls after log calls log: first log message __format__: three __format__: two log: f: three two log: last log message But note that when the lambdas are called, f1 is bound to Foo('three'), so that's what's printed. I don't think that's what the logging module would normally do, since it wouldn't see the rebinding. I guess you'd have to change logging to do something special if it had a single argument which is a callable, or add new interface to it. And of course you'd have to live with the ugliness of lambdas in the logging calls. So, I can't say I'm a huge fan of the approach. But writing examples using f-strings is way more fun that using %-formatting or str.format! But it does remind me I still need to implement f'{field:{width}}'. Eric. From eric at trueblade.com Mon Aug 10 03:05:45 2015 From: eric at trueblade.com (Eric V. Smith) Date: Sun, 9 Aug 2015 21:05:45 -0400 Subject: [Python-Dev] PEP-498: Literal String Formatting In-Reply-To: <55C7F811.6000808@trueblade.com> References: <55C55DC3.8040605@trueblade.com> <55C79A73.1030901@trueblade.com> <55C7F811.6000808@trueblade.com> Message-ID: <55C7F8E9.2020504@trueblade.com> On 8/9/2015 9:02 PM, Eric V. Smith wrote: > On 8/9/2015 8:24 PM, Peter Ludemann wrote: >> What if logging understood lambda? (By testing for types.FunctionType). >> This is outside PEP 498, but there might be some recommendations on how >> "lazy" evaluation should be done and understood by some functions. 
>> >> e.g.: >> log.info (lambda: f'{foo} just did a {bar} thing') >> >> It's not pretty, but it's not too verbose. As far as I can tell, PEP 498 >> would work with this because it implicitly supports closures ? that is, >> it's defined as equivalent to >> log.info (lambda: ''.join([foo.__format__(), ' just did >> a ', bar.__format__(), ' thing'])) >> > > That basically works: > class Foo: > def __init__(self, name): > self.name = name > > def __format__(self, fmt): > print(f'__format__: {self.name}') > return f'{self.name}' > > > class Logger: > # accumulate log messages until flush is called > def __init__(self): > self.values = [] > > def log(self, value): > self.values.append(value) > > def flush(self): > for value in self.values: > if callable(value): > value = value() > print(f'log: {value}') > > logger = Logger() > > f1 = Foo('one') > f2 = Foo('two') > print('before log calls') > logger.log('first log message') > logger.log(lambda:f'f: {f1} {f2}') > logger.log('last log message') > print('after log calls') > f1 = Foo('three') > logger.flush() > > > produces: > > before log calls > after log calls > log: first log message > __format__: three > __format__: two > log: f: three two > log: last log message > > > But note that when the lambdas are called, f1 is bound to Foo('three'), > so that's what's printed. I don't think that's what the logging module > would normally do, since it wouldn't see the rebinding. > > I guess you'd have to change logging to do something special if it had a > single argument which is a callable, or add new interface to it. > > And of course you'd have to live with the ugliness of lambdas in the > logging calls. > > So, I can't say I'm a huge fan of the approach. But writing examples > using f-strings is way more fun that using %-formatting or str.format! Here's a better example that shows the closure. Same output as above: class Foo: def __init__(self, name): self.name = name def __format__(self, fmt): print(f'__format__: {self.name}') return f'{self.name}' class Logger: # accumulate log messages until flush is called def __init__(self): self.values = [] def log(self, value): self.values.append(value) def flush(self): for value in self.values: if callable(value): value = value() print(f'log: {value}') def do_something(logger): f1 = Foo('one') f2 = Foo('two') print('before log calls') logger.log('first log message') logger.log(lambda:f'f: {f1} {f2}') logger.log('last log message') print('after log calls') f1 = Foo('three') logger = Logger() do_something(logger) logger.flush() From mertz at gnosis.cx Mon Aug 10 03:14:18 2015 From: mertz at gnosis.cx (David Mertz) Date: Sun, 9 Aug 2015 18:14:18 -0700 Subject: [Python-Dev] PEP-498: Literal String Formatting In-Reply-To: <55C79A73.1030901@trueblade.com> References: <55C55DC3.8040605@trueblade.com> <55C79A73.1030901@trueblade.com> Message-ID: On Sun, Aug 9, 2015 at 11:22 AM, Eric V. Smith wrote: > > I think it has to do with the nature of the programs that people write. > I write software for internal use in a large company. In the last 13 > years there, I've written literally hundreds of individual programs, > large and small. I just checked: literally 100% of my calls to > %-formatting (older code) or str.format (in newer code) could be > replaced with f-strings. And I think every such use would be an > improvement. 
> I'm sure that pretty darn close to 100% of all the uses of %-formatting and str.format I've written in the last 13 years COULD be replaced by the proposed f-strings (I suppose about 16 years for me, actually). But I think that every single such replacement would make the programs worse. I'm not sure if it helps to mention that I *did* actually "write the book" on _Text Processing in Python_ :-). The proposal just continues to seem far too magical to me. In the training I now do for Continuum Analytics (I'm in charge of the training program with one other person), I specifically have a (very) little bit of the lessons where I mention something like: print("{foo} is {bar}".format(**locals())) But I give that entirely as a negative example of abusing code and introducing fragility. f-strings are really the same thing, only even more error-prone and easier to get wrong. Relying on implicit context of the runtime state of variables that are merely in scope feels very break-y to me still. If I had to teach f-strings in the future, I'd teach it as a Python wart. That said, there *is* one small corner where I believe f-strings add something helpful to the language. There is no really concise way to spell: collections.ChainMap(locals(), globals(), __builtins__.__dict__). If we could spell that as, say `lgb()`, that would let str.format() or %-formatting pick up the full "what's in scope". To my mind, that's the only good thing about the f-string idea. Yours, David... -- Keeping medicines from the bloodstreams of the sick; food from the bellies of the hungry; books from the hands of the uneducated; technology from the underdeveloped; and putting advocates of freedom in prisons. Intellectual property is to the 21st century what the slave trade was to the 16th. -------------- next part -------------- An HTML attachment was scrubbed... URL: From mertz at gnosis.cx Mon Aug 10 03:54:38 2015 From: mertz at gnosis.cx (David Mertz) Date: Sun, 9 Aug 2015 18:54:38 -0700 Subject: [Python-Dev] PEP-498: Literal String Formatting In-Reply-To: References: <55C55DC3.8040605@trueblade.com> <55C79A73.1030901@trueblade.com> Message-ID: Y'know, I just read a few more posts over on python-ideas that I had missed somehow. I saw Guido's point about `**locals()` being too specialized and magical for beginners, which I agree with. And it's the other aspect of "magic" that makes me not like f-strings. The idea of *implicitly* getting values from the local scope (or really, the global_local_builtin scope) makes me worry about readers of code very easily missing what's really going on within an f-string. I don't actually care about the code injection issues and that sort of thing. I mean, OK I care a little bit, but my actual concern is purely explicitness and readability. Which brought to mind a certain thought. While I don't like: f'My name is {name}, my age next year is {age+1}' I wouldn't have any similar objection to: 'My name is {name}, my age next year is {age+1}'.scope_format() Or scope_format('My name is {name}, my age next year is {age+1}') I realize that these could be completely semantically equivalent... but the function or method call LOOKS LIKE a runtime operation, while a one letter prefix just doesn't look like that (especially to beginners whom I might teach). The name 'scope_format' is ugly, and something shorter would be nicer, but I think this conveys my idea. Yours, David... On Sun, Aug 9, 2015 at 6:14 PM, David Mertz wrote: > On Sun, Aug 9, 2015 at 11:22 AM, Eric V. 
Smith wrote: >> >> I think it has to do with the nature of the programs that people write. >> I write software for internal use in a large company. In the last 13 >> years there, I've written literally hundreds of individual programs, >> large and small. I just checked: literally 100% of my calls to >> %-formatting (older code) or str.format (in newer code) could be >> replaced with f-strings. And I think every such use would be an >> improvement. >> > > I'm sure that pretty darn close to 100% of all the uses of %-formatting > and str.format I've written in the last 13 years COULD be replaced by the > proposed f-strings (I suppose about 16 years for me, actually). But I > think that every single such replacement would make the programs worse. > I'm not sure if it helps to mention that I *did* actually "write the book" > on _Text Processing in Python_ :-). > > The proposal just continues to seem far too magical to me. In the > training I now do for Continuum Analytics (I'm in charge of the training > program with one other person), I specifically have a (very) little bit of > the lessons where I mention something like: > > print("{foo} is {bar}".format(**locals())) > > But I give that entirely as a negative example of abusing code and > introducing fragility. f-strings are really the same thing, only even more > error-prone and easier to get wrong. Relying on implicit context of the > runtime state of variables that are merely in scope feels very break-y to > me still. If I had to teach f-strings in the future, I'd teach it as a > Python wart. > > That said, there *is* one small corner where I believe f-strings add > something helpful to the language. There is no really concise way to spell: > > collections.ChainMap(locals(), globals(), __builtins__.__dict__). > > If we could spell that as, say `lgb()`, that would let str.format() or > %-formatting pick up the full "what's in scope". To my mind, that's the > only good thing about the f-string idea. > > Yours, David... > > -- > Keeping medicines from the bloodstreams of the sick; food > from the bellies of the hungry; books from the hands of the > uneducated; technology from the underdeveloped; and putting > advocates of freedom in prisons. Intellectual property is > to the 21st century what the slave trade was to the 16th. > -- Keeping medicines from the bloodstreams of the sick; food from the bellies of the hungry; books from the hands of the uneducated; technology from the underdeveloped; and putting advocates of freedom in prisons. Intellectual property is to the 21st century what the slave trade was to the 16th. -------------- next part -------------- An HTML attachment was scrubbed... URL: From wes.turner at gmail.com Mon Aug 10 05:04:42 2015 From: wes.turner at gmail.com (Wes Turner) Date: Sun, 9 Aug 2015 22:04:42 -0500 Subject: [Python-Dev] PEP-498: Literal String Formatting In-Reply-To: References: <55C55DC3.8040605@trueblade.com> <55C79A73.1030901@trueblade.com> Message-ID: On Aug 9, 2015 8:14 PM, "David Mertz" wrote: > > On Sun, Aug 9, 2015 at 11:22 AM, Eric V. Smith wrote: >> >> I think it has to do with the nature of the programs that people write. >> I write software for internal use in a large company. In the last 13 >> years there, I've written literally hundreds of individual programs, >> large and small. I just checked: literally 100% of my calls to >> %-formatting (older code) or str.format (in newer code) could be >> replaced with f-strings. And I think every such use would be an improvement. 
> > > I'm sure that pretty darn close to 100% of all the uses of %-formatting and str.format I've written in the last 13 years COULD be replaced by the proposed f-strings (I suppose about 16 years for me, actually). But I think that every single such replacement would make the programs worse. I'm not sure if it helps to mention that I *did* actually "write the book" on _Text Processing in Python_ :-). > > The proposal just continues to seem far too magical to me. In the training I now do for Continuum Analytics (I'm in charge of the training program with one other person), I specifically have a (very) little bit of the lessons where I mention something like: > > print("{foo} is {bar}".format(**locals())) > > But I give that entirely as a negative example of abusing code and introducing fragility. f-strings are really the same thing, only even more error-prone and easier to get wrong. Relying on implicit context of the runtime state of variables that are merely in scope feels very break-y to me still. If I had to teach f-strings in the future, I'd teach it as a Python wart. My editor matches \bsym\b, but not locals() or "{sym}"; when I press *. #traceability > > That said, there *is* one small corner where I believe f-strings add something helpful to the language. There is no really concise way to spell: > > collections.ChainMap(locals(), globals(), __builtins__.__dict__). > > If we could spell that as, say `lgb()`, that would let str.format() or %-formatting pick up the full "what's in scope". To my mind, that's the only good thing about the f-string idea. +1. This would be the explicit way to be loose with variable scope and string interpolation, while maintaining grep-ability. > > Yours, David... > > -- > Keeping medicines from the bloodstreams of the sick; food > from the bellies of the hungry; books from the hands of the > uneducated; technology from the underdeveloped; and putting > advocates of freedom in prisons. Intellectual property is > to the 21st century what the slave trade was to the 16th. > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/wes.turner%40gmail.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: From srkunze at mail.de Mon Aug 10 07:29:50 2015 From: srkunze at mail.de (Sven R. Kunze) Date: Mon, 10 Aug 2015 07:29:50 +0200 Subject: [Python-Dev] PEP-498: Literal String Formatting In-Reply-To: <55C78C51.8030008@trueblade.com> References: <55C55DC3.8040605@trueblade.com> <55C78C51.8030008@trueblade.com> Message-ID: <55C836CE.5030203@mail.de> After I read Nick's proposal and pondering over the 'f' vs. 'r' examples, I like the 'i' prefix more (regardless of the internal implementation). The best solution would be "without prefix and '{var}' only" syntax. Not sure if that is possible at all; I cannot remember using '{...}' anywhere else than for formatting. On 09.08.2015 19:22, Eric V. Smith wrote: > On 8/8/2015 9:08 PM, Tim Delaney wrote: >> On 8 August 2015 at 11:39, Eric V. Smith > > wrote: >> >> Following a long discussion on python-ideas, I've posted my draft of >> PEP-498. It describes the "f-string" approach that was the subject of >> the "Briefer string format" thread. I'm open to a better title than >> "Literal String Formatting". >> >> I need to add some text to the discussion section, but I think it's in >> reasonable shape. 
I have a fully working implementation that I'll get >> around to posting somewhere this weekend. >> >> >>> def how_awesome(): return 'very' >> ... >> >>> f'f-strings are {how_awesome()} awesome!' >> 'f-strings are very awesome!' >> >> I'm open to any suggestions to improve the PEP. Thanks for your >> feedback. >> >> >> I'd like to see an alternatives section, in particular listing >> alternative prefixes and why they weren't chosen over f. Off the top of >> my head, ones I've seen listed are: >> >> ! >> $ > I'll add something, but there's no particular reason. "f" for formatted, > along the lines of 'r' raw, 'b' bytes, and 'u' unicode. > > Especially when you want to combine them, I think a letter looks better: > fr'{x} a formatted raw string' > $r'{x} a formatted raw string' > > Eric. > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/srkunze%40mail.de From larry at hastings.org Mon Aug 10 10:05:40 2015 From: larry at hastings.org (Larry Hastings) Date: Mon, 10 Aug 2015 01:05:40 -0700 Subject: [Python-Dev] Python 3.5.0rc1 is delayed by a day Message-ID: <55C85B54.4020007@hastings.org> We retagged Python 3.5.0rc1 today to fix two bugs that popped up late in the process. Release candidates are supposed to be software you genuinely would release, and I couldn't release Python with both those bugs. This delay rippled through the whole process, so it just isn't going out tonight (late Sunday / early Monday in my timezone). I have every expectation it'll go out Monday. In case you're interested, the bugs are (were!): http://bugs.python.org/issue24745 http://bugs.python.org/issue24835 My thanks to the folks who stepped up and fixed the bugs on short notice, and my apologies to the community for the delay. We're just trying to make the best Python we can, for yooooooooou! See you tomorrow, //arry/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From larry at hastings.org Mon Aug 10 10:27:58 2015 From: larry at hastings.org (Larry Hastings) Date: Mon, 10 Aug 2015 01:27:58 -0700 Subject: [Python-Dev] Instructions on the new "push request" workflow for 3.5.0rc1+ through 3.5.0 final Message-ID: <55C8608E.5020509@hastings.org> As of Python 3.5.0rc1, the canonical repository for Python 3.5.0 is *no longer* on hg.python.org. Instead, it's hosted on Bitbucket on my personal account, here: https://bitbucket.org/larry/cpython350 Since 3.5.0rc1 isn't out yet I'm keeping the repository private for now. Once 3.5.0 rc1 is released (hopefully Monday) I'll flip the switch and make the repository public. (I'll email python-dev and python-committers when that happens.) Putting it succinctly, here's a table of versions and where you'd check in for your change to go there: 3.5.0 : https://bitbucket.org/larry/cpython350 (branch "3.5") 3.5.1 : hg.python.org/cpython (branch "3.5") 3.6.0 : hg.python.org/cpython (branch "default") You'll notice nobody but myself has checkin permissions for my 3.5.0 repo on Bitbucket. That's on purpose. The only way you can get changes in to 3.5.0 now is by sending me a Bitbucket "pull request". This is a workflow experiment, to see if we as a community like this sort of new-fangled gizmo. For now, we're only using Bitbucket for the actual final checking-in stage. Requests for fixes to be accepted into 3.5.0 and code review will all still happen on the Python issue tracker. 
Also, I'm officially now asking you folks to do the forward-merge into 3.5.1 and 3.6.0 yourselves. Here's how to get a fix checked in for 3.5.0, starting with 3.5.0rc1+ and continuing through until 3.5.0 final. Pre-requisites: * You must have a Bitbucket account. * You must have commit rights to the CPython repository. 1. Create an issue on the Python issue tracker for the problem. 2. Submit a patch that fixes the problem. 3. Add me to the issue and get me to agree that it needs fixing in 3.5.0. (You can attempt this step before 2 if you prefer.) 4. Fork my repository into your Bitbucket account using their web GUI. To do that, go to Bitbucket, log in, then go to my 3.5.0 repo: https://bitbucket.org/larry/cpython350 and press the "Fork" link in the left column. Bitbucket has a tutorial on how to do this, here: https://confluence.atlassian.com/display/BITBUCKET/Fork+a+teammate%27s+repository Note: DO NOT start with a conventional CPython trunk cloned from hg.python.org. The 3.5 branch in my repo and the 3.5 branch in normal CPython trunk have intentionally diverged and *need* to stay out-of-sync. 5. Make a local clone of your fork on your computer. Bitbucket has a tutorial on how to do that, here: https://confluence.atlassian.com/display/BITBUCKET/Copy+your+Mercurial+repository+and+add+source+files Reminder: switch to the 3.5 branch! 6. Apply your change to the 3.5 branch and check in. Reminder: check in to the 3.5 branch! 7. Make sure you checked in your change to the 3.5 branch. Reminder: Seriously. I keep messing this up. I say, the more reminders, the better. 8. Push your change back to *your* fork on *your* Bitbucket account. Just normal "hg push" should work here. In case it helps, I recommend using the "https" protocol for this step, as it sidesteps ssh authentication and prompts you for your Bitbucket username and password. 9. Create a pull request using Bitbucket's web GUI. Bitbucket has a tutorial on how to create a pull request, here: https://confluence.atlassian.com/display/BITBUCKET/Create+a+pull+request On the "Create pull request" web GUI, make sure that you specify branch "3.5" for *both* your repo *and* my repo. Also, make sure you *don't* check the "Close 3.5 after the pull request is merged" check box. (If you use the "Compare" page, you also need to select "3.5" in both drop-down lists--one for my repo, and one for yours.) 10. Paste a link to the pull request into the issue tracker issue for this change request. 11. Wait for confirmation that I've accepted your pull request into the 3.5.0 repo. 12. Pull your accepted change from your local Bitbucket fork repo into a normal hg.cpython.org CPython repo, merge into 3.5, then merge into 3.6, then push. For the record, here's what *my* workflow looks like when I accept your pull request: 1. Click on the URL you pasted into the pull request. 2. Visually check that the diff matches the approved diff in the issue on the issue tracker. 3. Click on the "Merge" button. Frequently Asked Questions ========================== Q: What if someone sends me a "pull request" for a change that doesn't merge cleanly? A: Then I'll decline it, and ask you on the issue tracker to rebase and resubmit. Q: What if someone sends me a "pull request" but they don't have commit rights to CPython? A: I'll ignore it. I'll only pay attention to pull requests pasted into the issue tracker by someone with commit rights. Q: Whose name goes on the commit? A: It gets the name the checkin was made with. Don't worry, your name will stay on your commit. 
Q: This seems like a lot more work than the old way. A: For you guys, yes. But notice how little work it is for *me*! Seriously. Q: Can I reuse my fork / my local copy? Or do I have to create a fresh one each time? A: I don't care either way. All I care about are clean pull requests. If you're careful you should have no trouble reusing forks and local checkouts. If it were me, I'd probably use a fresh fork each time. Forks are cheap and this way is cleaner. I'll add these instructions to the Python Dev Guide in the next day or two. /arry p.s. Remember to use the 3.5 branch! From tritium-list at sdamon.com Mon Aug 10 12:05:00 2015 From: tritium-list at sdamon.com (Alexander Walters) Date: Mon, 10 Aug 2015 06:05:00 -0400 Subject: [Python-Dev] PEP-498: Literal String Formatting In-Reply-To: <55C836CE.5030203@mail.de> References: <55C55DC3.8040605@trueblade.com> <55C78C51.8030008@trueblade.com> <55C836CE.5030203@mail.de> Message-ID: <55C8774C.1050209@sdamon.com> On 8/10/2015 01:29, Sven R. Kunze wrote: > The best solution would be "without prefix and '{var}' only" syntax. > Not sure if that is possible at all; I cannot remember using '{...}' > anywhere else than for formatting. My JSON string literal 'test fixtures' weep at that idea. From barry at python.org Mon Aug 10 13:48:54 2015 From: barry at python.org (Barry Warsaw) Date: Mon, 10 Aug 2015 07:48:54 -0400 Subject: [Python-Dev] PEP-498: Literal String Formatting In-Reply-To: References: <55C55DC3.8040605@trueblade.com> <55C79A73.1030901@trueblade.com> Message-ID: <20150810074854.329aa125@limelight.wooz.org> On Aug 09, 2015, at 06:14 PM, David Mertz wrote: >That said, there *is* one small corner where I believe f-strings add >something helpful to the language. There is no really concise way to spell: > > collections.ChainMap(locals(), globals(), __builtins__.__dict__). > >If we could spell that as, say `lgb()`, that would let str.format() or >%-formatting pick up the full "what's in scope". To my mind, that's the >only good thing about the f-string idea. That would certainly be useful to avoid sys._getframe() calls in my library, although I'd probably want the third argument to be optional (I wouldn't use it). If '{foo}' or '${foo}' syntax is adopted (with no allowance for '$foo'), it's very unlikely I'd use that over string.Template for internationalization, but the above would still be useful. Cheers, -Barry From victor.stinner at gmail.com Mon Aug 10 16:18:36 2015 From: victor.stinner at gmail.com (Victor Stinner) Date: Mon, 10 Aug 2015 16:18:36 +0200 Subject: [Python-Dev] PEP 498 f-string: is it a preprocessor? Message-ID: Hi, I read the PEP but I don't understand how it is implemented. For me, it should be a simple preprocessor: - f'x={x}' is replaced with 'x={0}'.format(x) by the compiler - f'x={1+1}' is replaced with 'x={0}'.format(1+1) - f'x={foo()!r}' is replaced with 'x={0!r}'.format(foo()) - ... That's all. No new language, no new function or method. It's unclear to me if arbitrary expressions should be allowed or not. If not, we may pass parameters by keywords instead: f'x={x}' is replaced with 'x={x}'.format(x=x) by the compiler '...'.format(...) is highly optimized. In the current PEP, I see that each parameter is rendered in its own independent buffer (.__format__() method called multiple times) and then concateneted by ''.join(...). It's less efficient that using a single call to .format(). Victor PS: it looks like the gmail application changed the font size in the middle of my email. 
I don't know how to use plain text, sorry about that. -------------- next part -------------- An HTML attachment was scrubbed... URL: From eric at trueblade.com Mon Aug 10 16:36:30 2015 From: eric at trueblade.com (Eric V. Smith) Date: Mon, 10 Aug 2015 10:36:30 -0400 Subject: [Python-Dev] PEP 498 f-string: is it a preprocessor? In-Reply-To: References: Message-ID: <55C8B6EE.10504@trueblade.com> On 08/10/2015 10:18 AM, Victor Stinner wrote: > Hi, > > I read the PEP but I don't understand how it is implemented. For me, it > should be a simple preprocessor: > > - f'x={x}' is replaced with 'x={0}'.format(x) by the compiler > - f'x={1+1}' is replaced with 'x={0}'.format(1+1) > - f'x={foo()!r}' is replaced with 'x={0!r}'.format(foo()) > - ... > > That's all. No new language, no new function or method. There is no new function or method being proposed. The "pre-processor" is being implemented as the ast is being built. As the PEP says, though, the expressions supported aren't exactly the same, so a simple conversion to str.format syntax isn't possible. > It's unclear to me if arbitrary expressions should be allowed or not. If > not, we may pass parameters by keywords instead: > > f'x={x}' is replaced with 'x={x}'.format(x=x) by the compiler > > '...'.format(...) is highly optimized. In the current PEP, I see that > each parameter is rendered in its own independent buffer (.__format__() > method called multiple times) and then concateneted by ''.join(...). > It's less efficient that using a single call to .format(). Well, that's what str.format() does: calls __format__ on each expression and concatenates the results. Except str.format() uses _PyUnicodeWriter rather than ''.join, so it skips creating the list. I guess there is a short-circuit in format() where it has an exact object check for string, float, double (and I think complex), in which case it will skip the object allocation and basically call the internals of the object's __format__ method. I could make a similar optimization here, but it would require a new opcode. I'd like to look at some benchmarks first. I don't think such an optimization should drive acceptance or not: let's decide on the functionality first. Eric. From larry at hastings.org Mon Aug 10 16:44:22 2015 From: larry at hastings.org (Larry Hastings) Date: Mon, 10 Aug 2015 07:44:22 -0700 Subject: [Python-Dev] Branch Prediction And The Performance Of Interpreters - Don't Trust Folklore Message-ID: <55C8B8C6.2090803@hastings.org> This just went by this morning on reddit's /r/programming. It's a paper that analyzed Python--among a handful of other languages--to answer the question "are branch predictors still that bad at the big switch statement approach to interpreters?" Their conclusion: no. Our simulations [...] show that, as long as the payload in the bytecode remains limited and do not feature significant amount of extra indirect branches, then the misprediction rate on the interpreter can be even become insignificant (less than 0.5 MPKI). (MPKI = missed predictions per thousand instructions) Their best results were on simulated hardware with state-of-the-art prediction algorithms ("TAGE" and "ITTAGE"), but they also demonstrate that branch predictors in real hardware are getting better quickly. When running the Unladen Swallow test suite on Python 3.3.2, compiled with USE_COMPUTED_GOTOS turned off, Intel's Nehalem experienced an average of 12.8 MPKI--but Sandy Bridge drops that to 3.5 MPKI, and Haswell reduces it further to a mere *1.4* MPKI.
(AFAICT they didn't compare against Python 3.3.2 using computed gotos, either in terms of MPKI or in overall performance.) The paper is here: https://hal.inria.fr/hal-01100647/document I suppose I wouldn't propose removing the labels-as-values opcode dispatch code yet. But perhaps that day is in sight! //arry/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From yselivanov.ml at gmail.com Mon Aug 10 17:42:44 2015 From: yselivanov.ml at gmail.com (Yury Selivanov) Date: Mon, 10 Aug 2015 11:42:44 -0400 Subject: [Python-Dev] PEP-498: Literal String Formatting In-Reply-To: <55C55DC3.8040605@trueblade.com> References: <55C55DC3.8040605@trueblade.com> Message-ID: <55C8C674.9010804@gmail.com> Eric, On 2015-08-07 9:39 PM, Eric V. Smith wrote: [..] > 'f-strings are very awesome!' > > I'm open to any suggestions to improve the PEP. Thanks for your feedback. > Congrats for the PEP, it's a cool concept! Overall I'm +1, because a lot of my own formatting code looks like this: 'something ... {var1} .. something ... {var2}'.format( var1=var1, var2=var2) However, I'm still -1 on a few things. 1. Naming. How about renaming f-strings to i-strings (short for interpolated, and, maybe, later for i18n-ed)? So instead of f'...' we will have i'...'. There is a parallel PEP 501 by Nick Coghlan proposing integrating translation mechanisms, and I think, that "i-" prefix would allow us to implement PEP 498 first, and later build upon it. And, to my ears, "i-string" sounds way better than "f-string". 2. I'm still not sold on allowing arbitrary expressions in strings. There is something about this idea that conflicts with Python philosophy and its principles. Supporting arbitrary expressions means that we give a blessing to shifting parts of application business logic to string formatting. I'd hate to see code like this: print(f'blah blah {self.foobar(spam="ham")!r} blah') to me it seems completely unreadable, and should be refactored to result = self.foobar(spam="ham") print(f'blah blah {result!r} blah') The refactored snippet of code is readable even without advanced syntax highlighting. Moreover, if we decide to implement Nick's PEP 501, then supporting expressions in f-strings will cause more harm than good, as translators usually aren't programmers. I think that the main reason behind allowing arbitrary expressions in f-strings is allowing attribute and item access: f'{foo.bar} {spam["ham"]}' If that's the case, then can we just restrict expressions allowed in f-strings to names, attribute and item lookups? And if later, there is a strong demand for full expressions, we can add them in 3.7? Thanks, Yury From steve at pearwood.info Mon Aug 10 19:07:13 2015 From: steve at pearwood.info (Steven D'Aprano) Date: Tue, 11 Aug 2015 03:07:13 +1000 Subject: [Python-Dev] PEP-498: Literal String Formatting In-Reply-To: References: <55C55DC3.8040605@trueblade.com> <55C79A73.1030901@trueblade.com> Message-ID: <20150810170713.GM3737@ando.pearwood.info> On Sun, Aug 09, 2015 at 06:14:18PM -0700, David Mertz wrote: [...] > That said, there *is* one small corner where I believe f-strings add > something helpful to the language. There is no really concise way to spell: > > collections.ChainMap(locals(), globals(), __builtins__.__dict__). I think that to match the normal name resolution rules, nonlocals() needs to slip in there between locals() and globals(). I realise that there actually isn't a nonlocals() function (perhaps there should be?). 
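For the simple cases it's easy enough to sketch what such a helper might look like. The name
caller_namespace and the sys._getframe trick below are my own invention, and it still won't
see closure variables that don't show up in f_locals:

import builtins
import sys
from collections import ChainMap

def caller_namespace():
    # merge the caller's locals, globals and builtins, in roughly the
    # order the compiler resolves names (closure cells excepted)
    frame = sys._getframe(1)
    return ChainMap(frame.f_locals, frame.f_globals, vars(builtins))

greeting = 'Hello'

def demo(name):
    return '{greeting}, {name}!'.format_map(caller_namespace())

print(demo('world'))    # -> Hello, world!

Whether peeking at the caller's frame like that is acceptable is, of course, a separate
question.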
> If we could spell that as, say `lgb()`, that would let str.format() or > %-formatting pick up the full "what's in scope". To my mind, that's the > only good thing about the f-string idea. I like the concept, but not the name. Initialisms tend to be hard to remember and rarely self-explanatory. How about scope()? -- Steve From eric at trueblade.com Mon Aug 10 19:18:16 2015 From: eric at trueblade.com (Eric V. Smith) Date: Mon, 10 Aug 2015 13:18:16 -0400 Subject: [Python-Dev] PEP-498: Literal String Formatting In-Reply-To: <20150810170713.GM3737@ando.pearwood.info> References: <55C55DC3.8040605@trueblade.com> <55C79A73.1030901@trueblade.com> <20150810170713.GM3737@ando.pearwood.info> Message-ID: <55C8DCD8.2080306@trueblade.com> On 08/10/2015 01:07 PM, Steven D'Aprano wrote: > On Sun, Aug 09, 2015 at 06:14:18PM -0700, David Mertz wrote: > > [...] >> That said, there *is* one small corner where I believe f-strings add >> something helpful to the language. There is no really concise way to spell: >> >> collections.ChainMap(locals(), globals(), __builtins__.__dict__). > > I think that to match the normal name resolution rules, nonlocals() > needs to slip in there between locals() and globals(). I realise that > there actually isn't a nonlocals() function (perhaps there should be?). > >> If we could spell that as, say `lgb()`, that would let str.format() or >> %-formatting pick up the full "what's in scope". To my mind, that's the >> only good thing about the f-string idea. > > I like the concept, but not the name. Initialisms tend to be hard > to remember and rarely self-explanatory. How about scope()? I don't see how you're going to be able to do this in the general case. Not all variables end up in locals(). See PEP-498's discussion of closures, for example. Guido has already said locals() and globals() would not be part of the solution for string interpolation (also in the PEP). PEP-498 handles the non-general case: it parses through the string to find the variables used in the expressions, and then adds them to the symbol table. Eric. From steve at pearwood.info Mon Aug 10 19:26:31 2015 From: steve at pearwood.info (Steven D'Aprano) Date: Tue, 11 Aug 2015 03:26:31 +1000 Subject: [Python-Dev] PEP-498: Literal String Formatting In-Reply-To: References: <55C55DC3.8040605@trueblade.com> <55C79A73.1030901@trueblade.com> Message-ID: <20150810172631.GN3737@ando.pearwood.info> On Sun, Aug 09, 2015 at 06:54:38PM -0700, David Mertz wrote: > Which brought to mind a certain thought. While I don't like: > > f'My name is {name}, my age next year is {age+1}' > > I wouldn't have any similar objection to: > > 'My name is {name}, my age next year is {age+1}'.scope_format() > > Or > > scope_format('My name is {name}, my age next year is {age+1}') > > I realize that these could be completely semantically equivalent... but the > function or method call LOOKS LIKE a runtime operation, while a one letter > prefix just doesn't look like that (especially to beginners whom I might > teach). I fear that this is actually worse than the f-string concept. f-strings, as far as I understand, are literals. (Well, not exactly literals.) You cannot say: # this can't happen (I think?) expr = 'age + 1' result = f'blah blah blah {' + expr + '}' and inject the expression into the f-string. That makes them a little weaker than eval(), and hence a little safer. 
But scope_format would have to be eval in disguise, since it receives a string as argument, and it can't know where it came from or how it came to be: # pretend that expr comes from, say, a web form expr = 'age + 1}{os.system("echo Pwned!") and ""' result = scope_format( 'My name is {name}, my age next year is {' + expr + '}' ) It's a dilemma, because I'm completely with you in your discomfort in having something which looks like a string literal actually be a function of sorts; but turning it into an actual function makes it more dangerous, not less. I think I would be happy with f-strings, or perhaps i-strings if we use Nick's ideas about internationalisation, and limit what they can evaluate to name lookups, attribute lookups, and indexing, just like format(). We can always relax that restriction in the future, if necessary, but it's a lot harder to tighten it. -- Steve From eric at trueblade.com Mon Aug 10 20:28:30 2015 From: eric at trueblade.com (Eric V. Smith) Date: Mon, 10 Aug 2015 14:28:30 -0400 Subject: [Python-Dev] PEP-498: Literal String Formatting In-Reply-To: <20150810172631.GN3737@ando.pearwood.info> References: <55C55DC3.8040605@trueblade.com> <55C79A73.1030901@trueblade.com> <20150810172631.GN3737@ando.pearwood.info> Message-ID: <55C8ED4E.6050104@trueblade.com> On 08/10/2015 01:26 PM, Steven D'Aprano wrote: > On Sun, Aug 09, 2015 at 06:54:38PM -0700, David Mertz wrote: > >> Which brought to mind a certain thought. While I don't like: >> >> f'My name is {name}, my age next year is {age+1}' >> >> I wouldn't have any similar objection to: >> >> 'My name is {name}, my age next year is {age+1}'.scope_format() >> >> Or >> >> scope_format('My name is {name}, my age next year is {age+1}') >> >> I realize that these could be completely semantically equivalent... but the >> function or method call LOOKS LIKE a runtime operation, while a one letter >> prefix just doesn't look like that (especially to beginners whom I might >> teach). > > I fear that this is actually worse than the f-string concept. f-strings, > as far as I understand, are literals. (Well, not exactly literals.) You > cannot say: > > # this can't happen (I think?) > expr = 'age + 1' > result = f'blah blah blah {' + expr + '}' > > and inject the expression into the f-string. That makes them a little > weaker than eval(), and hence a little safer. Correct. f-strings only work on literals. They essentially convert the f-string literal into an expression (which is not strictly specified in the PEP, but it has examples). > But scope_format would > have to be eval in disguise, since it receives a string as argument, > and it can't know where it came from or how it came to be: > > # pretend that expr comes from, say, a web form > expr = 'age + 1}{os.system("echo Pwned!") and ""' > result = scope_format( > 'My name is {name}, my age next year is {' + expr + '}' > ) > > It's a dilemma, because I'm completely with you in your discomfort in > having something which looks like a string literal actually be a > function of sorts; but turning it into an actual function makes it more > dangerous, not less. > > I think I would be happy with f-strings, or perhaps i-strings if we use > Nick's ideas about internationalisation, and limit what they can > evaluate to name lookups, attribute lookups, and indexing, just like > format(). > > We can always relax that restriction in the future, if necessary, but > it's a lot harder to tighten it. This desire, which many people have expressed, is not completely lost on me. Eric. 
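For reference, the restricted subset being discussed -- plain names,
attribute lookups and indexing -- is exactly what str.format() field names
already allow today:

    import datetime

    d = {'ham': 'eggs'}
    now = datetime.datetime(2015, 8, 10, 14, 49)

    # Name/attribute/index access only; no arbitrary expressions.
    print('year={0.year} ham={1[ham]}'.format(now, d))
    # -> year=2015 ham=eggs
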
From barry at python.org Mon Aug 10 20:31:27 2015 From: barry at python.org (Barry Warsaw) Date: Mon, 10 Aug 2015 14:31:27 -0400 Subject: [Python-Dev] PEP-498: Literal String Formatting In-Reply-To: <20150810172631.GN3737@ando.pearwood.info> References: <55C55DC3.8040605@trueblade.com> <55C79A73.1030901@trueblade.com> <20150810172631.GN3737@ando.pearwood.info> Message-ID: <20150810143127.66c5f842@anarchist.wooz.org> On Aug 11, 2015, at 03:26 AM, Steven D'Aprano wrote: >I think I would be happy with f-strings, or perhaps i-strings if we use >Nick's ideas about internationalisation, and limit what they can evaluate to >name lookups, attribute lookups, and indexing, just like format(). I still think you really only need name lookups, especially for an i18n context. Anything else is just overkill, YAGNI, potentially error prone, or perhaps even harmful. Remember that the translated strings usually come from only moderately (if at all) trusted and verified sources, so it's entirely possible that a malicious translator could sneak in an exploit, especially if you're evaluating arbitrary expressions. If you're only doing name substitutions, then the worst that can happen is an information leak, which is bad, but won't compromise the integrity of say a server using the translation. Even if the source strings avoid the use of expressions, if the feature is available, a translator could still sneak something in. That pretty much makes it a non-starter for i18n, IMHO. Besides, any expression you have to calculate can go in a local that will get interpolated. The same goes for any !r or other formatting modifiers. In an i18n context, you want to stick to the simplest possible substitution placeholders. Cheers, -Barry From eric at trueblade.com Mon Aug 10 20:37:12 2015 From: eric at trueblade.com (Eric V. Smith) Date: Mon, 10 Aug 2015 14:37:12 -0400 Subject: [Python-Dev] PEP-498: Literal String Formatting In-Reply-To: <20150810143127.66c5f842@anarchist.wooz.org> References: <55C55DC3.8040605@trueblade.com> <55C79A73.1030901@trueblade.com> <20150810172631.GN3737@ando.pearwood.info> <20150810143127.66c5f842@anarchist.wooz.org> Message-ID: <55C8EF58.6040100@trueblade.com> On 08/10/2015 02:31 PM, Barry Warsaw wrote: > On Aug 11, 2015, at 03:26 AM, Steven D'Aprano wrote: > >> I think I would be happy with f-strings, or perhaps i-strings if we use >> Nick's ideas about internationalisation, and limit what they can evaluate to >> name lookups, attribute lookups, and indexing, just like format(). > > I still think you really only need name lookups, especially for an i18n > context. Anything else is just overkill, YAGNI, potentially error prone, or > perhaps even harmful. > > Remember that the translated strings usually come from only moderately (if at > all) trusted and verified sources, so it's entirely possible that a malicious > translator could sneak in an exploit, especially if you're evaluating > arbitrary expressions. If you're only doing name substitutions, then the > worst that can happen is an information leak, which is bad, but won't > compromise the integrity of say a server using the translation. > > Even if the source strings avoid the use of expressions, if the feature is > available, a translator could still sneak something in. That pretty much > makes it a non-starter for i18n, IMHO. > > Besides, any expression you have to calculate can go in a local that will get > interpolated. The same goes for any !r or other formatting modifiers. 
In an > i18n context, you want to stick to the simplest possible substitution > placeholders. This is why I think PEP-498 isn't the solution for i18n. I'd really like to be able to say, in a debugging context: print('a:{self.a} b:{self.b} c:{self.c} d:{self.d}') without having to create locals to hold these 4 values. Eric. From yselivanov.ml at gmail.com Mon Aug 10 20:44:08 2015 From: yselivanov.ml at gmail.com (Yury Selivanov) Date: Mon, 10 Aug 2015 14:44:08 -0400 Subject: [Python-Dev] PEP-498: Literal String Formatting In-Reply-To: <55C8EF58.6040100@trueblade.com> References: <55C55DC3.8040605@trueblade.com> <55C79A73.1030901@trueblade.com> <20150810172631.GN3737@ando.pearwood.info> <20150810143127.66c5f842@anarchist.wooz.org> <55C8EF58.6040100@trueblade.com> Message-ID: <55C8F0F8.8040806@gmail.com> On 2015-08-10 2:37 PM, Eric V. Smith wrote: >> Besides, any expression you have to calculate can go in a local that will get >> >interpolated. The same goes for any !r or other formatting modifiers. In an >> >i18n context, you want to stick to the simplest possible substitution >> >placeholders. > This is why I think PEP-498 isn't the solution for i18n. I'd really like > to be able to say, in a debugging context: > > print('a:{self.a} b:{self.b} c:{self.c} d:{self.d}') > > without having to create locals to hold these 4 values. Why can't we restrict expressions in f-strings to attribute/item getters? I.e. allow f'{foo.bar.baz}' and f'{self.foo["bar"]}' but disallow f'{foo.bar(baz=something)}' Yury From eric at trueblade.com Mon Aug 10 20:49:05 2015 From: eric at trueblade.com (Eric V. Smith) Date: Mon, 10 Aug 2015 14:49:05 -0400 Subject: [Python-Dev] PEP-498: Literal String Formatting In-Reply-To: <55C8F0F8.8040806@gmail.com> References: <55C55DC3.8040605@trueblade.com> <55C79A73.1030901@trueblade.com> <20150810172631.GN3737@ando.pearwood.info> <20150810143127.66c5f842@anarchist.wooz.org> <55C8EF58.6040100@trueblade.com> <55C8F0F8.8040806@gmail.com> Message-ID: <55C8F221.5070504@trueblade.com> On 08/10/2015 02:44 PM, Yury Selivanov wrote: > > > On 2015-08-10 2:37 PM, Eric V. Smith wrote: >>> Besides, any expression you have to calculate can go in a local that >>> will get >>> >interpolated. The same goes for any !r or other formatting >>> modifiers. In an >>> >i18n context, you want to stick to the simplest possible substitution >>> >placeholders. >> This is why I think PEP-498 isn't the solution for i18n. I'd really like >> to be able to say, in a debugging context: >> >> print('a:{self.a} b:{self.b} c:{self.c} d:{self.d}') >> >> without having to create locals to hold these 4 values. > > Why can't we restrict expressions in f-strings to > attribute/item getters? > > I.e. allow f'{foo.bar.baz}' and f'{self.foo["bar"]}' but > disallow f'{foo.bar(baz=something)}' It's possible. But my point is that Barry doesn't even want attribute/item getters for an i18n solution, and I'm not willing to restrict it that much. Eric. From mertz at gnosis.cx Mon Aug 10 20:52:44 2015 From: mertz at gnosis.cx (David Mertz) Date: Mon, 10 Aug 2015 11:52:44 -0700 Subject: [Python-Dev] PEP-498: Literal String Formatting In-Reply-To: <20150810170713.GM3737@ando.pearwood.info> References: <55C55DC3.8040605@trueblade.com> <55C79A73.1030901@trueblade.com> <20150810170713.GM3737@ando.pearwood.info> Message-ID: I know. I elided including the nonexistent `nonlocals()` in there. But it *should* be `lngb()`. Or call it scope(). 
:-) On Aug 10, 2015 10:09 AM, "Steven D'Aprano" wrote: > On Sun, Aug 09, 2015 at 06:14:18PM -0700, David Mertz wrote: > > [...] > > That said, there *is* one small corner where I believe f-strings add > > something helpful to the language. There is no really concise way to > spell: > > > > collections.ChainMap(locals(), globals(), __builtins__.__dict__). > > I think that to match the normal name resolution rules, nonlocals() > needs to slip in there between locals() and globals(). I realise that > there actually isn't a nonlocals() function (perhaps there should be?). > > > If we could spell that as, say `lgb()`, that would let str.format() or > > %-formatting pick up the full "what's in scope". To my mind, that's the > > only good thing about the f-string idea. > > I like the concept, but not the name. Initialisms tend to be hard > to remember and rarely self-explanatory. How about scope()? > > > -- > Steve > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/mertz%40gnosis.cx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fijall at gmail.com Mon Aug 10 20:57:22 2015 From: fijall at gmail.com (Maciej Fijalkowski) Date: Mon, 10 Aug 2015 20:57:22 +0200 Subject: [Python-Dev] Branch Prediction And The Performance Of Interpreters - Don't Trust Folklore In-Reply-To: <55C8B8C6.2090803@hastings.org> References: <55C8B8C6.2090803@hastings.org> Message-ID: On Mon, Aug 10, 2015 at 4:44 PM, Larry Hastings wrote: > > > This just went by this morning on reddit's /r/programming. It's a paper > that analyzed Python--among a handful of other languages--to answer the > question "are branch predictors still that bad at the big switch statement > approach to interpreters?" Their conclusion: no. > > Our simulations [...] show that, as long as the payload in the bytecode > remains limited and do not feature significant amount of extra indirect > branches, then the misprediction rate on the interpreter can be even become > insignificant (less than 0.5 MPKI). > > (MPKI = missed predictions per thousand instructions) > > Their best results were on simulated hardware with state-of-the-art > prediction algorithms ("TAGE" and "ITTAGE"), but they also demonstrate that > branch predictors in real hardware are getting better quickly. When running > the Unladen Swallow test suite on Python 3.3.2, compiled with > USE_COMPUTED_GOTOS turned off, Intel's Nehalem experienced an average of > 12.8 MPKI--but Sandy Bridge drops that to 3.5 MPKI, and Haswell reduces it > further to a mere *1.4* MPKI. (AFAICT they didn't compare against Python > 3.3.2 using computed gotos, either in terms of MPKI or in overall > performance.) > > The paper is here: > > https://hal.inria.fr/hal-01100647/document > > > I suppose I wouldn't propose removing the labels-as-values opcode dispatch > code yet. But perhaps that day is in sight! > > > /arry > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/fijall%40gmail.com > Hi Larry Please also note that as far as I can tell this mostly applies to x86. The ARM branch prediction is significantly dumber these days and as long as python performance is considered on such platforms such tricks do make the situation better. 
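A toy, purely structural sketch of the two dispatch shapes under discussion
(CPython's real loop is C code in ceval.c and uses computed gotos where the
compiler supports them; nothing below is taken from it):

    PUSH, ADD, PRINT = range(3)
    program = [(PUSH, 2), (PUSH, 3), (ADD, None), (PRINT, None)]

    def run_switch_style(program):
        # One central if/elif chain: the analogue of the big switch
        # statement, i.e. a single dispatch point to predict.
        stack = []
        for op, arg in program:
            if op == PUSH:
                stack.append(arg)
            elif op == ADD:
                stack.append(stack.pop() + stack.pop())
            elif op == PRINT:
                print(stack.pop())

    def run_table_style(program):
        # Per-opcode handlers: loosely analogous to labels-as-values
        # (computed goto) dispatch, with a separate jump per opcode.
        stack = []
        handlers = {
            PUSH: stack.append,
            ADD: lambda _: stack.append(stack.pop() + stack.pop()),
            PRINT: lambda _: print(stack.pop()),
        }
        for op, arg in program:
            handlers[op](arg)

    run_switch_style(program)   # prints 5
    run_table_style(program)    # prints 5
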
We found it out doing CPython/PyPy comparison, where the difference PyPy vs cPython was bigger on ARM and smaller on x86, despite our ARM assembler that we produce being less well optimized. Cheers, fijal From guido at python.org Mon Aug 10 21:23:15 2015 From: guido at python.org (Guido van Rossum) Date: Mon, 10 Aug 2015 21:23:15 +0200 Subject: [Python-Dev] PEP-498: Literal String Formatting In-Reply-To: <55C8F221.5070504@trueblade.com> References: <55C55DC3.8040605@trueblade.com> <55C79A73.1030901@trueblade.com> <20150810172631.GN3737@ando.pearwood.info> <20150810143127.66c5f842@anarchist.wooz.org> <55C8EF58.6040100@trueblade.com> <55C8F0F8.8040806@gmail.com> <55C8F221.5070504@trueblade.com> Message-ID: On Mon, Aug 10, 2015 at 8:49 PM, Eric V. Smith wrote: > On 08/10/2015 02:44 PM, Yury Selivanov wrote: > > On 2015-08-10 2:37 PM, Eric V. Smith wrote: > >> This is why I think PEP-498 isn't the solution for i18n. I'd really like > >> to be able to say, in a debugging context: > >> > >> print('a:{self.a} b:{self.b} c:{self.c} d:{self.d}') > >> > >> without having to create locals to hold these 4 values. > > > > Why can't we restrict expressions in f-strings to > > attribute/item getters? > > > > I.e. allow f'{foo.bar.baz}' and f'{self.foo["bar"]}' but > > disallow f'{foo.bar(baz=something)}' > > It's possible. But my point is that Barry doesn't even want > attribute/item getters for an i18n solution, and I'm not willing to > restrict it that much. I also don't want to tie this closely to i18n. That is (still) very much a wold of its own. What I want with f-strings (by any name) is a way to generalize from print() calls with multiple arguments. We can write print('Todo:', len(self.todo), '; busy:', len(self.busy)) but the same thing is more awkward when you have to pass it as a single string to a function that just sends one string somewhere. And note that the above example inserts a space before the ';' which I don't really like. So it would be nice if instead we could write print(f'Todo: {len(self.todo)}; busy: {len(self.busy)}') which IMO is just as readable as the multi-arg print() call[1], and generalizes to other functions besides print(). In fact, the latter form has less punctuation noise than the former -- every time you insert an expression in a print() call, you have a quote+comma before it and a comma+quote after it, compared to a brace before and one after in the new form. (Note that this is an argument for using f'{...}' rather than '\{...}' -- for a single interpolation it's the same amount of typing, but for multiple interpolations, f'{...}{...}' is actually shorter than '\{...}\{...}', and also the \{ part is ugly.) Anyway, this generalization from print() is why I want arbitrary expressions. Wouldn't it be silly if we introduced print() today and said "we don't really like to encourage printing complicated expressions, but maybe we can introduce them in a future version"... :-) Continuing the print()-generalization theme, if things become too long to fit on a line we can write print('Todo:', len(self.todo), '; busy:', len(self.busy)) Can we allow the same in f-strings? E.g. print(f'Todo: {len(self.todo) }; busy: {len(self.busy) }') or is that too ugly? It could also be solved using implicit concatenation, e.g. print(f'Todo: {len(self.todo)}; ' f'busy: {len(self.busy)}') [1] Assuming syntax colorizers catch on. -- --Guido van Rossum (python.org/~guido) -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From carl at oddbird.net Mon Aug 10 21:04:40 2015 From: carl at oddbird.net (Carl Meyer) Date: Mon, 10 Aug 2015 15:04:40 -0400 Subject: [Python-Dev] PEP-498: Literal String Formatting In-Reply-To: <55C8F221.5070504@trueblade.com> References: <55C55DC3.8040605@trueblade.com> <55C79A73.1030901@trueblade.com> <20150810172631.GN3737@ando.pearwood.info> <20150810143127.66c5f842@anarchist.wooz.org> <55C8EF58.6040100@trueblade.com> <55C8F0F8.8040806@gmail.com> <55C8F221.5070504@trueblade.com> Message-ID: <55C8F5C8.7030005@oddbird.net> On 08/10/2015 02:49 PM, Eric V. Smith wrote: > On 08/10/2015 02:44 PM, Yury Selivanov wrote: >> >> >> On 2015-08-10 2:37 PM, Eric V. Smith wrote: >>>> Besides, any expression you have to calculate can go in a local that >>>> will get >>>>> interpolated. The same goes for any !r or other formatting >>>> modifiers. In an >>>>> i18n context, you want to stick to the simplest possible substitution >>>>> placeholders. >>> This is why I think PEP-498 isn't the solution for i18n. I'd really like >>> to be able to say, in a debugging context: >>> >>> print('a:{self.a} b:{self.b} c:{self.c} d:{self.d}') >>> >>> without having to create locals to hold these 4 values. >> >> Why can't we restrict expressions in f-strings to >> attribute/item getters? >> >> I.e. allow f'{foo.bar.baz}' and f'{self.foo["bar"]}' but >> disallow f'{foo.bar(baz=something)}' > > It's possible. But my point is that Barry doesn't even want > attribute/item getters for an i18n solution, and I'm not willing to > restrict it that much. I don't think attribute access and item access are on the same level here. In terms of readability of the resulting string literal, it would be reasonable to allow attribute access but disallow item access. And I think attribute access is reasonable to allow in the context of an i18n solution as well (but item access is not). Item access is much harder to read and easier for translators to mess up because of all the extra punctuation (and the not-obvious-to-a-non-programmer distinction between a literal or variable key). There's also the solution used by the Django and Jinja templating languages, where dot-notation can mean either attribute access (preferentially) or item access with literal key (as fallback). That manages to achieve both a high level of readability of the literal/template, and a high level of flexibility for the context provider (who may find it easier to provide a dictionary than an object), but may fail the "too different from Python" test. Carl -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: OpenPGP digital signature URL: From barry at python.org Mon Aug 10 21:38:21 2015 From: barry at python.org (Barry Warsaw) Date: Mon, 10 Aug 2015 15:38:21 -0400 Subject: [Python-Dev] PEP-498: Literal String Formatting In-Reply-To: <55C8F221.5070504@trueblade.com> References: <55C55DC3.8040605@trueblade.com> <55C79A73.1030901@trueblade.com> <20150810172631.GN3737@ando.pearwood.info> <20150810143127.66c5f842@anarchist.wooz.org> <55C8EF58.6040100@trueblade.com> <55C8F0F8.8040806@gmail.com> <55C8F221.5070504@trueblade.com> Message-ID: <20150810153821.5696f7d7@anarchist.wooz.org> On Aug 10, 2015, at 02:49 PM, Eric V. Smith wrote: >It's possible. But my point is that Barry doesn't even want >attribute/item getters for an i18n solution, and I'm not willing to >restrict it that much. Actually, attribute chasing is generally fine, and flufl.i18n supports that. 
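A rough sketch of what $-placeholder substitution with attribute chasing can
look like using only the stdlib (purely illustrative -- this is not how
flufl.i18n is implemented, and the class names below are made up):

    import string

    class DottedTemplate(string.Template):
        # Accept dotted placeholders such as $user.name; the default
        # idpattern only matches plain identifiers.
        idpattern = r'[_a-z][_a-z0-9]*(?:\.[_a-z][_a-z0-9]*)*'

    class AttributeChaser:
        # Mapping that resolves 'a.b.c' by attribute access, starting
        # from an explicit whitelist of top-level names.
        def __init__(self, **names):
            self._names = names

        def __getitem__(self, key):
            first, *rest = key.split('.')
            obj = self._names[first]
            for attr in rest:
                obj = getattr(obj, attr)
            return obj

    class User:
        name = 'Barry'

    template = DottedTemplate('Hello, $user.name!')
    print(template.substitute(AttributeChaser(user=User())))
    # -> Hello, Barry!
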
Translators can handle $foo.bar although you still do have to be careful about information leaks ("choose your foo's carefully"). Item getters have been more YAGNI than anything else. Cheers, -Barry From python at mrabarnett.plus.com Mon Aug 10 21:57:02 2015 From: python at mrabarnett.plus.com (MRAB) Date: Mon, 10 Aug 2015 20:57:02 +0100 Subject: [Python-Dev] PEP-498: Literal String Formatting In-Reply-To: References: <55C55DC3.8040605@trueblade.com> <55C79A73.1030901@trueblade.com> <20150810172631.GN3737@ando.pearwood.info> <20150810143127.66c5f842@anarchist.wooz.org> <55C8EF58.6040100@trueblade.com> <55C8F0F8.8040806@gmail.com> <55C8F221.5070504@trueblade.com> Message-ID: <55C9020E.7020403@mrabarnett.plus.com> On 2015-08-10 20:23, Guido van Rossum wrote: > On Mon, Aug 10, 2015 at 8:49 PM, Eric V. Smith > wrote: > > On 08/10/2015 02:44 PM, Yury Selivanov wrote: > > On 2015-08-10 2:37 PM, Eric V. Smith wrote: > >> This is why I think PEP-498 isn't the solution for i18n. I'd really like > >> to be able to say, in a debugging context: > >> > >> print('a:{self.a} b:{self.b} c:{self.c} d:{self.d}') > >> > >> without having to create locals to hold these 4 values. > > > > Why can't we restrict expressions in f-strings to > > attribute/item getters? > > > > I.e. allow f'{foo.bar.baz}' and f'{self.foo["bar"]}' but > > disallow f'{foo.bar(baz=something)}' > > It's possible. But my point is that Barry doesn't even want > attribute/item getters for an i18n solution, and I'm not willing to > restrict it that much. > > > I also don't want to tie this closely to i18n. That is (still) very much > a wold of its own. > > What I want with f-strings (by any name) is a way to generalize from > print() calls with multiple arguments. We can write > > print('Todo:', len(self.todo), '; busy:', len(self.busy)) > > but the same thing is more awkward when you have to pass it as a single > string to a function that just sends one string somewhere. And note that > the above example inserts a space before the ';' which I don't really > like. So it would be nice if instead we could write > > print(f'Todo: {len(self.todo)}; busy: {len(self.busy)}') > > which IMO is just as readable as the multi-arg print() call[1], and > generalizes to other functions besides print(). > > In fact, the latter form has less punctuation noise than the former -- > every time you insert an expression in a print() call, you have a > quote+comma before it and a comma+quote after it, compared to a brace > before and one after in the new form. (Note that this is an argument for > using f'{...}' rather than '\{...}' -- for a single interpolation it's > the same amount of typing, but for multiple interpolations, > f'{...}{...}' is actually shorter than '\{...}\{...}', and also the \{ > part is ugly.) > > Anyway, this generalization from print() is why I want arbitrary > expressions. Wouldn't it be silly if we introduced print() today and > said "we don't really like to encourage printing complicated > expressions, but maybe we can introduce them in a future version"... :-) > > Continuing the print()-generalization theme, if things become too long > to fit on a line we can write > > print('Todo:', len(self.todo), > '; busy:', len(self.busy)) > > Can we allow the same in f-strings? E.g. > > print(f'Todo: {len(self.todo) > }; busy: {len(self.busy) > }') > > or is that too ugly? It could also be solved using implicit > concatenation, e.g. > > print(f'Todo: {len(self.todo)}; ' > f'busy: {len(self.busy)}') > > [1] Assuming syntax colorizers catch on. 
> I'd expect f'...' to follow similar rules to '...'. You could escape it: print(f'Todo: {len(self.todo)\ }; busy: {len(self.busy)\ }') which would be equivalent to: print(f'Todo: {len(self.todo) }; busy: {len(self.busy) }') or use triple-quoted a f-string: print(f'''Todo: {len(self.todo) }; busy: {len(self.busy) }''') which would be equivalent to: print(f'Todo: {len(self.todo)\n }; busy: {len(self.busy)\n }') (I think it might be OK to have a newline in the expression because it's wrapped in {...}.) From python-ideas at mgmiller.net Mon Aug 10 22:12:21 2015 From: python-ideas at mgmiller.net (Mike Miller) Date: Mon, 10 Aug 2015 13:12:21 -0700 Subject: [Python-Dev] PEP-498: Literal String Formatting In-Reply-To: <55C55DC3.8040605@trueblade.com> References: <55C55DC3.8040605@trueblade.com> Message-ID: <55C905A5.9050005@mgmiller.net> Here are my notes on PEP 498. 1. Title: Literal String Formatting - String Literal Formatting - Format String Expressions ? 2. Let's call them "format strings" not "f-strings". The latter sounds slightly obnoxious, and also inconsistent with the others: r'' raw string u'' unicode object (string) f'' format string 3. " This PEP does not propose to remove or deprecate any of the existing string formatting mechanisms. " Should we put this farther up with the section talking about them, it seems out of place where it is. 4. "The existing ways of formatting are either error prone, inflexible, or cumbersome." I would tone this down a bit, they're not so bad, quite verbose is a phrase I might use instead. 5. Discussion Section How to designate f-strings, and how specify the locaton of expressions ^ typo 6. Perhaps mention string literal functionality, like triple quotes, line-ending backslashes, as MRAB mentions, in addition to the concatenation rules. -Mike On 08/07/2015 06:39 PM, Eric V. Smith wrote: From njs at pobox.com Mon Aug 10 23:51:41 2015 From: njs at pobox.com (Nathaniel Smith) Date: Mon, 10 Aug 2015 14:51:41 -0700 Subject: [Python-Dev] PEP-498: Literal String Formatting In-Reply-To: <20150810143127.66c5f842@anarchist.wooz.org> References: <55C55DC3.8040605@trueblade.com> <55C79A73.1030901@trueblade.com> <20150810172631.GN3737@ando.pearwood.info> <20150810143127.66c5f842@anarchist.wooz.org> Message-ID: On Aug 10, 2015 11:33 AM, "Barry Warsaw" wrote: > > On Aug 11, 2015, at 03:26 AM, Steven D'Aprano wrote: > > >I think I would be happy with f-strings, or perhaps i-strings if we use > >Nick's ideas about internationalisation, and limit what they can evaluate to > >name lookups, attribute lookups, and indexing, just like format(). > > I still think you really only need name lookups, especially for an i18n > context. Anything else is just overkill, YAGNI, potentially error prone, or > perhaps even harmful. > > Remember that the translated strings usually come from only moderately (if at > all) trusted and verified sources, so it's entirely possible that a malicious > translator could sneak in an exploit, especially if you're evaluating > arbitrary expressions. If you're only doing name substitutions, then the > worst that can happen is an information leak, which is bad, but won't > compromise the integrity of say a server using the translation. > > Even if the source strings avoid the use of expressions, if the feature is > available, a translator could still sneak something in. That pretty much > makes it a non-starter for i18n, IMHO. > > Besides, any expression you have to calculate can go in a local that will get > interpolated. 
The same goes for any !r or other formatting modifiers. In an > i18n context, you want to stick to the simplest possible substitution > placeholders. IIUC what Nick contemplates in PEP 501 is that when you write something like i"I am ${self.age}" then the python runtime would itself evaluate self.age and pass it on to the i18n machinery to do the actual substitution; the i18n machinery wouldn't even contain any calls to eval. The above string could be translated as i"Tengo ${self.age} a?os" but i"Tengo ${self.password} a?os" would be an error, because the runtime did not provide a value for self.password. So while arbitrarily complex expressions are allowed (at least as far as the language is concerned -- a given project or i18n toolkit could impose additional policy restrictions), by the time the interpolation machinery runs they'll effectively have been reduced to local variables with funny multi-token names. This pretty much eliminates all the information leak and exploit concerns, AFAICT. From your comments about having to be careful about attribute chasing, it sounds like it might even be more robust than current flufl.i18n in this regard...? -n -------------- next part -------------- An HTML attachment was scrubbed... URL: From victor.stinner at gmail.com Tue Aug 11 00:22:35 2015 From: victor.stinner at gmail.com (Victor Stinner) Date: Tue, 11 Aug 2015 00:22:35 +0200 Subject: [Python-Dev] PEP 498 f-string: is it a preprocessor? In-Reply-To: <55C8B6EE.10504@trueblade.com> References: <55C8B6EE.10504@trueblade.com> Message-ID: Le lundi 10 ao?t 2015, Eric V. Smith a ?crit : > On 08/10/2015 10:18 AM, Victor Stinner wrote: > > Hi, > > > > I read the PEP but I don't understand how it is implemented. For me, it > > should be a simple preprocessor: > > > > - f'x={x}' is replaced with 'x={0}'.format(x) by the compiler > > - f'x={1+1}' is replaced with 'x={0}'.format(1+1) > > - f'x={foo()!r}' is replaced with 'x={0!r}'.format(foo()) > > - ... > > > > That's all. No new language, no new function or method. > > There is no new function or method being proposed. The "pre-processor" > is being implemented as the ast is being built. As the PEP says, though, > the expressions supported aren't exactly the same, so a simple > conversion to str.format syntax isn't possible. > Can you please provide example(s) of f-string(s) which cannot be replaced by a call to .format() like I did? Victor -------------- next part -------------- An HTML attachment was scrubbed... URL: From victor.stinner at gmail.com Tue Aug 11 00:54:00 2015 From: victor.stinner at gmail.com (Victor Stinner) Date: Tue, 11 Aug 2015 00:54:00 +0200 Subject: [Python-Dev] PEP 498 f-string: please remove the special case for spaces Message-ID: PEP 498: """ Leading whitespace in expressions is skipped Because expressions may begin with a left brace ('{'), there is a problem when parsing such expressions. For example: >>> f'{{k:v for k, v in [(1, 2), (3, 4)]}}' '{k:v for k, v in [(1, 2), (3, 4)]}' """ For me, this example is crazy. You should not add a special case (ignore spaces) just to support a corner case. This example can easily be rewritten using a temporary variable and it makes the code simpler. items={k:v for k, v in [(1, 2), (3, 4)]; f'{items}' Seriously, a dict-comprehension inside a f-string should be considered as an abuse of the feature. Don't you think so? Victor -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From eric at trueblade.com Tue Aug 11 00:59:29 2015 From: eric at trueblade.com (Eric V. Smith) Date: Mon, 10 Aug 2015 18:59:29 -0400 Subject: [Python-Dev] PEP 498 f-string: is it a preprocessor? In-Reply-To: References: <55C8B6EE.10504@trueblade.com> Message-ID: <55C92CD1.5080403@trueblade.com> On 8/10/2015 6:22 PM, Victor Stinner wrote: > Le lundi 10 ao?t 2015, Eric V. Smith > a ?crit : > > On 08/10/2015 10:18 AM, Victor Stinner wrote: > > Hi, > > > > I read the PEP but I don't understand how it is implemented. For > me, it > > should be a simple preprocessor: > > > > - f'x={x}' is replaced with 'x={0}'.format(x) by the compiler > > - f'x={1+1}' is replaced with 'x={0}'.format(1+1) > > - f'x={foo()!r}' is replaced with 'x={0!r}'.format(foo()) > > - ... > > > > That's all. No new language, no new function or method. > > There is no new function or method being proposed. The "pre-processor" > is being implemented as the ast is being built. As the PEP says, though, > the expressions supported aren't exactly the same, so a simple > conversion to str.format syntax isn't possible. > > > Can you please provide example(s) of f-string(s) which cannot be > replaced by a call to .format() like I did? Oops, I was thinking of going the other way (str.format -> f''). Yes, I think you're correct. But in any event, I don't see the distinction between calling str.format(), and calling each object's __format__ method. Both are compliant with the PEP, which doesn't specify exactly how the transformation is done. Eric. From victor.stinner at gmail.com Tue Aug 11 01:00:59 2015 From: victor.stinner at gmail.com (Victor Stinner) Date: Tue, 11 Aug 2015 01:00:59 +0200 Subject: [Python-Dev] PEP 498 f-string: please remove the special case for spaces In-Reply-To: References: Message-ID: By the way, I don't think that fu'...' syntax should be allowed. IMHO u'...' was only reintroduced to Python 3.3 to ease transition from Python 2 to Python 3 of the existing u'...' Syntax. Since f'...' is a new syntax, backward compatibility doesn't matter here. Victor -------------- next part -------------- An HTML attachment was scrubbed... URL: From eric at trueblade.com Tue Aug 11 01:04:17 2015 From: eric at trueblade.com (Eric V. Smith) Date: Mon, 10 Aug 2015 19:04:17 -0400 Subject: [Python-Dev] PEP 498 f-string: please remove the special case for spaces In-Reply-To: References: Message-ID: <55C92DF1.1080905@trueblade.com> On 8/10/2015 6:54 PM, Victor Stinner wrote: > > PEP 498: > > """ > > > Leading whitespace in expressions is skipped > > > Because expressions may begin with a left brace ('{'), there is a > problem when parsing such expressions. For example: > >>>> f'{{k:v for k, v in [(1, 2), (3, 4)]}}' '{k:v for k, v in [(1, 2), (3, 4)]}' > > """ > > For me, this example is crazy. You should not add a special case (ignore > spaces) just to support a corner case. > > This example can easily be rewritten using a temporary variable and it > makes the code simpler. > > items={k:v for k, v in [(1, 2), (3, 4)]; f'{items}' > > Seriously, a dict-comprehension inside a f-string should be considered > as an abuse of the feature. Don't you think so? Yes, it's absolutely an abuse and should never be written. But if the only "cost" to allowing it is skipping leading spaces, I don't see the harm. It sounds like you want to disallow leading spaces just to disallow this one type of expression. My other use case for spaces, which I've not added to the PEP yet, is for alignment, like: f' x={ x:15}' f'xx={xx:15}' Eric. 
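For comparison, the equivalent str.format() calls already line up in the
source without needing a space inside the braces, which is the kind of
alignment being talked about here:

    x = 1.5
    xx = 23.25

    print(' x={0:15}'.format(x))
    print('xx={0:15}'.format(xx))
    # Both values are padded to a 15-character field, and the two source
    # lines stay visually aligned.
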
From python at mrabarnett.plus.com Tue Aug 11 01:17:04 2015 From: python at mrabarnett.plus.com (MRAB) Date: Tue, 11 Aug 2015 00:17:04 +0100 Subject: [Python-Dev] PEP 498 f-string: please remove the special case for spaces In-Reply-To: References: Message-ID: <55C930F0.90406@mrabarnett.plus.com> On 2015-08-10 23:54, Victor Stinner wrote: > > PEP 498: > > """ > > > Leading whitespace in expressions is skipped > > > Because expressions may begin with a left brace ('{'), there is a > problem when parsing such expressions. For example: > >>>> f'{{k:v for k, v in [(1, 2), (3, 4)]}}' '{k:v for k, v in [(1, 2), (3, 4)]}' > > """ > > For me, this example is crazy. You should not add a special case (ignore > spaces) just to support a corner case. > Is it a special case? Don't we already ignore leading spaces in bracketed expressions? > This example can easily be rewritten using a temporary variable and it > makes the code simpler. > > items={k:v for k, v in [(1, 2), (3, 4)]; f'{items}' > > Seriously, a dict-comprehension inside a f-string should be considered > as an abuse of the feature. Don't you think so? > True. Or we can wrap it in parentheses. :-) From victor.stinner at gmail.com Tue Aug 11 01:23:23 2015 From: victor.stinner at gmail.com (Victor Stinner) Date: Tue, 11 Aug 2015 01:23:23 +0200 Subject: [Python-Dev] PEP 498 f-string: is it a preprocessor? In-Reply-To: <55C92CD1.5080403@trueblade.com> References: <55C8B6EE.10504@trueblade.com> <55C92CD1.5080403@trueblade.com> Message-ID: Le mardi 11 ao?t 2015, Eric V. Smith a ?crit : > > Oops, I was thinking of going the other way (str.format -> f''). Yes, I > think you're correct. Ah ok. But in any event, I don't see the distinction between calling > str.format(), and calling each object's __format__ method. Both are > compliant with the PEP, which doesn't specify exactly how the > transformation is done. > When I read the PEP for the first time, I understood that you reimplemented str.format() using the __format__() methods. So i understood that it's a new formatting language and it would be tricky to reimplement it, for example in a library providing i18n with f-string syntax (I'm not sure that it's feasible, it's just an example). I also expected many subtle differences between .format() and f-string. In fact, f-string is quite standard and not new, it's just a compact syntax to call .format() (well, with some minor and acceptable subtle differences). For me, it's a good thing to rely on the existing .format() method because it's well known (no need to learn a new formatting language). Maybe you should rephrase some parts of your PEP and rewrite some examples to say that's it's "just" a compact syntax to call .format(). -- For me, calling __format__() multiple times or format() once matters, for performances, because I contributed to the implementation of _PyUnicodeWriter. I spent a lot of time to keep good performances when the implementation of Unicode was rewritten for the PEP 393. With this PEP, writing an efficient implementation is much harder. The dummy benchmark is to compare Python 2.7 str.format() (bytes!) to Python 3 str.format() (Unicode!). Users want similar performances! If I recall correctly, Python 3 is not bad (faster is some corner cases). Concatenate temporary strings is less efficient Than _PyUnicodeWriter (single buffer) when you have UCS-1, UCS-2 and UCS-4 strings (1/2/4 bytes per character). 
It's more efficient to write directly into the final format (UCS-1/2/4), even if you may need to convert the buffer from UCS-1 to UCS-2 (and maybe even one more time to UCS-4). Victor -------------- next part -------------- An HTML attachment was scrubbed... URL: From victor.stinner at gmail.com Tue Aug 11 01:26:35 2015 From: victor.stinner at gmail.com (Victor Stinner) Date: Tue, 11 Aug 2015 01:26:35 +0200 Subject: [Python-Dev] PEP 498 f-string: please remove the special case for spaces In-Reply-To: <55C92DF1.1080905@trueblade.com> References: <55C92DF1.1080905@trueblade.com> Message-ID: Le mardi 11 ao?t 2015, Eric V. Smith a ?crit : > It sounds like you want to disallow leading spaces just to > disallow this one type of expression. > I would like to reduce the number of subtle differences between f-string and str.format(). Victor -------------- next part -------------- An HTML attachment was scrubbed... URL: From eric at trueblade.com Tue Aug 11 01:30:06 2015 From: eric at trueblade.com (Eric V. Smith) Date: Mon, 10 Aug 2015 19:30:06 -0400 Subject: [Python-Dev] PEP 498 f-string: is it a preprocessor? In-Reply-To: References: <55C8B6EE.10504@trueblade.com> <55C92CD1.5080403@trueblade.com> Message-ID: <55C933FE.8070800@trueblade.com> On 8/10/2015 7:23 PM, Victor Stinner wrote: > But in any event, I don't see the distinction between calling > str.format(), and calling each object's __format__ method. Both are > compliant with the PEP, which doesn't specify exactly how the > transformation is done. > > > When I read the PEP for the first time, I understood that you > reimplemented str.format() using the __format__() methods. So i > understood that it's a new formatting language and it would be tricky to > reimplement it, for example in a library providing i18n with f-string > syntax (I'm not sure that it's feasible, it's just an example). I also > expected many subtle differences between .format() and f-string. > > In fact, f-string is quite standard and not new, it's just a compact > syntax to call .format() (well, with some minor and acceptable subtle > differences). For me, it's a good thing to rely on the existing > .format() method because it's well known (no need to learn a new > formatting language). > > Maybe you should rephrase some parts of your PEP and rewrite some > examples to say that's it's "just" a compact syntax to call .format(). Okay. I'll look at it. > For me, calling __format__() multiple times or format() once matters, > for performances, because I contributed to the implementation of > _PyUnicodeWriter. I spent a lot of time to keep good performances > when the implementation of Unicode was rewritten for the PEP 393. With > this PEP, writing an efficient implementation is much harder. The dummy > benchmark is to compare Python 2.7 str.format() (bytes!) to Python 3 > str.format() (Unicode!). Users want similar performances! If I recall > correctly, Python 3 is not bad (faster is some corner cases). '{} {}'.format(datetime.datetime.now(), decimal.Decimal('100')) calls __format__() twice. It's only special cased to not call __format__ for str, int, float, and complex. I'll grant you that most of the cases it will ever be used for are thus special cased. > Concatenate temporary strings is less efficient Than _PyUnicodeWriter > (single buffer) when you have UCS-1, UCS-2 and UCS-4 strings (1/2/4 > bytes per character). 
It's more efficient to write directly into the > final format (UCS-1/2/4), even if you may need to convert the buffer > from UCS-1 to UCS-2 (and maybe even one more time to UCS-4). As I said, after it's benchmarked, I'll look at it. It's not a user-visible change. And thanks for your work on _PyUnicodeWriter. Eric. From eric at trueblade.com Tue Aug 11 01:31:26 2015 From: eric at trueblade.com (Eric V. Smith) Date: Mon, 10 Aug 2015 19:31:26 -0400 Subject: [Python-Dev] PEP 498 f-string: please remove the special case for spaces In-Reply-To: References: <55C92DF1.1080905@trueblade.com> Message-ID: <55C9344E.7040407@trueblade.com> On 8/10/2015 7:26 PM, Victor Stinner wrote: > Le mardi 11 ao?t 2015, Eric V. Smith > a ?crit : > > It sounds like you want to disallow leading spaces just to > disallow this one type of expression. > > > I would like to reduce the number of subtle differences between > f-string and str.format(). The expressions supported are so vastly different that I don't think the whitespace issue matters. Eric. From cs at zip.com.au Tue Aug 11 01:58:51 2015 From: cs at zip.com.au (Cameron Simpson) Date: Tue, 11 Aug 2015 09:58:51 +1000 Subject: [Python-Dev] PEP 498 f-string: please remove the special case for spaces In-Reply-To: References: Message-ID: <20150810235851.GA19570@cskk.homeip.net> On 11Aug2015 01:00, Victor Stinner wrote: >By the way, I don't think that fu'...' syntax should be allowed. IMHO >u'...' was only reintroduced to Python 3.3 to ease transition from Python 2 >to Python 3 of the existing u'...' Syntax. Since f'...' is a new syntax, >backward compatibility doesn't matter here. There's another reason to resist the fu'...' prefix: political correctness. To illustrate, there's a consumer rights TV snow here with a segment called "F.U. Tube", where members of the public describe ripoffs and other product failures in video form. While a phonetic play on the name "YouTube", the abbreviation also colloquially means just what you think it might. I can just imagine reciting one of these new strings out loud... Cheers, Cameron Simpson People shouldn't be allowed to build overpasses ... - Dianne "I know what's best for you" Feinstein after the '94 LA quake. From python at mrabarnett.plus.com Tue Aug 11 02:00:15 2015 From: python at mrabarnett.plus.com (MRAB) Date: Tue, 11 Aug 2015 01:00:15 +0100 Subject: [Python-Dev] PEP 498 f-string: please remove the special case for spaces In-Reply-To: References: <55C92DF1.1080905@trueblade.com> Message-ID: <55C93B0F.8080906@mrabarnett.plus.com> On 2015-08-11 00:26, Victor Stinner wrote: > Le mardi 11 ao?t 2015, Eric V. Smith > a ?crit : > > It sounds like you want to disallow leading spaces just to > disallow this one type of expression. > > > I would like to reduce the number of subtle differences between > f-string and str.format(). > I'm a little bit surprised at seeing this: >>> '{0}'.format('foo') 'foo' >>> '{ 0}'.format('foo') Traceback (most recent call last): File "", line 1, in KeyError: ' 0' >>> '{a}'.format(a='foo') 'foo' >>> '{ a}'.format(a='foo') Traceback (most recent call last): File "", line 1, in KeyError: ' a' In some other cases, leading and trailing spaces are ignored: >>> int(' 0 ') 0 Outside string literals, they're also ignored. But, then: >>> '{-1}'.format('foo') Traceback (most recent call last): File "", line 1, in KeyError: '-1' It's a string key, even though it looks like an int position. 
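The behaviour above falls out of how replacement fields are parsed: the field
name is taken verbatim, and -- roughly speaking -- anything that is not a
plain run of digits is looked up as a keyword key rather than a positional
index. string.Formatter makes this easy to inspect:

    import string

    fmt = string.Formatter()
    for literal, field, spec, conv in fmt.parse('{0} { 0} {-1} {a[b]}'):
        print(repr(field))
    # '0'
    # ' 0'
    # '-1'
    # 'a[b]'

    # ' 0' and '-1' are not pure digit strings, so str.format() treats
    # them as keyword keys -- hence the KeyErrors shown above.
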
From victor.stinner at gmail.com Tue Aug 11 02:04:29 2015 From: victor.stinner at gmail.com (Victor Stinner) Date: Tue, 11 Aug 2015 02:04:29 +0200 Subject: [Python-Dev] PEP 498 f-string: please remove the special case for spaces In-Reply-To: <55C93B0F.8080906@mrabarnett.plus.com> References: <55C92DF1.1080905@trueblade.com> <55C93B0F.8080906@mrabarnett.plus.com> Message-ID: Le mardi 11 ao?t 2015, MRAB a ?crit : > > I'm a little bit surprised at seeing this: (...) > We may modify str.format to ignore leading spaces, but IMHO it should not be motivated by the PEP. Victor -------------- next part -------------- An HTML attachment was scrubbed... URL: From eric at trueblade.com Tue Aug 11 02:04:38 2015 From: eric at trueblade.com (Eric V. Smith) Date: Mon, 10 Aug 2015 20:04:38 -0400 Subject: [Python-Dev] PEP 498 f-string: please remove the special case for spaces In-Reply-To: <55C93B0F.8080906@mrabarnett.plus.com> References: <55C92DF1.1080905@trueblade.com> <55C93B0F.8080906@mrabarnett.plus.com> Message-ID: <55C93C16.9040701@trueblade.com> On 8/10/2015 8:00 PM, MRAB wrote: > On 2015-08-11 00:26, Victor Stinner wrote: >> Le mardi 11 ao?t 2015, Eric V. Smith > > a ?crit : >> >> It sounds like you want to disallow leading spaces just to >> disallow this one type of expression. >> >> >> I would like to reduce the number of subtle differences between >> f-string and str.format(). >> > I'm a little bit surprised at seeing this: > >>>> '{0}'.format('foo') > 'foo' >>>> '{ 0}'.format('foo') > Traceback (most recent call last): > File "", line 1, in > KeyError: ' 0' >>>> '{a}'.format(a='foo') > 'foo' >>>> '{ a}'.format(a='foo') > Traceback (most recent call last): > File "", line 1, in > KeyError: ' a' > > In some other cases, leading and trailing spaces are ignored: > >>>> int(' 0 ') > 0 > > Outside string literals, they're also ignored. > > But, then: > >>>> '{-1}'.format('foo') > Traceback (most recent call last): > File "", line 1, in > KeyError: '-1' > > It's a string key, even though it looks like an int position. I think there are bug tracker issues for both of these. I think the argument against changing them is that people might be depending on this behavior. I'll grant you it seems unlikely, but you never know. Eric. From eric at trueblade.com Tue Aug 11 02:07:18 2015 From: eric at trueblade.com (Eric V. Smith) Date: Mon, 10 Aug 2015 20:07:18 -0400 Subject: [Python-Dev] PEP 498 f-string: please remove the special case for spaces In-Reply-To: References: <55C92DF1.1080905@trueblade.com> <55C93B0F.8080906@mrabarnett.plus.com> Message-ID: <55C93CB6.5060106@trueblade.com> On 8/10/2015 8:04 PM, Victor Stinner wrote: > > > Le mardi 11 ao?t 2015, MRAB > a ?crit : > > I'm a little bit surprised at seeing this: (...) > > > We may modify str.format to ignore leading spaces, but IMHO it should > not be motivated by the PEP. Agreed. Eric. From brett at python.org Tue Aug 11 02:19:52 2015 From: brett at python.org (Brett Cannon) Date: Tue, 11 Aug 2015 00:19:52 +0000 Subject: [Python-Dev] Instructions on the new "push request" workflow for 3.5.0rc1+ through 3.5.0 final In-Reply-To: <55C8608E.5020509@hastings.org> References: <55C8608E.5020509@hastings.org> Message-ID: A quick hg tip for making sure you check out the right branch: end the URL on #3.5 and it will start the repo out with the 3.5 as the active branch. On Mon, Aug 10, 2015, 01:28 Larry Hastings wrote: > > As of Python 3.5.0rc1, the canonical repository for Python 3.5.0 is > *no longer* on hg.python.org. 
Instead, it's hosted on Bitbucket on > my personal account, here: > > https://bitbucket.org/larry/cpython350 > > Since 3.5.0rc1 isn't out yet I'm keeping the repository private for now. > Once 3.5.0 rc1 is released (hopefully Monday) I'll flip the switch and make > the repository public. (I'll email python-dev and python-committers when > that happens.) > > > Putting it succinctly, here's a table of versions and where you'd check in > for your change to go there: > > 3.5.0 : https://bitbucket.org/larry/cpython350 (branch "3.5") > 3.5.1 : hg.python.org/cpython (branch "3.5") > 3.6.0 : hg.python.org/cpython (branch "default") > > You'll notice nobody but myself has checkin permissions for my 3.5.0 repo > on > Bitbucket. That's on purpose. The only way you can get changes in to > 3.5.0 > now is by sending me a Bitbucket "pull request". This is a workflow > experiment, to see if we as a community like this sort of new-fangled > gizmo. > > For now, we're only using Bitbucket for the actual final checking-in stage. > Requests for fixes to be accepted into 3.5.0 and code review will all still > happen on the Python issue tracker. > > Also, I'm officially now asking you folks to do the forward-merge into > 3.5.1 > and 3.6.0 yourselves. > > > Here's how to get a fix checked in for 3.5.0, starting with 3.5.0rc1+ and > continuing through until 3.5.0 final. > > Pre-requisites: > * You must have a Bitbucket account. > * You must have commit rights to the CPython repository. > > 1. Create an issue on the Python issue tracker for the problem. > > 2. Submit a patch that fixes the problem. > > 3. Add me to the issue and get me to agree that it needs fixing in 3.5.0. > (You can attempt this step before 2 if you prefer.) > > 4. Fork my repository into your Bitbucket account using their web GUI. > To do that, go to Bitbucket, log in, then go to my 3.5.0 repo: > > https://bitbucket.org/larry/cpython350 > > and press the "Fork" link in the left column. Bitbucket has a > tutorial > on how to do this, here: > > > https://confluence.atlassian.com/display/BITBUCKET/Fork+a+teammate%27s+repository > > Note: DO NOT start with a conventional CPython trunk cloned from > hg.python.org. The 3.5 branch in my repo and the 3.5 branch in > normal > CPython trunk have intentionally diverged and *need* to stay > out-of-sync. > > 5. Make a local clone of your fork on your computer. > > Bitbucket has a tutorial on how to do that, here: > > > https://confluence.atlassian.com/display/BITBUCKET/Copy+your+Mercurial+repository+and+add+source+files > > Reminder: switch to the 3.5 branch! > > 6. Apply your change to the 3.5 branch and check in. > > Reminder: check in to the 3.5 branch! > > 7. Make sure you checked in your change to the 3.5 branch. > > Reminder: Seriously. I keep messing this up. I say, the more > reminders, > the better. > > 8. Push your change back to *your* fork on *your* Bitbucket account. > > Just normal "hg push" should work here. > > In case it helps, I recommend using the "https" protocol for this > step, as > it sidesteps ssh authentication and prompts you for your Bitbucket > username > and password. > > 9. Create a pull request using Bitbucket's web GUI. > > Bitbucket has a tutorial on how to create a pull request, here: > > https://confluence.atlassian.com/display/BITBUCKET/Create+a+pull+request > > On the "Create pull request" web GUI, make sure that you specify > branch "3.5" for *both* your repo *and* my repo. Also, make sure > you *don't* check the "Close 3.5 after the pull request is merged" > check box. 
> > (If you use the "Compare" page, you also need to select "3.5" in both > drop-down lists--one for my repo, and one for yours.) > > 10. Paste a link to the pull request into the issue tracker issue for this > change request. > > 11. Wait for confirmation that I've accepted your pull request into the > 3.5.0 repo. > > 12. Pull your accepted change from your local Bitbucket fork repo into > a normal hg.cpython.org CPython repo, merge into 3.5, then merge > into 3.6, then push. > > > For the record, here's what *my* workflow looks like when I accept your > pull request: > > 1. Click on the URL you pasted into the pull request. > > 2. Visually check that the diff matches the approved diff in the issue > on the issue tracker. > > 3. Click on the "Merge" button. > > > Frequently Asked Questions > ========================== > > Q: What if someone sends me a "pull request" for a change that doesn't > merge cleanly? > A: Then I'll decline it, and ask you on the issue tracker to rebase > and resubmit. > > Q: What if someone sends me a "pull request" but they don't have commit > rights to CPython? > A: I'll ignore it. I'll only pay attention to pull requests pasted into > the issue tracker by someone with commit rights. > > Q: Whose name goes on the commit? > A: It gets the name the checkin was made with. Don't worry, your name > will stay on your commit. > > Q: This seems like a lot more work than the old way. > A: For you guys, yes. But notice how little work it is for *me*! > Seriously. > > Q: Can I reuse my fork / my local copy? Or do I have to create > a fresh one each time? > A: I don't care either way. All I care about are clean pull requests. > If you're careful you should have no trouble reusing forks and local > checkouts. If it were me, I'd probably use a fresh fork each time. > Forks are cheap and this way is cleaner. > > > I'll add these instructions to the Python Dev Guide in the next day or two. > > > /arry > > p.s. Remember to use the 3.5 branch! > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/brett%40python.org > -------------- next part -------------- An HTML attachment was scrubbed... URL: From larry at hastings.org Tue Aug 11 02:26:18 2015 From: larry at hastings.org (Larry Hastings) Date: Mon, 10 Aug 2015 17:26:18 -0700 Subject: [Python-Dev] [RELEASED] Python 3.5.0rc1 is now available Message-ID: <55C9412A.5030003@hastings.org> On behalf of the Python development community and the Python 3.5 release team, I'm relieved to announce the availability of Python 3.5.0rc1, also known as Python 3.5.0 Release Candidate 1. Python 3.5 has now entered "feature freeze". By default new features may no longer be added to Python 3.5. This is a preview release, and its use is not recommended for production settings. You can find Python 3.5.0rc1 here: https://www.python.org/downloads/release/python-350rc1/ Windows and Mac users: please read the important platform-specific "Notes on this release" section near the end of that page. 
Happy hacking, /arry From larry at hastings.org Tue Aug 11 02:28:18 2015 From: larry at hastings.org (Larry Hastings) Date: Mon, 10 Aug 2015 17:28:18 -0700 Subject: [Python-Dev] Instructions on the new "push request" workflow for 3.5.0rc1+ through 3.5.0 final In-Reply-To: <55C8608E.5020509@hastings.org> References: <55C8608E.5020509@hastings.org> Message-ID: <55C941A2.70100@hastings.org> On 08/10/2015 01:27 AM, Larry Hastings wrote: > > As of Python 3.5.0rc1, the canonical repository for Python 3.5.0 is > *no longer* on hg.python.org. Instead, it's hosted on Bitbucket on > my personal account, here: > > https://bitbucket.org/larry/cpython350 > > Since 3.5.0rc1 isn't out yet I'm keeping the repository private for now. > Once 3.5.0 rc1 is released (hopefully Monday) I'll flip the switch and > make > the repository public. (I'll email python-dev and python-committers when > that happens.) Python 3.5.0rc1 just went live. So, as promised, I've flipped the switch--my "cpython350" repository is now public. En garde, //arry/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From larry at hastings.org Tue Aug 11 02:55:40 2015 From: larry at hastings.org (Larry Hastings) Date: Mon, 10 Aug 2015 17:55:40 -0700 Subject: [Python-Dev] Sorry folks, minor hiccup for Python 3.5.0rc1 Message-ID: <55C9480C.20409@hastings.org> I built the source tarballs with a slightly-out-of-date tree. We slipped the release by a day to get two fixes in, but the tree I built from didn't have those two fixes. I yanked the tarballs off the release page as soon as I suspected something. I'm rebuilding the tarballs and the docs now. If you grabbed the tarball as soon as it appeared, it's slightly out of date, please re-grab. Sorry for the palaver, //arry/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From larry at hastings.org Tue Aug 11 02:56:26 2015 From: larry at hastings.org (Larry Hastings) Date: Mon, 10 Aug 2015 17:56:26 -0700 Subject: [Python-Dev] Sorry folks, minor hiccup for Python 3.5.0rc1 In-Reply-To: <55C9480C.20409@hastings.org> References: <55C9480C.20409@hastings.org> Message-ID: <55C9483A.8010300@hastings.org> On 08/10/2015 05:55 PM, Larry Hastings wrote: > I yanked the tarballs off the release page as soon as I suspected > something. I'm rebuilding the tarballs and the docs now. If you > grabbed the tarball as soon as it appeared, it's slightly out of date, > please re-grab. p.s. I should have mentioned--the Mac and Windows builds should be fine. They, unlike me, updated their tree ;-) -------------- next part -------------- An HTML attachment was scrubbed... URL: From wes.turner at gmail.com Tue Aug 11 04:11:01 2015 From: wes.turner at gmail.com (Wes Turner) Date: Mon, 10 Aug 2015 21:11:01 -0500 Subject: [Python-Dev] PEP-498: Literal String Formatting In-Reply-To: References: <55C55DC3.8040605@trueblade.com> <55C79A73.1030901@trueblade.com> <20150810172631.GN3737@ando.pearwood.info> <20150810143127.66c5f842@anarchist.wooz.org> Message-ID: On Aug 10, 2015 4:52 PM, "Nathaniel Smith" wrote: > > On Aug 10, 2015 11:33 AM, "Barry Warsaw" wrote: > > > > On Aug 11, 2015, at 03:26 AM, Steven D'Aprano wrote: > > > > >I think I would be happy with f-strings, or perhaps i-strings if we use > > >Nick's ideas about internationalisation, and limit what they can evaluate to > > >name lookups, attribute lookups, and indexing, just like format(). > > > > I still think you really only need name lookups, especially for an i18n > > context. 
Anything else is just overkill, YAGNI, potentially error prone, or > > perhaps even harmful. > > > > Remember that the translated strings usually come from only moderately (if at > > all) trusted and verified sources, so it's entirely possible that a malicious > > translator could sneak in an exploit, especially if you're evaluating > > arbitrary expressions. If you're only doing name substitutions, then the > > worst that can happen is an information leak, which is bad, but won't > > compromise the integrity of say a server using the translation. > > > > Even if the source strings avoid the use of expressions, if the feature is > > available, a translator could still sneak something in. That pretty much > > makes it a non-starter for i18n, IMHO. > > > > Besides, any expression you have to calculate can go in a local that will get > > interpolated. The same goes for any !r or other formatting modifiers. In an > > i18n context, you want to stick to the simplest possible substitution > > placeholders. > > IIUC what Nick contemplates in PEP 501 is that when you write something like > i"I am ${self.age}" > then the python runtime would itself evaluate self.age and pass it on to the i18n machinery to do the actual substitution; the i18n machinery wouldn't even contain any calls to eval. The above string could be translated as > i"Tengo ${self.age} a?os" > but > i"Tengo ${self.password} a?os" > would be an error, because the runtime did not provide a value for self.password. So while arbitrarily complex expressions are allowed (at least as far as the language is concerned -- a given project or i18n toolkit could impose additional policy restrictions), by the time the interpolation machinery runs they'll effectively have been reduced to local variables with funny multi-token names. > > This pretty much eliminates all the information leak and exploit concerns, AFAICT. From your comments about having to be careful about attribute chasing, it sounds like it might even be more robust than current flufl.i18n in this regard...? No, those remain; but minimizing calls to eval is good, too. I prefer explicit template context for good reason: * scope / variable binding in list comprehensions, * "it was called 'cmd' two nested scopes ago" Again, convenient but dangerous (Django and Jinja can/do autoescaping) and making it far too easy to wrongly quote and not escape strings (which often contain domain-specific) control characters. > > -n > > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/wes.turner%40gmail.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: From steve at pearwood.info Tue Aug 11 04:17:42 2015 From: steve at pearwood.info (Steven D'Aprano) Date: Tue, 11 Aug 2015 12:17:42 +1000 Subject: [Python-Dev] PEP-498: Literal String Formatting In-Reply-To: References: <55C79A73.1030901@trueblade.com> <20150810172631.GN3737@ando.pearwood.info> <20150810143127.66c5f842@anarchist.wooz.org> <55C8EF58.6040100@trueblade.com> <55C8F0F8.8040806@gmail.com> <55C8F221.5070504@trueblade.com> Message-ID: <20150811021742.GP3737@ando.pearwood.info> On Mon, Aug 10, 2015 at 09:23:15PM +0200, Guido van Rossum wrote: [...] > Anyway, this generalization from print() is why I want arbitrary > expressions. 
Wouldn't it be silly if we introduced print() today and said > "we don't really like to encourage printing complicated expressions, but > maybe we can introduce them in a future version"... :-) That's a straw-man argument. Nobody is arguing against allowing arbitrary expressions as arguments to functions. If you want a fair analogy, how about the reluctance to allow arbitrary expressions as decorators? @[spam, eggs, cheese][switch] def function(): ... As far as I can see, the non-straw argument is that f-strings be limited to the same subset of expressions that format() accepts: name and attribute look-ups, and indexing. -- Steve From wes.turner at gmail.com Tue Aug 11 04:33:11 2015 From: wes.turner at gmail.com (Wes Turner) Date: Mon, 10 Aug 2015 21:33:11 -0500 Subject: [Python-Dev] PEP-498: Literal String Formatting In-Reply-To: References: <55C55DC3.8040605@trueblade.com> <55C79A73.1030901@trueblade.com> <20150810170713.GM3737@ando.pearwood.info> Message-ID: On Mon, Aug 10, 2015 at 1:52 PM, David Mertz wrote: > I know. I elided including the nonexistent `nonlocals()` in there. But it > *should* be `lngb()`. Or call it scope(). :-) > On Aug 10, 2015 10:09 AM, "Steven D'Aprano" wrote: > >> On Sun, Aug 09, 2015 at 06:14:18PM -0700, David Mertz wrote: >> >> [...] >> > That said, there *is* one small corner where I believe f-strings add >> > something helpful to the language. There is no really concise way to >> spell: >> > >> > collections.ChainMap(locals(), globals(), __builtins__.__dict__). >> >> I think that to match the normal name resolution rules, nonlocals() >> needs to slip in there between locals() and globals(). I realise that >> there actually isn't a nonlocals() function (perhaps there should be?). >> >> > If we could spell that as, say `lgb()`, that would let str.format() or >> > %-formatting pick up the full "what's in scope". To my mind, that's the >> > only good thing about the f-string idea. >> >> I like the concept, but not the name. Initialisms tend to be hard >> to remember and rarely self-explanatory. How about scope()? >> > #letsgoblues! scope(**kwargs), lngb(**kwargs), lookup(**kwargs) could allow for local attr override. > >> >> -- >> Steve >> _______________________________________________ >> Python-Dev mailing list >> Python-Dev at python.org >> https://mail.python.org/mailman/listinfo/python-dev >> Unsubscribe: >> https://mail.python.org/mailman/options/python-dev/mertz%40gnosis.cx >> > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/wes.turner%40gmail.com > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From wes.turner at gmail.com Tue Aug 11 04:59:43 2015 From: wes.turner at gmail.com (Wes Turner) Date: Mon, 10 Aug 2015 21:59:43 -0500 Subject: [Python-Dev] PEP-498: Literal String Formatting In-Reply-To: <55C8F5C8.7030005@oddbird.net> References: <55C55DC3.8040605@trueblade.com> <55C79A73.1030901@trueblade.com> <20150810172631.GN3737@ando.pearwood.info> <20150810143127.66c5f842@anarchist.wooz.org> <55C8EF58.6040100@trueblade.com> <55C8F0F8.8040806@gmail.com> <55C8F221.5070504@trueblade.com> <55C8F5C8.7030005@oddbird.net> Message-ID: On Mon, Aug 10, 2015 at 2:04 PM, Carl Meyer wrote: > On 08/10/2015 02:49 PM, Eric V. Smith wrote: > > On 08/10/2015 02:44 PM, Yury Selivanov wrote: > >> > >> > >> On 2015-08-10 2:37 PM, Eric V. 
Smith wrote: > >>>> Besides, any expression you have to calculate can go in a local that > >>>> will get > >>>>> interpolated. The same goes for any !r or other formatting > >>>> modifiers. In an > >>>>> i18n context, you want to stick to the simplest possible substitution > >>>>> placeholders. > >>> This is why I think PEP-498 isn't the solution for i18n. I'd really > like > >>> to be able to say, in a debugging context: > >>> > >>> print('a:{self.a} b:{self.b} c:{self.c} d:{self.d}') > >>> > >>> without having to create locals to hold these 4 values. > >> > >> Why can't we restrict expressions in f-strings to > >> attribute/item getters? > >> > >> I.e. allow f'{foo.bar.baz}' and f'{self.foo["bar"]}' but > >> disallow f'{foo.bar(baz=something)}' > > > > It's possible. But my point is that Barry doesn't even want > > attribute/item getters for an i18n solution, and I'm not willing to > > restrict it that much. > > I don't think attribute access and item access are on the same level > here. In terms of readability of the resulting string literal, it would > be reasonable to allow attribute access but disallow item access. And I > think attribute access is reasonable to allow in the context of an i18n > solution as well (but item access is not). Item access is much harder to > read and easier for translators to mess up because of all the extra > punctuation (and the not-obvious-to-a-non-programmer distinction between > a literal or variable key). > > There's also the solution used by the Django and Jinja templating > languages, where dot-notation can mean either attribute access > (preferentially) or item access with literal key (as fallback). That > manages to achieve both a high level of readability of the > literal/template, and a high level of flexibility for the context > provider (who may find it easier to provide a dictionary than an > object), but may fail the "too different from Python" test. > References for (these) PEPs: One advantage of Python HAVING required explicit template format interpolation string contexts is that to do string language formatting correctly (e.g. *for anything other than printing strings to console* or with formats with defined field/record boundary delimiters (which, even then, may contain shell control escape codes)) we've had to write and use external modules which are specific to the output domain (JSON, HTML, CSS, SQL, SPARQL, CSS, [...]). There are a number of posts about operator syntax, which IMHO, regardless, it's not convenient enough to lose this distinctive 'security' feature (explicit variable bindings for string interpolation) of Python as a scripting language as compared to e.g. Perl, Ruby. Jinja2 reimplements and extends Django template syntax -{% for %}{{variable_or_expr | filtercallable}}-{% endfor %} * Jinja2 supports configurable operators {{ can instead be !! or !{ or ${ or ?? * Because it is a compilable function composition, Jinja2 supports extensions: https://github.com/mitsuhiko/jinja2/blob/master/tests/test_ext.py * Jinja2 supports {% trans %}, _(''), and gettext("") babel-style i18n http://jinja.pocoo.org/docs/dev/templates/#i18n * Jinja2 supports autoescaping: http://jinja.pocoo.org/docs/dev/api/#autoescaping (e.g. 'jinja2.ext.autoescape' AutoEscapeExtension [ScopedEvalContextModifier]) https://github.com/mitsuhiko/jinja2/blob/master/jinja2/ext.py#L434 * preprocessors and things are then just jinja2.ext.Extension s. * Jinja2 accepts an explicit context (where merge(globals, locals, kwargs) just feels wrong because it is, ... 
[ ] lookup(**kwargs), lngb(**kwargs)) (salt pillar merges)) ~ collections.abc.MutableMapping: https://docs.python.org/3/library/collections.abc.html#collections.abc.MutableMapping * Jinja2 marks strings with MarkupSafe (in order to prevent e.g. multiple escaping, lack of escaping) https://pypi.python.org/pypi/MarkupSafe f-strings would make it too easy for me to do the wrong thing; which other language don't prevent (this does occur often [CWE Top 25 2011]), and I regard this as a current feature of Python. > Carl > > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/wes.turner%40gmail.com > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ischwabacher at wisc.edu Tue Aug 11 01:05:45 2015 From: ischwabacher at wisc.edu (ISAAC J SCHWABACHER) Date: Mon, 10 Aug 2015 23:05:45 +0000 Subject: [Python-Dev] PEP-498: Literal String Formatting In-Reply-To: References: <55C55DC3.8040605@trueblade.com> Message-ID: I don't know about you, but I sure like this better than what you have: code.putlines(f""" static char {entry.doc_cname}[] = "{ split_string_literal(escape_bytestring(docstr))}"; { # nested! f""" #if CYTHON_COMPILING_IN_CPYTHON struct wrapperbase {entry.wrapperbase_cname}; #endif """ if entry.is_special else ''} {(lambda temp, argn: # my kingdom for a let! f""" for ({temp}=0; {temp} on behalf of Stefan Behnel Sent: Sunday, August 9, 2015 04:53 To: python-dev at python.org Subject: Re: [Python-Dev] PEP-498: Literal String Formatting Stefan Behnel schrieb am 09.08.2015 um 10:06: > Eric V. Smith schrieb am 08.08.2015 um 03:39: >> Following a long discussion on python-ideas, I've posted my draft of >> PEP-498. It describes the "f-string" approach that was the subject of >> the "Briefer string format" thread. I'm open to a better title than >> "Literal String Formatting". >> >> I need to add some text to the discussion section, but I think it's in >> reasonable shape. I have a fully working implementation that I'll get >> around to posting somewhere this weekend. >> >> >>> def how_awesome(): return 'very' >> ... >> >>> f'f-strings are {how_awesome()} awesome!' >> 'f-strings are very awesome!' >> >> I'm open to any suggestions to improve the PEP. Thanks for your feedback. > > [copying my comment from python-ideas here] > > How common is this use case, really? Almost all of the string formatting > that I've used lately is either for logging (no help from this proposal > here) or requires some kind of translation/i18n *before* the formatting, > which is not helped by this proposal either. Thinking about this some more, the "almost all" is actually wrong. This only applies to one kind of application that I'm working on. In fact, "almost all" of the string formatting that I use is not in those applications but in Cython's code generator. And there's a *lot* of string formatting in there, even though we use real templating for bigger things already. However, looking through the code, I cannot see this proposal being of much help for that use case either. Many of the values that get formatted into the strings use some kind of non-trivial expression (function calls, object attributes, also local variables, sometimes variables with lengthy names) that is best written out in actual code. 
Here are some real example snippets: code.putln( 'static char %s[] = "%s";' % ( entry.doc_cname, split_string_literal(escape_byte_string(docstr)))) if entry.is_special: code.putln('#if CYTHON_COMPILING_IN_CPYTHON') code.putln( "struct wrapperbase %s;" % entry.wrapperbase_cname) code.putln('#endif') temp = ... code.putln("for (%s=0; %s < PyTuple_GET_SIZE(%s); %s++) {" % ( temp, temp, Naming.args_cname, temp)) code.putln("PyObject* item = PyTuple_GET_ITEM(%s, %s);" % ( Naming.args_cname, temp)) code.put("%s = (%s) ? PyDict_Copy(%s) : PyDict_New(); " % ( self.starstar_arg.entry.cname, Naming.kwds_cname, Naming.kwds_cname)) code.putln("if (unlikely(!%s)) return %s;" % ( self.starstar_arg.entry.cname, self.error_value())) We use %-formatting for historical reasons (that's all there was 15 years ago), but I wouldn't switch to .format() because there is nothing to win here. The "%s" etc. place holders are *very* short and do not get in the way (as "{}" would in C code templates). Named formatting would require a lot more space in the templates, so positional, unnamed formatting helps readability a lot. And the value expressions used for the interpolation tend to be expressions rather than simple variables, so keeping those outside of the formatting strings simplifies both editing and reading. That's the third major real-world use case for string formatting now where this proposal doesn't help. The niche is getting smaller. Stefan _______________________________________________ Python-Dev mailing list Python-Dev at python.org https://mail.python.org/mailman/listinfo/python-dev Unsubscribe: https://mail.python.org/mailman/options/python-dev/ischwabacher%40wisc.edu -------------- next part -------------- An HTML attachment was scrubbed... URL: From greg.ewing at canterbury.ac.nz Tue Aug 11 08:07:26 2015 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Tue, 11 Aug 2015 18:07:26 +1200 Subject: [Python-Dev] PEP 498 f-string: please remove the special case for spaces In-Reply-To: <20150810235851.GA19570@cskk.homeip.net> References: <20150810235851.GA19570@cskk.homeip.net> Message-ID: <55C9911E.4090402@canterbury.ac.nz> Cameron Simpson wrote: > To illustrate, there's a consumer rights TV snow here with a segment > called "F.U. Tube", where members of the public describe ripoffs and > other product failures in video form. While a phonetic play on the name > "YouTube", the abbreviation also colloquially means just what you think > it might. I can just imagine reciting one of these new strings out loud... We could require it to be spelled "uf" unless "from __future__ import billy_connolly_as_FLUFL" is in effect. -- Greg From stefan_ml at behnel.de Tue Aug 11 08:12:06 2015 From: stefan_ml at behnel.de (Stefan Behnel) Date: Tue, 11 Aug 2015 08:12:06 +0200 Subject: [Python-Dev] PEP-498: Literal String Formatting In-Reply-To: References: <55C55DC3.8040605@trueblade.com> Message-ID: ISAAC J SCHWABACHER schrieb am 11.08.2015 um 01:05: > I don't know about you, but I sure like this better than what you have: > > code.putlines(f""" > static char {entry.doc_cname}[] = "{ > split_string_literal(escape_bytestring(docstr))}"; > > { # nested! > f""" > #if CYTHON_COMPILING_IN_CPYTHON > struct wrapperbase {entry.wrapperbase_cname}; > #endif > """ if entry.is_special else ''} > > {(lambda temp, argn: # my kingdom for a let! > f""" > for ({temp}=0; {temp} PyObject *item = PyTuple_GET_ITEM({argn}, {temp}); > }}""")(..., Naming.args_cname)} > > {self.starstar_arg.entry.cname} = > ({Naming.kwds_cname}) ? 
PyDict_Copy({Naming.kwds_cname}) > : PyDict_New(); > > if (unlikely(!{self.starstar_arg.entry.cname})) return {self.error_value()}; > """) Matter of taste, I guess. Looks awful to me. It's very difficult to visually separate input and output in this code, so it requires a thorough look to see what data is being used for the formatting. Syntax highlighting and in-string expression completion should eventually help, once IDEs support it. But then editing this code will require an editor that has such support. And not everyone is going to be willing to get one. Stefan From robertc at robertcollins.net Tue Aug 11 08:09:34 2015 From: robertc at robertcollins.net (Robert Collins) Date: Tue, 11 Aug 2015 18:09:34 +1200 Subject: [Python-Dev] PEP needed for http://bugs.python.org/issue9232 ? Message-ID: So, there's a patch on issue 9232 - allow trailing commas in function definitions - but there's been enough debate that I suspect we need a PEP. Would love it if someone could correct me, but I'd like to be able to either categorically say 'no' and close the ticket, or 'yes and this is what needs to happen next'. -Rob -- Robert Collins Distinguished Technologist HP Converged Cloud From cs at zip.com.au Tue Aug 11 09:08:13 2015 From: cs at zip.com.au (Cameron Simpson) Date: Tue, 11 Aug 2015 17:08:13 +1000 Subject: [Python-Dev] PEP 498 f-string: please remove the special case for spaces In-Reply-To: <55C9911E.4090402@canterbury.ac.nz> References: <55C9911E.4090402@canterbury.ac.nz> Message-ID: <20150811070813.GA23088@cskk.homeip.net> On 11Aug2015 18:07, Greg Ewing wrote: >Cameron Simpson wrote: >>To illustrate, there's a consumer rights TV snow here with a segment >>called "F.U. Tube", where members of the public describe ripoffs and >>other product failures in video form. While a phonetic play on the >>name "YouTube", the abbreviation also colloquially means just what >>you think it might. I can just imagine reciting one of these new >>strings out loud... > >We could require it to be spelled "uf" unless "from __future__ >import billy_connolly_as_FLUFL" is in effect. That seems like a reasoned and measured response. Cheers, Cameron Simpson For those who understand, NO explanation is needed, for those who don't understand, NO explanation will be given! - Davey D From stephen at xemacs.org Tue Aug 11 09:33:43 2015 From: stephen at xemacs.org (Stephen J. Turnbull) Date: Tue, 11 Aug 2015 16:33:43 +0900 Subject: [Python-Dev] PEP-498: Literal String Formatting In-Reply-To: <20150810143127.66c5f842@anarchist.wooz.org> References: <55C55DC3.8040605@trueblade.com> <55C79A73.1030901@trueblade.com> <20150810172631.GN3737@ando.pearwood.info> <20150810143127.66c5f842@anarchist.wooz.org> Message-ID: <878u9ixors.fsf@uwakimon.sk.tsukuba.ac.jp> Barry Warsaw writes: > Besides, any expression you have to calculate can go in a local > that will get interpolated. Sure, but that style should be an application programmer choice. If this syntax can't replace the vast majority of cases where the format method is invoked on a literal string without requiring introduction of gratuitous temporaries, I don't see the point. By "invoked", I mean the arguments to the format method, too, so even function calls should be permitted. To me it's not worth the expense of learning and teaching the differences otherwise. If that point of view were generally accepted, it seems to me that it kills this idea of using the same syntax for programmer interpolation and for translation interpolation. 
The two use cases present requirements that are too different since translators are generally "third party volunteers", *not* "trusted contributors". Nor are their contributions generally reviewed by "core". > In an i18n context, you want to stick to the simplest possible > substitution placeholders. Certainly, and in that case I think format strings with simple variable and attribute interpolation, plus an explicit invocation of the format method comprise TOOWDTI -- it's exactly what you want! In fact, I am now -1 on an implicitly formatted I18N quasi-literal. It seems to me that in fact we should think of such an internationalized string as merely an obfuscated way of spelling variable_input_by_user. The current I18N frameworks make this clear by requiring a function call, which theoretically could return any string and have any side effects -- but these are controlled by the programmer. But there are other contexts, equally important, where a more compact, implicit formatting syntax would be very valuable, starting with scripting. BTW, I know application programmers hate those calls. I wonder if they can't be folded into str.format, with a dummy string prefix of "i" (or "_"!) being allowed solely to trigger xgettext and similar potfile extraction utilities? So you'd write >>> s = i"Please translate this {adjective} string." >>> s.format(adjective=i"beautiful", gettext=('ja', None)) "???????????????????" where the first component of gettext is the language and the second is the gettext domain (defaulting to the current application). If that works, the transformation from monolingual application to internationalized application is sufficiently mechanical that a non- programmer could be easily taught to perform it. From rosuav at gmail.com Tue Aug 11 13:51:56 2015 From: rosuav at gmail.com (Chris Angelico) Date: Tue, 11 Aug 2015 21:51:56 +1000 Subject: [Python-Dev] PEP 498 f-string: please remove the special case for spaces In-Reply-To: <20150811070813.GA23088@cskk.homeip.net> References: <55C9911E.4090402@canterbury.ac.nz> <20150811070813.GA23088@cskk.homeip.net> Message-ID: On Tue, Aug 11, 2015 at 5:08 PM, Cameron Simpson wrote: > On 11Aug2015 18:07, Greg Ewing wrote: >> >> Cameron Simpson wrote: >>> >>> To illustrate, there's a consumer rights TV snow here with a segment >>> called "F.U. Tube", where members of the public describe ripoffs and other >>> product failures in video form. While a phonetic play on the name "YouTube", >>> the abbreviation also colloquially means just what you think it might. I can >>> just imagine reciting one of these new strings out loud... >> >> >> We could require it to be spelled "uf" unless "from __future__ >> import billy_connolly_as_FLUFL" is in effect. > > > That seems like a reasoned and measured response. Given the levels of profanity that are not disallowed in identifier names, I think blocking off a two-letter prefix is pretty pointless. It'd be different if the specification _required_ it (though even then, it's not that big a deal...), but merely permitting it? Not Python's fault. 
ChrisA From steve at pearwood.info Tue Aug 11 14:47:33 2015 From: steve at pearwood.info (Steven D'Aprano) Date: Tue, 11 Aug 2015 22:47:33 +1000 Subject: [Python-Dev] PEP 498 f-string: please remove the special case for spaces In-Reply-To: References: <55C9911E.4090402@canterbury.ac.nz> <20150811070813.GA23088@cskk.homeip.net> Message-ID: <20150811124733.GC5249@ando.pearwood.info> On Tue, Aug 11, 2015 at 09:51:56PM +1000, Chris Angelico wrote: > On Tue, Aug 11, 2015 at 5:08 PM, Cameron Simpson wrote: > > On 11Aug2015 18:07, Greg Ewing wrote: > >> > >> Cameron Simpson wrote: > >>> > >>> To illustrate, there's a consumer rights TV snow here with a segment > >>> called "F.U. Tube", where members of the public describe ripoffs and other > >>> product failures in video form. While a phonetic play on the name "YouTube", > >>> the abbreviation also colloquially means just what you think it might. I can > >>> just imagine reciting one of these new strings out loud... > >> > >> > >> We could require it to be spelled "uf" unless "from __future__ > >> import billy_connolly_as_FLUFL" is in effect. > > > > > > That seems like a reasoned and measured response. > > Given the levels of profanity that are not disallowed in identifier > names, I think blocking off a two-letter prefix is pretty pointless. > It'd be different if the specification _required_ it (though even > then, it's not that big a deal...), but merely permitting it? Not > Python's fault. Er, if it's not Python's doing, whose doing is it? There's a difference between not censoring identifiers written by the user, and creating syntax. I don't think anyone would blame the language if I created an identifier "poop", say, but if the language included a keyword "poop" or a syntax feature, say, poop-lists: poop[a, b, c, d] then one might wonder what the language designers were thinking. I've already seen a bit of sniggering on Reddit about fu strings. In the grand scheme of things, worrying about fu strings is pretty low on the list of priorities. But if there is no need to allow fu as a prefix, or if there is another equally good prefix to use instead of f, then there's no harm done by disappointing the 14 year olds. -- Steve From barry at python.org Tue Aug 11 15:44:49 2015 From: barry at python.org (Barry Warsaw) Date: Tue, 11 Aug 2015 09:44:49 -0400 Subject: [Python-Dev] PEP-498: Literal String Formatting In-Reply-To: References: <55C55DC3.8040605@trueblade.com> Message-ID: <20150811094449.4cc9bada@limelight.wooz.org> On Aug 10, 2015, at 11:05 PM, ISAAC J SCHWABACHER wrote: >code.putlines(f""" >static char {entry.doc_cname}[] = "{ > split_string_literal(escape_bytestring(docstr))}"; > >{ # nested! >f""" >#if CYTHON_COMPILING_IN_CPYTHON > struct wrapperbase {entry.wrapperbase_cname}; >#endif >""" if entry.is_special else ''} > >{(lambda temp, argn: # my kingdom for a let! >f""" >for ({temp}=0; {temp} PyObject *item = PyTuple_GET_ITEM({argn}, {temp}); >}}""")(..., Naming.args_cname)} > >{self.starstar_arg.entry.cname} = > ({Naming.kwds_cname}) ? PyDict_Copy({Naming.kwds_cname}) > : PyDict_New(); > >if (unlikely(!{self.starstar_arg.entry.cname})) return {self.error_value()}; >""") > >What do others think of this PEP-498 sample? No offense intended, but I put this in an Emacs Python buffer and it made me want to cry. Cheers, -Barry From eric at trueblade.com Tue Aug 11 15:47:06 2015 From: eric at trueblade.com (Eric V. 
Smith) Date: Tue, 11 Aug 2015 09:47:06 -0400 Subject: [Python-Dev] PEP-498: Literal String Formatting In-Reply-To: <55C905A5.9050005@mgmiller.net> References: <55C55DC3.8040605@trueblade.com> <55C905A5.9050005@mgmiller.net> Message-ID: <55C9FCDA.2010206@trueblade.com> On 08/10/2015 04:12 PM, Mike Miller wrote: > Here are my notes on PEP 498. > > 1. Title: Literal String Formatting > > - String Literal Formatting > - Format String Expressions > ? I like "String Literal Formatting", but let me sleep on it. > 2. Let's call them "format strings" not "f-strings". > The latter sounds slightly obnoxious, and also inconsistent with the > others: > > r'' raw string > u'' unicode object (string) > f'' format string People seem to have already started using f-strings. I think it's inevitable. > 3. " This PEP does not propose to remove or deprecate any of the existing > string formatting mechanisms. " > > Should we put this farther up with the section talking about them, > it seems out of place where it is. > Done. > 4. "The existing ways of formatting are either error prone, inflexible, or > cumbersome." > > I would tone this down a bit, they're not so bad, quite verbose is a > phrase I might use instead. > I'll try and tone it down. > 5. Discussion Section > How to designate f-strings, and how specify the locaton of expressions > ^ typo I already found that one. Thanks. > 6. Perhaps mention string literal functionality, like triple quotes, > line-ending backslashes, as MRAB mentions, in addition to the > concatenation rules. Good idea. Eric. > -Mike > > > On 08/07/2015 06:39 PM, Eric V. Smith wrote: > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/eric%2Ba-python-dev%40trueblade.com > > From rdmurray at bitdance.com Tue Aug 11 16:46:59 2015 From: rdmurray at bitdance.com (R. David Murray) Date: Tue, 11 Aug 2015 10:46:59 -0400 Subject: [Python-Dev] PEP needed for http://bugs.python.org/issue9232 ? In-Reply-To: References: Message-ID: <20150811144700.40AE0B1400A@webabinitio.net> On Tue, 11 Aug 2015 18:09:34 +1200, Robert Collins wrote: > So, there's a patch on issue 9232 - allow trailing commas in function > definitions - but there's been enough debate that I suspect we need a > PEP. > > Would love it if someone could correct me, but I'd like to be able to > either categorically say 'no' and close the ticket, or 'yes and this > is what needs to happen next'. I think we might just need another round of discussion here. I'm +1 myself. Granted there haven't been many times I've wanted it (functions with enough arguments to want to make it easy to add and remove elements are a bit of a code smell), but I have wanted it (and even used the form that is accepted) several times. On the other hand, the number of times when the detection of a trailing comma has revealed a missing argument to me (Raymond's objection) has been...well, I'm pretty sure it is zero. Especially since it only happens *sometimes*. Since backward compatibility says we shouldn't disallow it where it is currently allowed, the only logical thing to do, IMO, is consistently allow it. (If you wanted to fix an 'oops' trailing comma syntax issue, I'd vote for disallowing trailing commas outside of (). 
The number of times I've ended up with an unintentional tuple after converting a dictionary to a series of assignments outnumbers both of the above :) Note, I am *not* suggesting doing this!) --David From rosuav at gmail.com Tue Aug 11 17:03:38 2015 From: rosuav at gmail.com (Chris Angelico) Date: Wed, 12 Aug 2015 01:03:38 +1000 Subject: [Python-Dev] PEP needed for http://bugs.python.org/issue9232 ? In-Reply-To: <20150811144700.40AE0B1400A@webabinitio.net> References: <20150811144700.40AE0B1400A@webabinitio.net> Message-ID: On Wed, Aug 12, 2015 at 12:46 AM, R. David Murray wrote: > (If you wanted to fix an 'oops' trailing comma syntax issue, I'd vote for > disallowing trailing commas outside of (). The number of times I've > ended up with an unintentional tuple after converting a dictionary to a > series of assignments outnumbers both of the above :) Note, I am *not* > suggesting doing this!) Outside of any form of bracket, I hope you mean. The ability to leave a trailing comma on a list or dict is well worth keeping: func = { "+": operator.add, "-": operator.sub, "*": operator.mul, "/": operator.truediv, } ChrisA From eric at trueblade.com Tue Aug 11 17:05:46 2015 From: eric at trueblade.com (Eric V. Smith) Date: Tue, 11 Aug 2015 11:05:46 -0400 Subject: [Python-Dev] PEP 498 f-string: is it a preprocessor? In-Reply-To: References: <55C8B6EE.10504@trueblade.com> <55C92CD1.5080403@trueblade.com> Message-ID: <55CA0F4A.6040102@trueblade.com> On 08/10/2015 07:23 PM, Victor Stinner wrote: > > > Le mardi 11 ao?t 2015, Eric V. Smith > a ?crit : > > Oops, I was thinking of going the other way (str.format -> f''). Yes, I > think you're correct. > > > Ah ok. > > But in any event, I don't see the distinction between calling > str.format(), and calling each object's __format__ method. Both are > compliant with the PEP, which doesn't specify exactly how the > transformation is done. > > > When I read the PEP for the first time, I understood that you > reimplemented str.format() using the __format__() methods. So i > understood that it's a new formatting language and it would be tricky to > reimplement it, for example in a library providing i18n with f-string > syntax (I'm not sure that it's feasible, it's just an example). I also > expected many subtle differences between .format() and f-string. > > In fact, f-string is quite standard and not new, it's just a compact > syntax to call .format() (well, with some minor and acceptable subtle > differences). For me, it's a good thing to rely on the existing > .format() method because it's well known (no need to learn a new > formatting language). > > Maybe you should rephrase some parts of your PEP and rewrite some > examples to say that's it's "just" a compact syntax to call .format(). > > -- > > For me, calling __format__() multiple times or format() once matters, > for performances, because I contributed to the implementation of > _PyUnicodeWriter. I spent a lot of time to keep good performances > when the implementation of Unicode was rewritten for the PEP 393. With > this PEP, writing an efficient implementation is much harder. The dummy > benchmark is to compare Python 2.7 str.format() (bytes!) to Python 3 > str.format() (Unicode!). Users want similar performances! If I recall > correctly, Python 3 is not bad (faster is some corner cases). > > Concatenate temporary strings is less efficient Than _PyUnicodeWriter > (single buffer) when you have UCS-1, UCS-2 and UCS-4 strings (1/2/4 > bytes per character). 
It's more efficient to write directly into the > final format (UCS-1/2/4), even if you may need to convert the buffer > from UCS-1 to UCS-2 (and maybe even one more time to UCS-4). I think I've pinpointed what bothers me about building up a string for str.format: You're building up a string which is then parsed, and after it's parsed, you make the exact same function calls that you could instead make directly. When Mark Dickinson and I implemented short float repr we spent a lot of time taking apart code that did things like this. But yes, there are some optimizations in str.format dealing with both _PyUncicodeWriter and with not calling __format__ for some builtin types. So maybe there's a win to be had there, even with the extra parsing that would happen. In any event, it should be driven by testing. That said, I now think that when handling nested f-strings: f'value: {value:{width}}' It will be easier to translate this to: 'value: {0:{1}s}'.format(value, width) than: ''.join(['value: ', value.__format__(''.join([width.__format__(), 's'])) ]) But I don't see any need to modify the PEP. The exact mechanism used isn't specified. I just want the PEP to be clear that it's using the __format__ protocol. The implementation can either do so explicitly or via str.format, and which one might change in the future. Eric. From tritium-list at sdamon.com Tue Aug 11 17:09:36 2015 From: tritium-list at sdamon.com (Alexander Walters) Date: Tue, 11 Aug 2015 11:09:36 -0400 Subject: [Python-Dev] PEP-498: Literal String Formatting In-Reply-To: References: <55C55DC3.8040605@trueblade.com> <55C79A73.1030901@trueblade.com> <20150810172631.GN3737@ando.pearwood.info> <20150810143127.66c5f842@anarchist.wooz.org> Message-ID: <55CA1030.3060808@sdamon.com> This may seam like a simplistic solution to i18n, but why not just add a method to string objects (assuming we implement f-strings) that just returns the original, unprocessed string. If the string was not an f-string, it just returns self. The gettext module can be modified, I think trivially, to use the method instead of the string directly. Is this a horrible idea? - Alex W. From eric at trueblade.com Tue Aug 11 17:16:05 2015 From: eric at trueblade.com (Eric V. Smith) Date: Tue, 11 Aug 2015 11:16:05 -0400 Subject: [Python-Dev] PEP-498: Literal String Formatting In-Reply-To: <55CA1030.3060808@sdamon.com> References: <55C55DC3.8040605@trueblade.com> <55C79A73.1030901@trueblade.com> <20150810172631.GN3737@ando.pearwood.info> <20150810143127.66c5f842@anarchist.wooz.org> <55CA1030.3060808@sdamon.com> Message-ID: <55CA11B5.3080503@trueblade.com> On 08/11/2015 11:09 AM, Alexander Walters wrote: > This may seam like a simplistic solution to i18n, but why not just add a > method to string objects (assuming we implement f-strings) that just > returns the original, unprocessed string. If the string was not an > f-string, it just returns self. The gettext module can be modified, I > think trivially, to use the method instead of the string directly. You need the original string, in order to figure out what it translates to. You need the values to replace into that string, evaluated at runtime, in the context of where the string appears. And you need to know where in the original (or translated) string to put them. The problem is that there's no way to evaluate the values and, before they're substituted in to the string, use a different template string with obvious substitution points. This is what PEP 501 is trying to do. Eric. 
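To make that last point concrete, here is a deliberately simplified sketch of the evaluate-now, substitute-later split. It is not PEP 501's actual API; the catalog and the strings are invented for illustration.

    def render(template, values, catalog=None):
        # 'values' were already evaluated in the caller's context; only
        # the template text is swapped for a translation before the
        # substitution happens.
        catalog = catalog or {}
        return catalog.get(template, template).format_map(values)

    age = 43
    catalog = {"I am {age} years old": "Tengo {age} a\u00f1os"}
    print(render("I am {age} years old", {"age": age}, catalog))

    # A translation that references a name the caller did not provide
    # fails loudly instead of evaluating anything:
    #   "Tengo {password} a\u00f1os".format_map({"age": age})  -> KeyError
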
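Eric's earlier comparison in this sub-thread, between expanding a nested format spec via str.format and via chained __format__ calls, can also be checked directly. The values below are made up, and the builtin format(x, spec) stands in for x.__format__(spec):

    value, width = "abc", 10

    via_format_method = "value: {0:{1}s}".format(value, width)
    via_format_protocol = "".join(
        ["value: ", format(value, "".join([format(width, ""), "s"]))])

    assert via_format_method == via_format_protocol
    print(repr(via_format_method))   # 'value: abc' padded to width 10
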
From wes.turner at gmail.com Tue Aug 11 17:19:48 2015 From: wes.turner at gmail.com (Wes Turner) Date: Tue, 11 Aug 2015 10:19:48 -0500 Subject: [Python-Dev] PEP-498: Literal String Formatting In-Reply-To: <55CA1030.3060808@sdamon.com> References: <55C55DC3.8040605@trueblade.com> <55C79A73.1030901@trueblade.com> <20150810172631.GN3737@ando.pearwood.info> <20150810143127.66c5f842@anarchist.wooz.org> <55CA1030.3060808@sdamon.com> Message-ID: On Aug 11, 2015 10:10 AM, "Alexander Walters" wrote: > > This may seam like a simplistic solution to i18n, but why not just add a method to string objects (assuming we implement f-strings) that just returns the original, unprocessed string. If the string was not an f-string, it just returns self. The gettext module can be modified, I think trivially, to use the method instead of the string directly. > > Is this a horrible idea? This is a backward compatible macro to elide code in strings that should not be. * IIUC, this would only be usable in 3.6+ (so, not at all and style guide says NO) * there should be a normal functional() way to accomplish this in a backwards compatible way * formatlng() / lookup() would be more future compatible > > - Alex W. > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/wes.turner%40gmail.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From guido at python.org Tue Aug 11 17:23:58 2015 From: guido at python.org (Guido van Rossum) Date: Tue, 11 Aug 2015 17:23:58 +0200 Subject: [Python-Dev] PEP needed for http://bugs.python.org/issue9232 ? In-Reply-To: References: Message-ID: I don't think it needs a PEP. See my response in the issue. On Tue, Aug 11, 2015 at 8:09 AM, Robert Collins wrote: > So, there's a patch on issue 9232 - allow trailing commas in function > definitions - but there's been enough debate that I suspect we need a > PEP. > > Would love it if someone could correct me, but I'd like to be able to > either categorically say 'no' and close the ticket, or 'yes and this > is what needs to happen next'. > > -Rob > > -- > Robert Collins > Distinguished Technologist > HP Converged Cloud > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/guido%40python.org > -- --Guido van Rossum (python.org/~guido) -------------- next part -------------- An HTML attachment was scrubbed... URL: From tritium-list at sdamon.com Tue Aug 11 17:26:38 2015 From: tritium-list at sdamon.com (Alexander Walters) Date: Tue, 11 Aug 2015 11:26:38 -0400 Subject: [Python-Dev] PEP-498: Literal String Formatting In-Reply-To: <55CA11B5.3080503@trueblade.com> References: <55C55DC3.8040605@trueblade.com> <55C79A73.1030901@trueblade.com> <20150810172631.GN3737@ando.pearwood.info> <20150810143127.66c5f842@anarchist.wooz.org> <55CA1030.3060808@sdamon.com> <55CA11B5.3080503@trueblade.com> Message-ID: <55CA142E.4030905@sdamon.com> On 8/11/2015 11:16, Eric V. Smith wrote: > On 08/11/2015 11:09 AM, Alexander Walters wrote: >> This may seam like a simplistic solution to i18n, but why not just add a >> method to string objects (assuming we implement f-strings) that just >> returns the original, unprocessed string. If the string was not an >> f-string, it just returns self. 
The gettext module can be modified, I >> think trivially, to use the method instead of the string directly. > You need the original string, in order to figure out what it translates > to. You need the values to replace into that string, evaluated at > runtime, in the context of where the string appears. And you need to > know where in the original (or translated) string to put them. > > The problem is that there's no way to evaluate the values and, before > they're substituted in to the string, use a different template string > with obvious substitution points. This is what PEP 501 is trying to do. > > Eric. I don't understand some of that. We already trust translators with _('foo {bar}').format(bar=bar) to not mess up the {bar} in the string, so the that wont change. Is the issue handing the string back to python to be formatted? Could gettext not make the same AST as an f-string would, and hand that back to python? If you add a method to strings that returns the un-f-string-processed version of the string, doesn't that make all these problems solvable without pep-501? From wes.turner at gmail.com Tue Aug 11 17:28:27 2015 From: wes.turner at gmail.com (Wes Turner) Date: Tue, 11 Aug 2015 10:28:27 -0500 Subject: [Python-Dev] PEP-498: Literal String Formatting In-Reply-To: References: <55C55DC3.8040605@trueblade.com> <55C79A73.1030901@trueblade.com> <20150810172631.GN3737@ando.pearwood.info> <20150810143127.66c5f842@anarchist.wooz.org> <55CA1030.3060808@sdamon.com> Message-ID: On Aug 11, 2015 10:19 AM, "Wes Turner" wrote: > > > On Aug 11, 2015 10:10 AM, "Alexander Walters" wrote: > > > > This may seam like a simplistic solution to i18n, but why not just add a method to string objects (assuming we implement f-strings) that just returns the original, unprocessed string. If the string was not an f-string, it just returns self. The gettext module can be modified, I think trivially, to use the method instead of the string directly. > > > > Is this a horrible idea? - [ ] review all string interpolation (for "injection") * [ ] review every '%' * [ ] review every ".format()" * [ ] review every f-string (AND LOCALS AND GLOBALS) * every os.system, os.exec*, subprocess.Popen * every unclosed tag * every unescaped control character This would create work we don't need. Solution: __str_shell_ escapes, adds slashes, and quotes. __str__SQL__ refs a global list of reserved words. > > This is a backward compatible macro to elide code in strings that should not be. > > * IIUC, this would only be usable in 3.6+ (so, not at all and style guide says NO) > * there should be a normal functional() way to accomplish this in a backwards compatible way > * formatlng() / lookup() would be more future compatible > > > > > - Alex W. > > > > _______________________________________________ > > Python-Dev mailing list > > Python-Dev at python.org > > https://mail.python.org/mailman/listinfo/python-dev > > Unsubscribe: https://mail.python.org/mailman/options/python-dev/wes.turner%40gmail.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From chris.barker at noaa.gov Tue Aug 11 17:31:57 2015 From: chris.barker at noaa.gov (Chris Barker - NOAA Federal) Date: Tue, 11 Aug 2015 08:31:57 -0700 Subject: [Python-Dev] PEP needed for http://bugs.python.org/issue9232 ? In-Reply-To: <20150811144700.40AE0B1400A@webabinitio.net> References: <20150811144700.40AE0B1400A@webabinitio.net> Message-ID: <-5192872034033324019@unknownmsgid> > there's been enough debate that I suspect we need a >> PEP. 
>> > I think we might just need another round of discussion here. Please no :-) Looking back at the previous discussion, it looked like it's all been said, and there was almost unanimous approval (with some key mild disapproval) for the idea, so what we need now is a pronouncement. If it's unclear whether consensus was close, then folks that are strongly against should speak up now. If there is a flurry of those, then a PEP is in order. But another big long unstructured discussion won't be useful. -Chris > > I'm +1 myself. Granted there haven't been many times I've wanted it > (functions with enough arguments to want to make it easy to add and > remove elements are a bit of a code smell), but I have wanted it (and > even used the form that is accepted) several times. On the other hand, > the number of times when the detection of a trailing comma has revealed > a missing argument to me (Raymond's objection) has been...well, I'm > pretty sure it is zero. Especially since it only happens *sometimes*. > Since backward compatibility says we shouldn't disallow it where it is > currently allowed, the only logical thing to do, IMO, is consistently > allow it. > > (If you wanted to fix an 'oops' trailing comma syntax issue, I'd vote for > disallowing trailing commas outside of (). The number of times I've > ended up with an unintentional tuple after converting a dictionary to a > series of assignments outnumbers both of the above :) Note, I am *not* > suggesting doing this!) > > --David > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/chris.barker%40noaa.gov From tritium-list at sdamon.com Tue Aug 11 17:35:40 2015 From: tritium-list at sdamon.com (Alexander Walters) Date: Tue, 11 Aug 2015 11:35:40 -0400 Subject: [Python-Dev] PEP needed for http://bugs.python.org/issue9232 ? In-Reply-To: <-5192872034033324019@unknownmsgid> References: <20150811144700.40AE0B1400A@webabinitio.net> <-5192872034033324019@unknownmsgid> Message-ID: <55CA164C.5040305@sdamon.com> As a user who has banged my head against this more than once, its not a feature, its a bug, it does not need a pep (Guido said as much), just fix it. On 8/11/2015 11:31, Chris Barker - NOAA Federal wrote: >> there's been enough debate that I suspect we need a >>> PEP. >>> >> I think we might just need another round of discussion here. > Please no :-) > > Looking back at the previous discussion, it looked like it's all been > said, and there was almost unanimous approval (with some key mild > disapproval) for the idea, so what we need now is a pronouncement. > > If it's unclear whether consensus was close, then folks that are > strongly against should speak up now. If there is a flurry of those, > then a PEP is in order. But another big long unstructured discussion > won't be useful. > > -Chris > > > > >> I'm +1 myself. Granted there haven't been many times I've wanted it >> (functions with enough arguments to want to make it easy to add and >> remove elements are a bit of a code smell), but I have wanted it (and >> even used the form that is accepted) several times. On the other hand, >> the number of times when the detection of a trailing comma has revealed >> a missing argument to me (Raymond's objection) has been...well, I'm >> pretty sure it is zero. Especially since it only happens *sometimes*. 
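(For readers skimming the thread, a short sketch of the inconsistency that issue 9232 targets. The grammar behaviour is stated from memory of the 3.5 rules, so treat it as approximate and re-check against the Grammar file.)

    def f(a, b,):             # trailing comma accepted after plain parameters
        return a + b

    f(1, 2,)                  # and accepted in a call

    # def g(*args,): pass     # SyntaxError before the issue 9232 change
    # def h(**kwargs,): pass  # SyntaxError before the issue 9232 change
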
>> Since backward compatibility says we shouldn't disallow it where it is >> currently allowed, the only logical thing to do, IMO, is consistently >> allow it. >> >> (If you wanted to fix an 'oops' trailing comma syntax issue, I'd vote for >> disallowing trailing commas outside of (). The number of times I've >> ended up with an unintentional tuple after converting a dictionary to a >> series of assignments outnumbers both of the above :) Note, I am *not* >> suggesting doing this!) >> >> --David >> _______________________________________________ >> Python-Dev mailing list >> Python-Dev at python.org >> https://mail.python.org/mailman/listinfo/python-dev >> Unsubscribe: https://mail.python.org/mailman/options/python-dev/chris.barker%40noaa.gov > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/tritium-list%40sdamon.com From tritium-list at sdamon.com Tue Aug 11 17:52:37 2015 From: tritium-list at sdamon.com (Alexander Walters) Date: Tue, 11 Aug 2015 11:52:37 -0400 Subject: [Python-Dev] PEP-498: Literal String Formatting In-Reply-To: References: <55C55DC3.8040605@trueblade.com> <55C79A73.1030901@trueblade.com> <20150810172631.GN3737@ando.pearwood.info> <20150810143127.66c5f842@anarchist.wooz.org> <55CA1030.3060808@sdamon.com> Message-ID: <55CA1A45.2010407@sdamon.com> On 8/11/2015 11:28, Wes Turner wrote: > > > On Aug 11, 2015 10:19 AM, "Wes Turner" > wrote: > > - [ ] review all string interpolation (for "injection") > * [ ] review every '%' > * [ ] review every ".format()" > * [ ] review every f-string (AND LOCALS AND GLOBALS) > * every os.system, os.exec*, subprocess.Popen > * every unclosed tag > * every unescaped control character > > This would create work we don't need. > > Solution: __str_shell_ escapes, adds slashes, and quotes. __str__SQL__ > refs a global list of reserved words. > I don't understand why % and .format got interjected into this. If you are mentioning them as 'get the unprocessed version of any string formatting', that is a bad idea, and not needed, since you already have an unprocessed string object. Assuming the method were named "hypothetical": >>> 'foo bar'.hypothetical() # returns 'foo bar' >>> '{0} bar'.format('foo').hypothetical() # returns 'foo bar' >>> ('%s bar' % ('foo',)).hypothetical() # returns 'foo bar' >>> f'{foo} bar'.hypothetical() # returns '{foo} bar', prime for translation. could gettext not be modified to create the same AST as f'{foo} bar' when it is translated to '{foo} le bar.' and inject it back into the runtime? -------------- next part -------------- An HTML attachment was scrubbed... URL: From ischwabacher at wisc.edu Tue Aug 11 18:10:20 2015 From: ischwabacher at wisc.edu (ISAAC J SCHWABACHER) Date: Tue, 11 Aug 2015 16:10:20 +0000 Subject: [Python-Dev] PEP-498: Literal String Formatting In-Reply-To: References: <55C55DC3.8040605@trueblade.com> Message-ID: Now with syntax highlighting, if my email client cooperates: code.putlines(f""" static char {entry.doc_cname}[] = "{ split_string_literal(escape_bytestring(docstr))}"; { # nested! f""" #if CYTHON_COMPILING_IN_CPYTHON struct wrapperbase {entry.wrapperbase_cname}; #endif """ if entry.is_special else ''} {(lambda temp, argn: # my kingdom for a let! 
f""" for ({temp}=0; {temp} on behalf of Stefan Behnel Sent: Sunday, August 9, 2015 04:53 To: python-dev at python.org Subject: Re: [Python-Dev] PEP-498: Literal String Formatting Stefan Behnel schrieb am 09.08.2015 um 10:06: > Eric V. Smith schrieb am 08.08.2015 um 03:39: >> Following a long discussion on python-ideas, I've posted my draft of >> PEP-498. It describes the "f-string" approach that was the subject of >> the "Briefer string format" thread. I'm open to a better title than >> "Literal String Formatting". >> >> I need to add some text to the discussion section, but I think it's in >> reasonable shape. I have a fully working implementation that I'll get >> around to posting somewhere this weekend. >> >> >>> def how_awesome(): return 'very' >> ... >> >>> f'f-strings are {how_awesome()} awesome!' >> 'f-strings are very awesome!' >> >> I'm open to any suggestions to improve the PEP. Thanks for your feedback. > > [copying my comment from python-ideas here] > > How common is this use case, really? Almost all of the string formatting > that I've used lately is either for logging (no help from this proposal > here) or requires some kind of translation/i18n *before* the formatting, > which is not helped by this proposal either. Thinking about this some more, the "almost all" is actually wrong. This only applies to one kind of application that I'm working on. In fact, "almost all" of the string formatting that I use is not in those applications but in Cython's code generator. And there's a *lot* of string formatting in there, even though we use real templating for bigger things already. However, looking through the code, I cannot see this proposal being of much help for that use case either. Many of the values that get formatted into the strings use some kind of non-trivial expression (function calls, object attributes, also local variables, sometimes variables with lengthy names) that is best written out in actual code. Here are some real example snippets: code.putln( 'static char %s[] = "%s";' % ( entry.doc_cname, split_string_literal(escape_byte_string(docstr)))) if entry.is_special: code.putln('#if CYTHON_COMPILING_IN_CPYTHON') code.putln( "struct wrapperbase %s;" % entry.wrapperbase_cname) code.putln('#endif') temp = ... code.putln("for (%s=0; %s < PyTuple_GET_SIZE(%s); %s++) {" % ( temp, temp, Naming.args_cname, temp)) code.putln("PyObject* item = PyTuple_GET_ITEM(%s, %s);" % ( Naming.args_cname, temp)) code.put("%s = (%s) ? PyDict_Copy(%s) : PyDict_New(); " % ( self.starstar_arg.entry.cname, Naming.kwds_cname, Naming.kwds_cname)) code.putln("if (unlikely(!%s)) return %s;" % ( self.starstar_arg.entry.cname, self.error_value())) We use %-formatting for historical reasons (that's all there was 15 years ago), but I wouldn't switch to .format() because there is nothing to win here. The "%s" etc. place holders are *very* short and do not get in the way (as "{}" would in C code templates). Named formatting would require a lot more space in the templates, so positional, unnamed formatting helps readability a lot. And the value expressions used for the interpolation tend to be expressions rather than simple variables, so keeping those outside of the formatting strings simplifies both editing and reading. That's the third major real-world use case for string formatting now where this proposal doesn't help. The niche is getting smaller. 
Stefan _______________________________________________ Python-Dev mailing list Python-Dev at python.org https://mail.python.org/mailman/listinfo/python-dev Unsubscribe: https://mail.python.org/mailman/options/python-dev/ischwabacher%40wisc.edu -------------- next part -------------- An HTML attachment was scrubbed... URL: From rdmurray at bitdance.com Tue Aug 11 18:54:17 2015 From: rdmurray at bitdance.com (R. David Murray) Date: Tue, 11 Aug 2015 12:54:17 -0400 Subject: [Python-Dev] PEP needed for http://bugs.python.org/issue9232 ? In-Reply-To: <-5192872034033324019@unknownmsgid> References: <20150811144700.40AE0B1400A@webabinitio.net> <-5192872034033324019@unknownmsgid> Message-ID: <20150811165417.7822B250FF7@webabinitio.net> On Tue, 11 Aug 2015 08:31:57 -0700, Chris Barker - NOAA Federal wrote: > Looking back at the previous discussion, it looked like it's all been > said, and there was almost unanimous approval (with some key mild > disapproval) for the idea, so what we need now is a pronouncement. And we got it, so done :) --David From rdmurray at bitdance.com Tue Aug 11 19:01:11 2015 From: rdmurray at bitdance.com (R. David Murray) Date: Tue, 11 Aug 2015 13:01:11 -0400 Subject: [Python-Dev] trailing commas on statements In-Reply-To: References: <20150811144700.40AE0B1400A@webabinitio.net> Message-ID: <20150811170112.62E8D250FEF@webabinitio.net> On Wed, 12 Aug 2015 01:03:38 +1000, Chris Angelico wrote: > On Wed, Aug 12, 2015 at 12:46 AM, R. David Murray wrote: > > (If you wanted to fix an 'oops' trailing comma syntax issue, I'd vote for > > disallowing trailing commas outside of (). The number of times I've > > ended up with an unintentional tuple after converting a dictionary to a > > series of assignments outnumbers both of the above :) Note, I am *not* > > suggesting doing this!) > > Outside of any form of bracket, I hope you mean. The ability to leave > a trailing comma on a list or dict is well worth keeping: > > func = { > "+": operator.add, > "-": operator.sub, > "*": operator.mul, > "/": operator.truediv, > } Sorry, "trailing comma outside ()" was a shorthand for 'trailing comma on a complete statement'. That is, what trips me up is going from something like: dict(abc=1, foo=2, bar=3, ) to: abc = 1, foo = 2, bar = 3, That is, I got rid of the dict(), but forgot to delete the commas. (Real world examples are more complex and it is often that the transformation gets done piecemeal and/or via cut and paste and I only miss one or two of the commas... But, for backward compatibility reasons, we wouldn't change it even if everyone thought it was a good idea for some reason :) --David From srkunze at mail.de Tue Aug 11 19:25:15 2015 From: srkunze at mail.de (Sven R. Kunze) Date: Tue, 11 Aug 2015 19:25:15 +0200 Subject: [Python-Dev] PEP-498: Literal String Formatting In-Reply-To: <55CA11B5.3080503@trueblade.com> References: <55C55DC3.8040605@trueblade.com> <55C79A73.1030901@trueblade.com> <20150810172631.GN3737@ando.pearwood.info> <20150810143127.66c5f842@anarchist.wooz.org> <55CA1030.3060808@sdamon.com> <55CA11B5.3080503@trueblade.com> Message-ID: <55CA2FFB.3080001@mail.de> Couldn't you just store the original format string at some __format_str__ attribute at the formatted string? Just in case you need it. x = f'{a}' => x = '{}'.format(a) # or whatever it turns out to be x.__format_str__ = '{a}' On 11.08.2015 17:16, Eric V. 
Smith wrote: > On 08/11/2015 11:09 AM, Alexander Walters wrote: >> This may seam like a simplistic solution to i18n, but why not just add a >> method to string objects (assuming we implement f-strings) that just >> returns the original, unprocessed string. If the string was not an >> f-string, it just returns self. The gettext module can be modified, I >> think trivially, to use the method instead of the string directly. > You need the original string, in order to figure out what it translates > to. You need the values to replace into that string, evaluated at > runtime, in the context of where the string appears. And you need to > know where in the original (or translated) string to put them. > > The problem is that there's no way to evaluate the values and, before > they're substituted in to the string, use a different template string > with obvious substitution points. This is what PEP 501 is trying to do. > > Eric. > > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/srkunze%40mail.de From eric at trueblade.com Tue Aug 11 19:33:40 2015 From: eric at trueblade.com (Eric V. Smith) Date: Tue, 11 Aug 2015 13:33:40 -0400 Subject: [Python-Dev] PEP-498: Literal String Formatting In-Reply-To: <55CA2FFB.3080001@mail.de> References: <55C55DC3.8040605@trueblade.com> <55C79A73.1030901@trueblade.com> <20150810172631.GN3737@ando.pearwood.info> <20150810143127.66c5f842@anarchist.wooz.org> <55CA1030.3060808@sdamon.com> <55CA11B5.3080503@trueblade.com> <55CA2FFB.3080001@mail.de> Message-ID: <55CA31F4.8000108@trueblade.com> On 08/11/2015 01:25 PM, Sven R. Kunze wrote: > Couldn't you just store the original format string at some > __format_str__ attribute at the formatted string? Just in case you need it. > > x = f'{a}' > > => > > x = '{}'.format(a) # or whatever it turns out to be > x.__format_str__ = '{a}' Yes. But I think the i18n problem, as evidenced by the differences in PEPs 498 and 501, relate to the expression evaluation, not to keeping the original string. But if people think that this helps the i18n problem, I suggest proposing concrete changes to PEP 501. Eric. > > > On 11.08.2015 17:16, Eric V. Smith wrote: >> On 08/11/2015 11:09 AM, Alexander Walters wrote: >>> This may seam like a simplistic solution to i18n, but why not just add a >>> method to string objects (assuming we implement f-strings) that just >>> returns the original, unprocessed string. If the string was not an >>> f-string, it just returns self. The gettext module can be modified, I >>> think trivially, to use the method instead of the string directly. >> You need the original string, in order to figure out what it translates >> to. You need the values to replace into that string, evaluated at >> runtime, in the context of where the string appears. And you need to >> know where in the original (or translated) string to put them. >> >> The problem is that there's no way to evaluate the values and, before >> they're substituted in to the string, use a different template string >> with obvious substitution points. This is what PEP 501 is trying to do. >> >> Eric. 
>> >> >> _______________________________________________ >> Python-Dev mailing list >> Python-Dev at python.org >> https://mail.python.org/mailman/listinfo/python-dev >> Unsubscribe: >> https://mail.python.org/mailman/options/python-dev/srkunze%40mail.de > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/eric%2Ba-python-dev%40trueblade.com > > From wes.turner at gmail.com Tue Aug 11 19:43:58 2015 From: wes.turner at gmail.com (Wes Turner) Date: Tue, 11 Aug 2015 12:43:58 -0500 Subject: [Python-Dev] PEP-498: Literal String Formatting In-Reply-To: <55CA1A45.2010407@sdamon.com> References: <55C55DC3.8040605@trueblade.com> <55C79A73.1030901@trueblade.com> <20150810172631.GN3737@ando.pearwood.info> <20150810143127.66c5f842@anarchist.wooz.org> <55CA1030.3060808@sdamon.com> <55CA1A45.2010407@sdamon.com> Message-ID: On Tue, Aug 11, 2015 at 10:52 AM, Alexander Walters wrote: > On 8/11/2015 11:28, Wes Turner wrote: > > > On Aug 11, 2015 10:19 AM, "Wes Turner" wrote: > > - [ ] review all string interpolation (for "injection") > * [ ] review every '%' > * [ ] review every ".format()" > * [ ] review every f-string (AND LOCALS AND GLOBALS) > * every os.system, os.exec*, subprocess.Popen > * every unclosed tag > * every unescaped control character > > This would create work we don't need. > > Solution: __str_shell_ escapes, adds slashes, and quotes. __str__SQL__ > refs a global list of reserved words. > > I don't understand why % and .format got interjected into this. > > If you are mentioning them as 'get the unprocessed version of any string > formatting', that is a bad idea, and not needed, since you already have an > unprocessed string object. Assuming the method were named "hypothetical": > > >>> 'foo bar'.hypothetical() # returns 'foo bar' > >>> '{0} bar'.format('foo').hypothetical() # returns 'foo bar' > >>> ('%s bar' % ('foo',)).hypothetical() # returns 'foo bar' > >>> f'{foo} bar'.hypothetical() # returns '{foo} bar', prime for > translation. > > could gettext not be modified to create the same AST as f'{foo} bar' when > it is translated to '{foo} le bar.' and inject it back into the runtime? > well, we're talking about a functional [series of] transformations on __str__ (or __unicode__); with globals and locals, and more-or-less a macro for eliding this (**frequently wrong** because when is a string not part of an output format with control characters that need to be escaped before they're interpolated in). % and str.format, (and gettext), are the current ways to do this, and they are also **frequently wrong** because HTML, SQL. The risk with this additional syntax is that unescaped globals and locals are transcluded (and/or translated); with an explicit (combination of) string prefixes to indicate forwards-compatible functional composition (of usually mutable types). -------------- next part -------------- An HTML attachment was scrubbed... URL: From python-ideas at mgmiller.net Tue Aug 11 19:05:55 2015 From: python-ideas at mgmiller.net (Mike Miller) Date: Tue, 11 Aug 2015 10:05:55 -0700 Subject: [Python-Dev] PEP-498: Literal String Formatting In-Reply-To: <55C9FCDA.2010206@trueblade.com> References: <55C55DC3.8040605@trueblade.com> <55C905A5.9050005@mgmiller.net> <55C9FCDA.2010206@trueblade.com> Message-ID: <55CA2B73.7070407@mgmiller.net> On 08/11/2015 06:47 AM, Eric V. Smith wrote: >> 2. 
Let's call them "format strings" not "f-strings". >> The latter sounds slightly obnoxious, and also inconsistent with the >> others: >> >> r'' raw string >> u'' unicode object (string) >> f'' format string > > People seem to have already started using f-strings. I think it's > inevitable. Sure, there's no way to ban it, that would be silly. But, I think the documentation should not use it. We don't normally say "r-strings" or "u-strings" when talking about them, it's not very accurate. The letter they use isn't their important quality. Also, avoiding the f- takes the spotlight off the part where f stands for words besides format. ;) -Mike From greg.ewing at canterbury.ac.nz Wed Aug 12 01:34:58 2015 From: greg.ewing at canterbury.ac.nz (Greg Ewing) Date: Wed, 12 Aug 2015 11:34:58 +1200 Subject: [Python-Dev] PEP-498: Literal String Formatting In-Reply-To: References: <55C55DC3.8040605@trueblade.com> Message-ID: <55CA86A2.7040203@canterbury.ac.nz> Stefan Behnel wrote: > Syntax highlighting and in-string expression completion should eventually > help, once IDEs support it. Concerning that, this is going to place quite a burden on syntax highlighters. Doing it properly will require the ability to parse arbitrary Python expressions, or at least match nested brackets. An editor whose syntax hightlighting engine is based on regular expressions could have trouble with that. -- Greg From python at mrabarnett.plus.com Wed Aug 12 02:20:57 2015 From: python at mrabarnett.plus.com (MRAB) Date: Wed, 12 Aug 2015 01:20:57 +0100 Subject: [Python-Dev] Can't import tkinter in Python 3.5.0rc1 Message-ID: <55CA9169.2080209@mrabarnett.plus.com> As the subject says, I'm unable to import tkinter in Python 3.5.0rc1. The console says: C:\Python35>python Python 3.5.0rc1 (v3.5.0rc1:1a58b1227501, Aug 10 2015, 05:18:45) [MSC v.1900 64 bit (AMD64)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> import tkinter Traceback (most recent call last): File "", line 1, in File "C:\Python35\lib\tkinter\__init__.py", line 35, in import _tkinter # If this fails your Python may not be configured for Tk ImportError: DLL load failed: The specified module could not be found. Is this a known problem? I'm on Windows 10 Home (64-bit). From doko at ubuntu.com Wed Aug 12 02:29:47 2015 From: doko at ubuntu.com (Matthias Klose) Date: Wed, 12 Aug 2015 02:29:47 +0200 Subject: [Python-Dev] Sorry folks, minor hiccup for Python 3.5.0rc1 In-Reply-To: <55C9483A.8010300@hastings.org> References: <55C9480C.20409@hastings.org> <55C9483A.8010300@hastings.org> Message-ID: <55CA937B.9050601@ubuntu.com> On 08/11/2015 02:56 AM, Larry Hastings wrote: > On 08/10/2015 05:55 PM, Larry Hastings wrote: >> I yanked the tarballs off the release page as soon as I suspected something. >> I'm rebuilding the tarballs and the docs now. If you grabbed the tarball as >> soon as it appeared, it's slightly out of date, please re-grab. > > p.s. I should have mentioned--the Mac and Windows builds should be fine. They, > unlike me, updated their tree ;-) didn't see any follow-up message. are the source tarballs now fixed? From rosuav at gmail.com Wed Aug 12 02:54:01 2015 From: rosuav at gmail.com (Chris Angelico) Date: Wed, 12 Aug 2015 10:54:01 +1000 Subject: [Python-Dev] trailing commas on statements In-Reply-To: <20150811170112.62E8D250FEF@webabinitio.net> References: <20150811144700.40AE0B1400A@webabinitio.net> <20150811170112.62E8D250FEF@webabinitio.net> Message-ID: On Wed, Aug 12, 2015 at 3:01 AM, R. 
David Murray wrote: > Sorry, "trailing comma outside ()" was a shorthand for 'trailing comma > on a complete statement'. That is, what trips me up is going from > something like: > > dict(abc=1, > foo=2, > bar=3, > ) > > to: > > abc = 1, > foo = 2, > bar = 3, > > That is, I got rid of the dict(), but forgot to delete the commas. > (Real world examples are more complex and it is often that the > transformation gets done piecemeal and/or via cut and paste and I only > miss one or two of the commas... > > But, for backward compatibility reasons, we wouldn't change it even if > everyone thought it was a good idea for some reason :) Sure. In that case, I agree with you completely. When I *do* want a tuple, I'll usually be putting it inside parens, rather than just tagging a comma on. But this can be the job of a linter. ChrisA From python at mrabarnett.plus.com Wed Aug 12 03:16:16 2015 From: python at mrabarnett.plus.com (MRAB) Date: Wed, 12 Aug 2015 02:16:16 +0100 Subject: [Python-Dev] Can't import tkinter in Python 3.5.0rc1 In-Reply-To: References: <55CA9169.2080209@mrabarnett.plus.com> Message-ID: <55CA9E5F.7000704@mrabarnett.plus.com> On 2015-08-12 02:05, Steve Dower wrote: > We saw and fixed it before RC 1. I'll check whether that fix didn't > stick, but go ahead, open an issue and assign me. It's issue 24847. > ------------------------------------------------------------------------ > From: MRAB > Sent: ?8/?11/?2015 17:25 > To: Python-Dev > Subject: [Python-Dev] Can't import tkinter in Python 3.5.0rc1 > > As the subject says, I'm unable to import tkinter in Python 3.5.0rc1. > > The console says: > > C:\Python35>python > Python 3.5.0rc1 (v3.5.0rc1:1a58b1227501, Aug 10 2015, 05:18:45) [MSC > v.1900 64 bit (AMD64)] on win32 > Type "help", "copyright", "credits" or "license" for more information. > >>> import tkinter > Traceback (most recent call last): > File "", line 1, in > File "C:\Python35\lib\tkinter\__init__.py", line 35, in > import _tkinter # If this fails your Python may not be configured > for Tk > ImportError: DLL load failed: The specified module could not be found. > > > Is this a known problem? > > I'm on Windows 10 Home (64-bit). From Steve.Dower at microsoft.com Wed Aug 12 03:05:12 2015 From: Steve.Dower at microsoft.com (Steve Dower) Date: Wed, 12 Aug 2015 01:05:12 +0000 Subject: [Python-Dev] Can't import tkinter in Python 3.5.0rc1 In-Reply-To: <55CA9169.2080209@mrabarnett.plus.com> References: <55CA9169.2080209@mrabarnett.plus.com> Message-ID: We saw and fixed it before RC 1. I'll check whether that fix didn't stick, but go ahead, open an issue and assign me. Cheers, Steve Top-posted from my Windows Phone ________________________________ From: MRAB Sent: ?8/?11/?2015 17:25 To: Python-Dev Subject: [Python-Dev] Can't import tkinter in Python 3.5.0rc1 As the subject says, I'm unable to import tkinter in Python 3.5.0rc1. The console says: C:\Python35>python Python 3.5.0rc1 (v3.5.0rc1:1a58b1227501, Aug 10 2015, 05:18:45) [MSC v.1900 64 bit (AMD64)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> import tkinter Traceback (most recent call last): File "", line 1, in File "C:\Python35\lib\tkinter\__init__.py", line 35, in import _tkinter # If this fails your Python may not be configured for Tk ImportError: DLL load failed: The specified module could not be found. Is this a known problem? I'm on Windows 10 Home (64-bit). 
_______________________________________________ Python-Dev mailing list Python-Dev at python.org https://na01.safelinks.protection.outlook.com/?url=https%3a%2f%2fmail.python.org%2fmailman%2flistinfo%2fpython-dev&data=01%7c01%7csteve.dower%40microsoft.com%7c6db05dc594c0495f007508d2a2ac89c8%7c72f988bf86f141af91ab2d7cd011db47%7c1&sdata=ylL27LiF2rTZ4WNvxI794J1I6KSmTMtmbJUw6MdRJ8o%3d Unsubscribe: https://na01.safelinks.protection.outlook.com/?url=https%3a%2f%2fmail.python.org%2fmailman%2foptions%2fpython-dev%2fsteve.dower%2540microsoft.com&data=01%7c01%7csteve.dower%40microsoft.com%7c6db05dc594c0495f007508d2a2ac89c8%7c72f988bf86f141af91ab2d7cd011db47%7c1&sdata=PDN11uxJyULACi3dsE3eWU4f3e01mdNKNE3ERUw01Jc%3d -------------- next part -------------- An HTML attachment was scrubbed... URL: From larry at hastings.org Wed Aug 12 08:01:54 2015 From: larry at hastings.org (Larry Hastings) Date: Tue, 11 Aug 2015 23:01:54 -0700 Subject: [Python-Dev] Sorry folks, minor hiccup for Python 3.5.0rc1 In-Reply-To: <55CA937B.9050601@ubuntu.com> References: <55C9480C.20409@hastings.org> <55C9483A.8010300@hastings.org> <55CA937B.9050601@ubuntu.com> Message-ID: <55CAE152.7070108@hastings.org> On 08/11/2015 05:29 PM, Matthias Klose wrote: > On 08/11/2015 02:56 AM, Larry Hastings wrote: >> On 08/10/2015 05:55 PM, Larry Hastings wrote: >>> I yanked the tarballs off the release page as soon as I suspected something. >>> I'm rebuilding the tarballs and the docs now. If you grabbed the tarball as >>> soon as it appeared, it's slightly out of date, please re-grab. >> p.s. I should have mentioned--the Mac and Windows builds should be fine. They, >> unlike me, updated their tree ;-) > didn't see any follow-up message. are the source tarballs now fixed? > Yes. I deleted the tarballs as soon as I detected a problem; I only re-uploaded them once everything is correct. They were fixed within an hour if I remember correctly. The correct .tgz has md5 sum 7ef9c440b863dc19a4c9fed55c3e9093, and the correct .tar.xz has md5 sum 984540f202abc9305435df321c91720c. //arry/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From agriff at tin.it Wed Aug 12 21:05:50 2015 From: agriff at tin.it (Andrea Griffini) Date: Wed, 12 Aug 2015 21:05:50 +0200 Subject: [Python-Dev] About closures creates in exec Message-ID: Is it intended that closures created in exec statement/function cannot see locals if the exec was provided a locals dictionary? This code gives an error ("foo" is not found during lambda execution): exec("def foo(x): return x\n\n(lambda x:foo(x))(0)", globals(), {}) while executes normally with exec("def foo(x): return x\n\n(lambda x:foo(x))(0)") Is this the expected behavior? If so where is it documented? Andrea -------------- next part -------------- An HTML attachment was scrubbed... URL: From ischwabacher at wisc.edu Wed Aug 12 21:46:55 2015 From: ischwabacher at wisc.edu (ISAAC J SCHWABACHER) Date: Wed, 12 Aug 2015 19:46:55 +0000 Subject: [Python-Dev] PEP-498: Literal String Formatting In-Reply-To: <55CA86A2.7040203@canterbury.ac.nz> References: <55C55DC3.8040605@trueblade.com> <55CA86A2.7040203@canterbury.ac.nz> Message-ID: Ruby already has this feature, and in my experience syntax highlighters handle it just fine. Here's what vim's default highlighter shows me: puts "we can #{ ["include", "interpolate"].each { |s| puts s } .select { |s| s.include? "erp" } # .first } arbitrary expressions!" 
So an editor whose syntax highlighting is based on regular expressions already can't cope with the world as it is. :) Does anyone reading this know of a tool that successfully highlights python but not ruby? ijs ________________________________________ From: Python-Dev on behalf of Greg Ewing Sent: Tuesday, August 11, 2015 18:34 To: python-dev at python.org Subject: Re: [Python-Dev] PEP-498: Literal String Formatting Stefan Behnel wrote: > Syntax highlighting and in-string expression completion should eventually > help, once IDEs support it. Concerning that, this is going to place quite a burden on syntax highlighters. Doing it properly will require the ability to parse arbitrary Python expressions, or at least match nested brackets. An editor whose syntax hightlighting engine is based on regular expressions could have trouble with that. -- Greg _______________________________________________ Python-Dev mailing list Python-Dev at python.org https://mail.python.org/mailman/listinfo/python-dev Unsubscribe: https://mail.python.org/mailman/options/python-dev/ischwabacher%40wisc.edu -------------- next part -------------- An HTML attachment was scrubbed... URL: From rdmurray at bitdance.com Wed Aug 12 21:48:25 2015 From: rdmurray at bitdance.com (R. David Murray) Date: Wed, 12 Aug 2015 15:48:25 -0400 Subject: [Python-Dev] About closures creates in exec In-Reply-To: References: Message-ID: <20150812194826.2683A250EE6@webabinitio.net> On Wed, 12 Aug 2015 21:05:50 +0200, Andrea Griffini wrote: > Is it intended that closures created in exec statement/function cannot see > locals if the exec was provided a locals dictionary? > > This code gives an error ("foo" is not found during lambda execution): > > exec("def foo(x): return x\n\n(lambda x:foo(x))(0)", globals(), {}) > > while executes normally with > > exec("def foo(x): return x\n\n(lambda x:foo(x))(0)") > > Is this the expected behavior? If so where is it documented? Yes. In the 'exec' docs, indirectly. They say: Remember that at module level, globals and locals are the same dictionary. If exec gets two separate objects as globals and locals, the code will be executed as if it were embedded in a class definition. Try the above in a class def and you'll see you get the same behavior. See also issue 24800. I'm wondering if the exec docs need to talk about this a little bit more, or maybe we need a faq entry and a link to it? --David From Steve.Dower at microsoft.com Wed Aug 12 18:03:39 2015 From: Steve.Dower at microsoft.com (Steve Dower) Date: Wed, 12 Aug 2015 16:03:39 +0000 Subject: [Python-Dev] [low-pri] Changing my email address Message-ID: Hi all Just a heads-up that I'll be switching to an alternate email address for all of my Python communications, due to what I'm sure are very sensible corporate security policies that nonetheless corrupt code snippets and URLs in my incoming email. I will henceforth be known as steve.dower at python.org (which is forwarding to python at stevedower.id.au, for the security conscious who will no doubt spot that address in headers). 
I can obviously still be reached at my old address, just don't send me URLs or text with full stops in it please :) Cheers, Steve From ethan at stoneleaf.us Thu Aug 13 01:11:04 2015 From: ethan at stoneleaf.us (Ethan Furman) Date: Wed, 12 Aug 2015 16:11:04 -0700 Subject: [Python-Dev] PEP-498: Literal String Formatting In-Reply-To: References: <55C55DC3.8040605@trueblade.com> Message-ID: <55CBD288.6000401@stoneleaf.us> On 08/10/2015 04:05 PM, ISAAC J SCHWABACHER wrote: > I don't know about you, but I sure like this better than what you have: > > code.putlines(f""" > static char {entry.doc_cname}[] = "{ > split_string_literal(escape_bytestring(docstr))}"; > > { # nested! > f""" > #if CYTHON_COMPILING_IN_CPYTHON > struct wrapperbase {entry.wrapperbase_cname}; > #endif > """ if entry.is_special else ''} > > {(lambda temp, argn: # my kingdom for a let! > f""" > for ({temp}=0; {temp} PyObject *item = PyTuple_GET_ITEM({argn}, {temp}); > }}""")(..., Naming.args_cname)} > > {self.starstar_arg.entry.cname} = > ({Naming.kwds_cname}) ? PyDict_Copy({Naming.kwds_cname}) > : PyDict_New(); > > if (unlikely(!{self.starstar_arg.entry.cname})) return {self.error_value()}; > """) > > What do others think of this PEP-498 sample? (The PEP-501 version looks pretty similar, so I omit it.) Agh! My brain is hurting! ;) No, I don't care for it at all. -- ~Ethan~ From ischwabacher at wisc.edu Thu Aug 13 23:57:08 2015 From: ischwabacher at wisc.edu (ISAAC J SCHWABACHER) Date: Thu, 13 Aug 2015 21:57:08 +0000 Subject: [Python-Dev] PEP-498: Literal String Formatting In-Reply-To: <55CBD288.6000401@stoneleaf.us> References: <55C55DC3.8040605@trueblade.com> <55CBD288.6000401@stoneleaf.us> Message-ID: Well, I seem to have succeeded in crystallizing opinions on the topic, even if the consensus is, "Augh! Make it stop!" :) The primary objective of that code sample was to make the structure of the code as close as possible to the structure of the interpolated string, since having descriptive text like "{entry.doc_cname}" inline instead of "%s" is precisely what str.format gains over str.__mod__. But there are several different elements in that code, and I'm curious what people find most off-putting. Is it the triple quoted format strings? The nesting? The interpolation with `"""...""" if cond else ''`? Just plain interpolations as are already available with str.format, but without explicitly importing names into the format string's scope via **kwargs? Trying to emulate let? Would a different indentation scheme make things better, or is this a problem with the coding style I've advanced here, or with the feature itself? Also, should this be allowed: def make_frob(foo): def frob(bar): f"""Frob the bar using {foo}""" ? ijs P.S.: I've translated the original snippet into ruby here: https://gist.github.com/ischwabacher/405afb86e28282946cc5, since it's already legal syntax there. Ironically, github's syntax highlighting either fails to parse the interpolation (in edit mode) or fails to treat the heredoc as a string literal (in display mode), but you can open it in your favorite editor to see whether the highlighting makes the code clearer. 
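For comparison, the nested-conditional element can also be hoisted out of the format string and bound to a name first; a minimal sketch, with is_special and wrapperbase_cname as placeholders for the entry attributes used in the sample:

    # Sketch only: placeholders for entry.is_special / entry.wrapperbase_cname.
    is_special = True
    wrapperbase_cname = "__pyx_wrapperbase_spam"

    wrapperbase_decl = (
        "#if CYTHON_COMPILING_IN_CPYTHON\n"
        f"struct wrapperbase {wrapperbase_cname};\n"
        "#endif\n"
    ) if is_special else ""

    print(wrapperbase_decl)
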
________________________________________ From: Python-Dev on behalf of Ethan Furman Sent: Wednesday, August 12, 2015 18:11 To: python-dev at python.org Subject: Re: [Python-Dev] PEP-498: Literal String Formatting On 08/10/2015 04:05 PM, ISAAC J SCHWABACHER wrote: > I don't know about you, but I sure like this better than what you have: > > code.putlines(f""" > static char {entry.doc_cname}[] = "{ > split_string_literal(escape_bytestring(docstr))}"; > > { # nested! > f""" > #if CYTHON_COMPILING_IN_CPYTHON > struct wrapperbase {entry.wrapperbase_cname}; > #endif > """ if entry.is_special else ''} > > {(lambda temp, argn: # my kingdom for a let! > f""" > for ({temp}=0; {temp} PyObject *item = PyTuple_GET_ITEM({argn}, {temp}); > }}""")(..., Naming.args_cname)} > > {self.starstar_arg.entry.cname} = > ({Naming.kwds_cname}) ? PyDict_Copy({Naming.kwds_cname}) > : PyDict_New(); > > if (unlikely(!{self.starstar_arg.entry.cname})) return {self.error_value()}; > """) > > What do others think of this PEP-498 sample? (The PEP-501 version looks pretty similar, so I omit it.) Agh! My brain is hurting! ;) No, I don't care for it at all. -- ~Ethan~ _______________________________________________ Python-Dev mailing list Python-Dev at python.org https://mail.python.org/mailman/listinfo/python-dev Unsubscribe: https://mail.python.org/mailman/options/python-dev/ischwabacher%40wisc.edu From status at bugs.python.org Fri Aug 14 18:08:28 2015 From: status at bugs.python.org (Python tracker) Date: Fri, 14 Aug 2015 18:08:28 +0200 (CEST) Subject: [Python-Dev] Summary of Python tracker Issues Message-ID: <20150814160828.59951568DD@psf.upfronthosting.co.za> ACTIVITY SUMMARY (2015-08-07 - 2015-08-14) Python tracker at http://bugs.python.org/ To view or respond to any of the issues listed below, click on the issue. Do NOT respond to this message. Issues counts and deltas: open 5002 (+10) closed 31627 (+32) total 36629 (+42) Open issues with patches: 2244 Issues opened (34) ================== #23530: os and multiprocessing.cpu_count do not respect cpuset/affinit http://bugs.python.org/issue23530 reopened by serhiy.storchaka #24745: Better default font for editor http://bugs.python.org/issue24745 reopened by ned.deily #24828: Segfault when using store-context AST node in a load context http://bugs.python.org/issue24828 opened by xmorel #24829: Use interactive input even if stdout is redirected http://bugs.python.org/issue24829 opened by Drekin #24831: Load average in test suite too high http://bugs.python.org/issue24831 opened by skrah #24832: Issue building viewable docs with newer sphinx (default theme http://bugs.python.org/issue24832 opened by r.david.murray #24833: IDLE tabnanny check fails http://bugs.python.org/issue24833 opened by ???????????? ?????????? 
#24834: pydoc should display the expression for a builtin argument def http://bugs.python.org/issue24834 opened by larry #24836: Consistent failure in test_email on OS X Snow Leopard buildbot http://bugs.python.org/issue24836 opened by larry #24837: await process.wait() does not work with a new_event_loop http://bugs.python.org/issue24837 opened by chetan #24838: tarfile.py: fix GNU and USTAR formats to properly handle paths http://bugs.python.org/issue24838 opened by Roddy Shuler #24840: implement bool conversion for enums to prevent odd edge case http://bugs.python.org/issue24840 opened by novas0x2a #24841: Some test_ssl network tests fail if svn.python.org is not acce http://bugs.python.org/issue24841 opened by vlee #24842: Mention SimpleNamespace in namedtuple docs http://bugs.python.org/issue24842 opened by cvrebert #24844: Python 3.5rc1 compilation error with Apple clang 4.2 included http://bugs.python.org/issue24844 opened by dabeaz #24845: IDLE functional/integration testing http://bugs.python.org/issue24845 opened by markroseman #24846: Add tests for ``from ... import ...` code http://bugs.python.org/issue24846 opened by brett.cannon #24847: Can't import tkinter in Python 3.5.0rc1 http://bugs.python.org/issue24847 opened by mrabarnett #24848: Warts in UTF-7 error handling http://bugs.python.org/issue24848 opened by serhiy.storchaka #24849: Add __len__ to map, everything in itertools http://bugs.python.org/issue24849 opened by flying sheep #24850: syslog.syslog() does not return error when unable to send the http://bugs.python.org/issue24850 opened by Cyril Bouthors #24851: infinite loop in faulthandler._stack_overflow http://bugs.python.org/issue24851 opened by Paul Murphy #24852: Python 3.5.0rc1 "HOWTO Use Python in the web" needs fix http://bugs.python.org/issue24852 opened by John Hagen #24853: Py_Finalize doesn't clean up PyImport_Inittab http://bugs.python.org/issue24853 opened by Alex Budovski #24857: mock: Crash on comparing call_args with long strings http://bugs.python.org/issue24857 opened by Wilfred.Hughes #24858: python3 -m test -ugui -v test_tk gives 3 failures under Debian http://bugs.python.org/issue24858 opened by lac #24859: ctypes.Structure bit order is reversed - counts from right http://bugs.python.org/issue24859 opened by zeero #24860: handling of IDLE 'open module' errors http://bugs.python.org/issue24860 opened by markroseman #24861: deprecate importing components of IDLE http://bugs.python.org/issue24861 opened by markroseman #24862: subprocess.Popen behaves incorrect when moved in process tree http://bugs.python.org/issue24862 opened by Andre Merzky #24864: errors writing to stdout during interpreter exit exit with sta http://bugs.python.org/issue24864 opened by rbcollins #24866: Boolean representation of Q/queue objects does not fit behavio http://bugs.python.org/issue24866 opened by Frunit #24867: Asyncio Task.get_stack fails with native coroutines http://bugs.python.org/issue24867 opened by habilain #24868: Python start http://bugs.python.org/issue24868 opened by jack Most recent 15 issues with no replies (15) ========================================== #24868: Python start http://bugs.python.org/issue24868 #24853: Py_Finalize doesn't clean up PyImport_Inittab http://bugs.python.org/issue24853 #24848: Warts in UTF-7 error handling http://bugs.python.org/issue24848 #24846: Add tests for ``from ... 
import ...` code http://bugs.python.org/issue24846 #24841: Some test_ssl network tests fail if svn.python.org is not acce http://bugs.python.org/issue24841 #24834: pydoc should display the expression for a builtin argument def http://bugs.python.org/issue24834 #24833: IDLE tabnanny check fails http://bugs.python.org/issue24833 #24829: Use interactive input even if stdout is redirected http://bugs.python.org/issue24829 #24828: Segfault when using store-context AST node in a load context http://bugs.python.org/issue24828 #24821: The optimization of string search can cause pessimization http://bugs.python.org/issue24821 #24817: disable format menu items when not applicable http://bugs.python.org/issue24817 #24815: IDLE can lose menubar on OS X http://bugs.python.org/issue24815 #24808: PyTypeObject fields have incorrectly documented types http://bugs.python.org/issue24808 #24807: compileall can cause Python installation to fail http://bugs.python.org/issue24807 #24792: zipimporter masks import errors http://bugs.python.org/issue24792 Most recent 15 issues waiting for review (15) ============================================= #24867: Asyncio Task.get_stack fails with native coroutines http://bugs.python.org/issue24867 #24861: deprecate importing components of IDLE http://bugs.python.org/issue24861 #24851: infinite loop in faulthandler._stack_overflow http://bugs.python.org/issue24851 #24847: Can't import tkinter in Python 3.5.0rc1 http://bugs.python.org/issue24847 #24845: IDLE functional/integration testing http://bugs.python.org/issue24845 #24840: implement bool conversion for enums to prevent odd edge case http://bugs.python.org/issue24840 #24838: tarfile.py: fix GNU and USTAR formats to properly handle paths http://bugs.python.org/issue24838 #24809: Add getprotobynumber to socket module http://bugs.python.org/issue24809 #24808: PyTypeObject fields have incorrectly documented types http://bugs.python.org/issue24808 #24803: PyNumber_Long Buffer Over-read.patch http://bugs.python.org/issue24803 #24802: PyFloat_FromString Buffer Over-read http://bugs.python.org/issue24802 #24801: right-mouse click in IDLE on Mac doesn't work http://bugs.python.org/issue24801 #24784: Build fails --without-threads http://bugs.python.org/issue24784 #24782: Merge 'configure extensions' into main IDLE config dialog http://bugs.python.org/issue24782 #24774: inconsistency in http.server.test http://bugs.python.org/issue24774 Top 10 most discussed issues (10) ================================= #24492: using custom objects as modules: AttributeErrors new in 3.5 http://bugs.python.org/issue24492 11 msgs #24831: Load average in test suite too high http://bugs.python.org/issue24831 11 msgs #24847: Can't import tkinter in Python 3.5.0rc1 http://bugs.python.org/issue24847 11 msgs #24832: Issue building viewable docs with newer sphinx (default theme http://bugs.python.org/issue24832 8 msgs #24849: Add __len__ to map, everything in itertools http://bugs.python.org/issue24849 8 msgs #24858: python3 -m test -ugui -v test_tk gives 3 failures under Debian http://bugs.python.org/issue24858 8 msgs #24840: implement bool conversion for enums to prevent odd edge case http://bugs.python.org/issue24840 7 msgs #24801: right-mouse click in IDLE on Mac doesn't work http://bugs.python.org/issue24801 6 msgs #24857: mock: Crash on comparing call_args with long strings http://bugs.python.org/issue24857 6 msgs #24864: errors writing to stdout during interpreter exit exit with sta http://bugs.python.org/issue24864 6 msgs Issues closed (31) 
================== #4214: no extension debug info with msvc9compiler.py http://bugs.python.org/issue4214 closed by steve.dower #12854: PyOS_Readline usage in tokenizer ignores sys.stdin/sys.stdout http://bugs.python.org/issue12854 closed by eryksun #15944: memoryviews and ctypes http://bugs.python.org/issue15944 closed by skrah #16554: The description of the argument of MAKE_FUNCTION and MAKE_CLOS http://bugs.python.org/issue16554 closed by pitrou #19450: Bug in sqlite in Windows binaries http://bugs.python.org/issue19450 closed by steve.dower #20059: Inconsistent urlparse/urllib.parse handling of invalid port va http://bugs.python.org/issue20059 closed by rbcollins #21159: configparser.InterpolationMissingOptionError is not very intui http://bugs.python.org/issue21159 closed by rbcollins #21167: float('nan') returns 0.0 on Python compiled with icc http://bugs.python.org/issue21167 closed by r.david.murray #23626: Windows per-user install of 3.5a2 doesn't associate .py files http://bugs.python.org/issue23626 closed by steve.dower #23725: update tempfile docs to say that TemporaryFile is secure http://bugs.python.org/issue23725 closed by rbcollins #23756: Tighten definition of bytes-like objects http://bugs.python.org/issue23756 closed by skrah #24385: libpython27.a in python-2.7.10 i386 (windows msi release) cont http://bugs.python.org/issue24385 closed by steve.dower #24440: Move the buildslave setup information from the wiki to the dev http://bugs.python.org/issue24440 closed by r.david.murray #24634: Importing uuid should not try to load libc on Windows http://bugs.python.org/issue24634 closed by steve.dower #24640: no ensurepip in embedded Windows distribution http://bugs.python.org/issue24640 closed by steve.dower #24667: OrderedDict.popitem()/__str__() raises KeyError http://bugs.python.org/issue24667 closed by eric.snow #24798: _msvccompiler.py doesn't properly support manifests http://bugs.python.org/issue24798 closed by steve.dower #24819: replace window size preference with just use last window size http://bugs.python.org/issue24819 closed by terry.reedy #24822: IDLE: Accelerator key doesn't work for Options http://bugs.python.org/issue24822 closed by terry.reedy #24824: Pydoc fails with codecs http://bugs.python.org/issue24824 closed by larry #24825: visual margin indicator for breakpoints in IDLE http://bugs.python.org/issue24825 closed by terry.reedy #24827: round(1.65, 1) return 1.6 with decimal http://bugs.python.org/issue24827 closed by zach.ware #24830: IndexError should (must?) report the index in error! 
http://bugs.python.org/issue24830 closed by r.david.murray #24835: Consistent failure in test_asyncio on Windows 7 buildbot http://bugs.python.org/issue24835 closed by yselivanov #24839: platform._syscmd_ver raises DeprecationWarning http://bugs.python.org/issue24839 closed by steve.dower #24843: 2to3 not working http://bugs.python.org/issue24843 closed by gladman #24854: Null check handle return by new_string() http://bugs.python.org/issue24854 closed by python-dev #24855: fail to mock the urlopen function http://bugs.python.org/issue24855 closed by sih4sing5hong5 #24856: Mock.side_effect as iterable or iterator http://bugs.python.org/issue24856 closed by r.david.murray #24863: Incoherent bevavior with umlaut in regular expressions http://bugs.python.org/issue24863 closed by eryksun #24865: IDLE crashes on entering diacritical mark (Alt-E) on Mac OS X http://bugs.python.org/issue24865 closed by ned.deily From andres.guzman-ballen at intel.com Fri Aug 14 17:56:09 2015 From: andres.guzman-ballen at intel.com (Guzman-ballen, Andres) Date: Fri, 14 Aug 2015 15:56:09 +0000 Subject: [Python-Dev] Differences between Python's OpenSSL in SVN and OpenSSL's in GitHub Message-ID: <53F5624D66ECF84BBC0ECD29D47FCCBD284EC7@ORSMSX102.amr.corp.intel.com> Hello Python Developers! Why is it that the OpenSSL v1.0.2d that is found on Python's SVN repo is quite different from what OpenSSL has on their GitHub repository for OpenSSL v1.0.2d? I am asking because I am able to successfully download OpenSSL's GitHub version during the cpython build process but when I try to build cpython, I get failures because Visual Studio isn't able to find files like openssl/opensslconf.h and this is because Python's OpenSSL version in SVN is the only one that has a directory inside the include directory. The GitHub repo is missing this directory however, and these are not the only differences. If I checkout the GitHub version and then replace it with what is in the SVN repo, you get these untracked files. MINFO Makefile Makefile.bak apps/CA.pl apps/md4.c crypto/buildinf.h crypto/buildinf.h.orig crypto/buildinf_amd64.h crypto/buildinf_x86.h crypto/opensslconf.h crypto/opensslconf.h.bak crypto/opensslconf_amd64.h crypto/opensslconf_x86.h inc64/ include/openssl/ ms/bcb.mak ms/libeay32.def ms/nt.mak ms/nt64.mak ms/ntdll.mak ms/ssleay32.def ms/uptable.asm ms/uptable.obj ms/version32.rc out64/ test/bftest.c test/bntest.c test/casttest.c test/constant_time_test.c test/destest.c test/dhtest.c test/dsatest.c test/ecdhtest.c test/ecdsatest.c test/ectest.c test/enginetest.c test/evp_extra_test.c test/evp_test.c test/evptests.txt test/exptest.c test/heartbeat_test.c test/hmactest.c test/ideatest.c test/jpaketest.c test/md2test.c test/md4test.c test/md5test.c test/mdc2test.c test/randtest.c test/rc2test.c test/rc4test.c test/rc5test.c test/rmdtest.c test/rsa_test.c test/sha1test.c test/sha256t.c test/sha512t.c test/shatest.c test/srptest.c test/ssltest.c test/v3nametest.c test/verify_extra_test.c test/wp_test.c tmp/ tmp32/ tmp64/ tools/c_rehash Does anyone know why this is the case? What was the motivation behind these changes? Thanks! Andres Guzman-Ballen Scripting Analyzers & Tools Team Intel Americas, Inc. 1906 Fox Dr, Champaign IL 61820 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From zachary.ware+pydev at gmail.com Fri Aug 14 19:26:55 2015 From: zachary.ware+pydev at gmail.com (Zachary Ware) Date: Fri, 14 Aug 2015 12:26:55 -0500 Subject: [Python-Dev] Differences between Python's OpenSSL in SVN and OpenSSL's in GitHub In-Reply-To: <53F5624D66ECF84BBC0ECD29D47FCCBD284EC7@ORSMSX102.amr.corp.intel.com> References: <53F5624D66ECF84BBC0ECD29D47FCCBD284EC7@ORSMSX102.amr.corp.intel.com> Message-ID: On Fri, Aug 14, 2015 at 10:56 AM, Guzman-ballen, Andres wrote: > Hello Python Developers! > > Why is it that the OpenSSL v1.0.2d that is found on Python?s SVN repo is > quite different from what OpenSSL has on their GitHub repository for OpenSSL > v1.0.2d? The reason for the difference is to avoid requiring Perl to be installed to be able to build Python. The svn.python.org version of openssl-1.0.2d at revision 89058should match 1.0.2d from Github (if it doesn't that's a bug in OpenSSL's packaging or my checking it into SVN). Revision 89059 checks in all of the changes, all of which are made by running 'PCbuild\prepare_ssl.py' on the vanilla sources, and you should be able to produce the same set of changes by running 'PCbuild\prepare_ssl.py' over the sources checked out from Github. Note that to run that script successfully, you'll need to have the Visual Studio environment set up ('PCbuild\env.bat' will do it for you), Perl and NASM on your PATH, and run the script with Python 3.4 or later. Hope this answers your question, -- Zach From steve.dower at python.org Fri Aug 14 18:20:45 2015 From: steve.dower at python.org (Steve Dower) Date: Fri, 14 Aug 2015 09:20:45 -0700 Subject: [Python-Dev] Differences between Python's OpenSSL in SVN and OpenSSL's in GitHub In-Reply-To: <53F5624D66ECF84BBC0ECD29D47FCCBD284EC7@ORSMSX102.amr.corp.intel.com> References: <53F5624D66ECF84BBC0ECD29D47FCCBD284EC7@ORSMSX102.amr.corp.intel.com> Message-ID: <55CE155D.3090001@python.org> On 14Aug2015 0856, Guzman-ballen, Andres wrote: > Hello Python Developers! > > Why is it that the OpenSSL v1.0.2d that is found on Python?s SVN repo > is quite > different from what OpenSSL has on their GitHub repository > for OpenSSL > v1.0.2d? I am asking because I am able to successfully download > OpenSSL?s GitHub version during the cpython build process but when I try > to build cpython, I get failures because Visual Studio isn?t able to > find files like openssl/opensslconf.h and this is because Python?s > OpenSSL version in SVN is the only one that has a directory inside the > include directory. The GitHub repo is missing this directory however, > and these are not the only differences. > > If I checkout the GitHub version and then replace it with what is in the > SVN repo, you get these untracked files. 
> > MINFO > > Makefile > > Makefile.bak > > apps/CA.pl > > apps/md4.c > > crypto/buildinf.h > > crypto/buildinf.h.orig > > crypto/buildinf_amd64.h > > crypto/buildinf_x86.h > > crypto/opensslconf.h > > crypto/opensslconf.h.bak > > crypto/opensslconf_amd64.h > > crypto/opensslconf_x86.h > > inc64/ > > include/openssl/ > > ms/bcb.mak > > ms/libeay32.def > > ms/nt.mak > > ms/nt64.mak > > ms/ntdll.mak > > ms/ssleay32.def > > ms/uptable.asm > > ms/uptable.obj > > ms/version32.rc > > out64/ > > test/bftest.c > > test/bntest.c > > test/casttest.c > > test/constant_time_test.c > > test/destest.c > > test/dhtest.c > > test/dsatest.c > > test/ecdhtest.c > > test/ecdsatest.c > > test/ectest.c > > test/enginetest.c > > test/evp_extra_test.c > > test/evp_test.c > > test/evptests.txt > > test/exptest.c > > test/heartbeat_test.c > > test/hmactest.c > > test/ideatest.c > > test/jpaketest.c > > test/md2test.c > > test/md4test.c > > test/md5test.c > > test/mdc2test.c > > test/randtest.c > > test/rc2test.c > > test/rc4test.c > > test/rc5test.c > > test/rmdtest.c > > test/rsa_test.c > > test/sha1test.c > > test/sha256t.c > > test/sha512t.c > > test/shatest.c > > test/srptest.c > > test/ssltest.c > > test/v3nametest.c > > test/verify_extra_test.c > > test/wp_test.c > > tmp/ > > tmp32/ > > tmp64/ > > tools/c_rehash > > Does anyone know why this is the case? What was the motivation behind > these changes? Thanks! > > Andres Guzman-Ballen > > Scripting Analyzers & Tools Team > > Intel Americas, Inc. > > 1906 Fox Dr, Champaign IL 61820 > > To build OpenSSL on Windows you also need a copy of Perl and you need to run the preparation script to generate some extra code files. Otherwise you don't get a Windows makefile (which we don't use anymore because it builds significantly faster and more reliably with a Visual C++ project) and the generated assembly code. The copy we have in SVN has already had these scripts generated, but nothing else should be different from the original repository. It's possible you got a slightly different version out of github? I believe we use their released tarballs, but Zach will know for sure as he did the last update IIRC. Cheers, Steve From luckyy8769 at gmail.com Sun Aug 16 06:22:42 2015 From: luckyy8769 at gmail.com (lucky yadav) Date: Sat, 15 Aug 2015 21:22:42 -0700 Subject: [Python-Dev] Python Message-ID: Want to learn python Would u help me! -------------- next part -------------- An HTML attachment was scrubbed... URL: From tjreedy at udel.edu Sun Aug 16 07:16:47 2015 From: tjreedy at udel.edu (Terry Reedy) Date: Sun, 16 Aug 2015 01:16:47 -0400 Subject: [Python-Dev] Python In-Reply-To: References: Message-ID: On 8/16/2015 12:22 AM, lucky yadav wrote: > Want to learn python > Would u help me! Try the python-tutor list. This list if for development of the next releases of Python. -- Terry Jan Reedy From larry at hastings.org Sun Aug 16 09:13:10 2015 From: larry at hastings.org (Larry Hastings) Date: Sun, 16 Aug 2015 00:13:10 -0700 Subject: [Python-Dev] How are we merging forward from the Bitbucket 3.5 repo? Message-ID: <55D03806.1020808@hastings.org> So far I've accepted two pull requests into bitbucket.com/larry/cpython350 in the 3.5 branch, what will become 3.5.0rc2. As usual, it's the contributor's responsibility to merge forward; if their checkin goes in to 3.5, it's their responsibility to also merge it into the hg.python.org/cpython 3.5 branch (which will be 3.5.1) and default branch (which right now is 3.6). But... what's the plan here? 
I didn't outline anything specific, I just assumed we'd do the normal thing, pulling from 3.5.0 into 3.5.1 and merging. But of the two pull requests so far accepted, one was merged this way, though it cherry-picked the revision (skipping the pull request merge revision Bitbucket created), and one was checked in to 3.5.1 directly (no merging). I suppose we can do whatever we like. But it'd be helpful if we were consistent. I can suggest three approaches: 1. After your push request is merged, you cherry-pick your revision from bitbucket.com/larry/cpython350 into hg.python.org/cpython and merge. After 3.5.0 final is released I do a big null merge from bitbucket.com/larry/cpython350 into hg.python.org/cpython. 2. After your push request is merged, you manually check in a new equivalent revision into hg.python.org/cpython in the 3.5 branch. No merging necessary because from Mercurial's perspective it's unrelated to the revision I accepted. After 3.5.0 final is released I do a big null merge from bitbucket.com/larry/cpython350 into hg.python.org/cpython. 3. After your push request is merged, you pull from bitbucket.com/larry/cpython350 into hg.python.org/cpython and merge into 3.5. In this version I don't have to do a final null merge! I'd prefer 3; that's what we normally do, and that's what I was expecting. So far people have done 1 and 2. Can we pick one approach and stick with it? Pretty-please? //arry/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.f.moore at gmail.com Sun Aug 16 15:17:09 2015 From: p.f.moore at gmail.com (Paul Moore) Date: Sun, 16 Aug 2015 14:17:09 +0100 Subject: [Python-Dev] updating ensurepip to include wheel In-Reply-To: References: <98EA1177-50C3-4E6C-83BB-CF21690CDA10@stufft.io> <-7895006158186750053@unknownmsgid> <92D52E13-F529-4274-860E-AAF87D92357C@stufft.io> Message-ID: On 8 August 2015 at 04:53, Nick Coghlan wrote: >> I do however think it would make ensurepip itself better, so I?m not dead set against it, mostly just worried about ramifications. > > I'd advise against letting concerns about Linux distro politics hold > you back from making ensurepip as good as you can make it - if nothing > else, the developer experience folks at commercial Linux vendors are > specifically paid to advocate for the interests of software developers > when it comes to the Linux user experience (that's part of my own day > job in the Fedora/RHEL/CentOS case - I moved over to the software > management team in RHEL Developer Experience at the start of June). > > That means that while I will have some *requests* to make certain > things easier downstream (like going through the PEP process to figure > out an upstream supported way to omit the build-only dependencies when > running ensurepip), I also wholeheartedly endorse the idea of having > the default upstream behaviour focus on making the initial experience > for folks downloading Windows or Mac OS X binaries from python.org as > compelling as we can make it. python-dev needs to put the needs of > Python first, and those of Linux second. > > This does mean that any Linux distro that can't figure out how to > provide a better open source developer experience for Pythonistas than > Windows or Mac OS X is at risk of falling by the wayside in the Python > community, but if those of us that care specifically about the > viability of desktop Linux as a platform for open source development > stand by and let that happen, then we'll *deserve* the consequences. 
Sorry I'm late to this, but I would very much like to see wheel installed with ensurepip on at least Windows. (I don't see any reason why OSX would not be similar, but as I'm not an OSX user I can't say for certain). If Linux distros have issue with this, then maybe we need to do something different there (I like Nick's comments, and would rather we didn't make the Linux situation worse due to distro politics, but that's not my call) but that shouldn't affect Windows/OSX. There's a certain irony to me in the fact that we're reaching a point where the Windows experience is the benchmark Linux users need to aim for, but I'll avoid saying anything more on that one ;-) Paul From guido at python.org Sun Aug 16 16:08:03 2015 From: guido at python.org (Guido van Rossum) Date: Sun, 16 Aug 2015 17:08:03 +0300 Subject: [Python-Dev] [python-committers] How are we merging forward from the Bitbucket 3.5 repo? In-Reply-To: <55D03806.1020808@hastings.org> References: <55D03806.1020808@hastings.org> Message-ID: I presume the issue here is that Hg is so complicated that everyone knows a different subset of the commands and semantics. I personally don't know what the commands for cherry-picking a revision would be. I also don't know exactly what happens when you merge a PR using bitbucket. (I'm only familiar with the GitHub PR flow, and I don't like its behavior, which seems to always create an extra merge revision for what I consider as logically a single commit.) BTW When I go to https://bitbucket.org/larry/cpython350 the first thing I see (in a very big bold font) is "This is Python version 3.6.0 alpha 1". What's going on here? It doesn't inspire confidence. On Sun, Aug 16, 2015 at 10:13 AM, Larry Hastings wrote: > > > So far I've accepted two pull requests into bitbucket.com/larry/cpython350 > in the 3.5 branch, what will become 3.5.0rc2. As usual, it's the > contributor's responsibility to merge forward; if their checkin goes in to > 3.5, it's their responsibility to also merge it into the > hg.python.org/cpython 3.5 branch (which will be 3.5.1) and default branch > (which right now is 3.6). > > But... what's the plan here? I didn't outline anything specific, I just > assumed we'd do the normal thing, pulling from 3.5.0 into 3.5.1 and > merging. But of the two pull requests so far accepted, one was merged this > way, though it cherry-picked the revision (skipping the pull request merge > revision Bitbucket created), and one was checked in to 3.5.1 directly (no > merging). > > I suppose we can do whatever we like. But it'd be helpful if we were > consistent. I can suggest three approaches: > > 1. After your push request is merged, you cherry-pick your revision > from bitbucket.com/larry/cpython350 into hg.python.org/cpython and > merge. After 3.5.0 final is released I do a big null merge from > bitbucket.com/larry/cpython350 into hg.python.org/cpython. > 2. After your push request is merged, you manually check in a new > equivalent revision into hg.python.org/cpython in the 3.5 branch. No > merging necessary because from Mercurial's perspective it's unrelated to > the revision I accepted. After 3.5.0 final is released I do a big null > merge from bitbucket.com/larry/cpython350 into hg.python.org/cpython. > 3. After your push request is merged, you pull from > bitbucket.com/larry/cpython350 into hg.python.org/cpython and merge > into 3.5. In this version I don't have to do a final null merge! > > I'd prefer 3; that's what we normally do, and that's what I was > expecting. So far people have done 1 and 2. 
> > Can we pick one approach and stick with it? Pretty-please? > > > */arry* > > _______________________________________________ > python-committers mailing list > python-committers at python.org > https://mail.python.org/mailman/listinfo/python-committers > > -- --Guido van Rossum (python.org/~guido) -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.f.moore at gmail.com Sun Aug 16 16:21:00 2015 From: p.f.moore at gmail.com (Paul Moore) Date: Sun, 16 Aug 2015 15:21:00 +0100 Subject: [Python-Dev] PEP-498: Literal String Formatting In-Reply-To: <55C55DC3.8040605@trueblade.com> References: <55C55DC3.8040605@trueblade.com> Message-ID: On 8 August 2015 at 02:39, Eric V. Smith wrote: > Following a long discussion on python-ideas, I've posted my draft of > PEP-498. It describes the "f-string" approach that was the subject of > the "Briefer string format" thread. I'm open to a better title than > "Literal String Formatting". > > I need to add some text to the discussion section, but I think it's in > reasonable shape. I have a fully working implementation that I'll get > around to posting somewhere this weekend. > >>>> def how_awesome(): return 'very' > ... >>>> f'f-strings are {how_awesome()} awesome!' > 'f-strings are very awesome!' > > I'm open to any suggestions to improve the PEP. Thanks for your feedback. In my view: 1. Calling them "format strings" rather than "f-strings" is sensible (by analogy with "raw string" etc). Colloquially we can use f-string if we want, but let's have the formal name be fully spelled out. In particular, the PEP should use "format string". 2. By far and away the most common use for me would be things like print(f"Iteration {n}: Took {end-start) seconds"). At the moment I use str,format() for this, and it's annoyingly verbose. This would be a big win, and I'm +1 on the PEP for this specific reason. 3. All of the complex examples look scary, but in practice I wouldn't write stuff like that - why would anyone do so unless they were being deliberately obscure? On the other hand, as I gained experience with the construct, being *able* to use more complex expressions without having to stop and remember special cases would be great. 4. It's easy to write print("My age is {age}") and forget the "f" prefix. While it'll bug me at first that I have to go back and fix stuff to add the "f" after my code gives the wrong output, I *don't* want to see this ability added to unprefixed strings. IMO that's going a step too far (explicit is better than implicit and all that). 5. The PEP is silent (as far as I can see) on things like whether triple quoting (f"""...""") is allowed (I assume it is), and whether prefixes can be combined (for example, rf'{drive}:\{path}\{filename}') (I'd like them to be, but can live without it). 6. The justification for ignoring whitespace is weak (the motivating case is obscure, and there are many viable workarounds). I don't think it's worth ignoring whitespace - but I also don't think it's worth a long discussion. Just pick an option (as you did) and go with it. So I see no need for change here, Apologies for the above being terse - I'm clearing a big backlog of emails. Ask for clarification if you need it! Paul From python-dev at masklinn.net Sun Aug 16 16:30:00 2015 From: python-dev at masklinn.net (Xavier Morel) Date: Sun, 16 Aug 2015 16:30:00 +0200 Subject: [Python-Dev] [python-committers] How are we merging forward from the Bitbucket 3.5 repo? 
In-Reply-To: References: <55D03806.1020808@hastings.org> Message-ID: <2036B6D6-72E6-4E60-B1DE-20DCD8C6D478@masklinn.net> On 2015-08-16, at 16:08 , Guido van Rossum wrote: > I presume the issue here is that Hg is so complicated that everyone knows a different subset of the commands and semantics. > > I personally don't know what the commands for cherry-picking a revision would be. graft > I also don't know exactly what happens when you merge a PR using bitbucket. (I'm only familiar with the GitHub PR flow, and I don't like its behavior, which seems to always create an extra merge revision for what I consider as logically a single commit.) Same thing IIRC, I don't think there's a way to "squash" a merge via the web interface in either. > BTW When I go to https://bitbucket.org/larry/cpython350 the first thing I see (in a very big bold font) is "This is Python version 3.6.0 alpha 1". What's going on here? It doesn't inspire confidence. It's the rendered content of the README file at the root of the repository, same as github. -------------- next part -------------- An HTML attachment was scrubbed... URL: From mal at egenix.com Sun Aug 16 16:36:10 2015 From: mal at egenix.com (M.-A. Lemburg) Date: Sun, 16 Aug 2015 16:36:10 +0200 Subject: [Python-Dev] [python-committers] How are we merging forward from the Bitbucket 3.5 repo? In-Reply-To: References: <55D03806.1020808@hastings.org> Message-ID: <55D09FDA.1050106@egenix.com> On 16.08.2015 16:08, Guido van Rossum wrote: > I presume the issue here is that Hg is so complicated that everyone knows a > different subset of the commands and semantics. > > I personally don't know what the commands for cherry-picking a revision > would be. > > I also don't know exactly what happens when you merge a PR using bitbucket. > (I'm only familiar with the GitHub PR flow, and I don't like its behavior, > which seems to always create an extra merge revision for what I consider as > logically a single commit.) > > BTW When I go to https://bitbucket.org/larry/cpython350 the first thing I > see (in a very big bold font) is "This is Python version 3.6.0 alpha 1". > What's going on here? It doesn't inspire confidence. You are probably looking at the default branch within that repo fork. This is the 3.5 branch: https://bitbucket.org/larry/cpython350/branch/3.5 > On Sun, Aug 16, 2015 at 10:13 AM, Larry Hastings wrote: > >> >> >> So far I've accepted two pull requests into bitbucket.com/larry/cpython350 >> in the 3.5 branch, what will become 3.5.0rc2. As usual, it's the >> contributor's responsibility to merge forward; if their checkin goes in to >> 3.5, it's their responsibility to also merge it into the >> hg.python.org/cpython 3.5 branch (which will be 3.5.1) and default branch >> (which right now is 3.6). >> >> But... what's the plan here? I didn't outline anything specific, I just >> assumed we'd do the normal thing, pulling from 3.5.0 into 3.5.1 and >> merging. But of the two pull requests so far accepted, one was merged this >> way, though it cherry-picked the revision (skipping the pull request merge >> revision Bitbucket created), and one was checked in to 3.5.1 directly (no >> merging). >> >> I suppose we can do whatever we like. But it'd be helpful if we were >> consistent. I can suggest three approaches: >> >> 1. After your push request is merged, you cherry-pick your revision >> from bitbucket.com/larry/cpython350 into hg.python.org/cpython and >> merge. 
After 3.5.0 final is released I do a big null merge from >> bitbucket.com/larry/cpython350 into hg.python.org/cpython. >> 2. After your push request is merged, you manually check in a new >> equivalent revision into hg.python.org/cpython in the 3.5 branch. No >> merging necessary because from Mercurial's perspective it's unrelated to >> the revision I accepted. After 3.5.0 final is released I do a big null >> merge from bitbucket.com/larry/cpython350 into hg.python.org/cpython. >> 3. After your push request is merged, you pull from >> bitbucket.com/larry/cpython350 into hg.python.org/cpython and merge >> into 3.5. In this version I don't have to do a final null merge! >> >> I'd prefer 3; that's what we normally do, and that's what I was >> expecting. So far people have done 1 and 2. >> >> Can we pick one approach and stick with it? Pretty-please? >> >> >> */arry* >> >> _______________________________________________ >> python-committers mailing list >> python-committers at python.org >> https://mail.python.org/mailman/listinfo/python-committers >> >> > > > > > _______________________________________________ > python-committers mailing list > python-committers at python.org > https://mail.python.org/mailman/listinfo/python-committers > -- Marc-Andre Lemburg eGenix.com Professional Python Services directly from the Source (#1, Aug 16 2015) >>> Python Projects, Coaching and Consulting ... http://www.egenix.com/ >>> mxODBC Plone/Zope Database Adapter ... http://zope.egenix.com/ >>> mxODBC, mxDateTime, mxTextTools ... http://python.egenix.com/ ________________________________________________________________________ 2015-08-12: Released mxODBC 3.3.4 ... http://egenix.com/go80 2015-08-22: FrOSCon 2015 ... 6 days to go ::::: Try our mxODBC.Connect Python Database Interface for free ! :::::: eGenix.com Software, Skills and Services GmbH Pastor-Loeh-Str.48 D-40764 Langenfeld, Germany. CEO Dipl.-Math. Marc-Andre Lemburg Registered at Amtsgericht Duesseldorf: HRB 46611 http://www.egenix.com/company/contact/ From steve at pearwood.info Sun Aug 16 16:40:01 2015 From: steve at pearwood.info (Steven D'Aprano) Date: Mon, 17 Aug 2015 00:40:01 +1000 Subject: [Python-Dev] updating ensurepip to include wheel In-Reply-To: References: <98EA1177-50C3-4E6C-83BB-CF21690CDA10@stufft.io> <-7895006158186750053@unknownmsgid> <92D52E13-F529-4274-860E-AAF87D92357C@stufft.io> Message-ID: <20150816144000.GY5249@ando.pearwood.info> On Sun, Aug 16, 2015 at 02:17:09PM +0100, Paul Moore wrote: > Sorry I'm late to this, but I would very much like to see wheel > installed with ensurepip on at least Windows. I seem to be missing something critical to this entire discussion. As I understand it, ensurepip is *only* intended to bootstrap pip itself. So the idea is, you install Python, including ensurepip, use that to install the latest pip *including wheel*, and Bob's your uncle. At worst, you install pip, then install wheel. So what is the benefit of including wheel with ensurepip? 
-- Steve From donald at stufft.io Sun Aug 16 16:52:00 2015 From: donald at stufft.io (Donald Stufft) Date: Sun, 16 Aug 2015 10:52:00 -0400 Subject: [Python-Dev] updating ensurepip to include wheel In-Reply-To: <20150816144000.GY5249@ando.pearwood.info> References: <98EA1177-50C3-4E6C-83BB-CF21690CDA10@stufft.io> <-7895006158186750053@unknownmsgid> <92D52E13-F529-4274-860E-AAF87D92357C@stufft.io> <20150816144000.GY5249@ando.pearwood.info> Message-ID: On August 16, 2015 at 10:41:42 AM, Steven D'Aprano (steve at pearwood.info) wrote: > On Sun, Aug 16, 2015 at 02:17:09PM +0100, Paul Moore wrote: > > > Sorry I'm late to this, but I would very much like to see wheel > > installed with ensurepip on at least Windows. > > I seem to be missing something critical to this entire discussion. > > As I understand it, ensurepip is *only* intended to bootstrap pip > itself. So the idea is, you install Python, including ensurepip, use > that to install the latest pip *including wheel*, and Bob's your uncle. > > At worst, you install pip, then install wheel. > > So what is the benefit of including wheel with ensurepip? > pip has an optional dependency on wheel; if you install that optional dependency then you'll get the implicit wheel cache enabled by default, which can drastically improve installation speeds by caching built artifacts (i.e. ``pip install lxml`` multiple times only compiles it once). The goal is to get more people the benefits of that by default instead of requiring them to know they need to ``pip install wheel`` after the fact. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA From steve at pearwood.info Sun Aug 16 17:24:35 2015 From: steve at pearwood.info (Steven D'Aprano) Date: Mon, 17 Aug 2015 01:24:35 +1000 Subject: [Python-Dev] updating ensurepip to include wheel In-Reply-To: References: <98EA1177-50C3-4E6C-83BB-CF21690CDA10@stufft.io> <-7895006158186750053@unknownmsgid> <92D52E13-F529-4274-860E-AAF87D92357C@stufft.io> <20150816144000.GY5249@ando.pearwood.info> Message-ID: <20150816152435.GZ5249@ando.pearwood.info> On Sun, Aug 16, 2015 at 10:52:00AM -0400, Donald Stufft wrote: > > So what is the benefit of including wheel with ensurepip? > > pip has an optional dependency on wheel; if you install that optional > dependency then you'll get the implicit wheel cache enabled by default, > which can drastically improve installation speeds by caching built > artifacts (i.e. ``pip install lxml`` multiple times only compiles it > once). The goal is to get more people the benefits of that by > default instead of requiring them to know they need to ``pip install > wheel`` after the fact. Thanks for the explanation. And ensurepip couldn't install wheel as part of the process of installing pip? -- Steve From rdmurray at bitdance.com Sun Aug 16 17:24:32 2015 From: rdmurray at bitdance.com (R. David Murray) Date: Sun, 16 Aug 2015 11:24:32 -0400 Subject: [Python-Dev] How are we merging forward from the Bitbucket 3.5 repo? In-Reply-To: <55D03806.1020808@hastings.org> References: <55D03806.1020808@hastings.org> Message-ID: <20150816152433.15812B14095@webabinitio.net> On Sun, 16 Aug 2015 00:13:10 -0700, Larry Hastings wrote: > > > So far I've accepted two pull requests into > bitbucket.com/larry/cpython350 in the 3.5 branch, what will become > 3.5.0rc2.
As usual, it's the contributor's responsibility to merge > forward; if their checkin goes in to 3.5, it's their responsibility to > also merge it into the hg.python.org/cpython 3.5 branch (which will be > 3.5.1) and default branch (which right now is 3.6). > > But... what's the plan here? I didn't outline anything specific, I just > assumed we'd do the normal thing, pulling from 3.5.0 into 3.5.1 and > merging. But of the two pull requests so far accepted, one was merged > this way, though it cherry-picked the revision (skipping the pull > request merge revision Bitbucket created), and one was checked in to > 3.5.1 directly (no merging). > > I suppose we can do whatever we like. But it'd be helpful if we were > consistent. I can suggest three approaches: > > 1. After your push request is merged, you cherry-pick your revision > from bitbucket.com/larry/cpython350 into hg.python.org/cpython and > merge. After 3.5.0 final is released I do a big null merge from > bitbucket.com/larry/cpython350 into hg.python.org/cpython. > 2. After your push request is merged, you manually check in a new > equivalent revision into hg.python.org/cpython in the 3.5 branch. > No merging necessary because from Mercurial's perspective it's > unrelated to the revision I accepted. After 3.5.0 final is released > I do a big null merge from bitbucket.com/larry/cpython350 into > hg.python.org/cpython. > 3. After your push request is merged, you pull from > bitbucket.com/larry/cpython350 into hg.python.org/cpython and merge > into 3.5. In this version I don't have to do a final null merge! > > I'd prefer 3; that's what we normally do, and that's what I was > expecting. So far people have done 1 and 2. > > Can we pick one approach and stick with it? Pretty-please? Pick one Larry, you are the RM :) The reason you got different things was that how to do this was under-specified. Which of course we didn't realize, this being a new procedure and all. That said, I'm still not sure how (3) works. Can you give us a step by step like you did for creating the pull request? Including how it relates to the workflow for the other branches? (What I did was just do the thing I normally do, and then follow your instructions for creating a pull request using the same patch I had previously committed to 3.4.) --David From rdmurray at bitdance.com Sun Aug 16 17:36:19 2015 From: rdmurray at bitdance.com (R. David Murray) Date: Sun, 16 Aug 2015 11:36:19 -0400 Subject: [Python-Dev] How are we managing 3.5 NEWS? In-Reply-To: <55D03806.1020808@hastings.org> References: <55D03806.1020808@hastings.org> Message-ID: <20150816153619.5A5A0B14095@webabinitio.net> The 3.5.0 patch flow question also brings up the question of how we are managing NEWS for 3.5.0 vs 3.5.1. We have some commits that are going in to both 3.5.0a2 and 3.5.1, and some that are only going in to 3.5.1. Currently the 3.5.1 NEWS says things are going in to 3.5.0a2, but that's obviously wrong. Do we relabel the section in 3.5.1 NEWS as 3.5.1a1? That would leave us with the 3.5.1 NEWS never having the last alpha sections from 3.5.0, which is logical but might be confusing (or is that the way we've done it in the past?) Do we leave it to the RM to sort out each individual patch when he merges 3.5.0 into the 3.5 branch? That sounds like a lot of work, although if there are few enough patches that go into the alphas, it might not be too hard. Either way, that final 3.5.0 merge is going to require an edit of the NEWS file. Larry, how do you plan to handle this? 
--David PS: We'll also need an answer to this question for the proposed new NEWS workflow of putting the NEWS items in the tracker. We'll probably need to introduce x.y.z versions into the tracker. From rdmurray at bitdance.com Sun Aug 16 17:43:24 2015 From: rdmurray at bitdance.com (R. David Murray) Date: Sun, 16 Aug 2015 11:43:24 -0400 Subject: [Python-Dev] [python-committers] How are we merging forward from the Bitbucket 3.5 repo? In-Reply-To: <20150816152433.15812B14095@webabinitio.net> References: <55D03806.1020808@hastings.org> <20150816152433.15812B14095@webabinitio.net> Message-ID: <20150816154324.8028AB14095@webabinitio.net> On Sun, 16 Aug 2015 11:24:32 -0400, "R. David Murray" wrote: > On Sun, 16 Aug 2015 00:13:10 -0700, Larry Hastings wrote: > > 3. After your push request is merged, you pull from > > bitbucket.com/larry/cpython350 into hg.python.org/cpython and merge > > into 3.5. In this version I don't have to do a final null merge! > > > > I'd prefer 3; that's what we normally do, and that's what I was > > expecting. So far people have done 1 and 2. Thinking about this some more I realize why I was confused. My patch/pull request was something that got committed to 3.4. In that case, to follow your 3 I'd have to leave 3.4 open until you merged the pull request, and that goes against our normal workflow. Maybe my patch will be the only exception... --David From donald at stufft.io Sun Aug 16 17:52:28 2015 From: donald at stufft.io (Donald Stufft) Date: Sun, 16 Aug 2015 11:52:28 -0400 Subject: [Python-Dev] updating ensurepip to include wheel In-Reply-To: <20150816152435.GZ5249@ando.pearwood.info> References: <98EA1177-50C3-4E6C-83BB-CF21690CDA10@stufft.io> <-7895006158186750053@unknownmsgid> <92D52E13-F529-4274-860E-AAF87D92357C@stufft.io> <20150816144000.GY5249@ando.pearwood.info> <20150816152435.GZ5249@ando.pearwood.info> Message-ID: On August 16, 2015 at 11:26:08 AM, Steven D'Aprano (steve at pearwood.info) wrote: > On Sun, Aug 16, 2015 at 10:52:00AM -0400, Donald Stufft wrote: > > > > So what is the benefit of including wheel with ensurepip? > > > > pip has an optional dependency on wheel; if you install that optional > > dependency then you'll get the implicit wheel cache enabled by default, > > which can drastically improve installation speeds by caching built > > artifacts (i.e. ``pip install lxml`` multiple times only compiles it > > once). The goal is to get more people the benefits of that by > > default instead of requiring them to know they need to ``pip install > > wheel`` after the fact. > > Thanks for the explanation. > > And ensurepip couldn't install wheel as part of the process of > installing pip? > > That's the proposal here: ensurepip only installs things it has bundled inside of it, so we'd add a .whl file for wheel and slightly tweak ensurepip so it also installs wheel. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA From eric at trueblade.com Sun Aug 16 19:55:47 2015 From: eric at trueblade.com (Eric V. Smith) Date: Sun, 16 Aug 2015 13:55:47 -0400 Subject: [Python-Dev] PEP-498: Literal String Formatting In-Reply-To: References: <55C55DC3.8040605@trueblade.com> Message-ID: <586DD2C2-2FCA-4A38-BF22-7E88347C290F@trueblade.com> Thanks, Paul. Good feedback. Triple quoted and raw strings work like you'd expect, but you're right: the PEP should make this clear. I might drop the leading spaces, for a technical reason having to deal with passing the strings in to str.format.
But I agree it's not a big deal one way or the other. I'll incorporate the rest of your feedback (and others) when I get back to a real computer. -- Eric. Top-posted from my phone. > On Aug 16, 2015, at 10:21 AM, Paul Moore wrote: > >> On 8 August 2015 at 02:39, Eric V. Smith wrote: >> Following a long discussion on python-ideas, I've posted my draft of >> PEP-498. It describes the "f-string" approach that was the subject of >> the "Briefer string format" thread. I'm open to a better title than >> "Literal String Formatting". >> >> I need to add some text to the discussion section, but I think it's in >> reasonable shape. I have a fully working implementation that I'll get >> around to posting somewhere this weekend. >> >>>>> def how_awesome(): return 'very' >> ... >>>>> f'f-strings are {how_awesome()} awesome!' >> 'f-strings are very awesome!' >> >> I'm open to any suggestions to improve the PEP. Thanks for your feedback. > > In my view: > > 1. Calling them "format strings" rather than "f-strings" is sensible > (by analogy with "raw string" etc). Colloquially we can use f-string > if we want, but let's have the formal name be fully spelled out. In > particular, the PEP should use "format string". > > 2. By far and away the most common use for me would be things like > print(f"Iteration {n}: Took {end-start) seconds"). At the moment I use > str,format() for this, and it's annoyingly verbose. This would be a > big win, and I'm +1 on the PEP for this specific reason. > > 3. All of the complex examples look scary, but in practice I wouldn't > write stuff like that - why would anyone do so unless they were being > deliberately obscure? On the other hand, as I gained experience with > the construct, being *able* to use more complex expressions without > having to stop and remember special cases would be great. > > 4. It's easy to write print("My age is {age}") and forget the "f" > prefix. While it'll bug me at first that I have to go back and fix > stuff to add the "f" after my code gives the wrong output, I *don't* > want to see this ability added to unprefixed strings. IMO that's going > a step too far (explicit is better than implicit and all that). > > 5. The PEP is silent (as far as I can see) on things like whether > triple quoting (f"""...""") is allowed (I assume it is), and whether > prefixes can be combined (for example, rf'{drive}:\{path}\{filename}') > (I'd like them to be, but can live without it). > > 6. The justification for ignoring whitespace is weak (the motivating > case is obscure, and there are many viable workarounds). I don't think > it's worth ignoring whitespace - but I also don't think it's worth a > long discussion. Just pick an option (as you did) and go with it. So I > see no need for change here, > > Apologies for the above being terse - I'm clearing a big backlog of > emails. Ask for clarification if you need it! 
> > Paul > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/eric%2Ba-python-dev%40trueblade.com > From guido at python.org Sun Aug 16 21:37:45 2015 From: guido at python.org (Guido van Rossum) Date: Sun, 16 Aug 2015 22:37:45 +0300 Subject: [Python-Dev] PEP-498: Literal String Formatting In-Reply-To: <586DD2C2-2FCA-4A38-BF22-7E88347C290F@trueblade.com> References: <55C55DC3.8040605@trueblade.com> <586DD2C2-2FCA-4A38-BF22-7E88347C290F@trueblade.com> Message-ID: On Sun, Aug 16, 2015 at 8:55 PM, Eric V. Smith wrote: > Thanks, Paul. Good feedback. > Indeed, I smiled when I saw Paul's post. > Triple quoted and raw strings work like you'd expect, but you're right: > the PEP should make this clear. > > I might drop the leading spaces, for a technical reason having to deal > with passing the strings in to str.format. But I agree it's not a big deal > one way or the other. > Hm. I rather like allow optional leading/trailing spaces. Given that we support arbitrary expressions, we have to support internal spaces; I think that some people would really like to use leading/trailing spaces, especially when there's text immediately against the other side of the braces, as in f'Stuff{ len(self.busy) }more stuff' I also expect it might be useful to allow leading/trailing newlines, if they are allowed at all (i.e. inside triple-quoted strings). E.g. f'''Stuff{ len(self.busy) }more stuff''' > I'll incorporate the rest of your feedback (and others) when I get back to > a real computer. > Here's another thing for everybody's pondering: when tokenizing an f-string, I think the pieces could each become tokens in their own right. Then the rest of the parsing (and rules about whitespace etc.) would become simpler because the grammar would deal with them. E.g. the string above would be tokenized as follows: f'Stuff{ len ( self . busy ) }more stuff' The understanding here is that there are these new types of tokens: F_STRING_OPEN for f'...{, F_STRING_MIDDLE for }...{, F_STRING_END for }...', and I suppose we also need F_STRING_OPEN_CLOSE for f'...' (i.e. not containing any substitutions). These token types can then be used in the grammar. (A complication would be different kinds of string quotes; I propose to handle that in the lexer, otherwise the number of open/close token types would balloon out of proportions.) -- --Guido van Rossum (python.org/~guido) -------------- next part -------------- An HTML attachment was scrubbed... URL: From greg at krypto.org Sun Aug 16 22:04:09 2015 From: greg at krypto.org (Gregory P. Smith) Date: Sun, 16 Aug 2015 20:04:09 +0000 Subject: [Python-Dev] PEP-498 & PEP-501: Literal String Formatting/Interpolation In-Reply-To: References: <55C55DC3.8040605@trueblade.com> <55C79A73.1030901@trueblade.com> Message-ID: On Sun, Aug 9, 2015 at 3:25 PM Brett Cannon wrote: > > On Sun, Aug 9, 2015, 13:51 Peter Ludemann via Python-Dev < > python-dev at python.org> wrote: > > Most of my outputs are log messages, so this proposal won't help me > because (I presume) it does eager evaluation of the format string and the > logging methods are designed to do lazy evaluation. Python doesn't have > anything like Lisp's "special forms", so there doesn't seem to be a way to > implicitly put a lambda on the string to delay evaluation. > > It would be nice to be able to mark the formatting as lazy ... 
maybe > another string prefix character to indicate that? (And would the 2nd > expression in an assert statement be lazy or eager?) > > > That would require a lazy string type which is beyond the scope of this > PEP as proposed since it would require its own design choices, how much > code would not like the different type, etc. > > -Brett > Agreed that doesn't belong in PEP 498 or 501 itself... But it is a real need. We left logging behind when we added str.format() and adding yet another _third_ way to do string formatting without addressing the needs of deferred-formatting for things like logging is annoying. brainstorm: Imagine a deferred interpolation string with a d'' prefix.. di'foo ${bar}' would be a new type with a __str__ method that also retains a runtime reference to the necessary values from the scope within which it was created that will be used for substitutions when iff/when it is __str__()ed. I still wouldn't enjoy reminding people to use di'' inlogging.info(di'thing happened: ${result}') all the time any more than I like reminding people to undo their use of % and just pass the values as additional args to the logging call... But I think people would find it friendlier and thus be more likely to get it right on their own. logging's manually deferred % is an idiom i'd like to see wither away. There's also a performance aspect to any new formatter, % is oddly pretty fast, str.format isn't. So long as you can do stuff at compile time rather than runtime I think these PEPs could be even faster. Constant string pep-498 or pep-501 formatting could be broken down at compile time and composed into the optimal set of operations to build the resulting string / call the formatter. So far looking over both peps, I lean towards pep-501 rather than 498: I really prefer the ${} syntax. I don't like arbitrary logical expressions within strings. I dislike str only things without a similar concept for bytes. but neither quite suits me yet. 501's __interpolate*__ builtins are good and bad at the same time. doing this at the module level does seem right, i like the i18n use aspect of that, but you could also imagine these being methods so that subclasses could override the behavior on a per-type basis. but that probably only makes sense if a deferred type is created due to when and how interpolates would be called. also, adding builtins, even __ones__ annoys me for some reason I can't quite put my finger on. (jumping into the threads way late) -gps > > PS: As to Brett's comment about the history of string interpolation ... my > recollection/understanding is that it started with Unix shells and the > "$variable" notation, with the "$variable" being evaluated within "..." and > not within '...'. Perl, PHP, Make (and others) picked this up. There seems > to be a trend to avoid the bare "$variable" form and instead use > "${variable}" everywhere, mainly because "${...}" is sometimes required to > avoid ambiguities (e.g. "There were $NUMBER ${THING}s.") > > PPS: For anyone wishing to improve the existing format options, Common > Lisp's FORMAT > and Prolog's format/2 > > have some capabilities that I miss from time to time in Python. > > On 9 August 2015 at 11:22, Eric V. Smith wrote: > > On 8/9/2015 1:38 PM, Brett Cannon wrote: > > > > > > On Sun, 9 Aug 2015 at 01:07 Stefan Behnel > > > wrote: > > > > Eric V. Smith schrieb am 08.08.2015 um 03:39: > > > Following a long discussion on python-ideas, I've posted my draft > of > > > PEP-498. 
It describes the "f-string" approach that was the subject > of > > > the "Briefer string format" thread. I'm open to a better title than > > > "Literal String Formatting". > > > > > > I need to add some text to the discussion section, but I think > it's in > > > reasonable shape. I have a fully working implementation that I'll > get > > > around to posting somewhere this weekend. > > > > > > >>> def how_awesome(): return 'very' > > > ... > > > >>> f'f-strings are {how_awesome()} awesome!' > > > 'f-strings are very awesome!' > > > > > > I'm open to any suggestions to improve the PEP. Thanks for your > > feedback. > > > > [copying my comment from python-ideas here] > > > > How common is this use case, really? Almost all of the string > formatting > > that I've used lately is either for logging (no help from this > proposal > > here) or requires some kind of translation/i18n *before* the > formatting, > > which is not helped by this proposal either. Meaning, in almost all > > cases, > > the formatting will use some more or less simple variant of this > > pattern: > > > > result = process("string with {a} and {b}").format(a=1, b=2) > > > > which commonly collapses into > > > > result = translate("string with {a} and {b}", a=1, b=2) > > > > by wrapping the concrete use cases in appropriate helper functions. > > > > I've seen Nick Coghlan's proposal for an implementation backed by a > > global > > function, which would at least catch some of these use cases. But it > > otherwise seems to me that this is a huge sledge hammer solution for > a > > niche problem. > > > > > > So in my case the vast majority of calls to str.format could be replaced > > with an f-string. I would also like to believe that other languages that > > have adopted this approach to string interpolation did so with knowledge > > that it would be worth it (but then again I don't really know how other > > languages are developed so this might just be a hope that other > > languages fret as much as we do about stuff). > > I think it has to do with the nature of the programs that people write. > I write software for internal use in a large company. In the last 13 > years there, I've written literally hundreds of individual programs, > large and small. I just checked: literally 100% of my calls to > %-formatting (older code) or str.format (in newer code) could be > replaced with f-strings. And I think every such use would be an > improvement. > > I firmly believe that the majority of software written in Python does > not show up on PyPi, but is used internally in corporations. It's not > internationalized or localized: it just exists to get a job done > quickly. This is the code that would benefit from f-strings. > > This isn't to say that there's not plenty of code where f-strings would > not help. But I think it's as big a mistake to generalize from my > experience as it is from Stefan's. > > Eric. 
> > > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > > > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/pludemann%40google.com > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/brett%40python.org > > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/greg%40krypto.org > -------------- next part -------------- An HTML attachment was scrubbed... URL: From larry at hastings.org Mon Aug 17 04:28:19 2015 From: larry at hastings.org (Larry Hastings) Date: Sun, 16 Aug 2015 19:28:19 -0700 Subject: [Python-Dev] [python-committers] How are we merging forward from the Bitbucket 3.5 repo? In-Reply-To: References: <55D03806.1020808@hastings.org> Message-ID: <55D146C3.9050509@hastings.org> On 08/16/2015 07:08 AM, Guido van Rossum wrote: > I presume the issue here is that Hg is so complicated that everyone > knows a different subset of the commands and semantics. > > I personally don't know what the commands for cherry-picking a > revision would be. There are a couple. The command you'd want for this use case is probably "hg transplant", because it lets you pull revisions from a different repo. Note that "transplant" is an extension; it's distributed with Mercurial but is turned off by default. (Presumably because it's an "unloved" feature, which seems to translate roughly to "deprecated and only minimally supported".) The Mercurial team recommends "graft", and they also provide "rebase", but neither of those can pull revisions from another repo. Since all revisions committed to 3.5.0 should be merged into 3.5.1 sooner or later, personally I don't see the *need* for cherry-picking. > I also don't know exactly what happens when you merge a PR using > bitbucket. (I'm only familiar with the GitHub PR flow, and I don't > like its behavior, which seems to always create an extra merge > revision for what I consider as logically a single commit.) Bitbucket doesn't appear to create any extraneous merge revisions. Of the two PRs I've accepted, only one created a merge, and that was sensible. > BTW When I go to https://bitbucket.org/larry/cpython350 the first > thing I see (in a very big bold font) is "This is Python version 3.6.0 > alpha 1". What's going on here? It doesn't inspire confidence. It was displaying the readme from the default branch. We use the 3.5 branch. I just went and looked, and there's a "default branch" option for the repo on Bitbucket. I changed it from "default" to "3.5" and now it displays "This is Python version 3.5.0 release candidate 1". Hopefully that inspires more confidence! / /arry/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From njs at pobox.com Mon Aug 17 04:58:54 2015 From: njs at pobox.com (Nathaniel Smith) Date: Sun, 16 Aug 2015 19:58:54 -0700 Subject: [Python-Dev] [python-committers] How are we merging forward from the Bitbucket 3.5 repo? 
In-Reply-To: References: <55D03806.1020808@hastings.org> Message-ID: On Sun, Aug 16, 2015 at 7:08 AM, Guido van Rossum wrote: > (I'm only familiar with the GitHub PR flow, and I don't like its behavior, > which seems to always create an extra merge revision for what I consider as > logically a single commit.) For whatever it's worth, this is a "feature": the extra merge revision serves as a record of the fact that a PR was merged, who merged it, and what the state of the branch was before and after the merge (useful in case the PR contains multiple revisions that are all getting merged together). These are all things that git makes impossible to reconstruct after the fact otherwise, because it stores no metadata about which branch each revision started out in. But if you consistently make a merge commit every time, then 'git log --first-parent' will reliably show one entry per merged PR. -n -- Nathaniel J. Smith -- http://vorpus.org From victor.stinner at gmail.com Mon Aug 17 06:34:49 2015 From: victor.stinner at gmail.com (Victor Stinner) Date: Sun, 16 Aug 2015 21:34:49 -0700 Subject: [Python-Dev] PEP-498: Literal String Formatting In-Reply-To: References: <55C55DC3.8040605@trueblade.com> Message-ID: 2015-08-16 7:21 GMT-07:00 Paul Moore : > 2. By far and away the most common use for me would be things like > print(f"Iteration {n}: Took {end-start) seconds"). At the moment I use > str,format() for this, and it's annoyingly verbose. This would be a > big win, and I'm +1 on the PEP for this specific reason. You can use a temporary variable, it's not much longer: print("Iteration {n}: Took {dt) seconds".format(n=n, dt=end-start)) becomes dt = end - start print(f"Iteration {n}: Took {dt) seconds") > 3. All of the complex examples look scary, but in practice I wouldn't > write stuff like that - why would anyone do so unless they were being > deliberately obscure? I'm quite sure that users will write complex code in f-strings. I vote -1 on the current PEP because of the support of Python code in f-string, but +1 on a PEP without Python code. Victor From pludemann at google.com Mon Aug 17 07:58:09 2015 From: pludemann at google.com (Peter Ludemann) Date: Sun, 16 Aug 2015 22:58:09 -0700 Subject: [Python-Dev] PEP-498 & PEP-501: Literal String Formatting/Interpolation In-Reply-To: References: <55C55DC3.8040605@trueblade.com> <55C79A73.1030901@trueblade.com> Message-ID: How is this proposal of di"..." more than a different spelling of lambda i"..."? (I think it's a great idea ? but am wondering if there are some extra semantics that I missed) I don't think there's any need to preserve the values of the {...} (or ${...}) constituents ? the normal closure mechanism should do fine because logging is more-or-less like this: if : if callable(msg): log_msg = msg(*args) else: log_msg = msg % args and so there's no need to preserve the values at the moment the interpolated string is created. Perl allows arbitrary expressions inside interpolations, but that tends to get messy and is self-limiting for complex expressions; however, it's handy for things like: print("The {i+1}th item is strange: {x[i]}) On 16 August 2015 at 13:04, Gregory P. 
Smith wrote: > > > On Sun, Aug 9, 2015 at 3:25 PM Brett Cannon wrote: > >> >> On Sun, Aug 9, 2015, 13:51 Peter Ludemann via Python-Dev < >> python-dev at python.org> wrote: >> >> Most of my outputs are log messages, so this proposal won't help me >> because (I presume) it does eager evaluation of the format string and the >> logging methods are designed to do lazy evaluation. Python doesn't have >> anything like Lisp's "special forms", so there doesn't seem to be a way to >> implicitly put a lambda on the string to delay evaluation. >> >> It would be nice to be able to mark the formatting as lazy ... maybe >> another string prefix character to indicate that? (And would the 2nd >> expression in an assert statement be lazy or eager?) >> >> >> That would require a lazy string type which is beyond the scope of this >> PEP as proposed since it would require its own design choices, how much >> code would not like the different type, etc. >> >> -Brett >> > > Agreed that doesn't belong in PEP 498 or 501 itself... But it is a real > need. > > We left logging behind when we added str.format() and adding yet another > _third_ way to do string formatting without addressing the needs of > deferred-formatting for things like logging is annoying. > > brainstorm: Imagine a deferred interpolation string with a d'' prefix.. > di'foo ${bar}' would be a new type with a __str__ method that also retains > a runtime reference to the necessary values from the scope within which it > was created that will be used for substitutions when iff/when it is > __str__()ed. I still wouldn't enjoy reminding people to use di'' > inlogging.info(di'thing happened: ${result}') all the time any more than > I like reminding people to undo their use of % and just pass the values as > additional args to the logging call... But I think people would find it > friendlier and thus be more likely to get it right on their own. logging's > manually deferred % is an idiom i'd like to see wither away. > > There's also a performance aspect to any new formatter, % is oddly pretty > fast, str.format isn't. So long as you can do stuff at compile time rather > than runtime I think these PEPs could be even faster. Constant string > pep-498 or pep-501 formatting could be broken down at compile time and > composed into the optimal set of operations to build the resulting string / > call the formatter. > > So far looking over both peps, I lean towards pep-501 rather than 498: > > I really prefer the ${} syntax. > I don't like arbitrary logical expressions within strings. > I dislike str only things without a similar concept for bytes. > > but neither quite suits me yet. > > 501's __interpolate*__ builtins are good and bad at the same time. doing > this at the module level does seem right, i like the i18n use aspect of > that, but you could also imagine these being methods so that subclasses > could override the behavior on a per-type basis. but that probably only > makes sense if a deferred type is created due to when and how interpolates > would be called. also, adding builtins, even __ones__ annoys me for some > reason I can't quite put my finger on. > > (jumping into the threads way late) > -gps > >> >> PS: As to Brett's comment about the history of string interpolation ... >> my recollection/understanding is that it started with Unix shells and the >> "$variable" notation, with the "$variable" being evaluated within "..." and >> not within '...'. Perl, PHP, Make (and others) picked this up. 
There seems >> to be a trend to avoid the bare "$variable" form and instead use >> "${variable}" everywhere, mainly because "${...}" is sometimes required to >> avoid ambiguities (e.g. "There were $NUMBER ${THING}s.") >> >> PPS: For anyone wishing to improve the existing format options, Common >> Lisp's FORMAT >> and Prolog's format/2 >> >> have some capabilities that I miss from time to time in Python. >> >> On 9 August 2015 at 11:22, Eric V. Smith wrote: >> >> On 8/9/2015 1:38 PM, Brett Cannon wrote: >> > >> > >> > On Sun, 9 Aug 2015 at 01:07 Stefan Behnel > >> > > wrote: >> > >> > Eric V. Smith schrieb am 08.08.2015 um 03:39: >> > > Following a long discussion on python-ideas, I've posted my draft >> of >> > > PEP-498. It describes the "f-string" approach that was the >> subject of >> > > the "Briefer string format" thread. I'm open to a better title >> than >> > > "Literal String Formatting". >> > > >> > > I need to add some text to the discussion section, but I think >> it's in >> > > reasonable shape. I have a fully working implementation that I'll >> get >> > > around to posting somewhere this weekend. >> > > >> > > >>> def how_awesome(): return 'very' >> > > ... >> > > >>> f'f-strings are {how_awesome()} awesome!' >> > > 'f-strings are very awesome!' >> > > >> > > I'm open to any suggestions to improve the PEP. Thanks for your >> > feedback. >> > >> > [copying my comment from python-ideas here] >> > >> > How common is this use case, really? Almost all of the string >> formatting >> > that I've used lately is either for logging (no help from this >> proposal >> > here) or requires some kind of translation/i18n *before* the >> formatting, >> > which is not helped by this proposal either. Meaning, in almost all >> > cases, >> > the formatting will use some more or less simple variant of this >> > pattern: >> > >> > result = process("string with {a} and {b}").format(a=1, b=2) >> > >> > which commonly collapses into >> > >> > result = translate("string with {a} and {b}", a=1, b=2) >> > >> > by wrapping the concrete use cases in appropriate helper functions. >> > >> > I've seen Nick Coghlan's proposal for an implementation backed by a >> > global >> > function, which would at least catch some of these use cases. But it >> > otherwise seems to me that this is a huge sledge hammer solution >> for a >> > niche problem. >> > >> > >> > So in my case the vast majority of calls to str.format could be replaced >> > with an f-string. I would also like to believe that other languages that >> > have adopted this approach to string interpolation did so with knowledge >> > that it would be worth it (but then again I don't really know how other >> > languages are developed so this might just be a hope that other >> > languages fret as much as we do about stuff). >> >> I think it has to do with the nature of the programs that people write. >> I write software for internal use in a large company. In the last 13 >> years there, I've written literally hundreds of individual programs, >> large and small. I just checked: literally 100% of my calls to >> %-formatting (older code) or str.format (in newer code) could be >> replaced with f-strings. And I think every such use would be an >> improvement. >> >> I firmly believe that the majority of software written in Python does >> not show up on PyPi, but is used internally in corporations. It's not >> internationalized or localized: it just exists to get a job done >> quickly. This is the code that would benefit from f-strings. 
>> >> This isn't to say that there's not plenty of code where f-strings would >> not help. But I think it's as big a mistake to generalize from my >> experience as it is from Stefan's. >> >> Eric. >> >> >> >> _______________________________________________ >> Python-Dev mailing list >> Python-Dev at python.org >> https://mail.python.org/mailman/listinfo/python-dev >> >> >> Unsubscribe: >> https://mail.python.org/mailman/options/python-dev/pludemann%40google.com >> >> _______________________________________________ >> Python-Dev mailing list >> Python-Dev at python.org >> https://mail.python.org/mailman/listinfo/python-dev >> >> Unsubscribe: >> https://mail.python.org/mailman/options/python-dev/brett%40python.org >> >> >> _______________________________________________ >> Python-Dev mailing list >> Python-Dev at python.org >> https://mail.python.org/mailman/listinfo/python-dev >> Unsubscribe: >> https://mail.python.org/mailman/options/python-dev/greg%40krypto.org >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.f.moore at gmail.com Mon Aug 17 12:02:15 2015 From: p.f.moore at gmail.com (Paul Moore) Date: Mon, 17 Aug 2015 11:02:15 +0100 Subject: [Python-Dev] PEP-498: Literal String Formatting In-Reply-To: References: <55C55DC3.8040605@trueblade.com> Message-ID: On 17 August 2015 at 05:34, Victor Stinner wrote: > 2015-08-16 7:21 GMT-07:00 Paul Moore : >> 2. By far and away the most common use for me would be things like >> print(f"Iteration {n}: Took {end-start) seconds"). At the moment I use >> str,format() for this, and it's annoyingly verbose. This would be a >> big win, and I'm +1 on the PEP for this specific reason. > > You can use a temporary variable, it's not much longer: > print("Iteration {n}: Took {dt) seconds".format(n=n, dt=end-start)) > becomes > dt = end - start > print(f"Iteration {n}: Took {dt) seconds") ... which is significantly shorter (my point). And using an inline expression print(f"Iteration {n}: Took {end-start) seconds") with (IMO) even better readability than the version with a temporary variable. > >> 3. All of the complex examples look scary, but in practice I wouldn't >> write stuff like that - why would anyone do so unless they were being >> deliberately obscure? > > I'm quite sure that users will write complex code in f-strings. So am I. Some people will always write bad code. I won't (or at least, I'll try not to write code that *I* consider to be complex :-)) but "you can use this construct to write bad code" isn't an argument for dropping the feature. If you couldn't find *good* uses, that would be different, but that doesn't seem to be the case here (at least in my view). Paul. From larry at hastings.org Mon Aug 17 13:48:49 2015 From: larry at hastings.org (Larry Hastings) Date: Mon, 17 Aug 2015 04:48:49 -0700 Subject: [Python-Dev] PEP-498: Literal String Formatting In-Reply-To: References: <55C55DC3.8040605@trueblade.com> Message-ID: <55D1CA21.8000803@hastings.org> On 08/17/2015 03:02 AM, Paul Moore wrote: > On 17 August 2015 at 05:34, Victor Stinner wrote: >> 2015-08-16 7:21 GMT-07:00 Paul Moore : >>> 3. All of the complex examples look scary, but in practice I wouldn't >>> write stuff like that - why would anyone do so unless they were being >>> deliberately obscure? >> I'm quite sure that users will write complex code in f-strings. > So am I. Some people will always write bad code. 
I won't (or at least, > I'll try not to write code that *I* consider to be complex :-)) but > "you can use this construct to write bad code" isn't an argument for > dropping the feature. If you couldn't find *good* uses, that would be > different, but that doesn't seem to be the case here (at least in my > view). I think this corner of the debate is covered by the "Consenting adults" guiding principle we use 'round these parts. Cheers, //arry/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.f.moore at gmail.com Mon Aug 17 13:59:03 2015 From: p.f.moore at gmail.com (Paul Moore) Date: Mon, 17 Aug 2015 12:59:03 +0100 Subject: [Python-Dev] PEP-498: Literal String Formatting In-Reply-To: <55D1CA21.8000803@hastings.org> References: <55C55DC3.8040605@trueblade.com> <55D1CA21.8000803@hastings.org> Message-ID: On 17 August 2015 at 12:48, Larry Hastings wrote: > I think this corner of the debate is covered by the "Consenting adults" > guiding principle we use 'round these parts. Precisely. Paul From eric at trueblade.com Mon Aug 17 16:13:04 2015 From: eric at trueblade.com (Eric V. Smith) Date: Mon, 17 Aug 2015 10:13:04 -0400 Subject: [Python-Dev] PEP-498: Literal String Formatting In-Reply-To: References: <55C55DC3.8040605@trueblade.com> <586DD2C2-2FCA-4A38-BF22-7E88347C290F@trueblade.com> Message-ID: <55D1EBF0.2030406@trueblade.com> On 08/16/2015 03:37 PM, Guido van Rossum wrote: > On Sun, Aug 16, 2015 at 8:55 PM, Eric V. Smith > wrote: > > Thanks, Paul. Good feedback. > > > Indeed, I smiled when I saw Paul's post. > > > Triple quoted and raw strings work like you'd expect, but you're > right: the PEP should make this clear. > > I might drop the leading spaces, for a technical reason having to > deal with passing the strings in to str.format. But I agree it's not > a big deal one way or the other. > > > Hm. I rather like allow optional leading/trailing spaces. Given that we > support arbitrary expressions, we have to support internal spaces; I > think that some people would really like to use leading/trailing spaces, > especially when there's text immediately against the other side of the > braces, as in > > f'Stuff{ len(self.busy) }more stuff' > > I also expect it might be useful to allow leading/trailing newlines, if > they are allowed at all (i.e. inside triple-quoted strings). E.g. > > f'''Stuff{ > len(self.busy) > }more stuff''' Okay, I'm sold. This works in my current implementation: >>> f'''foo ... { 3 } ... bar''' 'foo\n3\nbar' And since this currently works, there's no implementation specific reason to disallow leading and trailing whitespace: >>> '\n{\n3 + \n 1\t\n}\n'.format_map({'\n3 + \n 1\t\n':4}) '\n4\n' My current plan is to replace an f-string with a call to .format_map: >>> foo = 100 >>> bar = 20 >>> f'foo: {foo} bar: { bar+1}' Would become: 'foo: {foo} bar: { bar+1}'.format_map({'foo': 100, ' bar+1': 21}) The string on which format_map is called is the identical string that's in the source code. With the exception noted in PEP 498, I think this satisfies the principle of least surprise. As I've said elsewhere, we could then have some i18n function look up and replace the string before format_map is called on it. As long as it leaves the expression text alone, everything will work out fine. There are some quirks with having the same expression appear twice, if the expression has side effects. But I'm not so worried about that. 
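To make that concrete, here is a rough sketch of how such a lookup could slot in front of format_map. None of this is in the PEP: the translate() helper and the CATALOG dict below are made up purely for illustration, and a real i18n layer would of course be more involved.

# Hypothetical message catalog: keys are the original templates, values are
# pseudo-translations that keep every {expression} field exactly as written.
CATALOG = {
    'foo: {foo} bar: { bar+1}': 'FOO: {foo} BAR: { bar+1}',
}

def translate(template):
    # Fall back to the untranslated template if there is no catalog entry.
    return CATALOG.get(template, template)

foo = 100
bar = 20
values = {'foo': foo, ' bar+1': bar + 1}  # what the compiler would supply
print(translate('foo: {foo} bar: { bar+1}').format_map(values))
# FOO: 100 BAR: 21

The only requirement is that the lookup returns a template whose {...} fields are character-for-character identical to the originals.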
> Here's another thing for everybody's pondering: when tokenizing an > f-string, I think the pieces could each become tokens in their own > right. Then the rest of the parsing (and rules about whitespace etc.) > would become simpler because the grammar would deal with them. E.g. the > string above would be tokenized as follows: > > f'Stuff{ > len > ( > self > . > busy > ) > }more stuff' > > The understanding here is that there are these new types of tokens: > F_STRING_OPEN for f'...{, F_STRING_MIDDLE for }...{, F_STRING_END for > }...', and I suppose we also need F_STRING_OPEN_CLOSE for f'...' (i.e. > not containing any substitutions). These token types can then be used in > the grammar. (A complication would be different kinds of string quotes; > I propose to handle that in the lexer, otherwise the number of > open/close token types would balloon out of proportions.) This would save a few hundred lines of C code. But a quick glance at the lexer and I can't see how to make the opening quotes agree with the closing quotes. I think the i18n case (if we chose to support it) is better served by having the entire, unaltered source string available at run time. PEP 501 comes to a similar conclusion (http://legacy.python.org/dev/peps/pep-0501/#preserving-the-unmodified-format-string). Eric. From barry at python.org Mon Aug 17 16:50:18 2015 From: barry at python.org (Barry Warsaw) Date: Mon, 17 Aug 2015 10:50:18 -0400 Subject: [Python-Dev] PEP-498: Literal String Formatting In-Reply-To: References: <55C55DC3.8040605@trueblade.com> Message-ID: <20150817105018.24cbf305@anarchist.wooz.org> On Aug 17, 2015, at 11:02 AM, Paul Moore wrote: > print(f"Iteration {n}: Took {end-start) seconds") This illustrates (more) problems I have with arbitrary expressions. First, you've actually made a typo there; it should be "{end-start}" -- notice the trailing curly brace. Second, what if you typoed that as "{end_start}"? According to PEP 498 the original typo above should trigger a SyntaxError and the second a run-time error (NameError?). But how will syntax highlighters and linters help you discover your bugs before you've even saved the file? Currently, a lot of these types of problems can be found much earlier on through the use of such linters. Putting arbitrary expressions in strings will just hide them to these tools for the foreseeable future. I have a hard time seeing how Emacs's syntax highlighting could cope with it for example. Cheers, -Barry From rosuav at gmail.com Mon Aug 17 16:58:21 2015 From: rosuav at gmail.com (Chris Angelico) Date: Tue, 18 Aug 2015 00:58:21 +1000 Subject: [Python-Dev] PEP-498: Literal String Formatting In-Reply-To: <20150817105018.24cbf305@anarchist.wooz.org> References: <55C55DC3.8040605@trueblade.com> <20150817105018.24cbf305@anarchist.wooz.org> Message-ID: On Tue, Aug 18, 2015 at 12:50 AM, Barry Warsaw wrote: > On Aug 17, 2015, at 11:02 AM, Paul Moore wrote: > >> print(f"Iteration {n}: Took {end-start) seconds") > > This illustrates (more) problems I have with arbitrary expressions. > > First, you've actually made a typo there; it should be "{end-start}" -- notice > the trailing curly brace. Second, what if you typoed that as "{end_start}"? > According to PEP 498 the original typo above should trigger a SyntaxError and > the second a run-time error (NameError?). But how will syntax highlighters > and linters help you discover your bugs before you've even saved the file? 
> Currently, a lot of these types of problems can be found much earlier on > through the use of such linters. Putting arbitrary expressions in strings > will just hide them to these tools for the foreseeable future. I have a hard > time seeing how Emacs's syntax highlighting could cope with it for example. > The linters could tell you that you have no 'end' or 'start' just as easily when it's in that form as when it's written out in full. Certainly the mismatched brackets could easily be caught by any sort of syntax highlighter. The rules for f-strings are much simpler than, say, the PHP rules and the differences between ${...} and {$...}, which I've seen editors get wrong. ChrisA From barry at python.org Mon Aug 17 17:13:54 2015 From: barry at python.org (Barry Warsaw) Date: Mon, 17 Aug 2015 11:13:54 -0400 Subject: [Python-Dev] PEP-498: Literal String Formatting In-Reply-To: References: <55C55DC3.8040605@trueblade.com> <20150817105018.24cbf305@anarchist.wooz.org> Message-ID: <20150817111354.0ca9ab54@anarchist.wooz.org> On Aug 18, 2015, at 12:58 AM, Chris Angelico wrote: >The linters could tell you that you have no 'end' or 'start' just as >easily when it's in that form as when it's written out in full. >Certainly the mismatched brackets could easily be caught by any sort >of syntax highlighter. The rules for f-strings are much simpler than, >say, the PHP rules and the differences between ${...} and {$...}, >which I've seen editors get wrong. I'm really asking whether it's technically feasible and realistically possible for them to do so. I'd love to hear from the maintainers of pyflakes, pylint, Emacs, vim, and other editors, linters, and other static analyzers on a rough technical assessment of whether they can support this and how much work it would be. Cheers, -Barry From guido at python.org Mon Aug 17 20:24:35 2015 From: guido at python.org (Guido van Rossum) Date: Mon, 17 Aug 2015 11:24:35 -0700 Subject: [Python-Dev] PEP-498: Literal String Formatting In-Reply-To: <55D1EBF0.2030406@trueblade.com> References: <55C55DC3.8040605@trueblade.com> <586DD2C2-2FCA-4A38-BF22-7E88347C290F@trueblade.com> <55D1EBF0.2030406@trueblade.com> Message-ID: On Mon, Aug 17, 2015 at 7:13 AM, Eric V. Smith wrote: > [...] > My current plan is to replace an f-string with a call to .format_map: > >>> foo = 100 > >>> bar = 20 > >>> f'foo: {foo} bar: { bar+1}' > > Would become: > 'foo: {foo} bar: { bar+1}'.format_map({'foo': 100, ' bar+1': 21}) > > The string on which format_map is called is the identical string that's > in the source code. With the exception noted in PEP 498, I think this > satisfies the principle of least surprise. > Does this really work? Shouldn't this be using some internal variant of format_map() that doesn't attempt to interpret the keys in brackets in any ways? Otherwise there'd be problems with the different meaning of e.g. {a[x]} (unless I misunderstand .format_map() -- I'm assuming it's just like .format(**blah). > > As I've said elsewhere, we could then have some i18n function look up > and replace the string before format_map is called on it. As long as it > leaves the expression text alone, everything will work out fine. There > are some quirks with having the same expression appear twice, if the > expression has side effects. But I'm not so worried about that. > The more I hear Barry's objections against arbitrary expressions from the i18n POV the more I am thinking that this is just a square peg and a round hole situation, and we should leave i18n alone. 
The requirements for i18n are just too different than the requirements for other use cases (i18n cares deeply about preserving the original text of the {...} interpolations; the opposite is the case for the other use cases). > [...] > > The understanding here is that there are these new types of tokens: > > F_STRING_OPEN for f'...{, F_STRING_MIDDLE for }...{, F_STRING_END for > > }...', and I suppose we also need F_STRING_OPEN_CLOSE for f'...' (i.e. > > not containing any substitutions). These token types can then be used in > > the grammar. (A complication would be different kinds of string quotes; > > I propose to handle that in the lexer, otherwise the number of > > open/close token types would balloon out of proportions.) > > This would save a few hundred lines of C code. But a quick glance at the > lexer and I can't see how to make the opening quotes agree with the > closing quotes. > The lexer would have to develop another stack for this purpose. > I think the i18n case (if we chose to support it) is better served by > having the entire, unaltered source string available at run time. PEP > 501 comes to a similar conclusion > ( > http://legacy.python.org/dev/peps/pep-0501/#preserving-the-unmodified-format-string > ). Fair enough. -- --Guido van Rossum (python.org/~guido) -------------- next part -------------- An HTML attachment was scrubbed... URL: From guido at python.org Mon Aug 17 20:31:14 2015 From: guido at python.org (Guido van Rossum) Date: Mon, 17 Aug 2015 11:31:14 -0700 Subject: [Python-Dev] PEP-498: Literal String Formatting In-Reply-To: <20150817111354.0ca9ab54@anarchist.wooz.org> References: <55C55DC3.8040605@trueblade.com> <20150817105018.24cbf305@anarchist.wooz.org> <20150817111354.0ca9ab54@anarchist.wooz.org> Message-ID: On Mon, Aug 17, 2015 at 8:13 AM, Barry Warsaw wrote: > I'm really asking whether it's technically feasible and realistically > possible > for them to do so. I'd love to hear from the maintainers of pyflakes, > pylint, > Emacs, vim, and other editors, linters, and other static analyzers on a > rough > technical assessment of whether they can support this and how much work it > would be. > Those that aren't specific to Python will have to solve a similar problem for e.g. Swift, which supports \(...) in all strings with arbitrary expressions in the ..., or Perl which apparently also supports arbitrary expressions. Heck, even Bash supports something like this, "...$(command)...". I am not disinclined in adding some restrictions to make things a little more tractable, but they would be along the lines of the Swift restriction (the interpolated expression cannot contain string quotes). However, I do think we should support f"...{a['key']}...". -- --Guido van Rossum (python.org/~guido) -------------- next part -------------- An HTML attachment was scrubbed... URL: From Nikolaus at rath.org Mon Aug 17 21:23:05 2015 From: Nikolaus at rath.org (Nikolaus Rath) Date: Mon, 17 Aug 2015 12:23:05 -0700 Subject: [Python-Dev] PEP-498: Literal String Formatting In-Reply-To: (Paul Moore's message of "Sun, 16 Aug 2015 15:21:00 +0100") References: <55C55DC3.8040605@trueblade.com> Message-ID: <87zj1phg86.fsf@thinkpad.rath.org> On Aug 16 2015, Paul Moore wrote: > 2. By far and away the most common use for me would be things like > print(f"Iteration {n}: Took {end-start) seconds"). 
I believe an even more common use willl be print(f"Iteration {n+1}: Took {end-start} seconds") Note that not allowing expressions would turn this into the rather verbose: iteration=n+1 duration=end-start print(f"Iteration {iteration}: Took {duration} seconds") Best, -Nikolaus -- GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F ?Time flies like an arrow, fruit flies like a Banana.? From guido at python.org Mon Aug 17 21:46:04 2015 From: guido at python.org (Guido van Rossum) Date: Mon, 17 Aug 2015 12:46:04 -0700 Subject: [Python-Dev] PEP-498: Literal String Formatting In-Reply-To: <87zj1phg86.fsf@thinkpad.rath.org> References: <55C55DC3.8040605@trueblade.com> <87zj1phg86.fsf@thinkpad.rath.org> Message-ID: On Mon, Aug 17, 2015 at 12:23 PM, Nikolaus Rath wrote: > On Aug 16 2015, Paul Moore wrote: > > 2. By far and away the most common use for me would be things like > > print(f"Iteration {n}: Took {end-start) seconds"). > > I believe an even more common use willl be > > print(f"Iteration {n+1}: Took {end-start} seconds") > > Note that not allowing expressions would turn this into the rather > verbose: > > iteration=n+1 > duration=end-start > print(f"Iteration {iteration}: Took {duration} seconds") Let's stop debating this point -- any acceptable solution will have to support (more-or-less) arbitrary expressions. *If* we end up also attempting to solve i18n, then it will be up to the i18n toolchain to require a stricter syntax. (I imagine this could be done during the string extraction phase.) -- --Guido van Rossum (python.org/~guido) -------------- next part -------------- An HTML attachment was scrubbed... URL: From eric at trueblade.com Mon Aug 17 22:26:13 2015 From: eric at trueblade.com (Eric V. Smith) Date: Mon, 17 Aug 2015 16:26:13 -0400 Subject: [Python-Dev] PEP-498: Literal String Formatting In-Reply-To: References: <55C55DC3.8040605@trueblade.com> <586DD2C2-2FCA-4A38-BF22-7E88347C290F@trueblade.com> <55D1EBF0.2030406@trueblade.com> Message-ID: <55D24365.7080602@trueblade.com> On 8/17/2015 2:24 PM, Guido van Rossum wrote: > On Mon, Aug 17, 2015 at 7:13 AM, Eric V. Smith > wrote: > > [...] > My current plan is to replace an f-string with a call to .format_map: > >>> foo = 100 > >>> bar = 20 > >>> f'foo: {foo} bar: { bar+1}' > > Would become: > 'foo: {foo} bar: { bar+1}'.format_map({'foo': 100, ' bar+1': 21}) > > The string on which format_map is called is the identical string that's > in the source code. With the exception noted in PEP 498, I think this > satisfies the principle of least surprise. > > > Does this really work? Shouldn't this be using some internal variant of > format_map() that doesn't attempt to interpret the keys in brackets in > any ways? Otherwise there'd be problems with the different meaning of > e.g. {a[x]} (unless I misunderstand .format_map() -- I'm assuming it's > just like .format(**blah). Good point. It will require a similar function to format_map which doesn't interpret the contents of the braces (except to the extent that the f-string parser already has to). For argument's sake in point #4 below, let's call this str.format_map_simple. > As I've said elsewhere, we could then have some i18n function look up > and replace the string before format_map is called on it. As long as it > leaves the expression text alone, everything will work out fine. There > are some quirks with having the same expression appear twice, if the > expression has side effects. But I'm not so worried about that. 
> > > The more I hear Barry's objections against arbitrary expressions from > the i18n POV the more I am thinking that this is just a square peg and a > round hole situation, and we should leave i18n alone. The requirements > for i18n are just too different than the requirements for other use > cases (i18n cares deeply about preserving the original text of the {...} > interpolations; the opposite is the case for the other use cases). I think it would be possible to create a version of this that works for both i18n and regular interpolation. I think the open issues are: 1. Barry wants the substitutions to look like $identifier and possibly ${identifier}, and the PEP 498 proposal just uses {}. 2. There needs to be a way to identify interpolated strings and i18n strings, and possibly combinations of those. This leads to PEP 501's i- and iu- strings. 3. A way to enforce identifiers-only, instead of generalized expressions. 4. We need a "safe substitution" mode for str.format_map_simple (from above). #1 is just a matter of preference: there's no technical reason to prefer {} over $ or ${}. We can make any decision here. I prefer {} because it's the same as str.format. #2 needs to be decided in concert with the tooling needed to extract the strings from the source code. The particular prefixes are up for debate. I'm not a big fan of using "u" to have a meaning different from it's current "do nothing" interpretation in 3.5. But really any prefixes will do, if we decide to use string prefixes. I think that's the question: do we want to distinguish among these cases using string prefixes or combinations thereof? #3 is doable, either at runtime or in the tooling that does the string extraction. #4 is simple, as long as we always turn it on for the localized strings. Personally I can go either way on including i18n. But I agree it's beginning to sound like i18n is just too complicated for PEP 498, and I think PEP 501 is already too complicated. I'd like to make a decision on this one way or the other, so we can move forward. > [...] > > The understanding here is that there are these new types of tokens: > > F_STRING_OPEN for f'...{, F_STRING_MIDDLE for }...{, F_STRING_END for > > }...', and I suppose we also need F_STRING_OPEN_CLOSE for f'...' (i.e. > > not containing any substitutions). These token types can then be used in > > the grammar. (A complication would be different kinds of string quotes; > > I propose to handle that in the lexer, otherwise the number of > > open/close token types would balloon out of proportions.) > > This would save a few hundred lines of C code. But a quick glance at the > lexer and I can't see how to make the opening quotes agree with the > closing quotes. > > > The lexer would have to develop another stack for this purpose. I'll give it some thought. Eric. From guido at python.org Mon Aug 17 22:36:07 2015 From: guido at python.org (Guido van Rossum) Date: Mon, 17 Aug 2015 13:36:07 -0700 Subject: [Python-Dev] PEP-498: Literal String Formatting In-Reply-To: <55D24365.7080602@trueblade.com> References: <55C55DC3.8040605@trueblade.com> <586DD2C2-2FCA-4A38-BF22-7E88347C290F@trueblade.com> <55D1EBF0.2030406@trueblade.com> <55D24365.7080602@trueblade.com> Message-ID: On Mon, Aug 17, 2015 at 1:26 PM, Eric V. Smith wrote: > [...] > I think it would be possible to create a version of this that works for > both i18n and regular interpolation. I think the open issues are: > > 1. 
Barry wants the substitutions to look like $identifier and possibly > ${identifier}, and the PEP 498 proposal just uses {}. > > 2. There needs to be a way to identify interpolated strings and i18n > strings, and possibly combinations of those. This leads to PEP 501's i- > and iu- strings. > > 3. A way to enforce identifiers-only, instead of generalized expressions. > In an off-list message to Barry and Nick I came up with the same three points. :-) I think #2 is the hard one (unless we adopt a solution like Yury just proposed where you can have an arbitrary identifier in front of a string literal). > 4. We need a "safe substitution" mode for str.format_map_simple (from > above). > > #1 is just a matter of preference: there's no technical reason to prefer > {} over $ or ${}. We can make any decision here. I prefer {} because > it's the same as str.format. > > #2 needs to be decided in concert with the tooling needed to extract the > strings from the source code. The particular prefixes are up for debate. > I'm not a big fan of using "u" to have a meaning different from it's > current "do nothing" interpretation in 3.5. But really any prefixes will > do, if we decide to use string prefixes. I think that's the question: do > we want to distinguish among these cases using string prefixes or > combinations thereof? > > #3 is doable, either at runtime or in the tooling that does the string > extraction. > > #4 is simple, as long as we always turn it on for the localized strings. > > Personally I can go either way on including i18n. But I agree it's > beginning to sound like i18n is just too complicated for PEP 498, and I > think PEP 501 is already too complicated. I'd like to make a decision on > this one way or the other, so we can move forward. > What's the rush? There's plenty of time before Python 3.6. > > [...] > > > The understanding here is that there are these new types of tokens: > > > F_STRING_OPEN for f'...{, F_STRING_MIDDLE for }...{, F_STRING_END > for > > > }...', and I suppose we also need F_STRING_OPEN_CLOSE for f'...' > (i.e. > > > not containing any substitutions). These token types can then be > used in > > > the grammar. (A complication would be different kinds of string > quotes; > > > I propose to handle that in the lexer, otherwise the number of > > > open/close token types would balloon out of proportions.) > > > > This would save a few hundred lines of C code. But a quick glance at > the > > lexer and I can't see how to make the opening quotes agree with the > > closing quotes. > > > > > > The lexer would have to develop another stack for this purpose. > > I'll give it some thought. > > Eric. > -- --Guido van Rossum (python.org/~guido) -------------- next part -------------- An HTML attachment was scrubbed... URL: From steve.dower at python.org Tue Aug 18 00:06:14 2015 From: steve.dower at python.org (Steve Dower) Date: Mon, 17 Aug 2015 15:06:14 -0700 Subject: [Python-Dev] PEP-498: Literal String Formatting In-Reply-To: <20150817111354.0ca9ab54@anarchist.wooz.org> References: <55C55DC3.8040605@trueblade.com> <20150817105018.24cbf305@anarchist.wooz.org> <20150817111354.0ca9ab54@anarchist.wooz.org> Message-ID: <55D25AD6.2040907@python.org> On 17Aug2015 0813, Barry Warsaw wrote: > On Aug 18, 2015, at 12:58 AM, Chris Angelico wrote: > >> The linters could tell you that you have no 'end' or 'start' just as >> easily when it's in that form as when it's written out in full. >> Certainly the mismatched brackets could easily be caught by any sort >> of syntax highlighter. 
The rules for f-strings are much simpler than, >> say, the PHP rules and the differences between ${...} and {$...}, >> which I've seen editors get wrong. > > I'm really asking whether it's technically feasible and realistically possible > for them to do so. I'd love to hear from the maintainers of pyflakes, pylint, > Emacs, vim, and other editors, linters, and other static analyzers on a rough > technical assessment of whether they can support this and how much work it > would be. With the current format string proposals (allowing arbitrary expressions) I think I'd implement it in our parser with a FORMAT_STRING_TOKEN, a FORMAT_STRING_JOIN_OPERATOR and a FORMAT_STRING_FORMAT_OPERATOR. A FORMAT_STRING_TOKEN would be started by f('|"|'''|""") and ended by matching quotes or before an open brace (that is not escaped). A FORMAT_STRING_JOIN_OPERATOR would be inserted as the '{', which we'd either colour as part of the string or the regular brace colour. This also enables a parsing context where a colon becomes the FORMAT_STRING_FORMAT_OPERATOR and the right-hand side of this binary operator would be FORMAT_STRING_TOKEN. The final close brace becomes another FORMAT_STRING_JOIN_OPERATOR and the rest of the string is FORMAT_STRING_TOKEN. So it'd translate something like this: f"This {text} is my {string:>{length+3}}" FORMAT_STRING_TOKEN[f"This ] FORMAT_STRING_JOIN_OPERATOR[{] IDENTIFIER[text] FORMAT_STRING_JOIN_OPERATOR[}] FORMAT_STRING_TOKEN[ is my ] FORMAT_STRING_JOIN_OPERATOR[{] IDENTIFIER[string] FORMAT_STRING_FORMAT_OPERATOR[:] FORMAT_STRING_TOKEN[>] FORMAT_STRING_JOIN_OPERATOR[{] IDENTIFIER[length] OPERATOR[+] NUMBER[3] FORMAT_STRING_JOIN_OPERATOR[}] FORMAT_STRING_TOKEN[] FORMAT_STRING_JOIN_OPERATOR[}] FORMAT_STRING_TOKEN["] I *believe* (without having tried it) that this would let us produce a valid tokenisation (in our model) without too much difficulty, and highlight/analyse correctly, including validating matching braces. Getting the precedence correct on the operators might be more difficult, but we may also just produce an AST that looks like a function call, since that will give us "good enough" handling once we're past tokenisation. A simpler tokenisation that would probably be sufficient for many editors would be to treat the first and last segments ([f"This {] and [}"]) as groupings and each section of text as separators, giving this: OPEN_GROUPING[f"This {] EXPRESSION[text] COMMA[} is my {] EXPRESSION[string] COMMA[:>{] EXPRESSION[length+3] COMMA[}}] CLOSE_GROUPING["] Initial parsing may be a little harder, but it should mean less trouble when expressions spread across multiple lines, since that is already handled for other types of groupings. And if any code analysis is occurring, it should be happening for dict/list/etc. contents already, and so format strings will get it too. So I'm confident we can support it, and I expect either of these two approaches will work for most tools without too much trouble. (There's also a middle ground where you create new tokens for format string components, but combine them like the second example.) 
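To make the simpler grouping approach concrete, here is a rough, self-contained sketch of the kind of scanner an editor could build on. It is not any real tool's lexer, the token names are made up for illustration, and it assumes it is handed the body of the f-string with the prefix and quotes already stripped:

def scan_fstring(body):
    """Split the body of an f-string into ('TEXT', ...) and ('FIELD', ...) tokens.

    'body' is the text between the quotes, e.g.
    'This {text} is my {string:>{length+3}}'. Doubled braces are treated as
    escapes; nested braces (format specs) stay inside a single FIELD token,
    as in the second scheme above.
    """
    i, n, literal = 0, len(body), []
    while i < n:
        ch = body[i]
        if ch == '{':
            if body[i + 1:i + 2] == '{':        # escaped '{{'
                literal.append('{')
                i += 2
                continue
            if literal:
                yield ('TEXT', ''.join(literal))
                literal = []
            depth, j = 1, i + 1
            while j < n and depth:
                if body[j] == '{':
                    depth += 1
                elif body[j] == '}':
                    depth -= 1
                j += 1
            if depth:
                raise SyntaxError("unmatched '{' in f-string")
            yield ('FIELD', body[i + 1:j - 1])
            i = j
        elif ch == '}':
            if body[i + 1:i + 2] == '}':        # escaped '}}'
                literal.append('}')
                i += 2
                continue
            raise SyntaxError("single '}' in f-string")
        else:
            literal.append(ch)
            i += 1
    if literal:
        yield ('TEXT', ''.join(literal))

for token in scan_fstring("This {text} is my {string:>{length+3}}"):
    print(token)
# ('TEXT', 'This ')
# ('FIELD', 'text')
# ('TEXT', ' is my ')
# ('FIELD', 'string:>{length+3}')

A real highlighter would then feed each FIELD's text back through the ordinary expression tokenizer, which is where the two schemes above differ.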
Cheers, Steve > Cheers, > -Barry From steve.dower at python.org Tue Aug 18 01:08:11 2015 From: steve.dower at python.org (Steve Dower) Date: Mon, 17 Aug 2015 16:08:11 -0700 Subject: [Python-Dev] PEP-498: Literal String Formatting In-Reply-To: <55D25AD6.2040907@python.org> References: <55C55DC3.8040605@trueblade.com> <20150817105018.24cbf305@anarchist.wooz.org> <20150817111354.0ca9ab54@anarchist.wooz.org> <55D25AD6.2040907@python.org> Message-ID: <55D2695B.7060004@python.org> On 17Aug2015 1506, Steve Dower wrote: > On 17Aug2015 0813, Barry Warsaw wrote: >> On Aug 18, 2015, at 12:58 AM, Chris Angelico wrote: >> >>> The linters could tell you that you have no 'end' or 'start' just as >>> easily when it's in that form as when it's written out in full. >>> Certainly the mismatched brackets could easily be caught by any sort >>> of syntax highlighter. The rules for f-strings are much simpler than, >>> say, the PHP rules and the differences between ${...} and {$...}, >>> which I've seen editors get wrong. >> >> I'm really asking whether it's technically feasible and realistically >> possible >> for them to do so. I'd love to hear from the maintainers of pyflakes, >> pylint, >> Emacs, vim, and other editors, linters, and other static analyzers on >> a rough >> technical assessment of whether they can support this and how much >> work it >> would be. > > With the current format string proposals (allowing arbitrary > expressions) I think I'd implement it in our parser with a > FORMAT_STRING_TOKEN, a FORMAT_STRING_JOIN_OPERATOR and a > FORMAT_STRING_FORMAT_OPERATOR. > > A FORMAT_STRING_TOKEN would be started by f('|"|'''|""") and ended by > matching quotes or before an open brace (that is not escaped). > > A FORMAT_STRING_JOIN_OPERATOR would be inserted as the '{', which we'd > either colour as part of the string or the regular brace colour. This > also enables a parsing context where a colon becomes the > FORMAT_STRING_FORMAT_OPERATOR and the right-hand side of this binary > operator would be FORMAT_STRING_TOKEN. The final close brace becomes > another FORMAT_STRING_JOIN_OPERATOR and the rest of the string is > FORMAT_STRING_TOKEN. > > So it'd translate something like this: > > f"This {text} is my {string:>{length+3}}" > > FORMAT_STRING_TOKEN[f"This ] > FORMAT_STRING_JOIN_OPERATOR[{] > IDENTIFIER[text] > FORMAT_STRING_JOIN_OPERATOR[}] > FORMAT_STRING_TOKEN[ is my ] > FORMAT_STRING_JOIN_OPERATOR[{] > IDENTIFIER[string] > FORMAT_STRING_FORMAT_OPERATOR[:] > FORMAT_STRING_TOKEN[>] > FORMAT_STRING_JOIN_OPERATOR[{] > IDENTIFIER[length] > OPERATOR[+] > NUMBER[3] > FORMAT_STRING_JOIN_OPERATOR[}] > FORMAT_STRING_TOKEN[] > FORMAT_STRING_JOIN_OPERATOR[}] > FORMAT_STRING_TOKEN["] > > I *believe* (without having tried it) that this would let us produce a > valid tokenisation (in our model) without too much difficulty, and > highlight/analyse correctly, including validating matching braces. > Getting the precedence correct on the operators might be more difficult, > but we may also just produce an AST that looks like a function call, > since that will give us "good enough" handling once we're past > tokenisation. 
> > A simpler tokenisation that would probably be sufficient for many > editors would be to treat the first and last segments ([f"This {] and > [}"]) as groupings and each section of text as separators, giving this: > > OPEN_GROUPING[f"This {] > EXPRESSION[text] > COMMA[} is my {] > EXPRESSION[string] > COMMA[:>{] > EXPRESSION[length+3] > COMMA[}}] > CLOSE_GROUPING["] > > Initial parsing may be a little harder, but it should mean less trouble > when expressions spread across multiple lines, since that is already > handled for other types of groupings. And if any code analysis is > occurring, it should be happening for dict/list/etc. contents already, > and so format strings will get it too. > > So I'm confident we can support it, and I expect either of these two > approaches will work for most tools without too much trouble. (There's > also a middle ground where you create new tokens for format string > components, but combine them like the second example.) The middle ground would probably be required for correct highlighting. I implied but didn't specify that the tokens in my second example would get special treatment here. > Cheers, > Steve > >> Cheers, >> -Barry From python at mrabarnett.plus.com Tue Aug 18 01:18:37 2015 From: python at mrabarnett.plus.com (MRAB) Date: Tue, 18 Aug 2015 00:18:37 +0100 Subject: [Python-Dev] PEP-498: Literal String Formatting In-Reply-To: <55D25AD6.2040907@python.org> References: <55C55DC3.8040605@trueblade.com> <20150817105018.24cbf305@anarchist.wooz.org> <20150817111354.0ca9ab54@anarchist.wooz.org> <55D25AD6.2040907@python.org> Message-ID: <55D26BCD.9030804@mrabarnett.plus.com> On 2015-08-17 23:06, Steve Dower wrote: > On 17Aug2015 0813, Barry Warsaw wrote: >> On Aug 18, 2015, at 12:58 AM, Chris Angelico wrote: >> >>> The linters could tell you that you have no 'end' or 'start' just as >>> easily when it's in that form as when it's written out in full. >>> Certainly the mismatched brackets could easily be caught by any sort >>> of syntax highlighter. The rules for f-strings are much simpler than, >>> say, the PHP rules and the differences between ${...} and {$...}, >>> which I've seen editors get wrong. >> >> I'm really asking whether it's technically feasible and realistically possible >> for them to do so. I'd love to hear from the maintainers of pyflakes, pylint, >> Emacs, vim, and other editors, linters, and other static analyzers on a rough >> technical assessment of whether they can support this and how much work it >> would be. > > With the current format string proposals (allowing arbitrary > expressions) I think I'd implement it in our parser with a > FORMAT_STRING_TOKEN, a FORMAT_STRING_JOIN_OPERATOR and a > FORMAT_STRING_FORMAT_OPERATOR. > > A FORMAT_STRING_TOKEN would be started by f('|"|'''|""") and ended by > matching quotes or before an open brace (that is not escaped). > > A FORMAT_STRING_JOIN_OPERATOR would be inserted as the '{', which we'd > either colour as part of the string or the regular brace colour. This > also enables a parsing context where a colon becomes the > FORMAT_STRING_FORMAT_OPERATOR and the right-hand side of this binary > operator would be FORMAT_STRING_TOKEN. The final close brace becomes > another FORMAT_STRING_JOIN_OPERATOR and the rest of the string is > FORMAT_STRING_TOKEN. 
> > So it'd translate something like this: > > f"This {text} is my {string:>{length+3}}" > > FORMAT_STRING_TOKEN[f"This ] > FORMAT_STRING_JOIN_OPERATOR[{] > IDENTIFIER[text] > FORMAT_STRING_JOIN_OPERATOR[}] > FORMAT_STRING_TOKEN[ is my ] > FORMAT_STRING_JOIN_OPERATOR[{] > IDENTIFIER[string] > FORMAT_STRING_FORMAT_OPERATOR[:] > FORMAT_STRING_TOKEN[>] > FORMAT_STRING_JOIN_OPERATOR[{] > IDENTIFIER[length] > OPERATOR[+] > NUMBER[3] > FORMAT_STRING_JOIN_OPERATOR[}] > FORMAT_STRING_TOKEN[] > FORMAT_STRING_JOIN_OPERATOR[}] > FORMAT_STRING_TOKEN["] > I'm not sure about that. I think it might work better with, say, FORMAT_OPEN for '{' and FORMAT_CLOSE for '}': FORMAT_STRING_TOKEN[f"This ] FORMAT_OPEN IDENTIFIER[text] FORMAT_CLOSE FORMAT_STRING_TOKEN[ is my ] FORMAT_OPEN IDENTIFIER[string] FORMAT_STRING_FORMAT_OPERATOR[:] FORMAT_STRING_TOKEN[>] FORMAT_OPEN IDENTIFIER[length] OPERATOR[+] NUMBER[3] FORMAT_CLOSE FORMAT_CLOSE FORMAT_STRING_TOKEN["] > I *believe* (without having tried it) that this would let us produce a > valid tokenisation (in our model) without too much difficulty, and > highlight/analyse correctly, including validating matching braces. > Getting the precedence correct on the operators might be more difficult, > but we may also just produce an AST that looks like a function call, > since that will give us "good enough" handling once we're past tokenisation. > > A simpler tokenisation that would probably be sufficient for many > editors would be to treat the first and last segments ([f"This {] and > [}"]) as groupings and each section of text as separators, giving this: > > OPEN_GROUPING[f"This {] > EXPRESSION[text] > COMMA[} is my {] > EXPRESSION[string] > COMMA[:>{] > EXPRESSION[length+3] > COMMA[}}] > CLOSE_GROUPING["] > > Initial parsing may be a little harder, but it should mean less trouble > when expressions spread across multiple lines, since that is already > handled for other types of groupings. And if any code analysis is > occurring, it should be happening for dict/list/etc. contents already, > and so format strings will get it too. > > So I'm confident we can support it, and I expect either of these two > approaches will work for most tools without too much trouble. (There's > also a middle ground where you create new tokens for format string > components, but combine them like the second example.) > > Cheers, > Steve > >> Cheers, >> -Barry > > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/python%40mrabarnett.plus.com > > From wes.turner at gmail.com Tue Aug 18 02:57:05 2015 From: wes.turner at gmail.com (Wes Turner) Date: Mon, 17 Aug 2015 19:57:05 -0500 Subject: [Python-Dev] PEP-498: Literal String Formatting In-Reply-To: <87zj1phg86.fsf@thinkpad.rath.org> References: <55C55DC3.8040605@trueblade.com> <87zj1phg86.fsf@thinkpad.rath.org> Message-ID: On Aug 17, 2015 2:23 PM, "Nikolaus Rath" wrote: > > On Aug 16 2015, Paul Moore wrote: > > 2. By far and away the most common use for me would be things like > > print(f"Iteration {n}: Took {end-start) seconds"). > > I believe an even more common use willl be > > print(f"Iteration {n+1}: Took {end-start} seconds") > > Note that not allowing expressions would turn this into the rather > verbose: > > iteration=n+1 > duration=end-start > print(f"Iteration {iteration}: Took {duration} seconds") * Is this more testable? * mutability of e.g. 
end.__sub__ * (do I add this syntax highlighting for Python < 3.6?) > > > Best, > -Nikolaus > > -- > GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F > Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F > > "Time flies like an arrow, fruit flies like a Banana." > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/wes.turner%40gmail.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From stephen at xemacs.org Tue Aug 18 03:28:16 2015 From: stephen at xemacs.org (Stephen J. Turnbull) Date: Tue, 18 Aug 2015 10:28:16 +0900 Subject: [Python-Dev] PEP-498: Literal String Formatting In-Reply-To: <20150817105018.24cbf305@anarchist.wooz.org> References: <55C55DC3.8040605@trueblade.com> <20150817105018.24cbf305@anarchist.wooz.org> Message-ID: <87k2stwfkf.fsf@uwakimon.sk.tsukuba.ac.jp> Barry Warsaw writes: > On Aug 17, 2015, at 11:02 AM, Paul Moore wrote: > > > print(f"Iteration {n}: Took {end-start) seconds") > > This illustrates (more) problems I have with arbitrary expressions. > > First, you've actually made a typo there; it should be > "{end-start}" -- notice the trailing curly brace. Second, what if > you typoed that as "{end_start}"? According to PEP 498 the > original typo above should trigger a SyntaxError That ship has sailed, you have the same problem with str.format format strings already. > and the second a run-time error (NameError?). Ditto. > But how will syntax highlighters and linters help you discover your > bugs before you've even saved the file? They need to recognize that a string prefixed with "f" is special, that it's not just a single token, then parse the syntax. The hardest part is finding the end-of-string delimiter! The expression itself is not a problem, since either we already have the code to handle the expression, or we don't (and your whole point is moot). Emacs abandoned the idea that you should do syntax highlighting without parsing well over a decade ago. If Python can implement the syntax, Emacs can highlight it. It's just a question of if there's will to do it on the part of the python-mode maintainers. I'm sure the same can be said about other linters and highlighters for Python, though I have no part in implementing them. From alexander.belopolsky at gmail.com Tue Aug 18 04:12:21 2015 From: alexander.belopolsky at gmail.com (Alexander Belopolsky) Date: Mon, 17 Aug 2015 22:12:21 -0400 Subject: [Python-Dev] [Datetime-SIG] PEP 495 (Local Time Disambiguation) is ready for pronouncement In-Reply-To: References: Message-ID: [Posted on Python-Dev] On Sun, Aug 16, 2015 at 3:23 PM, Guido van Rossum wrote: > I think that a courtesy message to python-dev is appropriate, with a link to > the PEP and an invitation to discuss its merits on datetime-sig. Per Guido's advice, this is an invitation to join the PEP 495 discussion on Datetime-SIG. If you would like to catch up on the SIG discussion, the archive of this thread starts at . The PEP itself can be found at , but if you would like to follow draft updates as they happen, you can do it on Github at . Even though the PEP is deliberately minimal in scope, there are still a few issues to be ironed out including how to call the disambiguation flag. It is agreed that the name should not refer to DST and should distinguish between two ambiguous times by their chronological order.
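For anyone who has not followed the SIG, the ambiguity in question is easy to demonstrate with nothing but the stdlib; the fixed-offset zones below stand in for the real US/Eastern rules, and the flag is what would let a single wall-clock value say which of the two instants it means:

from datetime import datetime, timedelta, timezone

# When US/Eastern falls back on 2015-11-01, the wall clock shows 01:30 twice:
# first in EDT (UTC-4), then an hour later in EST (UTC-5).
EDT = timezone(timedelta(hours=-4), 'EDT')
EST = timezone(timedelta(hours=-5), 'EST')

earlier = datetime(2015, 11, 1, 1, 30, tzinfo=EDT)
later = datetime(2015, 11, 1, 1, 30, tzinfo=EST)

print(earlier.astimezone(timezone.utc))   # 2015-11-01 05:30:00+00:00
print(later.astimezone(timezone.utc))     # 2015-11-01 06:30:00+00:00

# A naive datetime(2015, 11, 1, 1, 30) interpreted in that zone could mean
# either instant; the proposed flag records which one was intended.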
The working name is "first", but no one particularly likes it including the author of the PEP. Some candidates are discussed in the PEP at , and some more have been suggested that I will add soon. Please direct your responses to . From robertc at robertcollins.net Tue Aug 18 04:37:23 2015 From: robertc at robertcollins.net (Robert Collins) Date: Tue, 18 Aug 2015 14:37:23 +1200 Subject: [Python-Dev] Burning down the backlog. In-Reply-To: References: Message-ID: On 26 July 2015 at 07:28, Robert Collins wrote: > On 21 July 2015 at 19:40, Nick Coghlan wrote: > >> All of this is why the chart that I believe should be worrying people >> is the topmost one on this page: >> http://bugs.python.org/issue?@template=stats >> >> Both the number of open issues and the number of open issues with >> patches are steadily trending upwards. That means the bottleneck in >> the current process *isn't* getting patches written in the first >> place, it's getting them up to the appropriate standards and applied. >> Yet the answer to the problem isn't a simple "recruit more core >> developers", as the existing core developers are *also* the bottleneck >> in the review and mentoring process for *new* core developers. > > Those charts doesn't show patches in 'commit-review' - > http://bugs.python.org/issue?%40columns=title&%40columns=id&stage=5&%40columns=activity&%40sort=activity&status=1&%40columns=status&%40pagesize=50&%40startwith=0&%40sortdir=on&%40action=search > > There are only 45 of those patches. > > AIUI - and I'm very new to core here - anyone in triagers can get > patches up to commit-review status. > > I think we should set a goal to keep inventory low here - e.g. review > and either bounce back to patch review, or commit, in less than a > month. Now - a month isn't super low, but we have lots of stuff > greater than a month. > > For my part, I'm going to pick up more or less one thing a day and > review it, but I think it would be great if other committers were to > also to do this: if we had 5 of us doing 1 a day, I think we'd burn > down this 45 patch backlog rapidly without significant individual > cost. At which point, we can fairly say to folk doing triage that > we're ready for patches :) We're down to 9 such patches, and reading through them today there are none that I felt comfortable moving forward: either their state is unclear, or they are waiting for action from a *specific* core. However - 9 isn't a bad number for 'patches that the triagers think are ready to commit' inventory. So yay!. Also - triagers, thank you for feeding patches through the process. Please keep it up :) -Rob -- Robert Collins Distinguished Technologist HP Converged Cloud From ben+python at benfinney.id.au Tue Aug 18 04:47:22 2015 From: ben+python at benfinney.id.au (Ben Finney) Date: Tue, 18 Aug 2015 12:47:22 +1000 Subject: [Python-Dev] Burning down the backlog. References: Message-ID: <85si7hfh39.fsf@benfinney.id.au> Robert Collins writes: > However - 9 isn't a bad number for 'patches that the triagers think > are ready to commit' inventory. > > So yay!. Also - triagers, thank you for feeding patches through the > process. Please keep it up :) If I were a cheerleader I would be able to lead a rousing ?Yay, go team backlog burners!? -- \ ?I may disagree with what you say, but I will defend to the | `\ death your right to mis-attribute this quote to Voltaire.? 
| _o__) ?Avram Grumer, rec.arts.sf.written, 2000-05-30 | Ben Finney From njs at pobox.com Tue Aug 18 05:01:14 2015 From: njs at pobox.com (Nathaniel Smith) Date: Mon, 17 Aug 2015 20:01:14 -0700 Subject: [Python-Dev] Burning down the backlog. In-Reply-To: References: Message-ID: On Mon, Aug 17, 2015 at 7:37 PM, Robert Collins wrote: > On 26 July 2015 at 07:28, Robert Collins wrote: >> For my part, I'm going to pick up more or less one thing a day and >> review it, but I think it would be great if other committers were to >> also to do this: if we had 5 of us doing 1 a day, I think we'd burn >> down this 45 patch backlog rapidly without significant individual >> cost. At which point, we can fairly say to folk doing triage that >> we're ready for patches :) > > We're down to 9 such patches, and reading through them today there are > none that I felt comfortable moving forward: either their state is > unclear, or they are waiting for action from a *specific* core. > > However - 9 isn't a bad number for 'patches that the triagers think > are ready to commit' inventory. > > So yay!. Also - triagers, thank you for feeding patches through the > process. Please keep it up :) Awesome! If you're looking for something to do, the change in this patch had broad consensus, but has been stalled waiting for review for a while, and the lack of a final decision is leaving other projects in a somewhat uncomfortable position (they want to match CPython but CPython isn't deciding): https://bugs.python.org/issue24294 ;-) -n -- Nathaniel J. Smith -- http://vorpus.org From rdmurray at bitdance.com Tue Aug 18 16:52:10 2015 From: rdmurray at bitdance.com (R. David Murray) Date: Tue, 18 Aug 2015 10:52:10 -0400 Subject: [Python-Dev] Burning down the backlog. In-Reply-To: <85si7hfh39.fsf@benfinney.id.au> References: <85si7hfh39.fsf@benfinney.id.au> Message-ID: <20150818145210.8B223B30003@webabinitio.net> On Tue, 18 Aug 2015 12:47:22 +1000, Ben Finney wrote: > Robert Collins writes: > > > However - 9 isn't a bad number for 'patches that the triagers think > > are ready to commit' inventory. > > > > So yay!. Also - triagers, thank you for feeding patches through the > > process. Please keep it up :) > > If I were a cheerleader I would be able to lead a rousing ???Yay, go team > backlog burners!??? Which at this point in time I think pretty much means Robert, who I also extend a hearty thanks to. (I think I moved one issue out of commit review because the test didn't fail, and that's been it for me since Robert started his burn down...) --David From willingc at willingconsulting.com Tue Aug 18 17:02:33 2015 From: willingc at willingconsulting.com (Carol Willing) Date: Tue, 18 Aug 2015 08:02:33 -0700 Subject: [Python-Dev] Burning down the backlog. In-Reply-To: <20150818145210.8B223B30003@webabinitio.net> References: <85si7hfh39.fsf@benfinney.id.au> <20150818145210.8B223B30003@webabinitio.net> Message-ID: <55D34909.8090301@willingconsulting.com> On 8/18/15 7:52 AM, R. David Murray wrote: > On Tue, 18 Aug 2015 12:47:22 +1000, Ben Finney wrote: >> Robert Collins writes: >> >>> However - 9 isn't a bad number for 'patches that the triagers think >>> are ready to commit' inventory. >>> >>> So yay!. Also - triagers, thank you for feeding patches through the >>> process. Please keep it up :) >> If I were a cheerleader I would be able to lead a rousing ???Yay, go team >> backlog burners!??? > Which at this point in time I think pretty much means Robert, who I also > extend a hearty thanks to. 
(I think I moved one issue out of commit > review because the test didn't fail, and that's been it for me since > Robert started his burn down...) > > --David Thank you Robert, David, and Ben :D Is anyone game for setting another goal to tackle a targeted subset of patches in "patch review" and move them to "commit review"? From valentine.sinitsyn at gmail.com Wed Aug 19 09:53:47 2015 From: valentine.sinitsyn at gmail.com (Valentine Sinitsyn) Date: Wed, 19 Aug 2015 12:53:47 +0500 Subject: [Python-Dev] tp_finalize vs tp_del sematics Message-ID: <55D4360B.6010400@gmail.com> Hi everybody, I'm trying to get sense of PEP-0442 [1]. Most of the looks clear, however I wasn't able to answer myself one simple question: why it wasn't possible to implement proposed CI disposal scheme on top of tp_del? Common sense suggests that tp_del and tp_finalize have different semantics. For instance, with tp_finalize there is a guarantee that the object will be in a safe state, as documented at [2]. However, tp_del is not documented, and I have only vague idea of its guarantees. Are there any? Thanks for the clarification. 1. https://www.python.org/dev/peps/pep-0442/ 2. https://docs.python.org/3/c-api/typeobj.html -- Best regards, Valentine Sinitsyn From barry at python.org Fri Aug 21 17:19:39 2015 From: barry at python.org (Barry Warsaw) Date: Fri, 21 Aug 2015 11:19:39 -0400 Subject: [Python-Dev] Compiler hints to control how f-strings are construed In-Reply-To: References: <55C55DC3.8040605@trueblade.com> <586DD2C2-2FCA-4A38-BF22-7E88347C290F@trueblade.com> <55D1EBF0.2030406@trueblade.com> <55D24365.7080602@trueblade.com> Message-ID: <20150821111939.7901961d@anarchist.wooz.org> On Aug 17, 2015, at 01:36 PM, Guido van Rossum wrote: >> 1. Barry wants the substitutions to look like $identifier and possibly >> ${identifier}, and the PEP 498 proposal just uses {}. >> >> 2. There needs to be a way to identify interpolated strings and i18n >> strings, and possibly combinations of those. This leads to PEP 501's i- >> and iu- strings. >> >> 3. A way to enforce identifiers-only, instead of generalized expressions. > >In an off-list message to Barry and Nick I came up with the same three >points. :-) > >I think #2 is the hard one (unless we adopt a solution like Yury just >proposed where you can have an arbitrary identifier in front of a string >literal). I've been heads-down on other things for a little while, but trying to re-engage on this thread. One thing that occurs to me now regarding points #1 and #3 is that, if we had a way to signal to the compiler how we wanted f-strings (to use a helpful shorthand) to be parsed, we could solve both problems and make the feature more useful for i18n. I'm thinking something along the lines of __future__ imports, which already influence how code in a module is construed. If we had a similar way to hint that f-strings should be construed in a way other than the default, I could do something like: from __string__ import f_strings_as_i18n at the top of my module, and that would slot in the parser for PEP 292 strings and no-arbitrary expressions. I'd be fine with that. There are some downsides of course. I wouldn't be able to mix my simpler, i18n-based strings with the default full-featured PEP 498/501 strings in the same module. I can live with that. I don't see something like a context manager being appropriate for that use case because it's a run-time behavior, even if the syntax would look convenient. 
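At run time, the restricted mode would amount to little more than PEP 292 safe substitution over the caller's namespace. A rough sketch using only the existing string.Template machinery (the __string__ hook itself is hypothetical, and the catalog lookup is stubbed out):

import string
import sys

def _(message):
    """Sketch of i18n-style interpolation: $name only, safe substitution."""
    def gettext(s):
        return s        # stand-in for a real translation catalog lookup
    frame = sys._getframe(1)
    namespace = dict(frame.f_globals)
    namespace.update(frame.f_locals)
    # safe_substitute leaves unknown $names alone instead of raising
    return string.Template(gettext(message)).safe_substitute(namespace)

name = 'world'
print(_('Hello $name, $missing is left untouched'))
# -> Hello world, $missing is left untouched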
The hints inside the __string__ module wouldn't be extensible, except by modifying Python's stdlib. E.g. if you wanted $-strings but full expression support, we'd have to write and distribute that with stdlib. I'm also fine with this because I think there aren't really *that* many different use cases. (There's still #2 but let's deal with that later.) >> 4. We need a "safe substitution" mode for str.format_map_simple (from >> above). Again, a `from __string__` import could solve that, right? Cheers, -Barry From status at bugs.python.org Fri Aug 21 18:08:28 2015 From: status at bugs.python.org (Python tracker) Date: Fri, 21 Aug 2015 18:08:28 +0200 (CEST) Subject: [Python-Dev] Summary of Python tracker Issues Message-ID: <20150821160828.73F2256757@psf.upfronthosting.co.za> ACTIVITY SUMMARY (2015-08-14 - 2015-08-21) Python tracker at http://bugs.python.org/ To view or respond to any of the issues listed below, click on the issue. Do NOT respond to this message. Issues counts and deltas: open 5013 (+11) closed 31656 (+29) total 36669 (+40) Open issues with patches: 2241 Issues opened (33) ================== #21167: float('nan') returns 0.0 on Python compiled with icc http://bugs.python.org/issue21167 reopened by r.david.murray #21192: Idle: Print filename when running a file from editor http://bugs.python.org/issue21192 reopened by rhettinger #24869: shlex lineno inaccurate with certain inputs http://bugs.python.org/issue24869 opened by rescrv #24870: Optimize coding with surrogateescape and surrogatepass error h http://bugs.python.org/issue24870 opened by naoki #24871: freeze.py doesn't work on x86_64 Linux out of the box http://bugs.python.org/issue24871 opened by termim #24872: Add /NODEFAULTLIB:MSVCRT to _msvccompiler http://bugs.python.org/issue24872 opened by steve.dower #24873: Add "full cleanup" checkbox to uninstaller http://bugs.python.org/issue24873 opened by steve.dower #24875: pyvenv doesn??t install PIP inside a new venv with --system-si http://bugs.python.org/issue24875 opened by gilgamezh #24876: distutils.errors not wildcard-import-safe http://bugs.python.org/issue24876 opened by jwilk #24880: ctypeslib patch for regular expression for symbols to include http://bugs.python.org/issue24880 opened by jwagner313 #24881: _pyio checks that `os.name == 'win32'` instead of 'nt' http://bugs.python.org/issue24881 opened by Cosimo Lupo #24882: ThreadPoolExecutor doesn't reuse threads until #threads == max http://bugs.python.org/issue24882 opened by Matt Spitz #24884: Add method reopenFile() in WatchedFileHandler class http://bugs.python.org/issue24884 opened by Marian Horban #24885: StreamReaderProtocol docs recommend using private API http://bugs.python.org/issue24885 opened by aymeric.augustin #24886: open fails randomly on AIX http://bugs.python.org/issue24886 opened by wiggin15 #24887: Sqlite3 has no option to provide open flags http://bugs.python.org/issue24887 opened by sleepycal #24888: FileNotFoundException raised by subprocess.call http://bugs.python.org/issue24888 opened by Geoffrey Royer #24889: Idle: always start with focus http://bugs.python.org/issue24889 opened by terry.reedy #24890: Windows launcher docs don't fully explain shebang semantics http://bugs.python.org/issue24890 opened by BrenBarn #24891: race condition in initstdio() (python aborts running under noh http://bugs.python.org/issue24891 opened by Yi Ding #24893: Tk occasionally mispositions Text() insert cursor on mouse cl http://bugs.python.org/issue24893 opened by rhettinger #24894: iso-8859-11 missing from 
codecs table http://bugs.python.org/issue24894 opened by ezio.melotti #24896: It is undocumented that re.UNICODE affects re.IGNORECASE http://bugs.python.org/issue24896 opened by Leif Arne Storset #24898: Documentation for str.find() is confusing http://bugs.python.org/issue24898 opened by Ted Lemon #24899: Add an os.path <=> pathlib equivalence table in pathlib docs http://bugs.python.org/issue24899 opened by ezio.melotti #24900: Raising an exception that cannot be unpickled causes hang in P http://bugs.python.org/issue24900 opened by filmor #24902: http.server: on startup, show host/port as URL http://bugs.python.org/issue24902 opened by fxkr #24903: Do not verify destdir argument to compileall http://bugs.python.org/issue24903 opened by jgarver #24904: Patch: add timeout to difflib SequenceMatcher ratio() and quic http://bugs.python.org/issue24904 opened by jftuga #24905: Allow incremental I/O to blobs in sqlite3 http://bugs.python.org/issue24905 opened by jim_minter #24906: asyncore asynchat hanging on ssl http://bugs.python.org/issue24906 opened by Michele Comitini #24907: Module location load order is not respected if pkg_resources i http://bugs.python.org/issue24907 opened by Vadim Kantorov #24908: sysconfig.py and distutils.sysconfig.py disagree on directory http://bugs.python.org/issue24908 opened by htnieman Most recent 15 issues with no replies (15) ========================================== #24906: asyncore asynchat hanging on ssl http://bugs.python.org/issue24906 #24905: Allow incremental I/O to blobs in sqlite3 http://bugs.python.org/issue24905 #24899: Add an os.path <=> pathlib equivalence table in pathlib docs http://bugs.python.org/issue24899 #24894: iso-8859-11 missing from codecs table http://bugs.python.org/issue24894 #24886: open fails randomly on AIX http://bugs.python.org/issue24886 #24885: StreamReaderProtocol docs recommend using private API http://bugs.python.org/issue24885 #24884: Add method reopenFile() in WatchedFileHandler class http://bugs.python.org/issue24884 #24881: _pyio checks that `os.name == 'win32'` instead of 'nt' http://bugs.python.org/issue24881 #24876: distutils.errors not wildcard-import-safe http://bugs.python.org/issue24876 #24875: pyvenv doesn??t install PIP inside a new venv with --system-si http://bugs.python.org/issue24875 #24871: freeze.py doesn't work on x86_64 Linux out of the box http://bugs.python.org/issue24871 #24869: shlex lineno inaccurate with certain inputs http://bugs.python.org/issue24869 #24853: Py_Finalize doesn't clean up PyImport_Inittab http://bugs.python.org/issue24853 #24848: Warts in UTF-7 error handling http://bugs.python.org/issue24848 #24846: Add tests for ``from ... 
import ...` code http://bugs.python.org/issue24846 Most recent 15 issues waiting for review (15) ============================================= #24904: Patch: add timeout to difflib SequenceMatcher ratio() and quic http://bugs.python.org/issue24904 #24903: Do not verify destdir argument to compileall http://bugs.python.org/issue24903 #24902: http.server: on startup, show host/port as URL http://bugs.python.org/issue24902 #24889: Idle: always start with focus http://bugs.python.org/issue24889 #24886: open fails randomly on AIX http://bugs.python.org/issue24886 #24884: Add method reopenFile() in WatchedFileHandler class http://bugs.python.org/issue24884 #24880: ctypeslib patch for regular expression for symbols to include http://bugs.python.org/issue24880 #24871: freeze.py doesn't work on x86_64 Linux out of the box http://bugs.python.org/issue24871 #24870: Optimize coding with surrogateescape and surrogatepass error h http://bugs.python.org/issue24870 #24861: deprecate importing components of IDLE http://bugs.python.org/issue24861 #24851: infinite loop in faulthandler._stack_overflow http://bugs.python.org/issue24851 #24847: Can't import tkinter in Python 3.5.0rc1 http://bugs.python.org/issue24847 #24845: IDLE functional/integration testing http://bugs.python.org/issue24845 #24840: implement bool conversion for enums to prevent odd edge case http://bugs.python.org/issue24840 #24838: tarfile.py: fix GNU and USTAR formats to properly handle paths http://bugs.python.org/issue24838 Top 10 most discussed issues (10) ================================= #24872: Add /NODEFAULTLIB:MSVCRT to _msvccompiler http://bugs.python.org/issue24872 18 msgs #24790: Idle: improve stack viewer http://bugs.python.org/issue24790 16 msgs #24847: Can't import tkinter in Python 3.5.0rc1 http://bugs.python.org/issue24847 14 msgs #8987: Distutils doesn't quote Windows command lines properly http://bugs.python.org/issue8987 11 msgs #24305: The new import system makes it impossible to correctly issue a http://bugs.python.org/issue24305 11 msgs #23496: Steps for Android Native Build of Python 3.4.2 http://bugs.python.org/issue23496 9 msgs #24294: DeprecationWarnings should be visible by default in the intera http://bugs.python.org/issue24294 8 msgs #24870: Optimize coding with surrogateescape and surrogatepass error h http://bugs.python.org/issue24870 8 msgs #24891: race condition in initstdio() (python aborts running under noh http://bugs.python.org/issue24891 7 msgs #10740: sqlite3 module breaks transactions and potentially corrupts da http://bugs.python.org/issue10740 6 msgs Issues closed (28) ================== #11691: sqlite3 Cursor.description doesn't set type_code http://bugs.python.org/issue11691 closed by ghaering #17570: Improve devguide Windows instructions http://bugs.python.org/issue17570 closed by python-dev #20362: longMessage attribute is ignored in unittest.TestCase.assertRe http://bugs.python.org/issue20362 closed by rbcollins #22680: Blacklist FunctionTestCase from test discovery http://bugs.python.org/issue22680 closed by rbcollins #23572: functools.singledispatch fails when "not BaseClass" is True http://bugs.python.org/issue23572 closed by yselivanov #23672: IDLE can crash if file name contains non-BMP Unicode character http://bugs.python.org/issue23672 closed by terry.reedy #23810: Suboptimal stacklevel of deprecation warnings for formatter an http://bugs.python.org/issue23810 closed by brett.cannon #24054: Invalid syntax in inspect_fodder2.py (on Python 2.x) http://bugs.python.org/issue24054 closed by 
rbcollins #24079: xml.etree.ElementTree.Element.text does not conform to the doc http://bugs.python.org/issue24079 closed by ned.deily #24379: operator.subscript http://bugs.python.org/issue24379 closed by rhettinger #24492: using custom objects as modules: AttributeErrors new in 3.5 http://bugs.python.org/issue24492 closed by brett.cannon #24764: cgi.FieldStorage can't parse multipart part headers with Conte http://bugs.python.org/issue24764 closed by haypo #24774: inconsistency in http.server.test http://bugs.python.org/issue24774 closed by rbcollins #24842: Mention SimpleNamespace in namedtuple docs http://bugs.python.org/issue24842 closed by rhettinger #24859: ctypes.Structure bit order is reversed - counts from right http://bugs.python.org/issue24859 closed by zeero #24864: errors writing to stdout during interpreter exit exit with sta http://bugs.python.org/issue24864 closed by rbcollins #24866: Boolean representation of Q/queue objects does not fit behavio http://bugs.python.org/issue24866 closed by rhettinger #24867: Asyncio Task.get_stack fails with native coroutines http://bugs.python.org/issue24867 closed by yselivanov #24868: Python start http://bugs.python.org/issue24868 closed by terry.reedy #24874: Improve pickling efficiency of itertools.cycle http://bugs.python.org/issue24874 closed by rhettinger #24877: Bad Password for file using zipfile module http://bugs.python.org/issue24877 closed by shivaprasanth #24878: Add docstrings to selected named tuples http://bugs.python.org/issue24878 closed by rhettinger #24879: Pydoc to list data descriptors in _fields order if it exists http://bugs.python.org/issue24879 closed by rhettinger #24883: Typo in c-api/buffer documentation http://bugs.python.org/issue24883 closed by python-dev #24892: bytes.join() won't take it's own type as the argument http://bugs.python.org/issue24892 closed by brett.cannon #24895: indentation fix in ceval.c in python 2.7 http://bugs.python.org/issue24895 closed by python-dev #24897: Add new attribute decorator (akin to property)? http://bugs.python.org/issue24897 closed by eric.snow #24901: (2,)!=(2) and (2,3)==(2,3,) why ??? tested in each version http://bugs.python.org/issue24901 closed by eryksun From alecsandru.patrascu at intel.com Sat Aug 22 16:46:17 2015 From: alecsandru.patrascu at intel.com (Patrascu, Alecsandru) Date: Sat, 22 Aug 2015 14:46:17 +0000 Subject: [Python-Dev] Profile Guided Optimization active by-default Message-ID: <3CF256F4F774BD48A1691D131AA04319141C0795@IRSMSX102.ger.corp.intel.com> Hi All, This is Alecsandru from Server Scripting Languages Optimization team at Intel Corporation. I would like to submit a request to turn-on Profile Guided Optimization or PGO as the default build option for Python (both 2.7 and 3.6), given its performance benefits on a wide variety of workloads and hardware. For instance, as shown from attached sample performance results from the Grand Unified Python Benchmark, >20% speed up was observed. In addition, we are seeing 2-9% performance boost from OpenStack/Swift where more than 60% of the codes are in Python 2.7. Our analysis indicates the performance gain was mainly due to reduction of icache misses and CPU front-end stalls. Attached is the Makefile patches that modify the all build target and adds a new one called "disable-profile-opt". We built and tested this patch for Python 2.7 and 3.6 on our Linux machines (CentOS 7/Ubuntu Server 14.04, Intel Xeon Haswell/Broadwell with 18/8 cores). 
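For background, the patch essentially automates gcc's standard three-step PGO cycle; on a toy program the manual sequence is roughly the following (illustrative only, not the actual Makefile rules from the patch):

1. gcc -O2 -fprofile-generate prog.c -o prog (build an instrumented binary)
2. ./prog typical-workload (training run; the instrumented binary writes *.gcda profile data at exit)
3. gcc -O2 -fprofile-use prog.c -o prog (rebuild, letting gcc optimize the hot paths recorded in the profile)

The quality of the result therefore depends directly on how representative the training run in step 2 is.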
We use "regrtest" suite for training as it provides the best performance improvement. Some of the test programs in the suite may fail which leads to build fail. One solution is to disable the specific failed test using the "-x " flag (as shown in the patch) Steps to apply the patch: 1. hg clone https://hg.python.org/cpython cpython 2. cd cpython 3. hg update 2.7 (needed for 2.7 only) 4. Copy *.patch to the current directory 5. patch < python2.7-pgo.patch (or patch < python3.6-pgo.patch) 6. ./configure 7. make To disable PGO 7b. make disable-profile-opt In the following, please find our sample performance results from latest XEON machine, XEON Broadwell EP. Hardware (HW): Intel XEON (Broadwell) 8 Cores BIOS settings: Intel Turbo Boost Technology: false Hyper-Threading: false Operating System: Ubuntu 14.04.3 LTS trusty OS configuration: CPU freq set at fixed: 2.6GHz by echo 2600000 > /sys/devices/system/cpu/cpu*/cpufreq/scaling_min_freq echo 2600000 > /sys/devices/system/cpu/cpu*/cpufreq/scaling_max_freq Address Space Layout Randomization (ASLR) disabled (to reduce run to run variation) by echo 0 > /proc/sys/kernel/randomize_va_space GCC version: gcc version 4.8.4 (Ubuntu 4.8.4-2ubuntu1~14.04) Benchmark: Grand Unified Python Benchmark (GUPB) GUPB Source: https://hg.python.org/benchmarks/ Python2.7 results: Python source: hg clone https://hg.python.org/cpython cpython Python Source: hg update 2.7 hg id: 0511b1165bb6 (2.7) hg id -r 'ancestors(.) and tag()': 15c95b7d81dc (2.7) v2.7.10 hg --debug id -i: 0511b1165bb6cf40ada0768a7efc7ba89316f6a5 Benchmarks Speedup(%) simple_logging 20 raytrace 20 silent_logging 19 richards 19 chaos 16 formatted_logging 16 json_dump 15 hexiom2 13 pidigits 12 slowunpickle 12 django_v2 12 unpack_sequence 11 float 11 mako 11 slowpickle 11 fastpickle 11 django 11 go 10 json_dump_v2 10 pathlib 10 regex_compile 10 pybench 9.9 etree_process 9 regex_v8 8 bzr_startup 8 2to3 8 slowspitfire 8 telco 8 pickle_list 8 fannkuch 8 etree_iterparse 8 nqueens 8 mako_v2 8 etree_generate 8 call_method_slots 7 html5lib_warmup 7 html5lib 7 nbody 7 spectral_norm 7 spambayes 7 fastunpickle 6 meteor_contest 6 chameleon 6 rietveld 6 tornado_http 5 unpickle_list 5 pickle_dict 4 regex_effbot 3 normal_startup 3 startup_nosite 3 etree_parse 2 call_method_unknown 2 call_simple 1 json_load 1 call_method 1 Python3.6 results Python source: hg clone https://hg.python.org/cpython cpython hg id: 96d016f78726 tip hg id -r 'ancestors(.) and tag()': 1a58b1227501 (3.5) v3.5.0rc1 hg --debug id -i: 96d016f78726afbf66d396f084b291ea43792af1 Benchmark Speedup(%) fastunpickle 22.94 fastpickle 21.67 json_load 17.64 simple_logging 17.49 meteor_contest 16.67 formatted_logging 15.33 etree_process 14.61 raytrace 13.57 etree_generate 13.56 chaos 12.09 hexiom2 12 nbody 11.88 json_dump_v2 11.24 richards 11.02 nqueens 10.96 fannkuch 10.79 go 10.77 float 10.26 regex_compile 9.8 silent_logging 9.63 pidigits 9.58 etree_iterparse 9.48 2to3 8.44 regex_v8 8.09 regex_effbot 7.88 call_simple 7.63 tornado_http 7.38 etree_parse 4.92 spectral_norm 4.72 normal_startup 4.39 telco 3.88 startup_nosite 3.7 call_method 3.63 unpack_sequence 3.6 call_method_slots 2.91 call_method_unknown 2.59 iterative_count 0.45 threaded_count -2.79 Thank you, Alecsandru -------------- next part -------------- A non-text attachment was scrubbed... Name: python2.7-pgo.patch Type: application/octet-stream Size: 2031 bytes Desc: python2.7-pgo.patch URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: python3.6-pgo.patch Type: application/octet-stream Size: 2334 bytes Desc: python3.6-pgo.patch URL: From guido at python.org Sat Aug 22 18:15:10 2015 From: guido at python.org (Guido van Rossum) Date: Sat, 22 Aug 2015 09:15:10 -0700 Subject: [Python-Dev] Profile Guided Optimization active by-default In-Reply-To: <3CF256F4F774BD48A1691D131AA04319141C0795@IRSMSX102.ger.corp.intel.com> References: <3CF256F4F774BD48A1691D131AA04319141C0795@IRSMSX102.ger.corp.intel.com> Message-ID: How about we first add a new Makefile target that enables PGO, without turning it on by default? Then later we can enable it by default. Also, I have my doubts about regrtest. How sure are we that it represents a typical Python load? Tests are often using a different mix of operations than production code. On Sat, Aug 22, 2015 at 7:46 AM, Patrascu, Alecsandru < alecsandru.patrascu at intel.com> wrote: > Hi All, > > This is Alecsandru from Server Scripting Languages Optimization team at > Intel Corporation. > > I would like to submit a request to turn-on Profile Guided Optimization or > PGO as the default build option for Python (both 2.7 and 3.6), given its > performance benefits on a wide variety of workloads and hardware. For > instance, as shown from attached sample performance results from the Grand > Unified Python Benchmark, >20% speed up was observed. In addition, we are > seeing 2-9% performance boost from OpenStack/Swift where more than 60% of > the codes are in Python 2.7. Our analysis indicates the performance gain > was mainly due to reduction of icache misses and CPU front-end stalls. > > Attached is the Makefile patches that modify the all build target and adds > a new one called "disable-profile-opt". We built and tested this patch for > Python 2.7 and 3.6 on our Linux machines (CentOS 7/Ubuntu Server 14.04, > Intel Xeon Haswell/Broadwell with 18/8 cores). We use "regrtest" suite for > training as it provides the best performance improvement. Some of the test > programs in the suite may fail which leads to build fail. One solution is > to disable the specific failed test using the "-x " flag (as shown in the > patch) > > Steps to apply the patch: > 1. hg clone https://hg.python.org/cpython cpython > 2. cd cpython > 3. hg update 2.7 (needed for 2.7 only) > 4. Copy *.patch to the current directory > 5. patch < python2.7-pgo.patch (or patch < python3.6-pgo.patch) > 6. ./configure > 7. make > > To disable PGO > 7b. make disable-profile-opt > > In the following, please find our sample performance results from latest > XEON machine, XEON Broadwell EP. > Hardware (HW): Intel XEON (Broadwell) 8 Cores > > BIOS settings: Intel Turbo Boost Technology: false > Hyper-Threading: false > > Operating System: Ubuntu 14.04.3 LTS trusty > > OS configuration: CPU freq set at fixed: 2.6GHz by > echo 2600000 > > /sys/devices/system/cpu/cpu*/cpufreq/scaling_min_freq > echo 2600000 > > /sys/devices/system/cpu/cpu*/cpufreq/scaling_max_freq > Address Space Layout Randomization (ASLR) disabled (to > reduce run to run variation) by > echo 0 > /proc/sys/kernel/randomize_va_space > > GCC version: gcc version 4.8.4 (Ubuntu 4.8.4-2ubuntu1~14.04) > > Benchmark: Grand Unified Python Benchmark (GUPB) > GUPB Source: https://hg.python.org/benchmarks/ > > Python2.7 results: > Python source: hg clone https://hg.python.org/cpython cpython > Python Source: hg update 2.7 > hg id: 0511b1165bb6 (2.7) > hg id -r 'ancestors(.) 
and tag()': 15c95b7d81dc (2.7) v2.7.10 > hg --debug id -i: 0511b1165bb6cf40ada0768a7efc7ba89316f6a5 > > Benchmarks Speedup(%) > simple_logging 20 > raytrace 20 > silent_logging 19 > richards 19 > chaos 16 > formatted_logging 16 > json_dump 15 > hexiom2 13 > pidigits 12 > slowunpickle 12 > django_v2 12 > unpack_sequence 11 > float 11 > mako 11 > slowpickle 11 > fastpickle 11 > django 11 > go 10 > json_dump_v2 10 > pathlib 10 > regex_compile 10 > pybench 9.9 > etree_process 9 > regex_v8 8 > bzr_startup 8 > 2to3 8 > slowspitfire 8 > telco 8 > pickle_list 8 > fannkuch 8 > etree_iterparse 8 > nqueens 8 > mako_v2 8 > etree_generate 8 > call_method_slots 7 > html5lib_warmup 7 > html5lib 7 > nbody 7 > spectral_norm 7 > spambayes 7 > fastunpickle 6 > meteor_contest 6 > chameleon 6 > rietveld 6 > tornado_http 5 > unpickle_list 5 > pickle_dict 4 > regex_effbot 3 > normal_startup 3 > startup_nosite 3 > etree_parse 2 > call_method_unknown 2 > call_simple 1 > json_load 1 > call_method 1 > > Python3.6 results > Python source: hg clone https://hg.python.org/cpython cpython > hg id: 96d016f78726 tip > hg id -r 'ancestors(.) and tag()': 1a58b1227501 (3.5) v3.5.0rc1 > hg --debug id -i: 96d016f78726afbf66d396f084b291ea43792af1 > > > Benchmark Speedup(%) > fastunpickle 22.94 > fastpickle 21.67 > json_load 17.64 > simple_logging 17.49 > meteor_contest 16.67 > formatted_logging 15.33 > etree_process 14.61 > raytrace 13.57 > etree_generate 13.56 > chaos 12.09 > hexiom2 12 > nbody 11.88 > json_dump_v2 11.24 > richards 11.02 > nqueens 10.96 > fannkuch 10.79 > go 10.77 > float 10.26 > regex_compile 9.8 > silent_logging 9.63 > pidigits 9.58 > etree_iterparse 9.48 > 2to3 8.44 > regex_v8 8.09 > regex_effbot 7.88 > call_simple 7.63 > tornado_http 7.38 > etree_parse 4.92 > spectral_norm 4.72 > normal_startup 4.39 > telco 3.88 > startup_nosite 3.7 > call_method 3.63 > unpack_sequence 3.6 > call_method_slots 2.91 > call_method_unknown 2.59 > iterative_count 0.45 > threaded_count -2.79 > > > Thank you, > Alecsandru > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/guido%40python.org > > -- --Guido van Rossum (python.org/~guido) -------------- next part -------------- An HTML attachment was scrubbed... URL: From brett at python.org Sat Aug 22 18:25:06 2015 From: brett at python.org (Brett Cannon) Date: Sat, 22 Aug 2015 16:25:06 +0000 Subject: [Python-Dev] Profile Guided Optimization active by-default In-Reply-To: References: <3CF256F4F774BD48A1691D131AA04319141C0795@IRSMSX102.ger.corp.intel.com> Message-ID: On Sat, Aug 22, 2015, 09:17 Guido van Rossum wrote: How about we first add a new Makefile target that enables PGO, without turning it on by default? Then later we can enable it by default. I agree. Updating the Makefile so it's easier to use PGO is great, but we should do a release with it as opt-in and go from there. Also, I have my doubts about regrtest. How sure are we that it represents a typical Python load? Tests are often using a different mix of operations than production code. That was also my question. You said that "it provides the best performance improvement", but compared to what; what else was tried? And what difference does it make to e.g. a Django app that is trained on their own simulated workload compared to using regrtest? 
IOW is regrtest displaying the best across-the-board performance because it stresses the largest swath of Python and thus catches generic patterns in the code but individuals could get better performance with a simulated workload? -Brett On Sat, Aug 22, 2015 at 7:46 AM, Patrascu, Alecsandru < alecsandru.patrascu at intel.com> wrote: Hi All, This is Alecsandru from Server Scripting Languages Optimization team at Intel Corporation. I would like to submit a request to turn-on Profile Guided Optimization or PGO as the default build option for Python (both 2.7 and 3.6), given its performance benefits on a wide variety of workloads and hardware. For instance, as shown from attached sample performance results from the Grand Unified Python Benchmark, >20% speed up was observed. In addition, we are seeing 2-9% performance boost from OpenStack/Swift where more than 60% of the codes are in Python 2.7. Our analysis indicates the performance gain was mainly due to reduction of icache misses and CPU front-end stalls. Attached is the Makefile patches that modify the all build target and adds a new one called "disable-profile-opt". We built and tested this patch for Python 2.7 and 3.6 on our Linux machines (CentOS 7/Ubuntu Server 14.04, Intel Xeon Haswell/Broadwell with 18/8 cores). We use "regrtest" suite for training as it provides the best performance improvement. Some of the test programs in the suite may fail which leads to build fail. One solution is to disable the specific failed test using the "-x " flag (as shown in the patch) Steps to apply the patch: 1. hg clone https://hg.python.org/cpython cpython 2. cd cpython 3. hg update 2.7 (needed for 2.7 only) 4. Copy *.patch to the current directory 5. patch < python2.7-pgo.patch (or patch < python3.6-pgo.patch) 6. ./configure 7. make To disable PGO 7b. make disable-profile-opt In the following, please find our sample performance results from latest XEON machine, XEON Broadwell EP. Hardware (HW): Intel XEON (Broadwell) 8 Cores BIOS settings: Intel Turbo Boost Technology: false Hyper-Threading: false Operating System: Ubuntu 14.04.3 LTS trusty OS configuration: CPU freq set at fixed: 2.6GHz by echo 2600000 > /sys/devices/system/cpu/cpu*/cpufreq/scaling_min_freq echo 2600000 > /sys/devices/system/cpu/cpu*/cpufreq/scaling_max_freq Address Space Layout Randomization (ASLR) disabled (to reduce run to run variation) by echo 0 > /proc/sys/kernel/randomize_va_space GCC version: gcc version 4.8.4 (Ubuntu 4.8.4-2ubuntu1~14.04) Benchmark: Grand Unified Python Benchmark (GUPB) GUPB Source: https://hg.python.org/benchmarks/ Python2.7 results: Python source: hg clone https://hg.python.org/cpython cpython Python Source: hg update 2.7 hg id: 0511b1165bb6 (2.7) hg id -r 'ancestors(.) 
and tag()': 15c95b7d81dc (2.7) v2.7.10 hg --debug id -i: 0511b1165bb6cf40ada0768a7efc7ba89316f6a5 Benchmarks Speedup(%) simple_logging 20 raytrace 20 silent_logging 19 richards 19 chaos 16 formatted_logging 16 json_dump 15 hexiom2 13 pidigits 12 slowunpickle 12 django_v2 12 unpack_sequence 11 float 11 mako 11 slowpickle 11 fastpickle 11 django 11 go 10 json_dump_v2 10 pathlib 10 regex_compile 10 pybench 9.9 etree_process 9 regex_v8 8 bzr_startup 8 2to3 8 slowspitfire 8 telco 8 pickle_list 8 fannkuch 8 etree_iterparse 8 nqueens 8 mako_v2 8 etree_generate 8 call_method_slots 7 html5lib_warmup 7 html5lib 7 nbody 7 spectral_norm 7 spambayes 7 fastunpickle 6 meteor_contest 6 chameleon 6 rietveld 6 tornado_http 5 unpickle_list 5 pickle_dict 4 regex_effbot 3 normal_startup 3 startup_nosite 3 etree_parse 2 call_method_unknown 2 call_simple 1 json_load 1 call_method 1 Python3.6 results Python source: hg clone https://hg.python.org/cpython cpython hg id: 96d016f78726 tip hg id -r 'ancestors(.) and tag()': 1a58b1227501 (3.5) v3.5.0rc1 hg --debug id -i: 96d016f78726afbf66d396f084b291ea43792af1 Benchmark Speedup(%) fastunpickle 22.94 fastpickle 21.67 json_load 17.64 simple_logging 17.49 meteor_contest 16.67 formatted_logging 15.33 etree_process 14.61 raytrace 13.57 etree_generate 13.56 chaos 12.09 hexiom2 12 nbody 11.88 json_dump_v2 11.24 richards 11.02 nqueens 10.96 fannkuch 10.79 go 10.77 float 10.26 regex_compile 9.8 silent_logging 9.63 pidigits 9.58 etree_iterparse 9.48 2to3 8.44 regex_v8 8.09 regex_effbot 7.88 call_simple 7.63 tornado_http 7.38 etree_parse 4.92 spectral_norm 4.72 normal_startup 4.39 telco 3.88 startup_nosite 3.7 call_method 3.63 unpack_sequence 3.6 call_method_slots 2.91 call_method_unknown 2.59 iterative_count 0.45 threaded_count -2.79 Thank you, Alecsandru _______________________________________________ Python-Dev mailing list Python-Dev at python.org https://mail.python.org/mailman/listinfo/python-dev Unsubscribe: https://mail.python.org/mailman/options/python-dev/guido%40python.org -- --Guido van Rossum (python.org/~guido) _______________________________________________ Python-Dev mailing list Python-Dev at python.org https://mail.python.org/mailman/listinfo/python-dev Unsubscribe: https://mail.python.org/mailman/options/python-dev/brett%40python.org -------------- next part -------------- An HTML attachment was scrubbed... URL: From alecsandru.patrascu at intel.com Sat Aug 22 18:40:52 2015 From: alecsandru.patrascu at intel.com (Patrascu, Alecsandru) Date: Sat, 22 Aug 2015 16:40:52 +0000 Subject: [Python-Dev] Profile Guided Optimization active by-default In-Reply-To: References: <3CF256F4F774BD48A1691D131AA04319141C0795@IRSMSX102.ger.corp.intel.com> Message-ID: <3CF256F4F774BD48A1691D131AA04319141C0C4F@IRSMSX102.ger.corp.intel.com> Hello and thank you for your feedback. We have measured PGO gain using other workloads also. Our initial choice for this optimization was pybench, but the speedup obtained was lower than using regrtest and it didn't cover a lot of Python scenarios. Instead, regrtest has an uniform distribution for the tests and the resulting binary is overall much faster than the default, or trained using other workloads, and thus covering a larger pool of Python loads. This optimization was also tested on a production environments running OpenStack Swift and got up to 9% improvements. The reason we proposed this target to be always on is that the obtained optimized binary is better out of the box for the general cases. 
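For reference, the generate/run/use cycle that such a build automates looks roughly like the sketch below. It is illustrative only: the exact GCC flags, Makefile variable names and excluded tests in the attached patches may differ, and the test name passed to -x is just a placeholder.

    ./configure
    make EXTRA_CFLAGS="-fprofile-generate" LDFLAGS="-fprofile-generate"   # instrumented build
    ./python -m test.regrtest -x test_flaky_example                       # training run writes *.gcda profile data
    find . -name '*.o' -delete                                            # drop objects, keep the *.gcda files
    make EXTRA_CFLAGS="-fprofile-use" LDFLAGS="-fprofile-use"             # rebuild, letting GCC use the profiles

Any other script can stand in as the training run, which is why the choice of workload is a build-time decision rather than a property of the compiler.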
Alecsandru From: gvanrossum at gmail.com [mailto:gvanrossum at gmail.com] On Behalf Of Guido van Rossum Sent: Saturday, August 22, 2015 7:15 PM To: Patrascu, Alecsandru Cc: python-dev at python.org Subject: Re: [Python-Dev] Profile Guided Optimization active by-default How about we first add a new Makefile target that enables PGO, without turning it on by default? Then later we can enable it by default. Also, I have my doubts about regrtest. How sure are we that it represents a typical Python load? Tests are often using a different mix of operations than production code. On Sat, Aug 22, 2015 at 7:46 AM, Patrascu, Alecsandru wrote: Hi All, This is Alecsandru from Server Scripting Languages Optimization team at Intel Corporation. I would like to submit a request to turn-on Profile Guided Optimization or PGO as the default build option for Python (both 2.7 and 3.6), given its performance benefits on a wide variety of workloads and hardware.? For instance, as shown from attached sample performance results from the Grand Unified Python Benchmark, >20% speed up was observed.? In addition, we are seeing 2-9% performance boost from OpenStack/Swift where more than 60% of the codes are in Python 2.7. Our analysis indicates the performance gain was mainly due to reduction of icache misses and CPU front-end stalls. Attached is the Makefile patches that modify the all build target and adds a new one called "disable-profile-opt". We built and tested this patch for Python 2.7 and 3.6 on our Linux machines (CentOS 7/Ubuntu Server 14.04, Intel Xeon Haswell/Broadwell with 18/8 cores).? We use "regrtest" suite for training as it provides the best performance improvement.? Some of the test programs in the suite may fail which leads to build fail.? One solution is to disable the specific failed test using the "-x " flag (as shown in the patch) Steps to apply the patch: 1.? hg clone https://hg.python.org/cpython cpython 2.? cd cpython 3.? hg update 2.7 (needed for 2.7 only) 4.? Copy *.patch to the current directory 5.? patch < python2.7-pgo.patch (or patch < python3.6-pgo.patch) 6.? ./configure 7.? make To disable PGO 7b. make disable-profile-opt In the following, please find our sample performance results from latest XEON machine, XEON Broadwell EP. Hardware (HW):? ? ? Intel XEON (Broadwell) 8 Cores BIOS settings:? ? ? Intel Turbo Boost Technology: false ? ? ? ? ? ? ? ? ? ? Hyper-Threading: false Operating System:? ?Ubuntu 14.04.3 LTS trusty OS configuration:? ?CPU freq set at fixed: 2.6GHz by ? ? ? ? ? ? ? ? ? ? ? ? echo 2600000 > /sys/devices/system/cpu/cpu*/cpufreq/scaling_min_freq ? ? ? ? ? ? ? ? ? ? ? ? echo 2600000 > /sys/devices/system/cpu/cpu*/cpufreq/scaling_max_freq ? ? ? ? ? ? ? ? ? ? Address Space Layout Randomization (ASLR) disabled (to reduce run to run variation) by ? ? ? ? ? ? ? ? ? ? ? ? echo 0 > /proc/sys/kernel/randomize_va_space GCC version:? ? ? ? gcc version 4.8.4 (Ubuntu 4.8.4-2ubuntu1~14.04) Benchmark:? ? ? ? ? Grand Unified Python Benchmark (GUPB) ? ? ? ? ? ? ? ? ? ? GUPB Source: https://hg.python.org/benchmarks/ Python2.7 results: ? ? Python source: hg clone https://hg.python.org/cpython cpython ? ? Python Source: hg update 2.7 ? ? hg id: 0511b1165bb6 (2.7) ? ? hg id -r 'ancestors(.) and tag()': 15c95b7d81dc (2.7) v2.7.10 ? ? hg --debug id -i: 0511b1165bb6cf40ada0768a7efc7ba89316f6a5 ? ? ? ? Benchmarks? ? ? ? ? Speedup(%) ? ? ? ? simple_logging? ? ? 20 ? ? ? ? raytrace? ? ? ? ? ? 20 ? ? ? ? silent_logging? ? ? 19 ? ? ? ? richards? ? ? ? ? ? 19 ? ? ? ? chaos? ? ? ? ? ? ? ?16 ? ? ? ? 
formatted_logging? ?16 ? ? ? ? json_dump? ? ? ? ? ?15 ? ? ? ? hexiom2? ? ? ? ? ? ?13 ? ? ? ? pidigits? ? ? ? ? ? 12 ? ? ? ? slowunpickle? ? ? ? 12 ? ? ? ? django_v2? ? ? ? ? ?12 ? ? ? ? unpack_sequence? ? ?11 ? ? ? ? float? ? ? ? ? ? ? ?11 ? ? ? ? mako? ? ? ? ? ? ? ? 11 ? ? ? ? slowpickle? ? ? ? ? 11 ? ? ? ? fastpickle? ? ? ? ? 11 ? ? ? ? django? ? ? ? ? ? ? 11 ? ? ? ? go? ? ? ? ? ? ? ? ? 10 ? ? ? ? json_dump_v2? ? ? ? 10 ? ? ? ? pathlib? ? ? ? ? ? ?10 ? ? ? ? regex_compile? ? ? ?10 ? ? ? ? pybench? ? ? ? ? ? ?9.9 ? ? ? ? etree_process? ? ? ?9 ? ? ? ? regex_v8? ? ? ? ? ? 8 ? ? ? ? bzr_startup? ? ? ? ?8 ? ? ? ? 2to3? ? ? ? ? ? ? ? 8 ? ? ? ? slowspitfire? ? ? ? 8 ? ? ? ? telco? ? ? ? ? ? ? ?8 ? ? ? ? pickle_list? ? ? ? ?8 ? ? ? ? fannkuch? ? ? ? ? ? 8 ? ? ? ? etree_iterparse? ? ?8 ? ? ? ? nqueens? ? ? ? ? ? ?8 ? ? ? ? mako_v2? ? ? ? ? ? ?8 ? ? ? ? etree_generate? ? ? 8 ? ? ? ? call_method_slots? ?7 ? ? ? ? html5lib_warmup? ? ?7 ? ? ? ? html5lib? ? ? ? ? ? 7 ? ? ? ? nbody? ? ? ? ? ? ? ?7 ? ? ? ? spectral_norm? ? ? ?7 ? ? ? ? spambayes? ? ? ? ? ?7 ? ? ? ? fastunpickle? ? ? ? 6 ? ? ? ? meteor_contest? ? ? 6 ? ? ? ? chameleon? ? ? ? ? ?6 ? ? ? ? rietveld? ? ? ? ? ? 6 ? ? ? ? tornado_http? ? ? ? 5 ? ? ? ? unpickle_list? ? ? ?5 ? ? ? ? pickle_dict? ? ? ? ?4 ? ? ? ? regex_effbot? ? ? ? 3 ? ? ? ? normal_startup? ? ? 3 ? ? ? ? startup_nosite? ? ? 3 ? ? ? ? etree_parse? ? ? ? ?2 ? ? ? ? call_method_unknown 2 ? ? ? ? call_simple? ? ? ? ?1 ? ? ? ? json_load? ? ? ? ? ?1 ? ? ? ? call_method? ? ? ? ?1 Python3.6 results ? ? Python source: hg clone https://hg.python.org/cpython cpython ? ? hg id: 96d016f78726 tip ? ? hg id -r 'ancestors(.) and tag()': 1a58b1227501 (3.5) v3.5.0rc1 ? ? hg --debug id -i: 96d016f78726afbf66d396f084b291ea43792af1 ? ? ? ? Benchmark? ? ? ? ? ?Speedup(%) ? ? ? ? fastunpickle? ? ? ? 22.94 ? ? ? ? fastpickle? ? ? ? ? 21.67 ? ? ? ? json_load? ? ? ? ? ?17.64 ? ? ? ? simple_logging? ? ? 17.49 ? ? ? ? meteor_contest? ? ? 16.67 ? ? ? ? formatted_logging? ?15.33 ? ? ? ? etree_process? ? ? ?14.61 ? ? ? ? raytrace? ? ? ? ? ? 13.57 ? ? ? ? etree_generate? ? ? 13.56 ? ? ? ? chaos? ? ? ? ? ? ? ?12.09 ? ? ? ? hexiom2? ? ? ? ? ? ?12 ? ? ? ? nbody? ? ? ? ? ? ? ?11.88 ? ? ? ? json_dump_v2? ? ? ? 11.24 ? ? ? ? richards? ? ? ? ? ? 11.02 ? ? ? ? nqueens? ? ? ? ? ? ?10.96 ? ? ? ? fannkuch? ? ? ? ? ? 10.79 ? ? ? ? go? ? ? ? ? ? ? ? ? 10.77 ? ? ? ? float? ? ? ? ? ? ? ?10.26 ? ? ? ? regex_compile? ? ? ?9.8 ? ? ? ? silent_logging? ? ? 9.63 ? ? ? ? pidigits? ? ? ? ? ? 9.58 ? ? ? ? etree_iterparse? ? ?9.48 ? ? ? ? 2to3? ? ? ? ? ? ? ? 8.44 ? ? ? ? regex_v8? ? ? ? ? ? 8.09 ? ? ? ? regex_effbot? ? ? ? 7.88 ? ? ? ? call_simple? ? ? ? ?7.63 ? ? ? ? tornado_http? ? ? ? 7.38 ? ? ? ? etree_parse? ? ? ? ?4.92 ? ? ? ? spectral_norm? ? ? ?4.72 ? ? ? ? normal_startup? ? ? 4.39 ? ? ? ? telco? ? ? ? ? ? ? ?3.88 ? ? ? ? startup_nosite? ? ? 3.7 ? ? ? ? call_method? ? ? ? ?3.63 ? ? ? ? unpack_sequence? ? ?3.6 ? ? ? ? call_method_slots? ?2.91 ? ? ? ? call_method_unknown 2.59 ? ? ? ? iterative_count? ? ?0.45 ? ? ? ? threaded_count? ? ? 
-2.79 Thank you, Alecsandru _______________________________________________ Python-Dev mailing list Python-Dev at python.org https://mail.python.org/mailman/listinfo/python-dev Unsubscribe: https://mail.python.org/mailman/options/python-dev/guido%40python.org -- --Guido van Rossum (python.org/~guido) From guido at python.org Sat Aug 22 18:55:58 2015 From: guido at python.org (Guido van Rossum) Date: Sat, 22 Aug 2015 09:55:58 -0700 Subject: [Python-Dev] Profile Guided Optimization active by-default In-Reply-To: <3CF256F4F774BD48A1691D131AA04319141C0C4F@IRSMSX102.ger.corp.intel.com> References: <3CF256F4F774BD48A1691D131AA04319141C0795@IRSMSX102.ger.corp.intel.com> <3CF256F4F774BD48A1691D131AA04319141C0C4F@IRSMSX102.ger.corp.intel.com> Message-ID: I'm sorry, but we're just not going to turn this on by default without doing a trial period ourselves. Your (and Intel's) contribution is very welcome, but in order to establish trust in a feature like this, an optional trial period is absolutely required. Regarding the training set, I agree that regrtest sounds to be better than pybench. If we make this an opt-in change, we can experiment with different training sets easily. (Also, I haven't seen the patch yet, but I presume it's easy to use a different training set? Experimentation should be encouraged.) On Sat, Aug 22, 2015 at 9:40 AM, Patrascu, Alecsandru < alecsandru.patrascu at intel.com> wrote: > Hello and thank you for your feedback. > > We have measured PGO gain using other workloads also. Our initial choice > for this optimization was pybench, but the speedup obtained was lower than > using regrtest and it didn't cover a lot of Python scenarios. Instead, > regrtest has an uniform distribution for the tests and the resulting binary > is overall much faster than the default, or trained using other workloads, > and thus covering a larger pool of Python loads. This optimization was also > tested on a production environments running OpenStack Swift and got up to > 9% improvements. > > The reason we proposed this target to be always on is that the obtained > optimized binary is better out of the box for the general cases. > > Alecsandru > > From: gvanrossum at gmail.com [mailto:gvanrossum at gmail.com] On Behalf Of > Guido van Rossum > Sent: Saturday, August 22, 2015 7:15 PM > To: Patrascu, Alecsandru > Cc: python-dev at python.org > Subject: Re: [Python-Dev] Profile Guided Optimization active by-default > > How about we first add a new Makefile target that enables PGO, without > turning it on by default? Then later we can enable it by default. > Also, I have my doubts about regrtest. How sure are we that it represents > a typical Python load? Tests are often using a different mix of operations > than production code. > > On Sat, Aug 22, 2015 at 7:46 AM, Patrascu, Alecsandru < > alecsandru.patrascu at intel.com> wrote: > Hi All, > > This is Alecsandru from Server Scripting Languages Optimization team at > Intel Corporation. > > I would like to submit a request to turn-on Profile Guided Optimization or > PGO as the default build option for Python (both 2.7 and 3.6), given its > performance benefits on a wide variety of workloads and hardware. For > instance, as shown from attached sample performance results from the Grand > Unified Python Benchmark, >20% speed up was observed. In addition, we are > seeing 2-9% performance boost from OpenStack/Swift where more than 60% of > the codes are in Python 2.7. 
Our analysis indicates the performance gain > was mainly due to reduction of icache misses and CPU front-end stalls. > > Attached is the Makefile patches that modify the all build target and adds > a new one called "disable-profile-opt". We built and tested this patch for > Python 2.7 and 3.6 on our Linux machines (CentOS 7/Ubuntu Server 14.04, > Intel Xeon Haswell/Broadwell with 18/8 cores). We use "regrtest" suite for > training as it provides the best performance improvement. Some of the test > programs in the suite may fail which leads to build fail. One solution is > to disable the specific failed test using the "-x " flag (as shown in the > patch) > > Steps to apply the patch: > 1. hg clone https://hg.python.org/cpython cpython > 2. cd cpython > 3. hg update 2.7 (needed for 2.7 only) > 4. Copy *.patch to the current directory > 5. patch < python2.7-pgo.patch (or patch < python3.6-pgo.patch) > 6. ./configure > 7. make > > To disable PGO > 7b. make disable-profile-opt > > In the following, please find our sample performance results from latest > XEON machine, XEON Broadwell EP. > Hardware (HW): Intel XEON (Broadwell) 8 Cores > > BIOS settings: Intel Turbo Boost Technology: false > Hyper-Threading: false > > Operating System: Ubuntu 14.04.3 LTS trusty > > OS configuration: CPU freq set at fixed: 2.6GHz by > echo 2600000 > > /sys/devices/system/cpu/cpu*/cpufreq/scaling_min_freq > echo 2600000 > > /sys/devices/system/cpu/cpu*/cpufreq/scaling_max_freq > Address Space Layout Randomization (ASLR) disabled (to > reduce run to run variation) by > echo 0 > /proc/sys/kernel/randomize_va_space > > GCC version: gcc version 4.8.4 (Ubuntu 4.8.4-2ubuntu1~14.04) > > Benchmark: Grand Unified Python Benchmark (GUPB) > GUPB Source: https://hg.python.org/benchmarks/ > > Python2.7 results: > Python source: hg clone https://hg.python.org/cpython cpython > Python Source: hg update 2.7 > hg id: 0511b1165bb6 (2.7) > hg id -r 'ancestors(.) and tag()': 15c95b7d81dc (2.7) v2.7.10 > hg --debug id -i: 0511b1165bb6cf40ada0768a7efc7ba89316f6a5 > > Benchmarks Speedup(%) > simple_logging 20 > raytrace 20 > silent_logging 19 > richards 19 > chaos 16 > formatted_logging 16 > json_dump 15 > hexiom2 13 > pidigits 12 > slowunpickle 12 > django_v2 12 > unpack_sequence 11 > float 11 > mako 11 > slowpickle 11 > fastpickle 11 > django 11 > go 10 > json_dump_v2 10 > pathlib 10 > regex_compile 10 > pybench 9.9 > etree_process 9 > regex_v8 8 > bzr_startup 8 > 2to3 8 > slowspitfire 8 > telco 8 > pickle_list 8 > fannkuch 8 > etree_iterparse 8 > nqueens 8 > mako_v2 8 > etree_generate 8 > call_method_slots 7 > html5lib_warmup 7 > html5lib 7 > nbody 7 > spectral_norm 7 > spambayes 7 > fastunpickle 6 > meteor_contest 6 > chameleon 6 > rietveld 6 > tornado_http 5 > unpickle_list 5 > pickle_dict 4 > regex_effbot 3 > normal_startup 3 > startup_nosite 3 > etree_parse 2 > call_method_unknown 2 > call_simple 1 > json_load 1 > call_method 1 > > Python3.6 results > Python source: hg clone https://hg.python.org/cpython cpython > hg id: 96d016f78726 tip > hg id -r 'ancestors(.) 
and tag()': 1a58b1227501 (3.5) v3.5.0rc1 > hg --debug id -i: 96d016f78726afbf66d396f084b291ea43792af1 > > > Benchmark Speedup(%) > fastunpickle 22.94 > fastpickle 21.67 > json_load 17.64 > simple_logging 17.49 > meteor_contest 16.67 > formatted_logging 15.33 > etree_process 14.61 > raytrace 13.57 > etree_generate 13.56 > chaos 12.09 > hexiom2 12 > nbody 11.88 > json_dump_v2 11.24 > richards 11.02 > nqueens 10.96 > fannkuch 10.79 > go 10.77 > float 10.26 > regex_compile 9.8 > silent_logging 9.63 > pidigits 9.58 > etree_iterparse 9.48 > 2to3 8.44 > regex_v8 8.09 > regex_effbot 7.88 > call_simple 7.63 > tornado_http 7.38 > etree_parse 4.92 > spectral_norm 4.72 > normal_startup 4.39 > telco 3.88 > startup_nosite 3.7 > call_method 3.63 > unpack_sequence 3.6 > call_method_slots 2.91 > call_method_unknown 2.59 > iterative_count 0.45 > threaded_count -2.79 > > > Thank you, > Alecsandru > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/guido%40python.org > > > > -- > --Guido van Rossum (python.org/~guido) > -- --Guido van Rossum (python.org/~guido) -------------- next part -------------- An HTML attachment was scrubbed... URL: From alecsandru.patrascu at intel.com Sat Aug 22 18:58:12 2015 From: alecsandru.patrascu at intel.com (Patrascu, Alecsandru) Date: Sat, 22 Aug 2015 16:58:12 +0000 Subject: [Python-Dev] Profile Guided Optimization active by-default In-Reply-To: References: <3CF256F4F774BD48A1691D131AA04319141C0795@IRSMSX102.ger.corp.intel.com> Message-ID: <3CF256F4F774BD48A1691D131AA04319141C0C77@IRSMSX102.ger.corp.intel.com> This target replaces the existing one in the CPython Makefile, which now uses a quick run of pybench and the obtained binary does not perform well on general Python loads. I don't think is a good idea to add a by-default target that does PGO on dedicated workloads, like Django, because then it will perform better on that particular load and poorly on other. Of course, if any user has a dedicated workload for which he or she want to get the best benefit over PGO, it will have to run that training separately from the proposed one. Our proposal targets the broader audience that uses Python in various scenarios, and they will see an overall improvement after compiling Python from sources. Alecsandru From: Brett Cannon [mailto:brett at python.org] Sent: Saturday, August 22, 2015 7:25 PM To: guido at python.org; Patrascu, Alecsandru Cc: python-dev at python.org Subject: Re: [Python-Dev] Profile Guided Optimization active by-default On Sat, Aug 22, 2015, 09:17?Guido van Rossum wrote: How about we first add a new Makefile target that enables PGO, without turning it on by default? Then later we can enable it by default. I agree. Updating the Makefile so it's easier to use PGO is great, but we should do a release with it as opt-in and go from there. Also, I have my doubts about regrtest. How sure are we that it represents a typical Python load? Tests are often using a different mix of operations than production code. That was also my question. You said that "it provides the best performance improvement", but compared to what; what else was tried? And what difference does it make to e.g. a Django app that is trained on their own simulated workload compared to using regrtest? 
IOW is regrtest displaying the best across-the-board performance because it stresses the largest swath of Python and thus catches generic patterns in the code but individuals could get better performance with a simulated workload? -Brett On Sat, Aug 22, 2015 at 7:46 AM, Patrascu, Alecsandru wrote: Hi All, This is Alecsandru from Server Scripting Languages Optimization team at Intel Corporation. I would like to submit a request to turn-on Profile Guided Optimization or PGO as the default build option for Python (both 2.7 and 3.6), given its performance benefits on a wide variety of workloads and hardware.? For instance, as shown from attached sample performance results from the Grand Unified Python Benchmark, >20% speed up was observed.? In addition, we are seeing 2-9% performance boost from OpenStack/Swift where more than 60% of the codes are in Python 2.7. Our analysis indicates the performance gain was mainly due to reduction of icache misses and CPU front-end stalls. Attached is the Makefile patches that modify the all build target and adds a new one called "disable-profile-opt". We built and tested this patch for Python 2.7 and 3.6 on our Linux machines (CentOS 7/Ubuntu Server 14.04, Intel Xeon Haswell/Broadwell with 18/8 cores).? We use "regrtest" suite for training as it provides the best performance improvement.? Some of the test programs in the suite may fail which leads to build fail.? One solution is to disable the specific failed test using the "-x " flag (as shown in the patch) Steps to apply the patch: 1.? hg clone https://hg.python.org/cpython cpython 2.? cd cpython 3.? hg update 2.7 (needed for 2.7 only) 4.? Copy *.patch to the current directory 5.? patch < python2.7-pgo.patch (or patch < python3.6-pgo.patch) 6.? ./configure 7.? make To disable PGO 7b. make disable-profile-opt In the following, please find our sample performance results from latest XEON machine, XEON Broadwell EP. Hardware (HW):? ? ? Intel XEON (Broadwell) 8 Cores BIOS settings:? ? ? Intel Turbo Boost Technology: false ? ? ? ? ? ? ? ? ? ? Hyper-Threading: false Operating System:? ?Ubuntu 14.04.3 LTS trusty OS configuration:? ?CPU freq set at fixed: 2.6GHz by ? ? ? ? ? ? ? ? ? ? ? ? echo 2600000 > /sys/devices/system/cpu/cpu*/cpufreq/scaling_min_freq ? ? ? ? ? ? ? ? ? ? ? ? echo 2600000 > /sys/devices/system/cpu/cpu*/cpufreq/scaling_max_freq ? ? ? ? ? ? ? ? ? ? Address Space Layout Randomization (ASLR) disabled (to reduce run to run variation) by ? ? ? ? ? ? ? ? ? ? ? ? echo 0 > /proc/sys/kernel/randomize_va_space GCC version:? ? ? ? gcc version 4.8.4 (Ubuntu 4.8.4-2ubuntu1~14.04) Benchmark:? ? ? ? ? Grand Unified Python Benchmark (GUPB) ? ? ? ? ? ? ? ? ? ? GUPB Source: https://hg.python.org/benchmarks/ Python2.7 results: ? ? Python source: hg clone https://hg.python.org/cpython cpython ? ? Python Source: hg update 2.7 ? ? hg id: 0511b1165bb6 (2.7) ? ? hg id -r 'ancestors(.) and tag()': 15c95b7d81dc (2.7) v2.7.10 ? ? hg --debug id -i: 0511b1165bb6cf40ada0768a7efc7ba89316f6a5 ? ? ? ? Benchmarks? ? ? ? ? Speedup(%) ? ? ? ? simple_logging? ? ? 20 ? ? ? ? raytrace? ? ? ? ? ? 20 ? ? ? ? silent_logging? ? ? 19 ? ? ? ? richards? ? ? ? ? ? 19 ? ? ? ? chaos? ? ? ? ? ? ? ?16 ? ? ? ? formatted_logging? ?16 ? ? ? ? json_dump? ? ? ? ? ?15 ? ? ? ? hexiom2? ? ? ? ? ? ?13 ? ? ? ? pidigits? ? ? ? ? ? 12 ? ? ? ? slowunpickle? ? ? ? 12 ? ? ? ? django_v2? ? ? ? ? ?12 ? ? ? ? unpack_sequence? ? ?11 ? ? ? ? float? ? ? ? ? ? ? ?11 ? ? ? ? mako? ? ? ? ? ? ? ? 11 ? ? ? ? slowpickle? ? ? ? ? 11 ? ? ? ? fastpickle? ? ? ? ? 11 ? ? ? ? 
django? ? ? ? ? ? ? 11 ? ? ? ? go? ? ? ? ? ? ? ? ? 10 ? ? ? ? json_dump_v2? ? ? ? 10 ? ? ? ? pathlib? ? ? ? ? ? ?10 ? ? ? ? regex_compile? ? ? ?10 ? ? ? ? pybench? ? ? ? ? ? ?9.9 ? ? ? ? etree_process? ? ? ?9 ? ? ? ? regex_v8? ? ? ? ? ? 8 ? ? ? ? bzr_startup? ? ? ? ?8 ? ? ? ? 2to3? ? ? ? ? ? ? ? 8 ? ? ? ? slowspitfire? ? ? ? 8 ? ? ? ? telco? ? ? ? ? ? ? ?8 ? ? ? ? pickle_list? ? ? ? ?8 ? ? ? ? fannkuch? ? ? ? ? ? 8 ? ? ? ? etree_iterparse? ? ?8 ? ? ? ? nqueens? ? ? ? ? ? ?8 ? ? ? ? mako_v2? ? ? ? ? ? ?8 ? ? ? ? etree_generate? ? ? 8 ? ? ? ? call_method_slots? ?7 ? ? ? ? html5lib_warmup? ? ?7 ? ? ? ? html5lib? ? ? ? ? ? 7 ? ? ? ? nbody? ? ? ? ? ? ? ?7 ? ? ? ? spectral_norm? ? ? ?7 ? ? ? ? spambayes? ? ? ? ? ?7 ? ? ? ? fastunpickle? ? ? ? 6 ? ? ? ? meteor_contest? ? ? 6 ? ? ? ? chameleon? ? ? ? ? ?6 ? ? ? ? rietveld? ? ? ? ? ? 6 ? ? ? ? tornado_http? ? ? ? 5 ? ? ? ? unpickle_list? ? ? ?5 ? ? ? ? pickle_dict? ? ? ? ?4 ? ? ? ? regex_effbot? ? ? ? 3 ? ? ? ? normal_startup? ? ? 3 ? ? ? ? startup_nosite? ? ? 3 ? ? ? ? etree_parse? ? ? ? ?2 ? ? ? ? call_method_unknown 2 ? ? ? ? call_simple? ? ? ? ?1 ? ? ? ? json_load? ? ? ? ? ?1 ? ? ? ? call_method? ? ? ? ?1 Python3.6 results ? ? Python source: hg clone https://hg.python.org/cpython cpython ? ? hg id: 96d016f78726 tip ? ? hg id -r 'ancestors(.) and tag()': 1a58b1227501 (3.5) v3.5.0rc1 ? ? hg --debug id -i: 96d016f78726afbf66d396f084b291ea43792af1 ? ? ? ? Benchmark? ? ? ? ? ?Speedup(%) ? ? ? ? fastunpickle? ? ? ? 22.94 ? ? ? ? fastpickle? ? ? ? ? 21.67 ? ? ? ? json_load? ? ? ? ? ?17.64 ? ? ? ? simple_logging? ? ? 17.49 ? ? ? ? meteor_contest? ? ? 16.67 ? ? ? ? formatted_logging? ?15.33 ? ? ? ? etree_process? ? ? ?14.61 ? ? ? ? raytrace? ? ? ? ? ? 13.57 ? ? ? ? etree_generate? ? ? 13.56 ? ? ? ? chaos? ? ? ? ? ? ? ?12.09 ? ? ? ? hexiom2? ? ? ? ? ? ?12 ? ? ? ? nbody? ? ? ? ? ? ? ?11.88 ? ? ? ? json_dump_v2? ? ? ? 11.24 ? ? ? ? richards? ? ? ? ? ? 11.02 ? ? ? ? nqueens? ? ? ? ? ? ?10.96 ? ? ? ? fannkuch? ? ? ? ? ? 10.79 ? ? ? ? go? ? ? ? ? ? ? ? ? 10.77 ? ? ? ? float? ? ? ? ? ? ? ?10.26 ? ? ? ? regex_compile? ? ? ?9.8 ? ? ? ? silent_logging? ? ? 9.63 ? ? ? ? pidigits? ? ? ? ? ? 9.58 ? ? ? ? etree_iterparse? ? ?9.48 ? ? ? ? 2to3? ? ? ? ? ? ? ? 8.44 ? ? ? ? regex_v8? ? ? ? ? ? 8.09 ? ? ? ? regex_effbot? ? ? ? 7.88 ? ? ? ? call_simple? ? ? ? ?7.63 ? ? ? ? tornado_http? ? ? ? 7.38 ? ? ? ? etree_parse? ? ? ? ?4.92 ? ? ? ? spectral_norm? ? ? ?4.72 ? ? ? ? normal_startup? ? ? 4.39 ? ? ? ? telco? ? ? ? ? ? ? ?3.88 ? ? ? ? startup_nosite? ? ? 3.7 ? ? ? ? call_method? ? ? ? ?3.63 ? ? ? ? unpack_sequence? ? ?3.6 ? ? ? ? call_method_slots? ?2.91 ? ? ? ? call_method_unknown 2.59 ? ? ? ? iterative_count? ? ?0.45 ? ? ? ? threaded_count? ? ? 
-2.79 Thank you, Alecsandru _______________________________________________ Python-Dev mailing list Python-Dev at python.org https://mail.python.org/mailman/listinfo/python-dev Unsubscribe: https://mail.python.org/mailman/options/python-dev/guido%40python.org -- --Guido van Rossum (python.org/~guido) _______________________________________________ Python-Dev mailing list Python-Dev at python.org https://mail.python.org/mailman/listinfo/python-dev Unsubscribe: https://mail.python.org/mailman/options/python-dev/brett%40python.org From alecsandru.patrascu at intel.com Sat Aug 22 19:07:28 2015 From: alecsandru.patrascu at intel.com (Patrascu, Alecsandru) Date: Sat, 22 Aug 2015 17:07:28 +0000 Subject: [Python-Dev] Profile Guided Optimization active by-default In-Reply-To: References: <3CF256F4F774BD48A1691D131AA04319141C0795@IRSMSX102.ger.corp.intel.com> <3CF256F4F774BD48A1691D131AA04319141C0C4F@IRSMSX102.ger.corp.intel.com> Message-ID: <3CF256F4F774BD48A1691D131AA04319141C0C88@IRSMSX102.ger.corp.intel.com> A trial period on numerous other Python loads in which the provided patches are tested is welcomed, to be sure that it works as presented. Yes, it is easy to change it to use a different training set, or subsets of the regrtest by adding additional parameters to the line inside the Makefile that runs it. Now, the attached patches run the full regrtest suite. Alecsandru From: gvanrossum at gmail.com [mailto:gvanrossum at gmail.com] On Behalf Of Guido van Rossum Sent: Saturday, August 22, 2015 7:56 PM To: Patrascu, Alecsandru Cc: python-dev at python.org Subject: Re: [Python-Dev] Profile Guided Optimization active by-default I'm sorry, but we're just not going to turn this on by default without doing a trial period ourselves. Your (and Intel's) contribution is very welcome, but in order to establish trust in a feature like this, an optional trial period is absolutely required. Regarding the training set, I agree that regrtest sounds to be better than pybench. If we make this an opt-in change, we can experiment with different training sets easily. (Also, I haven't seen the patch yet, but I presume it's easy to use a different training set? Experimentation should be encouraged.) On Sat, Aug 22, 2015 at 9:40 AM, Patrascu, Alecsandru wrote: Hello and thank you for your feedback. We have measured PGO gain using other workloads also. Our initial choice for this optimization was pybench, but the speedup obtained was lower than using regrtest and it didn't cover a lot of Python scenarios. Instead, regrtest has an uniform distribution for the tests and the resulting binary is overall much faster than the default, or trained using other workloads, and thus covering a larger pool of Python loads. This optimization was also tested on a production environments running OpenStack Swift and got up to 9% improvements. The reason we proposed this target to be always on is that the obtained optimized binary is better out of the box for the general cases. Alecsandru From: gvanrossum at gmail.com [mailto:gvanrossum at gmail.com] On Behalf Of Guido van Rossum Sent: Saturday, August 22, 2015 7:15 PM To: Patrascu, Alecsandru Cc: python-dev at python.org Subject: Re: [Python-Dev] Profile Guided Optimization active by-default How about we first add a new Makefile target that enables PGO, without turning it on by default? Then later we can enable it by default. Also, I have my doubts about regrtest. How sure are we that it represents a typical Python load? 
Tests are often using a different mix of operations than production code. On Sat, Aug 22, 2015 at 7:46 AM, Patrascu, Alecsandru wrote: Hi All, This is Alecsandru from Server Scripting Languages Optimization team at Intel Corporation. I would like to submit a request to turn-on Profile Guided Optimization or PGO as the default build option for Python (both 2.7 and 3.6), given its performance benefits on a wide variety of workloads and hardware.? For instance, as shown from attached sample performance results from the Grand Unified Python Benchmark, >20% speed up was observed.? In addition, we are seeing 2-9% performance boost from OpenStack/Swift where more than 60% of the codes are in Python 2.7. Our analysis indicates the performance gain was mainly due to reduction of icache misses and CPU front-end stalls. Attached is the Makefile patches that modify the all build target and adds a new one called "disable-profile-opt". We built and tested this patch for Python 2.7 and 3.6 on our Linux machines (CentOS 7/Ubuntu Server 14.04, Intel Xeon Haswell/Broadwell with 18/8 cores).? We use "regrtest" suite for training as it provides the best performance improvement.? Some of the test programs in the suite may fail which leads to build fail.? One solution is to disable the specific failed test using the "-x " flag (as shown in the patch) Steps to apply the patch: 1.? hg clone https://hg.python.org/cpython cpython 2.? cd cpython 3.? hg update 2.7 (needed for 2.7 only) 4.? Copy *.patch to the current directory 5.? patch < python2.7-pgo.patch (or patch < python3.6-pgo.patch) 6.? ./configure 7.? make To disable PGO 7b. make disable-profile-opt In the following, please find our sample performance results from latest XEON machine, XEON Broadwell EP. Hardware (HW):? ? ? Intel XEON (Broadwell) 8 Cores BIOS settings:? ? ? Intel Turbo Boost Technology: false ? ? ? ? ? ? ? ? ? ? Hyper-Threading: false Operating System:? ?Ubuntu 14.04.3 LTS trusty OS configuration:? ?CPU freq set at fixed: 2.6GHz by ? ? ? ? ? ? ? ? ? ? ? ? echo 2600000 > /sys/devices/system/cpu/cpu*/cpufreq/scaling_min_freq ? ? ? ? ? ? ? ? ? ? ? ? echo 2600000 > /sys/devices/system/cpu/cpu*/cpufreq/scaling_max_freq ? ? ? ? ? ? ? ? ? ? Address Space Layout Randomization (ASLR) disabled (to reduce run to run variation) by ? ? ? ? ? ? ? ? ? ? ? ? echo 0 > /proc/sys/kernel/randomize_va_space GCC version:? ? ? ? gcc version 4.8.4 (Ubuntu 4.8.4-2ubuntu1~14.04) Benchmark:? ? ? ? ? Grand Unified Python Benchmark (GUPB) ? ? ? ? ? ? ? ? ? ? GUPB Source: https://hg.python.org/benchmarks/ Python2.7 results: ? ? Python source: hg clone https://hg.python.org/cpython cpython ? ? Python Source: hg update 2.7 ? ? hg id: 0511b1165bb6 (2.7) ? ? hg id -r 'ancestors(.) and tag()': 15c95b7d81dc (2.7) v2.7.10 ? ? hg --debug id -i: 0511b1165bb6cf40ada0768a7efc7ba89316f6a5 ? ? ? ? Benchmarks? ? ? ? ? Speedup(%) ? ? ? ? simple_logging? ? ? 20 ? ? ? ? raytrace? ? ? ? ? ? 20 ? ? ? ? silent_logging? ? ? 19 ? ? ? ? richards? ? ? ? ? ? 19 ? ? ? ? chaos? ? ? ? ? ? ? ?16 ? ? ? ? formatted_logging? ?16 ? ? ? ? json_dump? ? ? ? ? ?15 ? ? ? ? hexiom2? ? ? ? ? ? ?13 ? ? ? ? pidigits? ? ? ? ? ? 12 ? ? ? ? slowunpickle? ? ? ? 12 ? ? ? ? django_v2? ? ? ? ? ?12 ? ? ? ? unpack_sequence? ? ?11 ? ? ? ? float? ? ? ? ? ? ? ?11 ? ? ? ? mako? ? ? ? ? ? ? ? 11 ? ? ? ? slowpickle? ? ? ? ? 11 ? ? ? ? fastpickle? ? ? ? ? 11 ? ? ? ? django? ? ? ? ? ? ? 11 ? ? ? ? go? ? ? ? ? ? ? ? ? 10 ? ? ? ? json_dump_v2? ? ? ? 10 ? ? ? ? pathlib? ? ? ? ? ? ?10 ? ? ? ? regex_compile? ? ? ?10 ? ? ? ? pybench? ? 
? ? ? ? ?9.9 ? ? ? ? etree_process? ? ? ?9 ? ? ? ? regex_v8? ? ? ? ? ? 8 ? ? ? ? bzr_startup? ? ? ? ?8 ? ? ? ? 2to3? ? ? ? ? ? ? ? 8 ? ? ? ? slowspitfire? ? ? ? 8 ? ? ? ? telco? ? ? ? ? ? ? ?8 ? ? ? ? pickle_list? ? ? ? ?8 ? ? ? ? fannkuch? ? ? ? ? ? 8 ? ? ? ? etree_iterparse? ? ?8 ? ? ? ? nqueens? ? ? ? ? ? ?8 ? ? ? ? mako_v2? ? ? ? ? ? ?8 ? ? ? ? etree_generate? ? ? 8 ? ? ? ? call_method_slots? ?7 ? ? ? ? html5lib_warmup? ? ?7 ? ? ? ? html5lib? ? ? ? ? ? 7 ? ? ? ? nbody? ? ? ? ? ? ? ?7 ? ? ? ? spectral_norm? ? ? ?7 ? ? ? ? spambayes? ? ? ? ? ?7 ? ? ? ? fastunpickle? ? ? ? 6 ? ? ? ? meteor_contest? ? ? 6 ? ? ? ? chameleon? ? ? ? ? ?6 ? ? ? ? rietveld? ? ? ? ? ? 6 ? ? ? ? tornado_http? ? ? ? 5 ? ? ? ? unpickle_list? ? ? ?5 ? ? ? ? pickle_dict? ? ? ? ?4 ? ? ? ? regex_effbot? ? ? ? 3 ? ? ? ? normal_startup? ? ? 3 ? ? ? ? startup_nosite? ? ? 3 ? ? ? ? etree_parse? ? ? ? ?2 ? ? ? ? call_method_unknown 2 ? ? ? ? call_simple? ? ? ? ?1 ? ? ? ? json_load? ? ? ? ? ?1 ? ? ? ? call_method? ? ? ? ?1 Python3.6 results ? ? Python source: hg clone https://hg.python.org/cpython cpython ? ? hg id: 96d016f78726 tip ? ? hg id -r 'ancestors(.) and tag()': 1a58b1227501 (3.5) v3.5.0rc1 ? ? hg --debug id -i: 96d016f78726afbf66d396f084b291ea43792af1 ? ? ? ? Benchmark? ? ? ? ? ?Speedup(%) ? ? ? ? fastunpickle? ? ? ? 22.94 ? ? ? ? fastpickle? ? ? ? ? 21.67 ? ? ? ? json_load? ? ? ? ? ?17.64 ? ? ? ? simple_logging? ? ? 17.49 ? ? ? ? meteor_contest? ? ? 16.67 ? ? ? ? formatted_logging? ?15.33 ? ? ? ? etree_process? ? ? ?14.61 ? ? ? ? raytrace? ? ? ? ? ? 13.57 ? ? ? ? etree_generate? ? ? 13.56 ? ? ? ? chaos? ? ? ? ? ? ? ?12.09 ? ? ? ? hexiom2? ? ? ? ? ? ?12 ? ? ? ? nbody? ? ? ? ? ? ? ?11.88 ? ? ? ? json_dump_v2? ? ? ? 11.24 ? ? ? ? richards? ? ? ? ? ? 11.02 ? ? ? ? nqueens? ? ? ? ? ? ?10.96 ? ? ? ? fannkuch? ? ? ? ? ? 10.79 ? ? ? ? go? ? ? ? ? ? ? ? ? 10.77 ? ? ? ? float? ? ? ? ? ? ? ?10.26 ? ? ? ? regex_compile? ? ? ?9.8 ? ? ? ? silent_logging? ? ? 9.63 ? ? ? ? pidigits? ? ? ? ? ? 9.58 ? ? ? ? etree_iterparse? ? ?9.48 ? ? ? ? 2to3? ? ? ? ? ? ? ? 8.44 ? ? ? ? regex_v8? ? ? ? ? ? 8.09 ? ? ? ? regex_effbot? ? ? ? 7.88 ? ? ? ? call_simple? ? ? ? ?7.63 ? ? ? ? tornado_http? ? ? ? 7.38 ? ? ? ? etree_parse? ? ? ? ?4.92 ? ? ? ? spectral_norm? ? ? ?4.72 ? ? ? ? normal_startup? ? ? 4.39 ? ? ? ? telco? ? ? ? ? ? ? ?3.88 ? ? ? ? startup_nosite? ? ? 3.7 ? ? ? ? call_method? ? ? ? ?3.63 ? ? ? ? unpack_sequence? ? ?3.6 ? ? ? ? call_method_slots? ?2.91 ? ? ? ? call_method_unknown 2.59 ? ? ? ? iterative_count? ? ?0.45 ? ? ? ? threaded_count? ? ? -2.79 Thank you, Alecsandru _______________________________________________ Python-Dev mailing list Python-Dev at python.org https://mail.python.org/mailman/listinfo/python-dev Unsubscribe: https://mail.python.org/mailman/options/python-dev/guido%40python.org -- --Guido van Rossum (python.org/~guido) -- --Guido van Rossum (python.org/~guido) From stefan_ml at behnel.de Sat Aug 22 19:25:02 2015 From: stefan_ml at behnel.de (Stefan Behnel) Date: Sat, 22 Aug 2015 19:25:02 +0200 Subject: [Python-Dev] Profile Guided Optimization active by-default In-Reply-To: References: <3CF256F4F774BD48A1691D131AA04319141C0795@IRSMSX102.ger.corp.intel.com> <3CF256F4F774BD48A1691D131AA04319141C0C4F@IRSMSX102.ger.corp.intel.com> Message-ID: Guido van Rossum schrieb am 22.08.2015 um 18:55: > Regarding the training set, I agree that regrtest sounds to be better than > pybench. If we make this an opt-in change, we can experiment with different > training sets easily. 
(Also, I haven't seen the patch yet, but I presume > it's easy to use a different training set? It's just one command in one line, yes. > Experimentation should be encouraged.) A well chosen training set can have a notable impact on PGO compiled code in general, and switching from pybench to regrtests should make such a difference. However, since CPython's overall performance is mostly determined by the interpreter loop, general object operations (getattr!) and the basic builtin types, of which the regression test suite makes plenty of use, it is rather unlikely that other training sets would provide substantially better performance for Python code execution. Note also that Ubuntu has shipped PGO builds based on the regrtests for years, and they seemed to be quite happy with it. Stefan From ericsnowcurrently at gmail.com Sat Aug 22 19:25:59 2015 From: ericsnowcurrently at gmail.com (Eric Snow) Date: Sat, 22 Aug 2015 11:25:59 -0600 Subject: [Python-Dev] Profile Guided Optimization active by-default In-Reply-To: <3CF256F4F774BD48A1691D131AA04319141C0795@IRSMSX102.ger.corp.intel.com> References: <3CF256F4F774BD48A1691D131AA04319141C0795@IRSMSX102.ger.corp.intel.com> Message-ID: On Aug 22, 2015 9:02 AM, "Patrascu, Alecsandru" < alecsandru.patrascu at intel.com> wrote: [snip] > For instance, as shown from attached sample performance results from the Grand Unified Python Benchmark, >20% speed up was observed. Are you referring to the tests in the benchmarks repo? [1] How does the real-world performance improvement compare with other languages you are targeting for optimization? And thanks for working on this! I have several more questions: What sorts of future changes in CPython's code might interfere with your optimizations? What future additions might stand to benefit? What changes in existing code might improve optimization opportunities? What is the added maintenance burden of the optimizations on CPython, if any? What is the performance impact on non-Intel architectures? What about older Intel architectures? ...and future ones? What is Intel's commitment to supporting these (or other) optimizations in the future? How is the practical EOL of the optimizations managed? Finally, +1 on adding an opt-in Makefile target rather than enabling the optimizations by default. Thanks again! -eric [1] https://hg.python.org/benchmarks/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefan_ml at behnel.de Sat Aug 22 19:46:14 2015 From: stefan_ml at behnel.de (Stefan Behnel) Date: Sat, 22 Aug 2015 19:46:14 +0200 Subject: [Python-Dev] Profile Guided Optimization active by-default In-Reply-To: References: <3CF256F4F774BD48A1691D131AA04319141C0795@IRSMSX102.ger.corp.intel.com> <3CF256F4F774BD48A1691D131AA04319141C0C4F@IRSMSX102.ger.corp.intel.com> Message-ID: Stefan Behnel schrieb am 22.08.2015 um 19:25: > Guido van Rossum schrieb am 22.08.2015 um 18:55: >> Regarding the training set, I agree that regrtest sounds to be better than >> pybench. If we make this an opt-in change, we can experiment with different >> training sets easily. (Also, I haven't seen the patch yet, but I presume >> it's easy to use a different training set? >> Experimentation should be encouraged.) > > A well chosen training set can have a notable impact on PGO compiled code > in general, and switching from pybench to regrtests should make such a > difference. However, since CPython's overall performance is mostly > determined by the interpreter loop, general object operations (getattr!) 
> and the basic builtin types, of which the regression test suite makes > plenty of use, it is rather unlikely that other training sets would provide > substantially better performance for Python code execution. Note that this doesn't mean that it's a good workload for the C code in the standard library (and I guess that's why Alecsandru initially excluded the hashlib tests). Improvements on that front might still be possible. But it's certainly a good workload for all the rest, i.e. for executing general Python code. Stefan From alecsandru.patrascu at intel.com Sat Aug 22 19:53:46 2015 From: alecsandru.patrascu at intel.com (Patrascu, Alecsandru) Date: Sat, 22 Aug 2015 17:53:46 +0000 Subject: [Python-Dev] Profile Guided Optimization active by-default In-Reply-To: References: <3CF256F4F774BD48A1691D131AA04319141C0795@IRSMSX102.ger.corp.intel.com> Message-ID: <3CF256F4F774BD48A1691D131AA04319141C0CB1@IRSMSX102.ger.corp.intel.com> Yes, the results are measured from running the benchmarks from the repo [1]. Furthermore, this optimization is generic and can handle any kind of changes in hardware or the CPython 2/3 source code. We are not adding to or modifying regrtest and our rule will be applied on the latest tests existing in the CPython repo. Since they are up to date and being easy to be executed, this proposal makes sure that users will always take benefit from them. [1] https://hg.python.org/benchmarks/ Alecsandru From: Eric Snow [mailto:ericsnowcurrently at gmail.com] Sent: Saturday, August 22, 2015 8:26 PM To: Patrascu, Alecsandru Cc: Python-Dev Subject: Re: [Python-Dev] Profile Guided Optimization active by-default On Aug 22, 2015 9:02 AM, "Patrascu, Alecsandru" wrote: [snip]? > For instance, as shown from attached sample performance results from the Grand Unified Python Benchmark, >20% speed up was observed. Are you referring to the tests in the benchmarks repo? [1] How does the real-world performance improvement compare with other languages you are targeting for optimization? And thanks for working on this!? I have several more questions: What sorts of future changes in CPython's code might interfere with your optimizations? What future additions might stand to benefit? What changes in existing code might improve optimization opportunities? What is the added maintenance burden of the optimizations on CPython, if any? What is the performance impact on non-Intel architectures?? What about older Intel architectures?? ...and future ones? What is Intel's commitment to supporting these (or other) optimizations in the future?? How is the practical EOL of the optimizations managed? Finally, +1 on adding an opt-in Makefile target rather than enabling the optimizations by default. Thanks again! -eric [1] https://hg.python.org/benchmarks/ From brett at python.org Sat Aug 22 19:50:18 2015 From: brett at python.org (Brett Cannon) Date: Sat, 22 Aug 2015 17:50:18 +0000 Subject: [Python-Dev] Profile Guided Optimization active by-default In-Reply-To: <3CF256F4F774BD48A1691D131AA04319141C0C77@IRSMSX102.ger.corp.intel.com> References: <3CF256F4F774BD48A1691D131AA04319141C0795@IRSMSX102.ger.corp.intel.com> <3CF256F4F774BD48A1691D131AA04319141C0C77@IRSMSX102.ger.corp.intel.com> Message-ID: On Sat, Aug 22, 2015, 09:58 Patrascu, Alecsandru < alecsandru.patrascu at intel.com> wrote: This target replaces the existing one in the CPython Makefile, which now uses a quick run of pybench and the obtained binary does not perform well on general Python loads. 
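Because the training run is a single line in the Makefile, the workload is easy to swap. Assuming it is exposed through a variable (PROFILE_TASK below is an illustrative name, not necessarily the one used in the patch; test_xyz and the workload path are likewise placeholders), a build could pick its own training set:

    make                                               # PGO build, trained on the default task
    make PROFILE_TASK="-m test.regrtest -x test_xyz"   # exclude a failing test from the training run
    make PROFILE_TASK="/path/to/site_workload.py"      # train on a site-specific workload instead
    make disable-profile-opt                           # plain build, no PGO at all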
I don't think is a good idea to add a by-default target that does PGO on dedicated workloads, like Django, because then it will perform better on that particular load and poorly on other. Sorry for not being clearer, but I was not suggesting that the default be for Django, just whether making the Makefile easier to work with when generating a PGO build for a custom workload. If we already have a rule that uses pybench then it should definitely be changed to use regrtest (and honestly pybench should not be used for benchmarking anything since it doesn't reflect real world usage in any way; its just for quick checks while doing development on the core of Python and otherwise shouldn't be used to measure anything substantial). Of course, if any user has a dedicated workload for which he or she want to get the best benefit over PGO, it will have to run that training separately from the proposed one. Our proposal targets the broader audience that uses Python in various scenarios, and they will see an overall improvement after compiling Python from sources. Right, but my question was whether there was any benefit to making the Makefile rules generic to make building PGO binaries easier for people who do want to do a custom profile and it sounds like it isn't worth the effort. So I'm with Guido where I'm happy to see the build rules added/updated to use regrtest for a PGO build but have it be an opt-in flag and not on by default (at least for now). -Brett Alecsandru From: Brett Cannon [mailto:brett at python.org] Sent: Saturday, August 22, 2015 7:25 PM To: guido at python.org; Patrascu, Alecsandru Cc: python-dev at python.org Subject: Re: [Python-Dev] Profile Guided Optimization active by-default On Sat, Aug 22, 2015, 09:17 Guido van Rossum wrote: How about we first add a new Makefile target that enables PGO, without turning it on by default? Then later we can enable it by default. I agree. Updating the Makefile so it's easier to use PGO is great, but we should do a release with it as opt-in and go from there. Also, I have my doubts about regrtest. How sure are we that it represents a typical Python load? Tests are often using a different mix of operations than production code. That was also my question. You said that "it provides the best performance improvement", but compared to what; what else was tried? And what difference does it make to e.g. a Django app that is trained on their own simulated workload compared to using regrtest? IOW is regrtest displaying the best across-the-board performance because it stresses the largest swath of Python and thus catches generic patterns in the code but individuals could get better performance with a simulated workload? -Brett On Sat, Aug 22, 2015 at 7:46 AM, Patrascu, Alecsandru < alecsandru.patrascu at intel.com> wrote: Hi All, This is Alecsandru from Server Scripting Languages Optimization team at Intel Corporation. I would like to submit a request to turn-on Profile Guided Optimization or PGO as the default build option for Python (both 2.7 and 3.6), given its performance benefits on a wide variety of workloads and hardware. For instance, as shown from attached sample performance results from the Grand Unified Python Benchmark, >20% speed up was observed. In addition, we are seeing 2-9% performance boost from OpenStack/Swift where more than 60% of the codes are in Python 2.7. Our analysis indicates the performance gain was mainly due to reduction of icache misses and CPU front-end stalls. 
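Those two effects can be checked directly with Linux perf. The command below is only a sketch (event names vary with CPU model and kernel version, and it is not necessarily the methodology behind the numbers quoted here): run it once against a default build and once against a PGO build of the same sources and compare the counters.

    perf stat -e cycles,instructions,stalled-cycles-frontend,L1-icache-load-misses \
        ./python -m test.regrtest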
Attached is the Makefile patches that modify the all build target and adds a new one called "disable-profile-opt". We built and tested this patch for Python 2.7 and 3.6 on our Linux machines (CentOS 7/Ubuntu Server 14.04, Intel Xeon Haswell/Broadwell with 18/8 cores). We use "regrtest" suite for training as it provides the best performance improvement. Some of the test programs in the suite may fail which leads to build fail. One solution is to disable the specific failed test using the "-x " flag (as shown in the patch) Steps to apply the patch: 1. hg clone https://hg.python.org/cpython cpython 2. cd cpython 3. hg update 2.7 (needed for 2.7 only) 4. Copy *.patch to the current directory 5. patch < python2.7-pgo.patch (or patch < python3.6-pgo.patch) 6. ./configure 7. make To disable PGO 7b. make disable-profile-opt In the following, please find our sample performance results from latest XEON machine, XEON Broadwell EP. Hardware (HW): Intel XEON (Broadwell) 8 Cores BIOS settings: Intel Turbo Boost Technology: false Hyper-Threading: false Operating System: Ubuntu 14.04.3 LTS trusty OS configuration: CPU freq set at fixed: 2.6GHz by echo 2600000 > /sys/devices/system/cpu/cpu*/cpufreq/scaling_min_freq echo 2600000 > /sys/devices/system/cpu/cpu*/cpufreq/scaling_max_freq Address Space Layout Randomization (ASLR) disabled (to reduce run to run variation) by echo 0 > /proc/sys/kernel/randomize_va_space GCC version: gcc version 4.8.4 (Ubuntu 4.8.4-2ubuntu1~14.04) Benchmark: Grand Unified Python Benchmark (GUPB) GUPB Source: https://hg.python.org/benchmarks/ Python2.7 results: Python source: hg clone https://hg.python.org/cpython cpython Python Source: hg update 2.7 hg id: 0511b1165bb6 (2.7) hg id -r 'ancestors(.) and tag()': 15c95b7d81dc (2.7) v2.7.10 hg --debug id -i: 0511b1165bb6cf40ada0768a7efc7ba89316f6a5 Benchmarks Speedup(%) simple_logging 20 raytrace 20 silent_logging 19 richards 19 chaos 16 formatted_logging 16 json_dump 15 hexiom2 13 pidigits 12 slowunpickle 12 django_v2 12 unpack_sequence 11 float 11 mako 11 slowpickle 11 fastpickle 11 django 11 go 10 json_dump_v2 10 pathlib 10 regex_compile 10 pybench 9.9 etree_process 9 regex_v8 8 bzr_startup 8 2to3 8 slowspitfire 8 telco 8 pickle_list 8 fannkuch 8 etree_iterparse 8 nqueens 8 mako_v2 8 etree_generate 8 call_method_slots 7 html5lib_warmup 7 html5lib 7 nbody 7 spectral_norm 7 spambayes 7 fastunpickle 6 meteor_contest 6 chameleon 6 rietveld 6 tornado_http 5 unpickle_list 5 pickle_dict 4 regex_effbot 3 normal_startup 3 startup_nosite 3 etree_parse 2 call_method_unknown 2 call_simple 1 json_load 1 call_method 1 Python3.6 results Python source: hg clone https://hg.python.org/cpython cpython hg id: 96d016f78726 tip hg id -r 'ancestors(.) 
and tag()': 1a58b1227501 (3.5) v3.5.0rc1 hg --debug id -i: 96d016f78726afbf66d396f084b291ea43792af1 Benchmark Speedup(%) fastunpickle 22.94 fastpickle 21.67 json_load 17.64 simple_logging 17.49 meteor_contest 16.67 formatted_logging 15.33 etree_process 14.61 raytrace 13.57 etree_generate 13.56 chaos 12.09 hexiom2 12 nbody 11.88 json_dump_v2 11.24 richards 11.02 nqueens 10.96 fannkuch 10.79 go 10.77 float 10.26 regex_compile 9.8 silent_logging 9.63 pidigits 9.58 etree_iterparse 9.48 2to3 8.44 regex_v8 8.09 regex_effbot 7.88 call_simple 7.63 tornado_http 7.38 etree_parse 4.92 spectral_norm 4.72 normal_startup 4.39 telco 3.88 startup_nosite 3.7 call_method 3.63 unpack_sequence 3.6 call_method_slots 2.91 call_method_unknown 2.59 iterative_count 0.45 threaded_count -2.79 Thank you, Alecsandru _______________________________________________ Python-Dev mailing list Python-Dev at python.org https://mail.python.org/mailman/listinfo/python-dev Unsubscribe: https://mail.python.org/mailman/options/python-dev/guido%40python.org -- --Guido van Rossum (python.org/~guido) _______________________________________________ Python-Dev mailing list Python-Dev at python.org https://mail.python.org/mailman/listinfo/python-dev Unsubscribe: https://mail.python.org/mailman/options/python-dev/brett%40python.org -------------- next part -------------- An HTML attachment was scrubbed... URL: From brett at python.org Sat Aug 22 20:00:08 2015 From: brett at python.org (Brett Cannon) Date: Sat, 22 Aug 2015 18:00:08 +0000 Subject: [Python-Dev] Profile Guided Optimization active by-default In-Reply-To: <3CF256F4F774BD48A1691D131AA04319141C0795@IRSMSX102.ger.corp.intel.com> References: <3CF256F4F774BD48A1691D131AA04319141C0795@IRSMSX102.ger.corp.intel.com> Message-ID: I just realized I didn't see anyone say it, but please upload the patches to bugs.Python.org for easier tracking and reviewing. On Sat, Aug 22, 2015, 08:01 Patrascu, Alecsandru < alecsandru.patrascu at intel.com> wrote: > Hi All, > > This is Alecsandru from Server Scripting Languages Optimization team at > Intel Corporation. > > I would like to submit a request to turn-on Profile Guided Optimization or > PGO as the default build option for Python (both 2.7 and 3.6), given its > performance benefits on a wide variety of workloads and hardware. For > instance, as shown from attached sample performance results from the Grand > Unified Python Benchmark, >20% speed up was observed. In addition, we are > seeing 2-9% performance boost from OpenStack/Swift where more than 60% of > the codes are in Python 2.7. Our analysis indicates the performance gain > was mainly due to reduction of icache misses and CPU front-end stalls. > > Attached is the Makefile patches that modify the all build target and adds > a new one called "disable-profile-opt". We built and tested this patch for > Python 2.7 and 3.6 on our Linux machines (CentOS 7/Ubuntu Server 14.04, > Intel Xeon Haswell/Broadwell with 18/8 cores). We use "regrtest" suite for > training as it provides the best performance improvement. Some of the test > programs in the suite may fail which leads to build fail. One solution is > to disable the specific failed test using the "-x " flag (as shown in the > patch) > > Steps to apply the patch: > 1. hg clone https://hg.python.org/cpython cpython > 2. cd cpython > 3. hg update 2.7 (needed for 2.7 only) > 4. Copy *.patch to the current directory > 5. patch < python2.7-pgo.patch (or patch < python3.6-pgo.patch) > 6. ./configure > 7. make > > To disable PGO > 7b. 
make disable-profile-opt > > In the following, please find our sample performance results from latest > XEON machine, XEON Broadwell EP. > Hardware (HW): Intel XEON (Broadwell) 8 Cores > > BIOS settings: Intel Turbo Boost Technology: false > Hyper-Threading: false > > Operating System: Ubuntu 14.04.3 LTS trusty > > OS configuration: CPU freq set at fixed: 2.6GHz by > echo 2600000 > > /sys/devices/system/cpu/cpu*/cpufreq/scaling_min_freq > echo 2600000 > > /sys/devices/system/cpu/cpu*/cpufreq/scaling_max_freq > Address Space Layout Randomization (ASLR) disabled (to > reduce run to run variation) by > echo 0 > /proc/sys/kernel/randomize_va_space > > GCC version: gcc version 4.8.4 (Ubuntu 4.8.4-2ubuntu1~14.04) > > Benchmark: Grand Unified Python Benchmark (GUPB) > GUPB Source: https://hg.python.org/benchmarks/ > > Python2.7 results: > Python source: hg clone https://hg.python.org/cpython cpython > Python Source: hg update 2.7 > hg id: 0511b1165bb6 (2.7) > hg id -r 'ancestors(.) and tag()': 15c95b7d81dc (2.7) v2.7.10 > hg --debug id -i: 0511b1165bb6cf40ada0768a7efc7ba89316f6a5 > > Benchmarks Speedup(%) > simple_logging 20 > raytrace 20 > silent_logging 19 > richards 19 > chaos 16 > formatted_logging 16 > json_dump 15 > hexiom2 13 > pidigits 12 > slowunpickle 12 > django_v2 12 > unpack_sequence 11 > float 11 > mako 11 > slowpickle 11 > fastpickle 11 > django 11 > go 10 > json_dump_v2 10 > pathlib 10 > regex_compile 10 > pybench 9.9 > etree_process 9 > regex_v8 8 > bzr_startup 8 > 2to3 8 > slowspitfire 8 > telco 8 > pickle_list 8 > fannkuch 8 > etree_iterparse 8 > nqueens 8 > mako_v2 8 > etree_generate 8 > call_method_slots 7 > html5lib_warmup 7 > html5lib 7 > nbody 7 > spectral_norm 7 > spambayes 7 > fastunpickle 6 > meteor_contest 6 > chameleon 6 > rietveld 6 > tornado_http 5 > unpickle_list 5 > pickle_dict 4 > regex_effbot 3 > normal_startup 3 > startup_nosite 3 > etree_parse 2 > call_method_unknown 2 > call_simple 1 > json_load 1 > call_method 1 > > Python3.6 results > Python source: hg clone https://hg.python.org/cpython cpython > hg id: 96d016f78726 tip > hg id -r 'ancestors(.) and tag()': 1a58b1227501 (3.5) v3.5.0rc1 > hg --debug id -i: 96d016f78726afbf66d396f084b291ea43792af1 > > > Benchmark Speedup(%) > fastunpickle 22.94 > fastpickle 21.67 > json_load 17.64 > simple_logging 17.49 > meteor_contest 16.67 > formatted_logging 15.33 > etree_process 14.61 > raytrace 13.57 > etree_generate 13.56 > chaos 12.09 > hexiom2 12 > nbody 11.88 > json_dump_v2 11.24 > richards 11.02 > nqueens 10.96 > fannkuch 10.79 > go 10.77 > float 10.26 > regex_compile 9.8 > silent_logging 9.63 > pidigits 9.58 > etree_iterparse 9.48 > 2to3 8.44 > regex_v8 8.09 > regex_effbot 7.88 > call_simple 7.63 > tornado_http 7.38 > etree_parse 4.92 > spectral_norm 4.72 > normal_startup 4.39 > telco 3.88 > startup_nosite 3.7 > call_method 3.63 > unpack_sequence 3.6 > call_method_slots 2.91 > call_method_unknown 2.59 > iterative_count 0.45 > threaded_count -2.79 > > > Thank you, > Alecsandru > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/brett%40python.org > -------------- next part -------------- An HTML attachment was scrubbed... 
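
A small, editor-added Python sketch (it is not part of the original mails)
that can be handy when comparing a PGO and a non-PGO build such as the ones
described above: the sysconfig module exposes the compiler and flags the
running interpreter was built with, so you can confirm from Python itself
which binary you are measuring. Exactly which variable ends up carrying the
profile-guided options depends on how the Makefile patch wires them in, so
treat this only as a starting point.

    import sysconfig

    # Show the compiler and the build-time flags recorded for this
    # interpreter; on a build made with the PGO patches discussed above,
    # the profile-related options may appear here.
    print(sysconfig.get_config_var('CC'))
    print(sysconfig.get_config_var('PY_CFLAGS'))
    print(sysconfig.get_config_var('CONFIG_ARGS'))
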
URL: From alecsandru.patrascu at intel.com Sat Aug 22 20:02:17 2015 From: alecsandru.patrascu at intel.com (Patrascu, Alecsandru) Date: Sat, 22 Aug 2015 18:02:17 +0000 Subject: [Python-Dev] Profile Guided Optimization active by-default In-Reply-To: References: <3CF256F4F774BD48A1691D131AA04319141C0795@IRSMSX102.ger.corp.intel.com> <3CF256F4F774BD48A1691D131AA04319141C0C4F@IRSMSX102.ger.corp.intel.com> Message-ID: <3CF256F4F774BD48A1691D131AA04319141C0CCB@IRSMSX102.ger.corp.intel.com> Thank you Stefan for also pointing out the importance of regrtest as a good training set for building Python. Indeed, Ubuntu delivers in their repos the Python2/3 binaries already optimized using PGO based on regrtest. Alecsandru -----Original Message----- From: Python-Dev [mailto:python-dev-bounces+alecsandru.patrascu=intel.com at python.org] On Behalf Of Stefan Behnel Sent: Saturday, August 22, 2015 8:25 PM To: python-dev at python.org Subject: Re: [Python-Dev] Profile Guided Optimization active by-default Guido van Rossum schrieb am 22.08.2015 um 18:55: > Regarding the training set, I agree that regrtest sounds to be better > than pybench. If we make this an opt-in change, we can experiment with > different training sets easily. (Also, I haven't seen the patch yet, > but I presume it's easy to use a different training set? It's just one command in one line, yes. > Experimentation should be encouraged.) A well chosen training set can have a notable impact on PGO compiled code in general, and switching from pybench to regrtests should make such a difference. However, since CPython's overall performance is mostly determined by the interpreter loop, general object operations (getattr!) and the basic builtin types, of which the regression test suite makes plenty of use, it is rather unlikely that other training sets would provide substantially better performance for Python code execution. Note also that Ubuntu has shipped PGO builds based on the regrtests for years, and they seemed to be quite happy with it. Stefan _______________________________________________ Python-Dev mailing list Python-Dev at python.org https://mail.python.org/mailman/listinfo/python-dev Unsubscribe: https://mail.python.org/mailman/options/python-dev/alecsandru.patrascu%40intel.com From alecsandru.patrascu at intel.com Sat Aug 22 20:10:07 2015 From: alecsandru.patrascu at intel.com (Patrascu, Alecsandru) Date: Sat, 22 Aug 2015 18:10:07 +0000 Subject: [Python-Dev] Profile Guided Optimization active by-default In-Reply-To: References: <3CF256F4F774BD48A1691D131AA04319141C0795@IRSMSX102.ger.corp.intel.com> Message-ID: <3CF256F4F774BD48A1691D131AA04319141C0CE6@IRSMSX102.ger.corp.intel.com> I'm sorry, I forgot to mention this, I already opened an issue and the patches are uploaded [1]. [1] http://bugs.python.org/issue24915 From: Brett Cannon [mailto:brett at python.org] Sent: Saturday, August 22, 2015 9:00 PM To: Patrascu, Alecsandru; python-dev at python.org Subject: Re: [Python-Dev] Profile Guided Optimization active by-default I just realized I didn't see anyone say it, but please upload the patches to bugs.Python.org for easier tracking and reviewing. On Sat, Aug 22, 2015, 08:01?Patrascu, Alecsandru wrote: Hi All, This is Alecsandru from Server Scripting Languages Optimization team at Intel Corporation. I would like to submit a request to turn-on Profile Guided Optimization or PGO as the default build option for Python (both 2.7 and 3.6), given its performance benefits on a wide variety of workloads and hardware.? 
For instance, as shown from attached sample performance results from the Grand Unified Python Benchmark, >20% speed up was observed.? In addition, we are seeing 2-9% performance boost from OpenStack/Swift where more than 60% of the codes are in Python 2.7. Our analysis indicates the performance gain was mainly due to reduction of icache misses and CPU front-end stalls. Attached is the Makefile patches that modify the all build target and adds a new one called "disable-profile-opt". We built and tested this patch for Python 2.7 and 3.6 on our Linux machines (CentOS 7/Ubuntu Server 14.04, Intel Xeon Haswell/Broadwell with 18/8 cores).? We use "regrtest" suite for training as it provides the best performance improvement.? Some of the test programs in the suite may fail which leads to build fail.? One solution is to disable the specific failed test using the "-x " flag (as shown in the patch) Steps to apply the patch: 1.? hg clone https://hg.python.org/cpython cpython 2.? cd cpython 3.? hg update 2.7 (needed for 2.7 only) 4.? Copy *.patch to the current directory 5.? patch < python2.7-pgo.patch (or patch < python3.6-pgo.patch) 6.? ./configure 7.? make To disable PGO 7b. make disable-profile-opt In the following, please find our sample performance results from latest XEON machine, XEON Broadwell EP. Hardware (HW):? ? ? Intel XEON (Broadwell) 8 Cores BIOS settings:? ? ? Intel Turbo Boost Technology: false ? ? ? ? ? ? ? ? ? ? Hyper-Threading: false Operating System:? ?Ubuntu 14.04.3 LTS trusty OS configuration:? ?CPU freq set at fixed: 2.6GHz by ? ? ? ? ? ? ? ? ? ? ? ? echo 2600000 > /sys/devices/system/cpu/cpu*/cpufreq/scaling_min_freq ? ? ? ? ? ? ? ? ? ? ? ? echo 2600000 > /sys/devices/system/cpu/cpu*/cpufreq/scaling_max_freq ? ? ? ? ? ? ? ? ? ? Address Space Layout Randomization (ASLR) disabled (to reduce run to run variation) by ? ? ? ? ? ? ? ? ? ? ? ? echo 0 > /proc/sys/kernel/randomize_va_space GCC version:? ? ? ? gcc version 4.8.4 (Ubuntu 4.8.4-2ubuntu1~14.04) Benchmark:? ? ? ? ? Grand Unified Python Benchmark (GUPB) ? ? ? ? ? ? ? ? ? ? GUPB Source: https://hg.python.org/benchmarks/ Python2.7 results: ? ? Python source: hg clone https://hg.python.org/cpython cpython ? ? Python Source: hg update 2.7 ? ? hg id: 0511b1165bb6 (2.7) ? ? hg id -r 'ancestors(.) and tag()': 15c95b7d81dc (2.7) v2.7.10 ? ? hg --debug id -i: 0511b1165bb6cf40ada0768a7efc7ba89316f6a5 ? ? ? ? Benchmarks? ? ? ? ? Speedup(%) ? ? ? ? simple_logging? ? ? 20 ? ? ? ? raytrace? ? ? ? ? ? 20 ? ? ? ? silent_logging? ? ? 19 ? ? ? ? richards? ? ? ? ? ? 19 ? ? ? ? chaos? ? ? ? ? ? ? ?16 ? ? ? ? formatted_logging? ?16 ? ? ? ? json_dump? ? ? ? ? ?15 ? ? ? ? hexiom2? ? ? ? ? ? ?13 ? ? ? ? pidigits? ? ? ? ? ? 12 ? ? ? ? slowunpickle? ? ? ? 12 ? ? ? ? django_v2? ? ? ? ? ?12 ? ? ? ? unpack_sequence? ? ?11 ? ? ? ? float? ? ? ? ? ? ? ?11 ? ? ? ? mako? ? ? ? ? ? ? ? 11 ? ? ? ? slowpickle? ? ? ? ? 11 ? ? ? ? fastpickle? ? ? ? ? 11 ? ? ? ? django? ? ? ? ? ? ? 11 ? ? ? ? go? ? ? ? ? ? ? ? ? 10 ? ? ? ? json_dump_v2? ? ? ? 10 ? ? ? ? pathlib? ? ? ? ? ? ?10 ? ? ? ? regex_compile? ? ? ?10 ? ? ? ? pybench? ? ? ? ? ? ?9.9 ? ? ? ? etree_process? ? ? ?9 ? ? ? ? regex_v8? ? ? ? ? ? 8 ? ? ? ? bzr_startup? ? ? ? ?8 ? ? ? ? 2to3? ? ? ? ? ? ? ? 8 ? ? ? ? slowspitfire? ? ? ? 8 ? ? ? ? telco? ? ? ? ? ? ? ?8 ? ? ? ? pickle_list? ? ? ? ?8 ? ? ? ? fannkuch? ? ? ? ? ? 8 ? ? ? ? etree_iterparse? ? ?8 ? ? ? ? nqueens? ? ? ? ? ? ?8 ? ? ? ? mako_v2? ? ? ? ? ? ?8 ? ? ? ? etree_generate? ? ? 8 ? ? ? ? call_method_slots? ?7 ? ? ? ? html5lib_warmup? ? ?7 ? ? ? ? 
html5lib? ? ? ? ? ? 7 ? ? ? ? nbody? ? ? ? ? ? ? ?7 ? ? ? ? spectral_norm? ? ? ?7 ? ? ? ? spambayes? ? ? ? ? ?7 ? ? ? ? fastunpickle? ? ? ? 6 ? ? ? ? meteor_contest? ? ? 6 ? ? ? ? chameleon? ? ? ? ? ?6 ? ? ? ? rietveld? ? ? ? ? ? 6 ? ? ? ? tornado_http? ? ? ? 5 ? ? ? ? unpickle_list? ? ? ?5 ? ? ? ? pickle_dict? ? ? ? ?4 ? ? ? ? regex_effbot? ? ? ? 3 ? ? ? ? normal_startup? ? ? 3 ? ? ? ? startup_nosite? ? ? 3 ? ? ? ? etree_parse? ? ? ? ?2 ? ? ? ? call_method_unknown 2 ? ? ? ? call_simple? ? ? ? ?1 ? ? ? ? json_load? ? ? ? ? ?1 ? ? ? ? call_method? ? ? ? ?1 Python3.6 results ? ? Python source: hg clone https://hg.python.org/cpython cpython ? ? hg id: 96d016f78726 tip ? ? hg id -r 'ancestors(.) and tag()': 1a58b1227501 (3.5) v3.5.0rc1 ? ? hg --debug id -i: 96d016f78726afbf66d396f084b291ea43792af1 ? ? ? ? Benchmark? ? ? ? ? ?Speedup(%) ? ? ? ? fastunpickle? ? ? ? 22.94 ? ? ? ? fastpickle? ? ? ? ? 21.67 ? ? ? ? json_load? ? ? ? ? ?17.64 ? ? ? ? simple_logging? ? ? 17.49 ? ? ? ? meteor_contest? ? ? 16.67 ? ? ? ? formatted_logging? ?15.33 ? ? ? ? etree_process? ? ? ?14.61 ? ? ? ? raytrace? ? ? ? ? ? 13.57 ? ? ? ? etree_generate? ? ? 13.56 ? ? ? ? chaos? ? ? ? ? ? ? ?12.09 ? ? ? ? hexiom2? ? ? ? ? ? ?12 ? ? ? ? nbody? ? ? ? ? ? ? ?11.88 ? ? ? ? json_dump_v2? ? ? ? 11.24 ? ? ? ? richards? ? ? ? ? ? 11.02 ? ? ? ? nqueens? ? ? ? ? ? ?10.96 ? ? ? ? fannkuch? ? ? ? ? ? 10.79 ? ? ? ? go? ? ? ? ? ? ? ? ? 10.77 ? ? ? ? float? ? ? ? ? ? ? ?10.26 ? ? ? ? regex_compile? ? ? ?9.8 ? ? ? ? silent_logging? ? ? 9.63 ? ? ? ? pidigits? ? ? ? ? ? 9.58 ? ? ? ? etree_iterparse? ? ?9.48 ? ? ? ? 2to3? ? ? ? ? ? ? ? 8.44 ? ? ? ? regex_v8? ? ? ? ? ? 8.09 ? ? ? ? regex_effbot? ? ? ? 7.88 ? ? ? ? call_simple? ? ? ? ?7.63 ? ? ? ? tornado_http? ? ? ? 7.38 ? ? ? ? etree_parse? ? ? ? ?4.92 ? ? ? ? spectral_norm? ? ? ?4.72 ? ? ? ? normal_startup? ? ? 4.39 ? ? ? ? telco? ? ? ? ? ? ? ?3.88 ? ? ? ? startup_nosite? ? ? 3.7 ? ? ? ? call_method? ? ? ? ?3.63 ? ? ? ? unpack_sequence? ? ?3.6 ? ? ? ? call_method_slots? ?2.91 ? ? ? ? call_method_unknown 2.59 ? ? ? ? iterative_count? ? ?0.45 ? ? ? ? threaded_count? ? ? -2.79 Thank you, Alecsandru _______________________________________________ Python-Dev mailing list Python-Dev at python.org https://mail.python.org/mailman/listinfo/python-dev Unsubscribe: https://mail.python.org/mailman/options/python-dev/brett%40python.org From brett at python.org Sun Aug 23 03:47:14 2015 From: brett at python.org (Brett Cannon) Date: Sun, 23 Aug 2015 01:47:14 +0000 Subject: [Python-Dev] Profile Guided Optimization active by-default In-Reply-To: <3CF256F4F774BD48A1691D131AA04319141C0CE6@IRSMSX102.ger.corp.intel.com> References: <3CF256F4F774BD48A1691D131AA04319141C0795@IRSMSX102.ger.corp.intel.com> <3CF256F4F774BD48A1691D131AA04319141C0CE6@IRSMSX102.ger.corp.intel.com> Message-ID: On Sat, 22 Aug 2015 at 11:10 Patrascu, Alecsandru < alecsandru.patrascu at intel.com> wrote: > I'm sorry, I forgot to mention this, I already opened an issue and the > patches are uploaded [1]. > > [1] http://bugs.python.org/issue24915 Great, thanks Alecandru. Do please follow Stefan's comment, though, and upload the patch files directly and not as a zip file. That way we can use our code review tool to do a proper review of the patches. 
-Brett > > > From: Brett Cannon [mailto:brett at python.org] > Sent: Saturday, August 22, 2015 9:00 PM > To: Patrascu, Alecsandru; python-dev at python.org > Subject: Re: [Python-Dev] Profile Guided Optimization active by-default > > I just realized I didn't see anyone say it, but please upload the patches > to bugs.Python.org for easier tracking and reviewing. > > On Sat, Aug 22, 2015, 08:01 Patrascu, Alecsandru < > alecsandru.patrascu at intel.com> wrote: > Hi All, > > This is Alecsandru from Server Scripting Languages Optimization team at > Intel Corporation. > > I would like to submit a request to turn-on Profile Guided Optimization or > PGO as the default build option for Python (both 2.7 and 3.6), given its > performance benefits on a wide variety of workloads and hardware. For > instance, as shown from attached sample performance results from the Grand > Unified Python Benchmark, >20% speed up was observed. In addition, we are > seeing 2-9% performance boost from OpenStack/Swift where more than 60% of > the codes are in Python 2.7. Our analysis indicates the performance gain > was mainly due to reduction of icache misses and CPU front-end stalls. > > Attached is the Makefile patches that modify the all build target and adds > a new one called "disable-profile-opt". We built and tested this patch for > Python 2.7 and 3.6 on our Linux machines (CentOS 7/Ubuntu Server 14.04, > Intel Xeon Haswell/Broadwell with 18/8 cores). We use "regrtest" suite for > training as it provides the best performance improvement. Some of the test > programs in the suite may fail which leads to build fail. One solution is > to disable the specific failed test using the "-x " flag (as shown in the > patch) > > Steps to apply the patch: > 1. hg clone https://hg.python.org/cpython cpython > 2. cd cpython > 3. hg update 2.7 (needed for 2.7 only) > 4. Copy *.patch to the current directory > 5. patch < python2.7-pgo.patch (or patch < python3.6-pgo.patch) > 6. ./configure > 7. make > > To disable PGO > 7b. make disable-profile-opt > > In the following, please find our sample performance results from latest > XEON machine, XEON Broadwell EP. > Hardware (HW): Intel XEON (Broadwell) 8 Cores > > BIOS settings: Intel Turbo Boost Technology: false > Hyper-Threading: false > > Operating System: Ubuntu 14.04.3 LTS trusty > > OS configuration: CPU freq set at fixed: 2.6GHz by > echo 2600000 > > /sys/devices/system/cpu/cpu*/cpufreq/scaling_min_freq > echo 2600000 > > /sys/devices/system/cpu/cpu*/cpufreq/scaling_max_freq > Address Space Layout Randomization (ASLR) disabled (to > reduce run to run variation) by > echo 0 > /proc/sys/kernel/randomize_va_space > > GCC version: gcc version 4.8.4 (Ubuntu 4.8.4-2ubuntu1~14.04) > > Benchmark: Grand Unified Python Benchmark (GUPB) > GUPB Source: https://hg.python.org/benchmarks/ > > Python2.7 results: > Python source: hg clone https://hg.python.org/cpython cpython > Python Source: hg update 2.7 > hg id: 0511b1165bb6 (2.7) > hg id -r 'ancestors(.) 
and tag()': 15c95b7d81dc (2.7) v2.7.10 > hg --debug id -i: 0511b1165bb6cf40ada0768a7efc7ba89316f6a5 > > Benchmarks Speedup(%) > simple_logging 20 > raytrace 20 > silent_logging 19 > richards 19 > chaos 16 > formatted_logging 16 > json_dump 15 > hexiom2 13 > pidigits 12 > slowunpickle 12 > django_v2 12 > unpack_sequence 11 > float 11 > mako 11 > slowpickle 11 > fastpickle 11 > django 11 > go 10 > json_dump_v2 10 > pathlib 10 > regex_compile 10 > pybench 9.9 > etree_process 9 > regex_v8 8 > bzr_startup 8 > 2to3 8 > slowspitfire 8 > telco 8 > pickle_list 8 > fannkuch 8 > etree_iterparse 8 > nqueens 8 > mako_v2 8 > etree_generate 8 > call_method_slots 7 > html5lib_warmup 7 > html5lib 7 > nbody 7 > spectral_norm 7 > spambayes 7 > fastunpickle 6 > meteor_contest 6 > chameleon 6 > rietveld 6 > tornado_http 5 > unpickle_list 5 > pickle_dict 4 > regex_effbot 3 > normal_startup 3 > startup_nosite 3 > etree_parse 2 > call_method_unknown 2 > call_simple 1 > json_load 1 > call_method 1 > > Python3.6 results > Python source: hg clone https://hg.python.org/cpython cpython > hg id: 96d016f78726 tip > hg id -r 'ancestors(.) and tag()': 1a58b1227501 (3.5) v3.5.0rc1 > hg --debug id -i: 96d016f78726afbf66d396f084b291ea43792af1 > > > Benchmark Speedup(%) > fastunpickle 22.94 > fastpickle 21.67 > json_load 17.64 > simple_logging 17.49 > meteor_contest 16.67 > formatted_logging 15.33 > etree_process 14.61 > raytrace 13.57 > etree_generate 13.56 > chaos 12.09 > hexiom2 12 > nbody 11.88 > json_dump_v2 11.24 > richards 11.02 > nqueens 10.96 > fannkuch 10.79 > go 10.77 > float 10.26 > regex_compile 9.8 > silent_logging 9.63 > pidigits 9.58 > etree_iterparse 9.48 > 2to3 8.44 > regex_v8 8.09 > regex_effbot 7.88 > call_simple 7.63 > tornado_http 7.38 > etree_parse 4.92 > spectral_norm 4.72 > normal_startup 4.39 > telco 3.88 > startup_nosite 3.7 > call_method 3.63 > unpack_sequence 3.6 > call_method_slots 2.91 > call_method_unknown 2.59 > iterative_count 0.45 > threaded_count -2.79 > > > Thank you, > Alecsandru > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/brett%40python.org > -------------- next part -------------- An HTML attachment was scrubbed... URL: From alecsandru.patrascu at intel.com Sun Aug 23 08:59:25 2015 From: alecsandru.patrascu at intel.com (Patrascu, Alecsandru) Date: Sun, 23 Aug 2015 06:59:25 +0000 Subject: [Python-Dev] Profile Guided Optimization active by-default In-Reply-To: References: <3CF256F4F774BD48A1691D131AA04319141C0795@IRSMSX102.ger.corp.intel.com> <3CF256F4F774BD48A1691D131AA04319141C0CE6@IRSMSX102.ger.corp.intel.com> Message-ID: <3CF256F4F774BD48A1691D131AA04319141C0FC4@IRSMSX102.ger.corp.intel.com> I removed the zip file and uploaded the patches individually. Alecsandru From: Brett Cannon [mailto:brett at python.org] Sent: Sunday, August 23, 2015 4:47 AM To: Patrascu, Alecsandru; python-dev at python.org Subject: Re: [Python-Dev] Profile Guided Optimization active by-default On Sat, 22 Aug 2015 at 11:10 Patrascu, Alecsandru wrote: I'm sorry, I forgot to mention this, I already opened an issue and the patches are uploaded [1]. [1] http://bugs.python.org/issue24915 Great, thanks Alecandru. Do please follow Stefan's comment, though, and upload the patch files directly and not as a zip file. That way we can use our code review tool to do a proper review of the patches. -Brett ? 
From: Brett Cannon [mailto:brett at python.org] Sent: Saturday, August 22, 2015 9:00 PM To: Patrascu, Alecsandru; python-dev at python.org Subject: Re: [Python-Dev] Profile Guided Optimization active by-default I just realized I didn't see anyone say it, but please upload the patches to bugs.Python.org for easier tracking and reviewing. On Sat, Aug 22, 2015, 08:01?Patrascu, Alecsandru wrote: Hi All, This is Alecsandru from Server Scripting Languages Optimization team at Intel Corporation. I would like to submit a request to turn-on Profile Guided Optimization or PGO as the default build option for Python (both 2.7 and 3.6), given its performance benefits on a wide variety of workloads and hardware.? For instance, as shown from attached sample performance results from the Grand Unified Python Benchmark, >20% speed up was observed.? In addition, we are seeing 2-9% performance boost from OpenStack/Swift where more than 60% of the codes are in Python 2.7. Our analysis indicates the performance gain was mainly due to reduction of icache misses and CPU front-end stalls. Attached is the Makefile patches that modify the all build target and adds a new one called "disable-profile-opt". We built and tested this patch for Python 2.7 and 3.6 on our Linux machines (CentOS 7/Ubuntu Server 14.04, Intel Xeon Haswell/Broadwell with 18/8 cores).? We use "regrtest" suite for training as it provides the best performance improvement.? Some of the test programs in the suite may fail which leads to build fail.? One solution is to disable the specific failed test using the "-x " flag (as shown in the patch) Steps to apply the patch: 1.? hg clone https://hg.python.org/cpython cpython 2.? cd cpython 3.? hg update 2.7 (needed for 2.7 only) 4.? Copy *.patch to the current directory 5.? patch < python2.7-pgo.patch (or patch < python3.6-pgo.patch) 6.? ./configure 7.? make To disable PGO 7b. make disable-profile-opt In the following, please find our sample performance results from latest XEON machine, XEON Broadwell EP. Hardware (HW):? ? ? Intel XEON (Broadwell) 8 Cores BIOS settings:? ? ? Intel Turbo Boost Technology: false ? ? ? ? ? ? ? ? ? ? Hyper-Threading: false Operating System:? ?Ubuntu 14.04.3 LTS trusty OS configuration:? ?CPU freq set at fixed: 2.6GHz by ? ? ? ? ? ? ? ? ? ? ? ? echo 2600000 > /sys/devices/system/cpu/cpu*/cpufreq/scaling_min_freq ? ? ? ? ? ? ? ? ? ? ? ? echo 2600000 > /sys/devices/system/cpu/cpu*/cpufreq/scaling_max_freq ? ? ? ? ? ? ? ? ? ? Address Space Layout Randomization (ASLR) disabled (to reduce run to run variation) by ? ? ? ? ? ? ? ? ? ? ? ? echo 0 > /proc/sys/kernel/randomize_va_space GCC version:? ? ? ? gcc version 4.8.4 (Ubuntu 4.8.4-2ubuntu1~14.04) Benchmark:? ? ? ? ? Grand Unified Python Benchmark (GUPB) ? ? ? ? ? ? ? ? ? ? GUPB Source: https://hg.python.org/benchmarks/ Python2.7 results: ? ? Python source: hg clone https://hg.python.org/cpython cpython ? ? Python Source: hg update 2.7 ? ? hg id: 0511b1165bb6 (2.7) ? ? hg id -r 'ancestors(.) and tag()': 15c95b7d81dc (2.7) v2.7.10 ? ? hg --debug id -i: 0511b1165bb6cf40ada0768a7efc7ba89316f6a5 ? ? ? ? Benchmarks? ? ? ? ? Speedup(%) ? ? ? ? simple_logging? ? ? 20 ? ? ? ? raytrace? ? ? ? ? ? 20 ? ? ? ? silent_logging? ? ? 19 ? ? ? ? richards? ? ? ? ? ? 19 ? ? ? ? chaos? ? ? ? ? ? ? ?16 ? ? ? ? formatted_logging? ?16 ? ? ? ? json_dump? ? ? ? ? ?15 ? ? ? ? hexiom2? ? ? ? ? ? ?13 ? ? ? ? pidigits? ? ? ? ? ? 12 ? ? ? ? slowunpickle? ? ? ? 12 ? ? ? ? django_v2? ? ? ? ? ?12 ? ? ? ? unpack_sequence? ? ?11 ? ? ? ? float? ? ? ? ? ? ? ?11 ? ? ? 
? mako? ? ? ? ? ? ? ? 11 ? ? ? ? slowpickle? ? ? ? ? 11 ? ? ? ? fastpickle? ? ? ? ? 11 ? ? ? ? django? ? ? ? ? ? ? 11 ? ? ? ? go? ? ? ? ? ? ? ? ? 10 ? ? ? ? json_dump_v2? ? ? ? 10 ? ? ? ? pathlib? ? ? ? ? ? ?10 ? ? ? ? regex_compile? ? ? ?10 ? ? ? ? pybench? ? ? ? ? ? ?9.9 ? ? ? ? etree_process? ? ? ?9 ? ? ? ? regex_v8? ? ? ? ? ? 8 ? ? ? ? bzr_startup? ? ? ? ?8 ? ? ? ? 2to3? ? ? ? ? ? ? ? 8 ? ? ? ? slowspitfire? ? ? ? 8 ? ? ? ? telco? ? ? ? ? ? ? ?8 ? ? ? ? pickle_list? ? ? ? ?8 ? ? ? ? fannkuch? ? ? ? ? ? 8 ? ? ? ? etree_iterparse? ? ?8 ? ? ? ? nqueens? ? ? ? ? ? ?8 ? ? ? ? mako_v2? ? ? ? ? ? ?8 ? ? ? ? etree_generate? ? ? 8 ? ? ? ? call_method_slots? ?7 ? ? ? ? html5lib_warmup? ? ?7 ? ? ? ? html5lib? ? ? ? ? ? 7 ? ? ? ? nbody? ? ? ? ? ? ? ?7 ? ? ? ? spectral_norm? ? ? ?7 ? ? ? ? spambayes? ? ? ? ? ?7 ? ? ? ? fastunpickle? ? ? ? 6 ? ? ? ? meteor_contest? ? ? 6 ? ? ? ? chameleon? ? ? ? ? ?6 ? ? ? ? rietveld? ? ? ? ? ? 6 ? ? ? ? tornado_http? ? ? ? 5 ? ? ? ? unpickle_list? ? ? ?5 ? ? ? ? pickle_dict? ? ? ? ?4 ? ? ? ? regex_effbot? ? ? ? 3 ? ? ? ? normal_startup? ? ? 3 ? ? ? ? startup_nosite? ? ? 3 ? ? ? ? etree_parse? ? ? ? ?2 ? ? ? ? call_method_unknown 2 ? ? ? ? call_simple? ? ? ? ?1 ? ? ? ? json_load? ? ? ? ? ?1 ? ? ? ? call_method? ? ? ? ?1 Python3.6 results ? ? Python source: hg clone https://hg.python.org/cpython cpython ? ? hg id: 96d016f78726 tip ? ? hg id -r 'ancestors(.) and tag()': 1a58b1227501 (3.5) v3.5.0rc1 ? ? hg --debug id -i: 96d016f78726afbf66d396f084b291ea43792af1 ? ? ? ? Benchmark? ? ? ? ? ?Speedup(%) ? ? ? ? fastunpickle? ? ? ? 22.94 ? ? ? ? fastpickle? ? ? ? ? 21.67 ? ? ? ? json_load? ? ? ? ? ?17.64 ? ? ? ? simple_logging? ? ? 17.49 ? ? ? ? meteor_contest? ? ? 16.67 ? ? ? ? formatted_logging? ?15.33 ? ? ? ? etree_process? ? ? ?14.61 ? ? ? ? raytrace? ? ? ? ? ? 13.57 ? ? ? ? etree_generate? ? ? 13.56 ? ? ? ? chaos? ? ? ? ? ? ? ?12.09 ? ? ? ? hexiom2? ? ? ? ? ? ?12 ? ? ? ? nbody? ? ? ? ? ? ? ?11.88 ? ? ? ? json_dump_v2? ? ? ? 11.24 ? ? ? ? richards? ? ? ? ? ? 11.02 ? ? ? ? nqueens? ? ? ? ? ? ?10.96 ? ? ? ? fannkuch? ? ? ? ? ? 10.79 ? ? ? ? go? ? ? ? ? ? ? ? ? 10.77 ? ? ? ? float? ? ? ? ? ? ? ?10.26 ? ? ? ? regex_compile? ? ? ?9.8 ? ? ? ? silent_logging? ? ? 9.63 ? ? ? ? pidigits? ? ? ? ? ? 9.58 ? ? ? ? etree_iterparse? ? ?9.48 ? ? ? ? 2to3? ? ? ? ? ? ? ? 8.44 ? ? ? ? regex_v8? ? ? ? ? ? 8.09 ? ? ? ? regex_effbot? ? ? ? 7.88 ? ? ? ? call_simple? ? ? ? ?7.63 ? ? ? ? tornado_http? ? ? ? 7.38 ? ? ? ? etree_parse? ? ? ? ?4.92 ? ? ? ? spectral_norm? ? ? ?4.72 ? ? ? ? normal_startup? ? ? 4.39 ? ? ? ? telco? ? ? ? ? ? ? ?3.88 ? ? ? ? startup_nosite? ? ? 3.7 ? ? ? ? call_method? ? ? ? ?3.63 ? ? ? ? unpack_sequence? ? ?3.6 ? ? ? ? call_method_slots? ?2.91 ? ? ? ? call_method_unknown 2.59 ? ? ? ? iterative_count? ? ?0.45 ? ? ? ? threaded_count? ? ? -2.79 Thank you, Alecsandru _______________________________________________ Python-Dev mailing list Python-Dev at python.org https://mail.python.org/mailman/listinfo/python-dev Unsubscribe: https://mail.python.org/mailman/options/python-dev/brett%40python.org From arigo at tunes.org Sun Aug 23 14:14:54 2015 From: arigo at tunes.org (Armin Rigo) Date: Sun, 23 Aug 2015 14:14:54 +0200 Subject: [Python-Dev] tp_finalize vs tp_del sematics In-Reply-To: <55D4360B.6010400@gmail.com> References: <55D4360B.6010400@gmail.com> Message-ID: Hi Valentine, On 19 August 2015 at 09:53, Valentine Sinitsyn wrote: > why it wasn't possible to > implement proposed CI disposal scheme on top of tp_del? 
I'm replying here as best as I understand the situation, which might be incomplete or wrong. >From the point of view of someone writing a C extension module, both tp_del and tp_finalize are called with the same guarantee that the object is still valid at that point. The difference is only that the presence of tp_del prevents the object from being collected at all if it is part of a cycle. Maybe the same could have been done without duplicating the function pointer (tp_del + tp_finalize) with a Py_TPFLAGS_DEL_EVEN_IN_A_CYCLE. A bient?t, Armin. From valentine.sinitsyn at gmail.com Mon Aug 24 20:43:25 2015 From: valentine.sinitsyn at gmail.com (Valentine Sinitsyn) Date: Mon, 24 Aug 2015 23:43:25 +0500 Subject: [Python-Dev] tp_finalize vs tp_del sematics In-Reply-To: References: <55D4360B.6010400@gmail.com> Message-ID: <55DB65CD.9060505@gmail.com> Hi Armin, Thanks for replying. On 23.08.2015 17:14, Armin Rigo wrote: > Hi Valentine, > > On 19 August 2015 at 09:53, Valentine Sinitsyn > wrote: >> why it wasn't possible to >> implement proposed CI disposal scheme on top of tp_del? > > I'm replying here as best as I understand the situation, which might > be incomplete or wrong. > > From the point of view of someone writing a C extension module, both > tp_del and tp_finalize are called with the same guarantee that the > object is still valid at that point. The difference is only that the > presence of tp_del prevents the object from being collected at all if > it is part of a cycle. Maybe the same could have been done without > duplicating the function pointer (tp_del + tp_finalize) with a > Py_TPFLAGS_DEL_EVEN_IN_A_CYCLE. So you mean that this was to keep things backwards compatible for third-party extensions? I haven't thought about it this way, but this makes sense. However, the behavior of Python code using objects with __del__ has changed nevertheless: they are collectible now, and __del__ is always called exactly once, if I understand everything correctly. Thanks, Valentine From greg at krypto.org Mon Aug 24 21:52:37 2015 From: greg at krypto.org (Gregory P. Smith) Date: Mon, 24 Aug 2015 19:52:37 +0000 Subject: [Python-Dev] Profile Guided Optimization active by-default In-Reply-To: References: <3CF256F4F774BD48A1691D131AA04319141C0795@IRSMSX102.ger.corp.intel.com> Message-ID: On Sat, Aug 22, 2015 at 9:27 AM Brett Cannon wrote: > On Sat, Aug 22, 2015, 09:17 Guido van Rossum wrote: > > How about we first add a new Makefile target that enables PGO, without > turning it on by default? Then later we can enable it by default. > > There already is one and has been for many years. make profile-opt. I even setup a buildbot for it last year. The problem with the existing profile-opt build in our default Makefile.in is that is uses a horrible profiling workload (pybench, ugh) so it leaves a lot of improvements behind. What all Linux distros (Debian/Ubuntu and Redhat at least; nothing else matters) do for their Python builds is to use profile-opt but they replace the profiling workload with a stable set of the Python unittest suite itself. Results are much better all around. Generally a 20% speedup. Anyone deploying Python who is *not* using a profile-opt build is wasting CPU resources. Whether it should be *the default* or not *is a different question*. The Makefile is optimized for CPython developers who certainly do not want to run two separate builds and a profile-opt workload every time they type make to test out their changes. But all binary release builds should use it. I agree. 
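
Returning to the tp_del/tp_finalize exchange above for a moment: as a
minimal, editor-added sketch (assuming CPython 3.4 or newer, and not taken
from the original messages), the post-PEP-442 behaviour being described
looks like this -- an instance whose class defines __del__ is now collected
even when it sits in a reference cycle, and its finalizer runs once instead
of the object being parked in gc.garbage:

    import gc

    class Node:
        def __del__(self):
            print("__del__ called")   # runs exactly once on Python 3.4+

    n = Node()
    n.ref = n          # create a self-referencing cycle
    del n              # drop the only external reference

    gc.collect()       # the cycle is collected and the finalizer runs
    print(gc.garbage)  # expected to be [] on Python 3.4+; before PEP 442
                       # the uncollectable object would have ended up here
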
Updating the Makefile so it's easier to use PGO is great, but we > should do a release with it as opt-in and go from there. > > Also, I have my doubts about regrtest. How sure are we that it represents > a typical Python load? Tests are often using a different mix of operations > than production code. > > That was also my question. You said that "it provides the best performance > improvement", but compared to what; what else was tried? And what > difference does it make to e.g. a Django app that is trained on their own > simulated workload compared to using regrtest? IOW is regrtest displaying > the best across-the-board performance because it stresses the largest swath > of Python and thus catches generic patterns in the code but individuals > could get better performance with a simulated workload? > This isn't something to argue about. Just use regrtest and compare the before and after with the benchmark suite. It really does exercise things well. People like to fear that it'll produce code optimized for the test suite itself or something. No. Python as an interpreter is very realistically exercised by running it as it is simply running a lot of code and a good variety of code including the extension modules that benefit most such as regexes, pickle, json, xml, etc. Thomas tried the test suite and a variety of other workloads when looking at what to use at work. The testsuite works out generally the best. Going beyond that seems to be a wash. What we tested and decided to use on our own builds after benchmarking at work was to build with: make profile-opt PROFILE_TASK="-m test.regrtest -w -uall,-audio -x test_gdb test_multiprocessing" In general if a test is unreliable or takes an extremely long time, exclude it for your sanity. (i'd also kick out test_subprocess on 2.7; we replaced subprocess with subprocess32 in our build so that wasn't an issue) -gps -------------- next part -------------- An HTML attachment was scrubbed... URL: From larry at hastings.org Mon Aug 24 22:05:38 2015 From: larry at hastings.org (Larry Hastings) Date: Mon, 24 Aug 2015 13:05:38 -0700 Subject: [Python-Dev] How are we merging forward from the Bitbucket 3.5 repo? In-Reply-To: <20150816152433.15812B14095@webabinitio.net> References: <55D03806.1020808@hastings.org> <20150816152433.15812B14095@webabinitio.net> Message-ID: <55DB7912.8030905@hastings.org> On 08/16/2015 08:24 AM, R. David Murray wrote: > On Sun, 16 Aug 2015 00:13:10 -0700, Larry Hastings wrote: >> Can we pick one approach and stick with it? Pretty-please? > Pick one Larry, you are the RM :) Okay. Unsurprisingly, I pick what I called option 3 before. It's basically what we do now when checking in work to earlier-version-branches, with the added complexity of the Bitbucket repo. I just tried it and it seems fine. > Can you give us a step by > step like you did for creating the pull request? Including how it > relates to the workflow for the other branches? Also, on 08/17/2015 08:03 AM, Barry Warsaw wrote: > I agree with the "You're the RM, pick one" sentiment, but just want to add a > plea for *documenting* whatever you choose, preferably under a big red blinky > banner in the devguide. ;) I can be a good monkey and follow directions, but > I just don't want to have to dig through long threads on semi-public mailing > lists to figure out which buttons to push. I'll post a message describing the workflow to these two newsgroups (hopefully by today) and update the devguide (hopefully by tomorrow). 
There's no rush as I haven't accepted any pull requests recently, though I have a couple I should attend to. (For those waiting on a reply on pull requests, sit tight, I want to get these workflow docs done first, that way you'll know what to do if/when your pull request is accepted.) Thanks, everybody, //arry// /p.s. In case you're wondering, this RC period is way, way less stress than 3.4 was. Part of that is the workflow change, and part of it is that there just isn't that much people are trying to get in this time. In 3.4 I think I had 70 merge requests just from Victor for asyncio...! -------------- next part -------------- An HTML attachment was scrubbed... URL: From doko at ubuntu.com Mon Aug 24 22:36:21 2015 From: doko at ubuntu.com (Matthias Klose) Date: Mon, 24 Aug 2015 22:36:21 +0200 Subject: [Python-Dev] Profile Guided Optimization active by-default In-Reply-To: References: <3CF256F4F774BD48A1691D131AA04319141C0795@IRSMSX102.ger.corp.intel.com> Message-ID: <55DB8045.5020906@ubuntu.com> The current pgo target just uses a very specific task to train for the feedback. For my Debian/Ubuntu builds I'm using the testsuite minus some problematic tests to train. Otoh I don't know if this is the best way to do it, however it gave better results at some time in the past. What I would like is a benchmark / a mixture of benchmarks on which to enable pgo/pdo. Based on that you could enable pgo based on some static decisions based on autofdo. For that you don't need any profile runs during your build; it just needs shipping the autofdo outcome together with a Python release. This doesn't give you the same performance as for for a GCC pgo build, but it would be a first step. And defining the probe for any pgo build would be welcome too. Matthias On 08/22/2015 06:25 PM, Brett Cannon wrote: > On Sat, Aug 22, 2015, 09:17 Guido van Rossum wrote: > > How about we first add a new Makefile target that enables PGO, without > turning it on by default? Then later we can enable it by default. > > > I agree. Updating the Makefile so it's easier to use PGO is great, but we > should do a release with it as opt-in and go from there. > > Also, I have my doubts about regrtest. How sure are we that it represents a > typical Python load? Tests are often using a different mix of operations > than production code. > > That was also my question. You said that "it provides the best performance > improvement", but compared to what; what else was tried? And what > difference does it make to e.g. a Django app that is trained on their own > simulated workload compared to using regrtest? IOW is regrtest displaying > the best across-the-board performance because it stresses the largest swath > of Python and thus catches generic patterns in the code but individuals > could get better performance with a simulated workload? > > -Brett > > > On Sat, Aug 22, 2015 at 7:46 AM, Patrascu, Alecsandru < > alecsandru.patrascu at intel.com> wrote: > > Hi All, > > This is Alecsandru from Server Scripting Languages Optimization team at > Intel Corporation. > > I would like to submit a request to turn-on Profile Guided Optimization or > PGO as the default build option for Python (both 2.7 and 3.6), given its > performance benefits on a wide variety of workloads and hardware. For > instance, as shown from attached sample performance results from the Grand > Unified Python Benchmark, >20% speed up was observed. In addition, we are > seeing 2-9% performance boost from OpenStack/Swift where more than 60% of > the codes are in Python 2.7. 
Our analysis indicates the performance gain > was mainly due to reduction of icache misses and CPU front-end stalls. > > Attached is the Makefile patches that modify the all build target and adds > a new one called "disable-profile-opt". We built and tested this patch for > Python 2.7 and 3.6 on our Linux machines (CentOS 7/Ubuntu Server 14.04, > Intel Xeon Haswell/Broadwell with 18/8 cores). We use "regrtest" suite for > training as it provides the best performance improvement. Some of the test > programs in the suite may fail which leads to build fail. One solution is > to disable the specific failed test using the "-x " flag (as shown in the > patch) > > Steps to apply the patch: > 1. hg clone https://hg.python.org/cpython cpython > 2. cd cpython > 3. hg update 2.7 (needed for 2.7 only) > 4. Copy *.patch to the current directory > 5. patch < python2.7-pgo.patch (or patch < python3.6-pgo.patch) > 6. ./configure > 7. make > > To disable PGO > 7b. make disable-profile-opt > > In the following, please find our sample performance results from latest > XEON machine, XEON Broadwell EP. > Hardware (HW): Intel XEON (Broadwell) 8 Cores > > BIOS settings: Intel Turbo Boost Technology: false > Hyper-Threading: false > > Operating System: Ubuntu 14.04.3 LTS trusty > > OS configuration: CPU freq set at fixed: 2.6GHz by > echo 2600000 > > /sys/devices/system/cpu/cpu*/cpufreq/scaling_min_freq > echo 2600000 > > /sys/devices/system/cpu/cpu*/cpufreq/scaling_max_freq > Address Space Layout Randomization (ASLR) disabled (to > reduce run to run variation) by > echo 0 > /proc/sys/kernel/randomize_va_space > > GCC version: gcc version 4.8.4 (Ubuntu 4.8.4-2ubuntu1~14.04) > > Benchmark: Grand Unified Python Benchmark (GUPB) > GUPB Source: https://hg.python.org/benchmarks/ > > Python2.7 results: > Python source: hg clone https://hg.python.org/cpython cpython > Python Source: hg update 2.7 > hg id: 0511b1165bb6 (2.7) > hg id -r 'ancestors(.) and tag()': 15c95b7d81dc (2.7) v2.7.10 > hg --debug id -i: 0511b1165bb6cf40ada0768a7efc7ba89316f6a5 > > Benchmarks Speedup(%) > simple_logging 20 > raytrace 20 > silent_logging 19 > richards 19 > chaos 16 > formatted_logging 16 > json_dump 15 > hexiom2 13 > pidigits 12 > slowunpickle 12 > django_v2 12 > unpack_sequence 11 > float 11 > mako 11 > slowpickle 11 > fastpickle 11 > django 11 > go 10 > json_dump_v2 10 > pathlib 10 > regex_compile 10 > pybench 9.9 > etree_process 9 > regex_v8 8 > bzr_startup 8 > 2to3 8 > slowspitfire 8 > telco 8 > pickle_list 8 > fannkuch 8 > etree_iterparse 8 > nqueens 8 > mako_v2 8 > etree_generate 8 > call_method_slots 7 > html5lib_warmup 7 > html5lib 7 > nbody 7 > spectral_norm 7 > spambayes 7 > fastunpickle 6 > meteor_contest 6 > chameleon 6 > rietveld 6 > tornado_http 5 > unpickle_list 5 > pickle_dict 4 > regex_effbot 3 > normal_startup 3 > startup_nosite 3 > etree_parse 2 > call_method_unknown 2 > call_simple 1 > json_load 1 > call_method 1 > > Python3.6 results > Python source: hg clone https://hg.python.org/cpython cpython > hg id: 96d016f78726 tip > hg id -r 'ancestors(.) 
and tag()': 1a58b1227501 (3.5) v3.5.0rc1 > hg --debug id -i: 96d016f78726afbf66d396f084b291ea43792af1 > > Benchmark Speedup(%) > fastunpickle 22.94 > fastpickle 21.67 > json_load 17.64 > simple_logging 17.49 > meteor_contest 16.67 > formatted_logging 15.33 > etree_process 14.61 > raytrace 13.57 > etree_generate 13.56 > chaos 12.09 > hexiom2 12 > nbody 11.88 > json_dump_v2 11.24 > richards 11.02 > nqueens 10.96 > fannkuch 10.79 > go 10.77 > float 10.26 > regex_compile 9.8 > silent_logging 9.63 > pidigits 9.58 > etree_iterparse 9.48 > 2to3 8.44 > regex_v8 8.09 > regex_effbot 7.88 > call_simple 7.63 > tornado_http 7.38 > etree_parse 4.92 > spectral_norm 4.72 > normal_startup 4.39 > telco 3.88 > startup_nosite 3.7 > call_method 3.63 > unpack_sequence 3.6 > call_method_slots 2.91 > call_method_unknown 2.59 > iterative_count 0.45 > threaded_count -2.79 > > Thank you, > Alecsandru > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > > > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/guido%40python.org > > > > > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/doko%40ubuntu.com > From david.c.stewart at intel.com Mon Aug 24 23:48:02 2015 From: david.c.stewart at intel.com (Stewart, David C) Date: Mon, 24 Aug 2015 21:48:02 +0000 Subject: [Python-Dev] Profile Guided Optimization active by-default Message-ID: (Sorry about the format here - I honestly just subscribed to Python-dev so be gentle ...) > Date: Sat, 22 Aug 2015 11:25:59 -0600 > From: Eric Snow >On Aug 22, 2015 9:02 AM, "Patrascu, Alecsandru" intel.com > >wrote:[snip]> For instance, as shown from attached sample performance >results from theGrand Unified Python Benchmark, >20% speed up was >observed. > > Eric ? I'm the manager of Intel's server scripting language optimization team, so I'll answer from that perspective. >Are you referring to the tests in the benchmarks repo? [1] How does the >real-world performance improvement compare with otherlanguages you are >targeting for optimization? Yes, we're using [1]. We're seeing up to 10% improvement on Swift (a project in OpenStack) on some architectures using the ssbench workload, which is as close to real-world as we can get. Relative to other languages we target, this is quite good actually. For example, Java's Hotspot JIT is driven by profiling at its core so it's hard to distinguish the value profiling alone brings. We have seen a nice boost on PHP running Wordpress using PGO, but not as impressive as Python and Swift. By the way, I think letting the compiler optimize the code is a good strategy. Not the only strategy we want to use, but it seems like one we could do more of. > And thanks for working on this! I have several more questions: What >sorts of future changes in CPython's code might interfere with >youroptimizations? > > We're also looking at other source-level optimizations, like the CGOTO patch Vamsi submitted in June. Some of these may reduce the value of PGO, but in general it's nice to let the compiler do some optimization for you. > What future additions might stand to benefit? > It's a good question. Our intent is to continue to evaluate and measure different training workloads for improvement. 
In other words, as with any good open source project, this patch should improve things a lot and should be accepted upstream, but we will continue to make it better. > What changes in existing code might improve optimization opportunities? > > We intend to continue to work on source-level optimizations and measuring them against GUPB and Swift. > What is the added maintenance burden of the optimizations on CPython, >ifany? > > I think the answer is none. Our goal was to introduce performance improvements without adding to maintenance effort. >What is the performance impact on non-Intel architectures? What >aboutolder Intel architectures? ...and future ones? > > We should modify the patch to make it for Intel only, since we're not evaluating non-Intel architectures. Unfortunately for us, I suspect that older Intel CPUs might benefit more than current and future ones. Future architectures will benefit from other enabling work we're planning. > What is Intel's commitment to supporting these (or other) optimizations >inthe future? How is the practical EOL of the optimizations managed? > > As with any corporation's budgeting process, it's hard to know exactly what my managers will let me spend money on. :-) But we're definitely convinced of the value of dynamic languages for servers and the need to work on optimization. As far as I have visibility, it appears to be holding true. > Finally, +1 on adding an opt-in Makefile target rather than enabling >theoptimizations by default. > > Frankly since Ubuntu has been running this way for past two years, I think it's fine to make it opt-in, but eventually I hope it can be the default once we're happy with it. > Thanks again! -eric From ncoghlan at gmail.com Tue Aug 25 08:19:20 2015 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 25 Aug 2015 16:19:20 +1000 Subject: [Python-Dev] Profile Guided Optimization active by-default In-Reply-To: References: <3CF256F4F774BD48A1691D131AA04319141C0795@IRSMSX102.ger.corp.intel.com> Message-ID: On 25 August 2015 at 05:52, Gregory P. Smith wrote: > What we tested and decided to use on our own builds after benchmarking at > work was to build with: > > make profile-opt PROFILE_TASK="-m test.regrtest -w -uall,-audio -x test_gdb > test_multiprocessing" > > In general if a test is unreliable or takes an extremely long time, exclude > it for your sanity. (i'd also kick out test_subprocess on 2.7; we replaced > subprocess with subprocess32 in our build so that wasn't an issue) Having the "production ready" make target be "make profile-opt" doesn't strike me as the most intuitive thing in the world. I agree we want the "./configure && make" sequence to be oriented towards local development builds rather than highly optimised production ones, so perhaps we could provide a "make production" target that enables PGO with an appropriate training set from regrtest, and also complains if "--with-pydebug" is configured? Regards, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From arigo at tunes.org Tue Aug 25 09:51:31 2015 From: arigo at tunes.org (Armin Rigo) Date: Tue, 25 Aug 2015 09:51:31 +0200 Subject: [Python-Dev] tp_finalize vs tp_del sematics In-Reply-To: <55DB65CD.9060505@gmail.com> References: <55D4360B.6010400@gmail.com> <55DB65CD.9060505@gmail.com> Message-ID: Hi Valentine, On 24 August 2015 at 20:43, Valentine Sinitsyn wrote: > So you mean that this was to keep things backwards compatible for > third-party extensions? I haven't thought about it this way, but this makes > sense. 
However, the behavior of Python code using objects with __del__ has > changed nevertheless: they are collectible now, and __del__ is always called > exactly once, if I understand everything correctly. Yes, I think so. There is a *highly obscure* corner case: __del__ will still be called several times if you declare your class with "__slots__=()". A bient?t, Armin. From arigo at tunes.org Tue Aug 25 10:00:24 2015 From: arigo at tunes.org (Armin Rigo) Date: Tue, 25 Aug 2015 10:00:24 +0200 Subject: [Python-Dev] tp_finalize vs tp_del sematics In-Reply-To: <55DC1FB6.1010700@gmail.com> References: <55D4360B.6010400@gmail.com> <55DB65CD.9060505@gmail.com> <55DC1FB6.1010700@gmail.com> Message-ID: Hi Valentine, On 25 August 2015 at 09:56, Valentine Sinitsyn wrote: >> Yes, I think so. There is a *highly obscure* corner case: __del__ >> will still be called several times if you declare your class with >> "__slots__=()". > > Even on "post-PEP-0442" Python 3.4+? Could you share a link please? class X(object): __slots__=() # <= try with and without this def __del__(self): global revive revive = self print("hi") X() revive = None revive = None revive = None --Armin From ericsnowcurrently at gmail.com Tue Aug 25 15:21:08 2015 From: ericsnowcurrently at gmail.com (Eric Snow) Date: Tue, 25 Aug 2015 07:21:08 -0600 Subject: [Python-Dev] Profile Guided Optimization active by-default In-Reply-To: References: Message-ID: On Aug 24, 2015 3:51 PM, "Stewart, David C" wrote: > > (Sorry about the format here - I honestly just subscribed to Python-dev so > be gentle ...) :) > > > Date: Sat, 22 Aug 2015 11:25:59 -0600 > > From: Eric Snow > > >On Aug 22, 2015 9:02 AM, "Patrascu, Alecsandru" >intel.com > > >wrote:[snip]> For instance, as shown from attached sample performance > >results from theGrand Unified Python Benchmark, >20% speed up was > >observed. > > > > > > Eric ? I'm the manager of Intel's server scripting language optimization > team, so I'll answer from that perspective. Thanks, David! > > >Are you referring to the tests in the benchmarks repo? [1] How does the > >real-world performance improvement compare with otherlanguages you are > >targeting for optimization? > > Yes, we're using [1]. > > We're seeing up to 10% improvement on Swift (a project in OpenStack) on > some architectures using the ssbench workload, which is as close to > real-world as we can get. Cool. > Relative to other languages we target, this is > quite good actually. For example, Java's Hotspot JIT is driven by > profiling at its core so it's hard to distinguish the value profiling > alone brings. Interesting. So pypy (with it's profiling JIT) would be in a similar boat, potentially. > We have seen a nice boost on PHP running Wordpress using > PGO, but not as impressive as Python and Swift. Nice. Presumably this reflects some of the choices we've made on the level of complexity in the interpreter source. > > By the way, I think letting the compiler optimize the code is a good > strategy. Not the only strategy we want to use, but it seems like one we > could do more of. > > > And thanks for working on this! I have several more questions: What > >sorts of future changes in CPython's code might interfere with > >youroptimizations? > > > > > > We're also looking at other source-level optimizations, like the CGOTO > patch Vamsi submitted in June. Some of these may reduce the value of PGO, > but in general it's nice to let the compiler do some optimization for you. > > > What future additions might stand to benefit? 
> > > > It's a good question. Our intent is to continue to evaluate and measure > different training workloads for improvement. In other words, as with any > good open source project, this patch should improve things a lot and > should be accepted upstream, but we will continue to make it better. > > > What changes in existing code might improve optimization opportunities? > > > > > > We intend to continue to work on source-level optimizations and measuring > them against GUPB and Swift. Thanks! These sorts of contribution has far-reaching positive effects. > > > What is the added maintenance burden of the optimizations on CPython, > >ifany? > > > > > > I think the answer is none. Our goal was to introduce performance > improvements without adding to maintenance effort. > > >What is the performance impact on non-Intel architectures? What > >aboutolder Intel architectures? ...and future ones? > > > > > > We should modify the patch to make it for Intel only, since we're not > evaluating non-Intel architectures. Unfortunately for us, I suspect that > older Intel CPUs might benefit more than current and future ones. Future > architectures will benefit from other enabling work we're planning. That's fine though. At the least you're setting the stage for future work, including building a relationship here. :) > > > What is Intel's commitment to supporting these (or other) optimizations > >inthe future? How is the practical EOL of the optimizations managed? > > > > > > As with any corporation's budgeting process, it's hard to know exactly > what my managers will let me spend money on. :-) But we're definitely > convinced of the value of dynamic languages for servers and the need to > work on optimization. As far as I have visibility, it appears to be > holding true. Sounds good. > > > Finally, +1 on adding an opt-in Makefile target rather than enabling > >theoptimizations by default. > > > > > > Frankly since Ubuntu has been running this way for past two years, I think > it's fine to make it opt-in, but eventually I hope it can be the default > once we're happy with it. Given the reaction here that sounds reasonable. Thanks for answering these questions and to your team for getting involved! -eric -------------- next part -------------- An HTML attachment was scrubbed... URL: From fijall at gmail.com Tue Aug 25 15:29:10 2015 From: fijall at gmail.com (Maciej Fijalkowski) Date: Tue, 25 Aug 2015 15:29:10 +0200 Subject: [Python-Dev] Profile Guided Optimization active by-default In-Reply-To: References: Message-ID: > > Interesting. So pypy (with it's profiling JIT) would be in a similar boat, > potentially. > PGO and what pypy does have pretty much nothing to do with each other. I'm not sure what do you mean by "similar boat" From florin.papa at intel.com Tue Aug 25 15:11:37 2015 From: florin.papa at intel.com (Papa, Florin) Date: Tue, 25 Aug 2015 13:11:37 +0000 Subject: [Python-Dev] django_v2 benchmark compatibility fix for Python 3.6 Message-ID: <3A375A669FBEFF45B6B60E689636EDCAEAAAB2@IRSMSX101.ger.corp.intel.com> Hi All, My name is Florin Papa and I work in the Server Languages Optimizations Team at Intel Corporation. I would like to submit a patch that solves compatibility issues of the django_v2 benchmark in the Grand Unified Python Benchmark. The django_v2 benchmark uses inspect.getargspec(), which is deprecated and was removed in Python 3.6. Therefore, it crashes with the message "ImportError: cannot import name 'getargspec'" when using the latest version of Python on the default branch. 
The patch modifies the benchmark to use inspect.signature() when Python version is 3.6 or above and keep using inspect.getargspec() otherwise. Regards, Florin -------------- next part -------------- A non-text attachment was scrubbed... Name: django_v2_compat_3_6.patch Type: application/octet-stream Size: 1497 bytes Desc: django_v2_compat_3_6.patch URL: From valentine.sinitsyn at gmail.com Tue Aug 25 09:56:38 2015 From: valentine.sinitsyn at gmail.com (Valentine Sinitsyn) Date: Tue, 25 Aug 2015 12:56:38 +0500 Subject: [Python-Dev] tp_finalize vs tp_del sematics In-Reply-To: References: <55D4360B.6010400@gmail.com> <55DB65CD.9060505@gmail.com> Message-ID: <55DC1FB6.1010700@gmail.com> Hi Armin, On 25.08.2015 12:51, Armin Rigo wrote: > Hi Valentine, > > On 24 August 2015 at 20:43, Valentine Sinitsyn > wrote: >> So you mean that this was to keep things backwards compatible for >> third-party extensions? I haven't thought about it this way, but this makes >> sense. However, the behavior of Python code using objects with __del__ has >> changed nevertheless: they are collectible now, and __del__ is always called >> exactly once, if I understand everything correctly. > > Yes, I think so. There is a *highly obscure* corner case: __del__ > will still be called several times if you declare your class with > "__slots__=()". Even on "post-PEP-0442" Python 3.4+? Could you share a link please? Thanks, Valentine From valentine.sinitsyn at gmail.com Tue Aug 25 10:06:33 2015 From: valentine.sinitsyn at gmail.com (Valentine Sinitsyn) Date: Tue, 25 Aug 2015 13:06:33 +0500 Subject: [Python-Dev] tp_finalize vs tp_del sematics In-Reply-To: References: <55D4360B.6010400@gmail.com> <55DB65CD.9060505@gmail.com> <55DC1FB6.1010700@gmail.com> Message-ID: <55DC2209.6050003@gmail.com> Hi Armin, On 25.08.2015 13:00, Armin Rigo wrote: > Hi Valentine, > > On 25 August 2015 at 09:56, Valentine Sinitsyn > wrote: >>> Yes, I think so. There is a *highly obscure* corner case: __del__ >>> will still be called several times if you declare your class with >>> "__slots__=()". >> >> Even on "post-PEP-0442" Python 3.4+? Could you share a link please? > > class X(object): > __slots__=() # <= try with and without this > def __del__(self): > global revive > revive = self > print("hi") > > X() > revive = None > revive = None > revive = None Indeed, that's very strange. Looks like a bug IMHO. Thanks for pointing out. Valentine From rdmurray at bitdance.com Tue Aug 25 16:51:58 2015 From: rdmurray at bitdance.com (R. David Murray) Date: Tue, 25 Aug 2015 10:51:58 -0400 Subject: [Python-Dev] django_v2 benchmark compatibility fix for Python 3.6 In-Reply-To: <3A375A669FBEFF45B6B60E689636EDCAEAAAB2@IRSMSX101.ger.corp.intel.com> References: <3A375A669FBEFF45B6B60E689636EDCAEAAAB2@IRSMSX101.ger.corp.intel.com> Message-ID: <20150825145159.440E1B500F6@webabinitio.net> On Tue, 25 Aug 2015 13:11:37 -0000, "Papa, Florin" wrote: > My name is Florin Papa and I work in the Server Languages Optimizations Team at Intel Corporation. > > I would like to submit a patch that solves compatibility issues of the django_v2 benchmark in the Grand Unified Python Benchmark. The django_v2 benchmark uses inspect.getargspec(), which is deprecated and was removed in Python 3.6. Therefore, it crashes with the message "ImportError: cannot import name 'getargspec'" when using the latest version of Python on the default branch. > > The patch modifies the benchmark to use inspect.signature() when Python version is 3.6 or above and keep using inspect.getargspec() otherwise. 
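
For readers who want the flavour of the change without opening the patch
(the real fix is the one attached to the tracker issue, not the sketch
below), a version-conditional helper along the lines Florin describes might
look roughly like this; note that signature() and getargspec() do not
report *args/**kwargs in quite the same way, which is exactly why the
actual patch is the thing to review:

    import inspect
    import sys

    def argument_names(func):
        # Hypothetical helper, not the code from the actual patch.
        if sys.version_info >= (3, 6):
            # getargspec() is unavailable here, so use the signature API.
            return list(inspect.signature(func).parameters)
        # Older interpreters keep using the historical API.
        return inspect.getargspec(func).args
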
Note that Papa has submitted the patch to the tracker: http://bugs.python.org/issue24934 I'm not myself sure how we are maintaining that repo (https://hg.python.org/benchmarks), but it does seem like the bug tracker is the right place for such a patch. --David From tjreedy at udel.edu Tue Aug 25 17:18:54 2015 From: tjreedy at udel.edu (Terry Reedy) Date: Tue, 25 Aug 2015 11:18:54 -0400 Subject: [Python-Dev] django_v2 benchmark compatibility fix for Python 3.6 In-Reply-To: <20150825145159.440E1B500F6@webabinitio.net> References: <3A375A669FBEFF45B6B60E689636EDCAEAAAB2@IRSMSX101.ger.corp.intel.com> <20150825145159.440E1B500F6@webabinitio.net> Message-ID: On 8/25/2015 10:51 AM, R. David Murray wrote: > On Tue, 25 Aug 2015 13:11:37 -0000, "Papa, Florin" wrote: >> My name is Florin Papa and I work in the Server Languages Optimizations Team at Intel Corporation. >> >> I would like to submit a patch that solves compatibility issues of the django_v2 benchmark in the Grand Unified Python Benchmark. The django_v2 benchmark uses inspect.getargspec(), which is deprecated and was removed in Python 3.6. Therefore, it crashes with the message "ImportError: cannot import name 'getargspec'" when using the latest version of Python on the default branch. >> >> The patch modifies the benchmark to use inspect.signature() when Python version is 3.6 or above and keep using inspect.getargspec() otherwise. > > Note that Papa has submitted the patch to the tracker: > > http://bugs.python.org/issue24934 > > I'm not myself sure how we are maintaining that repo > (https://hg.python.org/benchmarks), but it does seem like the bug > tracker is the right place for such a patch. Is the django_v2 benchmark original to benchmarks, or a copy from django? -- Terry Jan Reedy From rdmurray at bitdance.com Tue Aug 25 17:31:28 2015 From: rdmurray at bitdance.com (R. David Murray) Date: Tue, 25 Aug 2015 11:31:28 -0400 Subject: [Python-Dev] django_v2 benchmark compatibility fix for Python 3.6 In-Reply-To: References: <3A375A669FBEFF45B6B60E689636EDCAEAAAB2@IRSMSX101.ger.corp.intel.com> <20150825145159.440E1B500F6@webabinitio.net> Message-ID: <20150825153129.AABE1250FEB@webabinitio.net> On Tue, 25 Aug 2015 11:18:54 -0400, Terry Reedy wrote: > On 8/25/2015 10:51 AM, R. David Murray wrote: > > On Tue, 25 Aug 2015 13:11:37 -0000, "Papa, Florin" wrote: > >> My name is Florin Papa and I work in the Server Languages Optimizations Team at Intel Corporation. > >> > >> I would like to submit a patch that solves compatibility issues of the django_v2 benchmark in the Grand Unified Python Benchmark. The django_v2 benchmark uses inspect.getargspec(), which is deprecated and was removed in Python 3.6. Therefore, it crashes with the message "ImportError: cannot import name 'getargspec'" when using the latest version of Python on the default branch. > >> > >> The patch modifies the benchmark to use inspect.signature() when Python version is 3.6 or above and keep using inspect.getargspec() otherwise. > > > > Note that Papa has submitted the patch to the tracker: > > > > http://bugs.python.org/issue24934 > > > > I'm not myself sure how we are maintaining that repo > > (https://hg.python.org/benchmarks), but it does seem like the bug > > tracker is the right place for such a patch. > > Is the django_v2 benchmark original to benchmarks, or a copy from django? Yeah, that's one question that was in my mind when I said I don't know how we maintain that repo. I'm pretty sure it was originally a copy of the django project, but how do we maintain it? 
--David From brett at python.org Tue Aug 25 17:48:21 2015 From: brett at python.org (Brett Cannon) Date: Tue, 25 Aug 2015 15:48:21 +0000 Subject: [Python-Dev] django_v2 benchmark compatibility fix for Python 3.6 In-Reply-To: <20150825153129.AABE1250FEB@webabinitio.net> References: <3A375A669FBEFF45B6B60E689636EDCAEAAAB2@IRSMSX101.ger.corp.intel.com> <20150825145159.440E1B500F6@webabinitio.net> <20150825153129.AABE1250FEB@webabinitio.net> Message-ID: On Tue, 25 Aug 2015 at 08:31 R. David Murray wrote: > On Tue, 25 Aug 2015 11:18:54 -0400, Terry Reedy wrote: > > On 8/25/2015 10:51 AM, R. David Murray wrote: > > > On Tue, 25 Aug 2015 13:11:37 -0000, "Papa, Florin" < > florin.papa at intel.com> wrote: > > >> My name is Florin Papa and I work in the Server Languages > Optimizations Team at Intel Corporation. > > >> > > >> I would like to submit a patch that solves compatibility issues of > the django_v2 benchmark in the Grand Unified Python Benchmark. The > django_v2 benchmark uses inspect.getargspec(), which is deprecated and was > removed in Python 3.6. Therefore, it crashes with the message "ImportError: > cannot import name 'getargspec'" when using the latest version of Python on > the default branch. > > >> > > >> The patch modifies the benchmark to use inspect.signature() when > Python version is 3.6 or above and keep using inspect.getargspec() > otherwise. > > > > > > Note that Papa has submitted the patch to the tracker: > > > > > > http://bugs.python.org/issue24934 > > > > > > I'm not myself sure how we are maintaining that repo > > > (https://hg.python.org/benchmarks), but it does seem like the bug > > > tracker is the right place for such a patch. > > > > Is the django_v2 benchmark original to benchmarks, or a copy from django? > > Yeah, that's one question that was in my mind when I said I don't know > how we maintain that repo. I'm pretty sure it was originally a copy of the > django project, but how do we maintain it? > It's maintained by primarily Antoine and me occasionally doing stuff to it. =) Traditionally bugs have been reported to bugs.python.org. As for the django_v2 benchmark, it was created by Unladen Swallow (it's v2 because it was updated to work with Django 1.5 so as to get Python 3 support for the benchmark). IOW it's out own benchmark and we can do whatever we want with it. -------------- next part -------------- An HTML attachment was scrubbed... URL: From brett at python.org Tue Aug 25 17:59:23 2015 From: brett at python.org (Brett Cannon) Date: Tue, 25 Aug 2015 15:59:23 +0000 Subject: [Python-Dev] Profile Guided Optimization active by-default In-Reply-To: References: <3CF256F4F774BD48A1691D131AA04319141C0795@IRSMSX102.ger.corp.intel.com> Message-ID: On Mon, 24 Aug 2015 at 23:19 Nick Coghlan wrote: > On 25 August 2015 at 05:52, Gregory P. Smith wrote: > > What we tested and decided to use on our own builds after benchmarking at > > work was to build with: > > > > make profile-opt PROFILE_TASK="-m test.regrtest -w -uall,-audio -x > test_gdb > > test_multiprocessing" > > > > In general if a test is unreliable or takes an extremely long time, > exclude > > it for your sanity. (i'd also kick out test_subprocess on 2.7; we > replaced > > subprocess with subprocess32 in our build so that wasn't an issue) > > Having the "production ready" make target be "make profile-opt" > doesn't strike me as the most intuitive thing in the world. 
> > I agree we want the "./configure && make" sequence to be oriented > towards local development builds rather than highly optimised > production ones, so perhaps we could provide a "make production" > target that enables PGO with an appropriate training set from > regrtest, and also complains if "--with-pydebug" is configured? > That's an interesting idea for a make target. It might help get the visibility of PGO builds higher as well. -------------- next part -------------- An HTML attachment was scrubbed... URL: From rdmurray at bitdance.com Tue Aug 25 18:09:43 2015 From: rdmurray at bitdance.com (R. David Murray) Date: Tue, 25 Aug 2015 12:09:43 -0400 Subject: [Python-Dev] Profile Guided Optimization active by-default In-Reply-To: References: <3CF256F4F774BD48A1691D131AA04319141C0795@IRSMSX102.ger.corp.intel.com> Message-ID: <20150825160944.21F14B500F6@webabinitio.net> On Tue, 25 Aug 2015 15:59:23 -0000, Brett Cannon wrote: > On Mon, 24 Aug 2015 at 23:19 Nick Coghlan wrote: > > > On 25 August 2015 at 05:52, Gregory P. Smith wrote: > > > What we tested and decided to use on our own builds after benchmarking at > > > work was to build with: > > > > > > make profile-opt PROFILE_TASK="-m test.regrtest -w -uall,-audio -x > > test_gdb > > > test_multiprocessing" > > > > > > In general if a test is unreliable or takes an extremely long time, > > exclude > > > it for your sanity. (i'd also kick out test_subprocess on 2.7; we > > replaced > > > subprocess with subprocess32 in our build so that wasn't an issue) > > > > Having the "production ready" make target be "make profile-opt" > > doesn't strike me as the most intuitive thing in the world. > > > > I agree we want the "./configure && make" sequence to be oriented > > towards local development builds rather than highly optimised > > production ones, so perhaps we could provide a "make production" > > target that enables PGO with an appropriate training set from > > regrtest, and also complains if "--with-pydebug" is configured? > > > > That's an interesting idea for a make target. It might help get the > visibility of PGO builds higher as well. If we did want to make PGO the default, having a 'make develop' target would also be an option. We already have a precedent for that in the 'setup.py develop' command. --David From brett at python.org Tue Aug 25 18:17:50 2015 From: brett at python.org (Brett Cannon) Date: Tue, 25 Aug 2015 16:17:50 +0000 Subject: [Python-Dev] Profile Guided Optimization active by-default In-Reply-To: <20150825160944.21F14B500F6@webabinitio.net> References: <3CF256F4F774BD48A1691D131AA04319141C0795@IRSMSX102.ger.corp.intel.com> <20150825160944.21F14B500F6@webabinitio.net> Message-ID: On Tue, 25 Aug 2015 at 09:10 R. David Murray wrote: > On Tue, 25 Aug 2015 15:59:23 -0000, Brett Cannon wrote: > > On Mon, 24 Aug 2015 at 23:19 Nick Coghlan wrote: > > > > > On 25 August 2015 at 05:52, Gregory P. Smith wrote: > > > > What we tested and decided to use on our own builds after > benchmarking at > > > > work was to build with: > > > > > > > > make profile-opt PROFILE_TASK="-m test.regrtest -w -uall,-audio -x > > > test_gdb > > > > test_multiprocessing" > > > > > > > > In general if a test is unreliable or takes an extremely long time, > > > exclude > > > > it for your sanity. 
(i'd also kick out test_subprocess on 2.7; we > > > replaced > > > > subprocess with subprocess32 in our build so that wasn't an issue) > > > > > > Having the "production ready" make target be "make profile-opt" > > > doesn't strike me as the most intuitive thing in the world. > > > > > > I agree we want the "./configure && make" sequence to be oriented > > > towards local development builds rather than highly optimised > > > production ones, so perhaps we could provide a "make production" > > > target that enables PGO with an appropriate training set from > > > regrtest, and also complains if "--with-pydebug" is configured? > > > > > > > That's an interesting idea for a make target. It might help get the > > visibility of PGO builds higher as well. > > If we did want to make PGO the default, having a 'make develop' target > > would also be an option. We already have a precedent for that in the > > 'setup.py develop' command. > With a `make develop` target we can also make sure not only that --with-pydebug is used but also that the installation target is /tmp so that new contributors don't accidentally install a debug build. -------------- next part -------------- An HTML attachment was scrubbed... URL: From xavier.combelle at gmail.com Tue Aug 25 18:28:25 2015 From: xavier.combelle at gmail.com (Xavier Combelle) Date: Tue, 25 Aug 2015 18:28:25 +0200 Subject: [Python-Dev] Profile Guided Optimization active by-default In-Reply-To: <20150825160944.21F14B500F6@webabinitio.net> References: <3CF256F4F774BD48A1691D131AA04319141C0795@IRSMSX102.ger.corp.intel.com> <20150825160944.21F14B500F6@webabinitio.net> Message-ID: Pardon me if I'm not in the right place to ask the following naive question (tell me if that's the case): are Profile Guided Optimization performance improvements specific to the chip where the build is done, or is the performance better across a larger set of chips? -------------- next part -------------- An HTML attachment was scrubbed... URL: From greg at krypto.org Tue Aug 25 18:43:56 2015 From: greg at krypto.org (Gregory P. Smith) Date: Tue, 25 Aug 2015 16:43:56 +0000 Subject: [Python-Dev] Profile Guided Optimization active by-default In-Reply-To: References: <3CF256F4F774BD48A1691D131AA04319141C0795@IRSMSX102.ger.corp.intel.com> <20150825160944.21F14B500F6@webabinitio.net> Message-ID: PGO is unrelated to the particular CPU the profiling is done on. (It is conceivable that it'd make a small difference but I've never observed that in practice) On Tue, Aug 25, 2015, 9:28 AM Xavier Combelle wrote: Pardon me if I'm not in the right place to ask the following naive question (tell me if that's the case): are Profile Guided Optimization performance improvements specific to the chip where the build is done, or is the performance better across a larger set of chips? -------------- next part -------------- An HTML attachment was scrubbed... URL: From greg at krypto.org Tue Aug 25 18:48:23 2015 From: greg at krypto.org (Gregory P. Smith) Date: Tue, 25 Aug 2015 16:48:23 +0000 Subject: [Python-Dev] Profile Guided Optimization active by-default In-Reply-To: References: <3CF256F4F774BD48A1691D131AA04319141C0795@IRSMSX102.ger.corp.intel.com> Message-ID: On Mon, Aug 24, 2015, 11:19 PM Nick Coghlan wrote: On 25 August 2015 at 05:52, Gregory P.
Smith wrote: > What we tested and decided to use on our own builds after benchmarking at > work was to build with: > > make profile-opt PROFILE_TASK="-m test.regrtest -w -uall,-audio -x test_gdb > test_multiprocessing" > > In general if a test is unreliable or takes an extremely long time, exclude > it for your sanity. (i'd also kick out test_subprocess on 2.7; we replaced > subprocess with subprocess32 in our build so that wasn't an issue) Having the "production ready" make target be "make profile-opt" doesn't strike me as the most intuitive thing in the world. I agree we want the "./configure && make" sequence to be oriented towards local development builds rather than highly optimised production ones, so perhaps we could provide a "make production" target that enables PGO with an appropriate training set from regrtest, and also complains if "--with-pydebug" is configured? Regards, Nick. -- Nick Coghlan | ncoghlan@ gmail.com | Brisbane, Australia Agreed. Also, printing a message out at the end of a default make all build suggesting people use make production for additional performance instead might help advertise it. make install could possibly depend on make production as well? -------------- next part -------------- An HTML attachment was scrubbed... URL: From alecsandru.patrascu at intel.com Tue Aug 25 19:16:45 2015 From: alecsandru.patrascu at intel.com (Patrascu, Alecsandru) Date: Tue, 25 Aug 2015 17:16:45 +0000 Subject: [Python-Dev] Profile Guided Optimization active by-default In-Reply-To: References: <3CF256F4F774BD48A1691D131AA04319141C0795@IRSMSX102.ger.corp.intel.com> <20150825160944.21F14B500F6@webabinitio.net> Message-ID: <3CF256F4F774BD48A1691D131AA04319141C1F58@IRSMSX102.ger.corp.intel.com> Indeed, as Gregory mentioned, PGO is unrelated to a particular CPU on which we do profiling. From: Python-Dev [mailto:python-dev-bounces+alecsandru.patrascu=intel.com at python.org] On Behalf Of Gregory P. Smith Sent: Tuesday, August 25, 2015 7:44 PM To: Xavier Combelle; python-dev at python.org Subject: Re: [Python-Dev] Profile Guided Optimization active by-default PGO is unrelated to the particular CPU the profiling is done on. (It is conceivable that it'd make a small difference but I've never observed that in practice) On Tue, Aug 25, 2015, 9:28 AM Xavier Combelle wrote: Pardon me if I'm not in the right place to ask the following naive question (tell me if that's the case): are Profile Guided Optimization performance improvements specific to the chip where the build is done, or is the performance better across a larger set of chips? From steve.dower at python.org Tue Aug 25 20:17:43 2015 From: steve.dower at python.org (Steve Dower) Date: Tue, 25 Aug 2015 11:17:43 -0700 Subject: [Python-Dev] Building Extensions for Python 3.5 on Windows Message-ID: <55DCB147.9020604@python.org> I've written up a long technical blog post about the compiler and CRT changes in Python 3.5, which will be of interest to those who build and distribute native extensions for Windows. http://stevedower.id.au/blog/building-for-python-3-5/ Hopefully it puts some of the changes we've made into a context where they don't just look like unnecessary pain. Feedback and discussion welcome, either on these lists or on the post itself.
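For reference, building a simple extension still goes through an ordinary setup script; a generic sketch (the module and file names here are just placeholders, not something taken from the post) looks like:

    from distutils.core import setup, Extension

    # "spam" and spam.c are placeholders for your own module name and C source.
    setup(
        name="spam",
        version="1.0",
        ext_modules=[Extension("spam", sources=["spam.c"])],
    )

Running "python setup.py build_ext" under Python 3.5 then uses whatever compiler distutils locates for that version, which on Windows is the toolchain the post discusses.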
Cheers, Steve From skip.montanaro at gmail.com Tue Aug 25 21:54:24 2015 From: skip.montanaro at gmail.com (Skip Montanaro) Date: Tue, 25 Aug 2015 14:54:24 -0500 Subject: [Python-Dev] Profile Guided Optimization active by-default In-Reply-To: References: <3CF256F4F774BD48A1691D131AA04319141C0795@IRSMSX102.ger.corp.intel.com> <20150825160944.21F14B500F6@webabinitio.net> Message-ID: On Tue, Aug 25, 2015 at 11:17 AM, Brett Cannon wrote: > With a `make develop` target we also can make sure not only that > --with-pydebug is used but that the installation target is /tmp so that new > contributors don't accidentally install a debug build. You need to be careful there. In my environment, I interface with a lot of Boost.Python-wrapped code which would be quite impractical to compile with --with-pydebug. I'd like to be able to throw in all the other development bells and whistles though, without changing the size of the object header. Maybe "develop-lite"? -ly, y'rs, Skip -------------- next part -------------- An HTML attachment was scrubbed... URL: From larry at hastings.org Tue Aug 25 22:16:33 2015 From: larry at hastings.org (Larry Hastings) Date: Tue, 25 Aug 2015 13:16:33 -0700 Subject: [Python-Dev] [RELEASED] Python 3.5.0rc2 is now available Message-ID: <55DCCD21.4020308@hastings.org> On behalf of the Python development community and the Python 3.5 release team, I'm relieved to announce the availability of Python 3.5.0rc2, also known as Python 3.5.0 Release Candidate 2. Python 3.5 has now entered "feature freeze". By default new features may no longer be added to Python 3.5. This is a preview release, and its use is not recommended for production settings. You can find Python 3.5.0rc2 here: https://www.python.org/downloads/release/python-350rc2/ Windows and Mac users: please read the important platform-specific "Notes on this release" section near the end of that page. Happy hacking, //arry/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From tjreedy at udel.edu Wed Aug 26 06:53:16 2015 From: tjreedy at udel.edu (Terry Reedy) Date: Wed, 26 Aug 2015 00:53:16 -0400 Subject: [Python-Dev] Building Extensions for Python 3.5 on Windows In-Reply-To: <55DCB147.9020604@python.org> References: <55DCB147.9020604@python.org> Message-ID: On 8/25/2015 2:17 PM, Steve Dower wrote: > I've written up a long technical blog post about the compiler and CRT > changes in Python 3.5, which will be of interest to those who build and > distribute native extensions for Windows. > > http://stevedower.id.au/blog/building-for-python-3-5/ > > Hopefully it puts some of the changes we've made into a context where > they don't just look like unnecessary pain. Feedback and discussion > welcome, either on these lists or on the post itself. This is an excellent technical writeup. Can it be linked to from the devguide, or maybe the C-API docs, if they do not contain everything in the post? -- Terry Jan Reedy From florin.papa at intel.com Wed Aug 26 09:49:11 2015 From: florin.papa at intel.com (Papa, Florin) Date: Wed, 26 Aug 2015 07:49:11 +0000 Subject: [Python-Dev] django_v2 benchmark compatibility fix for Python 3.6 In-Reply-To: References: <3A375A669FBEFF45B6B60E689636EDCAEAAAB2@IRSMSX101.ger.corp.intel.com> <20150825145159.440E1B500F6@webabinitio.net> <20150825153129.AABE1250FEB@webabinitio.net> Message-ID: <3A375A669FBEFF45B6B60E689636EDCAEAAEA0@IRSMSX101.ger.corp.intel.com> Hi all, Based on the feedback I received, I updated the patch to introduce django_v3 benchmark, which uses django 1.8. 
Also, django_v2 was deprecated for Python 3.6 and above. In order for django_v3 to work, the latest django release must be present in lib/Django-1.8. Django-1.8 is attached as a zip file because the patch would be too large if it included all these files. In order for the modifications to work, extract the archive to lib/Django-1.8 . Please see the issue here: http://bugs.python.org/issue24934 Thank you, Florin Papa On Tue, 25 Aug 2015 at 08:31 R. David Murray wrote: On Tue, 25 Aug 2015 11:18:54 -0400, Terry Reedy wrote: > On 8/25/2015 10:51 AM, R. David Murray wrote: > > On Tue, 25 Aug 2015 13:11:37 -0000, "Papa, Florin" wrote: > >> My name is Florin Papa and I work in the Server Languages Optimizations Team at Intel Corporation. > >> > >> I would like to submit a patch that solves compatibility issues of the django_v2 benchmark in the Grand Unified Python Benchmark. The django_v2 benchmark uses inspect.getargspec(), which is deprecated and was removed in Python 3.6. Therefore, it crashes with the message "ImportError: cannot import name 'getargspec'" when using the latest version of Python on the default branch. > >> > >> The patch modifies the benchmark to use inspect.signature() when Python version is 3.6 or above and keep using inspect.getargspec() otherwise. > > > > Note that Papa has submitted the patch to the tracker: > > > >? ? ? http://bugs.python.org/issue24934 > > > > I'm not myself sure how we are maintaining that repo > > (https://hg.python.org/benchmarks), but it does seem like the bug > > tracker is the right place for such a patch. > > Is the django_v2 benchmark original to benchmarks, or a copy from django? Yeah, that's one question that was in my mind when I said I don't know how we maintain that repo.? I'm pretty sure it was originally a copy of the django project, but how do we maintain it? It's maintained by primarily Antoine and me occasionally doing stuff to it. =) Traditionally bugs have been reported to bugs.python.org. As for the django_v2 benchmark, it was created by Unladen Swallow (it's v2 because it was updated to work with Django 1.5 so as to get Python 3 support for the benchmark). IOW it's out own benchmark and we can do whatever we want with it. From steve.dower at python.org Wed Aug 26 18:14:07 2015 From: steve.dower at python.org (Steve Dower) Date: Wed, 26 Aug 2015 09:14:07 -0700 Subject: [Python-Dev] Building Extensions for Python 3.5 on Windows In-Reply-To: References: <55DCB147.9020604@python.org> Message-ID: <55DDE5CF.7030604@python.org> On 25Aug2015 2153, Terry Reedy wrote: > On 8/25/2015 2:17 PM, Steve Dower wrote: >> I've written up a long technical blog post about the compiler and CRT >> changes in Python 3.5, which will be of interest to those who build and >> distribute native extensions for Windows. >> >> http://stevedower.id.au/blog/building-for-python-3-5/ >> >> Hopefully it puts some of the changes we've made into a context where >> they don't just look like unnecessary pain. Feedback and discussion >> welcome, either on these lists or on the post itself. > > This is an excellent technical writeup. Can it be linked to from the > devguide, or maybe the C-API docs, if they do not contain everything in > the post? > I probably need to go through the "Building C and C++ Extensions on Windows" chapter (https://docs.python.org/3.5/extending/windows.html) and update it. 
Judging by the note near the top ("For example, if you are using Python 2.2.1") it's a little out of date :) Things I'd like to see on that page: * setup.py example to build simple extensions * command-line commands to build directly * VS walkthrough for setting up a project (like what is there now) * MinGW walkthrough for building extensions via distutils or directly (I'll need some help with this one) * deeper discussion on DLLs/static linking/distribution (like section 4.3 now, plus details from my post) On the VS walkthrough, my team at work already has a strong interest (and vague plans) to publish VS templates for building Python extensions, which naturally come with docs and maybe a video walkthrough (like https://youtu.be/D9RlT06a1EI, which I did for Python 3.4 without a template). If there's no opposition, it may be neater to link to that rather than walking through VS in Python's docs - then this section would just cover the command line invocations. I have no issues with linking to other IDE's templates/walkthroughs, but I don't know of any that exist yet. Cheers, Steve From tjreedy at udel.edu Thu Aug 27 06:20:54 2015 From: tjreedy at udel.edu (Terry Reedy) Date: Thu, 27 Aug 2015 00:20:54 -0400 Subject: [Python-Dev] Testing tkinter on Linux Message-ID: None of the linux buildbots run with X enabled. Consequently none of the tkinter (or tkinter user) gui tests are run on Linux. It was thus pointed out to me, during discussion of using ttk widgets in Idle, that we do not really know if ttk works on the variety of Linux systems (beyond the one Serhiy uses) and that I should look into this. I asked on python-list for help, by linux users running python3 -m test -ugui test_tk test_ttk_guionly test_idle Seven people did so with Debian Jessie, Debian Wheezy, Gentoo, Mint, openSUSE, and Ubuntu (x2). One machine failed once with the ttk test, and then passed. Another failed the tk test until a mis-configuration was fixed. So tkinter, and ttk in particular, seems to be working on linux. Do any of the core devs who run the test suite on Linux do so with -uall or -ugui? The gui tests above take about 10 seconds (mostly the tk test) on my machine, with some flashing boxes near the end. -- Terry Jan Reedy From rosuav at gmail.com Thu Aug 27 06:35:36 2015 From: rosuav at gmail.com (Chris Angelico) Date: Thu, 27 Aug 2015 14:35:36 +1000 Subject: [Python-Dev] Testing tkinter on Linux In-Reply-To: References: Message-ID: On Thu, Aug 27, 2015 at 2:20 PM, Terry Reedy wrote: > None of the linux buildbots run with X enabled. Consequently none of the > tkinter (or tkinter user) gui tests are run on Linux. It was thus pointed > out to me, during discussion of using ttk widgets in Idle, that we do not > really know if ttk works on the variety of Linux systems (beyond the one > Serhiy uses) and that I should look into this. If it helps, my buildbot has full GUI services, so if there's a simple way to tell it to run the GUI tests every time, they should pass. 
ChrisA From wai.hoex.low at intel.com Thu Aug 27 10:57:13 2015 From: wai.hoex.low at intel.com (Low, Wai HoeX) Date: Thu, 27 Aug 2015 08:57:13 +0000 Subject: [Python-Dev] Python Issue-subprocess problem Message-ID: <9EE08301DF71934FA8F763DD304DEFE4423592@PGSMSX102.gar.corp.intel.com> Dear, I have faced a problem as below when I startup the python IDLE [cid:image001.png at 01D0E0CE.454096D0] Then I try to run a simple program like this: [cid:image002.png at 01D0E0CE.454096D0] Other issue is pop out [cid:image003.png at 01D0E0CE.454096D0] May I know how to fix the issue? My laptop is window 7 64bits. Thanks and having a nice day ! Regards, Arthur Low Wai Hoe -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 24383 bytes Desc: image001.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image002.png Type: image/png Size: 18167 bytes Desc: image002.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image003.png Type: image/png Size: 21157 bytes Desc: image003.png URL: From rymg19 at gmail.com Thu Aug 27 17:05:28 2015 From: rymg19 at gmail.com (Ryan Gonzalez) Date: Thu, 27 Aug 2015 10:05:28 -0500 Subject: [Python-Dev] Python Issue-subprocess problem In-Reply-To: <9EE08301DF71934FA8F763DD304DEFE4423592@PGSMSX102.gar.corp.intel.com> References: <9EE08301DF71934FA8F763DD304DEFE4423592@PGSMSX102.gar.corp.intel.com> Message-ID: *before anyone else says it* This list is for development *of* Python, not *in* Python. If you need help with things like this, I'd advise you to use the python-list mailing list or Stack Overflow . On Thu, Aug 27, 2015 at 3:57 AM, Low, Wai HoeX wrote: > Dear, > > > > I have faced a problem as below when I startup the python IDLE > > > > [image: cid:image001.png at 01D0E0CE.454096D0] > > > > Then I try to run a simple program like this: > > [image: cid:image002.png at 01D0E0CE.454096D0] > > > > Other issue is pop out > > > > [image: cid:image003.png at 01D0E0CE.454096D0] > > > > May I know how to fix the issue? My laptop is window 7 64bits. > > > > > > > > Thanks and having a nice day ! > > > > Regards, > > Arthur Low Wai Hoe > > > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/rymg19%40gmail.com > > -- Ryan [ERROR]: Your autotools build scripts are 200 lines longer than your program. Something?s wrong. http://kirbyfan64.github.io/ -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 24383 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image002.png Type: image/png Size: 18167 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: image003.png Type: image/png Size: 21157 bytes Desc: not available URL: From tjreedy at udel.edu Thu Aug 27 20:10:15 2015 From: tjreedy at udel.edu (Terry Reedy) Date: Thu, 27 Aug 2015 14:10:15 -0400 Subject: [Python-Dev] Python Issue-subprocess problem In-Reply-To: References: <9EE08301DF71934FA8F763DD304DEFE4423592@PGSMSX102.gar.corp.intel.com> Message-ID: On 8/27/2015 11:05 AM, Ryan Gonzalez wrote: > *before anyone else says it* > > This list is for development /of/ Python, not /in/ Python. If you need > help with things like this, I'd advise you to use the python-list > mailing list or > Stack Overflow . + sent private message -- Terry Jan Reedy From tjreedy at udel.edu Thu Aug 27 20:24:36 2015 From: tjreedy at udel.edu (Terry Reedy) Date: Thu, 27 Aug 2015 14:24:36 -0400 Subject: [Python-Dev] Testing tkinter on Linux In-Reply-To: References: Message-ID: On 8/27/2015 12:35 AM, Chris Angelico wrote: > On Thu, Aug 27, 2015 at 2:20 PM, Terry Reedy wrote: >> None of the linux buildbots run with X enabled. Consequently none of the >> tkinter (or tkinter user) gui tests are run on Linux. It was thus pointed >> out to me, during discussion of using ttk widgets in Idle, that we do not >> really know if ttk works on the variety of Linux systems (beyond the one >> Serhiy uses) and that I should look into this. > > If it helps, my buildbot has full GUI services, so if there's a simple > way to tell it to run the GUI tests every time, they should pass. Somewhere your buildbot has a shell script to run that ends with a command to start the tests. The commands are echoed to the buildbot output. Here are two that I found. ./python ./Tools/scripts/run_tests.py -j 1 -u all -W --timeout=3600 ... PCbuild\..\lib\test\regrtest.py" -uall -rwW -n --timeout 3600 (and python -m test ... should work) If the command has -ugui (included in -uall) *and* a graphics system can be initiated (X on Linux), then the gui resource is marked present and gui tests will run. -- Terry Jan Reedy From rdmurray at bitdance.com Thu Aug 27 21:00:40 2015 From: rdmurray at bitdance.com (R. David Murray) Date: Thu, 27 Aug 2015 15:00:40 -0400 Subject: [Python-Dev] Testing tkinter on Linux In-Reply-To: References: Message-ID: <20150827190041.2CC68B20098@webabinitio.net> On Thu, 27 Aug 2015 14:24:36 -0400, Terry Reedy wrote: > On 8/27/2015 12:35 AM, Chris Angelico wrote: > > On Thu, Aug 27, 2015 at 2:20 PM, Terry Reedy wrote: > >> None of the linux buildbots run with X enabled. Consequently none of the > >> tkinter (or tkinter user) gui tests are run on Linux. It was thus pointed > >> out to me, during discussion of using ttk widgets in Idle, that we do not > >> really know if ttk works on the variety of Linux systems (beyond the one > >> Serhiy uses) and that I should look into this. > > > > If it helps, my buildbot has full GUI services, so if there's a simple > > way to tell it to run the GUI tests every time, they should pass. > > Somewhere your buildbot has a shell script to run that ends with a > command to start the tests. The commands are echoed to the buildbot > output. Here are two that I found. No, the master controls this. > ./python ./Tools/scripts/run_tests.py -j 1 -u all -W --timeout=3600 > ... PCbuild\..\lib\test\regrtest.py" -uall -rwW -n --timeout 3600 > (and python -m test ... should work) > > If the command has -ugui (included in -uall) *and* a graphics system can > be initiated (X on Linux), then the gui resource is marked present and > gui tests will run. 
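Concretely, a rough illustration (not the real test.support logic) of why the gui resource ends up unavailable on a headless box: Tk simply refuses to start when it cannot reach a display:

    import tkinter

    def gui_available():
        # On Linux this raises TclError when no X display is reachable,
        # e.g. when DISPLAY is unset on a headless buildbot.
        try:
            root = tkinter.Tk()
        except tkinter.TclError:
            return False
        root.destroy()
        return True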
I believe gui depends on the existence of the DISPLAY environment variable on unix/linux (that is, TK will fail to start if DISPLAY is not set, so _is_gui_available will return False). You should be able to confirm this by looking at the text of the skip message in the buildbot output. It is possible to create a "virtual" X on an otherwise headless linux system, but I've never tried to do it myself. If someone comes up with a recipe we could add it to the devguide chapter on running a buildbot. --David From me at the-compiler.org Thu Aug 27 21:10:09 2015 From: me at the-compiler.org (Florian Bruhin) Date: Thu, 27 Aug 2015 21:10:09 +0200 Subject: [Python-Dev] Testing tkinter on Linux In-Reply-To: <20150827190041.2CC68B20098@webabinitio.net> References: <20150827190041.2CC68B20098@webabinitio.net> Message-ID: <20150827191009.GJ509@tonks> * R. David Murray [2015-08-27 15:00:40 -0400]: > It is possible to create a "virtual" X on an otherwise headless linux > system, but I've never tried to do it myself. If someone comes up > with a recipe we could add it to the devguide chapter on running > a buildbot. It's usually as easy as installing Xvfb and prepending "xvfb-run" to the command: $ export DISPLAY= $ python3 -m test -ugui test_tk test_ttk_guionly test_idle [1/3] test_tk test_tk skipped -- Tk unavailable due to TclError: couldn't connect to display "" [2/3] test_ttk_guionly test_ttk_guionly skipped -- Tk unavailable due to TclError: couldn't connect to display "" [3/3] test_idle 1 test OK. 2 tests skipped: test_tk test_ttk_guionly $ xvfb-run python3 -m test -ugui test_tk test_ttk_guionly test_idle [1/3] test_tk [2/3] test_ttk_guionly [3/3] test_idle All 3 tests OK. Florian -- http://www.the-compiler.org | me at the-compiler.org (Mail/XMPP) GPG: 916E B0C8 FD55 A072 | http://the-compiler.org/pubkey.asc I love long mails! | http://email.is-not-s.ms/ -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 819 bytes Desc: not available URL: From larry at hastings.org Thu Aug 27 21:32:52 2015 From: larry at hastings.org (Larry Hastings) Date: Thu, 27 Aug 2015 12:32:52 -0700 Subject: [Python-Dev] How To Forward-Merge Your Change After Your Pull Request Is Accepted Into Python 3.5.0rcX Message-ID: <55DF65E4.7060109@hastings.org> Now that we're in the "release candidate" phase of Python 3.5.0, the workflow has changed a little. We're trying an experiment using Bitbucket and pull requests. You can read about that workflow, here: https://mail.python.org/pipermail/python-dev/2015-August/141167.html But the instructions for that workflow are pretty hazy on what you do after your pull request is accepted. This message is an addendum to those instructions, describing exactly what you should do after your pull request is accepted. To save wear and tear on my hands (and your eyes), for the rest of these instructions, I'm going to refer to each place-you-can-check-things-in-to by version number as follows: 3.4 : hg.python.org/cpython (branch "3.4") 3.5.0 : https://bitbucket.org/larry/cpython350 (branch "3.5") 3.5.1 : hg.python.org/cpython (branch "3.5") 3.6 : hg.python.org/cpython (branch "default") With that nomenclature established I can now precisely say: when your pull request is accepted into 3.5.0, you need to merge from 3.5.0 into 3.5.1, and then from 3.5.1 into 3.6. Doing this is much like the existing workflow. 
You use "hg merge" to merge your changes from previous versions into subsequent versions (what I call "forward merging"). What complicates matters is the fact that the 3.5.0 release candidates don't live in the normal repo--they lives in a repo on Bitbucket which is only writeable by me. In order to keep a tight lid on the changes checked in to the 3.5.0 release candidates, I won't pull revisions from the normal CPython repo. (If I did, I'd also pull in changes intended for 3.5.1, and... it'd be a mess.) So here come the instructions. They look long, but that's just because I go into a lot of detail to try and make them as foolproof as possible. They aren't really much longer or more complicated than the steps you already follow to perform forward-merges. Note that these are easy, guaranteed-clean instructions on how to perform the merge. Are there shortcuts you could take that might speed things up? Yes. But I encourage you to skip those shortcuts and stick to my instructions. Working with multiple branches is complicated enough, and this external repo makes things even more complicated. It's all too easy to make a mistake. HOW TO FORWARD-MERGE FROM 3.5.0 TO 3.5.1 ---------------------------------------- 1: Wait until your pull request has been accepted. 2: Make a *clean* local clone of the CPython tree, updated to the "3.5" branch. In my instructions I'll call the clone "cpython351-merge": % hg clone ssh://hg at hg.python.org/cpython -u 3.5 cpython351-merge % cd cpython351-merge 3: Confirm that you're in the correct branch. You should be in the "3.5" branch. Run this command: % hg id Let's assume that the current head in the "3.5" branch has changeset ID "7890abcdef". If that were true, the output of "hg id" would look like this: 7890abcdef (3.5) It might also say "tip" on the end, like this: 7890abcdef (3.5) tip If it doesn't say "3.5", switch to the 3.5 branch: % hg up -r 3.5 and repeat this step. 4: Pull from the 3.5.0 repo into your "cpython351-merge" directory. % hg pull ssh://hg at bitbucket.org/larry/cpython350 You should now have two "heads" in the 3.5 branch; the existing head you saw in the previous step, and the new head you just pulled in, which should be the changeset from your pull request. 5: As an optional step: confirm you have the correct two heads. This command will print a list of all the heads in the current repo: % hg heads Again, you should have exactly two identified as being on the "3.5" branch; one should have the changeset ID shown by "hg id" in step 3, the other should be your change from the pull request. 6: Merge the two heads together: % hg merge If there are merge conflicts, Mercurial will guide you through the conflict resolution process as normal. 7: Make sure that all your changes merged properly and you didn't merge anything else by accident. I run these two commands: % hg stat % hg diff and read all the output. 8: Make sure Misc/NEWS has your update in the right spot. (See below.) 9: Check in. The checkin comment should be something like Merge from 3.5.0 to 3.5.1. 10: Push your changes back into the main CPython repo. % hg push 11: Now forward-merge your change to 3.6 as normal, following the CPython Dev Guide instructions: https://docs.python.org/devguide/committing.html#merging-between-different-branches-within-the-same-major-version FREQUENTLY ASKED QUESTIONS -------------------------- Q: I screwed something up! What do I do now? If you haven't pushed your changes out, it's no problem. Just delete your repo and start over. 
If you *have* pushed your changes out, obviously we'll need to fix the mistake. If you're not sure how to fix the problem, I suggest logging in to the #python-dev IRC channel and asking for help. Q: What do I need to do about Misc/NEWS? I'm glad you asked! First, you *must* put your Misc/NEWS update into the correct section. If you're creating a pull request for Python 3.5.0 rc-something, put it in the 3.5.0 rc-something section. If you're checking in to 3.5.1, put it in the 3.5.1 section. If you're just checking into 3.6, put it in the 3.6.0 alpha 1 section. Second, when you merge forward, make sure the merge tool puts your Misc/NEWS entry in the right section. The merge tool I seem to use isn't particularly smart about this, so I've had to manually edit Misc/NEWS many times to fix it. (When I released 3.5.0rc2, I had to do a lot of cleanup on Misc/NEWS, and again in the 3.5.1 branch, and again in 3.6.) Every time you merge forward, make sure your Misc/NEWS entry is in the right spot. Q: What if a second pull request is accepted before I get around to doing the merge? Well, *someone* needs to merge, and they're going to have to merge *both* changes. I can't come up with a good general policy here. Hopefully this won't happen often; for now let's just handle it on a case-by-case basis. Q: What if I have a bugfix for 3.4 that I want to ship with 3.5.0? You have to check in twice, and merge-forward twice. First, check in to 3.4, then merge forward into 3.5.1 and 3.6. Then, once your pull request is accepted into 3.5.0, do a "null merge" (a merge where no files are changed) from 3.5.0 into 3.5.1 and 3.6. Q: What if my pull request is turned down? If your bug fix isn't critical enough to merit shipping with 3.5.0, just check it into the normal 3.5 branch on hg.python.org and it'll ship with 3.5.1. (And, naturally, forward-merge it into 3.6.) //arry/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From rosuav at gmail.com Thu Aug 27 23:13:07 2015 From: rosuav at gmail.com (Chris Angelico) Date: Fri, 28 Aug 2015 07:13:07 +1000 Subject: [Python-Dev] Testing tkinter on Linux In-Reply-To: <20150827190041.2CC68B20098@webabinitio.net> References: <20150827190041.2CC68B20098@webabinitio.net> Message-ID: On Fri, Aug 28, 2015 at 5:00 AM, R. David Murray wrote: > I believe gui depends on the existence of the DISPLAY environment > variable on unix/linux (that is, TK will fail to start if DISPLAY is not > set, so _is_gui_available will return False). You should be able to > confirm this by looking at the text of the skip message in the buildbot > output. A recent buildbot log [1] shows that the GUI tests are being skipped, although I'm not seeing the message. Where do I go to set DISPLAY for the bot (which runs mostly in the background)? Note that it's all running as root, which may make a difference to the defaults, so this might have to be done more explicitly than it otherwise would. (But root is quite happy to use X; running xclock brings up a clock just fine.) [1] http://buildbot.python.org/all/builders/AMD64%20Debian%20root%203.5/builds/210/steps/test/logs/stdio ChrisA From yselivanov.ml at gmail.com Thu Aug 27 23:15:50 2015 From: yselivanov.ml at gmail.com (Yury Selivanov) Date: Thu, 27 Aug 2015 17:15:50 -0400 Subject: [Python-Dev] provisional status for asyncio Message-ID: <55DF7E06.1020207@gmail.com> Recently, in an asyncio related issue [1], Guido said that new features for asyncio have to wait till 3.6, since asyncio is no longer a provisional package. 
Later, in an off-list conversation, he suggested that this topic should be discussed on python-dev, and that it might indeed make sense to either write a new PEP for cases like this or to augment PEP 411. My opinion on this topic is that we must maintain full backwards compatibility for asyncio from 3.5.0, as it is now widely used, and there is quite a big ecosystem around it. However, asyncio is simply not mature enough to be completely feature frozen for about 2 years. For example, there is an issue [2] to add starttls support to asyncio. It is an essential feature, because some protocols start as clear text and upgrade to TLS later, for example PostgreSQL PQ3 protocol. It's very hard to implement starttls on top of asyncio, lots of code will have to be duplicated -- it's a feature that has to implemented in the asyncio core. Aside from adding new APIs, we also have to improve debugging capabilities. One example is using os.fork() from within a running event loop -- it must be avoided by all means. There are safe ways to fork in asyncio applications (and I plan to document them soon), but asyncio should raise an exception in debug mode if this happens (see issue [3]). These are just two immediate issues that I have in mind. In reality, asyncio is quite young compared to frameworks like Twisted, which had years to mature, and accumulate essential features. My proposal is to amend PEP 411 with two levels of provisional packages: Level 1: Backwards incompatible changes might be introduced in point releases. Level 2: Only backwards compatible changes can be introduced in new point releases. With the above amendments, asyncio status should be restated as a level-2 provisional package. I'm CC-ing authors of PEP 411 Nick and Eli. Thank you, Yury [1] http://bugs.python.org/issue23630 [2] http://bugs.python.org/issue23749 [3] http://bugs.python.org/issue21998 From yselivanov.ml at gmail.com Thu Aug 27 23:31:49 2015 From: yselivanov.ml at gmail.com (Yury Selivanov) Date: Thu, 27 Aug 2015 17:31:49 -0400 Subject: [Python-Dev] provisional status for asyncio In-Reply-To: References: <55DF7E06.1020207@gmail.com> Message-ID: <55DF81C5.907@gmail.com> On 2015-08-27 5:24 PM, Brett Cannon wrote: > > My proposal is to amend PEP 411 with two levels of provisional > packages: > > Level 1: Backwards incompatible changes might be introduced in point > releases. > > Level 2: Only backwards compatible changes can be introduced in > new point > releases. > > > How is this any different from the normal compatibility promise we > have for any non-provisional code in the stdlib? > > And by point release I assume you mean a new minor release, e.g. 3.5 > -> 3.6. Right, my mistake, I indeed meant minor releases. The difference is that right now we don't introduce new features (regardless of backwards compatibility promises) for any non-provisional code in minor releases, we can only do bug fixes. My proposal is to enable asyncio receiving new strictly backwards compatible APIs/features (and bug fixes too, of course) in minor releases (3.5.x). 
Yury From yselivanov.ml at gmail.com Thu Aug 27 23:39:14 2015 From: yselivanov.ml at gmail.com (Yury Selivanov) Date: Thu, 27 Aug 2015 17:39:14 -0400 Subject: [Python-Dev] provisional status for asyncio In-Reply-To: <55DF81C5.907@gmail.com> References: <55DF7E06.1020207@gmail.com> <55DF81C5.907@gmail.com> Message-ID: <55DF8382.5090200@gmail.com> On 2015-08-27 5:31 PM, Yury Selivanov wrote: > On 2015-08-27 5:24 PM, Brett Cannon wrote: >> >> My proposal is to amend PEP 411 with two levels of provisional >> packages: >> >> Level 1: Backwards incompatible changes might be introduced in point >> releases. >> >> Level 2: Only backwards compatible changes can be introduced in >> new point >> releases. >> >> >> How is this any different from the normal compatibility promise we >> have for any non-provisional code in the stdlib? >> >> And by point release I assume you mean a new minor release, e.g. 3.5 >> -> 3.6. > > Right, my mistake, I indeed meant minor releases. > > The difference is that right now we don't introduce new features > (regardless of backwards compatibility promises) for any > non-provisional code in minor releases, we can only do bug fixes. > > My proposal is to enable asyncio receiving new strictly backwards > compatible APIs/features (and bug fixes too, of course) in minor > releases (3.5.x). > Turns out I was lost in terminology :) Considering that Python versioning is defined as major.minor.micro, I'll rephrase the proposal: Level 1: Backwards incompatible changes might be introduced in new Python releases (including micro releases) Level 2: Only backwards compatible changes (new APIs including) can be introduced in micro releases. Sorry for the confusion. Yury From guido at python.org Thu Aug 27 23:42:54 2015 From: guido at python.org (Guido van Rossum) Date: Thu, 27 Aug 2015 14:42:54 -0700 Subject: [Python-Dev] provisional status for asyncio In-Reply-To: <55DF8382.5090200@gmail.com> References: <55DF7E06.1020207@gmail.com> <55DF81C5.907@gmail.com> <55DF8382.5090200@gmail.com> Message-ID: Please use "feature release" (e.g. 3.5 -> 3.6) and "bugfix release" (e.g. 3.5.0 -> 3.5.1). The major/minor terminology is confusing, since something like 2 -> 3 isn't just "major", it is "earthshattering". :-) On Thu, Aug 27, 2015 at 2:39 PM, Yury Selivanov wrote: > > On 2015-08-27 5:31 PM, Yury Selivanov wrote: > >> On 2015-08-27 5:24 PM, Brett Cannon wrote: >> >>> >>> My proposal is to amend PEP 411 with two levels of provisional >>> packages: >>> >>> Level 1: Backwards incompatible changes might be introduced in point >>> releases. >>> >>> Level 2: Only backwards compatible changes can be introduced in >>> new point >>> releases. >>> >>> >>> How is this any different from the normal compatibility promise we have >>> for any non-provisional code in the stdlib? >>> >>> And by point release I assume you mean a new minor release, e.g. 3.5 -> >>> 3.6. >>> >> >> Right, my mistake, I indeed meant minor releases. >> >> The difference is that right now we don't introduce new features >> (regardless of backwards compatibility promises) for any non-provisional >> code in minor releases, we can only do bug fixes. >> >> My proposal is to enable asyncio receiving new strictly backwards >> compatible APIs/features (and bug fixes too, of course) in minor releases >> (3.5.x). 
>> >> > Turns out I was lost in terminology :) > > Considering that Python versioning is defined as major.minor.micro, I'll > rephrase the proposal: > > Level 1: Backwards incompatible changes might be introduced in new Python > releases (including micro releases) > > Level 2: Only backwards compatible changes (new APIs including) can be > introduced in micro releases. > > Sorry for the confusion. > > > Yury > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/guido%40python.org > -- --Guido van Rossum (python.org/~guido) -------------- next part -------------- An HTML attachment was scrubbed... URL: From brett at python.org Thu Aug 27 23:53:57 2015 From: brett at python.org (Brett Cannon) Date: Thu, 27 Aug 2015 21:53:57 +0000 Subject: [Python-Dev] provisional status for asyncio In-Reply-To: <55DF8382.5090200@gmail.com> References: <55DF7E06.1020207@gmail.com> <55DF81C5.907@gmail.com> <55DF8382.5090200@gmail.com> Message-ID: On Thu, 27 Aug 2015 at 14:39 Yury Selivanov wrote: > > On 2015-08-27 5:31 PM, Yury Selivanov wrote: > > On 2015-08-27 5:24 PM, Brett Cannon wrote: > >> > >> My proposal is to amend PEP 411 with two levels of provisional > >> packages: > >> > >> Level 1: Backwards incompatible changes might be introduced in point > >> releases. > >> > >> Level 2: Only backwards compatible changes can be introduced in > >> new point > >> releases. > >> > >> > >> How is this any different from the normal compatibility promise we > >> have for any non-provisional code in the stdlib? > >> > >> And by point release I assume you mean a new minor release, e.g. 3.5 > >> -> 3.6. > > > > Right, my mistake, I indeed meant minor releases. > > > > The difference is that right now we don't introduce new features > > (regardless of backwards compatibility promises) for any > > non-provisional code in minor releases, we can only do bug fixes. > > > > My proposal is to enable asyncio receiving new strictly backwards > > compatible APIs/features (and bug fixes too, of course) in minor > > releases (3.5.x). > > > > Turns out I was lost in terminology :) > > Considering that Python versioning is defined as major.minor.micro, I'll > rephrase the proposal: > > Level 1: Backwards incompatible changes might be introduced in new > Python releases (including micro releases) > > Level 2: Only backwards compatible changes (new APIs including) can be > introduced in micro releases. > In that case I don't think it's a good idea for something that has widespread use to get new APIs in a micro release; I lived the 2.2.1/boolean event and I don't want to go through that again. If a module is used enough to warrant not breaking backwards-compatibility then it warrants not being provisional and being like any other module. -------------- next part -------------- An HTML attachment was scrubbed... URL: From brett at python.org Thu Aug 27 23:24:15 2015 From: brett at python.org (Brett Cannon) Date: Thu, 27 Aug 2015 21:24:15 +0000 Subject: [Python-Dev] provisional status for asyncio In-Reply-To: <55DF7E06.1020207@gmail.com> References: <55DF7E06.1020207@gmail.com> Message-ID: On Thu, 27 Aug 2015 at 14:16 Yury Selivanov wrote: > Recently, in an asyncio related issue [1], Guido said that new features > for asyncio have to wait till 3.6, since asyncio is no longer a provisional > package. 
Later, in an off-list conversation, he suggested that this topic > should be discussed on python-dev, and that it might indeed make sense to > either write a new PEP for cases like this or to augment PEP 411. > > My opinion on this topic is that we must maintain full backwards > compatibility for asyncio from 3.5.0, as it is now widely used, and there > is quite a big ecosystem around it. However, asyncio is simply not mature > enough to be completely feature frozen for about 2 years. > > For example, there is an issue [2] to add starttls support to asyncio. It > is an essential feature, because some protocols start as clear text and > upgrade to TLS later, for example PostgreSQL PQ3 protocol. It's very hard > to implement starttls on top of asyncio, lots of code will have to be > duplicated -- it's a feature that has to implemented in the asyncio core. > > Aside from adding new APIs, we also have to improve debugging > capabilities. One example is using os.fork() from within a running event > loop -- it must be avoided by all means. There are safe ways to fork in > asyncio applications (and I plan to document them soon), but asyncio > should raise an exception in debug mode if this happens (see issue [3]). > > These are just two immediate issues that I have in mind. In reality, > asyncio is quite young compared to frameworks like Twisted, which had > years to mature, and accumulate essential features. > > My proposal is to amend PEP 411 with two levels of provisional packages: > > Level 1: Backwards incompatible changes might be introduced in point > releases. > > Level 2: Only backwards compatible changes can be introduced in new point > releases. > How is this any different from the normal compatibility promise we have for any non-provisional code in the stdlib? And by point release I assume you mean a new minor release, e.g. 3.5 -> 3.6. -------------- next part -------------- An HTML attachment was scrubbed... URL: From yselivanov.ml at gmail.com Fri Aug 28 00:24:33 2015 From: yselivanov.ml at gmail.com (Yury Selivanov) Date: Thu, 27 Aug 2015 18:24:33 -0400 Subject: [Python-Dev] provisional status for asyncio In-Reply-To: References: <55DF7E06.1020207@gmail.com> <55DF81C5.907@gmail.com> <55DF8382.5090200@gmail.com> Message-ID: <55DF8E21.40100@gmail.com> On 2015-08-27 5:53 PM, Brett Cannon wrote: > > > Considering that Python versioning is defined as > major.minor.micro, I'll > rephrase the proposal: > > Level 1: Backwards incompatible changes might be introduced in new > Python releases (including micro releases) > > Level 2: Only backwards compatible changes (new APIs including) can be > introduced in micro releases. > > > In that case I don't think it's a good idea for something that has > widespread use to get new APIs in a micro release; I lived the > 2.2.1/boolean event and I don't want to go through that again. If a > module is used enough to warrant not breaking backwards-compatibility > then it warrants not being provisional and being like any other module. I wasn't using Python 2.2/2.3, but from what I could google the "2.2.1/boolean event" you mention was introducing True/False/bool built-ins. This sounds like a language-level change, as opposed to new API in a stdlib module, which is a different scale. My understanding about adding new features in bugfix releases (3.5.x) is that you might end up in a situation where your 3.5 code developed on 3.5.x suddenly stops working on 3.5.y. 
Yes, you have to be careful about how you deploy and test your code when using a provisional package. But the thing about asyncio is that it *is* still provisional in 3.4. During 3.4 release cycle we introduced many new features to asyncio, and to be honest, I haven't heard anybody complaining. I believe that main motivation for making asyncio non-provisional was to guarantee that we won't introduce backwards-incompatible changes to it. Given the fact that asyncio sees some adoption, I support that from now on we will guarantee that backwards compatibility is preserved. But withholding new useful (and sometimes essential) features till 3.6.0 is out (March 2017?) sounds wrong to me. I should also mention that asyncio is different from other packages in the stdlib: 1. It's new, it virtually didn't exist before 3.4.0. 2. It's not a module, it's a framework. If it lacks a core feature (like starttls) that is hard to implement as an add-on, you're basically forced either copy/paste a lot of code or to fork asyncio. And if you fork it, how will your dependencies can be upgraded to use that fork? I want to continue (as we did in 3.4.x releases) evolving asyncio on a faster scale than CPython currently evolves, *and* to guarantee that we won't break existing code. That's why I propose to tweak our definition of provisional packages. Yury From guido at python.org Fri Aug 28 00:44:37 2015 From: guido at python.org (Guido van Rossum) Date: Thu, 27 Aug 2015 15:44:37 -0700 Subject: [Python-Dev] provisional status for asyncio In-Reply-To: <55DF8E21.40100@gmail.com> References: <55DF7E06.1020207@gmail.com> <55DF81C5.907@gmail.com> <55DF8382.5090200@gmail.com> <55DF8E21.40100@gmail.com> Message-ID: Maybe asyncio should just be kept provisional during 3.5, with a separate promise to remain backward compatible? On Thu, Aug 27, 2015 at 3:24 PM, Yury Selivanov wrote: > On 2015-08-27 5:53 PM, Brett Cannon wrote: > >> >> >> Considering that Python versioning is defined as >> major.minor.micro, I'll >> rephrase the proposal: >> >> Level 1: Backwards incompatible changes might be introduced in new >> Python releases (including micro releases) >> >> Level 2: Only backwards compatible changes (new APIs including) can be >> introduced in micro releases. >> >> >> In that case I don't think it's a good idea for something that has >> widespread use to get new APIs in a micro release; I lived the >> 2.2.1/boolean event and I don't want to go through that again. If a module >> is used enough to warrant not breaking backwards-compatibility then it >> warrants not being provisional and being like any other module. >> > > I wasn't using Python 2.2/2.3, but from what I could google the > "2.2.1/boolean event" you mention was introducing True/False/bool > built-ins. This sounds like a language-level change, as opposed to new API > in a stdlib module, which is a different scale. > > My understanding about adding new features in bugfix releases (3.5.x) is > that you might end up in a situation where your 3.5 code developed on 3.5.x > suddenly stops working on 3.5.y. Yes, you have to be careful about how you > deploy and test your code when using a provisional package. > > But the thing about asyncio is that it *is* still provisional in 3.4. > During 3.4 release cycle we introduced many new features to asyncio, and to > be honest, I haven't heard anybody complaining. I believe that main > motivation for making asyncio non-provisional was to guarantee that we > won't introduce backwards-incompatible changes to it. 
> > Given the fact that asyncio sees some adoption, I support that from now on > we will guarantee that backwards compatibility is preserved. But > withholding new useful (and sometimes essential) features till 3.6.0 is out > (March 2017?) sounds wrong to me. > > I should also mention that asyncio is different from other packages in the > stdlib: > > 1. It's new, it virtually didn't exist before 3.4.0. > > 2. It's not a module, it's a framework. If it lacks a core feature (like > starttls) that is hard to implement as an add-on, you're basically forced > either copy/paste a lot of code or to fork asyncio. And if you fork it, > how will your dependencies can be upgraded to use that fork? > > I want to continue (as we did in 3.4.x releases) evolving asyncio on a > faster scale than CPython currently evolves, *and* to guarantee that we > won't break existing code. That's why I propose to tweak our definition of > provisional packages. > > > Yury > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/guido%40python.org > -- --Guido van Rossum (python.org/~guido) -------------- next part -------------- An HTML attachment was scrubbed... URL: From brett at python.org Fri Aug 28 00:46:28 2015 From: brett at python.org (Brett Cannon) Date: Thu, 27 Aug 2015 22:46:28 +0000 Subject: [Python-Dev] provisional status for asyncio In-Reply-To: <55DF8E21.40100@gmail.com> References: <55DF7E06.1020207@gmail.com> <55DF81C5.907@gmail.com> <55DF8382.5090200@gmail.com> <55DF8E21.40100@gmail.com> Message-ID: On Thu, 27 Aug 2015 at 15:24 Yury Selivanov wrote: > On 2015-08-27 5:53 PM, Brett Cannon wrote: > > > > > > Considering that Python versioning is defined as > > major.minor.micro, I'll > > rephrase the proposal: > > > > Level 1: Backwards incompatible changes might be introduced in new > > Python releases (including micro releases) > > > > Level 2: Only backwards compatible changes (new APIs including) can > be > > introduced in micro releases. > > > > > > In that case I don't think it's a good idea for something that has > > widespread use to get new APIs in a micro release; I lived the > > 2.2.1/boolean event and I don't want to go through that again. If a > > module is used enough to warrant not breaking backwards-compatibility > > then it warrants not being provisional and being like any other module. > > I wasn't using Python 2.2/2.3, but from what I could google the > "2.2.1/boolean event" you mention was introducing True/False/bool > built-ins. This sounds like a language-level change, as opposed to new > API in a stdlib module, which is a different scale. > True, but that doesn't mean that 3.5.1 will be able to run code that 3.5.0 has no chance of running because you introduced a new feature in asyncio. > > My understanding about adding new features in bugfix releases (3.5.x) is > that you might end up in a situation where your 3.5 code developed on > 3.5.x suddenly stops working on 3.5.y. Yes, you have to be careful > about how you deploy and test your code when using a provisional package. > > But the thing about asyncio is that it *is* still provisional in 3.4. > Right, but we're talking about 3.5 here, right? > During 3.4 release cycle we introduced many new features to asyncio, and > to be honest, I haven't heard anybody complaining. 
I believe that main > motivation for making asyncio non-provisional was to guarantee that we > won't introduce backwards-incompatible changes to it. > > Given the fact that asyncio sees some adoption, I support that from now > on we will guarantee that backwards compatibility is preserved. But > withholding new useful (and sometimes essential) features till 3.6.0 is > out (March 2017?) sounds wrong to me. > > I should also mention that asyncio is different from other packages in > the stdlib: > > 1. It's new, it virtually didn't exist before 3.4.0. > Sure, but now it does. You said yourself that asyncio is seeing adoption, so exists now for at least some people. > > 2. It's not a module, it's a framework. But it's still in the stdlib and has already gone through one release as provisional. > If it lacks a core feature > (like starttls) that is hard to implement as an add-on, you're basically > forced either copy/paste a lot of code or to fork asyncio. And if you > fork it, how will your dependencies can be upgraded to use that fork? > I'm not forking so that's not my problem. =) > > I want to continue (as we did in 3.4.x releases) evolving asyncio on a > faster scale than CPython currently evolves, *and* to guarantee that we > won't break existing code. That's why I propose to tweak our definition > of provisional packages. > I still don't like it. I say it's either fully provisional or it's not. I'm fine with extending its provisional status another feature release as long as it clearly states that in What's New for 3.5, but I don't think this granularity guarantee of not breaking APIs while adding new features is worth it. What if you want to add a new feature that is really hard to do right without breaking compatibility? We all know how trying that is. If you truly want to keep an accelerated development cycle, then short of releasing new stdlib versions every 6 months separate from the language then I say keep it provisional for 3.5. -------------- next part -------------- An HTML attachment was scrubbed... URL: From yselivanov.ml at gmail.com Fri Aug 28 00:47:28 2015 From: yselivanov.ml at gmail.com (Yury Selivanov) Date: Thu, 27 Aug 2015 18:47:28 -0400 Subject: [Python-Dev] provisional status for asyncio In-Reply-To: References: <55DF7E06.1020207@gmail.com> <55DF81C5.907@gmail.com> <55DF8382.5090200@gmail.com> <55DF8E21.40100@gmail.com> Message-ID: <55DF9380.3060201@gmail.com> On 2015-08-27 6:44 PM, Guido van Rossum wrote: > Maybe asyncio should just be kept provisional during 3.5, with a > separate promise to remain backward compatible? I'm +1. I'm also certain that by 3.6.0 we will stabilize asyncio to the point we can freeze it like any other stdlib module. Yury From brett at python.org Fri Aug 28 00:53:13 2015 From: brett at python.org (Brett Cannon) Date: Thu, 27 Aug 2015 22:53:13 +0000 Subject: [Python-Dev] provisional status for asyncio In-Reply-To: References: <55DF7E06.1020207@gmail.com> <55DF81C5.907@gmail.com> <55DF8382.5090200@gmail.com> <55DF8E21.40100@gmail.com> Message-ID: On Thu, 27 Aug 2015 at 15:44 Guido van Rossum wrote: > Maybe asyncio should just be kept provisional during 3.5, with a separate > promise to remain backward compatible? > My worry is that promising backwards-compatibility while still trying to change things is going to lead to needlessly hampering the evolution of the framework. 
If you want to say you will try hard to not break code and stay provisional I'm all for that (although I assume we already do that anyway), but making our standard backwards-compatibility promise on top of new features just seems too messy to me without knowing where the new features will take you. Past that it seems like asyncio is cheating on our bugfix release compatibility promises because it was lucky enough to be added as provisional. I realize asyncio went in like it did partially to motivate `yield from` and now async/await, and that's fine. But I say go all-in and just stay provisional another release and then lock it down in 3.6. -Brett > > On Thu, Aug 27, 2015 at 3:24 PM, Yury Selivanov > wrote: > >> On 2015-08-27 5:53 PM, Brett Cannon wrote: >> >>> >>> >>> Considering that Python versioning is defined as >>> major.minor.micro, I'll >>> rephrase the proposal: >>> >>> Level 1: Backwards incompatible changes might be introduced in new >>> Python releases (including micro releases) >>> >>> Level 2: Only backwards compatible changes (new APIs including) can >>> be >>> introduced in micro releases. >>> >>> >>> In that case I don't think it's a good idea for something that has >>> widespread use to get new APIs in a micro release; I lived the >>> 2.2.1/boolean event and I don't want to go through that again. If a module >>> is used enough to warrant not breaking backwards-compatibility then it >>> warrants not being provisional and being like any other module. >>> >> >> I wasn't using Python 2.2/2.3, but from what I could google the >> "2.2.1/boolean event" you mention was introducing True/False/bool >> built-ins. This sounds like a language-level change, as opposed to new API >> in a stdlib module, which is a different scale. >> >> My understanding about adding new features in bugfix releases (3.5.x) is >> that you might end up in a situation where your 3.5 code developed on 3.5.x >> suddenly stops working on 3.5.y. Yes, you have to be careful about how you >> deploy and test your code when using a provisional package. >> >> But the thing about asyncio is that it *is* still provisional in 3.4. >> During 3.4 release cycle we introduced many new features to asyncio, and to >> be honest, I haven't heard anybody complaining. I believe that main >> motivation for making asyncio non-provisional was to guarantee that we >> won't introduce backwards-incompatible changes to it. >> >> Given the fact that asyncio sees some adoption, I support that from now >> on we will guarantee that backwards compatibility is preserved. But >> withholding new useful (and sometimes essential) features till 3.6.0 is out >> (March 2017?) sounds wrong to me. >> >> I should also mention that asyncio is different from other packages in >> the stdlib: >> >> 1. It's new, it virtually didn't exist before 3.4.0. >> >> 2. It's not a module, it's a framework. If it lacks a core feature (like >> starttls) that is hard to implement as an add-on, you're basically forced >> either copy/paste a lot of code or to fork asyncio. And if you fork it, >> how will your dependencies can be upgraded to use that fork? >> >> I want to continue (as we did in 3.4.x releases) evolving asyncio on a >> faster scale than CPython currently evolves, *and* to guarantee that we >> won't break existing code. That's why I propose to tweak our definition of >> provisional packages. 
> > >> >> Yury >> > >> _______________________________________________ >> Python-Dev mailing list >> Python-Dev at python.org >> https://mail.python.org/mailman/listinfo/python-dev >> > Unsubscribe: >> https://mail.python.org/mailman/options/python-dev/guido%40python.org >> > > > > -- > --Guido van Rossum (python.org/~guido) > -------------- next part -------------- An HTML attachment was scrubbed... URL: From yselivanov.ml at gmail.com Fri Aug 28 01:00:26 2015 From: yselivanov.ml at gmail.com (Yury Selivanov) Date: Thu, 27 Aug 2015 19:00:26 -0400 Subject: [Python-Dev] provisional status for asyncio In-Reply-To: References: <55DF7E06.1020207@gmail.com> <55DF81C5.907@gmail.com> <55DF8382.5090200@gmail.com> <55DF8E21.40100@gmail.com> Message-ID: <55DF968A.805@gmail.com> Brett, On 2015-08-27 6:46 PM, Brett Cannon wrote: [...] > I say it's either fully provisional or it's not. I'm fine with > extending its provisional status another feature release as long as it > clearly states that in What's New for 3.5, but I don't think this > granularity guarantee of not breaking APIs while adding new features > is worth it. What if you want to add a new feature that is really hard > to do right without breaking compatibility? We all know how trying > that is. If you truly want to keep an accelerated development cycle, > then short of releasing new stdlib versions every 6 months separate > from the language then I say keep it provisional for 3.5. I'm fine with keeping it provisional in 3.5 (and Guido suggests this idea too in this thread). A lot of companies (including big ones) are using asyncio already, despite the fact that it's provisional in 3.4. I seriously doubt that keeping it provisional in 3.5 will do any harm. asyncio documentation in 3.4.x has the following notes section: Note: The asyncio package has been included in the standard library on a provisional basis. Backwards incompatible changes (up to and including removal of the module) may occur if deemed necessary by the core developers. I suggest to add a slightly less strong-worded note to 3.5 documentation: Note: The asyncio package has been included in the standard library on a provisional basis. Backwards incompatible changes may occur if deemed absolutely necessary by the core developers. Yury From ncoghlan at gmail.com Fri Aug 28 01:25:07 2015 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 28 Aug 2015 09:25:07 +1000 Subject: [Python-Dev] provisional status for asyncio In-Reply-To: <55DF968A.805@gmail.com> References: <55DF7E06.1020207@gmail.com> <55DF81C5.907@gmail.com> <55DF8382.5090200@gmail.com> <55DF8E21.40100@gmail.com> <55DF968A.805@gmail.com> Message-ID: On 28 August 2015 at 09:00, Yury Selivanov wrote: > Brett, > > On 2015-08-27 6:46 PM, Brett Cannon wrote: > [...] >> >> I say it's either fully provisional or it's not. I'm fine with extending >> its provisional status another feature release as long as it clearly states >> that in What's New for 3.5, but I don't think this granularity guarantee of >> not breaking APIs while adding new features is worth it. What if you want to >> add a new feature that is really hard to do right without breaking >> compatibility? We all know how trying that is. If you truly want to keep an >> accelerated development cycle, then short of releasing new stdlib versions >> every 6 months separate from the language then I say keep it provisional for >> 3.5. > > > I'm fine with keeping it provisional in 3.5 (and Guido suggests this idea > too in this thread). 
> > A lot of companies (including big ones) are using asyncio already, despite > the fact that it's provisional in 3.4. I seriously doubt that keeping it > provisional in 3.5 will do any harm. > > asyncio documentation in 3.4.x has the following notes section: > > Note: The asyncio package has been included in the > standard library on a provisional basis. Backwards > incompatible changes (up to and including removal > of the module) may occur if deemed necessary by the > core developers. > > I suggest to add a slightly less strong-worded note to 3.5 documentation: > > Note: The asyncio package has been included in the > standard library on a provisional basis. Backwards > incompatible changes may occur if deemed absolutely > necessary by the core developers. I'd suggest including a clearer motivation there: Note: The asyncio package has been included in the standard library on a provisional basis, and thus may gain new APIs and capabilities in maintenance releases as it matures. Backwards incompatible changes may occur if deemed absolutely necessary by the core developers. It's a gentler phrasing that still serves to warn away folks that are particularly change averse, while also assuring folks on faster iteration cycles that the update cadence is still 6 months rather than 18-24 months. New standard library APIs could thus evolve through "not stable yet" -> "mostly stable" -> "stable". This is essentially Yury's original "two levels of provisional" idea - if a package survives an entire release cycle, it's pretty clear we're not going to be taking it out, but it's also the case that a comparatively broad API like asyncio may take a couple of feature release cycles to settle down to a point where we can declare it sufficiently complete that we're only going to add new interfaces in future feature releases. asyncio's just the first addition we've made under the PEP 411 guidelines that's big enough to benefit from the extra release cycle of stabilisation. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From zachary.ware+pydev at gmail.com Fri Aug 28 08:55:01 2015 From: zachary.ware+pydev at gmail.com (Zachary Ware) Date: Fri, 28 Aug 2015 01:55:01 -0500 Subject: [Python-Dev] Testing tkinter on Linux In-Reply-To: References: <20150827190041.2CC68B20098@webabinitio.net> Message-ID: On Thu, Aug 27, 2015 at 4:13 PM, Chris Angelico wrote: > On Fri, Aug 28, 2015 at 5:00 AM, R. David Murray wrote: >> I believe gui depends on the existence of the DISPLAY environment >> variable on unix/linux (that is, TK will fail to start if DISPLAY is not >> set, so _is_gui_available will return False). You should be able to >> confirm this by looking at the text of the skip message in the buildbot >> output. > > A recent buildbot log [1] shows that the GUI tests are being skipped, > although I'm not seeing the message. Where do I go to set DISPLAY for > the bot (which runs mostly in the background)? > > Note that it's all running as root, which may make a difference to the > defaults, so this might have to be done more explicitly than it > otherwise would. (But root is quite happy to use X; running xclock > brings up a clock just fine.) I just set up a buildslave building with X available [1]. I don't know if the way I did it is strictly correct, but my slave is set up with Xvfb set to start at boot, and I hacked the buildbot.tac file to add 'os.environ["DISPLAY"] = ":100"' (which is what Xvfb starts on) as that was the simplest way I could figure out to do it. 
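For anyone wanting to replicate that buildslave setup, the buildbot.tac tweak described above amounts to something like the following sketch; the exact placement inside the file, and Xvfb serving display :100, are assumptions taken from the description, not a documented buildbot option.

    # near the top of buildbot.tac on the slave
    import os
    os.environ["DISPLAY"] = ":100"   # the display served by the boot-time Xvfb,
                                     # so Tk can find an X server during GUI tests
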
It seems to work; an initial test build of 2.7 passed all the Tcl/Tk tests [2]. For those interested, the slave has Tcl/Tk 8.5.17 installed (which is the latest "stable" release in portage). [1] http://buildbot.python.org/all/buildslaves/ware-gentoo-x86 [2] http://buildbot.python.org/all/builders/x86%20Gentoo%20Non-Debug%20with%20X%202.7/builds/0/steps/test/logs/stdio -- Zach From p.f.moore at gmail.com Fri Aug 28 09:39:17 2015 From: p.f.moore at gmail.com (Paul Moore) Date: Fri, 28 Aug 2015 08:39:17 +0100 Subject: [Python-Dev] provisional status for asyncio In-Reply-To: References: <55DF7E06.1020207@gmail.com> <55DF81C5.907@gmail.com> <55DF8382.5090200@gmail.com> <55DF8E21.40100@gmail.com> <55DF968A.805@gmail.com> Message-ID: On 28 August 2015 at 00:25, Nick Coghlan wrote: > I'd suggest including a clearer motivation there: > > Note: The asyncio package has been included in the standard > library on a provisional basis, and thus may gain new APIs and > capabilities in maintenance releases as it matures. Backwards > incompatible changes may occur if deemed absolutely necessary by the > core developers. I'm happy with a statement like this offering additional guidance, but I think that formally we should stick with the current provisional-or-not situation (with asyncio remaining provisional for another release, if the asyncio devs feel that's needed). Ultimately, end users only really have two choices - use a library or not - so adding extra levels of "provisionalness" actually complicates their choice rather than simplifying it. Paul From victor.stinner at gmail.com Fri Aug 28 11:01:24 2015 From: victor.stinner at gmail.com (Victor Stinner) Date: Fri, 28 Aug 2015 11:01:24 +0200 Subject: [Python-Dev] provisional status for asyncio In-Reply-To: <55DF7E06.1020207@gmail.com> References: <55DF7E06.1020207@gmail.com> Message-ID: Hi, 2015-08-27 23:15 GMT+02:00 Yury Selivanov : > Recently, in an asyncio related issue [1], Guido said that new features > for asyncio have to wait till 3.6, since asyncio is no longer a provisional > package. (...) > For example, there is an issue [2] to add starttls support to asyncio. > (...) > Aside from adding new APIs, we also have to improve debugging > capabilities. > (...) I would propose something more radical: remove asyncio from the stdlib. PEP 411: "While it is considered an unlikely outcome, such packages *may even be removed* from the standard library without a deprecation period if the concerns regarding their API or maintenance prove well-founded." As an asyncio developer, I'm not sure that asyncio fits well into the Python stdlib. The release cycle of feature release is long (18 months? or more?), the release cycle for bugfix release is sometimes also too long (1 month at least). It's also frustrating to have subtle API differences between Python 3.3, 3.4 and 3.5. For example, asyncio.JoinableQueue was removed in Python 3.5, asyncio.Queue can *now* be used instead, but asyncio.JoinableQueue should be used on older Python version... It means that you have to write different code depending on your Python version to support all Python versions. I can give much more examples of missing asyncio features. Example: Windows proactor event loop doesn't support signals (CTRL+C) nor UDP. asyncio is moving so fast, that changes are not documented at all in Misc/NEWS or Doc/whatsnews/x.y.rst. I tried to document changes in my fork Trollius. 
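The JoinableQueue case above translates into exactly this kind of per-version juggling. A minimal sketch (on 3.5+ the join()/task_done() methods live on asyncio.Queue itself):

    try:
        from asyncio import JoinableQueue as Queue   # Python 3.3 / 3.4
    except ImportError:
        from asyncio import Queue                    # Python 3.5+
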
See its changelog to have an idea how fast asyncio is still changing: http://trollius.readthedocs.org/changelog.html I don't know how users will react to the removal of asyncio from the stdlib ("asyncio is not trusted/supported by Python?"). The idea is to install it using pip: "pip install asyncio". The major difference is that "pip install -U asyncio" allows to retrieve the latest asyncio version, independently of your Python version. Hum, I don't know if it works with Python 3.4 (which "asyncio" module is used in this case?). Developing asyncio only on Github would avoid the risk of having patches temporary only in Github or only in CPython. It avoids the need to synchronize the Git (Github) and Mercurial (CPython) repositories. Compare Python release dates with Twisted, Tornado and eventlet release dates. Twisted releases: * 2015-01-30: 15.0.0 * 2015-04-13: 15.1.0 * 2015-05-19: 15.2.0 * 2015-08-04: 15.3.0 Tornado releases: * 2014-07-15: 4.0 * 2015-02-07: 4.1 * 2015-05-27: 4.2 * 2015-07-17: 4.2.1 eventlet releases: * 2015-02-25: 0.17.1 * 2015-04-03: 0.17.2 * 2015-04-09: 0.17.3 * 2015-05-08: 0.17.4 Victor From gjcarneiro at gmail.com Fri Aug 28 11:46:05 2015 From: gjcarneiro at gmail.com (Gustavo Carneiro) Date: Fri, 28 Aug 2015 10:46:05 +0100 Subject: [Python-Dev] provisional status for asyncio In-Reply-To: References: <55DF7E06.1020207@gmail.com> Message-ID: I think this is a packaging problem. In an ideal world, Python would ship some version of asyncio, but you would also be able to pip install a newer version into your virtual environment, and it would override the version in stdlib. As it stands now, however, if you pip install another version of asyncio, the version in stdlib still takes precedence. What I end up doing in my (non open source) projects is to include a copy of asyncio and manually modify sys.path to point to it. Can we fix pip/virtualenv instead? On 28 Aug 2015 10:02 am, "Victor Stinner" wrote: > Hi, > > 2015-08-27 23:15 GMT+02:00 Yury Selivanov : > > Recently, in an asyncio related issue [1], Guido said that new features > > for asyncio have to wait till 3.6, since asyncio is no longer a > provisional > > package. (...) > > For example, there is an issue [2] to add starttls support to asyncio. > > (...) > > Aside from adding new APIs, we also have to improve debugging > > capabilities. > > (...) > > I would propose something more radical: remove asyncio from the stdlib. > > PEP 411: "While it is considered an unlikely outcome, such packages > *may even be removed* from the standard library without a deprecation > period if the concerns regarding their API or maintenance prove > well-founded." > > As an asyncio developer, I'm not sure that asyncio fits well into the > Python stdlib. The release cycle of feature release is long (18 > months? or more?), the release cycle for bugfix release is sometimes > also too long (1 month at least). It's also frustrating to have subtle > API differences between Python 3.3, 3.4 and 3.5. For example, > asyncio.JoinableQueue was removed in Python 3.5, asyncio.Queue can > *now* be used instead, but asyncio.JoinableQueue should be used on > older Python version... It means that you have to write different code > depending on your Python version to support all Python versions. > > I can give much more examples of missing asyncio features. Example: > Windows proactor event loop doesn't support signals (CTRL+C) nor UDP. > > asyncio is moving so fast, that changes are not documented at all in > Misc/NEWS or Doc/whatsnews/x.y.rst. 
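A minimal sketch of the workaround Gustavo describes above (bundling a copy of asyncio and pointing sys.path at it). The "_vendor" directory name is an assumption, and the path insert has to run before asyncio is first imported:

    import os
    import sys

    # Put the project's bundled asyncio ahead of the stdlib copy on sys.path.
    _vendor = os.path.join(os.path.dirname(os.path.abspath(__file__)), "_vendor")
    sys.path.insert(0, _vendor)

    import asyncio   # now resolved from _vendor/asyncio/, not the stdlib
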
I tried to document changes in my > fork Trollius. See its changelog to have an idea how fast asyncio is > still changing: > http://trollius.readthedocs.org/changelog.html > > I don't know how users will react to the removal of asyncio from the > stdlib ("asyncio is not trusted/supported by Python?"). > > The idea is to install it using pip: "pip install asyncio". The major > difference is that "pip install -U asyncio" allows to retrieve the > latest asyncio version, independently of your Python version. Hum, I > don't know if it works with Python 3.4 (which "asyncio" module is used > in this case?). > > Developing asyncio only on Github would avoid the risk of having > patches temporary only in Github or only in CPython. It avoids the > need to synchronize the Git (Github) and Mercurial (CPython) > repositories. > > > Compare Python release dates with Twisted, Tornado and eventlet release > dates. > > Twisted releases: > > * 2015-01-30: 15.0.0 > * 2015-04-13: 15.1.0 > * 2015-05-19: 15.2.0 > * 2015-08-04: 15.3.0 > > Tornado releases: > > * 2014-07-15: 4.0 > * 2015-02-07: 4.1 > * 2015-05-27: 4.2 > * 2015-07-17: 4.2.1 > > eventlet releases: > > * 2015-02-25: 0.17.1 > * 2015-04-03: 0.17.2 > * 2015-04-09: 0.17.3 > * 2015-05-08: 0.17.4 > > Victor > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/gjcarneiro%40gmail.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.f.moore at gmail.com Fri Aug 28 12:28:56 2015 From: p.f.moore at gmail.com (Paul Moore) Date: Fri, 28 Aug 2015 11:28:56 +0100 Subject: [Python-Dev] provisional status for asyncio In-Reply-To: References: <55DF7E06.1020207@gmail.com> Message-ID: On 28 August 2015 at 10:46, Gustavo Carneiro wrote: > I think this is a packaging problem. > > In an ideal world, Python would ship some version of asyncio, but you would > also be able to pip install a newer version into your virtual environment, > and it would override the version in stdlib. > > As it stands now, however, if you pip install another version of asyncio, > the version in stdlib still takes precedence. What I end up doing in my > (non open source) projects is to include a copy of asyncio and manually > modify sys.path to point to it. > > Can we fix pip/virtualenv instead? It's not a pip/virtualenv issue - it's a result of the way sys.path is set (in site.py IIRC). And it was a deliberate decision to put the stdlib before user installed packages, precisely so that random PyPI packages can't override the standard behaviour. (Setuptools included some hacks to circumvent this, but they are pretty nasty, and TBH not widely liked...) Paul From donald at stufft.io Fri Aug 28 17:09:04 2015 From: donald at stufft.io (Donald Stufft) Date: Fri, 28 Aug 2015 11:09:04 -0400 Subject: [Python-Dev] provisional status for asyncio In-Reply-To: References: <55DF7E06.1020207@gmail.com> Message-ID: On August 28, 2015 at 6:36:21 AM, Paul Moore (p.f.moore at gmail.com) wrote: > On 28 August 2015 at 10:46, Gustavo Carneiro wrote: > > I think this is a packaging problem. > > > > In an ideal world, Python would ship some version of asyncio, but you would > > also be able to pip install a newer version into your virtual environment, > > and it would override the version in stdlib. 
> > > > As it stands now, however, if you pip install another version of asyncio, > > the version in stdlib still takes precedence. What I end up doing in my > > (non open source) projects is to include a copy of asyncio and manually > > modify sys.path to point to it. > > > > Can we fix pip/virtualenv instead? > > It's not a pip/virtualenv issue - it's a result of the way sys.path is > set (in site.py IIRC). And it was a deliberate decision to put the > stdlib before user installed packages, precisely so that random PyPI > packages can't override the standard behaviour. (Setuptools included > some hacks to circumvent this, but they are pretty nasty, and TBH not > widely liked...) > Right, the default sys.path is stdlib > user-packages > site-packages. I would personally prefer it if looked more like user-packages > site-packages > stdlib but I?m not sure how popular that opinion is :)? ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA From yselivanov.ml at gmail.com Fri Aug 28 17:34:49 2015 From: yselivanov.ml at gmail.com (Yury Selivanov) Date: Fri, 28 Aug 2015 11:34:49 -0400 Subject: [Python-Dev] provisional status for asyncio In-Reply-To: References: <55DF7E06.1020207@gmail.com> Message-ID: <55E07F99.8070008@gmail.com> Victor, On 2015-08-28 5:01 AM, Victor Stinner wrote: > Hi, > > 2015-08-27 23:15 GMT+02:00 Yury Selivanov : >> Recently, in an asyncio related issue [1], Guido said that new features >> for asyncio have to wait till 3.6, since asyncio is no longer a provisional >> package. (...) >> For example, there is an issue [2] to add starttls support to asyncio. >> (...) >> Aside from adding new APIs, we also have to improve debugging >> capabilities. >> (...) > I would propose something more radical: remove asyncio from the stdlib. I too would enjoy more frequent release schedule of asyncio. Unfortunately, separating it from the standard library is something that I don't think we can do so late in the 3.5 release candidates process. > > PEP 411: "While it is considered an unlikely outcome, such packages > *may even be removed* from the standard library without a deprecation > period if the concerns regarding their API or maintenance prove > well-founded." > > As an asyncio developer, I'm not sure that asyncio fits well into the > Python stdlib. The release cycle of feature release is long (18 > months? or more?), the release cycle for bugfix release is sometimes > also too long (1 month at least). It's also frustrating to have subtle > API differences between Python 3.3, 3.4 and 3.5. For example, > asyncio.JoinableQueue was removed in Python 3.5, asyncio.Queue can > *now* be used instead, but asyncio.JoinableQueue should be used on > older Python version... It means that you have to write different code > depending on your Python version to support all Python versions. > > I can give much more examples of missing asyncio features. Example: > Windows proactor event loop doesn't support signals (CTRL+C) nor UDP. > > asyncio is moving so fast, that changes are not documented at all in > Misc/NEWS or Doc/whatsnews/x.y.rst. I tried to document changes in my > fork Trollius. See its changelog to have an idea how fast asyncio is > still changing: > http://trollius.readthedocs.org/changelog.html > > I don't know how users will react to the removal of asyncio from the > stdlib ("asyncio is not trusted/supported by Python?"). That's another concern. > > The idea is to install it using pip: "pip install asyncio". 
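A quick, purely illustrative way to see both points at once -- which copy a given interpreter actually imports, and the default ordering Donald describes -- is:

    import sys
    import asyncio

    print(asyncio.__file__)   # resolves from the stdlib directory, even if a newer
                              # asyncio has been pip-installed into site-packages
    for entry in sys.path:
        print(entry)          # stdlib entries come before site-packages by default
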
The major > difference is that "pip install -U asyncio" allows to retrieve the > latest asyncio version, independently of your Python version. Hum, I > don't know if it works with Python 3.4 (which "asyncio" module is used > in this case?). The one from the stdlib. > > Developing asyncio only on Github would avoid the risk of having > patches temporary only in Github or only in CPython. It avoids the > need to synchronize the Git (Github) and Mercurial (CPython) > repositories. > > > Compare Python release dates with Twisted, Tornado and eventlet release dates. > > Twisted releases: > > * 2015-01-30: 15.0.0 > * 2015-04-13: 15.1.0 > * 2015-05-19: 15.2.0 > * 2015-08-04: 15.3.0 > > Tornado releases: > > * 2014-07-15: 4.0 > * 2015-02-07: 4.1 > * 2015-05-27: 4.2 > * 2015-07-17: 4.2.1 > > eventlet releases: > > * 2015-02-25: 0.17.1 > * 2015-04-03: 0.17.2 > * 2015-04-09: 0.17.3 > * 2015-05-08: 0.17.4 > These are very good stats. It shows that even mature async libraries require frequent release cycles. Anyways, I vote for at least keeping asyncio provisional in 3.5.x. I'd be glad if we can consider to have more bugfix releases of 3.5 (and 3.6), say every 3 months. Yury From brett at python.org Fri Aug 28 17:44:07 2015 From: brett at python.org (Brett Cannon) Date: Fri, 28 Aug 2015 15:44:07 +0000 Subject: [Python-Dev] provisional status for asyncio In-Reply-To: <55E07F99.8070008@gmail.com> References: <55DF7E06.1020207@gmail.com> <55E07F99.8070008@gmail.com> Message-ID: On Fri, 28 Aug 2015 at 08:35 Yury Selivanov wrote: > Victor, > > On 2015-08-28 5:01 AM, Victor Stinner wrote: > > Hi, > > > > 2015-08-27 23:15 GMT+02:00 Yury Selivanov : > >> Recently, in an asyncio related issue [1], Guido said that new features > >> for asyncio have to wait till 3.6, since asyncio is no longer a > provisional > >> package. (...) > >> For example, there is an issue [2] to add starttls support to asyncio. > >> (...) > >> Aside from adding new APIs, we also have to improve debugging > >> capabilities. > >> (...) > > I would propose something more radical: remove asyncio from the stdlib. > > I too would enjoy more frequent release schedule of asyncio. > > Unfortunately, separating it from the standard library is something > that I don't think we can do so late in the 3.5 release candidates > process. > Ultimately it's Larry's call, but I don't see why we couldn't. If we were talking about something as low-level as the urllib package then I would agree, but beyond its own tests is there anything in the stdlib that depends on asyncio? -Brett > > > > > > PEP 411: "While it is considered an unlikely outcome, such packages > > *may even be removed* from the standard library without a deprecation > > period if the concerns regarding their API or maintenance prove > > well-founded." > > > > As an asyncio developer, I'm not sure that asyncio fits well into the > > Python stdlib. The release cycle of feature release is long (18 > > months? or more?), the release cycle for bugfix release is sometimes > > also too long (1 month at least). It's also frustrating to have subtle > > API differences between Python 3.3, 3.4 and 3.5. For example, > > asyncio.JoinableQueue was removed in Python 3.5, asyncio.Queue can > > *now* be used instead, but asyncio.JoinableQueue should be used on > > older Python version... It means that you have to write different code > > depending on your Python version to support all Python versions. > > > > I can give much more examples of missing asyncio features. 
Example: > > Windows proactor event loop doesn't support signals (CTRL+C) nor UDP. > > > > asyncio is moving so fast, that changes are not documented at all in > > Misc/NEWS or Doc/whatsnews/x.y.rst. I tried to document changes in my > > fork Trollius. See its changelog to have an idea how fast asyncio is > > still changing: > > http://trollius.readthedocs.org/changelog.html > > > > I don't know how users will react to the removal of asyncio from the > > stdlib ("asyncio is not trusted/supported by Python?"). > > That's another concern. > > > > > The idea is to install it using pip: "pip install asyncio". The major > > difference is that "pip install -U asyncio" allows to retrieve the > > latest asyncio version, independently of your Python version. Hum, I > > don't know if it works with Python 3.4 (which "asyncio" module is used > > in this case?). > > The one from the stdlib. > > > > > Developing asyncio only on Github would avoid the risk of having > > patches temporary only in Github or only in CPython. It avoids the > > need to synchronize the Git (Github) and Mercurial (CPython) > > repositories. > > > > > > Compare Python release dates with Twisted, Tornado and eventlet release > dates. > > > > Twisted releases: > > > > * 2015-01-30: 15.0.0 > > * 2015-04-13: 15.1.0 > > * 2015-05-19: 15.2.0 > > * 2015-08-04: 15.3.0 > > > > Tornado releases: > > > > * 2014-07-15: 4.0 > > * 2015-02-07: 4.1 > > * 2015-05-27: 4.2 > > * 2015-07-17: 4.2.1 > > > > eventlet releases: > > > > * 2015-02-25: 0.17.1 > > * 2015-04-03: 0.17.2 > > * 2015-04-09: 0.17.3 > > * 2015-05-08: 0.17.4 > > > > > These are very good stats. It shows that even mature async libraries > require frequent release cycles. > > Anyways, I vote for at least keeping asyncio provisional in 3.5.x. > > I'd be glad if we can consider to have more bugfix releases of 3.5 > (and 3.6), say every 3 months. > > > Yury > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/brett%40python.org > -------------- next part -------------- An HTML attachment was scrubbed... URL: From victor.stinner at gmail.com Fri Aug 28 17:56:10 2015 From: victor.stinner at gmail.com (Victor Stinner) Date: Fri, 28 Aug 2015 17:56:10 +0200 Subject: [Python-Dev] provisional status for asyncio In-Reply-To: References: <55DF7E06.1020207@gmail.com> <55E07F99.8070008@gmail.com> Message-ID: 2015-08-28 17:44 GMT+02:00 Brett Cannon : > Ultimately it's Larry's call, but I don't see why we couldn't. If we were > talking about something as low-level as the urllib package then I would > agree, but beyond its own tests is there anything in the stdlib that depends > on asyncio? At the beginning, asyncio was more fat: it contained a HTTP client and server. The HTTP code was moved out of asyncio (into the aiohttp project, which is great and popular by the way) to keep asyncio small and simple. There is no plan to put libraries based on asyncio into the stdlib. And I don't think that existing modules should start to support asyncio, it's more the opposite :-) For example, urllib.request, http.client and http.server are replaced by aiohttp. I don't think that it would make sense to add new asynchronous methods to these modules. 
Victor From victor.stinner at gmail.com Fri Aug 28 17:59:33 2015 From: victor.stinner at gmail.com (Victor Stinner) Date: Fri, 28 Aug 2015 17:59:33 +0200 Subject: [Python-Dev] provisional status for asyncio In-Reply-To: <55E07F99.8070008@gmail.com> References: <55DF7E06.1020207@gmail.com> <55E07F99.8070008@gmail.com> Message-ID: 2015-08-28 17:34 GMT+02:00 Yury Selivanov : > I too would enjoy more frequent release schedule of asyncio. The problem is also to allow users to upgrade easily asyncio to retrieve new features, or simply latest bug fixes. As explained in other emails, if asyncio is part of the stlidb, it's *not* possible to upgrade it using "pip install -U asyncio", except if you "hack" sys.path. Usually, Python 3 comes with the system, for an user, it's hard to upgrade it. For example, Ubuntu Trusty still provides Python 3.4.0. asyncio doesn't have the loop.create_task() method for example in this release. asyncio got many bugfixes and some new features between 3.4.0 and 3.4.3. It's much easier to upgrade a third-party library than upgrading the system python. Victor From donald at stufft.io Fri Aug 28 18:02:19 2015 From: donald at stufft.io (Donald Stufft) Date: Fri, 28 Aug 2015 12:02:19 -0400 Subject: [Python-Dev] provisional status for asyncio In-Reply-To: References: <55DF7E06.1020207@gmail.com> <55E07F99.8070008@gmail.com> Message-ID: On August 28, 2015 at 12:01:25 PM, Victor Stinner (victor.stinner at gmail.com) wrote: > 2015-08-28 17:34 GMT+02:00 Yury Selivanov : > > I too would enjoy more frequent release schedule of asyncio. > > The problem is also to allow users to upgrade easily asyncio to > retrieve new features, or simply latest bug fixes. As explained in > other emails, if asyncio is part of the stlidb, it's *not* possible to > upgrade it using "pip install -U asyncio", except if you "hack" > sys.path. > > Usually, Python 3 comes with the system, for an user, it's hard to > upgrade it. For example, Ubuntu Trusty still provides Python 3.4.0. > asyncio doesn't have the loop.create_task() method for example in this > release. asyncio got many bugfixes and some new features between 3.4.0 > and 3.4.3. It's much easier to upgrade a third-party library than > upgrading the system python. > Unless we fix the sys.path ordering to make it possible. ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA From victor.stinner at gmail.com Fri Aug 28 18:07:27 2015 From: victor.stinner at gmail.com (Victor Stinner) Date: Fri, 28 Aug 2015 18:07:27 +0200 Subject: [Python-Dev] provisional status for asyncio In-Reply-To: References: <55DF7E06.1020207@gmail.com> <55E07F99.8070008@gmail.com> Message-ID: 2015-08-28 18:02 GMT+02:00 Donald Stufft : > Unless we fix the sys.path ordering to make it possible. The problem is the deadline: Python 3.5 final is scheduled for the September, 13. We have 2 weeks to decide what to do with asyncio. I don't think that it's a good idea to modify how sys.path is built between an RC version (ex: 3.5rc2) and 3.5.0 final... Victor From status at bugs.python.org Fri Aug 28 18:08:26 2015 From: status at bugs.python.org (Python tracker) Date: Fri, 28 Aug 2015 18:08:26 +0200 (CEST) Subject: [Python-Dev] Summary of Python tracker Issues Message-ID: <20150828160826.C644856757@psf.upfronthosting.co.za> ACTIVITY SUMMARY (2015-08-21 - 2015-08-28) Python tracker at http://bugs.python.org/ To view or respond to any of the issues listed below, click on the issue. Do NOT respond to this message. 
Issues counts and deltas: open 5042 (+29) closed 31671 (+15) total 36713 (+44) Open issues with patches: 2251 Issues opened (39) ================== #24909: Windows: subprocess.Popen: race condition for leaking inherita http://bugs.python.org/issue24909 opened by Adam Meily #24910: Windows MSIs don't have unique display names http://bugs.python.org/issue24910 opened by steve.dower #24911: Context manager of socket.socket is not documented http://bugs.python.org/issue24911 opened by zodalahtathi #24912: The type of cached objects is mutable http://bugs.python.org/issue24912 opened by serhiy.storchaka #24913: deque.index() overruns deque boundary http://bugs.python.org/issue24913 opened by JohnLeitch #24914: Python: Not just OO style but this is not mentioned on python. http://bugs.python.org/issue24914 opened by Paddy McCarthy #24915: Profile Guided Optimization improvements (better training, llv http://bugs.python.org/issue24915 opened by alecsandru.patrascu #24916: In sysconfig, don't rely on sys.version format http://bugs.python.org/issue24916 opened by takluyver #24917: time_strftime() Buffer Over-read http://bugs.python.org/issue24917 opened by JohnLeitch #24918: Docs layout bug http://bugs.python.org/issue24918 opened by reag #24920: shutil.get_terminal_size throws AttributeError http://bugs.python.org/issue24920 opened by Isaac Levy #24921: Operator precedence table in 5.15 should be highest to lowest http://bugs.python.org/issue24921 opened by Joseph Schachner #24922: assertWarnsRegex doesn't allow multiple warning messages http://bugs.python.org/issue24922 opened by sleepycal #24923: Append system paths in setup.py instead of prepending http://bugs.python.org/issue24923 opened by christopher.hogan #24924: _posixsubprocess.c: sysconf() might not be async-signal-safe http://bugs.python.org/issue24924 opened by jwilk #24925: Allow doctest to find line number of __test__ strings if forma http://bugs.python.org/issue24925 opened by r.david.murray #24927: multiprocessing.Pool hangs forever on segfault http://bugs.python.org/issue24927 opened by Jonas Obrist #24928: mock.patch.dict spoils order of items in collections.OrderedDi http://bugs.python.org/issue24928 opened by Alexander Oblovatniy #24929: _strptime.TimeRE should not enforce range in regex http://bugs.python.org/issue24929 opened by Steve Yeung #24930: test_ssl broker was fixed http://bugs.python.org/issue24930 opened by marcosptf #24931: _asdict breaks when inheriting from a namedtuple http://bugs.python.org/issue24931 opened by Samuel Isaacson #24932: Migrate _testembed to a C unit testing library http://bugs.python.org/issue24932 opened by ncoghlan #24933: socket.recv(size, MSG_TRUNC) returns more than size bytes http://bugs.python.org/issue24933 opened by Andrey Wagin #24934: django_v2 benchmark not working in Python 3.6 http://bugs.python.org/issue24934 opened by florin.papa #24935: LDSHARED is not set according when CC is set. http://bugs.python.org/issue24935 opened by yunlian #24936: Idle: handle 'raise' properly when running with subprocess (2. 
http://bugs.python.org/issue24936 opened by terry.reedy #24937: Multiple problems in getters & setters in capsulethunk.h http://bugs.python.org/issue24937 opened by encukou #24938: Measure if _warnings.c is still worth having http://bugs.python.org/issue24938 opened by brett.cannon #24939: Remove unicode_format.h from stringlib http://bugs.python.org/issue24939 opened by eric.smith #24940: RotatingFileHandler uses tell in non-binary mode to determine http://bugs.python.org/issue24940 opened by Ilya.Kulakov #24941: Add classproperty as builtin class http://bugs.python.org/issue24941 opened by guettli #24942: Remove domain from ipaddress.reverse_pointer property and add http://bugs.python.org/issue24942 opened by leeclemens #24945: Expose Py_TPFLAGS_ values from Python http://bugs.python.org/issue24945 opened by erik.bray #24946: Tkinter tests that fail on linux in tiling window manager http://bugs.python.org/issue24946 opened by terry.reedy #24948: Multiprocessing not timely flushing stack trace to stderr http://bugs.python.org/issue24948 opened by memeplex #24949: Identifier lookup in a multi-level package is flakey http://bugs.python.org/issue24949 opened by SegundoBob #24950: FAIL: test_expanduser when $HOME=/ http://bugs.python.org/issue24950 opened by felixonmars #24951: Idle test_configdialog fails on Fedora 23, 3.6 http://bugs.python.org/issue24951 opened by terry.reedy #24952: stack_size([size]) is actually stack_size(size=0) http://bugs.python.org/issue24952 opened by mattip Most recent 15 issues with no replies (15) ========================================== #24946: Tkinter tests that fail on linux in tiling window manager http://bugs.python.org/issue24946 #24942: Remove domain from ipaddress.reverse_pointer property and add http://bugs.python.org/issue24942 #24940: RotatingFileHandler uses tell in non-binary mode to determine http://bugs.python.org/issue24940 #24936: Idle: handle 'raise' properly when running with subprocess (2. 
http://bugs.python.org/issue24936 #24925: Allow doctest to find line number of __test__ strings if forma http://bugs.python.org/issue24925 #24924: _posixsubprocess.c: sysconf() might not be async-signal-safe http://bugs.python.org/issue24924 #24923: Append system paths in setup.py instead of prepending http://bugs.python.org/issue24923 #24922: assertWarnsRegex doesn't allow multiple warning messages http://bugs.python.org/issue24922 #24917: time_strftime() Buffer Over-read http://bugs.python.org/issue24917 #24916: In sysconfig, don't rely on sys.version format http://bugs.python.org/issue24916 #24910: Windows MSIs don't have unique display names http://bugs.python.org/issue24910 #24906: asyncore asynchat hanging on ssl http://bugs.python.org/issue24906 #24905: Allow incremental I/O to blobs in sqlite3 http://bugs.python.org/issue24905 #24899: Add an os.path <=> pathlib equivalence table in pathlib docs http://bugs.python.org/issue24899 #24894: iso-8859-11 missing from codecs table http://bugs.python.org/issue24894 Most recent 15 issues waiting for review (15) ============================================= #24952: stack_size([size]) is actually stack_size(size=0) http://bugs.python.org/issue24952 #24938: Measure if _warnings.c is still worth having http://bugs.python.org/issue24938 #24937: Multiple problems in getters & setters in capsulethunk.h http://bugs.python.org/issue24937 #24934: django_v2 benchmark not working in Python 3.6 http://bugs.python.org/issue24934 #24931: _asdict breaks when inheriting from a namedtuple http://bugs.python.org/issue24931 #24930: test_ssl broker was fixed http://bugs.python.org/issue24930 #24927: multiprocessing.Pool hangs forever on segfault http://bugs.python.org/issue24927 #24925: Allow doctest to find line number of __test__ strings if forma http://bugs.python.org/issue24925 #24923: Append system paths in setup.py instead of prepending http://bugs.python.org/issue24923 #24917: time_strftime() Buffer Over-read http://bugs.python.org/issue24917 #24916: In sysconfig, don't rely on sys.version format http://bugs.python.org/issue24916 #24915: Profile Guided Optimization improvements (better training, llv http://bugs.python.org/issue24915 #24913: deque.index() overruns deque boundary http://bugs.python.org/issue24913 #24909: Windows: subprocess.Popen: race condition for leaking inherita http://bugs.python.org/issue24909 #24904: Patch: add timeout to difflib SequenceMatcher ratio() and quic http://bugs.python.org/issue24904 Top 10 most discussed issues (10) ================================= #24915: Profile Guided Optimization improvements (better training, llv http://bugs.python.org/issue24915 21 msgs #23630: support multiple hosts in create_server/start_server http://bugs.python.org/issue23630 15 msgs #2786: Names in function call exception should have class names, if t http://bugs.python.org/issue2786 13 msgs #23496: Steps for Android Native Build of Python 3.4.2 http://bugs.python.org/issue23496 8 msgs #24294: DeprecationWarnings should be visible by default in the intera http://bugs.python.org/issue24294 8 msgs #24829: Use interactive input even if stdout is redirected http://bugs.python.org/issue24829 8 msgs #24913: deque.index() overruns deque boundary http://bugs.python.org/issue24913 8 msgs #24934: django_v2 benchmark not working in Python 3.6 http://bugs.python.org/issue24934 8 msgs #3548: subprocess.pipe function http://bugs.python.org/issue3548 7 msgs #24305: The new import system makes it inconvenient to correctly issue 
http://bugs.python.org/issue24305 7 msgs Issues closed (14) ================== #21112: 3.4 regression: unittest.expectedFailure no longer works on Te http://bugs.python.org/issue21112 closed by rbcollins #22812: Documentation of unittest -p usage wrong on windows. http://bugs.python.org/issue22812 closed by rbcollins #22936: traceback module has no way to show locals http://bugs.python.org/issue22936 closed by rbcollins #23552: Have timeit warn about runs that are not independent of each o http://bugs.python.org/issue23552 closed by rbcollins #24633: README file installed into site-packages conflicts with packag http://bugs.python.org/issue24633 closed by rbcollins #24769: Interpreter doesn't start when dynamic loading is disabled http://bugs.python.org/issue24769 closed by larry #24808: PyTypeObject fields have incorrectly documented types http://bugs.python.org/issue24808 closed by martin.panter #24847: Can't import tkinter in Python 3.5.0rc1 http://bugs.python.org/issue24847 closed by larry #24850: syslog.syslog() does not return error when unable to send the http://bugs.python.org/issue24850 closed by r.david.murray #24919: Use user shell in subprocess http://bugs.python.org/issue24919 closed by r.david.murray #24926: Incorrect Example in HTMLParser.handle_comment(data) http://bugs.python.org/issue24926 closed by r.david.murray #24943: Simplifying os.exec* http://bugs.python.org/issue24943 closed by r.david.murray #24944: traceback when using tempfile module on windows http://bugs.python.org/issue24944 closed by haypo #24947: asyncio-eventloop documentation grammar (minor) http://bugs.python.org/issue24947 closed by python-dev From donald at stufft.io Fri Aug 28 18:09:11 2015 From: donald at stufft.io (Donald Stufft) Date: Fri, 28 Aug 2015 12:09:11 -0400 Subject: [Python-Dev] provisional status for asyncio In-Reply-To: References: <55DF7E06.1020207@gmail.com> <55E07F99.8070008@gmail.com> Message-ID: On August 28, 2015 at 12:07:48 PM, Victor Stinner (victor.stinner at gmail.com) wrote: > 2015-08-28 18:02 GMT+02:00 Donald Stufft : > > Unless we fix the sys.path ordering to make it possible. > > The problem is the deadline: Python 3.5 final is scheduled for the > September, 13. We have 2 weeks to decide what to do with asyncio. > > I don't think that it's a good idea to modify how sys.path is built > between an RC version (ex: 3.5rc2) and 3.5.0 final... > Yea, there?s always 3.6 though! ----------------- Donald Stufft PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA From yselivanov.ml at gmail.com Fri Aug 28 18:11:55 2015 From: yselivanov.ml at gmail.com (Yury Selivanov) Date: Fri, 28 Aug 2015 12:11:55 -0400 Subject: [Python-Dev] provisional status for asyncio In-Reply-To: References: <55DF7E06.1020207@gmail.com> <55E07F99.8070008@gmail.com> Message-ID: <55E0884B.5000705@gmail.com> On 2015-08-28 11:44 AM, Brett Cannon wrote: > > Unfortunately, separating it from the standard library is something > that I don't think we can do so late in the 3.5 release candidates > process. > > > Ultimately it's Larry's call, but I don't see why we couldn't. If we > were talking about something as low-level as the urllib package then I > would agree, but beyond its own tests is there anything in the stdlib > that depends on asyncio? As Victor already replied, asyncio is pretty self contained, and nothing in the stdlib depends on it. If we really can remove it, let's consider it. 
Yury

From gjcarneiro at gmail.com  Fri Aug 28 20:44:10 2015
From: gjcarneiro at gmail.com (Gustavo Carneiro)
Date: Fri, 28 Aug 2015 19:44:10 +0100
Subject: [Python-Dev] provisional status for asyncio
In-Reply-To: <55E0884B.5000705@gmail.com>
References: <55DF7E06.1020207@gmail.com>
	<55E07F99.8070008@gmail.com>
	<55E0884B.5000705@gmail.com>
Message-ID:

On 28 August 2015 at 17:11, Yury Selivanov wrote:

> On 2015-08-28 11:44 AM, Brett Cannon wrote:
>
>> Unfortunately, separating it from the standard library is something
>> that I don't think we can do so late in the 3.5 release candidates
>> process.
>>
>> Ultimately it's Larry's call, but I don't see why we couldn't. If we were
>> talking about something as low-level as the urllib package then I would
>> agree, but beyond its own tests is there anything in the stdlib that
>> depends on asyncio?
>
> As Victor already replied, asyncio is pretty self contained, and
> nothing in the stdlib depends on it.
>
> If we really can remove it, let's consider it.

-1 for removing it. It is way too late for 3.5 IMHO. You should have
proposed it at least 6 months ago. This feels too rushed.

+1 for changing sys.path to allow pip-installed asyncio to override the
stdlib version (for 3.6). Some developers complain that NodeJS's packaging
system is better than Python's, maybe this lack of flexibility is one of
the reasons...

--
Gustavo J. A. M. Carneiro
Gambit Research
"The universe is always one step beyond logic." -- Frank Herbert
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From ncoghlan at gmail.com  Sat Aug 29 04:20:49 2015
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Sat, 29 Aug 2015 12:20:49 +1000
Subject: [Python-Dev] provisional status for asyncio
In-Reply-To:
References: <55DF7E06.1020207@gmail.com>
	<55E07F99.8070008@gmail.com>
	<55E0884B.5000705@gmail.com>
Message-ID:

On 29 August 2015 at 04:44, Gustavo Carneiro wrote:
> On 28 August 2015 at 17:11, Yury Selivanov wrote:
>>
>> On 2015-08-28 11:44 AM, Brett Cannon wrote:
>>>
>>> Unfortunately, separating it from the standard library is something
>>> that I don't think we can do so late in the 3.5 release candidates
>>> process.
>>>
>>> Ultimately it's Larry's call, but I don't see why we couldn't. If we were
>>> talking about something as low-level as the urllib package then I would
>>> agree, but beyond its own tests is there anything in the stdlib that depends
>>> on asyncio?
>>
>> As Victor already replied, asyncio is pretty self contained, and
>> nothing in the stdlib depends on it.
>>
>> If we really can remove it, let's consider it.
>
> -1 for removing it. It is way too late for 3.5 IMHO. You should have
> proposed it at least 6 months ago. This feels too rushed.
>
> +1 for changing sys.path to allow pip-installed asyncio to override the
> stdlib version (for 3.6). Some developers complain that NodeJS's packaging
> system is better than Python's, maybe this lack of flexibility is one of the
> reasons...

I think ensurepip now offers us a better option for dealing with this
kind of situation: create a new category of "recommended" 3rd party
modules, which have their own upgrade cycle and maintenance model,
independent of the standard library, but use pip and bundled wheel
files to install them by default in the CPython installers.

This allows them to be made available by default to new Python users,
but also still easily kept in sync across all supported Python versions.
We've started inching our way towards this already, with provisional
modules, the IDLE enhancement exception for all releases, the network
security backports to Python 2.7, and ensurepip itself.

This means it isn't just asyncio that could benefit from such an
enhancement to our modularisation capabilities - we could also apply it
to tkinter, idle and distutils (all of which are often unbundled by
downstream redistributors anyway), and potentially even to particularly
security sensitive components like the ssl module.

From a Linux downstream perspective, APT has long supported a
distinction between hard requirements, recommended additions, and
suggested ones in the Debian world, and RPM recently gained that
capability as well, so such a change would actually make our lives
easier, since "bundled as a wheel file" in the CPython Windows and Mac
OS X installers would map to a Recommends or Suggests downstream,
rather than a hard requirement.

However, we don't need to rush into this. As long as we keep asyncio
provisional for 3.5.0, then we can take our time in considering what we
want to do in terms of its ongoing maintenance and release management
model, and then apply that new approach to at least asyncio for 3.5.x,
and consider modularising the development and release cycles for other
components of interest in 3.6+ (and potentially 2.7.x in some cases).

Regards,
Nick.

--
Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From larry at hastings.org  Sat Aug 29 19:36:33 2015
From: larry at hastings.org (Larry Hastings)
Date: Sat, 29 Aug 2015 10:36:33 -0700
Subject: [Python-Dev] provisional status for asyncio
In-Reply-To:
References: <55DF7E06.1020207@gmail.com>
	<55E07F99.8070008@gmail.com>
Message-ID: <55E1EDA1.8020709@hastings.org>

On 08/28/2015 08:44 AM, Brett Cannon wrote:
> On Fri, 28 Aug 2015 at 08:35 Yury Selivanov
> wrote:
>
> Unfortunately, separating it from the standard library is something
> that I don't think we can do so late in the 3.5 release candidates
> process.
>
>
> Ultimately it's Larry's call, but I don't see why we couldn't. If we
> were talking about something as low-level as the urllib package then I
> would agree, but beyond its own tests is there anything in the stdlib
> that depends on asyncio?

I'm flexible here. My concern is shipping high-quality software.
Removing an entire package outright, even at such a late date, is
pretty low-risk. But before I'd allow it, you'd have to get a BDFL
pronouncement (or BDFL-delegate pronouncement).


//arry/
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From gvanrossum at gmail.com  Sat Aug 29 19:58:18 2015
From: gvanrossum at gmail.com (Guido van Rossum)
Date: Sat, 29 Aug 2015 10:58:18 -0700
Subject: [Python-Dev] provisional status for asyncio
In-Reply-To: <55E1EDA1.8020709@hastings.org>
References: <55DF7E06.1020207@gmail.com>
	<55E07F99.8070008@gmail.com>
	<55E1EDA1.8020709@hastings.org>
Message-ID:

I don't want to remove asyncio from the stdlib. Another cycle of
provisional status is fine.

--Guido (on mobile)

On Aug 29, 2015 10:38 AM, "Larry Hastings" wrote:
>
>
> On 08/28/2015 08:44 AM, Brett Cannon wrote:
>
> On Fri, 28 Aug 2015 at 08:35 Yury Selivanov
> wrote:
>
>> Unfortunately, separating it from the standard library is something
>> that I don't think we can do so late in the 3.5 release candidates
>> process.
>>
>
> Ultimately it's Larry's call, but I don't see why we couldn't.
> If we were
> talking about something as low-level as the urllib package then I would
> agree, but beyond its own tests is there anything in the stdlib that
> depends on asyncio?
>
>
> I'm flexible here. My concern is shipping high-quality software.
> Removing an entire package outright, even at such a late date, is pretty
> low-risk. But before I'd allow it, you'd have to get a BDFL pronouncement
> (or BDFL-delegate pronouncement).
>
>
> */arry*
>
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe:
> https://mail.python.org/mailman/options/python-dev/guido%40python.org
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From yselivanov.ml at gmail.com  Sat Aug 29 21:16:16 2015
From: yselivanov.ml at gmail.com (Yury Selivanov)
Date: Sat, 29 Aug 2015 15:16:16 -0400
Subject: [Python-Dev] provisional status for asyncio
In-Reply-To:
References: <55DF7E06.1020207@gmail.com>
	<55E07F99.8070008@gmail.com>
	<55E1EDA1.8020709@hastings.org>
Message-ID: <55E20500.5000309@gmail.com>

Since there is at least some possibility that we might have another
discussion about asyncio removal from the stdlib in 3.6, should I
just reuse the warning we had in 3.4 for asyncio:

    Note: The asyncio package has been included in the
    standard library on a provisional basis. Backwards
    incompatible changes (up to and including removal
    of the module) may occur if deemed necessary by the
    core developers.

?

Yury

On 2015-08-29 1:58 PM, Guido van Rossum wrote:
>
> I don't want to remove asyncio from the stdlib. Another cycle of
> provisional status is fine.
>
> --Guido (on mobile)
>
> On Aug 29, 2015 10:38 AM, "Larry Hastings"
> wrote:
>
>
>
> On 08/28/2015 08:44 AM, Brett Cannon wrote:
>> On Fri, 28 Aug 2015 at 08:35 Yury Selivanov
>> wrote:
>>
>> Unfortunately, separating it from the standard library is
>> something
>> that I don't think we can do so late in the 3.5 release
>> candidates
>> process.
>>
>>
>> Ultimately it's Larry's call, but I don't see why we couldn't. If
>> we were talking about something as low-level as the urllib
>> package then I would agree, but beyond its own tests is there
>> anything in the stdlib that depends on asyncio?
>
> I'm flexible here. My concern is shipping high-quality software.
> Removing an entire package outright, even at such a late date, is
> pretty low-risk. But before I'd allow it, you'd have to get a
> BDFL pronouncement (or BDFL-delegate pronouncement).
>
>
> //arry/
>
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe:
> https://mail.python.org/mailman/options/python-dev/guido%40python.org
>
>
>
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: https://mail.python.org/mailman/options/python-dev/yselivanov.ml%40gmail.com

From yselivanov.ml at gmail.com  Sat Aug 29 21:18:22 2015
From: yselivanov.ml at gmail.com (Yury Selivanov)
Date: Sat, 29 Aug 2015 15:18:22 -0400
Subject: [Python-Dev] provisional status for asyncio
In-Reply-To: <55E1EDA1.8020709@hastings.org>
References: <55DF7E06.1020207@gmail.com>
	<55E07F99.8070008@gmail.com>
	<55E1EDA1.8020709@hastings.org>
Message-ID: <55E2057E.7010809@gmail.com>

Larry, what will the release cycle for 3.5.x look like?
Can we do bugfix releases every 3 or 4 months?

Yury

On 2015-08-29 1:36 PM, Larry Hastings wrote:
>
>
> On 08/28/2015 08:44 AM, Brett Cannon wrote:
>> On Fri, 28 Aug 2015 at 08:35 Yury Selivanov
>> wrote:
>>
>> Unfortunately, separating it from the standard library is something
>> that I don't think we can do so late in the 3.5 release candidates
>> process.
>>
>>
>> Ultimately it's Larry's call, but I don't see why we couldn't. If we
>> were talking about something as low-level as the urllib package then
>> I would agree, but beyond its own tests is there anything in the
>> stdlib that depends on asyncio?
>
> I'm flexible here. My concern is shipping high-quality software.
> Removing an entire package outright, even at such a late date, is
> pretty low-risk. But before I'd allow it, you'd have to get a BDFL
> pronouncement (or BDFL-delegate pronouncement).
>
>
> //arry/
>
>
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: https://mail.python.org/mailman/options/python-dev/yselivanov.ml%40gmail.com

From larry at hastings.org  Sat Aug 29 21:57:27 2015
From: larry at hastings.org (Larry Hastings)
Date: Sat, 29 Aug 2015 12:57:27 -0700
Subject: [Python-Dev] provisional status for asyncio
In-Reply-To: <55E2057E.7010809@gmail.com>
References: <55DF7E06.1020207@gmail.com>
	<55E07F99.8070008@gmail.com>
	<55E1EDA1.8020709@hastings.org>
	<55E2057E.7010809@gmail.com>
Message-ID: <55E20EA7.4060408@hastings.org>

On 08/29/2015 12:18 PM, Yury Selivanov wrote:
> Larry, what will the release cycle for 3.5.x look like? Can we do
> bugfix releases every 3 or 4 months?

It's usually more like every six. I've proposed doing them a little
more frequently and gotten push back; a new Python release causes a
bunch of work for a lot of people.

You could just hope for another Heartbleed bug,


//arry/
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From ncoghlan at gmail.com  Sun Aug 30 01:42:55 2015
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Sun, 30 Aug 2015 09:42:55 +1000
Subject: [Python-Dev] provisional status for asyncio
In-Reply-To: <55E20500.5000309@gmail.com>
References: <55DF7E06.1020207@gmail.com>
	<55E07F99.8070008@gmail.com>
	<55E1EDA1.8020709@hastings.org>
	<55E20500.5000309@gmail.com>
Message-ID:

On 30 August 2015 at 05:16, Yury Selivanov wrote:
> Since there is at least some possibility that we might have another
> discussion about asyncio removal from the stdlib in 3.6, should I
> just reuse the warning we had in 3.4 for asyncio:
>
>
> Note: The asyncio package has been included in the
> standard library on a provisional basis. Backwards
> incompatible changes (up to and including removal
> of the module) may occur if deemed necessary by the
> core developers.
>
> ?

I don't see much chance of us actually removing it - the most I see us
doing is potentially shifting to a "bundled 3rd party library" update
model if there's still a high churn rate in the API after another
release cycle.

Cheers,
Nick.

--
Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From eric at trueblade.com  Sun Aug 30 15:41:35 2015
From: eric at trueblade.com (Eric V. Smith)
Date: Sun, 30 Aug 2015 09:41:35 -0400
Subject: [Python-Dev] PEP 498: Literal String Interpolation
Message-ID: <55E3080F.9030601@trueblade.com>

PEP 498 adds a new string prefix which allows simpler string formatting
by embedding expressions inside string literals.
The expressions are evaluated and inserted into the resulting string
value:

>>> import datetime
>>> name = 'Fred'
>>> age = 50
>>> anniversary = datetime.date(1991, 10, 12)
>>> f'My name is {name}, my age next year is {age+1}, my anniversary is {anniversary:%A, %B %d, %Y}.'
'My name is Fred, my age next year is 51, my anniversary is Saturday, October 12, 1991.'

There's been a lot of discussion about this on python-ideas, much of
which has been incorporated into the PEP. Now I feel it's ready for
python-dev input.

Note there's a companion PEP 501 which extends this idea by delaying
converting the expression into a string. This allows for more control
over how the expressions are converted into strings, and allows for
non-string conversions as well.

I have a complete implementation of PEP 498. I'll shortly create an
issue and attach the patch to it.

--
Eric.

From eric at trueblade.com  Sun Aug 30 19:49:05 2015
From: eric at trueblade.com (Eric V. Smith)
Date: Sun, 30 Aug 2015 13:49:05 -0400
Subject: [Python-Dev] PEP 498: Literal String Interpolation
In-Reply-To: <55E3080F.9030601@trueblade.com>
References: <55E3080F.9030601@trueblade.com>
Message-ID: <55E34211.3050500@trueblade.com>

On 08/30/2015 09:41 AM, Eric V. Smith wrote:
> I have a complete implementation of PEP 498. I'll shortly create an
> issue and attach the patch to it.

Issue 24965.

Eric.

From ncoghlan at gmail.com  Mon Aug 31 15:10:02 2015
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Mon, 31 Aug 2015 23:10:02 +1000
Subject: [Python-Dev] PEP 498: Literal String Interpolation
In-Reply-To: <55E3080F.9030601@trueblade.com>
References: <55E3080F.9030601@trueblade.com>
Message-ID:

On 30 August 2015 at 23:41, Eric V. Smith wrote:
> Note there's a companion PEP 501 which extends this idea by delaying
> converting the expression into a string. This allows for more control
> over how the expressions are converted into strings, and allows for
> non-string conversions as well.

For the benefit of folks that weren't following the (many) iterations
on python-ideas: PEP 501's general purpose string interpolation
started out as a competitor to 498, but as the discussion continued
and my ideas started to converge more and more with Eric's, I
eventually realised it made more sense as an optional extension to PEP
498 that exposed the inner workings of the scope aware interpolation
machinery to Python code.

As part of that, I'm +1 on PEP 498's literal string interpolation syntax.

Regards,
Nick.

--
Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From yselivanov.ml at gmail.com  Mon Aug 31 22:08:57 2015
From: yselivanov.ml at gmail.com (Yury Selivanov)
Date: Mon, 31 Aug 2015 16:08:57 -0400
Subject: [Python-Dev] PEP 498: Literal String Interpolation
In-Reply-To:
References: <55E3080F.9030601@trueblade.com>
Message-ID: <55E4B459.9030000@gmail.com>

On 2015-08-31 9:10 AM, Nick Coghlan wrote:
> On 30 August 2015 at 23:41, Eric V. Smith wrote:
>> Note there's a companion PEP 501 which extends this idea by delaying
>> converting the expression into a string. This allows for more control
>> over how the expressions are converted into strings, and allows for
>> non-string conversions as well.
> For the benefit of folks that weren't following the (many) iterations
> on python-ideas: PEP 501's general purpose string interpolation
> started out as a competitor to 498, but as the discussion continued
> and my ideas started to converge more and more with Eric's, I
> eventually realised it made more sense as an optional extension to PEP
> 498 that exposed the inner workings of the scope aware interpolation
> machinery to Python code.

If PEP 501 is an extension to PEP 498, then the proposal
is to add i'' prefix *in addition* to f'', right?

If so, I think it might be confusing to a lot of people about
what prefix should be used and when. I think it's too easy
for an average user to write ``os.system(f'...')`` and think
that their code is fine, instead of ``os.system(i'...')``.
What's worse is that there is no way for ``os.system()``
to reject the former use.

Second, given that we use "f" for "formatted", using "i"
for "interpolated template" is a bit confusing. Can
we use "t" ("template strings")?

Yury

From eric at trueblade.com  Mon Aug 31 23:07:50 2015
From: eric at trueblade.com (Eric V. Smith)
Date: Mon, 31 Aug 2015 17:07:50 -0400
Subject: [Python-Dev] PEP 498: Literal String Interpolation
In-Reply-To: <55E4B459.9030000@gmail.com>
References: <55E3080F.9030601@trueblade.com>
	<55E4B459.9030000@gmail.com>
Message-ID: <55E4C226.9020205@trueblade.com>

On 8/31/2015 4:08 PM, Yury Selivanov wrote:
>
>
> On 2015-08-31 9:10 AM, Nick Coghlan wrote:
>> On 30 August 2015 at 23:41, Eric V. Smith wrote:
>>> Note there's a companion PEP 501 which extends this idea by delaying
>>> converting the expression into a string. This allows for more control
>>> over how the expressions are converted into strings, and allows for
>>> non-string conversions as well.
>> For the benefit of folks that weren't following the (many) iterations
>> on python-ideas: PEP 501's general purpose string interpolation
>> started out as a competitor to 498, but as the discussion continued
>> and my ideas started to converge more and more with Eric's, I
>> eventually realised it made more sense as an optional extension to PEP
>> 498 that exposed the inner workings of the scope aware interpolation
>> machinery to Python code.
>
> If PEP 501 is an extension to PEP 498, then the proposal
> is to add i'' prefix *in addition* to f'', right?

That would be my intention, yes. Your only other choice to get plain
strings from a template would be to always say format(i'{expression}'),
which I don't think is an improvement.

> If so, I think it might be confusing to a lot of people about
> what prefix should be used and when. I think it's too easy
> for an average user to write ``os.system(f'...')`` and think
> that their code is fine, instead of ``os.system(i'...')``.
> What's worse is that there is no way for ``os.system()``
> to reject the former use.

That's all correct. I have to say that I like PEP 501 mainly for its
non-string use cases. That is, interpolators that don't produce strings.
Note that this isn't possible with PEP 502's proposal.

> Second, given that we use "f" for "formatted", using "i"
> for "interpolated template" is a bit confusing. Can
> we use "t" ("template strings")?

I don't really care so much about the prefixes. I'd be okay with PEP
498 being either 'f', 'i', or 't'. I could warm up to 't' more easily
than to 'i'.

Eric.
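[A minimal sketch of the point Yury and Eric make above: by the time os.system()
or any other callee sees an f-string, it is already a plain str, so there is
nothing left to reject, whereas a separate non-str template type is something an
API can insist on. The Template class below is an invented stand-in for that
idea, not PEP 501's actual interface, and the f-string line assumes PEP 498 as
proposed (i.e. a 3.6+ interpreter).]

import shlex

class Template:
    # Invented illustration of a PEP 501 style object: it keeps the literal
    # text segments and the already-evaluated expression values separate
    # instead of producing a finished string.
    def __init__(self, strings, values):
        assert len(strings) == len(values) + 1
        self.strings = strings
        self.values = values

    def render(self, convert=str):
        # The *consumer* decides how each interpolated value becomes text.
        parts = [self.strings[0]]
        for value, text in zip(self.values, self.strings[1:]):
            parts.append(convert(value))
            parts.append(text)
        return ''.join(parts)

def run_command(cmd):
    # An API that wants to do its own quoting can require the template type.
    if not isinstance(cmd, Template):
        raise TypeError('expected a Template, got %s' % type(cmd).__name__)
    return cmd.render(convert=lambda v: shlex.quote(str(v)))

filename = 'some file; rm -rf /'
print(run_command(Template(['ls ', ''], [filename])))  # value gets shell-quoted
try:
    run_command(f'ls {filename}')  # already a plain str, so it is rejected here
except TypeError as exc:
    print(exc)

[The only interesting parts are the isinstance() check and the convert hook;
whether that extra safety is worth a second prefix is the trade-off being
weighed in this thread.]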
From steve at holdenweb.com  Mon Aug 31 23:15:32 2015
From: steve at holdenweb.com (Steve Holden)
Date: Mon, 31 Aug 2015 22:15:32 +0100
Subject: [Python-Dev] ZIP File Instead of Installer
In-Reply-To:
References:
Message-ID:

Hi Alex,

Thanks for your email, and the suggestion. A couple of things:

First, as webmaster we are simply volunteers who try to accept and
process feedback *about the python.org web site*. We aren't (mostly)
actively involved in the development of the language, though we are all
dedicated users - most of us either don't have time to take part in
python-dev or feel we have already put our time in. So this isn't the
best channel for the particular input you're making.

Second, while it's practical to put *most* of the stdlib in a zip file
it may be impractical to put them *all* in there. I did some experiments
(admittedly about eight years ago, so things may have changed) to enable
an import mechanism that imported code from a relational database.
Naturally there were modules that had to be loaded using the standard
mechanisms - the ones I needed to import to enable my database imports!
It's a classical bootstrap problem.

Third, your idea would need considerable expansion before being
actionable: consider the fact that there are millions of downloads of
Python every month. Obviously there is a huge amount of engineering
detail to consider before releasing anything new - if an error only
occurs one time in 100,000 that will lead to a substantial number of
bug reports, so the devs tend to proceed fairly carefully and with much
deliberation.

Fourth, I really like it that you are enthusiastic about Python, and
would suggest you consider joining the python-dev mailing list. Start
as a lurker, just reading what is said, and (since you are obviously
thinking a lot as you learn Python) you will be surprised how much you
pick up. The people on that list aren't gods, they are just regular
developers who happen to commit a lot of their time to Python's
development, and they rely on a continuous influx of new blood. So as
long as you take your time getting to know the personalities involved
and how the list works you are likely to find a welcome there. The day
will arrive when you realize that rather to your surprise you have
something to contribute.

There's also the python-ideas list and, if you are nervous about just
diving straight into python-dev, there's the python-mentor list that
was specifically designed to provide a sympathetic and uncritical
response to enthusiasm such as you exhibit.

The caveat in all this is that as an old fart I haven't actually been
active on any of those lists for quite a while, so I am unsure how
active they all are.

I hope this response is helpful. I've invested a lot of time in trying
to make the Python ecosystem one that welcomes enthusiasm and energy,
and a lot of other people do the same, many of them more effectively
than me. It seems to me that you have both enthusiasm and energy, which
is why I've taken the time to encourage you to put it to use.

Good luck in your progress with Python.

regards
Steve

PS: The mailing lists are all detailed on
https://mail.python.org/mailman/listinfo

> On Aug 31, 2015, at 9:48 PM, Alex Huffstetler wrote:
>
> I have an idea. I had a problem with installing Python because I'm not an admin.
> My idea is you delete the installer and put all the files in a ZIP folder. It can make download time faster.
> Also, I made the Python logo out of underscores and pipes.
> If it doesn't look right, put it somewhere on the Internet and it'll look right.
> _____________
> | . |
> _______|_______ |________
> | | |
> | _____________| |
> | | |
> |_______| _______________|
> | . |
> |_____________|

--
Steve Holden
steve at holdenweb.com / +44 208 289 6308 / @holdenweb

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From eric at trueblade.com  Mon Aug 31 23:33:43 2015
From: eric at trueblade.com (Eric V. Smith)
Date: Mon, 31 Aug 2015 17:33:43 -0400
Subject: [Python-Dev] PEP 498: Literal String Interpolation
In-Reply-To: <55E4C226.9020205@trueblade.com>
References: <55E3080F.9030601@trueblade.com>
	<55E4B459.9030000@gmail.com>
	<55E4C226.9020205@trueblade.com>
Message-ID: <55E4C837.40103@trueblade.com>

On 8/31/2015 5:07 PM, Eric V. Smith wrote:
> That's all correct. I have to say that I like PEP 501 mainly for its
> non-string use cases. That is, interpolators that don't produce strings.
> Note that this isn't possible with PEP 502's proposal.

Let me rephrase that: with PEP 502, it must be possible to produce a
string from every e-string (as they're called there), and since
types.ExpressionString derives from str, it's even easier to misuse
them as strings and not provide the correct interpolator. This is as
opposed to PEP 501, where the equivalent type is not derived from str,
and you must call format() or a custom interpolator to get a str.

But back to PEP 498: I can't imagine accepting either PEP 501 or 502
without also accepting PEP 498. And once you've accepted 498, are i- or
e-strings sufficiently powerful and useful enough, and are they likely
to be used in the correct way? I think that's the question.

Eric.
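[To make the str-subclass point above concrete, here is a small sketch.
EStringLike and TemplateLike are invented names used only for illustration;
they are not the actual PEP 502 or PEP 501 types.]

class EStringLike(str):
    # PEP 502 flavour: the template type is itself a str subclass.
    pass

class TemplateLike:
    # PEP 501 flavour: not a str, so rendering has to be an explicit step.
    def __init__(self, raw):
        self.raw = raw

    def __format__(self, spec):
        # Stand-in for the real interpolation logic.
        return self.raw

def send(payload):
    # A typical API that only knows about plain strings.
    if not isinstance(payload, str):
        raise TypeError('send() needs a str')
    return payload

print(send(EStringLike('{user}')))           # accepted silently, never interpolated
try:
    send(TemplateLike('{user}'))             # rejected until it is rendered
except TypeError as exc:
    print(exc)
print(send(format(TemplateLike('{user}'))))  # the explicit rendering step

[With the str subclass the mistake passes silently; with the non-str type the
rendering step cannot be forgotten.]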