From fijall at gmail.com Tue Dec 1 03:36:08 2015
From: fijall at gmail.com (Maciej Fijalkowski)
Date: Tue, 1 Dec 2015 10:36:08 +0200
Subject: [Python-Dev] Avoiding CPython performance regressions
In-Reply-To: <20151130135240.3E341251104@webabinitio.net>
References: <20151130135240.3E341251104@webabinitio.net>
Message-ID:

Hi

Thanks for doing the work! I'm one of the pypy devs and I'm very
interested in seeing this getting somewhere. I must say I struggle to
read the graph - is red good or is red bad, for example?

I'm keen to help you get anything you need to run it repeatedly.

PS. The Intel stuff runs one benchmark in a very questionable manner,
so let's maybe not rely on it too much.

On Mon, Nov 30, 2015 at 3:52 PM, R. David Murray wrote:
> On Mon, 30 Nov 2015 09:02:12 -0200, Fabio Zadrozny wrote:
>> Note that uploading the data to SpeedTin should be pretty straightforward
>> (by using https://github.com/fabioz/pyspeedtin, so, the main issue would be
>> setting up a machine to run the benchmarks).
>
> Thanks, but Zach almost has this working using codespeed (he's still
> waiting on a review from infrastructure, I think). The server was not in
> fact running; a large part of what Zach did was to get that server set up.
> I don't know what it would take to export the data to another consumer,
> but if you want to work on that I'm guessing there would be no objection.
> And I'm sure there would be no objection if you want to get involved
> in maintaining the benchmark server!
>
> There's also an Intel project posted about here recently that checks
> individual benchmarks for performance regressions and posts the results
> to python-checkins.
>
> --David
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: https://mail.python.org/mailman/options/python-dev/fijall%40gmail.com

From fabiofz at gmail.com Tue Dec 1 04:36:04 2015
From: fabiofz at gmail.com (Fabio Zadrozny)
Date: Tue, 1 Dec 2015 07:36:04 -0200
Subject: [Python-Dev] Avoiding CPython performance regressions
In-Reply-To: <5A37912B-B69D-452D-847E-EACBF8E5F3E4@intel.com>
References: <20151130135240.3E341251104@webabinitio.net> <5A37912B-B69D-452D-847E-EACBF8E5F3E4@intel.com>
Message-ID:

On Mon, Nov 30, 2015 at 3:33 PM, Stewart, David C wrote:
>
> On 11/30/15, 5:52 AM, "Python-Dev on behalf of R. David Murray" wrote:
>
> >There's also an Intel project posted about here recently that checks
> >individual benchmarks for performance regressions and posts the results
> >to python-checkins.
>
> The description of the project is at https://01.org/lp - Python results
> are indeed sent daily to python-checkins. (No results for Nov 30 and Dec 1
> due to Romania National Day holiday!)
>
> There is also a graphic dashboard at
> http://languagesperformance.intel.com/

Hi Dave,

Interesting, but I'm curious: which benchmark set are you running?
> > Dave
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe:
> https://mail.python.org/mailman/options/python-dev/fabiofz%40gmail.com

From fabiofz at gmail.com Tue Dec 1 04:49:40 2015
From: fabiofz at gmail.com (Fabio Zadrozny)
Date: Tue, 1 Dec 2015 07:49:40 -0200
Subject: [Python-Dev] Avoiding CPython performance regressions
In-Reply-To: References: <20151130135240.3E341251104@webabinitio.net>
Message-ID:

On Tue, Dec 1, 2015 at 6:36 AM, Maciej Fijalkowski wrote:

> Hi
>
> Thanks for doing the work! I'm one of the pypy devs and I'm very
> interested in seeing this getting somewhere. I must say I struggle to
> read the graph - is red good or is red bad, for example?
>
> I'm keen to help you get anything you need to run it repeatedly.
>
> PS. The Intel stuff runs one benchmark in a very questionable manner,
> so let's maybe not rely on it too much.
>

Hi Maciej,

Great, it'd be awesome having data on multiple Python VMs (my latest target
is really having a way to compare across multiple VMs/versions easily and
help each implementation keep a focus on performance). Ideally, a single,
dedicated machine could be used just to run the benchmarks from multiple
VMs (one less variable to take into account for comparisons later on, as
I'm not sure it'd be reliable to normalize benchmark data from different
machines -- it seems Zach was the one to contact for that, but if there's
such a machine already being used to run PyPy, maybe it could be extended
to run other VMs too?).

As for the graph, it should be easy to customize (and I'm open to
suggestions). As it is, red is slower and blue is faster (so, for instance
in https://www.speedtin.com/reports/1_CPython27x_Performance_Over_Time, the
fastest CPython version overall was 2.7.3 -- and 2.7.1 was the baseline).
I've updated the comments to make it clearer (and changed the second graph
to compare the latest against the fastest version (2.7.11rc1 vs 2.7.3) for
the individual benchmarks.)

Best Regards,

Fabio

> On Mon, Nov 30, 2015 at 3:52 PM, R. David Murray
> wrote:
> > On Mon, 30 Nov 2015 09:02:12 -0200, Fabio Zadrozny
> wrote:
> >> Note that uploading the data to SpeedTin should be pretty
> straightforward
> >> (by using https://github.com/fabioz/pyspeedtin, so, the main issue
> would be
> >> setting up a machine to run the benchmarks).
> >
> > Thanks, but Zach almost has this working using codespeed (he's still
> > waiting on a review from infrastructure, I think). The server was not in
> > fact running; a large part of what Zach did was to get that server set
> up.
> > I don't know what it would take to export the data to another consumer,
> > but if you want to work on that I'm guessing there would be no objection.
> > And I'm sure there would be no objection if you want to get involved
> > in maintaining the benchmark server!
> >
> > There's also an Intel project posted about here recently that checks
> > individual benchmarks for performance regressions and posts the results
> > to python-checkins.
> > > --David
> > _______________________________________________
> > Python-Dev mailing list
> > Python-Dev at python.org
> > https://mail.python.org/mailman/listinfo/python-dev
> > Unsubscribe: https://mail.python.org/mailman/options/python-dev/fijall%40gmail.com
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: https://mail.python.org/mailman/options/python-dev/fabiofz%40gmail.com

From fijall at gmail.com Tue Dec 1 05:14:40 2015
From: fijall at gmail.com (Maciej Fijalkowski)
Date: Tue, 1 Dec 2015 12:14:40 +0200
Subject: [Python-Dev] Avoiding CPython performance regressions
In-Reply-To: References: <20151130135240.3E341251104@webabinitio.net>
Message-ID:

On Tue, Dec 1, 2015 at 11:49 AM, Fabio Zadrozny wrote:
>
> On Tue, Dec 1, 2015 at 6:36 AM, Maciej Fijalkowski wrote:
>>
>> Hi
>>
>> Thanks for doing the work! I'm one of the pypy devs and I'm very
>> interested in seeing this getting somewhere. I must say I struggle to
>> read the graph - is red good or is red bad, for example?
>>
>> I'm keen to help you get anything you need to run it repeatedly.
>>
>> PS. The Intel stuff runs one benchmark in a very questionable manner,
>> so let's maybe not rely on it too much.
>
>
> Hi Maciej,
>
> Great, it'd be awesome having data on multiple Python VMs (my latest target
> is really having a way to compare across multiple VMs/versions easily and
> help each implementation keep a focus on performance). Ideally, a single,
> dedicated machine could be used just to run the benchmarks from multiple VMs
> (one less variable to take into account for comparisons later on, as I'm not
> sure it'd be reliable to normalize benchmark data from different machines --
> it seems Zach was the one to contact for that, but if there's such a
> machine already being used to run PyPy, maybe it could be extended to run
> other VMs too?).
>
> As for the graph, it should be easy to customize (and I'm open to
> suggestions). As it is, red is slower and blue is faster (so,
> for instance in
> https://www.speedtin.com/reports/1_CPython27x_Performance_Over_Time, the
> fastest CPython version overall was 2.7.3 -- and 2.7.1 was the baseline).
> I've updated the comments to make it clearer (and changed the second graph
> to compare the latest against the fastest version (2.7.11rc1 vs 2.7.3) for
> the individual benchmarks.)
>
> Best Regards,
>
> Fabio

There is definitely a machine available. I suggest you ask
python-infra list for access. It definitely can be used to run more
than just pypy stuff. As for normalizing across multiple machines -
don't even bother. Different architectures make A LOT of difference,
especially with cache sizes and whatnot, that seems to have different
impact on different loads.

As for the graph - I like the split across benchmarks, and a better
description ("higher is better") would be good.
I have a lot of ideas about visualizations, pop in on IRC, I'm happy
to discuss :-)

Cheers,
fijal

From victor.stinner at gmail.com Tue Dec 1 06:35:47 2015
From: victor.stinner at gmail.com (Victor Stinner)
Date: Tue, 1 Dec 2015 12:35:47 +0100
Subject: [Python-Dev] Avoiding CPython performance regressions
In-Reply-To: References: <20151130135240.3E341251104@webabinitio.net>
Message-ID:

2015-12-01 10:49 GMT+01:00 Fabio Zadrozny :
> As for the graph, it should be easy to customize (and I'm open to
> suggestions). As it is, red is slower and blue is faster (so,
> for instance in
> https://www.speedtin.com/reports/1_CPython27x_Performance_Over_Time

For me, -10% means "faster" in the context of a benchmark. On this
graph, I see -21% but it's slower in fact. I'm confused.

Victor

From fabiofz at gmail.com Tue Dec 1 08:06:30 2015
From: fabiofz at gmail.com (Fabio Zadrozny)
Date: Tue, 1 Dec 2015 11:06:30 -0200
Subject: [Python-Dev] Avoiding CPython performance regressions
In-Reply-To: References: <20151130135240.3E341251104@webabinitio.net>
Message-ID:

On Tue, Dec 1, 2015 at 9:35 AM, Victor Stinner wrote:

> 2015-12-01 10:49 GMT+01:00 Fabio Zadrozny :
> > As for the graph, it should be easy to customize (and I'm open to
> > suggestions). As it is, red is slower and blue is faster (so,
> > for instance in
> > https://www.speedtin.com/reports/1_CPython27x_Performance_Over_Time
>
> For me, -10% means "faster" in the context of a benchmark. On this
> graph, I see -21% but it's slower in fact. I'm confused.
>
> Victor

Hmm, I understand your point, although I think the main reason for the
confusion is the lack of a real legend there... I.e.: it's like that
because the idea is that it's a comparison between 2 versions, not absolute
benchmark times, so negative means one version is 'slower/worse' than
another and blue means it's 'faster/better' (as a reference, Eclipse also
uses the same format for reporting it -- e.g.:
http://download.eclipse.org/eclipse/downloads/drops4/R-4.5-201506032000/performance/performance.php?fp_type=0 )

I've added a legend now, so hopefully it clears up the confusion ;)

-- Fabio

From david.c.stewart at intel.com Tue Dec 1 10:26:52 2015
From: david.c.stewart at intel.com (Stewart, David C)
Date: Tue, 1 Dec 2015 15:26:52 +0000
Subject: [Python-Dev] Avoiding CPython performance regressions
In-Reply-To: References: <20151130135240.3E341251104@webabinitio.net> <5A37912B-B69D-452D-847E-EACBF8E5F3E4@intel.com>
Message-ID: <7DACEDE4-909D-42D1-B8C7-89967D90A784@intel.com>

From: Fabio Zadrozny
Date: Tuesday, December 1, 2015 at 1:36 AM
To: David Stewart
Cc: "R. David Murray", "python-dev at python.org"
Subject: Re: [Python-Dev] Avoiding CPython performance regressions

On Mon, Nov 30, 2015 at 3:33 PM, Stewart, David C wrote:

On 11/30/15, 5:52 AM, "Python-Dev on behalf of R. David Murray" wrote:
>
>There's also an Intel project posted about here recently that checks
>individual benchmarks for performance regressions and posts the results
>to python-checkins.

The description of the project is at https://01.org/lp - Python results
are indeed sent daily to python-checkins. (No results for Nov 30 and Dec 1
due to Romania National Day holiday!)

There is also a graphic dashboard at http://languagesperformance.intel.com/

Hi Dave,

Interesting, but I'm curious: which benchmark set are you running?
From the graphs it seems it has a really high standard deviation, so I'm
curious to know if that's really due to changes in the CPython codebase /
issues in the benchmark set or in how the benchmarks are run... (it doesn't
seem to be the benchmarks from https://hg.python.org/benchmarks/ right?).

Fabio -- my advice to you is to check out the daily emails sent to
python-checkins. An example is
https://mail.python.org/pipermail/python-checkins/2015-November/140185.html.
If you still have questions, Stefan can answer (he is copied).

The graphs are really just a manager-level indicator of trends, which I
find very useful (I have it running continuously on one of the monitors in
my office) but core developers might want to see day-to-day the effect of
their changes. (Particularly if they thought one was going to improve
performance. It's nice to see if you get community confirmation.)

We do run nightly a subset of https://hg.python.org/benchmarks/ and run the
full set when we are evaluating our performance patches.

Some of the "benchmarks" really do have a high standard deviation, which
makes them hardly useful for measuring incremental performance
improvements, IMHO. I like to see it spelled out so I can tell whether I
should be worried or not about a particular delta.

Dave

From fabiofz at gmail.com Tue Dec 1 10:40:37 2015
From: fabiofz at gmail.com (Fabio Zadrozny)
Date: Tue, 1 Dec 2015 13:40:37 -0200
Subject: [Python-Dev] Avoiding CPython performance regressions
In-Reply-To: References: <20151130135240.3E341251104@webabinitio.net>
Message-ID:

On Tue, Dec 1, 2015 at 8:14 AM, Maciej Fijalkowski wrote:

> On Tue, Dec 1, 2015 at 11:49 AM, Fabio Zadrozny wrote:
> >
> > On Tue, Dec 1, 2015 at 6:36 AM, Maciej Fijalkowski wrote:
> >>
> >> Hi
> >>
> >> Thanks for doing the work! I'm one of the pypy devs and I'm very
> >> interested in seeing this getting somewhere. I must say I struggle to
> >> read the graph - is red good or is red bad, for example?
> >>
> >> I'm keen to help you get anything you need to run it repeatedly.
> >>
> >> PS. The Intel stuff runs one benchmark in a very questionable manner,
> >> so let's maybe not rely on it too much.
> >
> >
> > Hi Maciej,
> >
> > Great, it'd be awesome having data on multiple Python VMs (my latest target
> > is really having a way to compare across multiple VMs/versions easily and
> > help each implementation keep a focus on performance). Ideally, a single,
> > dedicated machine could be used just to run the benchmarks from multiple VMs
> > (one less variable to take into account for comparisons later on, as I'm not
> > sure it'd be reliable to normalize benchmark data from different machines --
> > it seems Zach was the one to contact for that, but if there's such a
> > machine already being used to run PyPy, maybe it could be extended to run
> > other VMs too?).
> >
> > As for the graph, it should be easy to customize (and I'm open to
> > suggestions). As it is, red is slower and blue is faster (so,
> > for instance in
> > https://www.speedtin.com/reports/1_CPython27x_Performance_Over_Time, the
> > fastest CPython version overall was 2.7.3 -- and 2.7.1 was the baseline).
> > I've updated the comments to make it clearer (and changed the second graph
> > to compare the latest against the fastest version (2.7.11rc1 vs 2.7.3) for
> > the individual benchmarks.)
> >
> > Best Regards,
> >
> > Fabio
>
> There is definitely a machine available. I suggest you ask
> python-infra list for access.
It definitely can be used to run more
> than just pypy stuff. As for normalizing across multiple machines -
> don't even bother. Different architectures make A LOT of difference,
> especially with cache sizes and whatnot, that seems to have different
> impact on different loads.
>
> As for the graph - I like the split across benchmarks, and a better
> description ("higher is better") would be good.
>
> I have a lot of ideas about visualizations, pop in on IRC, I'm happy
> to discuss :-)
>

Ok, I mailed infrastructure(at)python.org to see how to make it work.

I did add a legend now, so it should be much easier to read already ;)

As for ideas on visualizations, I definitely want to hear suggestions on
how to improve it, although I'll start by focusing on getting the servers
that produce benchmark data running, and will move on to improving the
graphs right afterwards.

Cheers,

Fabio

> Cheers,
> fijal

From david.c.stewart at intel.com Tue Dec 1 10:47:58 2015
From: david.c.stewart at intel.com (Stewart, David C)
Date: Tue, 1 Dec 2015 15:47:58 +0000
Subject: [Python-Dev] Avoiding CPython performance regressions
In-Reply-To: <7DACEDE4-909D-42D1-B8C7-89967D90A784@intel.com>
References: <20151130135240.3E341251104@webabinitio.net> <5A37912B-B69D-452D-847E-EACBF8E5F3E4@intel.com> <7DACEDE4-909D-42D1-B8C7-89967D90A784@intel.com>
Message-ID:

On 12/1/15, 7:26 AM, "Python-Dev on behalf of Stewart, David C" wrote:
>
>Fabio -- my advice to you is to check out the daily emails sent to python-checkins. An example is https://mail.python.org/pipermail/python-checkins/2015-November/140185.html. If you still have questions, Stefan can answer (he is copied).

Whoops - silly me - today is a national holiday in Romania where Stefan
lives, so you might not get an answer until tomorrow. :-/

From storchaka at gmail.com Tue Dec 1 10:50:31 2015
From: storchaka at gmail.com (Serhiy Storchaka)
Date: Tue, 1 Dec 2015 17:50:31 +0200
Subject: [Python-Dev] Deleting with setting C API functions
In-Reply-To: References: Message-ID:

On 25.11.15 08:39, Nick Coghlan wrote:
> On 25 November 2015 at 07:33, Guido van Rossum wrote:
>> Ooooh, that's probably really old code. I guess for the slots the
>> reasoning is to save on slots. For the public functions, alas it will
>> be hard to know if anyone is depending on it, even if it's
>> undocumented. Perhaps add a deprecation warning to these if the value
>> is NULL for one release cycle?
>
> I did a quick scan for "PyObject_SetAttr", and it turns out
> PyObject_DelAttr is only a convenience macro for calling
> PyObject_SetAttr with NULL as the value argument. bltinmodule.c and
> ceval.c also both include direct calls to PyObject_SetAttr with
> "(PyObject *)NULL" as the value argument.
>
> Investigating some of the uses that passed a variable as the value
> argument, one case is the weakref proxy implementation, which uses
> PyObject_SetAttr on the underlying object in its implementation of the
> setattr slot in the proxy.
>
> So it looks to me like replicating the NULL-handling behaviour of the
> slots in the public Set* APIs was intentional, and it's just the
> documentation of that detail that was missed (since most folks
> presumably use the Del* convenience APIs instead).

I'm not sure. This looks rather like an implementation detail to me. The
cases you found are the only cases in the core/stdlib that call
PyObject_SetAttr with a NULL third argument.
Tests pass after replacing the Set* functions with Del* functions in these
cases and making the Set* functions reject a NULL value. [1]

Wouldn't it be worth deprecating deleting with the Set* functions? Neither
the other abstract Set* APIs nor the concrete Set* APIs support deleting.
Deleting via the Set* API can be unintentional and can hide a bug.

[1] http://bugs.python.org/issue25773

From alexei_belenki at yahoo.com Tue Dec 1 09:30:25 2015
From: alexei_belenki at yahoo.com (Alexei Belenki)
Date: Tue, 1 Dec 2015 14:30:25 +0000 (UTC)
Subject: [Python-Dev] "python.exe is not a valid Win32 app"
References: <975950385.13382217.1448980225872.JavaMail.yahoo.ref@mail.yahoo.com>
Message-ID: <975950385.13382217.1448980225872.JavaMail.yahoo@mail.yahoo.com>

Installed Python 3.5 (from https://www.python.org/downloads/) on Windows XP SP3/32.
On starting >>python.exe I got the text above in the Windows message box.
Any suggestions? Thanks. AB

From rymg19 at gmail.com Tue Dec 1 11:13:10 2015
From: rymg19 at gmail.com (Ryan Gonzalez)
Date: Tue, 01 Dec 2015 10:13:10 -0600
Subject: [Python-Dev] "python.exe is not a valid Win32 app"
In-Reply-To: <975950385.13382217.1448980225872.JavaMail.yahoo@mail.yahoo.com>
References: <975950385.13382217.1448980225872.JavaMail.yahoo.ref@mail.yahoo.com> <975950385.13382217.1448980225872.JavaMail.yahoo@mail.yahoo.com>
Message-ID:

Did you get the x86-64 version or x86? If you had gotten the former, it
would lead to that error.

On December 1, 2015 8:30:25 AM CST, Alexei Belenki via Python-Dev wrote:
>Installed Python 3.5 (from https://www.python.org/downloads/) on
>Windows XP SP3/32.
>On starting >>python.exe I got the text above in the Windows message box.
>Any suggestions? Thanks. AB
>
>------------------------------------------------------------------------
>
>_______________________________________________
>Python-Dev mailing list
>Python-Dev at python.org
>https://mail.python.org/mailman/listinfo/python-dev
>Unsubscribe:
>https://mail.python.org/mailman/options/python-dev/rymg19%40gmail.com

--
Sent from my Nexus 5 with K-9 Mail. Please excuse my brevity.

From breamoreboy at yahoo.co.uk Tue Dec 1 11:26:32 2015
From: breamoreboy at yahoo.co.uk (Mark Lawrence)
Date: Tue, 1 Dec 2015 16:26:32 +0000
Subject: [Python-Dev] "python.exe is not a valid Win32 app"
In-Reply-To: <975950385.13382217.1448980225872.JavaMail.yahoo@mail.yahoo.com>
References: <975950385.13382217.1448980225872.JavaMail.yahoo.ref@mail.yahoo.com> <975950385.13382217.1448980225872.JavaMail.yahoo@mail.yahoo.com>
Message-ID:

On 01/12/2015 14:30, Alexei Belenki via Python-Dev wrote:
> Installed Python 3.5 (from https://www.python.org/downloads/) on Windows
> XP SP3/32.
>
> On starting >>python.exe I got the text above in the Windows message box.
>
> Any suggestions?
> Thanks.
> AB
>

This isn't really the place to ask questions such as this. However
Python 3.5 is *NOT* supported on XP. Work has been done for 3.5.1 to
improve the user experience in this scenario.

--
My fellow Pythonistas, ask not what our language can do for you, ask
what you can do for our language.
Mark Lawrence

From fijall at gmail.com Tue Dec 1 13:56:56 2015
From: fijall at gmail.com (Maciej Fijalkowski)
Date: Tue, 1 Dec 2015 20:56:56 +0200
Subject: [Python-Dev] Avoiding CPython performance regressions
In-Reply-To: <7DACEDE4-909D-42D1-B8C7-89967D90A784@intel.com>
References: <20151130135240.3E341251104@webabinitio.net> <5A37912B-B69D-452D-847E-EACBF8E5F3E4@intel.com> <7DACEDE4-909D-42D1-B8C7-89967D90A784@intel.com>
Message-ID:

Hi David.

Any reason you run a tiny tiny subset of benchmarks?

On Tue, Dec 1, 2015 at 5:26 PM, Stewart, David C wrote:
>
> From: Fabio Zadrozny
> Date: Tuesday, December 1, 2015 at 1:36 AM
> To: David Stewart
> Cc: "R. David Murray", "python-dev at python.org"
> Subject: Re: [Python-Dev] Avoiding CPython performance regressions
>
> On Mon, Nov 30, 2015 at 3:33 PM, Stewart, David C wrote:
>
> On 11/30/15, 5:52 AM, "Python-Dev on behalf of R. David Murray" wrote:
>
>>
>>There's also an Intel project posted about here recently that checks
>>individual benchmarks for performance regressions and posts the results
>>to python-checkins.
>
> The description of the project is at https://01.org/lp - Python results are indeed sent daily to python-checkins. (No results for Nov 30 and Dec 1 due to Romania National Day holiday!)
>
> There is also a graphic dashboard at http://languagesperformance.intel.com/
>
> Hi Dave,
>
> Interesting, but I'm curious: which benchmark set are you running? From the graphs it seems it has a really high standard deviation, so I'm curious to know if that's really due to changes in the CPython codebase / issues in the benchmark set or in how the benchmarks are run... (it doesn't seem to be the benchmarks from https://hg.python.org/benchmarks/ right?).
>
> Fabio -- my advice to you is to check out the daily emails sent to python-checkins. An example is https://mail.python.org/pipermail/python-checkins/2015-November/140185.html. If you still have questions, Stefan can answer (he is copied).
>
> The graphs are really just a manager-level indicator of trends, which I find very useful (I have it running continuously on one of the monitors in my office) but core developers might want to see day-to-day the effect of their changes. (Particularly if they thought one was going to improve performance. It's nice to see if you get community confirmation.)
>
> We do run nightly a subset of https://hg.python.org/benchmarks/ and run the full set when we are evaluating our performance patches.
>
> Some of the "benchmarks" really do have a high standard deviation, which makes them hardly useful for measuring incremental performance improvements, IMHO. I like to see it spelled out so I can tell whether I should be worried or not about a particular delta.
>
> Dave
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: https://mail.python.org/mailman/options/python-dev/fijall%40gmail.com

From david.c.stewart at intel.com Tue Dec 1 14:04:52 2015
From: david.c.stewart at intel.com (Stewart, David C)
Date: Tue, 1 Dec 2015 19:04:52 +0000
Subject: [Python-Dev] Avoiding CPython performance regressions
In-Reply-To: References: <20151130135240.3E341251104@webabinitio.net> <5A37912B-B69D-452D-847E-EACBF8E5F3E4@intel.com> <7DACEDE4-909D-42D1-B8C7-89967D90A784@intel.com>
Message-ID: <00E38027-2C97-4998-BC99-CF49CC538F5D@intel.com>

On 12/1/15, 10:56 AM, "Maciej Fijalkowski" wrote:
>Hi David.
> >Any reason you run a tiny tiny subset of benchmarks?

We could always run more. There are so many in the full set in
https://hg.python.org/benchmarks/ with such divergent results that it seems
hard to see the forest because there are so many trees. I'm more interested
in gradually adding to the set rather than the huge blast of all of them in
a daily email. Would you disagree?

Part of the reason that I monitor ssbench so closely on Python 2 is that
Swift is a major element in cloud computing (and OpenStack in particular)
and has ~70% of its cycles in Python.

We are really interested in workloads which are representative of the way
Python is used by a lot of people and which produce repeatable results
(and which are open source). Do you have any suggestions?

Dave

From lac at openend.se Tue Dec 1 14:13:34 2015
From: lac at openend.se (Laura Creighton)
Date: Tue, 01 Dec 2015 20:13:34 +0100
Subject: [Python-Dev] "python.exe is not a valid Win32 app"
In-Reply-To: References: <975950385.13382217.1448980225872.JavaMail.yahoo.ref@mail.yahoo.com> <975950385.13382217.1448980225872.JavaMail.yahoo@mail.yahoo.com>
Message-ID: <201512011913.tB1JDYAv007962@fido.openend.se>

In a message of Tue, 01 Dec 2015 10:13:10 -0600, Ryan Gonzalez writes:
>Did you get the x86-64 version or x86? If you had gotten the former, it would lead to that error.

No, his problem is his Windows XP.
Python 3.5 is not supported on Windows XP.
Upgrade your OS or stick with 3.4.

Laura Creighton

>
>On December 1, 2015 8:30:25 AM CST, Alexei Belenki via Python-Dev wrote:
>>Installed Python 3.5 (from https://www.python.org/downloads/) on
>>Windows XP SP3/32.
>>On starting >>python.exe I got the text above in the Windows message box.
>>Any suggestions? Thanks. AB
>>
>>------------------------------------------------------------------------
>>
>>_______________________________________________
>>Python-Dev mailing list
>>Python-Dev at python.org
>>https://mail.python.org/mailman/listinfo/python-dev
>>Unsubscribe:
>>https://mail.python.org/mailman/options/python-dev/rymg19%40gmail.com
>
>--
>Sent from my Nexus 5 with K-9 Mail. Please excuse my brevity.
>_______________________________________________
>Python-Dev mailing list
>Python-Dev at python.org
>https://mail.python.org/mailman/listinfo/python-dev
>Unsubscribe: https://mail.python.org/mailman/options/python-dev/lac%40openend.se
>

From fijall at gmail.com Tue Dec 1 14:38:55 2015
From: fijall at gmail.com (Maciej Fijalkowski)
Date: Tue, 1 Dec 2015 21:38:55 +0200
Subject: [Python-Dev] Avoiding CPython performance regressions
In-Reply-To: <00E38027-2C97-4998-BC99-CF49CC538F5D@intel.com>
References: <20151130135240.3E341251104@webabinitio.net> <5A37912B-B69D-452D-847E-EACBF8E5F3E4@intel.com> <7DACEDE4-909D-42D1-B8C7-89967D90A784@intel.com> <00E38027-2C97-4998-BC99-CF49CC538F5D@intel.com>
Message-ID:

On Tue, Dec 1, 2015 at 9:04 PM, Stewart, David C wrote:
> On 12/1/15, 10:56 AM, "Maciej Fijalkowski" wrote:
>
>>Hi David.
>>
>>Any reason you run a tiny tiny subset of benchmarks?
>
> We could always run more. There are so many in the full set in https://hg.python.org/benchmarks/ with such divergent results that it seems hard to see the forest because there are so many trees. I'm more interested in gradually adding to the set rather than the huge blast of all of them in a daily email. Would you disagree?
>
> Part of the reason that I monitor ssbench so closely on Python 2 is that Swift is a major element in cloud computing (and OpenStack in particular) and has ~70% of its cycles in Python.
Last time I checked, Swift was quite a bit faster under pypy :-)

> We are really interested in workloads which are representative of the way Python is used by a lot of people and which produce repeatable results (and which are open source). Do you have any suggestions?

You know our benchmark suite (https://bitbucket.org/pypy/benchmarks);
we're gradually incorporating what people report. That means that
(typically) it'll be open source library benchmarks, if they get to the
point of writing some. I have, for example, a Django ORM benchmark coming;
I can show you if you want.

I don't think there is a "representative benchmark" or maybe even
"representative set", also because open source code tends to be higher
quality and less spaghetti-like than closed source code that I've seen,
but we're adding and adding.

Cheers,
fijal

From david.c.stewart at intel.com Tue Dec 1 15:56:14 2015
From: david.c.stewart at intel.com (Stewart, David C)
Date: Tue, 1 Dec 2015 20:56:14 +0000
Subject: [Python-Dev] Avoiding CPython performance regressions
In-Reply-To: References: <20151130135240.3E341251104@webabinitio.net> <5A37912B-B69D-452D-847E-EACBF8E5F3E4@intel.com> <7DACEDE4-909D-42D1-B8C7-89967D90A784@intel.com> <00E38027-2C97-4998-BC99-CF49CC538F5D@intel.com>
Message-ID: <9A0247E8-20CF-44D0-A796-F3BEEDA7F0B2@intel.com>

On 12/1/15, 11:38 AM, "Maciej Fijalkowski" wrote:
>On Tue, Dec 1, 2015 at 9:04 PM, Stewart, David C wrote:
>>
>> Part of the reason that I monitor ssbench so closely on Python 2 is that Swift is a major element in cloud computing (and OpenStack in particular) and has ~70% of its cycles in Python.
>
>Last time I checked, Swift was quite a bit faster under pypy :-)

There is some porting required, but it's very promising. :-)

From ncoghlan at gmail.com Wed Dec 2 01:16:55 2015
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Wed, 2 Dec 2015 16:16:55 +1000
Subject: [Python-Dev] Deleting with setting C API functions
In-Reply-To: References: Message-ID:

On 2 December 2015 at 01:50, Serhiy Storchaka wrote:
> On 25.11.15 08:39, Nick Coghlan wrote:
>> So it looks to me like replicating the NULL-handling behaviour of the
>> slots in the public Set* APIs was intentional, and it's just the
>> documentation of that detail that was missed (since most folks
>> presumably use the Del* convenience APIs instead).
>
> I'm not sure. This looks rather like an implementation detail to me. The cases
> you found are the only cases in the core/stdlib that call
> PyObject_SetAttr with a NULL third argument. Tests pass after
> replacing the Set* functions with Del* functions in these cases and
> making the Set* functions reject a NULL value. [1]

Which means at the very least, folks relying on the current behaviour
are relying on untested functionality, and would be better off switching
to the tested APIs regardless of what happens on the deprecation front.

> Wouldn't it be worth deprecating deleting with the Set* functions? Neither
> the other abstract Set* APIs nor the concrete Set* APIs support deleting.
> Deleting via the Set* API can be unintentional and can hide a bug.

Since the behaviour is currently neither documented nor tested, and it
doesn't raise any new Python 2/3 migration issues, I don't personally
mind deprecating the "delete via set" APIs for 3.6 - as you say, having
"set this field/attribute to this value" occasionally mean "delete this
field/attribute" if a pointer is NULL offers a surprising second way to
do something that already has a more explicit spelling.

Regards,
Nick.
--
Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From storchaka at gmail.com Wed Dec 2 03:42:56 2015
From: storchaka at gmail.com (Serhiy Storchaka)
Date: Wed, 02 Dec 2015 10:42:56 +0200
Subject: [Python-Dev] Deleting with setting C API functions
In-Reply-To: References: Message-ID: <2085391.yyrIlRt5Sf@raxxla>

On Wednesday, 02 Dec 2015 at 08:30:35, you wrote:
> On 1 Dec 2015 at 16:51, "Serhiy Storchaka" wrote:
> > Wouldn't it be worth deprecating deleting with the Set* functions? Neither
> > the other abstract Set* APIs nor the concrete Set* APIs support deleting.
> > Deleting via the Set* API can be unintentional and can hide a bug.
> Wow wow wow, what? No, don't break the Python C API for purity. 8 years
> later, we are still porting projects to Python 3. And we are not done yet.

I suggest just deprecating this feature. I'm not suggesting removing it in
the foreseeable future (at least not before 4.0).

> Practicability beats purity.

I don't think this argument applies here. Two things make the deprecation
more painless than usual:

1. This feature has never been documented.

2. PyObject_DelAttr() has existed from the start (from the time the Generic
Abstract Object Interface was added).

You have enough time to update your projects, and you can update them
uniformly for all versions. And maybe you will find a few weird bugs related
to misuse of the Set* API.

From victor.stinner at gmail.com Wed Dec 2 05:06:15 2015
From: victor.stinner at gmail.com (Victor Stinner)
Date: Wed, 2 Dec 2015 11:06:15 +0100
Subject: [Python-Dev] Deleting with setting C API functions
In-Reply-To: <2085391.yyrIlRt5Sf@raxxla>
References: <2085391.yyrIlRt5Sf@raxxla>
Message-ID:

2015-12-02 9:42 GMT+01:00 Serhiy Storchaka :
> You have enough time to update your projects, and you can update them
> uniformly for all versions. And maybe you will find a few weird bugs related
> to misuse of the Set* API.

Did you check popular projects using C extensions to check if they
call Set*() functions to delete attributes/items?

If the feature is used, I suggest documenting and testing it, not
removing it.

Victor

From storchaka at gmail.com Wed Dec 2 07:29:33 2015
From: storchaka at gmail.com (Serhiy Storchaka)
Date: Wed, 2 Dec 2015 14:29:33 +0200
Subject: [Python-Dev] Deleting with setting C API functions
In-Reply-To: References: <2085391.yyrIlRt5Sf@raxxla>
Message-ID:

On 02.12.15 12:06, Victor Stinner wrote:
> 2015-12-02 9:42 GMT+01:00 Serhiy Storchaka :
>> You have enough time to update your projects, and you can update them
>> uniformly for all versions. And maybe you will find a few weird bugs related
>> to misuse of the Set* API.
>
> Did you check popular projects using C extensions to check if they
> call Set*() functions to delete attributes/items?

I have checked the following projects.

regex, simplejson, Pillow, PyQt4, LibreOffice, PyGTK, PyICU, pyOpenSSL,
libxml2, Boost, psutil, mercurial don't use PyObject_SetAttr at all.

NumPy, pgobject don't use PyObject_SetAttr for deleting.

PyYAML and lxml use PyObject_SetAttr only in code generated by Cython and
never use it for deleting.
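To make the behaviour under discussion concrete: at the C level the Del*
spellings are (currently) thin macros over the Set* entry points, so a NULL
value turns a set into a delete. A minimal sketch follows (illustrative
only, not code from this thread; the attribute name "cache" is invented):

    #include <Python.h>

    /* CPython's Include/abstract.h currently defines, e.g.:
     *   #define PyObject_DelAttrString(O, A) \
     *           PyObject_SetAttrString((O), (A), NULL)
     */
    static int
    drop_cache_attr(PyObject *obj)
    {
        /* Explicit spelling: delete the (invented) attribute "cache". */
        if (PyObject_DelAttrString(obj, "cache") < 0)
            return -1;

        /* Undocumented equivalent, the usage proposed for deprecation:
         * passing NULL as the value turns the set into a delete. Here
         * this second call fails with an AttributeError, since "cache"
         * was already removed above. */
        if (PyObject_SetAttrString(obj, "cache", NULL) < 0)
            return -1;

        return 0;
    }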
From mal at egenix.com Wed Dec 2 07:41:03 2015
From: mal at egenix.com (M.-A. Lemburg)
Date: Wed, 2 Dec 2015 13:41:03 +0100
Subject: [Python-Dev] Deleting with setting C API functions
In-Reply-To: References: <2085391.yyrIlRt5Sf@raxxla>
Message-ID: <565EE6DF.2050105@egenix.com>

On 02.12.2015 13:29, Serhiy Storchaka wrote:
> On 02.12.15 12:06, Victor Stinner wrote:
>> 2015-12-02 9:42 GMT+01:00 Serhiy Storchaka :
>>> You have enough time to update your projects, and you can update them
>>> uniformly for all versions. And maybe you will find a few weird bugs related
>>> to misuse of the Set* API.
>>
>> Did you check popular projects using C extensions to check if they
>> call Set*() functions to delete attributes/items?
>
> I have checked the following projects.
>
> regex, simplejson, Pillow, PyQt4, LibreOffice, PyGTK, PyICU, pyOpenSSL, libxml2, Boost, psutil,
> mercurial don't use PyObject_SetAttr at all.
>
> NumPy, pgobject don't use PyObject_SetAttr for deleting.
>
> PyYAML and lxml use PyObject_SetAttr only in code generated by Cython and never use it for deleting.

While I don't think deleting attributes is a very common thing
to do in any Python code base (unless you need to break circular
references or explicitly want to free resources), the
fact that PyObject_DelAttr() itself is implemented as a macro
using the NULL attribute value clearly creates an API incompatibility
when removing this functionality or generating warnings, since
all code using the correct PyObject_DelAttr() at the moment
would then trigger the warning as well.

As a result, the deprecation would have to be extended across
more releases than the usual cycle.

A first step would be to replace the macro with a proper function
to avoid false positive warnings, even when using the correct API.

Then we could add a warning to the PyObject_SetAttr() function and
hope that not too many projects use the stable ABI as a basis to
have C extensions work across several releases.

Overall, I'm not sure whether it's worth the trouble. Documenting
the feature and adding a deprecation notice to just the documentation
would likely be better. We could then remove the functionality
in Python 4.

--
Marc-Andre Lemburg
eGenix.com

Professional Python Services directly from the Experts (#1, Dec 02 2015)
>>> Python Projects, Coaching and Consulting ... http://www.egenix.com/
>>> Python Database Interfaces ... http://products.egenix.com/
>>> Plone/Zope Database Interfaces ... http://zope.egenix.com/
________________________________________________________________________

::: We implement business ideas - efficiently in both time and costs :::

eGenix.com Software, Skills and Services GmbH Pastor-Loeh-Str.48
D-40764 Langenfeld, Germany. CEO Dipl.-Math. Marc-Andre Lemburg
Registered at Amtsgericht Duesseldorf: HRB 46611
http://www.egenix.com/company/contact/
http://www.malemburg.com/

From ncoghlan at gmail.com Wed Dec 2 09:26:20 2015
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Thu, 3 Dec 2015 00:26:20 +1000
Subject: [Python-Dev] Deleting with setting C API functions
In-Reply-To: <565EE6DF.2050105@egenix.com>
References: <2085391.yyrIlRt5Sf@raxxla> <565EE6DF.2050105@egenix.com>
Message-ID:

On 2 December 2015 at 22:41, M.-A. Lemburg wrote:
> On 02.12.2015 13:29, Serhiy Storchaka wrote:
>> On 02.12.15 12:06, Victor Stinner wrote:
>>> 2015-12-02 9:42 GMT+01:00 Serhiy Storchaka :
>>>> You have enough time to update your projects, and you can update them
>>>> uniformly for all versions. And maybe you will find a few weird bugs related
>>>> to misuse of the Set* API.
>>>
>>> Did you check popular projects using C extensions to check if they
>>> call Set*() functions to delete attributes/items?
>>
>> I have checked the following projects.
>>
>> regex, simplejson, Pillow, PyQt4, LibreOffice, PyGTK, PyICU, pyOpenSSL, libxml2, Boost, psutil,
>> mercurial don't use PyObject_SetAttr at all.
>>
>> NumPy, pgobject don't use PyObject_SetAttr for deleting.
>>
>> PyYAML and lxml use PyObject_SetAttr only in code generated by Cython and never use it for deleting.
>
> While I don't think deleting attributes is a very common thing
> to do in any Python code base (unless you need to break circular
> references or explicitly want to free resources), the
> fact that PyObject_DelAttr() itself is implemented as a macro
> using the NULL attribute value clearly creates an API incompatibility
> when removing this functionality or generating warnings, since
> all code using the correct PyObject_DelAttr() at the moment
> would then trigger the warning as well.
>
> As a result, the deprecation would have to be extended across
> more releases than the usual cycle.
>
> A first step would be to replace the macro with a proper function
> to avoid false positive warnings, even when using the correct API.
>
> Then we could add a warning to the PyObject_SetAttr() function and
> hope that not too many projects use the stable ABI as a basis to
> have C extensions work across several releases.

Ah, I forgot to take the stable ABI guarantee into account - you're
right, it isn't possible to introduce the deprecation without making
an addition to the stable ABI, which would mean extension modules
relying on the stable ABI would need to be rebuilt, rather defeating
the purpose of the stable ABI guarantee.

I think that puts the idea squarely in "we can't do it" territory,
since the benefit on offer through the deprecation process is only a
much easier debugging session when someone is trying to track down the
root cause of an unexpectedly missing attribute on an object.

Regards,
Nick.

--
Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From random832 at fastmail.com Wed Dec 2 09:40:55 2015
From: random832 at fastmail.com (Random832)
Date: Wed, 2 Dec 2015 14:40:55 +0000 (UTC)
Subject: [Python-Dev] Deleting with setting C API functions
References: <2085391.yyrIlRt5Sf@raxxla> <565EE6DF.2050105@egenix.com>
Message-ID:

On 2015-12-02, M.-A. Lemburg wrote:
> A first step would be to replace the macro with a proper function
> to avoid false positive warnings, even when using the correct API.
>
> Then we could add a warning to the PyObject_SetAttr() function and
> hope that not too many projects use the stable ABI as a basis to
> have C extensions work across several releases.

How about using a versioned ABI? Make a new function that doesn't allow
NULL, called something like PyObject_SetAttr2, and instead of declaring
the old one in headers, use a #define to the new name.

> Overall, I'm not sure whether it's worth the trouble. Documenting
> the feature and adding a deprecation notice to just the documentation
> would likely be better. We could then remove the functionality
> in Python 4.

Are there plans for a Python 4?
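A sketch of what that versioned-ABI idea might look like in a header
(hypothetical: PyObject_SetAttr2 does not exist in CPython, and the names
are purely illustrative):

    /* New, strict entry point: fails if v == NULL instead of deleting. */
    PyAPI_FUNC(int) PyObject_SetAttr2(PyObject *o, PyObject *attr_name,
                                      PyObject *v);

    /* The old symbol would stay compiled into the library, so extensions
     * built against earlier headers keep working, while newly compiled
     * code is transparently steered to the strict version: */
    #define PyObject_SetAttr PyObject_SetAttr2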
From victor.stinner at gmail.com Wed Dec 2 09:46:37 2015
From: victor.stinner at gmail.com (Victor Stinner)
Date: Wed, 2 Dec 2015 15:46:37 +0100
Subject: [Python-Dev] Deleting with setting C API functions
In-Reply-To: References: <2085391.yyrIlRt5Sf@raxxla> <565EE6DF.2050105@egenix.com>
Message-ID:

2015-12-02 15:40 GMT+01:00 Random832 :
> Are there plans for a Python 4?

No. Don't. Don't schedule any "removal" or *any* kind of "break
backward compatibility" anymore, or you will definitely kill the Python
community.

It will probably take 10 years or more to convert *all* Python 2 code
around the world to Python 3. I don't want to have to redo the same
thing again. Never ever again.

To be clear: removing functions is fine, but if and only if you have a
smooth transition plan. Sorry, it's unclear to me what a "smooth
transition plan" is.

IMHO the deprecation warnings, which are currently quiet by default, are
not a good idea. Everybody ignores them, and then complains when the
function is really removed.

Maybe I should write an informal PEP to explain my idea.

Victor

From random832 at fastmail.com Wed Dec 2 10:01:48 2015
From: random832 at fastmail.com (Random832)
Date: Wed, 2 Dec 2015 15:01:48 +0000 (UTC)
Subject: [Python-Dev] Deleting with setting C API functions
References: <2085391.yyrIlRt5Sf@raxxla> <565EE6DF.2050105@egenix.com>
Message-ID:

On 2015-12-02, Victor Stinner wrote:
>> Are there plans for a Python 4?
>
> No. Don't. Don't schedule any "removal" or *any* kind of "break
> backward compatibility" anymore, or you will definitely kill the Python
> community.

I feel like I should note that I agree with your position here, I was
just asking the question to articulate the issue that "put it off to the
indefinite future" isn't a real plan for anything.

From stefan.a.popa at intel.com Wed Dec 2 10:18:35 2015
From: stefan.a.popa at intel.com (Popa, Stefan A)
Date: Wed, 2 Dec 2015 15:18:35 +0000
Subject: [Python-Dev] Avoiding CPython performance regressions
In-Reply-To: <7DACEDE4-909D-42D1-B8C7-89967D90A784@intel.com>
References: <20151130135240.3E341251104@webabinitio.net> <5A37912B-B69D-452D-847E-EACBF8E5F3E4@intel.com> <7DACEDE4-909D-42D1-B8C7-89967D90A784@intel.com>
Message-ID: <00EE5484-68ED-4204-ADA8-0287CFF56839@intel.com>

Hi Fabio,

Let me know if you have any questions related to the Python benchmarks
run nightly in Intel's 0-Day Lab.

Thanks,
Stefan

From: "Stewart, David C"
Date: Tuesday 1 December 2015 at 17:26
To: Fabio Zadrozny
Cc: "R. David Murray", "python-dev at python.org", Stefan A Popa
Subject: Re: [Python-Dev] Avoiding CPython performance regressions

From: Fabio Zadrozny
Date: Tuesday, December 1, 2015 at 1:36 AM
To: David Stewart
Cc: "R. David Murray", "python-dev at python.org"
Subject: Re: [Python-Dev] Avoiding CPython performance regressions

On Mon, Nov 30, 2015 at 3:33 PM, Stewart, David C wrote:

On 11/30/15, 5:52 AM, "Python-Dev on behalf of R. David Murray" wrote:
>
>There's also an Intel project posted about here recently that checks
>individual benchmarks for performance regressions and posts the results
>to python-checkins.

The description of the project is at https://01.org/lp - Python results
are indeed sent daily to python-checkins. (No results for Nov 30 and Dec 1
due to Romania National Day holiday!)

There is also a graphic dashboard at http://languagesperformance.intel.com/

Hi Dave,

Interesting, but I'm curious: which benchmark set are you running? From the
graphs it seems it has a really high standard deviation, so I'm curious to
know if that's really due to changes in the CPython codebase / issues in the
benchmark set or in how the benchmarks are run... (it doesn't seem to be the
benchmarks from https://hg.python.org/benchmarks/ right?).

Fabio -- my advice to you is to check out the daily emails sent to
python-checkins.
An example is
https://mail.python.org/pipermail/python-checkins/2015-November/140185.html.
If you still have questions, Stefan can answer (he is copied).

The graphs are really just a manager-level indicator of trends, which I
find very useful (I have it running continuously on one of the monitors in
my office) but core developers might want to see day-to-day the effect of
their changes. (Particularly if they thought one was going to improve
performance. It's nice to see if you get community confirmation.)

We do run nightly a subset of https://hg.python.org/benchmarks/ and run
the full set when we are evaluating our performance patches.

Some of the "benchmarks" really do have a high standard deviation, which
makes them hardly useful for measuring incremental performance
improvements, IMHO. I like to see it spelled out so I can tell whether I
should be worried or not about a particular delta.

Dave

From vgr255 at live.ca Wed Dec 2 10:32:28 2015
From: vgr255 at live.ca (Emanuel Barry)
Date: Wed, 2 Dec 2015 10:32:28 -0500
Subject: [Python-Dev] Deleting with setting C API functions
In-Reply-To: References: , , <2085391.yyrIlRt5Sf@raxxla>, , <565EE6DF.2050105@egenix.com>, ,
Message-ID:

> From: victor.stinner at gmail.com
> Date: Wed, 2 Dec 2015 15:46:37 +0100
> To: random832 at fastmail.com
> Subject: Re: [Python-Dev] Deleting with setting C API functions
> CC: python-dev at python.org
>
> 2015-12-02 15:40 GMT+01:00 Random832 :
> > Are there plans for a Python 4?
>
> No. Don't. Don't schedule any "removal" or *any* kind of "break
> backward compatibility" anymore, or you will definitely kill the Python
> community.

Nick Coghlan made a pretty elaborate blog post about that here:
http://opensource.com/life/14/9/why-python-4-wont-be-python-3

From mdboom at gmail.com Wed Dec 2 09:33:22 2015
From: mdboom at gmail.com (Michael Droettboom)
Date: Wed, 02 Dec 2015 14:33:22 +0000
Subject: [Python-Dev] Avoiding CPython performance regressions
Message-ID:

You may also be interested in a project I've been working on, airspeed
velocity, which will automatically benchmark historical versions of a git
or hg repo. http://github.com/spacetelescope/asv

astropy, scipy, numpy and dask are already using it.

Cheers,
Mike

From abarnert at yahoo.com Wed Dec 2 11:23:30 2015
From: abarnert at yahoo.com (Andrew Barnert)
Date: Wed, 2 Dec 2015 08:23:30 -0800
Subject: [Python-Dev] Deleting with setting C API functions
In-Reply-To: References: <2085391.yyrIlRt5Sf@raxxla> <565EE6DF.2050105@egenix.com>
Message-ID: <93E7D1BD-AB9F-41F2-A3C9-9D329FFBABAC@yahoo.com>

On Dec 2, 2015, at 07:01, Random832 wrote:
>
> On 2015-12-02, Victor Stinner wrote:
>>> Are there plans for a Python 4?
>>
>> No. Don't. Don't schedule any "removal" or *any* kind of "break
>> backward compatibility" anymore, or you will definitely kill the Python
>> community.
>
> I feel like I should note that I agree with your position here, I was
> just asking the question to articulate the issue that "put it off to the
> indefinite future" isn't a real plan for anything.

Python could just go from 3.9 to 4.0, as a regular dot release, just to
dispel the idea of an inevitable backward-incompatible "Python 4". (That
should be around 2 years after the expiration of 2.7 support, py2/py3
naming, etc., right?)
Or, of course, Python could avoid the number 4, go to 3.17 and then decide
that the next release is big enough to be worthy of 5.0. Or go from 3.9 to
2022, or XP, or Python Enterprise Python 1. :)

From guido at python.org Wed Dec 2 11:35:48 2015
From: guido at python.org (Guido van Rossum)
Date: Wed, 2 Dec 2015 08:35:48 -0800
Subject: [Python-Dev] Deleting with setting C API functions
In-Reply-To: References: <2085391.yyrIlRt5Sf@raxxla> <565EE6DF.2050105@egenix.com>
Message-ID:

On Wed, Dec 2, 2015 at 7:32 AM, Emanuel Barry wrote:

> Nick Coghlan made a pretty elaborate blog post about that here:
> http://opensource.com/life/14/9/why-python-4-wont-be-python-3
>

I wholeheartedly agree with what Nick writes there -- but I can't resist
noting that the title is backwards -- the whole point is that Python 4
*will* be like Python 3, i.e. it will *not* differ (in a
backward-incompatible way) from Python 3. What Nick probably meant is "Why
the *transition to* Python 4 won't be like the transition to Python 3."
And that is exactly right. We've learned our lesson (though we're in much
better shape than Perl :-).

--
--Guido van Rossum (python.org/~guido)

From random832 at fastmail.com Wed Dec 2 11:41:10 2015
From: random832 at fastmail.com (Random832)
Date: Wed, 2 Dec 2015 16:41:10 +0000 (UTC)
Subject: [Python-Dev] Deleting with setting C API functions
References: <2085391.yyrIlRt5Sf@raxxla> <565EE6DF.2050105@egenix.com> <93E7D1BD-AB9F-41F2-A3C9-9D329FFBABAC@yahoo.com>
Message-ID:

On 2015-12-02, Andrew Barnert wrote:
> Python could just go from 3.9 to 4.0, as a regular dot release, just
> to dispel the idea of an inevitable backward-incompatible "Python 4".
> (That should be around 2 years after the expiration of 2.7 support,
> py2/py3 naming, etc., right?)

Why bother with the dot? Why not rename 3.5 to Python 5, and then go to
Python 6, etc, and then your "4.0" would be 10.

From barry at python.org Wed Dec 2 11:57:39 2015
From: barry at python.org (Barry Warsaw)
Date: Wed, 2 Dec 2015 11:57:39 -0500
Subject: [Python-Dev] Python 4 musings (was Re: Deleting with setting C API functions)
In-Reply-To: References: <2085391.yyrIlRt5Sf@raxxla> <565EE6DF.2050105@egenix.com>
Message-ID: <20151202115739.3894c7d5@anarchist.wooz.org>

On Dec 02, 2015, at 08:35 AM, Guido van Rossum wrote:

>I wholeheartedly agree with what Nick writes there

As do I.

One interesting point will be what *nix calls the /usr/bin thingie for Python
4. It would seem weird to call it /usr/bin/python3 and symlink it to say
/usr/bin/python4.0 but maybe that's the most practical solution. OTOH, by
2023, Python 2 will at worst be in source-only security release mode, if not
finally retired so maybe we can reclaim /usr/bin/python by then. Oh well, PEP
394 will hash all that out I'm sure.

One other potentially disruptive change would be when Python's Einstein, er
David Beazley, finally cracks the nut of the GIL. Should that require a new
backward incompatible C API, Python 4.0 would be the time to do it.
Cheers,
-Barry

From guido at python.org Wed Dec 2 12:12:28 2015
From: guido at python.org (Guido van Rossum)
Date: Wed, 2 Dec 2015 09:12:28 -0800
Subject: [Python-Dev] Python 4 musings (was Re: Deleting with setting C API functions)
In-Reply-To: <20151202115739.3894c7d5@anarchist.wooz.org>
References: <2085391.yyrIlRt5Sf@raxxla> <565EE6DF.2050105@egenix.com> <20151202115739.3894c7d5@anarchist.wooz.org>
Message-ID:

On Wed, Dec 2, 2015 at 8:57 AM, Barry Warsaw wrote:

> On Dec 02, 2015, at 08:35 AM, Guido van Rossum wrote:
>
> >I wholeheartedly agree with what Nick writes there
>
> As do I.
>
> One interesting point will be what *nix calls the /usr/bin thingie for
> Python
> 4. It would seem weird to call it /usr/bin/python3 and symlink it to say
> /usr/bin/python4.0 but maybe that's the most practical solution. OTOH, by
> 2023, Python 2 will at worst be in source-only security release mode, if
> not
> finally retired so maybe we can reclaim /usr/bin/python by then. Oh well,
> PEP
> 394 will hash all that out I'm sure.
>

Maybe the criterion for switching to 4 would be that all traces of 2 are
gone.

> One other potentially disruptive change would be when Python's Einstein, er
> David Beazley, finally cracks the nut of the GIL. Should that require a
> new
> backward incompatible C API, Python 4.0 would be the time to do it.
>

There would still have to be a backward compatibility API for a very long
time. So I don't see why this particular change (however eagerly
anticipated! :-) should force a major version bump.

--
--Guido van Rossum (python.org/~guido)

From greg at krypto.org Wed Dec 2 18:26:40 2015
From: greg at krypto.org (Gregory P. Smith)
Date: Wed, 02 Dec 2015 23:26:40 +0000
Subject: [Python-Dev] Python 4 musings (was Re: Deleting with setting C API functions)
In-Reply-To: References: <2085391.yyrIlRt5Sf@raxxla> <565EE6DF.2050105@egenix.com> <20151202115739.3894c7d5@anarchist.wooz.org>
Message-ID:

Except that we should skip version 4 and go directly to 5 in homage to
http://www.montypython.net/scripts/HG-handgrenade.php.

On Wed, Dec 2, 2015 at 9:13 AM Guido van Rossum wrote:

> On Wed, Dec 2, 2015 at 8:57 AM, Barry Warsaw wrote:
>
>> On Dec 02, 2015, at 08:35 AM, Guido van Rossum wrote:
>>
>> >I wholeheartedly agree with what Nick writes there
>>
>> As do I.
>>
>> One interesting point will be what *nix calls the /usr/bin thingie for
>> Python
>> 4. It would seem weird to call it /usr/bin/python3 and symlink it to say
>> /usr/bin/python4.0 but maybe that's the most practical solution. OTOH, by
>> 2023, Python 2 will at worst be in source-only security release mode, if
>> not
>> finally retired so maybe we can reclaim /usr/bin/python by then. Oh
>> well, PEP
>> 394 will hash all that out I'm sure.
>>
>
> Maybe the criterion for switching to 4 would be that all traces of 2 are
> gone.
>
>> One other potentially disruptive change would be when Python's Einstein,
>> er
>> David Beazley, finally cracks the nut of the GIL. Should that require a
>> new
>> backward incompatible C API, Python 4.0 would be the time to do it.
>>
>
> There would still have to be a backward compatibility API for a very long
> time. So I don't see why this particular change (however eagerly
> anticipated! :-) should force a major version bump.
>
> --
> --Guido van Rossum (python.org/~guido)
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe:
> https://mail.python.org/mailman/options/python-dev/greg%40krypto.org
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From barry at python.org  Wed Dec  2 18:36:05 2015
From: barry at python.org (Barry Warsaw)
Date: Wed, 2 Dec 2015 18:36:05 -0500
Subject: [Python-Dev] Python 4 musings (was Re: Deleting with setting C API functions)
In-Reply-To: 
References: <2085391.yyrIlRt5Sf@raxxla> <565EE6DF.2050105@egenix.com> <20151202115739.3894c7d5@anarchist.wooz.org>
Message-ID: <20151202183605.336ead3d@limelight.wooz.org>

On Dec 02, 2015, at 11:26 PM, Gregory P. Smith wrote:

>Except that we should skip version 4 and go directly to 5 in homage to
>http://www.montypython.net/scripts/HG-handgrenade.php.

Five is right out.

https://youtu.be/QM9Bynjh2Lk?t=3m35s

-Barry
-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available
Type: application/pgp-signature
Size: 819 bytes
Desc: OpenPGP digital signature
URL: 

From storchaka at gmail.com  Wed Dec  2 18:40:42 2015
From: storchaka at gmail.com (Serhiy Storchaka)
Date: Thu, 3 Dec 2015 01:40:42 +0200
Subject: [Python-Dev] Python 4 musings (was Re: Deleting with setting C API functions)
In-Reply-To: 
References: <2085391.yyrIlRt5Sf@raxxla> <565EE6DF.2050105@egenix.com> <20151202115739.3894c7d5@anarchist.wooz.org>
Message-ID: 

On 03.12.15 01:26, Gregory P. Smith wrote:
> Except that we should skip version 4 and go directly to 5 in homage to
> http://www.montypython.net/scripts/HG-handgrenade.php.

Good point! So now we can reserve version 4 as the label for any stupid ideas that will never be realised.

From greg.ewing at canterbury.ac.nz  Wed Dec  2 19:47:41 2015
From: greg.ewing at canterbury.ac.nz (Greg)
Date: Thu, 03 Dec 2015 13:47:41 +1300
Subject: [Python-Dev] Deleting with setting C API functions
In-Reply-To: 
References: <2085391.yyrIlRt5Sf@raxxla> <565EE6DF.2050105@egenix.com> <93E7D1BD-AB9F-41F2-A3C9-9D329FFBABAC@yahoo.com>
Message-ID: <565F912D.6000609@canterbury.ac.nz>

On 3/12/2015 5:41 a.m., Random832 wrote:
> Why bother with the dot? Why not rename 3.5 to Python 5, and then go to
> Python 6, etc, and then your "4.0" would be 10.

Then we could call it Python X! Everything is better with an X in the name.

-- 
Greg

From breamoreboy at yahoo.co.uk  Wed Dec  2 19:55:50 2015
From: breamoreboy at yahoo.co.uk (Mark Lawrence)
Date: Thu, 3 Dec 2015 00:55:50 +0000
Subject: [Python-Dev] Python 4 musings (was Re: Deleting with setting C API functions)
In-Reply-To: <20151202183605.336ead3d@limelight.wooz.org>
References: <2085391.yyrIlRt5Sf@raxxla> <565EE6DF.2050105@egenix.com> <20151202115739.3894c7d5@anarchist.wooz.org> <20151202183605.336ead3d@limelight.wooz.org>
Message-ID: 

On 02/12/2015 23:36, Barry Warsaw wrote:
> On Dec 02, 2015, at 11:26 PM, Gregory P. Smith wrote:
>
>> Except that we should skip version 4 and go directly to 5 in homage to
>> http://www.montypython.net/scripts/HG-handgrenade.php.
>
> Five is right out.
>
> https://youtu.be/QM9Bynjh2Lk?t=3m35s
>
> -Barry
>

Can we have a PEP on this please, otherwise there's likely to be a great deal of confusion between hand grenade style counting, and the faculty rules at the University of Woolamaloo.

-- 
My fellow Pythonistas, ask not what our language can do for you, ask what you can do for our language.
Mark Lawrence

From ncoghlan at gmail.com  Wed Dec  2 21:24:39 2015
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Thu, 3 Dec 2015 12:24:39 +1000
Subject: [Python-Dev] Deleting with setting C API functions
In-Reply-To: 
References: <2085391.yyrIlRt5Sf@raxxla> <565EE6DF.2050105@egenix.com>
Message-ID: 

On 3 December 2015 at 02:35, Guido van Rossum wrote:
> On Wed, Dec 2, 2015 at 7:32 AM, Emanuel Barry wrote:
>>
>> Nick Coghlan made a pretty elaborate blog post about that here:
>> http://opensource.com/life/14/9/why-python-4-wont-be-python-3
>
> I wholeheartedly agree with what Nick writes there -- but I can't resist
> noting that the title is backwards -- the whole point is that Python 4
> *will* be like Python 3, i.e. it will *not* differ (in a
> backward-incompatible way) from Python 3. What Nick probably meant is "Why
> the *transition to* Python 4 won't be like the transition to Python 3." And
> that is exactly right.

Yeah, the full title was actually "Why Python 4.0 won't be like Python 3.0" which I think better conveys the implied "transition to" aspect, but the zeros got left out in the opensource.com URL. The RHEL dev blog version uses the full title in the URL as well: https://developerblog.redhat.com/2014/09/17/why-python-4-0-wont-be-like-python-3-0/

I also wrote the article before you mentioned you might be amenable to doing a 3.10 instead of rolling over to 4.0 (although I suspect there are even more systems that assume XY is sufficient to identify a Python feature release than assumed XYZ was sufficient to identify maintenance releases prior to 2.7.10)

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From lac at openend.se  Thu Dec  3 07:51:39 2015
From: lac at openend.se (Laura Creighton)
Date: Thu, 3 Dec 2015 13:51:39 +0100
Subject: Python Language Reference has no mention of list comprehensions
Message-ID: <201512031251.tB3Cpdh3014048@fido.openend.se>

Intentional or Oversight?

Laura

From p.f.moore at gmail.com  Thu Dec  3 08:37:17 2015
From: p.f.moore at gmail.com (Paul Moore)
Date: Thu, 3 Dec 2015 13:37:17 +0000
Subject: [Python-Dev] Python Language Reference has no mention of list comprehensions
In-Reply-To: <201512031251.tB3Cpdh3014048@fido.openend.se>
References: <201512031251.tB3Cpdh3014048@fido.openend.se>
Message-ID: 

On 3 December 2015 at 12:51, Laura Creighton wrote:
> Intentional or Oversight?

Hard to find :-)

https://docs.python.org/3/reference/expressions.html#displays-for-lists-sets-and-dictionaries

I went via "Atoms" in the expression section, then followed the links in the actual grammar spec.

Paul

From mal at egenix.com  Thu Dec  3 09:04:09 2015
From: mal at egenix.com (M.-A. Lemburg)
Date: Thu, 3 Dec 2015 15:04:09 +0100
Subject: [Python-Dev] Python Language Reference has no mention of list comprehensions
In-Reply-To: 
References: <201512031251.tB3Cpdh3014048@fido.openend.se>
Message-ID: <56604BD9.4010107@egenix.com>

On 03.12.2015 14:37, Paul Moore wrote:
> On 3 December 2015 at 12:51, Laura Creighton wrote:
>> Intentional or Oversight?
>
> Hard to find :-)
>
> https://docs.python.org/3/reference/expressions.html#displays-for-lists-sets-and-dictionaries
>
> I went via "Atoms" in the expression section, then followed the links
> in the actual grammar spec.
Strange that the doc search facility doesn't find this:

https://docs.python.org/3/search.html?q=comprehension

The human readable documentation is in the tutorial:

https://docs.python.org/3/tutorial/datastructures.html#list-comprehensions

(this is found by the doc search)

-- 
Marc-Andre Lemburg
eGenix.com

Professional Python Services directly from the Experts (#1, Dec 03 2015)
>>> Python Projects, Coaching and Consulting ...  http://www.egenix.com/
>>> Python Database Interfaces ...           http://products.egenix.com/
>>> Plone/Zope Database Interfaces ...           http://zope.egenix.com/
________________________________________________________________________

::: We implement business ideas - efficiently in both time and costs :::

   eGenix.com Software, Skills and Services GmbH  Pastor-Loeh-Str.48
    D-40764 Langenfeld, Germany. CEO Dipl.-Math. Marc-Andre Lemburg
           Registered at Amtsgericht Duesseldorf: HRB 46611
               http://www.egenix.com/company/contact/
                      http://www.malemburg.com/

From lac at openend.se  Thu Dec  3 09:26:23 2015
From: lac at openend.se (Laura Creighton)
Date: Thu, 03 Dec 2015 15:26:23 +0100
Subject: [Python-Dev] Python Language Reference has no mention of list comprehensions
In-Reply-To: 
References: <201512031251.tB3Cpdh3014048@fido.openend.se>
Message-ID: <201512031426.tB3EQNcE015488@fido.openend.se>

In a message of Thu, 03 Dec 2015 13:37:17 +0000, Paul Moore writes:
>On 3 December 2015 at 12:51, Laura Creighton wrote:
>> Intentional or Oversight?
>
>Hard to find :-)
>
>https://docs.python.org/3/reference/expressions.html#displays-for-lists-sets-and-dictionaries
>
>I went via "Atoms" in the expression section, then followed the links
>in the actual grammar spec.
>
>Paul

I think the whole use of the language displays as in

  6.2.4. Displays for lists, sets and dictionaries

  For constructing a list, a set or a dictionary Python provides
  special syntax called "displays", each of them in two flavors:

  either the container contents are listed explicitly, or
  they are computed via a set of looping and filtering instructions,
  called a comprehension.

is very odd. I don't know anybody who talks of 'displays'. They talk of 'two ways to construct a'.

Who came up with the word 'display' and what does it have going for it that I have missed? Right now I think its chief virtue is that it is a meaningless noun. (But not meaningless enough, as I associate displays with output, not construction).

I think that

  6.2.4 Constructing lists, sets and dictionaries

would be a much more useful title, and

  6.2.4 Constructing lists, sets and dictionaries -- explicitly or through the use of comprehensions

an even better one.

Am I missing something important about the 'display' language?

Laura

From random832 at fastmail.com  Thu Dec  3 10:09:12 2015
From: random832 at fastmail.com (Random832)
Date: Thu, 3 Dec 2015 15:09:12 +0000 (UTC)
Subject: [Python-Dev] Python Language Reference has no mention of list comprehensions
References: <201512031251.tB3Cpdh3014048@fido.openend.se> <201512031426.tB3EQNcE015488@fido.openend.se>
Message-ID: 

On 2015-12-03, Laura Creighton wrote:
> Who came up with the word 'display' and what does it have going for
> it that I have missed? Right now I think its chief virtue is that
> it is a meaningless noun. (But not meaningless enough, as I
> associate displays with output, not construction).

In a recent discussion it seemed like people mainly use it because they don't like using "literal" for things other than single token constants.
In most other languages' contexts the equivalent thing would be called a literal.

> I think that
>
>   6.2.4 Constructing lists, sets and dictionaries
>
> would be a much more useful title, and
>
>   6.2.4 Constructing lists, sets and dictionaries -- explicitly or through the use of comprehensions

I don't like the idea of calling it "explicit construction". Explicit construction to me means the actual use of a call to the constructor function.

From lac at openend.se  Thu Dec  3 10:43:21 2015
From: lac at openend.se (Laura Creighton)
Date: Thu, 03 Dec 2015 16:43:21 +0100
Subject: [Python-Dev] Python Language Reference has no mention of list comprehensions
In-Reply-To: 
References: <201512031251.tB3Cpdh3014048@fido.openend.se> <201512031426.tB3EQNcE015488@fido.openend.se>
Message-ID: <201512031543.tB3FhLEw016825@fido.openend.se>

In a message of Thu, 03 Dec 2015 15:09:12 +0000, Random832 writes:
>> 6.2.4 Constructing lists, sets and dictionaries -- explicitly or through the use of comprehensions
>
>I don't like the idea of calling it "explicit construction".
>Explicit construction to me means the actual use of a call to the
>constructor function.

Would

  6.2.4 Creating lists, sets and dictionaries -- explicitly or through the use of comprehensions

get rid of that objection?

Laura

From rymg19 at gmail.com  Thu Dec  3 11:09:56 2015
From: rymg19 at gmail.com (Ryan Gonzalez)
Date: Thu, 03 Dec 2015 10:09:56 -0600
Subject: [Python-Dev] Python Language Reference has no mention of list comprehensions
In-Reply-To: <201512031426.tB3EQNcE015488@fido.openend.se>
References: <201512031251.tB3Cpdh3014048@fido.openend.se> <201512031426.tB3EQNcE015488@fido.openend.se>
Message-ID: <9C995658-5822-425F-9F18-7BE982636391@gmail.com>

On December 3, 2015 8:26:23 AM CST, Laura Creighton wrote:
>In a message of Thu, 03 Dec 2015 13:37:17 +0000, Paul Moore writes:
>>On 3 December 2015 at 12:51, Laura Creighton wrote:
>>> Intentional or Oversight?
>>
>>Hard to find :-)
>>
>>https://docs.python.org/3/reference/expressions.html#displays-for-lists-sets-and-dictionaries
>>
>>I went via "Atoms" in the expression section, then followed the links
>>in the actual grammar spec.
>>
>>Paul
>
>I think the whole use of the language displays as in
>
> 6.2.4. Displays for lists, sets and dictionaries
>
> For constructing a list, a set or a dictionary Python provides
> special syntax called "displays", each of them in two flavors:
>
> either the container contents are listed explicitly, or
> they are computed via a set of looping and filtering instructions,
> called a comprehension.
>
>is very odd. I don't know anybody who talks of 'displays'. They
>talk of 'two ways to construct a'.
>
>Who came up with the word 'display' and what does it have going for
>it that I have missed? Right now I think its chief virtue is that
>it is a meaningless noun. (But not meaningless enough, as I
>associate displays with output, not construction).
>
>I think that
>
> 6.2.4 Constructing lists, sets and dictionaries
>
>would be a much more useful title, and
>
>6.2.4 Constructing lists, sets and dictionaries -- explicitly or
>through the use of comprehensions
>

What about:

6.2.4 Constricting lists, sets, and dictionaries (including comprehensions)

or something to that effect?

>an even better one.
>
>Am I missing something important about the 'display' language?
>
>Laura
>_______________________________________________
>Python-Dev mailing list
>Python-Dev at python.org
>https://mail.python.org/mailman/listinfo/python-dev
>Unsubscribe:
>https://mail.python.org/mailman/options/python-dev/rymg19%40gmail.com

-- 
Sent from my Nexus 5 with K-9 Mail. Please excuse my brevity.

From rymg19 at gmail.com  Thu Dec  3 11:11:23 2015
From: rymg19 at gmail.com (Ryan Gonzalez)
Date: Thu, 03 Dec 2015 10:11:23 -0600
Subject: [Python-Dev] Python Language Reference has no mention of list comprehensions
In-Reply-To: <9C995658-5822-425F-9F18-7BE982636391@gmail.com>
References: <201512031251.tB3Cpdh3014048@fido.openend.se> <201512031426.tB3EQNcE015488@fido.openend.se> <9C995658-5822-425F-9F18-7BE982636391@gmail.com>
Message-ID: 

On December 3, 2015 10:09:56 AM CST, Ryan Gonzalez wrote:
>On December 3, 2015 8:26:23 AM CST, Laura Creighton wrote:
>>In a message of Thu, 03 Dec 2015 13:37:17 +0000, Paul Moore writes:
>>>On 3 December 2015 at 12:51, Laura Creighton wrote:
>>>> Intentional or Oversight?
>>>
>>>Hard to find :-)
>>>
>>>https://docs.python.org/3/reference/expressions.html#displays-for-lists-sets-and-dictionaries
>>>
>>>I went via "Atoms" in the expression section, then followed the links
>>>in the actual grammar spec.
>>>
>>>Paul
>>
>>I think the whole use of the language displays as in
>>
>> 6.2.4. Displays for lists, sets and dictionaries
>>
>> For constructing a list, a set or a dictionary Python provides
>> special syntax called "displays", each of them in two flavors:
>>
>> either the container contents are listed explicitly, or
>> they are computed via a set of looping and filtering instructions,
>> called a comprehension.
>>
>>is very odd. I don't know anybody who talks of 'displays'. They
>>talk of 'two ways to construct a'.
>>
>>Who came up with the word 'display' and what does it have going for
>>it that I have missed? Right now I think its chief virtue is that
>>it is a meaningless noun. (But not meaningless enough, as I
>>associate displays with output, not construction).
>>
>>I think that
>>
>> 6.2.4 Constructing lists, sets and dictionaries
>>
>>would be a much more useful title, and
>>
>>6.2.4 Constructing lists, sets and dictionaries -- explicitly or
>>through the use of comprehensions
>>
>
>What about:
>
>6.2.4 Constricting lists, sets, and dictionaries (including
>comprehensions)
>

Whoops! I meant "Constructing", not "Constricting". Pythons definitely constrict their prey, but that's not what I was referring to...

>or something to that effect?
>
>>an even better one.
>>
>>Am I missing something important about the 'display' language?
>>
>>Laura
>>_______________________________________________
>>Python-Dev mailing list
>>Python-Dev at python.org
>>https://mail.python.org/mailman/listinfo/python-dev
>>Unsubscribe:
>>https://mail.python.org/mailman/options/python-dev/rymg19%40gmail.com

-- 
Sent from my Nexus 5 with K-9 Mail. Please excuse my brevity.
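To make the two "flavors" being debated concrete, here is a minimal sketch; the variable names are illustrative only, and both forms build the same list:

    squares_listed  = [0, 1, 4, 9, 16]          # contents listed explicitly
    squares_derived = [n**2 for n in range(5)]  # computed via a comprehension
    assert squares_listed == squares_derived

The reference section under discussion covers both of these forms under the single heading "displays".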
From python at mrabarnett.plus.com  Thu Dec  3 11:15:30 2015
From: python at mrabarnett.plus.com (MRAB)
Date: Thu, 3 Dec 2015 16:15:30 +0000
Subject: [Python-Dev] Python Language Reference has no mention of list comprehensions
In-Reply-To: 
References: <201512031251.tB3Cpdh3014048@fido.openend.se> <201512031426.tB3EQNcE015488@fido.openend.se>
Message-ID: <56606AA2.7040100@mrabarnett.plus.com>

On 2015-12-03 15:09, Random832 wrote:
> On 2015-12-03, Laura Creighton wrote:
>> Who came up with the word 'display' and what does it have going for
>> it that I have missed? Right now I think its chief virtue is that
>> it is a meaningless noun. (But not meaningless enough, as I
>> associate displays with output, not construction).
>
> In a recent discussion it seemed like people mainly use it
> because they don't like using "literal" for things other than
> single token constants. In most other languages' contexts the
> equivalent thing would be called a literal.
>
"Literals" also tend to be constants, or be constructed out of constants.

A list comprehension can contain functions, etc.

>> I think that
>>
>> 6.2.4 Constructing lists, sets and dictionaries
>>
>> would be a much more useful title, and
>>
>> 6.2.4 Constructing lists, sets and dictionaries -- explicitly or through the use of comprehensions
>
> I don't like the idea of calling it "explicit construction".
> Explicit construction to me means the actual use of a call to the
> constructor function.
>

From mal at egenix.com  Thu Dec  3 11:34:41 2015
From: mal at egenix.com (M.-A. Lemburg)
Date: Thu, 3 Dec 2015 17:34:41 +0100
Subject: [Python-Dev] Python Language Reference has no mention of list comprehensions
In-Reply-To: <9C995658-5822-425F-9F18-7BE982636391@gmail.com>
References: <201512031251.tB3Cpdh3014048@fido.openend.se> <201512031426.tB3EQNcE015488@fido.openend.se> <9C995658-5822-425F-9F18-7BE982636391@gmail.com>
Message-ID: <56606F21.5080801@egenix.com>

On 03.12.2015 17:09, Ryan Gonzalez wrote:
> On December 3, 2015 8:26:23 AM CST, Laura Creighton wrote:
>> In a message of Thu, 03 Dec 2015 13:37:17 +0000, Paul Moore writes:
>>> On 3 December 2015 at 12:51, Laura Creighton wrote:
>>>> Intentional or Oversight?
>>>
>>> Hard to find :-)
>>>
>>> https://docs.python.org/3/reference/expressions.html#displays-for-lists-sets-and-dictionaries
>>>
>>> I went via "Atoms" in the expression section, then followed the links
>>> in the actual grammar spec.
>>>
>>> Paul
>>
>> I think the whole use of the language displays as in
>>
>> 6.2.4. Displays for lists, sets and dictionaries
>>
>> For constructing a list, a set or a dictionary Python provides
>> special syntax called "displays", each of them in two flavors:
>>
>> either the container contents are listed explicitly, or
>> they are computed via a set of looping and filtering instructions,
>> called a comprehension.
>>
>> is very odd. I don't know anybody who talks of 'displays'. They
>> talk of 'two ways to construct a'.
>>
>> Who came up with the word 'display' and what does it have going for
>> it that I have missed? Right now I think its chief virtue is that
>> it is a meaningless noun. (But not meaningless enough, as I
>> associate displays with output, not construction).
>>
>> I think that
>>
>> 6.2.4 Constructing lists, sets and dictionaries
>>
>> would be a much more useful title, and
>>
>> 6.2.4 Constructing lists, sets and dictionaries -- explicitly or
>> through the use of comprehensions
>>
>
> What about:
>
> 6.2.4 Constricting lists, sets, and dictionaries (including comprehensions)
>
> or something to that effect?
>
>> an even better one.
>>
>> Am I missing something important about the 'display' language?

I don't think changing a single header is useful in this case.

The grammar uses the token term "display" to mean "representation of an object". While you normally only think of output when talking of the representation of an object, it can also refer to the visual definition of an object when passed to the parser.

A list comprehension is an example of such a visual definition of an object, hence the token name.

If we were to change the term, we'd have to change it throughout the reference, grammar and parser implementation.

-- 
Marc-Andre Lemburg
eGenix.com

Professional Python Services directly from the Experts (#1, Dec 03 2015)
>>> Python Projects, Coaching and Consulting ...  http://www.egenix.com/
>>> Python Database Interfaces ...           http://products.egenix.com/
>>> Plone/Zope Database Interfaces ...           http://zope.egenix.com/
________________________________________________________________________

::: We implement business ideas - efficiently in both time and costs :::

   eGenix.com Software, Skills and Services GmbH  Pastor-Loeh-Str.48
    D-40764 Langenfeld, Germany. CEO Dipl.-Math. Marc-Andre Lemburg
           Registered at Amtsgericht Duesseldorf: HRB 46611
               http://www.egenix.com/company/contact/
                      http://www.malemburg.com/

From rdmurray at bitdance.com  Thu Dec  3 11:47:23 2015
From: rdmurray at bitdance.com (R. David Murray)
Date: Thu, 03 Dec 2015 11:47:23 -0500
Subject: [Python-Dev] Python Language Reference has no mention of list comprehensions
In-Reply-To: <56606AA2.7040100@mrabarnett.plus.com>
References: <201512031251.tB3Cpdh3014048@fido.openend.se> <201512031426.tB3EQNcE015488@fido.openend.se> <56606AA2.7040100@mrabarnett.plus.com>
Message-ID: <20151203164723.D28E2B90082@webabinitio.net>

On Thu, 03 Dec 2015 16:15:30 +0000, MRAB wrote:
> On 2015-12-03 15:09, Random832 wrote:
> > On 2015-12-03, Laura Creighton wrote:
> >> Who came up with the word 'display' and what does it have going for
> >> it that I have missed? Right now I think its chief virtue is that
> >> it is a meaningless noun. (But not meaningless enough, as I
> >> associate displays with output, not construction).
> >
> > In a recent discussion it seemed like people mainly use it
> > because they don't like using "literal" for things other than
> > single token constants. In most other languages' contexts the
> > equivalent thing would be called a literal.
> >
> "Literals" also tend to be constants, or be constructed out of
> constants.
>
> A list comprehension can contain functions, etc.

Actually, it looks like Random832 is right. The docs for ast.literal_eval say "a Python literal or container display". Which also means we are using the term 'display' inconsistently, since literal_eval will not eval a comprehension.

--David
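A quick sketch of the inconsistency David describes (the exact wording of the error message may vary by Python version):

    import ast

    # A container display built from literals is accepted...
    print(ast.literal_eval("[1, 2, 3]"))        # -> [1, 2, 3]

    # ...but a comprehension -- also a "display" per the reference -- is not:
    try:
        ast.literal_eval("[n for n in range(3)]")
    except ValueError as exc:
        print(exc)                              # malformed node or string: ...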
From p.f.moore at gmail.com  Thu Dec  3 12:04:50 2015
From: p.f.moore at gmail.com (Paul Moore)
Date: Thu, 3 Dec 2015 17:04:50 +0000
Subject: [Python-Dev] Python Language Reference has no mention of list comprehensions
In-Reply-To: <201512031426.tB3EQNcE015488@fido.openend.se>
References: <201512031251.tB3Cpdh3014048@fido.openend.se> <201512031426.tB3EQNcE015488@fido.openend.se>
Message-ID: 

On 3 December 2015 at 14:26, Laura Creighton wrote:
> Am I missing something important about the 'display' language?

It's a term that's used in the lisp and/or functional programming communities, I believe. And I think I recollect that something similar is used in (mathematical) set theory. So it's not completely an invented term.

But that's not to say it's particularly obvious in this context...
Paul

From guido at python.org  Thu Dec  3 12:20:17 2015
From: guido at python.org (Guido van Rossum)
Date: Thu, 3 Dec 2015 09:20:17 -0800
Subject: [Python-Dev] Python Language Reference has no mention of list comprehensions
In-Reply-To: 
References: <201512031251.tB3Cpdh3014048@fido.openend.se> <201512031426.tB3EQNcE015488@fido.openend.se>
Message-ID: 

I borrowed 'display' from the formal definition of ABC. It's still used in the quick reference: http://homepages.cwi.nl/~steven/abc/qr.html#EXPRESSIONS . I hadn't heard it before and didn't think to research its heritage. I like it for list/set/dict displays since it's rather a stretch to call those literals (they can contain expressions after all). I don't think of a comprehension as a display though (even though it's syntactically related).

On Thu, Dec 3, 2015 at 9:04 AM, Paul Moore wrote:

> On 3 December 2015 at 14:26, Laura Creighton wrote:
> > Am I missing something important about the 'display' language?
>
> It's a term that's used in the lisp and/or functional programming
> communities, I believe. And I think I recollect that something similar
> is used in (mathematical) set theory. So it's not completely an
> invented term.
>
> But that's not to say it's particularly obvious in this context...
> Paul
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe:
> https://mail.python.org/mailman/options/python-dev/guido%40python.org

-- 
--Guido van Rossum (python.org/~guido)
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From abarnert at yahoo.com  Thu Dec  3 12:25:53 2015
From: abarnert at yahoo.com (Andrew Barnert)
Date: Thu, 3 Dec 2015 09:25:53 -0800
Subject: [Python-Dev] Python Language Reference has no mention of list comprehensions
In-Reply-To: <56606AA2.7040100@mrabarnett.plus.com>
References: <201512031251.tB3Cpdh3014048@fido.openend.se> <201512031426.tB3EQNcE015488@fido.openend.se> <56606AA2.7040100@mrabarnett.plus.com>
Message-ID: <61A6465F-00EE-4B5E-8DC8-9A4DB01B08C0@yahoo.com>

> On Dec 3, 2015, at 08:15, MRAB wrote:
>
>>> On 2015-12-03 15:09, Random832 wrote:
>>> On 2015-12-03, Laura Creighton wrote:
>>> Who came up with the word 'display' and what does it have going for
>>> it that I have missed? Right now I think its chief virtue is that
>>> it is a meaningless noun. (But not meaningless enough, as I
>>> associate displays with output, not construction).
>>
>> In a recent discussion it seemed like people mainly use it
>> because they don't like using "literal" for things other than
>> single token constants.
>> In most other languages' contexts the
>> equivalent thing would be called a literal.

> "Literals" also tend to be constants, or be constructed out of
> constants.

I've seen people saying that before, but I don't know where they get that. It's certainly not the way, say, C++ or JavaScript use the term. But I don't see any point in arguing about it if people just accept that "literal" is too broad a term to capture any useful intuition here.

> A list comprehension can contain functions, etc.

A non-comprehension display can include function calls, lambdas, or any other kind of expression, just as easily as a comprehension can. Is [1, x, f(y), lambda z: w+z] a literal? If so, why isn't [i*x for i in y] a literal?

The problem is that we need a word that distinguishes the former; trying to press "literal" into service to help the distinction doesn't help.

At some point, Python distinguished between displays and comprehensions; I'm assuming someone realized there's no principled sense in which a comprehension isn't also a display, and now we're stuck with no word again.

>>> I think that
>>>
>>> 6.2.4 Constructing lists, sets and dictionaries
>>>
>>> would be a much more useful title, and
>>>
>>> 6.2.4 Constructing lists, sets and dictionaries -- explicitly or through the use of comprehensions
>>
>> I don't like the idea of calling it "explicit construction".
>> Explicit construction to me means the actual use of a call to the
>> constructor function.

Agreed. The obvious mathematical terms are "extension" and "intension", but I get the feeling nobody would go for that.

Ultimately, the best we have is "displays that aren't comprehensions" or "constructions that aren't comprehensions". Which means that something like "list, set, and dictionary displays (including comprehensions)" is about as good as you can make it without inventing a new term. There's nothing to contrast comprehensions with.

From lac at openend.se  Thu Dec  3 12:30:25 2015
From: lac at openend.se (Laura Creighton)
Date: Thu, 03 Dec 2015 18:30:25 +0100
Subject: [Python-Dev] Python Language Reference has no mention of list comprehensions
In-Reply-To: 
References: <201512031251.tB3Cpdh3014048@fido.openend.se> <201512031426.tB3EQNcE015488@fido.openend.se>
Message-ID: <201512031730.tB3HUPEv018874@fido.openend.se>

What I would like is for a person who has just seen a list comprehension for the very first time, and been told what it is, to have a much, much easier time finding it in the Reference Manual.

Would a section on comprehensions in general, defining what a comprehension is, be appropriate?

Laura

From stephen at xemacs.org  Thu Dec  3 12:31:05 2015
From: stephen at xemacs.org (Stephen J. Turnbull)
Date: Fri, 4 Dec 2015 02:31:05 +0900
Subject: [Python-Dev] Python Language Reference has no mention of list comprehensions
In-Reply-To: <201512031426.tB3EQNcE015488@fido.openend.se>
References: <201512031251.tB3Cpdh3014048@fido.openend.se> <201512031426.tB3EQNcE015488@fido.openend.se>
Message-ID: <22112.31833.681480.237349@turnbull.sk.tsukuba.ac.jp>

Laura Creighton writes:

> Am I missing something important about the 'display' language?

A display is a constructor that looks like a literal but isn't. It is syntactically like the printed output, but may contain expressions to be evaluated at runtime as well as compile-time constant expressions that can be "folded". I find it useful to have a single word that means that, and can't think of a better one.
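A concrete illustration of that folding distinction, as a sketch against CPython (the exact bytecode varies across versions, and other implementations may differ):

    import dis

    # A tuple display made only of constants can be folded into one constant:
    dis.dis(compile("(1, 2, 3)", "<demo>", "eval"))   # LOAD_CONST (1, 2, 3)

    # A list display is evaluated at run time, constant elements or not:
    dis.dis(compile("[1, 2, 3]", "<demo>", "eval"))   # built element by element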
I suppose "display" was chosen because the syntax is intended to "look like" the constructed object (ie, its printable representation). A comprehension corresponds to what is often called "set-builder notation" for sets; it doesn't look like the print representation.

I'd be perfectly happy to include comprehensions in the concept of display, but Guido says no, and I'm happy to have them be different too. :-)

I don't know if you missed any of that, I don't claim that it's terribly important, and Your Mileage May Vary, but it works for me. :-)

BTW, I don't care if usage is consistent in this case. I like consistency, but insisting on it here would be an Emersonian hobgoblin IMO (again, YMMV).

From mal at egenix.com  Thu Dec  3 12:37:55 2015
From: mal at egenix.com (M.-A. Lemburg)
Date: Thu, 3 Dec 2015 18:37:55 +0100
Subject: [Python-Dev] Python Language Reference has no mention of list comprehensions
In-Reply-To: <201512031730.tB3HUPEv018874@fido.openend.se>
References: <201512031251.tB3Cpdh3014048@fido.openend.se> <201512031426.tB3EQNcE015488@fido.openend.se> <201512031730.tB3HUPEv018874@fido.openend.se>
Message-ID: <56607DF3.30705@egenix.com>

On 03.12.2015 18:30, Laura Creighton wrote:
> What I would like is for a person who has just seen a list comprehension
> for the very first time, and been told what it is, to have a much, much
> easier time finding it in the Reference Manual.

Such a person should more likely be directed to the tutorial rather than the very technical language spec :-)

> Would a section on comprehensions in general, defining what a comprehension
> is, be appropriate?

We already have this in the tutorial:

https://docs.python.org/3/tutorial/datastructures.html#list-comprehensions

Cheers,

-- 
Marc-Andre Lemburg
eGenix.com

Professional Python Services directly from the Experts (#1, Dec 03 2015)
>>> Python Projects, Coaching and Consulting ...  http://www.egenix.com/
>>> Python Database Interfaces ...           http://products.egenix.com/
>>> Plone/Zope Database Interfaces ...           http://zope.egenix.com/
________________________________________________________________________

::: We implement business ideas - efficiently in both time and costs :::

   eGenix.com Software, Skills and Services GmbH  Pastor-Loeh-Str.48
    D-40764 Langenfeld, Germany. CEO Dipl.-Math. Marc-Andre Lemburg
           Registered at Amtsgericht Duesseldorf: HRB 46611
               http://www.egenix.com/company/contact/
                      http://www.malemburg.com/

From lac at openend.se  Thu Dec  3 13:27:11 2015
From: lac at openend.se (Laura Creighton)
Date: Thu, 03 Dec 2015 19:27:11 +0100
Subject: [Python-Dev] Python Language Reference has no mention of list comprehensions
In-Reply-To: <22112.31833.681480.237349@turnbull.sk.tsukuba.ac.jp>
References: <201512031251.tB3Cpdh3014048@fido.openend.se> <201512031426.tB3EQNcE015488@fido.openend.se> <22112.31833.681480.237349@turnbull.sk.tsukuba.ac.jp>
Message-ID: <201512031827.tB3IRBT9019786@fido.openend.se>

So how do we get search to work so that people in the Language Reference who type in 'List Comprehension' get a hit?

Laura

From mal at egenix.com  Thu Dec  3 15:00:13 2015
From: mal at egenix.com (M.-A. Lemburg)
Date: Thu, 3 Dec 2015 21:00:13 +0100
Subject: [Python-Dev] Python Language Reference has no mention of list comprehensions
In-Reply-To: <201512031827.tB3IRBT9019786@fido.openend.se>
References: <201512031251.tB3Cpdh3014048@fido.openend.se> <201512031426.tB3EQNcE015488@fido.openend.se> <22112.31833.681480.237349@turnbull.sk.tsukuba.ac.jp> <201512031827.tB3IRBT9019786@fido.openend.se>
Message-ID: <56609F4D.4050600@egenix.com>

On 03.12.2015 19:27, Laura Creighton wrote:
> So how do we get search to work so that people in the Language
> Reference who type in 'List Comprehension' get a hit?

It seems that the search index is broken for at least a few documentation file releases:

ok:     https://docs.python.org/2.6/search.html?q=comprehension&check_keywords=yes&area=default
not ok: https://docs.python.org/2.7/search.html?q=comprehension&check_keywords=yes&area=default
ok:     https://docs.python.org/3.2/search.html?q=comprehension&check_keywords=yes&area=default
not ok: https://docs.python.org/3.3/search.html?q=comprehension&check_keywords=yes&area=default
not ok: https://docs.python.org/3.4/search.html?q=comprehension&check_keywords=yes&area=default
not ok: https://docs.python.org/3.5/search.html?q=comprehension&check_keywords=yes&area=default

(ok = "/reference/expressions.html is found")

Interestingly, these URLs give different results, e.g.

ok:     https://docs.python.org/release/2.7.1/search.html?q=comprehension&check_keywords=yes&area=default
ok:     https://docs.python.org/release/2.7.2/search.html?q=comprehension&check_keywords=yes&area=default
ok:     https://docs.python.org/release/2.7.3/search.html?q=comprehension&check_keywords=yes&area=default
ok:     https://docs.python.org/release/2.7.4/search.html?q=comprehension&check_keywords=yes&area=default
ok:     https://docs.python.org/release/2.7.5/search.html?q=comprehension&check_keywords=yes&area=default
ok:     https://docs.python.org/release/2.7.6/search.html?q=comprehension&check_keywords=yes&area=default
ok:     https://docs.python.org/release/2.7.7/search.html?q=comprehension&check_keywords=yes&area=default
ok:     https://docs.python.org/release/2.7.8/search.html?q=comprehension&check_keywords=yes&area=default
not ok: https://docs.python.org/release/2.7.9/search.html?q=comprehension&check_keywords=yes&area=default
not ok: https://docs.python.org/release/2.7.10/search.html?q=comprehension&check_keywords=yes&area=default

Looks like something changed between the 2.7.8 and 2.7.9 release.

-- 
Marc-Andre Lemburg
eGenix.com

Professional Python Services directly from the Experts (#1, Dec 03 2015)
>>> Python Projects, Coaching and Consulting ...  http://www.egenix.com/
>>> Python Database Interfaces ...           http://products.egenix.com/
>>> Plone/Zope Database Interfaces ...           http://zope.egenix.com/
________________________________________________________________________

::: We implement business ideas - efficiently in both time and costs :::

   eGenix.com Software, Skills and Services GmbH  Pastor-Loeh-Str.48
    D-40764 Langenfeld, Germany. CEO Dipl.-Math. Marc-Andre Lemburg
           Registered at Amtsgericht Duesseldorf: HRB 46611
               http://www.egenix.com/company/contact/
                      http://www.malemburg.com/

From steve at pearwood.info  Thu Dec  3 20:25:14 2015
From: steve at pearwood.info (Steven D'Aprano)
Date: Fri, 4 Dec 2015 12:25:14 +1100
Subject: [Python-Dev] Python Language Reference has no mention of list comprehensions
In-Reply-To: <61A6465F-00EE-4B5E-8DC8-9A4DB01B08C0@yahoo.com>
References: <201512031251.tB3Cpdh3014048@fido.openend.se> <201512031426.tB3EQNcE015488@fido.openend.se> <56606AA2.7040100@mrabarnett.plus.com> <61A6465F-00EE-4B5E-8DC8-9A4DB01B08C0@yahoo.com>
Message-ID: <20151204012514.GB3821@ando.pearwood.info>

On Thu, Dec 03, 2015 at 09:25:53AM -0800, Andrew Barnert via Python-Dev wrote:
>> On Dec 3, 2015, at 08:15, MRAB wrote:
>>
>>>> On 2015-12-03 15:09, Random832 wrote:
>>>> On 2015-12-03, Laura Creighton wrote:
>>>> Who came up with the word 'display' and what does it have going for
>>>> it that I have missed? Right now I think its chief virtue is that
>>>> it is a meaningless noun. (But not meaningless enough, as I
>>>> associate displays with output, not construction).

I completely agree with Laura here -- to me "display" means output, not construction, no matter what the functional programming community says :-) but I suppose the connection is that you can construct a list using the same syntax used to display that list: [1, 2, 3] say.

I don't think the term "display" will ever feel natural to me, but I have got used to it.

Random832 wrote:

>>> In a recent discussion it seemed like people mainly use it
>>> because they don't like using "literal" for things other than
>>> single token constants. In most other languages' contexts the
>>> equivalent thing would be called a literal.

I'm not sure where you get "most" other languages from. At the very least, I'd want to see a language survey. I did a *very* fast one (an entire three languages *wink*) and found these results:

The equivalent of a list [1, a, func(), x+y] is called:

  "display" (Python)

  "literal" (Ruby)

  "constructor" (Lua)

http://ruby-doc.org/core-2.1.1/doc/syntax/literals_rdoc.html#label-Arrays
http://www.lua.org/manual/5.1/manual.html

Of the three, I think Lua's terminology is least worst.

MRAB:

>> "Literals" also tend to be constants, or be constructed out of
>> constants.

Andrew:

> I've seen people saying that before, but I don't know where they get
> that. It's certainly not the way, say, C++ or JavaScript use the term.

I wouldn't take either of those two languages as examples of best practices in language design :-)

"Literal" in computing has usually meant something like MRAB's sense for close on 20 years, at least. This definition is from FOLDOC (Free On-Line Dictionary Of Computing), dated 1996-01-23:

    literal

    A constant made available to a process, by inclusion in the
    executable text. Most modern systems do not allow texts to modify
    themselves during execution, so literals are indeed constant;
    their value is written at compile-time and is read-only at run
    time.

    In contrast, values placed in variables or files and accessed by
    the process via a symbolic name, can be changed during execution.
    This may be an asset. For example, messages can be given in a
    choice of languages by placing the translation in a file.

    Literals are used when such modification is not desired.
Literals can be accessed quickly, a potential advantage of their use. I think that an important factor is that "literal" is a description of something in source code, like "expression" and "declaration". We surely don't want a distinction between the *values* x and y below: x = 3.14159 y = 4.0 - len("a") y += 0.14159 but we might want to distinguish between the way they are constructed: x is constructed from a literal, y is not. [...] > > A list comprehension can contain functions, etc. > > A non-comprehension display can include function calls, lambdas, or > any other kind of expression, just as easily as a comprehension can. > Is [1, x, f(y), lambda z: w+z] a literal? If so, why isn't [i*x for i > in y] a literal? I wouldn't call either a literal. I often find myself (mis)using the term "literal" to describe constructing a list using a display where each item is itself a literal: x = [1, 2, 3] (or at least something which *could* have been a literal, if Python's parsing rules were just a tiny bit different, like -1 or 2+3j) but I accept that's an abuse of the term. But I certainly wouldn't use the term to describe a list constructed from non-literal parts: x = [a, b**2, func() or None] and absolutely not for a list comprehension. > The problem is that we need a word that distinguishes the former; > trying to press "literal" into service to help the distinction doesn't > help. > > At some point, Python distinguished between displays and > comprehensions; I'm assuming someone realized there's no principled > sense in which a comprehension isn't also a display, and now we're > stuck with no word again. I don't think comprehensions are displays. They certainly look different, both in input form and output form: py> [1, 2, 4, 8, 16] # Display. [1, 2, 4, 8, 16] py> [2**n for n in range(5)] # Comprehension. [1, 2, 4, 8, 16] Lists display using display syntax, not comprehension syntax. Obviously the list you get (the value of the object) is the same whichever syntax you use, but the syntax is quite different. [...] > Ultimately, the best we have is "displays that aren't comprehensions" > or "constructions that aren't comprehensions". I don't think that's right. We can easily distinguish a display from a comprehension: - displays use the comma-separated item syntax [1, 2, 3], the same syntax used for output; - comprehensions use a variation on "set builder" syntax from mathematics, using for-loop syntax [expr for x in seq]. I don't see any good reason for maintaining that there's just one syntax, "display", which comes in two forms: a comma-separated set of values, or a for-loop. The only thing they have in common (syntax-wise) is that they both use [ ] as delimiters. They look different, they behave differently, and only one matches what the list actually displays as. Why use one term for what is clearly two distinct (if related) syntaxes? 
From rosuav at gmail.com  Thu Dec  3 20:56:45 2015
From: rosuav at gmail.com (Chris Angelico)
Date: Fri, 4 Dec 2015 12:56:45 +1100
Subject: [Python-Dev] Python Language Reference has no mention of list comprehensions
In-Reply-To: <20151204012514.GB3821@ando.pearwood.info>
References: <201512031251.tB3Cpdh3014048@fido.openend.se> <201512031426.tB3EQNcE015488@fido.openend.se> <56606AA2.7040100@mrabarnett.plus.com> <61A6465F-00EE-4B5E-8DC8-9A4DB01B08C0@yahoo.com> <20151204012514.GB3821@ando.pearwood.info>
Message-ID: 

On Fri, Dec 4, 2015 at 12:25 PM, Steven D'Aprano wrote:
> I don't see any good reason for maintaining that there's just one
> syntax, "display", which comes in two forms: a comma-separated set of
> values, or a for-loop. The only thing they have in common (syntax-wise)
> is that they both use [ ] as delimiters. They look different, they
> behave differently, and only one matches what the list actually displays
> as. Why use one term for what is clearly two distinct (if related)
> syntaxes?

You come across something syntactic that begins by opening a square bracket, and you know that its semantics are: "construct a new list". That's what's common here.

What goes *inside* those brackets can be one of two things:

1) A (possibly empty) comma-separated sequence of expressions

2) One or more nested 'for' loops, possibly guarded by 'if's, and a single expression

So we have two subforms of the same basic syntax. The first one corresponds better to the output format, in the same way that a string literal might correspond to its repr under specific circumstances. Neither is a literal. Neither is a call to a constructor function (contrast "list()" or "list.__new__(list)", which do call a constructor). So what is this shared syntax? Whatever word is used, it's going to be a bit wrong. I'd be happy with either "constructor" or "display", myself.

ChrisA

From v+python at g.nevcal.com  Thu Dec  3 21:01:54 2015
From: v+python at g.nevcal.com (Glenn Linderman)
Date: Thu, 3 Dec 2015 18:01:54 -0800
Subject: [Python-Dev] Python Language Reference has no mention of list comprehensions
In-Reply-To: 
References: <201512031251.tB3Cpdh3014048@fido.openend.se> <201512031426.tB3EQNcE015488@fido.openend.se> <56606AA2.7040100@mrabarnett.plus.com> <61A6465F-00EE-4B5E-8DC8-9A4DB01B08C0@yahoo.com> <20151204012514.GB3821@ando.pearwood.info>
Message-ID: <5660F412.4030909@g.nevcal.com>

On 12/3/2015 5:56 PM, Chris Angelico wrote:
> You come across something syntactic that begins by opening a square
> bracket, and you know that its semantics are: "construct a new list".
> That's what's common here.
>
> What goes *inside* those brackets can be one of two things:
>
> 1) A (possibly empty) comma-separated sequence of expressions
>
> 2) One or more nested 'for' loops, possibly guarded by 'if's, and a
> single expression
>
> So we have two subforms of the same basic syntax. The first one
> corresponds better to the output format, in the same way that a string
> literal might correspond to its repr under specific circumstances.
> Neither is a literal. Neither is a call to a constructor function
> (contrast "list()" or "list.__new__(list)", which do call a
> constructor). So what is this shared syntax? Whatever word is used,
> it's going to be a bit wrong. I'd be happy with either "constructor"
> or "display", myself.

Construction. It includes an implicit constructor call and does more.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
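Chris's second subform, sketched concretely (the names are illustrative only):

    # Nested 'for' loops with an 'if' guard, plus a single result expression:
    pairs = [(x, y)
             for x in range(3)    # outer loop
             for y in range(3)    # nested loop
             if x != y]           # guard
    # -> [(0, 1), (0, 2), (1, 0), (1, 2), (2, 0), (2, 1)]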
From python at mrabarnett.plus.com  Thu Dec  3 21:42:19 2015
From: python at mrabarnett.plus.com (MRAB)
Date: Fri, 4 Dec 2015 02:42:19 +0000
Subject: [Python-Dev] Python Language Reference has no mention of list comprehensions
In-Reply-To: <20151204012514.GB3821@ando.pearwood.info>
References: <201512031251.tB3Cpdh3014048@fido.openend.se> <201512031426.tB3EQNcE015488@fido.openend.se> <56606AA2.7040100@mrabarnett.plus.com> <61A6465F-00EE-4B5E-8DC8-9A4DB01B08C0@yahoo.com> <20151204012514.GB3821@ando.pearwood.info>
Message-ID: <5660FD8B.1070803@mrabarnett.plus.com>

On 2015-12-04 01:25, Steven D'Aprano wrote:
[snip]
> I often find myself (mis)using the term "literal" to describe
> constructing a list using a display where each item is itself a literal:
>
>     x = [1, 2, 3]
>
Where there can be a grey area is in those languages where strings are mutable: you can assign a string literal to a variable and then mutate it. In that case, there's copying going on, either on the assignment or on the modification (copy on write, more efficient than always copying when, say, passing it into a function).
[snip]

From python at mrabarnett.plus.com  Thu Dec  3 21:43:26 2015
From: python at mrabarnett.plus.com (MRAB)
Date: Fri, 4 Dec 2015 02:43:26 +0000
Subject: [Python-Dev] Python Language Reference has no mention of list comprehensions
In-Reply-To: 
References: <201512031251.tB3Cpdh3014048@fido.openend.se> <201512031426.tB3EQNcE015488@fido.openend.se> <56606AA2.7040100@mrabarnett.plus.com> <61A6465F-00EE-4B5E-8DC8-9A4DB01B08C0@yahoo.com> <20151204012514.GB3821@ando.pearwood.info>
Message-ID: <5660FDCE.6070502@mrabarnett.plus.com>

On 2015-12-04 01:56, Chris Angelico wrote:
[snip]
> So we have two subforms of the same basic syntax. The first one
> corresponds better to the output format, in the same way that a string
> literal might correspond to its repr under specific circumstances.
> Neither is a literal. Neither is a call to a constructor function
> (contrast "list()" or "list.__new__(list)", which do call a
> constructor). So what is this shared syntax? Whatever word is used,
> it's going to be a bit wrong. I'd be happy with either "constructor"
> or "display", myself.
>
The problem with "constructor" is that it's already used for the "__new__" class method.

From abarnert at yahoo.com  Thu Dec  3 21:48:15 2015
From: abarnert at yahoo.com (Andrew Barnert)
Date: Thu, 3 Dec 2015 18:48:15 -0800
Subject: [Python-Dev] Python Language Reference has no mention of list comprehensions
In-Reply-To: <20151204012514.GB3821@ando.pearwood.info>
References: <201512031251.tB3Cpdh3014048@fido.openend.se> <201512031426.tB3EQNcE015488@fido.openend.se> <56606AA2.7040100@mrabarnett.plus.com> <61A6465F-00EE-4B5E-8DC8-9A4DB01B08C0@yahoo.com> <20151204012514.GB3821@ando.pearwood.info>
Message-ID: 

On Dec 3, 2015, at 17:25, Steven D'Aprano wrote:
>
> On Thu, Dec 03, 2015 at 09:25:53AM -0800, Andrew Barnert via Python-Dev wrote:
>>> On Dec 3, 2015, at 08:15, MRAB wrote:
>>>
>>>>> On 2015-12-03 15:09, Random832 wrote:
>>>>> On 2015-12-03, Laura Creighton wrote:
>>>>> Who came up with the word 'display' and what does it have going for
>>>>> it that I have missed? Right now I think its chief virtue is that
>>>>> it is a meaningless noun. (But not meaningless enough, as I
>>>>> associate displays with output, not construction).
>
> I completely agree with Laura here -- to me "display" means output, not
> construction, no matter what the functional programming community says
> :-) but I suppose the connection is that you can construct a list using
> the same syntax used to display that list: [1, 2, 3] say.
> > I don't think the term "display" will ever feel natural to me, but I > have got used to it. > > > Random832 wrote: > >>>> In a recent discussion it seemed like people mainly use it >>>> because they don't like using "literal" for things other than >>>> single token constants. In most other languages' contexts the >>>> equivalent thing would be called a literal. > > I'm not sure where you get "most" other languages from. At the very > least, I'd want to see a language survey. I did a *very* fast one (an > entire three languages *wink*) and found these results: > > The equivalent of a list [1, a, func(), x+y] is called: > > "display" (Python) > > "literal" (Ruby) > > "constructor" (Lua) > > http://ruby-doc.org/core-2.1.1/doc/syntax/literals_rdoc.html#label-Arrays > http://www.lua.org/manual/5.1/manual.html > > Of the three, I think Lua's terminology is least worst. > > > MRAB: >>> "Literals" also tend to be constants, or be constructed out of >>> constants. > > Andrew: >> I've seen people saying that before, but I don't know where they get >> that. It's certainly not the way, say, C++ or JavaScript use the term. > > I wouldn't take either of those two languages as examples of best > practices in language design :-) No, but they seem to be the languages (along with C and Java) that people usually appeal to. You also found "literal" used the same way as JavaScript in Ruby, one of three languages in your quick survey. It's also used similarly in ML and Haskell. In Lisp, it has a completely different meaning (a quoted list). But as I said before, we can't use the word "literal" to contrast with comprehensions, because a large segment of the Python community (including you) would find that use of the word confusing and/or annoying because you intuitively think of the C/FORTRAN/etc. definition rather than the C++/Ruby/JS/Haskell definition. It doesn't matter whether that's a peculiar quirk of the Python community or not, whether there's a good reason for it or not, etc.; all that matters is that it's true. > [...] >>> A list comprehension can contain functions, etc. >> >> A non-comprehension display can include function calls, lambdas, or >> any other kind of expression, just as easily as a comprehension can. >> Is [1, x, f(y), lambda z: w+z] a literal? If so, why isn't [i*x for i >> in y] a literal? > > I wouldn't call either a literal. My point was that if the reason comprehensions aren't literals but the other kind of displays are is that the former can contain functions and the latter can't, that reason is just wrong. Both can contain functions. The intuition MRAB was appealing to doesn't even match his intuition, much less a universal one. And my sentence that you quoted directly below directly follows from that: >> The problem is that we need a word that distinguishes the former; >> trying to press "literal" into service to help the distinction doesn't >> help. >> >> At some point, Python distinguished between displays and >> comprehensions; I'm assuming someone realized there's no principled >> sense in which a comprehension isn't also a display, and now we're >> stuck with no word again. > > I don't think comprehensions are displays. Well, the reference docs say they are. (See 6.2.4 and following.) And I don't think the word "display" is used in the tutorial, glossary, etc.; the only place it's used, it explicitly includes comprehensions, calls them a "flavor" of displays, etc. 
In fact, that's exactly why this issue came up: it's because comprehensions are a subset of displays that they don't have their own section you can look up in the docs. And, as I explained before, Python's definition matches with the mathematical terms, and the terms in Miranda (which is probably the language we ultimately got them from). > They certainly look > different, both in input form and output form: > > py> [1, 2, 4, 8, 16] # Display. > [1, 2, 4, 8, 16] > py> [2**n for n in range(5)] # Comprehension. > [1, 2, 4, 8, 16] > > > Lists display using display syntax, not comprehension syntax. Well, yes, because lists are stored extensionally, and therefore the only possible way to represent what's stored is extensionally. But in source code, where we're representing what's stored in the coder's head rather than in the interpreter's object heap, that's not a problem, so the coder can display them either way. > Obviously > the list you get (the value of the object) is the same whichever syntax > you use, but the syntax is quite different. Yes, the syntax for a list display is brackets around either an expression list or a comprehension clause. Just like the syntax for an if statement and a def statement are also quite different, but they're both still statements. You can argue that you don't like that or don't find it intuitive or whatever, but you can't just choose to use the words in a way contrary to their actual definition and pretend you're clearing things up. Unless you have a time machine and can retroactively change Python 2.5 and 3.0. From ncoghlan at gmail.com Fri Dec 4 03:38:03 2015 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 4 Dec 2015 18:38:03 +1000 Subject: [Python-Dev] =?utf-8?q?Python_Language_Reference_has_no_mention_o?= =?utf-8?q?f_list_com=C3=83prehensions?= In-Reply-To: References: <201512031251.tB3Cpdh3014048@fido.openend.se> <201512031426.tB3EQNcE015488@fido.openend.se> <56606AA2.7040100@mrabarnett.plus.com> <61A6465F-00EE-4B5E-8DC8-9A4DB01B08C0@yahoo.com> <20151204012514.GB3821@ando.pearwood.info> Message-ID: On 4 December 2015 at 12:48, Andrew Barnert via Python-Dev wrote: > On Dec 3, 2015, at 17:25, Steven D'Aprano wrote: >> On Thu, Dec 03, 2015 at 09:25:53AM -0800, Andrew Barnert via Python-Dev wrote: >>> I've seen people saying that before, but I don't know where they get >>> that. It's certainly not the way, say, C++ or JavaScript use the term. I'm one of the folks that use it that way, but I learned that terminology *from* the Python language reference. >> I wouldn't take either of those two languages as examples of best >> practices in language design :-) > > No, but they seem to be the languages (along with C and Java) that people usually appeal to. > > You also found "literal" used the same way as JavaScript in Ruby, one of three languages in your quick survey. It's also used similarly in ML and Haskell. In Lisp, it has a completely different meaning (a quoted list). > > But as I said before, we can't use the word "literal" to contrast with comprehensions, because a large segment of the Python community (including you) would find that use of the word confusing and/or annoying because you intuitively think of the C/FORTRAN/etc. definition rather than the C++/Ruby/JS/Haskell definition. It doesn't matter whether that's a peculiar quirk of the Python community or not, whether there's a good reason for it or not, etc.; all that matters is that it's true. 
Even though it's true, I'm not sure it's sufficient to rule out a switch to "there are two kinds of literal" as the preferred terminology.

The recent case that comes to mind is the new format string literals - those can include arbitrary subexpressions, like container displays and comprehensions, but the conclusion from the PEP 498 discussion was that it makes the most sense to still consider them a kind of string literal.

There's also a relatively straightforward way of defining the key semantic difference between a literal and a normal constructor call: with a literal, there's no way to override the type of the resulting object, while a constructor call can be monkeypatched like any other callable.

The distinction that arises for containers is then the one that Chris Angelico pointed out: a container literal may have constant content, *or* it may have dynamic content. If we switched from calling things "displays" to calling them "dynamic literals", I'd be surprised if too many folks who were able to figure out what "display" meant struggled to make the transition.

Summarising that idea:

* literals: any of the dedicated expressions that produce an instance of a builtin type
* constant literal: literals that produce a constant object that can be cached in the bytecode
* dynamic literal: literals containing dynamic subexpressions that can't be pre-calculated
* display: legacy term for a dynamic literal (originally inherited from ABC)
* comprehension: a dynamic literal that creates a new container from an existing iterable
* lexical literal: constant literals and dynamic string literals [1]

The ast.literal_eval() docs would need a slight adjustment to refer to "literals (excluding container comprehensions and generator expressions)", rather than the current "literals and container displays".

Regards, Nick.

[1] https://docs.python.org/dev/reference/lexical_analysis.html#literals

-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From victor.stinner at gmail.com Fri Dec 4 07:49:24 2015 From: victor.stinner at gmail.com (Victor Stinner) Date: Fri, 4 Dec 2015 13:49:24 +0100 Subject: [Python-Dev] Third milestone of FAT Python Message-ID:

Hi,

I implemented 3 new optimizations in FAT Python: loop unrolling, constant folding and copying builtin functions to constants. In the previous thread, Terry Reedy asked me if the test suite is complete enough to ensure that FAT Python doesn't break Python semantics. I can now say that the second milestone didn't optimize enough code to surface bugs; the new optimizations helped me to find a *lot* of bugs, which are now all fixed. The full Python test suite passes with all optimizations enabled.

Only two tests are skipped in FAT Python: test_dynamic and the mock tests of test_unittest. test_dynamic checks that it's possible to replace builtin functions in a function and then use the replaced builtins from the same function. FAT Python currently doesn't support the specific case of this test. The mock tests of test_unittest do something similar; I'm more concerned by these failures.

This email is an updated version of my previous blog article: https://haypo.github.io/fat-python-status-nov26-2015.html

Since I wrote this blog article, I implemented the constant folding optimization and I fixed the two major bugs mentioned in the article (line number and exec(code, dict)).
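(For reference, the pattern that test_dynamic exercises looks roughly like the sketch below. This is an editorial illustration of the pattern described above, not the actual test code:

    import builtins

    _real_len = builtins.len

    def func():
        before = len("abc")           # may have been specialized to the real len()
        builtins.len = lambda obj: 0  # replace the builtin from inside the function
        after = len("abc")            # this call must observe the replacement
        return before, after

    print(func())             # CPython semantics require (3, 0); a specialized
                              # version that copied len() into a constant up
                              # front would wrongly return (3, 3)
    builtins.len = _real_len  # undo the replacement

Because guards are checked when the function is called, a builtin replaced *while* the function is still running is not detected, which is exactly the situation this test constructs.)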
Documentation
=============

I combined the documentation of (my) various optimization projects into a single documentation site:
http://faster-cpython.readthedocs.org/

The FAT Python project has its own page:
http://faster-cpython.readthedocs.org/fat_python.html

Constant folding
================

This optimization propagates constant values of variables. Example:

    def func():
        x = 1
        y = x
        return y

Constant folding:

    def func():
        x = 1
        y = 1
        return 1

This optimization alone is not really exciting. It will be more useful later, when the optimizer implements peephole optimizations (ex: a+b) and removes dead code. For example, it will be possible to remove code specific to a platform (ex: 'if sys.platform.startswith("freebsd"): ...').

Later, removal of unused local variables will be implemented to simplify the code even more. The previous example will be simplified to:

    def func():
        return 1

Loop unrolling optimization
===========================

The optimization generates assignment statements (for the loop index variable) and duplicates the loop body to reduce the cost of loops. Example:

    def func():
        for i in range(2):
            print(i)

Loop unrolled:

    def func():
        i = 0
        print(i)
        i = 1
        print(i)

If the iterator uses the builtin range function, two guards are required, on the builtins and globals namespaces. The optimization also handles tuple iterators (ex: "for i in (1, 2, 3): ..."). No guard is needed in this case (the code is always optimized).

Loop unrolling combines well with constant folding. The previous example is simplified to:

    def func():
        i = 0
        print(0)
        i = 1
        print(1)

Again, with a future removal of unused local variables optimization, the previous example will be simplified to:

    def func():
        print(0)
        print(1)

Copy builtins to constants optimization
=======================================

This optimization is currently disabled by default. (Well, in practice, it's enabled by the site module to test it and detect code which doesn't work with it.)

The LOAD_GLOBAL instruction is used to load a builtin function. The instruction requires two dictionary lookups: one in the globals namespace (which almost always fails) and then one in the builtins namespace. It's rare to replace builtins, so the idea here is to replace the dynamic LOAD_GLOBAL instruction with a static LOAD_CONST instruction which loads the function from a C array, a fast O(1) access.

It is not possible to inject a builtin function during compilation. Python code objects are serialized by the marshal module, which only supports simple types like integers, strings and tuples, not functions. The trick is to modify the constants at runtime when the module is loaded. I added a new patch_constants() method to functions. Example:

    def log(message):
        print(message)

This function is specialized to:

    def log(message):
        'LOAD_GLOBAL print'(message)

    log.patch_constants({'LOAD_GLOBAL print': print})

The specialized bytecode uses two guards, on the builtins and globals namespaces, to disable the optimization if the builtin function is replaced.

This optimization doesn't support the case when builtins are modified while the function is executed. I think that it will be safer to keep the optimization disabled by default. Later, we can enhance the optimization to enable it if the function cannot modify builtins and if it only calls functions which cannot modify builtins. I bet that the optimization will be safe with these additional checks.
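(To make that bet concrete, the guard logic can be pictured along the following lines. This is a simplified editorial sketch in pure Python with hypothetical names; it is not the actual fat module API, where the checks happen in C before the specialized bytecode is entered:

    import builtins

    # Snapshot taken at the moment log() was specialized.
    GUARDED = {'print': builtins.print}

    def guards_ok(func_globals):
        for name, original in GUARDED.items():
            if name in func_globals:        # builtin is now shadowed by a global
                return False
            if getattr(builtins, name, None) is not original:  # builtin replaced
                return False
        return True

    def log(message):
        if guards_ok(globals()):
            GUARDED['print'](message)  # fast path: no namespace lookups for print
        else:
            print(message)             # deoptimized path: normal LOAD_GLOBAL

The dictionary versioning described below is what keeps such checks cheap: if the version of the globals and builtins namespaces hasn't changed since the last check, the guard is known to still hold without doing any lookups.)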
Changes to builtin guards
=========================

When a guard is used on a builtin function, the specialization of the function is now ignored if the builtin was replaced or if a function with the same name already exists in the globals namespace.

At the end of the Python initialization (after the site module is imported), the fat module keeps a private copy of the builtins. When a builtin guard is used, the current builtin function is simply compared to this old copy of the builtins. The assumption here is that builtin functions are not replaced during Python initialization.

By the way, I started to document FAT Python limitations and effects on Python semantics:
http://faster-cpython.readthedocs.org/fat_python.html#limitations-and-python-semantic

Lots of enhancements of the AST optimizer
=========================================

New optimizations helped to find bugs in the AST optimizer. Many fixes and various enhancements were done in the AST optimizer. The optimizer itself was optimized: copy.deepcopy() is no longer used to duplicate the full tree. The new NodeTransformer class only duplicates modified nodes.

The optimizer now understands Python namespaces (globals, locals, non locals, etc.) much better. It is now able to optimize a function without guards: this is used to unroll a loop using a tuple as iterator.

Versioned dictionary
====================

In the previous milestone of FAT Python, the versioned dictionary was a new type inherited from the builtin dict type which added a read-only __version__ property (a global "version" of the dictionary, incremented at each change), a getversion(key) method (the version of a single key), and support for weak references.

I did my best to make the FAT Python changes optional, to leave CPython completely unchanged and not hurt performance when the FAT mode is not used. But I had two major technical issues. The first one is that using a different structure for dictionary entries would make the dict code more complex and maybe even slower (which is not acceptable). The second one is that it was no longer possible to call exec(code, globals, locals) in FAT mode where globals or locals were a plain dict. The code needed to be modified using something like:

    globals = fat.verdict() if __fat__ else {}

It required importing the fat module and modifying all code calling exec().

I removed the fat.verdict type and added the __version__ property to the builtin dict type. It's incremented at each change. The getversion() method and the support for weak references were removed. Python already has special code to handle reference cycles of dictionaries, so there is no need to support weak references. Guards now use strong references to namespaces.

Victor

From rdmurray at bitdance.com Fri Dec 4 09:52:55 2015 From: rdmurray at bitdance.com (R.
David Murray) Date: Fri, 04 Dec 2015 09:52:55 -0500 Subject: [Python-Dev] =?utf-8?q?Python_Language_Reference_has_no_mention_o?= =?utf-8?q?f_list_com=C3=83prehensions?= In-Reply-To: References: <201512031251.tB3Cpdh3014048@fido.openend.se> <201512031426.tB3EQNcE015488@fido.openend.se> <56606AA2.7040100@mrabarnett.plus.com> <61A6465F-00EE-4B5E-8DC8-9A4DB01B08C0@yahoo.com> <20151204012514.GB3821@ando.pearwood.info> Message-ID: <20151204145255.6485E2510C2@webabinitio.net>

On Fri, 04 Dec 2015 18:38:03 +1000, Nick Coghlan wrote:
> Summarising that idea:
>
> * literals: any of the dedicated expressions that produce an instance
> of a builtin type
> * constant literal: literals that produce a constant object that can
> be cached in the bytecode
> * dynamic literal: literals containing dynamic subexpressions that
> can't be pre-calculated
> * display: legacy term for a dynamic literal (originally inherited from ABC)
> * comprehension: a dynamic literal that creates a new container from
> an existing iterable
> * lexical literal: constant literals and dynamic string literals [1]
>
> The ast.literal_eval() docs would need a slight adjustment to refer to
> "literals (excluding container comprehensions and generator
> expressions)", rather than the current "literals and container
> displays".

Except that that isn't accurate either:

    >>> import ast
    >>> ast.literal_eval('[1, id(1)]')
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "/home/rdmurray/python/p36/Lib/ast.py", line 84, in literal_eval
        return _convert(node_or_string)
      File "/home/rdmurray/python/p36/Lib/ast.py", line 57, in _convert
        return list(map(_convert, node.elts))
      File "/home/rdmurray/python/p36/Lib/ast.py", line 83, in _convert
        raise ValueError('malformed node or string: ' + repr(node))
    ValueError: malformed node or string: <_ast.Call object at 0xb73633ec>

So it's really container displays consisting of literals, which we could call a "literal container display".

I think the intuitive notion of "literal" is "the value is literally what is written here". Which is a redundant statement; 'as written' is, after all, what literally means when used correctly :). That makes it a language-agnostic concept if I'm correct.

I think we will find that f strings are called f expressions, not f literals.

--David

From status at bugs.python.org Fri Dec 4 12:08:35 2015 From: status at bugs.python.org (Python tracker) Date: Fri, 4 Dec 2015 18:08:35 +0100 (CET) Subject: [Python-Dev] Summary of Python tracker Issues Message-ID: <20151204170835.45EE85620E@psf.upfronthosting.co.za>

ACTIVITY SUMMARY (2015-11-27 - 2015-12-04) Python tracker at http://bugs.python.org/ To view or respond to any of the issues listed below, click on the issue. Do NOT respond to this message.
Issues counts and deltas: open 5289 (+26) closed 32266 (+28) total 37555 (+54) Open issues with patches: 2332 Issues opened (37) ================== #25744: Reference leaks in test_collections http://bugs.python.org/issue25744 opened by serhiy.storchaka #25745: Reference leaks in test_curses http://bugs.python.org/issue25745 opened by serhiy.storchaka #25746: test_unittest failure in leaks searching mode http://bugs.python.org/issue25746 opened by serhiy.storchaka #25747: test_idle failure in leaks searching mode http://bugs.python.org/issue25747 opened by serhiy.storchaka #25749: asyncio.Server class documented but not exported http://bugs.python.org/issue25749 opened by Ron Frederick #25750: tp_descr_get(self, obj, type) is called without owning a refer http://bugs.python.org/issue25750 opened by jdemeyer #25752: asyncio.readline - add customizable line separator http://bugs.python.org/issue25752 opened by mmarkk #25753: Reference leaks in test_smtplib http://bugs.python.org/issue25753 opened by serhiy.storchaka #25755: Test test_property failed if run twice http://bugs.python.org/issue25755 opened by serhiy.storchaka #25757: Subclasses of property lose docstring http://bugs.python.org/issue25757 opened by torsten #25758: ensurepip/venv broken on Windows if path includes unicode http://bugs.python.org/issue25758 opened by Dima.Tisnek #25759: Python 2.7.11rc1 not building with Visual Studio 2015 http://bugs.python.org/issue25759 opened by kovidgoyal #25761: Improve unpickling errors handling http://bugs.python.org/issue25761 opened by serhiy.storchaka #25764: PyObject_Call() is called with an exception set in subprocess http://bugs.python.org/issue25764 opened by serhiy.storchaka #25765: Installation error http://bugs.python.org/issue25765 opened by ayushmaan121 #25766: __bytes__ doesn't work for str subclasses http://bugs.python.org/issue25766 opened by serhiy.storchaka #25768: compileall functions do not document return values http://bugs.python.org/issue25768 opened by Nicholas Chammas #25769: Crash due to using weakref referent without acquiring a strong http://bugs.python.org/issue25769 opened by ldeller #25770: expose name, args, and kwargs from methodcaller http://bugs.python.org/issue25770 opened by llllllllll #25771: importlib: '.submodule' is not a relative name (no leading dot http://bugs.python.org/issue25771 opened by martin.panter #25773: Deprecate deleting with PyObject_SetAttr, PyObject_SetAttrStri http://bugs.python.org/issue25773 opened by serhiy.storchaka #25774: [benchmarks] Adjust to allow uploading benchmark data to codes http://bugs.python.org/issue25774 opened by zach.ware #25776: More compact pickle of iterators etc http://bugs.python.org/issue25776 opened by serhiy.storchaka #25777: Misleading descriptions in docs about invoking descriptors. 
http://bugs.python.org/issue25777 opened by Juchen Zeng #25778: winreg.EnumValue does not truncate strings correctly http://bugs.python.org/issue25778 opened by anshul6 #25780: Add support for CAN_RAW_JOIN_FILTERS http://bugs.python.org/issue25780 opened by rumpelsepp #25782: CPython hangs on error __context__ set to the error itself http://bugs.python.org/issue25782 opened by yselivanov #25783: test_traceback.test_walk_stack() fails when run directly (with http://bugs.python.org/issue25783 opened by haypo #25785: TimedRotatingFileHandler missing rotations http://bugs.python.org/issue25785 opened by felipecruz #25786: contextlib.ExitStack introduces a cycle in exception __context http://bugs.python.org/issue25786 opened by yselivanov #25787: Add an explanation what happens with subprocess parent and chi http://bugs.python.org/issue25787 opened by krichter #25788: fileinput.hook_encoded has no way to pass arguments to codecs http://bugs.python.org/issue25788 opened by lac #25789: py launcher stderr is not piped to subprocess.Popen.stderr http://bugs.python.org/issue25789 opened by wolma #25791: Raise an ImportWarning when __spec__.parent/__package__ isn't http://bugs.python.org/issue25791 opened by brett.cannon #25794: __setattr__ does not always overload operators http://bugs.python.org/issue25794 opened by Dominik Schmid #25795: test_fork1 cannot be run directly: ./pyhon Lib/test/test_fork1 http://bugs.python.org/issue25795 opened by haypo #25796: Running test_multiprocessing_spawn is slow (more than 8 minute http://bugs.python.org/issue25796 opened by haypo Most recent 15 issues with no replies (15) ========================================== #25795: test_fork1 cannot be run directly: ./pyhon Lib/test/test_fork1 http://bugs.python.org/issue25795 #25791: Raise an ImportWarning when __spec__.parent/__package__ isn't http://bugs.python.org/issue25791 #25785: TimedRotatingFileHandler missing rotations http://bugs.python.org/issue25785 #25776: More compact pickle of iterators etc http://bugs.python.org/issue25776 #25774: [benchmarks] Adjust to allow uploading benchmark data to codes http://bugs.python.org/issue25774 #25773: Deprecate deleting with PyObject_SetAttr, PyObject_SetAttrStri http://bugs.python.org/issue25773 #25766: __bytes__ doesn't work for str subclasses http://bugs.python.org/issue25766 #25753: Reference leaks in test_smtplib http://bugs.python.org/issue25753 #25746: test_unittest failure in leaks searching mode http://bugs.python.org/issue25746 #25745: Reference leaks in test_curses http://bugs.python.org/issue25745 #25744: Reference leaks in test_collections http://bugs.python.org/issue25744 #25726: sys.setprofile / sys.getprofile asymetry http://bugs.python.org/issue25726 #25724: SSLv3 test failure on Ubuntu 16.04 LTS http://bugs.python.org/issue25724 #25720: Fix curses module compilation with ncurses6 http://bugs.python.org/issue25720 #25713: Setuptools included with 64-bit Windows installer is outdated http://bugs.python.org/issue25713 Most recent 15 issues waiting for review (15) ============================================= #25794: __setattr__ does not always overload operators http://bugs.python.org/issue25794 #25789: py launcher stderr is not piped to subprocess.Popen.stderr http://bugs.python.org/issue25789 #25786: contextlib.ExitStack introduces a cycle in exception __context http://bugs.python.org/issue25786 #25782: CPython hangs on error __context__ set to the error itself http://bugs.python.org/issue25782 #25780: Add support for CAN_RAW_JOIN_FILTERS 
http://bugs.python.org/issue25780 #25778: winreg.EnumValue does not truncate strings correctly http://bugs.python.org/issue25778 #25776: More compact pickle of iterators etc http://bugs.python.org/issue25776 #25774: [benchmarks] Adjust to allow uploading benchmark data to codes http://bugs.python.org/issue25774 #25773: Deprecate deleting with PyObject_SetAttr, PyObject_SetAttrStri http://bugs.python.org/issue25773 #25770: expose name, args, and kwargs from methodcaller http://bugs.python.org/issue25770 #25769: Crash due to using weakref referent without acquiring a strong http://bugs.python.org/issue25769 #25768: compileall functions do not document return values http://bugs.python.org/issue25768 #25766: __bytes__ doesn't work for str subclasses http://bugs.python.org/issue25766 #25764: PyObject_Call() is called with an exception set in subprocess http://bugs.python.org/issue25764 #25761: Improve unpickling errors handling http://bugs.python.org/issue25761 Top 10 most discussed issues (10) ================================= #25778: winreg.EnumValue does not truncate strings correctly http://bugs.python.org/issue25778 26 msgs #25782: CPython hangs on error __context__ set to the error itself http://bugs.python.org/issue25782 22 msgs #25698: The copy_reg module becomes unexpectedly empty in test_cpickle http://bugs.python.org/issue25698 17 msgs #25759: Python 2.7.11rc1 not building with Visual Studio 2015 http://bugs.python.org/issue25759 12 msgs #25770: expose name, args, and kwargs from methodcaller http://bugs.python.org/issue25770 11 msgs #14285: Traceback wrong on ImportError while executing a package http://bugs.python.org/issue14285 9 msgs #25768: compileall functions do not document return values http://bugs.python.org/issue25768 8 msgs #25780: Add support for CAN_RAW_JOIN_FILTERS http://bugs.python.org/issue25780 8 msgs #19527: Test failures with COUNT_ALLOCS http://bugs.python.org/issue19527 7 msgs #25627: distutils : file "bdist_rpm.py" does not quote filenames when http://bugs.python.org/issue25627 7 msgs Issues closed (27) ================== #5319: stdout error at interpreter shutdown fails to return OS error http://bugs.python.org/issue5319 closed by martin.panter #12460: SocketServer.shutdown() does not have "timeout=None" parameter http://bugs.python.org/issue12460 closed by martin.panter #18082: Inconsistent behavior of IOBase methods on closed files http://bugs.python.org/issue18082 closed by martin.panter #20836: Pickle Nonetype http://bugs.python.org/issue20836 closed by serhiy.storchaka #25252: Hard-coded line ending in asyncio.streams.StreamReader.readlin http://bugs.python.org/issue25252 closed by martin.panter #25485: Add a context manager to telnetlib.Telnet http://bugs.python.org/issue25485 closed by r.david.murray #25601: test_cpickle failure on the ware-gentoo-x86 buildbot http://bugs.python.org/issue25601 closed by serhiy.storchaka #25708: runpy hides traceback for some exceptions http://bugs.python.org/issue25708 closed by martin.panter #25719: Deprecate spitfire benchmark http://bugs.python.org/issue25719 closed by zach.ware #25742: locale.setlocale does not work with unicode strings http://bugs.python.org/issue25742 closed by python-dev #25748: Resource warnings when run test_asyncio in leaks searching mod http://bugs.python.org/issue25748 closed by martin.panter #25751: ctypes.util , Shell Injection in find_library() http://bugs.python.org/issue25751 closed by martin.panter #25754: Test test_rlcompleter failed if run twice http://bugs.python.org/issue25754 
closed by martin.panter #25756: asyncio WriteTransport documentation typo http://bugs.python.org/issue25756 closed by asvetlov #25760: TextWrapper fails to split 'two-and-a-half-hour' correctly http://bugs.python.org/issue25760 closed by serhiy.storchaka #25762: Calculation Mistake 1.5 * 0.3 http://bugs.python.org/issue25762 closed by ethan.furman #25763: I cannot use absolute path in sqlite3 , python 2.7.9, windows http://bugs.python.org/issue25763 closed by jingtao chen #25767: asyncio documentation section 18.5.2.3.1. (Windows) links to F http://bugs.python.org/issue25767 closed by python-dev #25772: Misleading descriptions about built-in `super.` http://bugs.python.org/issue25772 closed by martin.panter #25775: Bug tracker emails go to spam http://bugs.python.org/issue25775 closed by Nicholas Chammas #25779: deadlock with asyncio+contextmanager+ExitStack http://bugs.python.org/issue25779 closed by yselivanov #25781: infinite loop in reprlib http://bugs.python.org/issue25781 closed by yselivanov #25784: Please consider integrating performance fix for ipaddress.py http://bugs.python.org/issue25784 closed by Alexander Finkel #25790: shutil.chown function enhancement http://bugs.python.org/issue25790 closed by r.david.murray #25792: sorted() is not stable given key=len and large inputs http://bugs.python.org/issue25792 closed by r.david.murray #25793: spam http://bugs.python.org/issue25793 closed by r.david.murray #25797: Default argument values with type hints break type correctness http://bugs.python.org/issue25797 closed by ebarry From abarnert at yahoo.com Fri Dec 4 12:56:29 2015 From: abarnert at yahoo.com (Andrew Barnert) Date: Fri, 4 Dec 2015 09:56:29 -0800 Subject: [Python-Dev] =?utf-8?q?Python_Language_Reference_has_no_mention_o?= =?utf-8?q?f_list_com=C3=83prehensions?= In-Reply-To: References: <201512031251.tB3Cpdh3014048@fido.openend.se> <201512031426.tB3EQNcE015488@fido.openend.se> <56606AA2.7040100@mrabarnett.plus.com> <61A6465F-00EE-4B5E-8DC8-9A4DB01B08C0@yahoo.com> <20151204012514.GB3821@ando.pearwood.info> Message-ID: On Dec 4, 2015, at 00:38, Nick Coghlan wrote: > > On 4 December 2015 at 12:48, Andrew Barnert via Python-Dev > wrote: >> On Dec 3, 2015, at 17:25, Steven D'Aprano wrote: >>>> On Thu, Dec 03, 2015 at 09:25:53AM -0800, Andrew Barnert via Python-Dev wrote: >>>> I've seen people saying that before, but I don't know where they get >>>> that. It's certainly not the way, say, C++ or JavaScript use the term. > > I'm one of the folks that use it that way, but I learned that > terminology *from* the Python language reference. If that's the usual case, then isn't it almost certainly more true for "display" than for "literal"? I doubt most Python users came in with a pre-existing notion of "display" from another language, or from programming in general--or, if they did, it's probably one of the senses that's irrelevant enough to not confuse anyone (like a repr, or a string formatting template). So if you want to redefine one of our terms to allow a new distinction, why not that one? More importantly, as I said in my other message: do we actually need to be able to make this distinction? The problem this thread set out to solve is that "comprehension" doesn't have a docs section because it's just a subset of displays, so you can't search for it. Making it a subset of dynamic literals, which is a subset of literals, seems like it gets us farther from a solution. Right now, we could easily change the section title to "list displays (including comprehensions)" and we're done. 
>>> I wouldn't take either of those two languages as examples of best >>> practices in language design :-) >> >> No, but they seem to be the languages (along with C and Java) that people usually appeal to. >> >> You also found "literal" used the same way as JavaScript in Ruby, one of three languages in your quick survey. It's also used similarly in ML and Haskell. In Lisp, it has a completely different meaning (a quoted list). >> >> But as I said before, we can't use the word "literal" to contrast with comprehensions, because a large segment of the Python community (including you) would find that use of the word confusing and/or annoying because you intuitively think of the C/FORTRAN/etc. definition rather than the C++/Ruby/JS/Haskell definition. It doesn't matter whether that's a peculiar quirk of the Python community or not, whether there's a good reason for it or not, etc.; all that matters is that it's true. > > Even though it's true, I'm not sure it's sufficient to rule out a > switch to "there are two kinds of literal" as the preferred > terminology. > > The recent case that comes to mind is the new format string literals - > those can include arbitrary subexpressions, like container displays > and comprehensions, but the conclusion from the PEP 498 discussion was > that it makes the most sense to still consider them a kind of string > literal. > > There's also a relatively straightforward way of defining the key > semantic different between a literal and a normal constructor call: > with a literal, there's no way to override the type of the resulting > object, while a constructor call can be monkeypatched like any other > callable. Is that an important distinction to anyone but people who write Python implementations? If some library I'm using chooses to monkeypatch or shadow a type name, the objects are still going to quack the way I expect (otherwise, I'm going to stop using that library pretty quickly). And meanwhile, why do I need to distinguish between libraries that monkeypatch the stdlib for me and libraries that install an import hook to patch my code? It's certainly not meaningless or completely useless (e.g., the discussion about whether f-strings are literals would have been shorter, and had more of a point), but it doesn't seem useful enough to be worth redefining existing terminology. > The distinction that arises for containers is then the one that Chris > Angelico pointed out: a container literal may have constant content, > *or* it may have dynamic content. Well, yes, but, again, both forms of container literal can have dynamic content: [f(), g()] is just as dynamic as [x() for x in (f, g)]. So we still don't have the contrast we were looking for. Also, [1, 2] is literal, and not dynamic, but it's not a constant value, so calling it a constant literal seems likely to be more confusing than helpful. One more thing: we don't have to worry about whether def and class are literal constructs because they're not expressions, but what about lambda? It fits your definition of literal. JS calls them literals (Ruby is a bit confusing because it splits the notion of function into three entirely independent things), as do some functional languages. And this means you can write a function-table dict that's all literals. As for whether it's constant--the code object obviously can be compiled into the bytecode, because that's what CPython does, and the function object itself could be when there are no free variables, even if it isn't today. 
(In fact, even with free variables, it could be done, but not the way CPython implements closures.) > If we switched from calling things > "displays" to calling them "dynamic literals", I'd be surprised if too > many folks that were able to figure out what "display" meant struggled > to make the transition. > > Summarising that idea: > > * literals: any of the dedicated expressions that produce an instance > of a builtin type > * constant literal: literals that produce a constant object that can > be cached in the bytecode > * dynamic literal: literals containing dynamic subexpressions that > can't be pre-calculated > * display: legacy term for a dynamic literal (originally inherited from ABC) That doesn't seem right--f-strings are dynamic literals, but they aren't displays. (And lambdas, too.) And (1, 2) is a constant literal but it is a display. > * comprehension: a dynamic literal that creates a new container from > an existing iterable > * lexical literal: constant literals and dynamic string literals [1] > > The ast.literal_eval() docs would need a slight adjustment to refer to > "literals (excluding container comprehensions and generator > expressions)", rather than the current "literals and container > displays". It seems like it already need a change with or without your suggestion, if it uses "container displays" to rule out comprehensions, which are a kind of container display under today's definition. From python at mrabarnett.plus.com Fri Dec 4 14:16:49 2015 From: python at mrabarnett.plus.com (MRAB) Date: Fri, 4 Dec 2015 19:16:49 +0000 Subject: [Python-Dev] Third milestone of FAT Python In-Reply-To: References: Message-ID: <5661E6A1.10907@mrabarnett.plus.com> On 2015-12-04 12:49, Victor Stinner wrote: [snip] > Constant folding > ================ > > This optimization propagates constant values of variables. Example: > > def func() > x = 1 > y = x > return y > > Constant folding: > > def func() > x = 1 > y = 1 > return 1 > [snip] I don't think that's constant folding, but constant _propagation_. Constant folding is when, say, "1 + 2" replaced by "2". From ijmorlan at uwaterloo.ca Fri Dec 4 14:22:06 2015 From: ijmorlan at uwaterloo.ca (Isaac Morland) Date: Fri, 4 Dec 2015 14:22:06 -0500 (EST) Subject: [Python-Dev] Third milestone of FAT Python In-Reply-To: <5661E6A1.10907@mrabarnett.plus.com> References: <5661E6A1.10907@mrabarnett.plus.com> Message-ID: On Fri, 4 Dec 2015, MRAB wrote: > Constant folding is when, say, "1 + 2" replaced by "2". Isn't that called backspacing? ;-) Isaac Morland CSCF Web Guru DC 2619, x36650 WWW Software Specialist From python at mrabarnett.plus.com Fri Dec 4 14:39:52 2015 From: python at mrabarnett.plus.com (MRAB) Date: Fri, 4 Dec 2015 19:39:52 +0000 Subject: [Python-Dev] Third milestone of FAT Python In-Reply-To: References: <5661E6A1.10907@mrabarnett.plus.com> Message-ID: <5661EC08.9040800@mrabarnett.plus.com> On 2015-12-04 19:22, Isaac Morland wrote: > On Fri, 4 Dec 2015, MRAB wrote: > > > Constant folding is when, say, "1 + 2" replaced by "2". > > Isn't that called backspacing? ;-) > Oops! I meant "1 + 1", of course. Or "3". Either would work. 
:-) From ericfahlgren at gmail.com Fri Dec 4 16:32:19 2015 From: ericfahlgren at gmail.com (Eric Fahlgren) Date: Fri, 4 Dec 2015 13:32:19 -0800 Subject: [Python-Dev] =?utf-8?q?Python_Language_Reference_has_no_mention_o?= =?utf-8?q?f_list_com=C3=83prehensions?= In-Reply-To: <20151204145255.6485E2510C2@webabinitio.net> References: <201512031251.tB3Cpdh3014048@fido.openend.se> <201512031426.tB3EQNcE015488@fido.openend.se> <56606AA2.7040100@mrabarnett.plus.com> <61A6465F-00EE-4B5E-8DC8-9A4DB01B08C0@yahoo.com> <20151204012514.GB3821@ando.pearwood.info> <20151204145255.6485E2510C2@webabinitio.net> Message-ID: <011301d12edb$424d1b00$c6e75100$@gmail.com> David R. Murray wrote: > I think the intuitive notion of "literal" is "the value is literally what is written > here". Which is a redundant statement; 'as written' is, after all, what literally > means when used correctly :). That makes it a language-agnostic concept if I'm > correct. So { x : 1 } is not literally a literal, it's figuratively a literal, or more simply a figurative. Eric From victor.stinner at gmail.com Fri Dec 4 20:00:41 2015 From: victor.stinner at gmail.com (Victor Stinner) Date: Sat, 5 Dec 2015 02:00:41 +0100 Subject: [Python-Dev] Third milestone of FAT Python In-Reply-To: <5661E6A1.10907@mrabarnett.plus.com> References: <5661E6A1.10907@mrabarnett.plus.com> Message-ID: 2015-12-04 20:16 GMT+01:00 MRAB : > On 2015-12-04 12:49, Victor Stinner wrote: > [snip] > > I don't think that's constant folding, but constant _propagation_. > > Constant folding is when, say, "1 + 2" replaced by "2". Oh, you're right. I update the documentation. To avoid confusion, I just implemented constant folding as well :-D https://faster-cpython.readthedocs.org/fat_python.html#constant-folding Victor From benjamin at python.org Sat Dec 5 17:20:30 2015 From: benjamin at python.org (Benjamin Peterson) Date: Sat, 05 Dec 2015 14:20:30 -0800 Subject: [Python-Dev] [RELEASED] Python 2.7.11 Message-ID: <1449354030.2091510.459129289.32191B7B@webmail.messagingengine.com> Python 2.7.11, the latest bugfix release of the Python 2.7 series, is now available for download at https://www.python.org/downloads/release/python-2711/ Thank you as always to Steve Dower and Ned Deily, who build our binaries. Enjoy the rest of the year, Benjamin From emile at fenx.com Sun Dec 6 13:37:21 2015 From: emile at fenx.com (Emile van Sebille) Date: Sun, 6 Dec 2015 10:37:21 -0800 Subject: [Python-Dev] Third milestone of FAT Python In-Reply-To: <5661EC08.9040800@mrabarnett.plus.com> References: <5661E6A1.10907@mrabarnett.plus.com> <5661EC08.9040800@mrabarnett.plus.com> Message-ID: On 12/4/2015 11:39 AM, MRAB wrote: > On 2015-12-04 19:22, Isaac Morland wrote: >> On Fri, 4 Dec 2015, MRAB wrote: >> >> > Constant folding is when, say, "1 + 2" replaced by "2". >> >> Isn't that called backspacing? ;-) >> > Oops! I meant "1 + 1", of course. Or "3". Either would work. :-) Oh, you must surely have meant '1 and 2' Looking-for-truth-in-all-the-wrong-places-ly y'rs, Emile From larry at hastings.org Mon Dec 7 00:06:46 2015 From: larry at hastings.org (Larry Hastings) Date: Sun, 6 Dec 2015 21:06:46 -0800 Subject: [Python-Dev] [RELEASED] Python 3.5.1 and 3.4.4rc1 are now available Message-ID: <566513E6.9050108@hastings.org> On behalf of the Python development community and the Python 3.4 and 3.5 release teams, I'm pleased to announce the simultaneous availability of Python 3.5.1 and Python 3.4.4rc1. As point releases, both have many incremental improvements over their predecessor releases. 
You can find Python 3.5.1 here:

https://www.python.org/downloads/release/python-351/

And you can find Python 3.4.4rc1 here:

https://www.python.org/downloads/release/python-344rc1/

Python 2.7.11 shipped today too, so it's a Python release-day hat trick!

Happy computing,

//arry/

-------------- next part -------------- An HTML attachment was scrubbed... URL: From arigo at tunes.org Mon Dec 7 04:20:12 2015 From: arigo at tunes.org (Armin Rigo) Date: Mon, 7 Dec 2015 10:20:12 +0100 Subject: [Python-Dev] Avoiding CPython performance regressions In-Reply-To: References: Message-ID:

Hi all,

Spending an hour with "hg bisect" is a good way to figure out some of the worst speed regressions that occurred in the early days of 2.7 (which are still not fixed now). Here are my favorite picks:

* be4bec689de3 made bm_mako 15% slower, and spitfire_cstringio even more
* ad030571e6c0 made ai 5% slower

Just thought it would be worth mentioning here. There is much more waiting for someone with a little more patience if we believe https://www.speedtin.com/public .

A bientôt,

Armin.

From benhoyt at gmail.com Mon Dec 7 07:42:52 2015 From: benhoyt at gmail.com (Ben Hoyt) Date: Mon, 7 Dec 2015 07:42:52 -0500 Subject: [Python-Dev] [RELEASED] Python 3.5.1 and 3.4.4rc1 are now available In-Reply-To: <566513E6.9050108@hastings.org> References: <566513E6.9050108@hastings.org> Message-ID:

Great, thank you!

Small note, not sure if it's related to the release or not: the downloads menu on python.org seems to be broken. The 2.7 download button is showing for me, but the 3.x download button is kind of kaput. See screenshot: http://i.imgur.com/ji1LCnn.png

-Ben

On Mon, Dec 7, 2015 at 12:06 AM, Larry Hastings wrote:
> 
> On behalf of the Python development community and the Python 3.4 and 3.5
> release teams, I'm pleased to announce the simultaneous availability of
> Python 3.5.1 and Python 3.4.4rc1. As point releases, both have many
> incremental improvements over their predecessor releases.
> 
> 
> You can find Python 3.5.1 here:
> 
> https://www.python.org/downloads/release/python-351/
> 
> And you can find Python 3.4.4rc1 here:
> 
> https://www.python.org/downloads/release/python-344rc1/
> 
> 
> Python 2.7.11 shipped today too, so it's a Python release-day hat trick!
> 
> 
> Happy computing,
> 
> 
> */arry*
> 
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe:
> https://mail.python.org/mailman/options/python-dev/benhoyt%40gmail.com
> 
> 

-------------- next part -------------- An HTML attachment was scrubbed... URL: From andrei at 5monkeys.se Mon Dec 7 04:37:54 2015 From: andrei at 5monkeys.se (Andrei Fokau) Date: Mon, 7 Dec 2015 10:37:54 +0100 Subject: [Python-Dev] Wrong change log link for 3.5.1 Message-ID:

Hi,

The Changelog link on https://www.python.org/downloads/release/python-351/ has the wrong object id. It should be https://docs.python.org/3.5/whatsnew/changelog.html#python-3-5-1-final

Thanks,
Andrei

--
Andrei Fokau
5 Monkeys Agency AB

Stadsgården 10, 12tr
SE-116 45 Stockholm
tel: 08-5000 66 53
mob: +46 76 3060 888
skype:andrei.fokau
andrei at 5monkeys.se

-------------- next part -------------- An HTML attachment was scrubbed... URL: From nad at acm.org Mon Dec 7 11:16:07 2015 From: nad at acm.org (Ned Deily) Date: Mon, 7 Dec 2015 11:16:07 -0500 Subject: [Python-Dev] Wrong change log link for 3.5.1 In-Reply-To: References: Message-ID:

Thanks, it should really be fixed this time! (I thought I fixed it yesterday.)
--
Ned Deily

> On Dec 7, 2015, at 04:37, Andrei Fokau wrote:
> 
> Hi,
> 
> The Changelog link on https://www.python.org/downloads/release/python-351/ has the wrong object id.
> It should be https://docs.python.org/3.5/whatsnew/changelog.html#python-3-5-1-final
> 
> Thanks,
> Andrei
> 
> --
> Andrei Fokau
> 5 Monkeys Agency AB
> 
> Stadsgården 10, 12tr
> SE-116 45 Stockholm
> tel: 08-5000 66 53
> mob: +46 76 3060 888
> skype:andrei.fokau
> andrei at 5monkeys.se
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: https://mail.python.org/mailman/options/python-dev/nad%40acm.org

-------------- next part -------------- An HTML attachment was scrubbed... URL: From andrei at 5monkeys.se Mon Dec 7 11:33:46 2015 From: andrei at 5monkeys.se (Andrei Fokau) Date: Mon, 7 Dec 2015 17:33:46 +0100 Subject: [Python-Dev] Wrong change log link for 3.5.1 In-Reply-To: References: Message-ID:

Hi Ned,

Thanks again! You did fix it yesterday, but that was for rc1. Is there some automation involved? Maybe a bug somewhere...

Andrei

On Mon, Dec 7, 2015 at 5:16 PM, Ned Deily wrote:
> Thanks, it should really be fixed this time! (I thought I fixed it
> yesterday.)
> 
> --
> Ned Deily
> 
> 
> On Dec 7, 2015, at 04:37, Andrei Fokau wrote:
> 
> Hi,
> 
> The Changelog link on https://www.python.org/downloads/release/python-351/
> has the wrong object id.
> It should be
> https://docs.python.org/3.5/whatsnew/changelog.html#python-3-5-1-final
> 
> Thanks,
> Andrei
> 
> --
> Andrei Fokau
> 5 Monkeys Agency AB
> 
> Stadsgården 10, 12tr
> SE-116 45 Stockholm
> tel: 08-5000 66 53
> mob: +46 76 3060 888
> skype:andrei.fokau
> andrei at 5monkeys.se
> 
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe:
> https://mail.python.org/mailman/options/python-dev/nad%40acm.org
> 
> 

-------------- next part -------------- An HTML attachment was scrubbed... URL: From Nikolaus at rath.org Mon Dec 7 12:46:11 2015 From: Nikolaus at rath.org (Nikolaus Rath) Date: Mon, 07 Dec 2015 09:46:11 -0800 Subject: [Python-Dev] Third milestone of FAT Python In-Reply-To: (Victor Stinner's message of "Fri, 4 Dec 2015 13:49:24 +0100") References: Message-ID: <87poyitan0.fsf@thinkpad.rath.org>

On Dec 04 2015, Victor Stinner wrote:
> Hi,
>
> I implemented 3 new optimizations in FAT Python: loop unrolling, constant
> folding and copying builtin functions to constants. In the previous thread,
> Terry Reedy asked me if the test suite is complete enough to ensure that
> FAT Python doesn't break Python semantics.
[...]

I just wanted to say that I think this is truly great! Thanks for working on this!

Best,
-Nikolaus

--
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

“Time flies like an arrow, fruit flies like a Banana.”

From lac at openend.se Mon Dec 7 15:50:40 2015 From: lac at openend.se (Laura Creighton) Date: Mon, 7 Dec 2015 21:50:40 +0100 Subject: [Python-Dev] Do windows 10 users, like windows 7 users need to install a SP before installing Python will work? Message-ID: <201512072050.tB7Koe6c023677@fido.openend.se>

As webmaster, I am dealing with 3 unhappy would-be python users who have windows 10.
Right now their first problem is that when they click on the big yellow button here: https://www.python.org/downloads/ instead of getting a download of 3.5.1 they get a redirect to https://www.python.org/downloads/windows/ I've tried them on both the Download Windows x86 web-based installer and Download Windows x86-64 web-based installer but still no go, they get the Modify/Repair/Uninstall screen like: http://www2.openend.se/~lac/5796.2.png I do not know how to help them now. Laura From mal at egenix.com Mon Dec 7 15:58:16 2015 From: mal at egenix.com (M.-A. Lemburg) Date: Mon, 7 Dec 2015 21:58:16 +0100 Subject: [Python-Dev] Do windows 10 users, like windows 7 users need to install a SP before installing Python will work? In-Reply-To: <201512072050.tB7Koe6c023677@fido.openend.se> References: <201512072050.tB7Koe6c023677@fido.openend.se> Message-ID: <5665F2E8.6030308@egenix.com> On 07.12.2015 21:50, Laura Creighton wrote: > As webmaster, I am dealing with 3 unhappy would-be python users who have > windows 10. > > Right now their first problem is that when they click on the big > yellow button here: https://www.python.org/downloads/ > > instead of getting a download of 3.5.1 they get a redirect to > https://www.python.org/downloads/windows/ > > I've tried them on both the > Download Windows x86 web-based installer > > and > > Download Windows x86-64 web-based installer > > but still no go, they get the Modify/Repair/Uninstall screen > like: http://www2.openend.se/~lac/5796.2.png > > I do not know how to help them now. Have they already tried the regular installers (as opposed to the web installers) ? Cheers, -- Marc-Andre Lemburg eGenix.com Professional Python Services directly from the Experts (#1, Dec 07 2015) >>> Python Projects, Coaching and Consulting ... http://www.egenix.com/ >>> Python Database Interfaces ... http://products.egenix.com/ >>> Plone/Zope Database Interfaces ... http://zope.egenix.com/ ________________________________________________________________________ ::: We implement business ideas - efficiently in both time and costs ::: eGenix.com Software, Skills and Services GmbH Pastor-Loeh-Str.48 D-40764 Langenfeld, Germany. CEO Dipl.-Math. Marc-Andre Lemburg Registered at Amtsgericht Duesseldorf: HRB 46611 http://www.egenix.com/company/contact/ http://www.malemburg.com/ From brett at python.org Mon Dec 7 16:23:22 2015 From: brett at python.org (Brett Cannon) Date: Mon, 07 Dec 2015 21:23:22 +0000 Subject: [Python-Dev] Do windows 10 users, like windows 7 users need to install a SP before installing Python will work? In-Reply-To: <201512072050.tB7Koe6c023677@fido.openend.se> References: <201512072050.tB7Koe6c023677@fido.openend.se> Message-ID: On Mon, 7 Dec 2015 at 12:51 Laura Creighton wrote: > As webmaster, I am dealing with 3 unhappy would-be python users who have > windows 10. > > Right now their first problem is that when they click on the big > yellow button here: https://www.python.org/downloads/ > > instead of getting a download of 3.5.1 they get a redirect to > https://www.python.org/downloads/windows/ Reported at https://github.com/python/pythondotorg/issues/863 > > > I've tried them on both the > Download Windows x86 web-based installer > > and > > Download Windows x86-64 web-based installer > > but still no go, they get the Modify/Repair/Uninstall screen > like: http://www2.openend.se/~lac/5796.2.png > > I do not know how to help them now. > Did they have any previous installations? I'm not aware of any specific version requirements. 
-Brett

> 
> Laura
> 
> 
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe:
> https://mail.python.org/mailman/options/python-dev/brett%40python.org
> 

-------------- next part -------------- An HTML attachment was scrubbed... URL: From steve.dower at python.org Mon Dec 7 16:24:57 2015 From: steve.dower at python.org (Steve Dower) Date: Mon, 7 Dec 2015 13:24:57 -0800 Subject: [Python-Dev] Do windows 10 users, like windows 7 users need to install a SP before installing Python will work? In-Reply-To: <201512072050.tB7Koe6c023677@fido.openend.se> References: <201512072050.tB7Koe6c023677@fido.openend.se> Message-ID: <5665F929.9040501@python.org>

On 07Dec2015 1250, Laura Creighton wrote:
> As webmaster, I am dealing with 3 unhappy would-be python users who have
> windows 10.
>
> Right now their first problem is that when they click on the big
> yellow button here: https://www.python.org/downloads/
>
> instead of getting a download of 3.5.1 they get a redirect to
> https://www.python.org/downloads/windows/

There were a few web site glitches. I thought I saw Ned dealing with some earlier?

> I've tried them on both the
> Download Windows x86 web-based installer
>
> and
>
> Download Windows x86-64 web-based installer
>
> but still no go, they get the Modify/Repair/Uninstall screen
> like: http://www2.openend.se/~lac/5796.2.png

This means they've already installed it. What is the actual problem they're having?

> I do not know how to help them now.
>
> Laura

From steve.dower at python.org Mon Dec 7 17:03:41 2015 From: steve.dower at python.org (Steve Dower) Date: Mon, 7 Dec 2015 14:03:41 -0800 Subject: [Python-Dev] Do windows 10 users, like windows 7 users need to install a SP before installing Python will work? In-Reply-To: <5665F929.9040501@python.org> References: <201512072050.tB7Koe6c023677@fido.openend.se> <5665F929.9040501@python.org> Message-ID: <5666023D.3090805@python.org>

On 07Dec2015 1324, Steve Dower wrote:
> On 07Dec2015 1250, Laura Creighton wrote:
>> As webmaster, I am dealing with 3 unhappy would-be python users who have
>> windows 10.
>>

Not directly related to this thread, but I just pushed an update to the Windows installers for 3.5.1. (Should avoid people being confused when the py.exe launcher is removed on upgrade from 3.5.0.)

There's a chance that people installing over the next 5-10 minutes will see issues. If anyone asks, just let them know to clear their download cache and redownload the installer.

Apologies in advance for the extra support requests this will generate, and thank you to everyone who helps out by patiently dealing with them.

Cheers,
Steve

P.S. Until the web site is updated, the new hashes and file sizes are:

File                                MD5                               Size
python-3.5.1-webinstall.exe         6dfcc4012c96d84f0a83d00cfddf8bb8    937680
python-3.5.1.exe                    4d6fdb5c3630cf60d457c9825f69b4d7  28743504
python-3.5.1-embed-win32.zip        6e783d8fd44570315d488b9a9881ff10   6023182
python-3.5.1-amd64-webinstall.exe   6a14ac8dfb70017c07b8f6cb622daa1a    963360
python-3.5.1-amd64.exe              863782d22a521d8ea9f3cf41db1e484d  29627072
python-3.5.1-embed-amd64.zip        b07d15f515882452684e0551decad242   6832590
python351.chm                       cc3e73cbe2d71920483923b731710391   7719456

From steve.dower at python.org Mon Dec 7 17:13:04 2015 From: steve.dower at python.org (Steve Dower) Date: Mon, 7 Dec 2015 14:13:04 -0800 Subject: [Python-Dev] Do windows 10 users, like windows 7 users need to install a SP before installing Python will work?
In-Reply-To: <5666023D.3090805@python.org> References: <201512072050.tB7Koe6c023677@fido.openend.se> <5665F929.9040501@python.org> <5666023D.3090805@python.org> Message-ID: <56660470.40806@python.org> On 07Dec2015 1403, Steve Dower wrote: > On 07Dec2015 1324, Steve Dower wrote: >> On 07Dec2015 1250, Laura Creighton wrote: >>> As webmaster, I am dealing with 3 unhappy would-be python users who have >>> windows 10. >>> > > > Not directly related to this thread, but I just pushed an update to the > Windows installers for 3.5.1. (Should avoid people being confused when > the py.exe launcher is removed on upgrade from 3.5.0.) To be clearer here, people won't be confused because the launcher will no longer be removed (though it might be *added*, but that's less confusing and easier to fix). From lac at openend.se Mon Dec 7 22:36:19 2015 From: lac at openend.se (Laura Creighton) Date: Tue, 08 Dec 2015 04:36:19 +0100 Subject: [Python-Dev] Do windows 10 users, like windows 7 users need to install a SP before installing Python will work? In-Reply-To: <5665F929.9040501@python.org> References: <201512072050.tB7Koe6c023677@fido.openend.se> <5665F929.9040501@python.org> Message-ID: <201512080336.tB83aJgv018155@fido.openend.se> In a message of Mon, 07 Dec 2015 13:24:57 -0800, Steve Dower writes: >On 07Dec2015 1250, Laura Creighton wrote: >> As webmaster, I am dealing with 3 unhappy would-be python users who have >> windows 10. >> >> Right now their first problem is that when they click on the big >> yellow button here: https://www.python.org/downloads/ >> >> instead of getting a download of 3.5.1 they get a redirect to >> https://www.python.org/downloads/windows/ > >There were a few web site glitches. I thought I saw Ned dealing with >some earlier? > >> I've tried them on both the >> Download Windows x86 web-based installer >> >> and >> >> Download Windows x86-64 web-based installer >> >> but still no go, they get the Modify/Repair/Uninstall screen >> like: http://www2.openend.se/~lac/5796.2.png > >This means they've already installed it. What is the actual problem >they're having? It is all they are getting. py -3.5 doesn't get them an interpreter. I haven't been able to get them to find a traceback, either. Laura From lac at openend.se Mon Dec 7 22:37:49 2015 From: lac at openend.se (Laura Creighton) Date: Tue, 08 Dec 2015 04:37:49 +0100 Subject: [Python-Dev] Do windows 10 users, like windows 7 users need to install a SP before installing Python will work? In-Reply-To: <5665F2E8.6030308@egenix.com> References: <201512072050.tB7Koe6c023677@fido.openend.se> <5665F2E8.6030308@egenix.com> Message-ID: <201512080337.tB83bnuM018263@fido.openend.se> In a message of Mon, 07 Dec 2015 21:58:16 +0100, "M.-A. Lemburg" writes: >On 07.12.2015 21:50, Laura Creighton wrote: >> As webmaster, I am dealing with 3 unhappy would-be python users who have >> windows 10. >> >> Right now their first problem is that when they click on the big >> yellow button here: https://www.python.org/downloads/ >> >> instead of getting a download of 3.5.1 they get a redirect to >> https://www.python.org/downloads/windows/ >> >> I've tried them on both the >> Download Windows x86 web-based installer >> >> and >> >> Download Windows x86-64 web-based installer >> >> but still no go, they get the Modify/Repair/Uninstall screen >> like: http://www2.openend.se/~lac/5796.2.png >> >> I do not know how to help them now. > >Have they already tried the regular installers (as opposed to >the web installers) ? No. 
I really was hoping to not have to walk them through this one, as I don't have a windows machine and have never done it myself. (And my internet connection went away for several hours, so now they are likely asleep. As I should be.) Laura From ncoghlan at gmail.com Tue Dec 8 03:55:21 2015 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 8 Dec 2015 18:55:21 +1000 Subject: [Python-Dev] Request for pronouncement on PEP 493 (HTTPS verification backport guidance) In-Reply-To: <20151130153230.2117981c@anarchist.wooz.org> References: <20151123210557.57b7e732@limelight.wooz.org> <201511241427.tAOERFJC028474@fido.openend.se> <56547C68.8060506@egenix.com> <20151125145714.2a02651d@limelight.wooz.org> <20151126121510.32c9a500@anarchist.wooz.org> <20151130153230.2117981c@anarchist.wooz.org> Message-ID: (Oops, I had a version of this reply sitting in my Drafts folder for a week, and only noticed after pushing the most recent PEP update that it had never been sent) On 1 December 2015 at 06:32, Barry Warsaw wrote: > On Nov 27, 2015, at 04:04 PM, Nick Coghlan wrote: > >>New draft pushed: https://hg.python.org/peps/rev/f602a47ea795 >> >>This is a significant rewrite that switches the PEP to a Standards Track PEP >>proposing two new features for 2.7.12+: an "ssl._verify_https_certificates()" >>configuration function, and a "PYTHONHTTPSVERIFY" environment variable >>(although writing them together like that makes me wonder if the latter >>should now be "PYTHONVERIFYHTTPS" instead). > > Thanks for this, and +1 on Stephen's suggested name change (which you've > already pushed). > > Two comments: the PEP still describes the configuration file implementation. > Is this slated for 2.7.12 also? If not, should it just be dropped from the > PEP? That recommendation is still needed to backport PEP 476 itself to versions prior to 2.7.9 - otherwise there's no way to flip the default from "don't verify" to "verify" for the entire Python installation. It may be that the system Python in RHEL/CentOS 7 and derivatives ends up being the only case where that happens (since that's the only instance I'm aware of with a 2024 support deadline, rather than 2019 or earlier), but if anyone else does do it, it would be preferable if they adopted the same approach to configuring it. However, I just pushed an update that reverses the presentation order of the two main backporting sections: https://hg.python.org/peps/rev/17e0e36cbc19 The original order came from the point where this was just an Informational PEP suggesting some backporting techniques, but now that it suggests some actual upstream changes for 2.7.12+, it makes more sense to cover those first, and then be more explicit that it's acceptable to skip implementing the rest of the PEP entirely. Accordingly, that change also includes the following new paragraph in the section on the PEP 476 backport: ================ This PEP doesn't take a position on whether or not this particular change is a good idea - rather, it suggests that *if* a redistributor chooses to go down the path of making the default behaviour configurable in a version of Python older than Python 2.7.9, then maintaining a consistent approach across redistributors would be beneficial for users. ================ > I'd mildly prefer no default value for `enable` in > _https_verify_certificates(). I'd have preferred a keyword-only argument, but > of course this is Python 2. 
> Instead, I'd like to force passing True or False > (and document using `enable=True` or `enable=False`) and not rely on a default > argument. But I'm only +0 on that detail. My rationale for giving it a default is to make it marginally more straightforward to turn verification on than it is to turn it off. That's going to be most relevant in the pre-2.7.9 backport case, since in 2.7.9+ the HTTPS certificate verification will already be on by default. Regards, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From lac at openend.se Wed Dec 9 05:25:28 2015 From: lac at openend.se (Laura Creighton) Date: Wed, 09 Dec 2015 11:25:28 +0100 Subject: [Python-Dev] Do windows 10 users, like windows 7 users need to install a SP before installing Python will work? In-Reply-To: <5666023D.3090805@python.org> References: <201512072050.tB7Koe6c023677@fido.openend.se> <5665F929.9040501@python.org> <5666023D.3090805@python.org> Message-ID: <201512091025.tB9APSPT014669@fido.openend.se> In a message of Mon, 07 Dec 2015 14:03:41 -0800, Steve Dower writes: >On 07Dec2015 1324, Steve Dower wrote: >> On 07Dec2015 1250, Laura Creighton wrote: >>> As webmaster, I am dealing with 3 unhappy would-be python users who have >>> windows 10. >>> > > >Not directly related to this thread, but I just pushed an update to the >Windows installers for 3.5.1. (Should avoid people being confused when >the py.exe launcher is removed on upgrade from 3.5.0.) > >There's a chance that people installing over the next 5-10 minutes will >see issues. If anyone asks, just let them know to clear their download >cache and redownload the installer. > >Apologies in advance for the extra support requests this will generate, >and thank you to everyone who helps out by patiently dealing with them. > >Cheers, >Steve I am happy to report that one of my outstanding problems was just fixed by this. It seems that without the launcher he could not find his Python. Many other people who just couldn't get things to work with Windows 10 and 3.5 report that 3.5.1 fixed things for them, once the download button started working for them. And one person, who was a clear member of the would-be scientific python community, I just pointed at Anaconda Python, and he reports happiness (which is good, because I never understood from his mail what his problem was in the first place.) I still have one who is having problems. We are working on a manual install and he is hitting this: https://bugs.python.org/issue25144 But, all in all, I can report a great increase in happiness in my corner of the world. Thank you and Congratulations. Laura From talh555 at walla.co.il Wed Dec 9 06:43:37 2015 From: talh555 at walla.co.il (=?UTF-8?b?15jXnCDXlw==?=) Date: Wed, 09 Dec 2015 13:43:37 +0200 Subject: [Python-Dev] A function for Find-Replace in lists Message-ID: <~000566813E98F7BEA00077C@walla.co.il> An HTML attachment was scrubbed... URL: From tjreedy at udel.edu Wed Dec 9 10:52:43 2015 From: tjreedy at udel.edu (Terry Reedy) Date: Wed, 9 Dec 2015 10:52:43 -0500 Subject: [Python-Dev] A function for Find-Replace in lists In-Reply-To: <~000566813E98F7BEA00077C@walla.co.il> References: <~000566813E98F7BEA00077C@walla.co.il> Message-ID: On 12/9/2015 6:43 AM, טל ח wrote: > > I think it could be helpful for everyone if the function proposed by > user "SomethingSomething" can be added as built-in in Python > > See both question by "SomethingSomething" and answer to himself with > implementation..
> > http://stackoverflow.com/questions/34174643/python-find-replace-on-lists Please submit a specific proposal to python-ideas list. Use normal left-justified plain text formatting. -- Terry Jan Reedy From abarnert at yahoo.com Wed Dec 9 11:47:42 2015 From: abarnert at yahoo.com (Andrew Barnert) Date: Wed, 9 Dec 2015 08:47:42 -0800 Subject: [Python-Dev] A function for Find-Replace in lists In-Reply-To: <~000566813E98F7BEA00077C@walla.co.il> References: <~000566813E98F7BEA00077C@walla.co.il> Message-ID: On Dec 9, 2015, at 03:43, טל ח wrote: > > Hi, > > I think it could be helpful for everyone if the function proposed by user "SomethingSomething" can be added as built-in > in Python Why? When he was asked what use it might have, he didn't have an answer. Also, notice that the answer he provided doesn't actually do what he asked for; as he himself points out, it's different in at least two ways from his stated requirements. So, which one of the two do you want? And why is that one, rather than the other, useful? Also, why would you call this list_replace? That sounds like a function that would replace elements with elements, not make a copy with elements replaced by new lists flattened into place. Also, why would you only want this for lists, rather than for any iterable? And what can it do that this more general and completely trivial function can't:

def flattening_subst(iterable, value, sequence):
    for x in iterable:
        if x == value:
            yield from sequence
        else:
            yield x

If you know of another language whose standard library has an equivalent, that might narrow down exactly what the requirements are, point at an implementation that actually meets those requirements, and probably provide examples that hint at the point of having this function in the first place. > See both question by "SomethingSomething" and answer to himself with implementation.. > > http://stackoverflow.com/questions/34174643/python-find-replace-on-lists > > > Thanks From status at bugs.python.org Fri Dec 11 12:08:33 2015 From: status at bugs.python.org (Python tracker) Date: Fri, 11 Dec 2015 18:08:33 +0100 (CET) Subject: [Python-Dev] Summary of Python tracker Issues Message-ID: <20151211170833.B595156677@psf.upfronthosting.co.za> ACTIVITY SUMMARY (2015-12-04 - 2015-12-11) Python tracker at http://bugs.python.org/ To view or respond to any of the issues listed below, click on the issue. Do NOT respond to this message.
Issues counts and deltas: open 5297 ( +8) closed 32303 (+37) total 37600 (+45) Open issues with patches: 2332 Issues opened (28) ================== #25624: shutil.make_archive makes invalid directory entries http://bugs.python.org/issue25624 reopened by serhiy.storchaka #25798: Update python.org installers to use OpenSSL 1.0.2e http://bugs.python.org/issue25798 opened by ned.deily #25799: 2.7.11rc1 not added to Win10 app list (start menu) http://bugs.python.org/issue25799 opened by terry.reedy #25801: ResourceWarning in test_zipfile64 http://bugs.python.org/issue25801 opened by SilentGhost #25802: Finish deprecating load_module() http://bugs.python.org/issue25802 opened by brett.cannon #25803: pathlib.Path('/').mkdir() raises wrong error type http://bugs.python.org/issue25803 opened by Daniel Lepage #25804: Make Profile.print_stats support sorting by mutiple values http://bugs.python.org/issue25804 opened by wdv4758h #25805: Failure in test_pkgutil run from command-line http://bugs.python.org/issue25805 opened by SilentGhost #25809: "Invalid" tests on locales http://bugs.python.org/issue25809 opened by bapt #25810: Python 3 documentation for eval is incorrect http://bugs.python.org/issue25810 opened by aroberge #25812: locale.nl_langinfo() can't decode value http://bugs.python.org/issue25812 opened by serhiy.storchaka #25813: co_flags section of inspect module docs out of date http://bugs.python.org/issue25813 opened by BreamoreBoy #25817: modsupport: 'countformat' does not handle strings without brac http://bugs.python.org/issue25817 opened by Myron Walker #25821: Documentation for threading.enumerate / threading.Thread.is_al http://bugs.python.org/issue25821 opened by anthonygreen #25822: Add docstrings to fields of urllib.parse results http://bugs.python.org/issue25822 opened by serhiy.storchaka #25823: Speed-up oparg decoding on little-endian machines http://bugs.python.org/issue25823 opened by rhettinger #25824: 32-bit 2.7.11 installer creates registry keys that are incompa http://bugs.python.org/issue25824 opened by aundro #25825: AIX shared library extension modules installation broken http://bugs.python.org/issue25825 opened by David.Edelsohn #25827: Support ICC in configure http://bugs.python.org/issue25827 opened by zach.ware #25828: PyCode_Optimize() (peephole optimizer) doesn't handle Keyboard http://bugs.python.org/issue25828 opened by haypo #25829: Mixing multiprocessing pool and subprocess may create zombie p http://bugs.python.org/issue25829 opened by amikoren at yahoo.com #25830: _TypeAlias: Discrepancy between docstring and behavior http://bugs.python.org/issue25830 opened by flying sheep #25833: pyvenv: venvs cannot be moved because activate scripts hard-co http://bugs.python.org/issue25833 opened by moorecm #25834: getpass falls back when sys.stdin is changed http://bugs.python.org/issue25834 opened by Drekin #25836: Documentation of MAKE_FUNCTION is still incorrect http://bugs.python.org/issue25836 opened by freakboy3742 #25838: Lib/httplib.py: Resend http request on server close connection http://bugs.python.org/issue25838 opened by gmixo #25841: In FancyURLopener error in example with http address. http://bugs.python.org/issue25841 opened by Denis Savenko #25842: Installer does not set permissions correctly? http://bugs.python.org/issue25842 opened by lac Most recent 15 issues with no replies (15) ========================================== #25842: Installer does not set permissions correctly? 
http://bugs.python.org/issue25842 #25841: In FancyURLopener error in example with http address. http://bugs.python.org/issue25841 #25836: Documentation of MAKE_FUNCTION is still incorrect http://bugs.python.org/issue25836 #25834: getpass falls back when sys.stdin is changed http://bugs.python.org/issue25834 #25830: _TypeAlias: Discrepancy between docstring and behavior http://bugs.python.org/issue25830 #25812: locale.nl_langinfo() can't decode value http://bugs.python.org/issue25812 #25805: Failure in test_pkgutil run from command-line http://bugs.python.org/issue25805 #25802: Finish deprecating load_module() http://bugs.python.org/issue25802 #25791: Raise an ImportWarning when __spec__.parent/__package__ isn't http://bugs.python.org/issue25791 #25785: TimedRotatingFileHandler missing rotations http://bugs.python.org/issue25785 #25776: More compact pickle of iterators etc http://bugs.python.org/issue25776 #25774: [benchmarks] Adjust to allow uploading benchmark data to codes http://bugs.python.org/issue25774 #25773: Deprecate deleting with PyObject_SetAttr, PyObject_SetAttrStri http://bugs.python.org/issue25773 #25766: __bytes__ doesn't work for str subclasses http://bugs.python.org/issue25766 #25753: Reference leaks in test_smtplib http://bugs.python.org/issue25753 Most recent 15 issues waiting for review (15) ============================================= #25838: Lib/httplib.py: Resend http request on server close connection http://bugs.python.org/issue25838 #25827: Support ICC in configure http://bugs.python.org/issue25827 #25823: Speed-up oparg decoding on little-endian machines http://bugs.python.org/issue25823 #25822: Add docstrings to fields of urllib.parse results http://bugs.python.org/issue25822 #25809: "Invalid" tests on locales http://bugs.python.org/issue25809 #25804: Make Profile.print_stats support sorting by mutiple values http://bugs.python.org/issue25804 #25801: ResourceWarning in test_zipfile64 http://bugs.python.org/issue25801 #25794: __setattr__ does not always overload operators http://bugs.python.org/issue25794 #25789: py launcher stderr is not piped to subprocess.Popen.stderr http://bugs.python.org/issue25789 #25786: contextlib.ExitStack introduces a cycle in exception __context http://bugs.python.org/issue25786 #25782: CPython hangs on error __context__ set to the error itself http://bugs.python.org/issue25782 #25780: Add support for CAN_RAW_JOIN_FILTERS http://bugs.python.org/issue25780 #25778: winreg.EnumValue does not truncate strings correctly http://bugs.python.org/issue25778 #25776: More compact pickle of iterators etc http://bugs.python.org/issue25776 #25774: [benchmarks] Adjust to allow uploading benchmark data to codes http://bugs.python.org/issue25774 Top 10 most discussed issues (10) ================================= #25810: Python 3 documentation for eval is incorrect http://bugs.python.org/issue25810 10 msgs #25823: Speed-up oparg decoding on little-endian machines http://bugs.python.org/issue25823 9 msgs #24682: Add Quick Start: Communications section to devguide http://bugs.python.org/issue24682 8 msgs #25089: Can't run Python Launcher on Windows http://bugs.python.org/issue25089 7 msgs #25638: Verify the etree_parse and etree_iterparse benchmarks are work http://bugs.python.org/issue25638 7 msgs #25809: "Invalid" tests on locales http://bugs.python.org/issue25809 7 msgs #25701: Document that tp_setattro and tp_setattr are used for deleting http://bugs.python.org/issue25701 6 msgs #25817: modsupport: 'countformat' does not handle strings without brac 
http://bugs.python.org/issue25817 6 msgs #25698: The copy_reg module becomes unexpectedly empty in test_cpickle http://bugs.python.org/issue25698 5 msgs #25716: typeobject.c call_method & call_maybe can leak references on ' http://bugs.python.org/issue25716 5 msgs Issues closed (38) ================== #12509: test_gdb fails on debug build when builddir != srcdir http://bugs.python.org/issue12509 closed by martin.panter #14285: Traceback wrong on ImportError while executing a package http://bugs.python.org/issue14285 closed by ncoghlan #15858: tarfile missing entries due to omitted uid/gid fields http://bugs.python.org/issue15858 closed by martin.panter #16458: subprocess.py throw "The handle is invalid" error on duplicati http://bugs.python.org/issue16458 closed by eryksun #17772: test_gdb doesn't detect a gdb built with python3.3 (or higher) http://bugs.python.org/issue17772 closed by martin.panter #21240: Add an abstactmethod directive to the Python ReST domain http://bugs.python.org/issue21240 closed by berker.peksag #22341: Python 3 crc32 documentation clarifications http://bugs.python.org/issue22341 closed by martin.panter #22758: Regression in Python 3.2 cookie parsing http://bugs.python.org/issue22758 closed by Tim.Graham #23936: Wrong references to deprecated find_module instead of find_spe http://bugs.python.org/issue23936 closed by brett.cannon #24903: Do not verify destdir argument to compileall http://bugs.python.org/issue24903 closed by r.david.murray #24934: django_v2 benchmark not working in Python 3.6 http://bugs.python.org/issue24934 closed by brett.cannon #25039: Docs: Link to Stackless Python in Design and History FAQ secti http://bugs.python.org/issue25039 closed by zach.ware #25492: subprocess with redirection fails after FreeConsole http://bugs.python.org/issue25492 closed by eryksun #25500: docs claim __import__ checked for in globals, but IMPORT_NAME http://bugs.python.org/issue25500 closed by brett.cannon #25591: improve test coverage for the imaplib http://bugs.python.org/issue25591 closed by maciej.szulik #25715: Python 3.5.1 installer shows wrong upgrade path http://bugs.python.org/issue25715 closed by larry #25717: tempfile.TemporaryFile fails when dir option set to directory http://bugs.python.org/issue25717 closed by martin.panter #25764: PyObject_Call() is called with an exception set in subprocess http://bugs.python.org/issue25764 closed by martin.panter #25771: importlib: '.submodule' is not a relative name (no leading dot http://bugs.python.org/issue25771 closed by brett.cannon #25795: test_fork1 cannot be run directly: ./python Lib/test/test_fork http://bugs.python.org/issue25795 closed by python-dev #25800: errors running test_capi from command line http://bugs.python.org/issue25800 closed by python-dev #25806: ResourceWarning in test_tasks http://bugs.python.org/issue25806 closed by SilentGhost #25807: test_multiprocessing_fork.test_mymanager fails and hangs http://bugs.python.org/issue25807 closed by SilentGhost #25808: The Python Tutorial 5.3. 
Tuples and Sequences http://bugs.python.org/issue25808 closed by SilentGhost #25811: return from random.shuffle http://bugs.python.org/issue25811 closed by serhiy.storchaka #25814: Propagate all errors from ElementTree.iterparse http://bugs.python.org/issue25814 closed by serhiy.storchaka #25815: Improper subprocess output of arguments with braces in them on http://bugs.python.org/issue25815 closed by ebarry #25816: https://www.python.org/downloads/ not working for 3.5.1 for wi http://bugs.python.org/issue25816 closed by brett.cannon #25818: asyncio: If protocol_factory raises an error, the connection c http://bugs.python.org/issue25818 closed by ebarry #25819: print "Hi" in python 3 exception handling doesn't work http://bugs.python.org/issue25819 closed by ebarry #25820: Clean up run_gdb() calls http://bugs.python.org/issue25820 closed by martin.panter #25826: imaplib can't process lines after starttls http://bugs.python.org/issue25826 closed by David Wahlund #25831: dbm.gnu leaks file descriptors on .reorganize() http://bugs.python.org/issue25831 closed by ischwabacher #25832: Document weird behavior of `finally` when it has `break` in it http://bugs.python.org/issue25832 closed by r.david.murray #25835: httplib uses print for debugging http://bugs.python.org/issue25835 closed by r.david.murray #25837: Errors when Installing Anaconda 3 http://bugs.python.org/issue25837 closed by r.david.murray #25839: negative zero components are ignored in complex number literal http://bugs.python.org/issue25839 closed by mark.dickinson #25840: Allow `False` to be passed to `filter` http://bugs.python.org/issue25840 closed by r.david.murray From arigo at tunes.org Tue Dec 15 05:46:03 2015 From: arigo at tunes.org (Armin Rigo) Date: Tue, 15 Dec 2015 11:46:03 +0100 Subject: [Python-Dev] "python.exe is not a valid Win32 app" In-Reply-To: <201512011913.tB1JDYAv007962@fido.openend.se> References: <975950385.13382217.1448980225872.JavaMail.yahoo.ref@mail.yahoo.com> <975950385.13382217.1448980225872.JavaMail.yahoo@mail.yahoo.com> <201512011913.tB1JDYAv007962@fido.openend.se> Message-ID: Hi all, On Tue, Dec 1, 2015 at 8:13 PM, Laura Creighton wrote: > Python 3.5 is not supported on windows XP. Upgrade your OS or > stick with 3.4 Maybe this information should be written down somewhere more official? I can't find it in any of these pages: https://www.python.org/downloads/windows/ https://www.python.org/downloads/release/python-350/ https://www.python.org/downloads/release/python-351/ https://docs.python.org/3/using/windows.html It is found on the following page, to which googling "python 3.5 windows XP" does not point: https://docs.python.org/3.5/whatsnew/3.5.html#unsupported-operating-systems Instead, the google query above returns various threads on stackoverflow and elsewhere where users wonder about that very question. A bientôt, Armin. From victor.stinner at gmail.com Tue Dec 15 08:04:43 2015 From: victor.stinner at gmail.com (Victor Stinner) Date: Tue, 15 Dec 2015 14:04:43 +0100 Subject: [Python-Dev] Python semantic: Is it ok to replace not x == y with x != y? (no) Message-ID: Hi, I implemented more constant folding optimizations in my FAT Python project, but it looks like I made a subtle change in the Python semantic. Replacing "not x == y" with "x != y" changes the behaviour of Python. For example, this optimization breaks test_unittest because unittest.mock._Call implements __eq__() but not __ne__(). Is it expected that "not x.__eq__(y)" can be different than "x.__ne__(y)"?
Is it part of the Python semantic? IMHO it's a bug in the unittest.mock module, but it's "acceptable" because "it just works" :-) So FAT Python must not replace "not x == y" with "x != y" to not break the code. Should Python emit a warning when __eq__() is implemented but not __ne__()? Should Python be modified to call "not __eq__()" when __ne__() is not implemented? For me, it can be an annoying and subtle bug, hard to track. Victor From victor.stinner at gmail.com Tue Dec 15 08:11:24 2015 From: victor.stinner at gmail.com (Victor Stinner) Date: Tue, 15 Dec 2015 14:11:24 +0100 Subject: [Python-Dev] Python semantic: Is it ok to replace not x == y with x != y? (no) In-Reply-To: References: Message-ID: Oh, I sent my email too quickly, I forgot to ask for other operations. Currently, FAT implements the following optimizations:

* "not (x == y)" replaced with "x != y"
* "not (x != y)" replaced with "x == y"
* "not (x < y)" replaced with "x >= y"
* "not (x <= y)" replaced with "x > y"
* "not (x > y)" replaced with "x <= y"
* "not (x >= y)" replaced with "x < y"
* "not (x in y)" replaced with "x not in y"
* "not (x not in y)" replaced with "x in y"
* "not (x is y)" replaced with "x is not y"
* "not (x is not y)" replaced with "x is y"

I guess that the optimizations on "in" and "is" operators are fine, but optimizations on all other operations must be removed to not break the Python semantic. Python has also some funny objects like math.nan:

>>> math.nan != math.nan
True
>>> math.nan == math.nan
False
>>> math.nan < math.nan
False
>>> math.nan > math.nan
False
>>> math.nan <= math.nan
False
>>> math.nan >= math.nan
False
>>> math.nan != 1.0
True
>>> math.nan == 1.0
False
>>> math.nan <= 1.0
False
>>> math.nan < 1.0
False
>>> math.nan >= 1.0
False

So "not(math.nan < 1.0)" is different than "math.nan >= 1.0"... Victor 2015-12-15 14:04 GMT+01:00 Victor Stinner : > Hi, > > I implemented more constant folding optimizations in my FAT Python > project, but it looks like I made a subtle change in the Python > semantic. > > Replacing "not x == y" with "x != y" changes the behaviour of Python. > For example, this optimization breaks test_unittest because > unittest.mock._Call implements __eq__() but not __ne__(). > > Is it expected that "not x.__eq__(y)" can be different than > "x.__ne__(y)"? Is it part of the Python semantic? > > IMHO it's a bug in the unittest.mock module, but it's "acceptable" > because "it just works" :-) So FAT Python must not replace "not x == > y" with "x != y" to not break the code. > > Should Python emit a warning when __eq__() is implemented but not __ne__()? > > Should Python be modified to call "not __eq__()" when __ne__() is not > implemented? > > For me, it can be an annoying and subtle bug, hard to track. > > Victor From drekin at gmail.com Tue Dec 15 09:01:57 2015 From: drekin at gmail.com (=?UTF-8?B?QWRhbSBCYXJ0b8Wh?=) Date: Tue, 15 Dec 2015 15:01:57 +0100 Subject: [Python-Dev] Python semantic: Is it ok to replace not x == y with x != y? (no) Message-ID: Hello, the comparisons >=, <=, >, < cannot be optimized this way. Not every order is a total order. For example, sets a = {1, 2} and b = {2, 3} are incomparable (in the sense that both a >= b and a <= b is False), and it is no pathology. Regards, Adam Bartoš
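To make the pitfalls in this thread concrete, here is a small self-contained demonstration (not from the original thread; the class name Weird is illustrative): a type whose __ne__() is deliberately inconsistent with __eq__(), plus the incomparable-sets and NaN cases, all of which break the "not (x op y)" rewrites listed above.

import math

class Weird:
    """Deliberately inconsistent: __ne__ does not negate __eq__."""
    def __eq__(self, other):
        return True
    def __ne__(self, other):
        return True

x, y = Weird(), Weird()
print(not (x == y))  # False
print(x != y)        # True, so "not (x == y)" differs from "x != y"

a, b = {1, 2}, {2, 3}   # incomparable under the subset partial order
print(not (a < b))      # True
print(a >= b)           # False, so "not (x < y)" differs from "x >= y"

print(not (math.nan < 1.0))  # True
print(math.nan >= 1.0)       # False, the same asymmetry for NaN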
From stephane at wirtel.be Tue Dec 15 08:54:05 2015 From: stephane at wirtel.be (Stephane Wirtel) Date: Tue, 15 Dec 2015 14:54:05 +0100 Subject: [Python-Dev] Urgent: Last call for the CfP of PythonFOSDEM 2016 Message-ID: <20151215135405.GA32632@sg1> Hi all Because the deadline is imminent and because we have only received some proposals, we have extended the current deadline. The new submission deadline is 2015-12-20. Call For Proposals ================== This is the official call for sessions for the Python devroom at FOSDEM 2016. FOSDEM is the Free and Open source Software Developers' European Meeting, a free and non-commercial two-day week-end that offers open source contributors a place to meet, share ideas and collaborate. It's the biggest event in Europe with +5000 hackers, +400 speakers. For this edition, Python will be represented by its Community. If you want to discuss with a lot of Python Users, it's the place to be! Important dates =============== * Submission deadlines: 2015-12-20 * Acceptance notifications: 2015-12-24 Practical ========= * The duration for talks will be 30 minutes, including presentations and questions and answers. * Presentation can be recorded and streamed, sending your proposal implies giving permission to be recorded. * A mailing list for the Python devroom is available for discussions about devroom organisation. You can register at this address: https://lists.fosdem.org/listinfo/python-devroom How to submit ============= All submissions are made in the Pentabarf event planning tool at https://penta.fosdem.org/submission/FOSDEM16 When submitting your talk in Pentabarf, make sure to select the Python devroom as the Track. Of course, if you already have a user account, please reuse it. Questions ========= Any questions, please send an email to info AT python-fosdem DOT org Thank you for submitting your sessions and see you soon in Brussels to talk about Python. If you want to keep informed for this edition, you can follow our twitter account @PythonFOSDEM. * FOSDEM 2016: https://fosdem.org/2016 * Python Devroom: http://python-fosdem.org * Twitter: https://twitter.com/PythonFOSDEM Thank you so much, Stephane -- Stéphane Wirtel - http://wirtel.be - @matrixise
From lac at openend.se Tue Dec 15 09:41:35 2015 From: lac at openend.se (Laura Creighton) Date: Tue, 15 Dec 2015 15:41:35 +0100 Subject: [Python-Dev] "python.exe is not a valid Win32 app" In-Reply-To: References: <975950385.13382217.1448980225872.JavaMail.yahoo.ref@mail.yahoo.com> <975950385.13382217.1448980225872.JavaMail.yahoo@mail.yahoo.com> <201512011913.tB1JDYAv007962@fido.openend.se> Message-ID: <201512151441.tBFEfZxa031982@fido.openend.se> In a message of Tue, 15 Dec 2015 11:46:03 +0100, Armin Rigo writes: >Hi all, > >On Tue, Dec 1, 2015 at 8:13 PM, Laura Creighton wrote: >> Python 3.5 is not supported on windows XP. Upgrade your OS or >> stick with 3.4 > >Maybe this information should be written down somewhere more official? > I can't find it in any of these pages: > >https://www.python.org/downloads/windows/ >https://www.python.org/downloads/release/python-350/ >https://www.python.org/downloads/release/python-351/ >https://docs.python.org/3/using/windows.html > >It is found on the following page, to which googling "python 3.5 >windows XP" does not point: > >https://docs.python.org/3.5/whatsnew/3.5.html#unsupported-operating-systems > >Instead, the google query above returns various threads on >stackoverflow and elsewhere where users wonder about that very >question. > > >A bientôt, > >Armin. I already asked for that, on the bug tracker but maybe I picked the wrong issue tracker for that request. So now I have made one here, too. https://github.com/python/pythondotorg/issues/867 Laura From rdmurray at bitdance.com Tue Dec 15 10:14:00 2015 From: rdmurray at bitdance.com (R. David Murray) Date: Tue, 15 Dec 2015 10:14:00 -0500 Subject: [Python-Dev] "python.exe is not a valid Win32 app" In-Reply-To: <201512151441.tBFEfZxa031982@fido.openend.se> References: <975950385.13382217.1448980225872.JavaMail.yahoo.ref@mail.yahoo.com> <975950385.13382217.1448980225872.JavaMail.yahoo@mail.yahoo.com> <201512011913.tB1JDYAv007962@fido.openend.se> <201512151441.tBFEfZxa031982@fido.openend.se> Message-ID: <20151215151402.189671B10004@webabinitio.net> On Tue, 15 Dec 2015 15:41:35 +0100, Laura Creighton wrote: > In a message of Tue, 15 Dec 2015 11:46:03 +0100, Armin Rigo writes: > >Hi all, > > > >On Tue, Dec 1, 2015 at 8:13 PM, Laura Creighton wrote: > >> Python 3.5 is not supported on windows XP. Upgrade your OS or > >> stick with 3.4 > > > >Maybe this information should be written down somewhere more official?
> > I can't find it in any of these pages: > > > >https://www.python.org/downloads/windows/ > >https://www.python.org/downloads/release/python-350/ > >https://www.python.org/downloads/release/python-351/ > >https://docs.python.org/3/using/windows.html > > > >It is found on the following page, to which googling "python 3.5 > >windows XP" does not point: > > > >https://docs.python.org/3.5/whatsnew/3.5.html#unsupported-operating-systems That's too bad, since that's the official place such info appears. > >Instead, the google query above returns various threads on > >stackoverflow and elsewhere where users wonder about that very > >question. > > I already asked for that, on the bug tracker but maybe I picked the wrong > issue tracker for that request. > > So now I have made one here, too. > https://github.com/python/pythondotorg/issues/867 IMO the second is the right one...although the release managers sometimes adjust the web site, I think this is a web site issue and not a release management issue. I would think that we should have "supported versions" in the 'product description' for both Windows and OSX, but IMO the current way the releases are organized on the web site does not make that easy to achieve in a way that will be useful to end users. That said, I'm not sure whether or not there is a way we could add "supported versions" to the main docs that would make sense and be useful...your bugs.python.org issue would be useful for discussing that. --David From leewangzhong+python at gmail.com Tue Dec 15 06:23:02 2015 From: leewangzhong+python at gmail.com (Franklin? Lee) Date: Tue, 15 Dec 2015 06:23:02 -0500 Subject: [Python-Dev] Third milestone of FAT Python Message-ID: On Sat, Dec 04, 2015 at 7:49 AM, Victor Stinner wrote: > Versionned dictionary > ===================== > > In the previous milestone of FAT Python, the versionned dictionary was a > new type inherited from the builtin dict type which added a __version__ > read-only (global "version" of dictionary, incremented at each change), > a getversion(key) method (version of a one key) and it added support for > weak references. I was thinking (as an alternative to versioning dicts) about a dictionary which would be able to return name/value pairs, which would also be internally used by the dictionary. This would be way less sensitive to irrelevant changes in the scope dictionary, but cost an extra pointer to each key. Here's how it would work:

pair = scope.item(name)
scope[name] = newval
assert pair.value is newval
assert pair.key is name
assert pair is scope.item(name)
# Alternatively, to only create pair objects when `item` is called,
# have `==` compare the underlying pair.

del scope[name]
assert pair.key is None    # name-dicts can't have `None` keys
assert pair.value is None  # Alternatively, pair.value is scope.NULL

This dict will allow one to hold references to its entries (with the caller promising not to change them, enforced by exceptions). You won't have to keep looking up keys (unless the name is deleted), and functions are allowed to change. For inlining, you can detect whether the function has been redefined by testing the saved pair.value against the saved function, and go into the slow path if needed (or recompile the inlining). I am not sure whether deleting from the dict and then readding the same key should reuse the pair container. I think the only potential issue for the Python version is memory use. There aren't going to be THAT many names being deleted, right?
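A rough, runnable sketch of the behaviour described above (illustrative only; Pair, NULL and ScopeDict are made-up names, and keys are assumed to be strings):

NULL = object()  # stands in for C's NULL

class Pair:
    __slots__ = ('key', 'value')
    def __init__(self, key, value):
        self.key = key
        self.value = value

class ScopeDict:
    def __init__(self):
        self._d = {}  # name -> Pair; entries are invalidated, never removed
    def item(self, key):
        pair = self._d[key]
        if pair.key is None:            # deleted entries are rejected
            raise KeyError(key)
        return pair
    def __getitem__(self, key):
        return self.item(key).value
    def __setitem__(self, key, value):
        pair = self._d.get(key)
        if pair is None:
            self._d[key] = Pair(key, value)
        else:
            pair.key, pair.value = key, value   # revives a deleted entry
    def __delitem__(self, key):
        pair = self.item(key)
        pair.key, pair.value = None, NULL       # invalidate in place

scope = ScopeDict()
scope['f'] = float
pair = scope.item('f')     # caller can hold on to the entry
del scope['f']
assert pair.key is None and pair.value is NULL

Holders of a Pair can then revalidate a cached value with one attribute load instead of a fresh key lookup.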
So I say that deleted things in the scope dict should not be removed from the inner dict. I predict that this will simplify a lot of other things, especially when deleting and readding the same name: if you save a pair, and it becomes invalid, you don't have to do another lookup to make sure that it's REALLY gone. If memory is a real concern, deleted pairs can be weakrefed (and saved in a second dict?) until they are reused. This way, pairs which aren't saved by something outside will be removed. For implementation, a Python implementation of the idea has probably already been done. Here are some details:

- set: Internally store d._d[k] = k,v.
- get: Reject k if d._d[k].key is None. (Names must be strings.)
- del: Set d._d[k].key = None and .val = d.NULL to invalidate this entry.

For the CPython version, CPython's dict already stores its entries as PyDictKeyEntry (hash, *key, *value), but those entries can move around on resizing. Two possible implementations:

1. Fork dict to store {hash, *kv_pair}.
2. Use an inner dict (like in the Python suggestion) where values are kv_pair. Write the indirection code in C, because scope dicts must be fast.

For exposing a pair to Python code, here are two possibilities:

1. Make them Python objects in the first place.
2. Keep a second hash table in lockstep with the first (so that you can do a lookup to find the index in the first, and then use that same index with the second). In this table, store pair objects that have been created. (They can be weakrefed, as before. Unless it's impossible to weakref something you're returning?) This will save memory for pairs that aren't ever exposed. If compact dictionaries are implemented, the second hash table will be a second array instead.

To use this kind of scopedict, functions would have to store a list of used names, which is memory overhead. But for what you're doing, some overhead will be necessary anyway. From leewangzhong+python at gmail.com Tue Dec 15 08:45:55 2015 From: leewangzhong+python at gmail.com (Franklin? Lee) Date: Tue, 15 Dec 2015 08:45:55 -0500 Subject: [Python-Dev] Python semantic: Is it ok to replace not x == y with x != y? (no) In-Reply-To: References: Message-ID: On Tue, Dec 15, 2015 at 8:04 AM, Victor Stinner wrote: > Is it expected that "not x.__eq__(y)" can be different than > "x.__ne__(y)"? Is it part of the Python semantic? In Numpy, `x != y` returns an array of bools, while `not x == y` creates an array of bools and then tries to convert it to a bool, which fails, because a non-singleton Numpy array is not allowed to be converted to a bool. But in the context of `if`, both `not x == y` and `x != y` will fail. From the docs, on implementing comparison: https://docs.python.org/3/reference/datamodel.html#object.__ne__ """ By default, __ne__() delegates to __eq__() and inverts the result unless it is NotImplemented. There are no other implied relationships among the comparison operators, for example, the truth of (x<y or x==y) does not imply x<=y. """ From: Vitaly Murashev Date: Tue, 15 Dec 2015 Subject: [Python-Dev] Python for android - successfully cross-compiled without patches Message-ID: A lot of talks and patches around how to cross-compile python for android ... Dear python-dev@, I just want to say thanks to all of you for the high quality cross-platform code. Using alternative Android NDK named CrystaX (home page - https://www.crystax.net ) which provides high quality posix support in comparison with google's one, we managed to cross-compile python 2.7 and 3.5 completely without any patches applied.
From olemis at gmail.com Tue Dec 15 11:53:13 2015 From: olemis at gmail.com (Olemis Lang) Date: Tue, 15 Dec 2015 11:53:13 -0500 Subject: [Python-Dev] Python for android - successfully cross-compiled without patches In-Reply-To: References: Message-ID: Wow ! Awesome ! What specific ISA version(s) and/or device(s) have you tried ? On 12/15/15, Vitaly Murashev wrote: > A lot of talks and patches around how to cross-compile python for android > ... > > Dear python-dev@, > I just want to say thanks to all of you for the high quality cross-platform > code. > > Using alternative Android NDK named CrystaX (home page - > https://www.crystax.net ) which provides high quality posix support in > comparison with google's one, we managed to cross-compile python 2.7 and 3.5 > completely without any patches applied. > -- Regards, Olemis - @olemislc Apache™ Bloodhound contributor http://issues.apache.org/bloodhound http://blood-hound.net Brython committer http://brython.info http://github.com/brython-dev/brython Blog ES: http://simelo-es.blogspot.com/ Blog EN: http://simelo-en.blogspot.com/ Featured article: From dm at crystax.net Tue Dec 15 13:42:40 2015 From: dm at crystax.net (Dmitry Moskalchuk) Date: Tue, 15 Dec 2015 18:42:40 +0000 (UTC) Subject: [Python-Dev] Python for android - successfully cross-compiled without patches References: Message-ID: Olemis Lang <olemis at gmail.com> writes: > > Wow ! Awesome ! What specific ISA version(s) and/or device(s) have you tried ? > Hi Olemis, I'm Dmitry Moskalchuk, initial author and main contributor of the CrystaX NDK. I could provide details if needed. Answering your question, I assume by ISA you mean "Instruction Set Architecture", isn't it? We've been running Python on ARMv7 (32-bit) and ARMv8 (64-bit) devices, as well as on x86 (32-bit) tablets. We'll run it on x86_64 and mips devices too with time. We'd like to include comprehensive testing of Python into the process of automatic regression testing of the CrystaX NDK, and we'd very much appreciate it if you or someone else could point us to documentation or examples of how to do that. -- Dmitry Moskalchuk From brett at python.org Tue Dec 15 14:33:29 2015 From: brett at python.org (Brett Cannon) Date: Tue, 15 Dec 2015 19:33:29 +0000 Subject: [Python-Dev] Python for android - successfully cross-compiled without patches In-Reply-To: References: Message-ID: On Tue, 15 Dec 2015 at 10:48 Dmitry Moskalchuk wrote: > Olemis Lang <olemis at gmail.com> writes: > > > > Wow ! Awesome ! What specific ISA version(s) and/or device(s) have you tried ? > > > Hi Olemis, > > I'm Dmitry Moskalchuk, initial author and main contributor of the CrystaX NDK. > I could provide details if needed. > > Answering your question, I assume by ISA you mean > "Instruction Set Architecture", isn't it? > > We've been running Python on ARMv7 (32-bit) and ARMv8 (64-bit) devices, > as well as on x86 (32-bit) tablets. We'll run it on x86_64 and mips devices > too with time. > > We'd like to include comprehensive testing of Python into the process of > automatic regression testing of the CrystaX NDK, and we'd very much appreciate > it if you or someone else could point us to documentation or examples of how > to do that. > If you want to run the CPython test suite you can look at https://docs.python.org/devguide/runtests.html .
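For reference, the devguide page above boils down to invocations like the following (a sketch; the exact flags vary by version, and test_os/test_json are just example modules):

import subprocess
import sys

# Run the whole CPython test suite (-j3: three parallel worker processes).
subprocess.check_call([sys.executable, '-m', 'test', '-j3'])

# Or run a couple of individual test modules.
subprocess.check_call([sys.executable, '-m', 'test', 'test_os', 'test_json'])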
From rwilliams at lyft.com Tue Dec 15 14:56:43 2015 From: rwilliams at lyft.com (Roy Williams) Date: Tue, 15 Dec 2015 11:56:43 -0800 Subject: [Python-Dev] async/await behavior on multiple calls Message-ID: Howdy, I'm experimenting with async/await in Python 3, and one very surprising behavior has been what happens when calling `await` twice on an Awaitable. In C#, Hack/HHVM, and the new async/await spec in Ecmascript 7, an awaitable can be awaited multiple times; in Python, calling `await` multiple times results in all future results getting back `None`. Here's a small example program:

import asyncio

async def echo_hi():
    result = ''
    echo_proc = await asyncio.create_subprocess_exec(
        'echo', 'hello', 'world',
        stdout=asyncio.subprocess.PIPE,
        stderr=asyncio.subprocess.DEVNULL)
    result = await echo_proc.stdout.read()
    await echo_proc.wait()
    return result

async def await_twice(awaitable):
    print('first time is {}'.format(await awaitable))
    print('second time is {}'.format(await awaitable))

loop = asyncio.get_event_loop()
loop.run_until_complete(await_twice(echo_hi()))

This makes writing composable APIs using async/await in Python very difficult since anything that takes an `awaitable` has to know that it wasn't already awaited. Also, since the behavior is radically different than in the other programming languages implementing async/await it makes adopting Python's flavor of async/await difficult for folks coming from a language where it's already implemented. In C#/Hack/JS calls to `await` return a Task/AwaitableHandle/Promise that can be awaited multiple times and either returns the result or throws any thrown exceptions. It doesn't appear that the Awaitable class in Python has a `result` or `exception` field but `asyncio.Future` does. Would it make sense to shift from having `await` functions return a *Future-like* return object to returning a Future? Thanks, Roy From guido at python.org Tue Dec 15 15:08:37 2015 From: guido at python.org (Guido van Rossum) Date: Tue, 15 Dec 2015 12:08:37 -0800 Subject: [Python-Dev] async/await behavior on multiple calls In-Reply-To: References: Message-ID: I think this goes back all the way to a debate we had when we were discussing PEP 380 (which introduced 'yield from', on which 'await' is built). In fact I believe that the reason PEP 380 didn't make it into Python 2.7 was that this issue was unresolved at the time (the PEP author and I preferred the current approach, but there was one vocal opponent who disagreed -- although my memory is only about 60% reliable on this :-). In any case, the problem is that in order to implement the behavior you're asking for, the generator object would have to somehow hold on to its return value so that each time __next__ is called after it has already terminated it can raise StopIteration with the saved return value. This would extend the lifetime of the returned object indefinitely (until the generator object itself is GC'ed) in order to handle a pretty obscure corner case. I don't know how long you have been using async/await, but I wonder if it's possible that you just haven't gotten used to the typical usage patterns? In particular, your claim "anything that takes an `awaitable` has to know that it wasn't already awaited" makes it sound that you're just using it in an atypical way (perhaps because your model is based on other languages).
In typical asyncio code, one does not usually take an awaitable, wait for it, and then return it -- one either awaits it and then extracts the result, or one returns it without awaiting it. On Tue, Dec 15, 2015 at 11:56 AM, Roy Williams wrote: > Howdy, > > I'm experimenting with async/await in Python 3, and one very surprising > behavior has been what happens when calling `await` twice on an Awaitable. > In C#, Hack/HHVM, and the new async/await spec in Ecmascript 7, an awaitable > can be awaited multiple times; in Python, calling `await` multiple times > results in all future results getting back `None`. Here's a small example > program:
>
> import asyncio
>
> async def echo_hi():
>     result = ''
>     echo_proc = await asyncio.create_subprocess_exec(
>         'echo', 'hello', 'world',
>         stdout=asyncio.subprocess.PIPE,
>         stderr=asyncio.subprocess.DEVNULL)
>     result = await echo_proc.stdout.read()
>     await echo_proc.wait()
>     return result
>
> async def await_twice(awaitable):
>     print('first time is {}'.format(await awaitable))
>     print('second time is {}'.format(await awaitable))
>
> loop = asyncio.get_event_loop()
> loop.run_until_complete(await_twice(echo_hi()))
>
> This makes writing composable APIs using async/await in Python very > difficult since anything that takes an `awaitable` has to know that it > wasn't already awaited. Also, since the behavior is radically different > than in the other programming languages implementing async/await it makes > adopting Python's flavor of async/await difficult for folks coming from a > language where it's already implemented. > > In C#/Hack/JS calls to `await` return a Task/AwaitableHandle/Promise that > can be awaited multiple times and either returns the result or throws any > thrown exceptions. It doesn't appear that the Awaitable class in Python > has a `result` or `exception` field but `asyncio.Future` does. > > Would it make sense to shift from having `await` functions return a > *Future-like* return object to returning a Future? > > Thanks, > Roy > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/guido%40python.org > -- --Guido van Rossum (python.org/~guido)
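Incidentally, the multiple-await semantics Roy describes from C#/Hack/JS can already be had in asyncio by wrapping the coroutine in a Task: a finished Future or Task can be awaited any number of times, returning its result (or re-raising its exception) each time. A minimal sketch, with the subprocess example replaced by a trivial coroutine:

import asyncio

async def compute():
    await asyncio.sleep(0.01)
    return 'hello world'

async def await_twice(awaitable):
    task = asyncio.ensure_future(awaitable)  # coroutine -> Task
    print('first time is {}'.format(await task))
    print('second time is {}'.format(await task))  # done Task: same result again

loop = asyncio.get_event_loop()
loop.run_until_complete(await_twice(compute()))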
From yselivanov.ml at gmail.com Tue Dec 15 15:24:52 2015 From: yselivanov.ml at gmail.com (Yury Selivanov) Date: Tue, 15 Dec 2015 15:24:52 -0500 Subject: [Python-Dev] async/await behavior on multiple calls In-Reply-To: References: Message-ID: <56707714.2080901@gmail.com> Hi Roy and Guido, On 2015-12-15 3:08 PM, Guido van Rossum wrote: [..] > > I don't know how long you have been using async/await, but I wonder if > it's possible that you just haven't gotten used to the typical usage > patterns? In particular, your claim "anything that takes an > `awaitable` has to know that it wasn't already awaited" makes it sound > that you're just using it in an atypical way (perhaps because your > model is based on other languages). In typical asyncio code, one does > not usually take an awaitable, wait for it, and then return it -- one > either awaits it and then extracts the result, or one returns it > without awaiting it. I agree. Holding a return value just so that a coroutine can return it again seems wrong to me. However, since coroutines are now a separate type (although they share a lot of code with generators internally), maybe we can change them to throw an error when they are awaited on more than one time? That should be better than letting them return `None`:

coro = coroutine()
await coro
await coro  # <- will raise RuntimeError

I'd also add a check that the coroutine isn't being awaited by more than one coroutine simultaneously (another, completely different issue, more on which here: https://github.com/python/asyncio/issues/288). This was fixed in asyncio in debug mode, but ideally, we should fix this in the interpreter core. Yury From dm at crystax.net Tue Dec 15 15:15:56 2015 From: dm at crystax.net (Dmitry Moskalchuk) Date: Tue, 15 Dec 2015 23:15:56 +0300 Subject: [Python-Dev] Python for android - successfully cross-compiled without patches In-Reply-To: References: Message-ID: <567074FC.60009@crystax.net> On 15/12/15 22:33, Brett Cannon wrote: > If you want to run the CPython test suite you can look at > https://docs.python.org/devguide/runtests.html . Thanks Brett, I'll look at it. -- Dmitry Moskalchuk From andrew.svetlov at gmail.com Tue Dec 15 15:27:25 2015 From: andrew.svetlov at gmail.com (Andrew Svetlov) Date: Tue, 15 Dec 2015 22:27:25 +0200 Subject: [Python-Dev] async/await behavior on multiple calls In-Reply-To: <56707714.2080901@gmail.com> References: <56707714.2080901@gmail.com> Message-ID: Both of Yury's suggestions sound reasonable. On Tue, Dec 15, 2015 at 10:24 PM, Yury Selivanov wrote: > Hi Roy and Guido, > > On 2015-12-15 3:08 PM, Guido van Rossum wrote: > [..] >> >> >> I don't know how long you have been using async/await, but I wonder if >> it's possible that you just haven't gotten used to the typical usage >> patterns? In particular, your claim "anything that takes an `awaitable` has >> to know that it wasn't already awaited" makes it sound that you're just >> using it in an atypical way (perhaps because your model is based on other >> languages). In typical asyncio code, one does not usually take an awaitable, >> wait for it, and then return it -- one either awaits it and then extracts >> the result, or one returns it without awaiting it. > > > I agree. Holding a return value just so that a coroutine can return it again > seems wrong to me. > > However, since coroutines are now a separate type (although they share a lot > of code with generators internally), maybe we can change them to throw an > error when they are awaited on more than one time? > > That should be better than letting them return `None`: > > coro = coroutine() > await coro > await coro # <- will raise RuntimeError > > > I'd also add a check that the coroutine isn't being awaited by more than one > coroutine simultaneously (another, completely different issue, more on which > here: https://github.com/python/asyncio/issues/288). This was fixed in > asyncio in debug mode, but ideally, we should fix this in the interpreter > core. > > Yury > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/andrew.svetlov%40gmail.com -- Thanks, Andrew Svetlov From victor.stinner at gmail.com Tue Dec 15 15:29:57 2015 From: victor.stinner at gmail.com (Victor Stinner) Date: Tue, 15 Dec 2015 21:29:57 +0100 Subject: [Python-Dev] Third milestone of FAT Python In-Reply-To: References: Message-ID: 2015-12-15 12:23 GMT+01:00 Franklin?
Lee : > I was thinking (as an alternative to versioning dicts) about a > dictionary which would be able to return name/value pairs, which would > also be internally used by the dictionary. This would be way less > sensitive to irrelevant changes in the scope dictionary, but cost an > extra pointer to each key. Do you have an estimation of the cost of the "extra pointer"? Impact on memory and CPU. dict is really a very important type for the performance of Python. If you make dict slower, I'm sure that Python overall will be slower. > del scope[name] > assert pair.key is None It looks tricky to keep the dict and the pair objects consistent, especially in terms of atomicity. You will need to keep a reference to the pair object in the dict entry, which will also make the dict larger (use more memory), right? > You won't have to keep looking up keys (unless the name is deleted), and > functions are allowed to change. For inlining, you can detect whether > the function has been redefined by testing the saved pair.value > against the saved function, and go into the slow path if needed (or > recompile the inlining). For builtin functions, I also need to detect when a key is created in the global namespace. How do you handle this case with pairs? > If memory is a real concern, deleted pairs can be weakrefed (and saved > in a second dict?) until they are reused. This way, pairs which aren't > saved by something outside will be removed. Supporting weak references also has a cost on the memory footprint... For FAT Python, not being able to detect quickly when a new key is created is a blocker point. Victor From guido at python.org Tue Dec 15 15:41:36 2015 From: guido at python.org (Guido van Rossum) Date: Tue, 15 Dec 2015 12:41:36 -0800 Subject: [Python-Dev] async/await behavior on multiple calls In-Reply-To: References: <56707714.2080901@gmail.com> Message-ID: Agreed. (But let's hear from the OP first.) On Tue, Dec 15, 2015 at 12:27 PM, Andrew Svetlov wrote: > Both of Yury's suggestions sound reasonable. > > On Tue, Dec 15, 2015 at 10:24 PM, Yury Selivanov > wrote: > > Hi Roy and Guido, > > > > On 2015-12-15 3:08 PM, Guido van Rossum wrote: > > [..] > >> > >> > >> I don't know how long you have been using async/await, but I wonder if > >> it's possible that you just haven't gotten used to the typical usage > >> patterns? In particular, your claim "anything that takes an `awaitable` > has > >> to know that it wasn't already awaited" makes it sound that you're just > >> using it in an atypical way (perhaps because your model is based on > other > >> languages). In typical asyncio code, one does not usually take an > awaitable, > >> wait for it, and then return it -- one either awaits it and then > extracts > >> the result, or one returns it without awaiting it. > > > > > > I agree. Holding a return value just so that a coroutine can return it > again > > seems wrong to me. > > > > However, since coroutines are now a separate type (although they share a > lot > > of code with generators internally), maybe we can change them to throw an > > error when they are awaited on more than one time? > > > > That should be better than letting them return `None`: > > > > coro = coroutine() > > await coro > > await coro # <- will raise RuntimeError > > > > > > I'd also add a check that the coroutine isn't being awaited by more than > one > > coroutine simultaneously (another, completely different issue, more on > which > > here: https://github.com/python/asyncio/issues/288).
This was fixed in > > asyncio in debug mode, but ideally, we should fix this in the interpreter > > core. > > > > Yury > > _______________________________________________ > > Python-Dev mailing list > > Python-Dev at python.org > > https://mail.python.org/mailman/listinfo/python-dev > > Unsubscribe: > > > https://mail.python.org/mailman/options/python-dev/andrew.svetlov%40gmail.com > > > > -- > Thanks, > Andrew Svetlov > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/guido%40python.org > -- --Guido van Rossum (python.org/~guido) -------------- next part -------------- An HTML attachment was scrubbed... URL: From leewangzhong+python at gmail.com Tue Dec 15 16:10:59 2015 From: leewangzhong+python at gmail.com (Franklin? Lee) Date: Tue, 15 Dec 2015 16:10:59 -0500 Subject: [Python-Dev] Third milestone of FAT Python In-Reply-To: References: Message-ID: More thoughts. (Stealing your style of headers.) Just store a pointer to value ============================= Instead of having the inner dict store k_v pairs. In C, the values in our hash tables will be: struct refcell{ PyObject *value; // NULL if deleted }; It's not necessary to store the key. I think I only had it so I could mark it None in the Python implementation, to denote a deleted key. But a deleted entry could just have `cell.value is ScopeDict.NULL` (C: cell.value == NULL). The scope dict will own all values which don't have exposed references, and all exposed references (which own the value they reference). (Alternatively, store the value directly in the hash table. If something asks for a reference to it, replace the value with a PyObject that refers to it, and flag that entry in the auxilary hash table.) This might be what PyCellObject is, which is how I realized that I didn't need the key: https://docs.python.org/3.5/c-api/cell.html Deleting from scope =================== When deleting a key, don't remove the key from the inner dict, and just set the reference to NULL. Outside code should never get the raw C `refcell`, only a Python object. This makes it possible to clean up unused references when the dict expands or contracts: for each `refcell`, if it has no Pair object or its Pair object is not referenced by anything else, and if its value is NULL (i.e. deleted), don't store it in the new hash table. Get pairs before their keys are defined ======================================= When the interpreter compiles a function, it can request references which _don't exist yet_. The scope dict would simply create the entry in its inner dict and fill it in when needed. That means that each name only needs to be looked up when a function is created. scope = Scopedict() f = scope.ref('f') scope['f'] = float f.value('NaN') This would be a memory issue if many functions are created with typo'd names. But - You're not making a gigantic amount of functions in the first place. - You'll eventually remove these unused entries when you resize the inner dict. (See previous section.) I was concerned about which scope would be responsible for creating the entry, but it turns out that if you use a name in a function, every use of that name has to be for the same scope. 
So the following causes a NameError:

    def f():
        def g(x):
            return abs(x)
        for i in range(30):
            print(g(i))
            if i == 10:
                def abs(x):
                    return "abs" + str(x)
            if i == 20:
                del abs

and you can tell which scope to ask for the reference during function
compilation.

Does this simplify closures?
============================

(I haven't yet looked at Python's closure implementation.)

A refcell (C struct) will be exposed as a RefCell (PyObject), which
owns it. This means that RefCell is reference-counted, and if
something saved a reference to it, it will persist even after its
owning dict is deleted.

Thus, when a scope dict is deleted, each refcell without a RefCell
object is deleted (and its value is DecRef'd), and the other ones just
have their RefCell object decrement a reference.

Then closures are free: each inner function has refcounted references
to the cells that it uses, and it doesn't need to know whether its
parent is alive.

(The implementation of closures involves cell objects.)

Overhead
========

If inner functions are being created a lot, that's extra work. But I
guess you should expect a lot of overhead if you're doing such a
thing.

Readonly refs
=============

It might be desirable to have refs that are allowed to write (e.g.
from `global` and `nonlocal`) and refs that aren't. The CellObject
would just hold a count for the number of writing refs, separate from
the number of refs.

This might result in some optimizations for values with no writing
refs. For example, it's possible to implement copying of dicts as a
shallow copy if it's KNOWN that any modification would result in a
call to its set/del functions, which would initiate a deep copy.

On Tue, Dec 15, 2015 at 3:29 PM, Victor Stinner wrote:
> 2015-12-15 12:23 GMT+01:00 Franklin? Lee :
>> I was thinking (as an alternative to versioning dicts) about a
>> dictionary which would be able to return name/value pairs, which would
>> also be internally used by the dictionary. This would be way less
>> sensitive to irrelevant changes in the scope dictionary, but cost an
>> extra pointer to each key.
>
> Do you have an estimation of the cost of the "extra pointer"? Impact
> on memory and CPU. dict is really a very important type for the
> performance of Python. If you make dict slower, I'm sure that Python
> overall will be slower.

I'm proposing it as a subclass.

> It looks tricky to keep the dict and the pair objects consistent,
> especially in terms of atomicity. You will need to keep a reference
> to the pair object in the dict entry, which will also make the dict
> larger (use more memory), right?

Yes, but it will be about 25% bigger than the underlying dict's
tables. You store an extra pointer, while the underlying tables are
(hash, key, value), which is a 64-bit value and two 32-bit values. If
Python moves to a compact dict implementation, it will still be 25%
bigger, because the secondary table will be kept in lockstep with the
compact array instead of the sparse array.

>> You won't have to keep looking up keys (unless the name is deleted), and
>> functions are allowed to change. For inlining, you can detect whether
>> the function has been redefined by testing the saved pair.value
>> against the saved function, and go into the slow path if needed (or
>> recompile the inlining).
>
> For builtin functions, I also need to detect when a key is created in
> the global namespace. How do you handle this case with pairs?

(I realized you don't need to keep a key, so I threw away pairs.)
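Since the pieces above are scattered, here is a minimal pure-Python
sketch of the refcell idea so far (an illustration only: the names
ScopeDict and RefCell are assumptions, and the real proposal keeps the
cells in the C hash table rather than in a second Python dict):

    class RefCell:
        """Holds the current value bound to one name.
        A value of None stands in for the C-level NULL (deleted)."""
        __slots__ = ('value',)

        def __init__(self, value=None):
            self.value = value

    class ScopeDict(dict):
        """Toy scope dict handing out cells that follow rebinding."""

        def __init__(self, *args, **kwargs):
            super().__init__(*args, **kwargs)
            self._cells = {}

        def ref(self, name):
            # May be called before the name is bound at all.
            cell = self._cells.get(name)
            if cell is None:
                cell = self._cells[name] = RefCell(super().get(name))
            return cell

        def __setitem__(self, name, value):
            super().__setitem__(name, value)
            cell = self._cells.get(name)
            if cell is not None:
                cell.value = value

        def __delitem__(self, name):
            super().__delitem__(name)
            cell = self._cells.get(name)
            if cell is not None:
                cell.value = None    # deleted, but the cell survives

    scope = ScopeDict()
    f = scope.ref('f')          # reference exists before 'f' is defined
    scope['f'] = float
    assert f.value is float     # the cell always sees the current binding
    del scope['f']
    assert f.value is None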
Instead of detecting key insertion, I allow the creation of references before there's anything to reference. In other words, when a function is created that uses a name which isn't yet defined, an entry (to NULL) is created in the scope's inner dict, and a Python object for that entry. I really think this is a good approach to that problem. In the case of global versus __builtin__, the entry would be created in the globals() dict, which initially points to __builtin__'s entry. This would require a double dereference, but I think there's no other way to have nested ambiguous scoping like that (where you don't know where you're looking it up until you need it). If there is, the Python object can hold a "number of indirections". This would allow passing through to __builtin__, but still allow saving CellRefs to Python variables. On deletion, it would re-look-up the builtin version. If you repeatedly create and delete `map` in module scope, it would have to keep looking up the reference to the builtin when you delete. But if you're repeatedly deleting and reusing the same name, you're kind of being a jerk. This can be solved with more overhead. Alternatively, module scope dicts can be even more special, and hold a pair of references: one to the module scope *value*, one to the __builtin__ *reference*. So for a module scope reference, it will try to return its own value, and if it's NULL, it will ask the builtin reference to try. This means each module has the overhead of its own names plus the overhead of referring to module names, and builtins will have a name for... every single module's names. Eh. I'd rather just punish the jerk. In fact, don't have globals() save a reference to __builtin__'s entry unless it exists at some point. `__builtins__.__dict__.ref("argargarg", create_if_none=False) => None`. From victor.stinner at gmail.com Tue Dec 15 17:38:11 2015 From: victor.stinner at gmail.com (Victor Stinner) Date: Tue, 15 Dec 2015 23:38:11 +0100 Subject: [Python-Dev] Third milestone of FAT Python In-Reply-To: References: Message-ID: 2015-12-15 22:10 GMT+01:00 Franklin? Lee : > (Stealing your style of headers.) I'm using reStructured Text, it's not really a new style :-) > Overhead > ======== > > If inner functions are being created a lot, that's extra work. But I > guess you should expect a lot of overhead if you're doing such a > thing. Sorry, I didn't read carefully your email, but I don't think that it's acceptable to make Python namespaces slower. In FAT mode, we need versionned dictionaries for module namespace, type namespace, global namespace, etc. >> Do you have an estimation of the cost of the "extra pointer"? Impact >> on memory and CPU. dict is really a very important type for the >> performance of Python. If you make dict slower, I'm sure that Python >> overall will be slower. > > I'm proposing it as a subclass. Please read the "Versionned dictionary" section of my email: https://mail.python.org/pipermail/python-dev/2015-December/142397.html I explained why using a subclass doesn't work in practice. Victor From kevinjacobconway at gmail.com Tue Dec 15 18:35:42 2015 From: kevinjacobconway at gmail.com (Kevin Conway) Date: Tue, 15 Dec 2015 23:35:42 +0000 Subject: [Python-Dev] async/await behavior on multiple calls In-Reply-To: References: <56707714.2080901@gmail.com> Message-ID: I think there may be somewhat of a language barrier here. OP appears to be mixing the terms of coroutines and futures. The behavior OP describes is that of promised or async tasks in other languages. 
Consider a JS promise that has been resolved: promise.then(function (value) {...}); promise.then(function (value) {...}); Both of the above will execute the callback function with the resolved value regardless of how much earlier the promise was resolved. This is not entirely different from how Futures work in Python when using 'add_done_callback'. The code example from OP, however, is showing the behaviour of awaiting a coroutine twice rather than awaiting a Future twice. Both objects are awaitable but both exhibit different behaviour when awaited multiple times. A scenario I believe deserves a test is what happens in the asyncio coroutine scheduler when a promise is awaited multiple times. The current __await__ behaviour is to return self only when not done and then to return the value after resolution for each subsequent await. The Task, however, requires that it must be a Future emitted from the coroutine and not a primitive value. Awaiting a resolved future should result On Tue, Dec 15, 2015, 14:44 Guido van Rossum wrote: > Agreed. (But let's hear from the OP first.) > > On Tue, Dec 15, 2015 at 12:27 PM, Andrew Svetlov > wrote: > >> Both Yury's suggestions sounds reasonable. >> >> On Tue, Dec 15, 2015 at 10:24 PM, Yury Selivanov >> wrote: >> > Hi Roy and Guido, >> > >> > On 2015-12-15 3:08 PM, Guido van Rossum wrote: >> > [..] >> >> >> >> >> >> I don't know how long you have been using async/await, but I wonder if >> >> it's possible that you just haven't gotten used to the typical usage >> >> patterns? In particular, your claim "anything that takes an >> `awaitable` has >> >> to know that it wasn't already awaited" makes me sound that you're just >> >> using it in an atypical way (perhaps because your model is based on >> other >> >> languages). In typical asyncio code, one does not usually take an >> awaitable, >> >> wait for it, and then return it -- one either awaits it and then >> extracts >> >> the result, or one returns it without awaiting it. >> > >> > >> > I agree. Holding a return value just so that coroutine can return it >> again >> > seems wrong to me. >> > >> > However, since coroutines are now a separate type (although they share >> a lot >> > of code with generators internally), maybe we can change them to throw >> an >> > error when they are awaited on more than one time? >> > >> > That should be better than letting them return `None`: >> > >> > coro = coroutine() >> > await coro >> > await coro # <- will raise RuntimeError >> > >> > >> > I'd also add a check that the coroutine isn't being awaited by more >> than one >> > coroutine simultaneously (another, completely different issue, more on >> which >> > here: https://github.com/python/asyncio/issues/288). This was fixed in >> > asyncio in debug mode, but ideally, we should fix this in the >> interpreter >> > core. 
>> > >> > Yury >> > _______________________________________________ >> > Python-Dev mailing list >> > Python-Dev at python.org >> > https://mail.python.org/mailman/listinfo/python-dev >> > Unsubscribe: >> > >> https://mail.python.org/mailman/options/python-dev/andrew.svetlov%40gmail.com >> >> >> >> -- >> Thanks, >> Andrew Svetlov >> > _______________________________________________ >> Python-Dev mailing list >> Python-Dev at python.org >> https://mail.python.org/mailman/listinfo/python-dev >> > Unsubscribe: >> https://mail.python.org/mailman/options/python-dev/guido%40python.org >> > > > > -- > --Guido van Rossum (python.org/~guido) > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/kevinjacobconway%40gmail.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rwilliams at lyft.com Tue Dec 15 19:39:06 2015 From: rwilliams at lyft.com (Roy Williams) Date: Tue, 15 Dec 2015 16:39:06 -0800 Subject: [Python-Dev] async/await behavior on multiple calls In-Reply-To: References: Message-ID: Thanks for the insight Guido. I've mostly used async/await inside of HHVM/Hack, and used Guava/Java Futures extensively in the past so I found this behavior to be quite surprising. I'd like to use Awaitables to represent a DAG of work that needs to get done. For example, I used to be one of the maintainers of Buck (a build tool similar to Bazel) and we used a collection of futures for building all of our dependencies. For each rule, we'd effectively: dependency_results = await asyncio.gather(*dependencies) # Proceed with building. Rules were free to depend on the same dependency and since the Future would just return the same result when resolved more than one time things just worked. Similarly when building up the results for say a web request, I effectively want to construct a DAG of work that needs to get done and then just await on that DAG in a similar manner without having to enforce that the DAG is actually a tree. I can of course write a function to wrap everything in Futures, but this seems to be against the spirit of async/await. Thanks, Roy On Tue, Dec 15, 2015 at 12:08 PM, Guido van Rossum wrote: > I think this goes back all the way to a debate we had when we were > discussing PEP 380 (which introduced 'yield from', on which 'await' is > built). In fact I believe that the reason PEP 380 didn't make it into > Python 2.7 was that this issue was unresolved at the time (the PEP author > and I preferred the current approach, but there was one vocal opponent who > disagreed -- although my memory is only about 60% reliable on this :-). > > In any case, problem is that in order to implement the behavior you're > asking for, the generator object would have to somehow hold on to its > return value so that each time __next__ is called after it has already > terminated it can raise StopIteration with the saved return value. This > would extend the lifetime of the returned object indefinitely (until the > generator object itself is GC'ed) in order to handle a pretty obscure > corner case. > > I don't know how long you have been using async/await, but I wonder if > it's possible that you just haven't gotten used to the typical usage > patterns? 
In particular, your claim "anything that takes an `awaitable` has > to know that it wasn't already awaited" makes me sound that you're just > using it in an atypical way (perhaps because your model is based on other > languages). In typical asyncio code, one does not usually take an > awaitable, wait for it, and then return it -- one either awaits it and then > extracts the result, or one returns it without awaiting it. > > On Tue, Dec 15, 2015 at 11:56 AM, Roy Williams wrote: > >> Howdy, >> >> I'm experimenting with async/await in Python 3, and one very surprising >> behavior has been what happens when calling `await` twice on an Awaitable. >> In C#, Hack/HHVM, and the new async/await spec in Ecmascript 7. In Python, >> calling `await` multiple times results in all future results getting back >> `None`. Here's a small example program: >> >> >> async def echo_hi(): >> result = '' >> echo_proc = await asyncio.create_subprocess_exec( >> 'echo', 'hello', 'world', >> stdout=asyncio.subprocess.PIPE, >> stderr=asyncio.subprocess.DEVNULL) >> result = await echo_proc.stdout.read() >> await echo_proc.wait() >> return result >> >> async def await_twice(awaitable): >> print('first time is {}'.format(await awaitable)) >> print('second time is {}'.format(await awaitable)) >> >> loop = asyncio.get_event_loop() >> loop.run_until_complete(await_twice(echo_hi())) >> >> This makes writing composable APIs using async/await in Python very >> difficult since anything that takes an `awaitable` has to know that it >> wasn't already awaited. Also, since the behavior is radically different >> than in the other programming languages implementing async/await it makes >> adopting Python's flavor of async/await difficult for folks coming from a >> language where it's already implemented. >> >> In C#/Hack/JS calls to `await` return a Task/AwaitableHandle/Promise that >> can be awaited multiple times and either returns the result or throws any >> thrown exceptions. It doesn't appear that the Awaitable class in Python >> has a `result` or `exception` field but `asyncio.Future` does. >> >> Would it make sense to shift from having `await` functions return a ` >> *Future-like`* return object to returning a Future? >> >> Thanks, >> Roy >> >> >> >> _______________________________________________ >> Python-Dev mailing list >> Python-Dev at python.org >> https://mail.python.org/mailman/listinfo/python-dev >> Unsubscribe: >> https://mail.python.org/mailman/options/python-dev/guido%40python.org >> >> > > > -- > --Guido van Rossum (python.org/~guido) > -------------- next part -------------- An HTML attachment was scrubbed... URL: From guido at python.org Tue Dec 15 19:57:33 2015 From: guido at python.org (Guido van Rossum) Date: Tue, 15 Dec 2015 16:57:33 -0800 Subject: [Python-Dev] async/await behavior on multiple calls In-Reply-To: References: Message-ID: On Tue, Dec 15, 2015 at 4:39 PM, Roy Williams wrote: > Thanks for the insight Guido. > > I've mostly used async/await inside of HHVM/Hack, and used Guava/Java > Futures extensively in the past so I found this behavior to be quite > surprising. I'd like to use Awaitables to represent a DAG of work that > needs to get done. For example, I used to be one of the maintainers of > Buck (a build tool similar to Bazel) and we used a collection of futures > for building all of our dependencies. For each rule, we'd effectively: > > dependency_results = await asyncio.gather(*dependencies) > # Proceed with building. 
> > Rules were free to depend on the same dependency and since the Future > would just return the same result when resolved more than one time things > just worked. > > Similarly when building up the results for say a web request, I > effectively want to construct a DAG of work that needs to get done and then > just await on that DAG in a similar manner without having to enforce that > the DAG is actually a tree. I can of course write a function to wrap > everything in Futures, but this seems to be against the spirit of > async/await. > Why would that be against the spirit? It's the only thing that will work the way you're asking, and there is in fact already a function that does this (asyncio.ensure_future()). > Thanks, > Roy > > On Tue, Dec 15, 2015 at 12:08 PM, Guido van Rossum > wrote: > >> I think this goes back all the way to a debate we had when we were >> discussing PEP 380 (which introduced 'yield from', on which 'await' is >> built). In fact I believe that the reason PEP 380 didn't make it into >> Python 2.7 was that this issue was unresolved at the time (the PEP author >> and I preferred the current approach, but there was one vocal opponent who >> disagreed -- although my memory is only about 60% reliable on this :-). >> >> In any case, problem is that in order to implement the behavior you're >> asking for, the generator object would have to somehow hold on to its >> return value so that each time __next__ is called after it has already >> terminated it can raise StopIteration with the saved return value. This >> would extend the lifetime of the returned object indefinitely (until the >> generator object itself is GC'ed) in order to handle a pretty obscure >> corner case. >> >> I don't know how long you have been using async/await, but I wonder if >> it's possible that you just haven't gotten used to the typical usage >> patterns? In particular, your claim "anything that takes an `awaitable` has >> to know that it wasn't already awaited" makes me sound that you're just >> using it in an atypical way (perhaps because your model is based on other >> languages). In typical asyncio code, one does not usually take an >> awaitable, wait for it, and then return it -- one either awaits it and then >> extracts the result, or one returns it without awaiting it. >> >> On Tue, Dec 15, 2015 at 11:56 AM, Roy Williams >> wrote: >> >>> Howdy, >>> >>> I'm experimenting with async/await in Python 3, and one very surprising >>> behavior has been what happens when calling `await` twice on an Awaitable. >>> In C#, Hack/HHVM, and the new async/await spec in Ecmascript 7. In Python, >>> calling `await` multiple times results in all future results getting back >>> `None`. Here's a small example program: >>> >>> >>> async def echo_hi(): >>> result = '' >>> echo_proc = await asyncio.create_subprocess_exec( >>> 'echo', 'hello', 'world', >>> stdout=asyncio.subprocess.PIPE, >>> stderr=asyncio.subprocess.DEVNULL) >>> result = await echo_proc.stdout.read() >>> await echo_proc.wait() >>> return result >>> >>> async def await_twice(awaitable): >>> print('first time is {}'.format(await awaitable)) >>> print('second time is {}'.format(await awaitable)) >>> >>> loop = asyncio.get_event_loop() >>> loop.run_until_complete(await_twice(echo_hi())) >>> >>> This makes writing composable APIs using async/await in Python very >>> difficult since anything that takes an `awaitable` has to know that it >>> wasn't already awaited. 
Also, since the behavior is radically different >>> than in the other programming languages implementing async/await it makes >>> adopting Python's flavor of async/await difficult for folks coming from a >>> language where it's already implemented. >>> >>> In C#/Hack/JS calls to `await` return a Task/AwaitableHandle/Promise >>> that can be awaited multiple times and either returns the result or throws >>> any thrown exceptions. It doesn't appear that the Awaitable class in >>> Python has a `result` or `exception` field but `asyncio.Future` does. >>> >>> Would it make sense to shift from having `await` functions return a ` >>> *Future-like`* return object to returning a Future? >>> >>> Thanks, >>> Roy >>> >>> >>> >>> _______________________________________________ >>> Python-Dev mailing list >>> Python-Dev at python.org >>> https://mail.python.org/mailman/listinfo/python-dev >>> Unsubscribe: >>> https://mail.python.org/mailman/options/python-dev/guido%40python.org >>> >>> >> >> >> -- >> --Guido van Rossum (python.org/~guido) >> > > -- --Guido van Rossum (python.org/~guido) -------------- next part -------------- An HTML attachment was scrubbed... URL: From leewangzhong+python at gmail.com Tue Dec 15 19:59:47 2015 From: leewangzhong+python at gmail.com (Franklin? Lee) Date: Tue, 15 Dec 2015 19:59:47 -0500 Subject: [Python-Dev] Third milestone of FAT Python In-Reply-To: References: Message-ID: I realized yet another thing, which will reduce overhead: the original array can store values directly, and you maintain the refs by repeatedly updating them when moving refs around. RefCells will point to a pointer to the value cell (which already exists in the table). - `getitem` will be almost the same as a normal dict: it has to check if value is valid (which it already checked, but it will be invalid a lot more often). - `setitem` the same as a normal dict (since the RefCells will just point to the _address_ of the value pointer in the main table), except that the dict will be bigger, and compaction/expansion has more overhead. No creation of refcells here. - `delitem` will just null the value pointer, which shouldn't cost much more, if it doesn't cost less. - Expansion and compaction will cost more, since we have to copy over the RefCell pointers and also check whether they should be copied. - Deletion of the dict will cost more, due to the additional logic for deciding what to delete, and the RefCells can no longer point into the entry table. They would have to point at the value (requiring logic, or the replacement of a function pointer) or at a new allocated object (requiring an allocation of sizeof(PyObject*) bytes). On Tue, Dec 15, 2015 at 5:38 PM, Victor Stinner wrote: > Sorry, I didn't read carefully your email, but I don't think that it's > acceptable to make Python namespaces slower. In FAT mode, we need > versionned dictionaries for module namespace, type namespace, global > namespace, etc. It was actually more "it might be a problem" than "it will be a problem". I don't know if the overhead will be high enough to worry about. It might be dominated by whatever savings there would be by not having to look up names more than once. (Unless builtins get mixed with globals? I think that's solvable, though. It's just that the solutions I can think of have different tradeoffs.) I am confident that the time overhead and the savings will beat the versioning dict. 
The versioning dict method has to save a reference to the variable value and a reference to the name, and regularly test whether the dict has changed. This method only has to save a reference to a reference to the value (though it might need the name to allow debugging), doesn't care if it's changed, will be an identity (to NULL?) test if it's deleted (and only if it's not replaced after), and absolutely doesn't care if the dict had other updates (which might increase the version number). >>> Do you have an estimation of the cost of the "extra pointer"? Impact >>> on memory and CPU. dict is really a very important type for the >>> performance of Python. If you make dict slower, I'm sure that Python >>> overall will be slower. >> >> I'm proposing it as a subclass. > > Please read the "Versionned dictionary" section of my email: > https://mail.python.org/pipermail/python-dev/2015-December/142397.html > > I explained why using a subclass doesn't work in practice. I've read it again. By subclass, I mean that it implements the same interface. But at the C level, I want to have it be a fork(?) of the current dict implementation. As for `exec`, I think it might be okay for it to be slower at the early stages of this game. Here's the lookup function for a string-only dict (used both for setting and getting): https://github.com/python/cpython/blob/master/Objects/dictobject.c#L443 I want to break that up into two parts: - Figure out the index of the {hash, *key, *val} entry in the array. - Do whatever to it. (In the original: make *value_addr point to the value pointer.) I want to do this so that I can use that index to point into ANOTHER array, which will be the metadata for the refcells (whatever it ends up being). This will mean that there's no second lookup. This has to be done at the C level, because the dict object doesn't expose the index of the {hash, *key, *val} entries on lookup. If you don't want to make it a subclass, then we can propose a new function `get_ref` (or something) for dict's C API (probably a hard sell), which returns RefCell objects, and an additional pointer in `dict` to the RefCells table (so a total of two pointers). When `get_ref` is first called, it will - calloc the RefCell table (which will be the same length as the entry table) - replace all of the dict's functions with ones that know how to deal with the RefCells, - replace itself with a function that knows how to return these refs. - call its replacement. If the dict never gets RefCells, you only pay a few pointers in size, and a few creation/deletion values. This is possible now that the dictionary itself will store values as normal. There might be more necessary. For example, the replaced functions might need to keep pointers to their originals (so that you can slip additional deep C subclasses in). And it might be nice if the `get_index` function could be internally relied upon by the C-level subclasses, because "keeping a metadata table index-synchronized with the real one" is something I've wanted to do for two different dict subclasses now. From rwilliams at lyft.com Tue Dec 15 20:29:26 2015 From: rwilliams at lyft.com (Roy Williams) Date: Tue, 15 Dec 2015 17:29:26 -0800 Subject: [Python-Dev] async/await behavior on multiple calls In-Reply-To: References: <56707714.2080901@gmail.com> Message-ID: @Kevin correct, that's the point I'd like to discuss. 
Most other mainstream languages that implement async/await expose the
programming model with Tasks/Futures/Promises as opposed to
coroutines. PEP 492 states 'Objects with __await__ method are called
Future-like objects in the rest of this PEP.', but their behavior
differs from that of Futures in this core way. Given that most other
languages have standardized around async returning a Future as opposed
to a coroutine, I think it's worth exploring why Python differs.

There's a lot of benefits to making the programming model coroutines
without a doubt. It's absolutely brilliant that I can just call code
annotated with @asyncio.coroutine and have it just work. Code using
the old @asyncio.coroutine/yield from syntax should absolutely stay
the same. Similarly, since ES7 async/await is backed by Promises
it'll just work for any existing code out there using Promises.

My proposal would be to automatically wrap the return value from an
`async` function or any object implementing `__await__` in a future
with `asyncio.ensure_future()`. This would allow async/await code to
behave in a similar manner to other languages implementing async/await
and would remain compatible with existing code using asyncio.

What are your thoughts?

Thanks,
Roy

On Tue, Dec 15, 2015 at 3:35 PM, Kevin Conway
wrote:

> I think there may be somewhat of a language barrier here. OP appears to be
> mixing the terms of coroutines and futures. The behavior OP describes is
> that of promised or async tasks in other languages.
>
> Consider a JS promise that has been resolved:
>
> promise.then(function (value) {...});
>
> promise.then(function (value) {...});
>
> Both of the above will execute the callback function with the resolved
> value regardless of how much earlier the promise was resolved. This is not
> entirely different from how Futures work in Python when using
> 'add_done_callback'.
>
> The code example from OP, however, is showing the behaviour of awaiting a
> coroutine twice rather than awaiting a Future twice. Both objects are
> awaitable but both exhibit different behaviour when awaited multiple times.
>
> A scenario I believe deserves a test is what happens in the asyncio
> coroutine scheduler when a promise is awaited multiple times. The current
> __await__ behaviour is to return self only when not done and then to return
> the value after resolution for each subsequent await. The Task, however,
> requires that it must be a Future emitted from the coroutine and not a
> primitive value. Awaiting a resolved future should result
>
> On Tue, Dec 15, 2015, 14:44 Guido van Rossum wrote:
>
>> Agreed. (But let's hear from the OP first.)
>>
>> On Tue, Dec 15, 2015 at 12:27 PM, Andrew Svetlov <
>> andrew.svetlov at gmail.com> wrote:
>>
>>> Both Yury's suggestions sounds reasonable.
>>>
>>> On Tue, Dec 15, 2015 at 10:24 PM, Yury Selivanov
>>> wrote:
>>> > Hi Roy and Guido,
>>> >
>>> > On 2015-12-15 3:08 PM, Guido van Rossum wrote:
>>> > [..]
>>> >>
>>> >> I don't know how long you have been using async/await, but I wonder if
>>> >> it's possible that you just haven't gotten used to the typical usage
>>> >> patterns? In particular, your claim "anything that takes an `awaitable` has
>>> >> to know that it wasn't already awaited" makes me sound that you're just
>>> >> using it in an atypical way (perhaps because your model is based on other
>>> >> languages).
In typical asyncio code, one does not usually take an >>> awaitable, >>> >> wait for it, and then return it -- one either awaits it and then >>> extracts >>> >> the result, or one returns it without awaiting it. >>> > >>> > >>> > I agree. Holding a return value just so that coroutine can return it >>> again >>> > seems wrong to me. >>> > >>> > However, since coroutines are now a separate type (although they share >>> a lot >>> > of code with generators internally), maybe we can change them to throw >>> an >>> > error when they are awaited on more than one time? >>> > >>> > That should be better than letting them return `None`: >>> > >>> > coro = coroutine() >>> > await coro >>> > await coro # <- will raise RuntimeError >>> > >>> > >>> > I'd also add a check that the coroutine isn't being awaited by more >>> than one >>> > coroutine simultaneously (another, completely different issue, more on >>> which >>> > here: https://github.com/python/asyncio/issues/288). This was fixed >>> in >>> > asyncio in debug mode, but ideally, we should fix this in the >>> interpreter >>> > core. >>> > >>> > Yury >>> > _______________________________________________ >>> > Python-Dev mailing list >>> > Python-Dev at python.org >>> > https://mail.python.org/mailman/listinfo/python-dev >>> > Unsubscribe: >>> > >>> https://mail.python.org/mailman/options/python-dev/andrew.svetlov%40gmail.com >>> >>> >>> >>> -- >>> Thanks, >>> Andrew Svetlov >>> >> _______________________________________________ >>> Python-Dev mailing list >>> Python-Dev at python.org >>> https://mail.python.org/mailman/listinfo/python-dev >>> >> Unsubscribe: >>> https://mail.python.org/mailman/options/python-dev/guido%40python.org >>> >> >> >> >> -- >> --Guido van Rossum (python.org/~guido) >> _______________________________________________ >> Python-Dev mailing list >> Python-Dev at python.org >> https://mail.python.org/mailman/listinfo/python-dev >> Unsubscribe: >> https://mail.python.org/mailman/options/python-dev/kevinjacobconway%40gmail.com >> > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/rwilliams%40lyft.com > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From barry at python.org Tue Dec 15 20:41:37 2015 From: barry at python.org (Barry Warsaw) Date: Tue, 15 Dec 2015 20:41:37 -0500 Subject: [Python-Dev] async/await behavior on multiple calls In-Reply-To: References: <56707714.2080901@gmail.com> Message-ID: <20151215204137.12761c11@anarchist.wooz.org> On Dec 15, 2015, at 05:29 PM, Roy Williams wrote: >@Kevin correct, that's the point I'd like to discuss. Most other >mainstream languages that implements async/await expose the programming >model with Tasks/Futures/Promises as opposed to coroutines PEP 492 states >'Objects with __await__ method are called Future-like objects in the rest >of this PEP.' but their behavior differs from that of Futures in this core >way. Given that most other languages have standardized around async >returning a Future as opposed to a coroutine I think it's worth exploring >why Python differs. I'll just note something I've mentioned before, when a bunch of us sprinted on an asyncio based smtp server. The asyncio library documentation *really* needs a good overview and/or tutorial. 
These are difficult concepts to understand and it seems like bringing experience from other languages may not help (and may even hinder) understanding of Python's model. After a while, you get it, but I think it would be good to help folks get there sooner, especially if you're new to the whole area. Maybe those of you who have been steeped in asyncio for a long time could write that up? I don't think I'm the right person to do that, but I'd be very happy to review it. Cheers, -Barry From abarnert at yahoo.com Tue Dec 15 21:13:29 2015 From: abarnert at yahoo.com (Andrew Barnert) Date: Tue, 15 Dec 2015 18:13:29 -0800 Subject: [Python-Dev] async/await behavior on multiple calls In-Reply-To: References: <56707714.2080901@gmail.com> Message-ID: <74191872-F2CB-48AB-A1C7-422410179E82@yahoo.com> On Dec 15, 2015, at 17:29, Roy Williams wrote: > > My proposal would be to automatically wrap the return value from an `async` function or any object implementing `__await__` in a future with `asyncio.ensure_future()`. This would allow async/await code to behave in a similar manner to other languages implementing async/await and would remain compatible with existing code using asyncio. Two questions: Is it possible (and at all reasonable) to write code that actually depends on getting raw coroutines from async? If not, is there any significant performance impact for code that works with raw coroutines and doesn't need real futures to get them wrapped in futures anyway? From yselivanov.ml at gmail.com Tue Dec 15 22:33:13 2015 From: yselivanov.ml at gmail.com (Yury Selivanov) Date: Tue, 15 Dec 2015 22:33:13 -0500 Subject: [Python-Dev] async/await behavior on multiple calls In-Reply-To: References: <56707714.2080901@gmail.com> Message-ID: <5670DB79.40502@gmail.com> Roy, On 2015-12-15 8:29 PM, Roy Williams wrote: [..] > > My proposal would be to automatically wrap the return value from an > `async` function or any object implementing `__await__` in a future > with `asyncio.ensure_future()`. This would allow async/await code to > behave in a similar manner to other languages implementing async/await > and would remain compatible with existing code using asyncio. > > What's your thoughts? Other languages, such as JavaScript, have a notion of event loop integrated on a very deep level. In Python, there is no centralized event loop, and asyncio is just one way of implementing one. In asyncio, Future objects are designed to inter-operate with an event loop (that's also true for JS Promises), which means that in order to automatically wrap Python coroutines in Futures, we'd have to define the event loop deep in Python core. Otherwise it's impossible to implement 'Future.add_done_callback', since there would be nothing that calls the callbacks on completion. To avoid adding a built-in event loop, PEP 492 introduced coroutines as an abstract language concept. David Beazley, for instance, doesn't like Futures, and his new framework 'curio' does not have them at all. I highly doubt that we want to add a generalized event loop in Python core, define a generalized Future interface, and make coroutines return it. It's simply too much work with no clear wins. Now, your initial email highlights another problem: coro = coroutine() print(await coro) # will print the result of coroutine await coro # prints None This is a bug that needs to be fixed. We have two options: 1. Cache the result when the coroutine object is awaited first time. Return the cached result when the coroutine object is awaited again. 2. 
Raise an error if the coroutine object is awaited more than once. The (1) option would solve your problem. But it also introduces new complexity: the GC of result will be delayed; more importantly, some users will wonder if we cache the result or run the coroutine again. It's just not obvious. The (2) option is Pythonic and simple to understand/debug, IMHO. In this case, the best way for you to solve your initial problem, would be to have a decorator around your tasks. The decorator should wrap coroutines with Futures (with asyncio.ensure_future) and everything will work as you expect. Thanks, Yury From kevinjacobconway at gmail.com Wed Dec 16 00:55:04 2015 From: kevinjacobconway at gmail.com (Kevin Conway) Date: Wed, 16 Dec 2015 05:55:04 +0000 Subject: [Python-Dev] async/await behavior on multiple calls In-Reply-To: <5670DB79.40502@gmail.com> References: <56707714.2080901@gmail.com> <5670DB79.40502@gmail.com> Message-ID: I agree with Barry. We need more material that introduces the community to the new async/await syntax and the new concepts they bring. We borrowed the words from other languages but not all of their behaviours. With coroutines in particular, we can do a better job of describing the differences between them and the previous generator-coroutines, the rules regarding what - if anything - is emitted from a '.send()', and how await resolves to a value. If you read through the asyncio Task code enough you'll figure it out, but we can't expect the community as a whole to learn the language, or asyncio, that way. Back to the OP's issue. The behaviour you are seeing of None being the value of an exhausted coroutine is consistent with that of an exhausted generator. Pushing the iterator with __next__() or .send() after completion results in a StopIteration being raised with a value of None regardless of what the final yielded/returned value was. Futures can be awaited multiple times because the __iter__/__await__ method defined causes them to raise StopIteration with the resolved value. I think the list is trying to tell you that awaiting a coro multiple times is simply not a valid case in Python because they are exhaustible resources. In asyncio, they are primarily a helpful mechanism for shipping promises to the Task wrapper. In virtually all cases the pattern is: > await some_async_def() and almost never: > coro = some_async_def() > await coro On Tue, Dec 15, 2015 at 9:34 PM Yury Selivanov wrote: > Roy, > > On 2015-12-15 8:29 PM, Roy Williams wrote: > [..] > > > > My proposal would be to automatically wrap the return value from an > > `async` function or any object implementing `__await__` in a future > > with `asyncio.ensure_future()`. This would allow async/await code to > > behave in a similar manner to other languages implementing async/await > > and would remain compatible with existing code using asyncio. > > > > What's your thoughts? > > Other languages, such as JavaScript, have a notion of event loop > integrated on a very deep level. In Python, there is no centralized > event loop, and asyncio is just one way of implementing one. > > In asyncio, Future objects are designed to inter-operate with an event > loop (that's also true for JS Promises), which means that in order to > automatically wrap Python coroutines in Futures, we'd have to define the > event loop deep in Python core. Otherwise it's impossible to implement > 'Future.add_done_callback', since there would be nothing that calls the > callbacks on completion. 
> > To avoid adding a built-in event loop, PEP 492 introduced coroutines as > an abstract language concept. David Beazley, for instance, doesn't like > Futures, and his new framework 'curio' does not have them at all. > > I highly doubt that we want to add a generalized event loop in Python > core, define a generalized Future interface, and make coroutines return > it. It's simply too much work with no clear wins. > > Now, your initial email highlights another problem: > > coro = coroutine() > print(await coro) # will print the result of coroutine > await coro # prints None > > This is a bug that needs to be fixed. We have two options: > > 1. Cache the result when the coroutine object is awaited first time. > Return the cached result when the coroutine object is awaited again. > > 2. Raise an error if the coroutine object is awaited more than once. > > The (1) option would solve your problem. But it also introduces new > complexity: the GC of result will be delayed; more importantly, some > users will wonder if we cache the result or run the coroutine again. > It's just not obvious. > > The (2) option is Pythonic and simple to understand/debug, IMHO. In > this case, the best way for you to solve your initial problem, would be > to have a decorator around your tasks. The decorator should wrap > coroutines with Futures (with asyncio.ensure_future) and everything will > work as you expect. > > Thanks, > Yury > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/kevinjacobconway%40gmail.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Wed Dec 16 01:01:14 2015 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 16 Dec 2015 16:01:14 +1000 Subject: [Python-Dev] Python semantic: Is it ok to replace not x == y with x != y? (no) In-Reply-To: References: Message-ID: On 15 December 2015 at 23:11, Victor Stinner wrote: > I guess that the optimizations on "in" and "is" operators are fine, > but optimizations on all other operations must be removed to not break > the Python semantic. Right, this is why we have functools.total_ordering as a class decorator to "fill in" the other comparison implementations based on the ones in the class body. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From ncoghlan at gmail.com Wed Dec 16 00:58:20 2015 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 16 Dec 2015 15:58:20 +1000 Subject: [Python-Dev] "python.exe is not a valid Win32 app" In-Reply-To: <20151215151402.189671B10004@webabinitio.net> References: <975950385.13382217.1448980225872.JavaMail.yahoo.ref@mail.yahoo.com> <975950385.13382217.1448980225872.JavaMail.yahoo@mail.yahoo.com> <201512011913.tB1JDYAv007962@fido.openend.se> <201512151441.tBFEfZxa031982@fido.openend.se> <20151215151402.189671B10004@webabinitio.net> Message-ID: On 16 December 2015 at 01:14, R. David Murray wrote: > That said, I'm not sure whether or not there is a way we could add > "supported versions" to the main docs that would make sense and be > useful...your bugs.python.org issue would be useful for discussing that. Having "minimum supported version" for Windows and Mac OS X in the "using" guide would likely make sense. 
For Linux, supported versions are handled by redistributors, so the
most we could do is offer guidance to folks on checking their version
and ensuring they're looking at the right documentation.

Cheers,
Nick.

--
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From ncoghlan at gmail.com  Wed Dec 16 01:11:43 2015
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Wed, 16 Dec 2015 16:11:43 +1000
Subject: [Python-Dev] async/await behavior on multiple calls
In-Reply-To: <20151215204137.12761c11@anarchist.wooz.org>
References: <56707714.2080901@gmail.com>
 <20151215204137.12761c11@anarchist.wooz.org>
Message-ID:

On 16 December 2015 at 11:41, Barry Warsaw wrote:
> The asyncio library documentation *really* needs a good overview and/or
> tutorial. These are difficult concepts to understand and it seems like
> bringing experience from other languages may not help (and may even hinder)
> understanding of Python's model. After a while, you get it, but I think it
> would be good to help folks get there sooner, especially if you're new to the
> whole area.
>
> Maybe those of you who have been steeped in asyncio for a long time could
> write that up? I don't think I'm the right person to do that, but I'd be very
> happy to review it.

One smaller step that may be helpful is changing the titles of a
couple of the sections from:

* 18.5.4. Transports and protocols (low-level API)
* 18.5.5. Streams (high-level API)

to:

* 18.5.4. Transports and protocols (callback based API)
* 18.5.5. Streams (coroutine based API)

That's based on a sample size of one though (a friend for whom light
dawned once I explained that low-level=callbacks and
high-level=coroutines), which is why I hadn't written a patch for it.

Cheers,
Nick.

--
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia

From victor.stinner at gmail.com  Wed Dec 16 02:01:32 2015
From: victor.stinner at gmail.com (Victor Stinner)
Date: Wed, 16 Dec 2015 08:01:32 +0100
Subject: [Python-Dev] Third milestone of FAT Python
In-Reply-To:
References:
Message-ID:

Le mercredi 16 décembre 2015, Franklin? Lee a écrit :
>
> I am confident that the time overhead and the savings will beat the
> versioning dict. The versioning dict method has to save a reference to
> the variable value and a reference to the name, and regularly test
> whether the dict has changed.

The performance of guards matters less than the performance of regular
usage of dict. If we have to make a choice, I prefer a "slow" guard
with no impact on regular dict methods. It's very important that
enabling FAT mode doesn't kill performance. Remember that FAT Python is
a static optimizer and so can only optimize some patterns, not all
Python code.

In my current implementation, a lookup is only needed when a guard is
checked and the dict was modified. The dict version doesn't change if a
mutable object was modified in place, for example. I didn't benchmark,
but I expect that the lookup is avoided in most cases. You should try
FAT Python and collect statistics before going too far with your idea.

> I've read it again. By subclass, I mean that it implements the same
> interface. But at the C level, I want to have it be a fork(?) of the
> current dict implementation. As for `exec`, I think it might be okay
> for it to be slower at the early stages of this game.

Be careful, dict methods are hardcoded in the C code. If your type is
not a subtype, there is a risk of crashes. I fixed issues in
Python/ceval.c, but it's not enough. You may also have to fix issues in
third-party C extensions which expect only dict for namespaces.

Victor
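To make the guard logic concrete, here is a toy pure-Python model of
the versioned-dict check Victor describes (an illustration only: as he
notes above, a real implementation cannot be a Python-level dict
subclass and lives at the C level; the `version` counter stands in for
the C-level version field):

    class VersionedDict(dict):
        """Toy model of a versioned dict: bump a counter on mutation."""
        def __init__(self, *args, **kwargs):
            super().__init__(*args, **kwargs)
            self.version = 0

        def __setitem__(self, key, value):
            super().__setitem__(key, value)
            self.version += 1

        def __delitem__(self, key):
            super().__delitem__(key)
            self.version += 1

    class GlobalGuard:
        """Guard one name: cheap version check first, a key lookup
        only when the dict really changed."""
        def __init__(self, namespace, name):
            self.namespace = namespace
            self.name = name
            self.value = namespace.get(name)
            self.version = namespace.version

        def check(self):
            if self.namespace.version == self.version:
                return True                 # fast path: no lookup at all
            value = self.namespace.get(self.name)
            if value is self.value:         # dict changed, key did not
                self.version = self.namespace.version
                return True
            return False                    # deoptimize

    ns = VersionedDict(len=len)
    guard = GlobalGuard(ns, 'len')
    ns['other'] = 1          # unrelated change: one lookup, guard re-arms
    assert guard.check()
    ns['len'] = max          # the guarded name was rebound
    assert not guard.check()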
From storchaka at gmail.com  Wed Dec 16 04:34:37 2015
From: storchaka at gmail.com (Serhiy Storchaka)
Date: Wed, 16 Dec 2015 11:34:37 +0200
Subject: [Python-Dev] Python semantic: Is it ok to replace not x == y
 with x != y? (no)
In-Reply-To:
References:
Message-ID:

On 15.12.15 15:04, Victor Stinner wrote:
> Should Python emit a warning when __eq__() is implemented but not __ne__()?

No. Actually I had removed a number of redundant (and often incorrect)
__ne__ implementations after fixing object.__ne__.

> Should Python be modified to call "not __eq__()" when __ne__() is not
> implemented?

__ne__() is always implemented (inherited from object). The default
__ne__ implementation calls __eq__() and negates its result (unless it
is NotImplemented). But a user class can define __ne__ with arbitrary
semantics. That is the purpose of adding __ne__.
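A small, self-contained example for concreteness (the classes here are
illustrative, not from the original mail):

    class Point:
        def __init__(self, x, y):
            self.x, self.y = x, y

        def __eq__(self, other):
            if not isinstance(other, Point):
                return NotImplemented
            return (self.x, self.y) == (other.x, other.y)

    # No __ne__ defined: object.__ne__ negates __eq__'s result.
    assert Point(1, 2) == Point(1, 2)
    assert not (Point(1, 2) != Point(1, 2))

    # A class may still give __ne__ semantics that are not simply the
    # negation of __eq__ (NumPy arrays and ORM expression objects do
    # this), which is why the separate slot exists:
    class Vec:
        def __init__(self, items):
            self.items = items
        def __eq__(self, other):
            return [a == b for a, b in zip(self.items, other.items)]
        def __ne__(self, other):
            return [a != b for a, b in zip(self.items, other.items)]

    assert (Vec([1, 2]) == Vec([1, 3])) == [True, False]
    assert (Vec([1, 2]) != Vec([1, 3])) == [False, True]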
From rwilliams at lyft.com  Wed Dec 16 04:50:20 2015
From: rwilliams at lyft.com (Roy Williams)
Date: Wed, 16 Dec 2015 01:50:20 -0800
Subject: [Python-Dev] async/await behavior on multiple calls
In-Reply-To: <5670DB79.40502@gmail.com>
References: <56707714.2080901@gmail.com> <5670DB79.40502@gmail.com>
Message-ID:

I totally agree that async/await should not be tied to any underlying
message pump/event loop. Ensuring that async/await works with existing
systems like Tornado is great.

As for the two options, option 1 is the expected behavior from
developers coming from other languages implementing async/await, which
is why I found the existing behavior to be so unintuitive. To Barry
and Kevin's point, this problem is exacerbated by a lack of
documentation and examples that one can follow to learn about the
Pythonic approach to async/await.

Thanks,
Roy

On Tue, Dec 15, 2015 at 7:33 PM, Yury Selivanov
wrote:

> Roy,
>
> On 2015-12-15 8:29 PM, Roy Williams wrote:
> [..]
>
>> My proposal would be to automatically wrap the return value from an
>> `async` function or any object implementing `__await__` in a future with
>> `asyncio.ensure_future()`. This would allow async/await code to behave in
>> a similar manner to other languages implementing async/await and would
>> remain compatible with existing code using asyncio.
>>
>> What are your thoughts?
>
> Other languages, such as JavaScript, have a notion of event loop
> integrated on a very deep level. In Python, there is no centralized event
> loop, and asyncio is just one way of implementing one.
>
> In asyncio, Future objects are designed to inter-operate with an event
> loop (that's also true for JS Promises), which means that in order to
> automatically wrap Python coroutines in Futures, we'd have to define the
> event loop deep in Python core. Otherwise it's impossible to implement
> 'Future.add_done_callback', since there would be nothing that calls the
> callbacks on completion.
>
> To avoid adding a built-in event loop, PEP 492 introduced coroutines as an
> abstract language concept. David Beazley, for instance, doesn't like
> Futures, and his new framework 'curio' does not have them at all.
>
> I highly doubt that we want to add a generalized event loop in Python
> core, define a generalized Future interface, and make coroutines return
> it. It's simply too much work with no clear wins.
>
> Now, your initial email highlights another problem:
>
> coro = coroutine()
> print(await coro)  # will print the result of coroutine
> await coro  # prints None
>
> This is a bug that needs to be fixed. We have two options:
>
> 1. Cache the result when the coroutine object is awaited first time.
> Return the cached result when the coroutine object is awaited again.
>
> 2. Raise an error if the coroutine object is awaited more than once.
>
> The (1) option would solve your problem. But it also introduces new
> complexity: the GC of result will be delayed; more importantly, some
> users will wonder if we cache the result or run the coroutine again.
> It's just not obvious.
>
> The (2) option is Pythonic and simple to understand/debug, IMHO. In this
> case, the best way for you to solve your initial problem, would be to have
> a decorator around your tasks. The decorator should wrap coroutines with
> Futures (with asyncio.ensure_future) and everything will work as you expect.
>
> Thanks,
> Yury
>
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe:
> https://mail.python.org/mailman/options/python-dev/rwilliams%40lyft.com

From pmiscml at gmail.com  Wed Dec 16 06:25:05 2015
From: pmiscml at gmail.com (Paul Sokolovsky)
Date: Wed, 16 Dec 2015 13:25:05 +0200
Subject: [Python-Dev] async/await behavior on multiple calls
In-Reply-To:
References: <56707714.2080901@gmail.com>
Message-ID: <20151216132505.08d0bd10@x230>

Hello,

On Tue, 15 Dec 2015 17:29:26 -0800
Roy Williams wrote:

> @Kevin correct, that's the point I'd like to discuss. Most other
> mainstream languages that implement async/await expose the
> programming model with Tasks/Futures/Promises as opposed to
> coroutines. PEP 492 states 'Objects with __await__ method are called
> Future-like objects in the rest of this PEP.', but their behavior
> differs from that of Futures in this core way. Given that most other
> languages have standardized around async returning a Future as
> opposed to a coroutine, I think it's worth exploring why Python
> differs.

Sorry, but what makes you think that it's worth exploring why Python
differs, and not why other languages differ? For example, JavaScript
has a hard heritage of callback mess. To address that at least
somehow, Promises were introduced, which is still too low-level a
concurrency mechanism. When they finally picked up coroutines, they
still had to carry all that burden of callback mess and Promises, and
that's why "ES7" differs.

Also, what "most other languages" do you mean? Lua was a pioneer of
coroutine usage in scripting languages, with research behind that. It
doesn't have any "futures" or "promises" as part of the language. It
has only coroutines. For niche cases when "futures" or "promises" are
needed, they can be implemented on top of coroutines.

And that's actually the problem with Python's asyncio - it tries to
marry all the orthogonal concurrency concepts, and unfortunately a
good deal of mess ensues. It doesn't help on the "PR" side either,
because coroutine lovers blame it for not being based entirely on the
language's native coroutines, strangers from other languages want to
twist it to be based entirely on foreign concepts like futures, and
Twisted haters hate that it has too much complication taken from
Twisted, etc.

> There's a lot of benefits to making the programming model coroutines
> without a doubt. It's absolutely brilliant that I can just call code
> annotated with @asyncio.coroutine and have it just work. Code using
> the old @asyncio.coroutine/yield from syntax should absolutely stay
> the same. Similarly, since ES7 async/await is backed by Promises
> it'll just work for any existing code out there using Promises.
>
> My proposal would be to automatically wrap the return value from an
> `async` function or any object implementing `__await__` in a future
> with `asyncio.ensure_future()`. This would allow async/await code to
> behave in a similar manner to other languages implementing
> async/await and would remain compatible with existing code using
> asyncio.
>
> What are your thoughts?

My thought is: what did other languages tell you when you approached
them with the proposal to behave like Python?

Also, wrapping objects in other objects is expensive, especially if
the latter kind of object isn't really needed - it's perfectly
possible to write applications which don't use or need any futures at
all, using just coroutines. Moreover, some people argue that most apps
real people would write are such, and Futures are a niche feature, so
they can't be the center of the world.

> Thanks,
> Roy

[]

--
Best regards,
 Paul                          mailto:pmiscml at gmail.com
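Since both Guido (asyncio.ensure_future()) and Yury (a decorator around
tasks) point at the same remedy in this thread, a short self-contained
sketch of it may help; the function names here are illustrative only:

    import asyncio

    async def compute():
        await asyncio.sleep(0.01)
        return 42

    async def main():
        # A bare coroutine object is exhausted by its first await.
        # A Task/Future caches its result, so it may be awaited
        # repeatedly without re-running the coroutine.
        fut = asyncio.ensure_future(compute())
        first = await fut
        second = await fut      # cached result, no re-execution
        assert first == second == 42

    loop = asyncio.get_event_loop()
    loop.run_until_complete(main())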
> There's a lot of benefits to making the programming model coroutines
> without a doubt. It's absolutely brilliant that I can just call code
> annotated with @asyncio.coroutine and have it just work. Code using
> the old @asyncio.coroutine/yield from syntax should absolutely stay
> the same. Similarly, since ES7 async/await is backed by Promises
> it'll just work for any existing code out there using Promises.
>
> My proposal would be to automatically wrap the return value from an
> `async` function or any object implementing `__await__` in a future
> with `asyncio.ensure_future()`. This would allow async/await code to
> behave in a similar manner to other languages implementing
> async/await and would remain compatible with existing code using
> asyncio.
>
> What are your thoughts?

My thought is: what did the other languages say when you approached them with the proposal to behave like Python?

Also, wrapping objects in other objects is expensive. Especially if the latter kind of object isn't really needed - it's perfectly possible to write applications which don't use or need any futures at all, using just coroutines. Moreover, some people argue that most apps real people would write are such, and Futures are a niche feature, so they can't be the center of the world.

> Thanks,
> Roy

[]

--
Best regards,
Paul                          mailto:pmiscml at gmail.com

From storchaka at gmail.com  Wed Dec 16 09:12:47 2015
From: storchaka at gmail.com (Serhiy Storchaka)
Date: Wed, 16 Dec 2015 16:12:47 +0200
Subject: [Python-Dev] New poll about a macro for safe reference replacing
Message-ID: 

I'm bringing this up again, since the results of the previous poll did not give an unambiguous result. Related links: [1], [2], [3], [4].

Let me remind you that we are talking about adding the following macro. It is needed for safely replacing references. For now there is at least one open crash report that can be solved with this macro [5] (I think there is yet another one, but I can't find it just now), and 50 potential bugs for which we still do not have a reproducer.

#define Py_XXX(ptr, new_value) \
    { \
        PyObject *__tmp__ = ptr; \
        ptr = new_value; \
        Py_DECREF(__tmp__); \
    }

The problem is only in the macro name. There are objections against every proposed name, and no one name gained a convincing majority.

Here are the names that gained the largest numbers of votes, plus names proposed during polling.

1. Py_SETREF
2. Py_DECREF_REPLACE
3. Py_REPLACE
4. Py_SET_POINTER
5. Py_SET_ATTR
6. Py_REPLACE_REF

Please put your vote (a floating-point number from -1 to 1 inclusive) for each of the proposed names. You can also propose a new name.

[1] https://mail.python.org/pipermail/python-dev/2008-May/079862.html
[2] http://comments.gmane.org/gmane.comp.python.devel/145346
[3] http://comments.gmane.org/gmane.comp.python.devel/145974
[4] http://bugs.python.org/issue20440
[5] http://bugs.python.org/issue24103

From yselivanov.ml at gmail.com  Wed Dec 16 09:53:56 2015
From: yselivanov.ml at gmail.com (Yury Selivanov)
Date: Wed, 16 Dec 2015 09:53:56 -0500
Subject: [Python-Dev] async/await behavior on multiple calls
In-Reply-To: 
References: <56707714.2080901@gmail.com> <5670DB79.40502@gmail.com>
Message-ID: <56717B04.9050802@gmail.com>

On 2015-12-16 12:55 AM, Kevin Conway wrote:
> I think the list is trying to tell you that awaiting a coro multiple
> times is simply not a valid case in Python because they are
> exhaustible resources. In asyncio, they are primarily a helpful
> mechanism for shipping promises to the Task wrapper.
> In virtually all cases the pattern is:
>
>     await some_async_def()
>
> and almost never:
>
>     coro = some_async_def()
>     await coro

That's exactly right, thank you, Kevin.

Yury

From random832 at fastmail.com  Wed Dec 16 09:53:52 2015
From: random832 at fastmail.com (Random832)
Date: Wed, 16 Dec 2015 09:53:52 -0500
Subject: [Python-Dev] New poll about a macro for safe reference replacing
References: 
Message-ID: <878u4umolb.fsf@fastmail.com>

Serhiy Storchaka writes:

> I'm bringing this up again, since the results of the previous poll did
> not give an unambiguous result. Related links: [1], [2], [3], [4].
>
> Let me remind you that we are talking about adding the following macro.
> It is needed for safely replacing references. For now there is at least
> one open crash report that can be solved with this macro [5] (I think
> there is yet another one, but I can't find it just now), and 50
> potential bugs for which we still do not have a reproducer.
>
> #define Py_XXX(ptr, new_value) \
>     { \
>         PyObject *__tmp__ = ptr; \
>         ptr = new_value; \
>         Py_DECREF(__tmp__); \
>     }

At the risk of bikeshedding, this needs do { ... } while(0), or it almost certainly will eventually be called incorrectly in an if/else statement. Yes, it's ugly, but that's part of the cost of using macros.

If it were implemented as below, then it could evaluate ptr only once, at the cost of requiring it to refer to an addressable pointer object:

    PyObject **__tmpp__ = &(ptr);
    PyObject *__tmp__ = *__tmpp__;
    *__tmpp__ = (new_value);
    Py_DECREF(__tmp__);

I'm not entirely sure of the benefit of a macro over an inline function. Or why it doesn't INCREF the new value, maintaining the invariant that ptr is an owned reference.

> 1. Py_SETREF
> 2. Py_DECREF_REPLACE
> 3. Py_REPLACE
> 4. Py_SET_POINTER
> 5. Py_SET_ATTR
> 6. Py_REPLACE_REF

I think "SET" names imply that it's safe if the original reference is NULL. This isn't an objection to the names, but if it is given one of those names I think it should use Py_XDECREF.

From storchaka at gmail.com  Wed Dec 16 10:12:19 2015
From: storchaka at gmail.com (Serhiy Storchaka)
Date: Wed, 16 Dec 2015 17:12:19 +0200
Subject: [Python-Dev] New poll about a macro for safe reference replacing
In-Reply-To: <878u4umolb.fsf@fastmail.com>
References: <878u4umolb.fsf@fastmail.com>
Message-ID: 

On 16.12.15 16:53, Random832 wrote:
> At the risk of bikeshedding, this needs do { ... } while(0), or
> it almost certainly will eventually be called incorrectly in an
> if/else statement. Yes, it's ugly, but that's part of the cost
> of using macros.

Yes, of course, and the patch for issue20440 uses this idiom. Here it is omitted for clarity.

> If it were implemented as below, then it could evaluate ptr only
> once, at the cost of requiring it to refer to an addressable
> pointer object:
>     PyObject **__tmpp__ = &(ptr);
>     PyObject *__tmp__ = *__tmpp__;
>     *__tmpp__ = (new_value);
>     Py_DECREF(__tmp__);
>
> I'm not entirely sure of the benefit of a macro over an inline
> function.

Because the first argument is passed by reference (as in Py_INCREF etc).

> Or why it doesn't INCREF the new value, maintaining
> the invariant that ptr is an owned reference.

Because in the majority of use cases stealing a reference is what is needed. Otherwise we would virtually always need to decref a reference just after using this macro, and we couldn't use it as Py_XXX(obj->attr, PySomething_New()).

> I think "SET" names imply that it's safe if the original
> reference is NULL.
> This isn't an objection to the names, but if it is given one of those
> names I think it should use Py_XDECREF.

Originally I proposed pairs of functions with and without X in the name (as Py_DECREF/Py_XDECREF). In this poll this detail is omitted for clarity. Later we can create a new poll if needed.

From rymg19 at gmail.com  Wed Dec 16 10:44:15 2015
From: rymg19 at gmail.com (Ryan Gonzalez)
Date: Wed, 16 Dec 2015 09:44:15 -0600
Subject: [Python-Dev] New poll about a macro for safe reference replacing
In-Reply-To: 
References: 
Message-ID: <1E5A6394-4344-4AC2-B295-88DE1B3F6015@gmail.com>

On December 16, 2015 8:12:47 AM CST, Serhiy Storchaka wrote:
[..]
>1. Py_SETREF
>2. Py_DECREF_REPLACE
>3. Py_REPLACE
>4. Py_SET_POINTER
>5. Py_SET_ATTR
>6. Py_REPLACE_REF

5 kinda sucks, since this has virtually nothing to do with attributes. 3 sounds like it does an operation on the object itself. 4 sounds stupid.

So:

1. +0
2. +0.5
3. -1
4. -1
5. -1
6. +1

>Please put your vote (a floating-point number from -1 to 1 inclusive)
>for each of the proposed names. You can also propose a new name.

Py_RESET? Like C++'s shared_ptr::reset: http://en.cppreference.com/w/cpp/memory/shared_ptr/reset.

[..]

--
Sent from my Nexus 5 with K-9 Mail. Please excuse my brevity.

From random832 at fastmail.com  Wed Dec 16 11:15:04 2015
From: random832 at fastmail.com (Random832)
Date: Wed, 16 Dec 2015 11:15:04 -0500
Subject: [Python-Dev] New poll about a macro for safe reference replacing
References: <878u4umolb.fsf@fastmail.com>
Message-ID: <87zixal69j.fsf@fastmail.com>

Serhiy Storchaka writes:

>> I'm not entirely sure of the benefit of a macro over an inline
>> function.
>
> Because the first argument is passed by reference (as in Py_INCREF
> etc).

Then a macro implemented using an inline function, e.g., #define Py_REPLACE(p, x) Py_REPLACE_impl(&(p), x). Were INCREF implemented this way it could return the reference (imagine Py_REPLACE(foo, Py_INCREF(bar))). The other advantage to an inline function is that it lets the compiler make the decision about optimizing for size or time.

>> I think "SET" names imply that it's safe if the original
>> reference is NULL.
>> This isn't an objection to the names, but if it is given one of those
>> names I think it should use Py_XDECREF.
>
> Originally I proposed pairs of functions with and without X in the name
> (as Py_DECREF/Py_XDECREF). In this poll this detail is omitted for
> clarity. Later we can create a new poll if needed.

I think that any variant on "SET" strongly implies that it need not have already been set, and think even a "SET/REPLACE" pair would be better than "XSET/SET".

From guido at python.org  Wed Dec 16 12:57:31 2015
From: guido at python.org (Guido van Rossum)
Date: Wed, 16 Dec 2015 09:57:31 -0800
Subject: [Python-Dev] async/await behavior on multiple calls
In-Reply-To: 
References: <56707714.2080901@gmail.com> <5670DB79.40502@gmail.com>
Message-ID: 

On Wed, Dec 16, 2015 at 1:50 AM, Roy Williams wrote:

[..]

I don't disagree that more intro docs are needed. However, just to cut short a fruitless discussion, there is zero chance that Python will change (nor is there any chance that the other languages will change). Language features that look the same often don't behave the same (e.g. variables in Python are entirely different beasts than in C#, and also behave quite differently from variables in JavaScript).

Also, if you aren't giving up on changing Python, please move to python-ideas, which is the designated place to discuss possible language changes.

--
--Guido van Rossum (python.org/~guido)
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From tseaver at palladion.com  Wed Dec 16 13:55:00 2015
From: tseaver at palladion.com (Tres Seaver)
Date: Wed, 16 Dec 2015 13:55:00 -0500
Subject: [Python-Dev] async/await behavior on multiple calls
In-Reply-To: 
References: <56707714.2080901@gmail.com> <20151215204137.12761c11@anarchist.wooz.org>
Message-ID: <5671B384.9000106@palladion.com>

On 12/16/2015 01:11 AM, Nick Coghlan wrote:
> One smaller step that may be helpful is changing the titles of a
> couple of the sections from:
>
> * 18.5.4. Transports and protocols (low-level API)
> * 18.5.5. Streams (high-level API)
>
> to:
>
> * 18.5.4. Transports and protocols (callback based API)
> * 18.5.5. Streams (coroutine based API)
>
> That's based on a sample size of one though (a friend for whom light
> dawned once I explained that low-level=callbacks and
> high-level=coroutines), which is why I hadn't written a patch for it.

+1. That certainly tripped the switch for me. I wish more of the asyncio stuff would illuminate itself so smoothly. ;)

Tres.
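A minimal sketch of the two styles being renamed, assuming asyncio (the handler names are just for the example):

    import asyncio

    # Callback based (transports and protocols): the loop calls you.
    class EchoProtocol(asyncio.Protocol):
        def connection_made(self, transport):
            self.transport = transport

        def data_received(self, data):
            self.transport.write(data)

    # Coroutine based (streams): you await the loop.
    async def handle_echo(reader, writer):
        data = await reader.read(100)
        writer.write(data)
        await writer.drain()

    # Served via loop.create_server(EchoProtocol, host, port) in the first
    # case, and asyncio.start_server(handle_echo, host, port) in the second.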
--
===================================================================
Tres Seaver          +1 540-429-0999          tseaver at palladion.com
Palladion Software   "Excellence by Design"    http://palladion.com
[PGP signature scrubbed]

From abarnert at yahoo.com  Wed Dec 16 14:35:13 2015
From: abarnert at yahoo.com (Andrew Barnert)
Date: Wed, 16 Dec 2015 11:35:13 -0800
Subject: [Python-Dev] async/await behavior on multiple calls
In-Reply-To: <20151216132505.08d0bd10@x230>
References: <56707714.2080901@gmail.com> <20151216132505.08d0bd10@x230>
Message-ID: <94307B8A-BADA-4A27-801B-194F67F440D6@yahoo.com>

> On Dec 16, 2015, at 03:25, Paul Sokolovsky wrote:
>
> Hello,
>
> On Tue, 15 Dec 2015 17:29:26 -0800
> Roy Williams wrote:
>
>> @Kevin correct, that's the point I'd like to discuss. Most other
>> mainstream languages that implement async/await expose the
>> programming model with Tasks/Futures/Promises as opposed to
>> coroutines. PEP 492 states 'Objects with __await__ method are called
>> Future-like objects in the rest of this PEP.' but their behavior
>> differs from that of Futures in this core way. Given that most other
>> languages have standardized around async returning a Future as
>> opposed to a coroutine, I think it's worth exploring why Python
>> differs.
>
> Sorry, but what makes you think that it's worth exploring why Python
> differs, and not why other languages differ?

They're really the same question. Python differs from C# in that it builds async on top of language-level coroutines instead of hiding them under the hood, it only requires a simple event loop (which can be trivially built on a select-like function and a loop) rather than a powerful OS/VM-level task scheduler, it's designed to allow pluggable schedulers (maybe even multiple schedulers in one app), it doesn't have a static type system to assist it, ... Turn it around and ask how C# differs from Python and you get the same differences. And there's no value judgment either way.

So, do any of those explain why some Python awaitables aren't safely re-awaitable? Yes: the fact that Python uses language-level coroutines instead of hiding them under the covers means that it makes sense to be able to directly await coroutines (and to make async functions return those coroutines when called), which raises a question that doesn't exist in C#. What happens when you await an already-consumed awaitable? That question doesn't arise in C# because it doesn't have consumable awaitables. Python _could_ just punt on that by not allowing coroutines to be awaitable, or auto-wrapping them, but that would be giving up a major positive benefit over C#. So, that means Python instead has to decide what happens.
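Concretely, the decision looks like this (a sketch; the "just raise" option described below is the behavior CPython itself later adopted for exhausted coroutines):

    import asyncio

    async def answer():
        return 42

    async def demo():
        coro = answer()
        print(await coro)  # 42
        await coro         # at the time of this thread: silently evaluates
                           # to None; under the "just raise" semantics it
                           # becomes RuntimeError: cannot reuse already
                           # awaited coroutine

    asyncio.get_event_loop().run_until_complete(demo())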
In general, the semantics of awaiting an awaitable are that you get its value or an exception. Can you preserve those semantics even with raw coroutines as awaitables? Sure; as two people have pointed out in this thread, just make awaiting a consumed coroutine raise. Problem solved. But if nobody had asked about the differences between Python and C#, it would have been a lot harder to solve (or even see) the question.

> Also, what "most other languages" do you mean?

Well, what he said was "Most other mainstream languages that implement async/await". But you're right; clearly what he meant was just C#, because that's the only other mainstream language that implements async/await today. Others (JS, Scala) are implementing it or considering doing so, but, just like Python, they're borrowing it from C# anyway. (Unless you want to call F# async blocks and let! binding the same feature--but if so, C# borrowed from F# and everyone else borrowed from C#, so it's still the same.)

> Lua was a pioneer of
> coroutine usage in scripting languages, with research behind that.
> It doesn't have any "futures" or "promises" as part of the language.
> It has only coroutines. For niche cases when "futures" or "promises"
> are needed, they can be implemented on top of coroutines.
>
> And that's actually the problem with Python's asyncio - it tries to
> marry all the orthogonal concurrency concepts, and unfortunately a
> good deal of mess ensues.

The fact that futures can be built on top of coroutines, or on top of promises and callbacks, means they're a way to tie together pieces of asynchronous code written in different styles. And the idea of a simple supertype of both futures and coroutines that's sufficient for a large set of problems means you rarely need wrappers to transform one into the other; just use whichever one you have as an awaitable and it works. So, you can write 80% of your code in terms of awaitables, but if the last 20% needs to get at the native coroutines, or to integrate with legacy code using callbacks, it's easy to do so. In C#, you instead have to simulate those coroutines with promises even when you're not integrating with legacy code; in a language without futures you'd have to wrap each call into and out of legacy code manually.

If you were designing a new language, you could probably get away with something a lot simpler. (If the only thing you could ever need a future for is to cache an awaitable value, it's a one-liner.) But for Python (and JS, Scala, C#, etc.) that isn't an option.

> It doesn't help on the "PR" side either, because coroutine
> lovers blame it for not being based entirely on the language's native
> coroutines, strangers from other languages want to twist it to be based
> entirely on foreign concepts like futures, Twisted haters hate that it
> has too much complication taken from Twisted, etc.

There is definitely a PR problem, but I think that's tied directly to the documentation problem, not anything about the design. Unless you've come to things in the same order as Guido, it's hard to figure out even where to dive in to start learning. So you try to write something, fail, get frustrated, and write an angry blog post about why Python asyncio sucks, which actually just exposes your own ignorance of how it works, but since 90% of your readers are just as ignorant of how it works, they believe you're right.
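Picking up the parenthetical above: the "cache an awaitable value" future really is tiny. A sketch, ignoring concurrent awaiters (illustrative names only):

    class Cached:
        _UNSET = object()

        def __init__(self, awaitable):
            self._awaitable = awaitable
            self._value = self._UNSET

        def __await__(self):
            if self._value is self._UNSET:
                self._value = yield from self._awaitable.__await__()
            return self._value  # every later await returns the cached value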
Part of the problem is that there are so many different mediocre paradigms for async programming that each have a million people who sort of know them just well enough to use them. A tutorial that would explain asyncio to someone who's written lots of traditional JS-style callbacks will be useless to someone who's written C-style reactors or Lua-style coroutines. So we probably need a bunch of separate tutorials just to get different classes of people thinking in the right terms before they can read the more detailed documentation.

Also, as with every async design, the first 30 tutorials anyone writes all completely neglect the problem of communicating between tasks (e.g., building a chat server instead of an echo server), so people think that what was easy in their familiar paradigm (because they've gotten used to it, and it's been years since they had to figure it out for themselves because none of the tutorials covered it, so they forgot that part) is hard in the new one, and therefore the new one sucks.

>> There's a lot of benefits to making the programming model coroutines
>> without a doubt. It's absolutely brilliant that I can just call code
>> annotated with @asyncio.coroutine and have it just work. Code using
>> the old @asyncio.coroutine/yield from syntax should absolutely stay
>> the same. Similarly, since ES7 async/await is backed by Promises
>> it'll just work for any existing code out there using Promises.
>>
>> My proposal would be to automatically wrap the return value from an
>> `async` function or any object implementing `__await__` in a future
>> with `asyncio.ensure_future()`. This would allow async/await code to
>> behave in a similar manner to other languages implementing
>> async/await and would remain compatible with existing code using
>> asyncio.
>>
>> What are your thoughts?
>
> My thought is: what did the other languages say when you approached
> them with the proposal to behave like Python?

I'm pretty sure if you approached the C# team and asked them why re-awaiting a coroutine doesn't produce nil, they'd explain that they deliberately chose not to expose coroutines (actually, I believe they were thinking in terms of continuations, as in F#, but...) under the theory that awaitables are all you'll ever need, which means that problem doesn't come up in the first place. The language can implicitly add such a wrapper and then easily optimize it away when possible because the user never sees inside the wrapper.

And if you asked the ES7 committee, they might tell you they actually wanted something closer to Python, but it was just too hard to fit it into their brittle language, so they can't expose awaitables as anything but futures and hope their clever interpreters can optimize out the extra abstraction that you usually don't need, so the question doesn't arise for them either. And if you asked the Scala await fork developers, they'd probably point out that the idiomatic Scala equivalents to returning None and to raising are both returning an empty optional value, so the question doesn't arise for them for a different reason. And in F#, you can build a let!-awaitable out of a raw continuation instead of an async expression, but you have to write the code for that yourself, so you can decide what it does when re-awaited; it's not up to the language or stdlib. And so on.

But, even if I'm wrong, and asking those questions would improve those languages, it still wouldn't improve Python.

> Also, wrapping objects in other objects is expensive.
> Especially if the latter kind of object isn't really needed - it's
> perfectly possible to write applications which don't use or need any
> futures at all, using just coroutines. Moreover, some people argue that
> most apps real people would write are such, and Futures are a niche
> feature, so they can't be the center of the world.

Well, the whole point of the async model is that most apps real people write only depend on awaitables, and they almost never care whether they're futures or coroutines. This means a language can avoid the overhead of wrapping coroutines in futures (like Python), or keep coroutines out of the user-visible data model (like C#), and work almost the same way.

The problem is that Python is the first mainstream language to adopt awaitables built on top of native, user-visible coroutines, so it has to answer a few questions that C# dodged--like what happens when you await the same coroutine multiple times. That's not a negative judgment on Python, it's just a natural consequence of Python being a little more powerful here than the language it's borrowing from. Refusing to look at the differences between Python and C# would mean not noticing that and leaving it for some future language to solve instead of letting future languages copy from Python (which is always the best way to be consistent with everyone else, of course).

From steve at holdenweb.com  Wed Dec 16 15:33:20 2015
From: steve at holdenweb.com (Steve Holden)
Date: Wed, 16 Dec 2015 20:33:20 +0000
Subject: [Python-Dev] [Webmaster] Python keeps installing as 32 bit
In-Reply-To: 
References: 
Message-ID: 

Hi Robb,

This address is really for web site issues, but we are mostly old hands, and reasonably well-connected, so we try to act as a helpful channel when we can.

In this case I can't personally help (though another webmaster may, if available, be able to offer advice). I stopped doing system administration for anything but my own machines a long time ago, having done far too much :-)

The many mailing list channels available are listed at https://mail.python.org/mailman/listinfo. I would recommend that you try the distutils list at https://mail.python.org/mailman/listinfo/distutils-sig; they don't actually build the Python installers (the dev who does that lives on python-dev, so that would be the place to go to get the scoop, and your email shows enough signs of competence that you need not fear adverse reactions). It seems like a reasonable enquiry to me, and I'm sorry I can't answer it.

I've Cc'd this email to python-dev on the off-chance that someone will recognise my name and let it through, but I don't know how many people are working on the Windows installer or how busy they are.

There are plenty of people smart enough to answer your question out there now, it's just a question of finding them. stackoverflow.com has a pretty good Python channel too.

In any case, good luck, and thanks for reaching out to Python.

regards
Steve

On Wed, Dec 16, 2015 at 7:29 PM, Mullins, Robb wrote:

> Hi,
>
> Not quite sure where to ask this.
>
> I don't use Python myself. I keep user desktops updated. Everything's
> 64-bit. In the past I was able to install 32-bit Python on 32-bit machines
> and 64-bit Python on 64-bit machines. Now it's just the one msi file to
> install, at least for 3.5.1. I do have a couple Python 2.7.9 users.
> We're all 64-bit for machines, but I keep having Python install as 32-bit.
> I'm not sure if it recognizes something on the machine and matches it for
> being 32-bit that I'm not aware of. It can be tricky to uninstall, so it
It can be tricky to uninstall, so it > becomes a slight issue. I just want to get 64-bit Python on my user > machines, unless it?s not possible. > > > > Is there a better place to ask this? > > > > > > Thanks, > > RM > > > > Desktop Support Specialist > > Center for Innovation in Teaching & Learning > > citl-techsupport at mx.uillinois.edu *(For computer issues, please use the > ticket system.)* > > (217) 333-2146 > > > > _______________________________________________ > Webmaster mailing list > Webmaster at python.org > https://mail.python.org/mailman/listinfo/webmaster > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From brett at python.org Wed Dec 16 15:39:27 2015 From: brett at python.org (Brett Cannon) Date: Wed, 16 Dec 2015 20:39:27 +0000 Subject: [Python-Dev] [Webmaster] Python keeps installing as 32 bit In-Reply-To: References: Message-ID: I can say for certain that Python 3.5.1 will install as 64-bit as that's what I'm personally running on the Windows 10 laptop that I'm writing this email on. If you look at https://www.python.org/downloads/release/python-351/ you will notice there are explicit 64-bit installers that you can use. Did you get your copy of Python by going straight to python.org/download and clicking the yellow "Download Python 3.5.1" button? On Wed, 16 Dec 2015 at 12:33 Steve Holden wrote: > Hi Robb, > > This address is really for web site issues, but we are mostly old hands, > and reasonably well-connected, so we try to act as a helpful channel when > we can. > > In this case I can't personally help (though another webmaster may, if > available, be able to offer advice). I stopped doing system administration > for anything but my own machines a long time ago, having done far too much > :-) > > The many mailing list channels available are listed at > https://mail.python.org/mailman/listinfo. I would recommend that you try > the distutils list at > https://mail.python.org/mailman/listinfo/distutils-sig; they don't > actually build the Python installers (the dev who does that lives on > python-dev, so that would be the place to go to get the scoop, and your > email shows enough signs of competence that you need not fear adverse > reactions). It seems like a reasonable enquiry to me, and I'm sorry I can't > answer it. > > I've Cc'd this email to python-dev on the off-chance that someone will > recognise my name and let it through, but I don't know how many people are > working on the Windows installer or how busy they are. > > There are plenty of people smart enough to answer your question out there > now, it's just a question of finding them. stackoverflow.com has a pretty > good Python channel too. > > In any case, good luck, and thanks for reaching out to Python. > > regards > Steve > > On Wed, Dec 16, 2015 at 7:29 PM, Mullins, Robb > wrote: > >> Hi, >> >> >> >> Not quite sure where to ask this. >> >> >> >> I don?t use Python myself. I keep user desktops updated. Everything?s >> 64-bit. In the past I was able to install 32-bit Python on 32-bit machines >> and 64-bit Python on 64-bit machines. Now it?s just the one msi file to >> install, at least for 3.5.1. I do have a couple Python 2.7.9 users. >> We?re all 64-bit for machines, but I keep having Python install as 32-bit. >> I?m not sure if it recognizes something on the machine and matches it for >> being 32-bit that I?m not aware of. It can be tricky to uninstall, so it >> becomes a slight issue. I just want to get 64-bit Python on my user >> machines, unless it?s not possible. 
>> >> >> >> Is there a better place to ask this? >> >> >> >> >> >> Thanks, >> >> RM >> >> >> >> Desktop Support Specialist >> >> Center for Innovation in Teaching & Learning >> >> citl-techsupport at mx.uillinois.edu *(For computer issues, please use the >> ticket system.)* >> >> (217) 333-2146 >> >> >> >> _______________________________________________ >> Webmaster mailing list >> Webmaster at python.org >> https://mail.python.org/mailman/listinfo/webmaster >> >> > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/brett%40python.org > -------------- next part -------------- An HTML attachment was scrubbed... URL: From victor.stinner at gmail.com Wed Dec 16 16:14:12 2015 From: victor.stinner at gmail.com (Victor Stinner) Date: Wed, 16 Dec 2015 22:14:12 +0100 Subject: [Python-Dev] New poll about a macro for safe reference replacing In-Reply-To: References: Message-ID: 2015-12-16 15:12 GMT+01:00 Serhiy Storchaka : > Here are names gained the largest numbers of votes plus names proposed > during polling. > > 1. Py_SETREF +1: obvious name > 2. Py_DECREF_REPLACE -1: too long > 3. Py_REPLACE 0: less explicit than but: not mention of reference > 4. Py_SET_POINTER -1: a reference is not a pointer > 5. Py_SET_ATTR -1: it's not an attribute > 6. Py_REPLACE_REF +0.5: close to Py_SETREF, but longer and if I recall correctly "set" is more common than "replace" in the Python language Victor From victor.stinner at gmail.com Wed Dec 16 16:16:03 2015 From: victor.stinner at gmail.com (Victor Stinner) Date: Wed, 16 Dec 2015 22:16:03 +0100 Subject: [Python-Dev] New poll about a macro for safe reference replacing In-Reply-To: References: <878u4umolb.fsf@fastmail.com> Message-ID: 2015-12-16 16:12 GMT+01:00 Serhiy Storchaka : > Originally I proposed pairs of functions with and withot X in the name (as > Py_DECREF/Py_XDECREF). In this poll this detail is omitted for clearness. > Later we can create a new poll if needed. I would prefer a single macro to avoid bugs, I don't think that such macro has a critical impact on performances. It's more designed for safety, no? Victor From rmullins at illinois.edu Wed Dec 16 15:49:15 2015 From: rmullins at illinois.edu (Mullins, Robb) Date: Wed, 16 Dec 2015 20:49:15 +0000 Subject: [Python-Dev] [Webmaster] Python keeps installing as 32 bit In-Reply-To: References: Message-ID: Yeah, I was using Windows x86-64 executable installer from that page. I tried unzipping it just in case, no luck. I?m thinking I?ll probably just use 32-bit though. I found a post saying 64-bit might have issues compiling. I don?t think users will know or care. And there x86 installers are there. http://www.howtogeek.com/197947/how-to-install-python-on-windows/ [cid:image001.jpg at 01D13810.EF3A90C0] The only other thing I was thinking was something with the chip maybe. I ran into this about a year ago. (Or more now?) I Python down for 32 vs 64-bit. Then I noticed some 64-bit machines were still doing 32-bit, but I only have the x86-64.exe. I can?t force x64 on it. It?s not a huge issue at this point. Once I figure it out, it will save time. I?m planning on manually uninstalling versions of Python and then installing the current one (leaning toward x86 now) so all the user machines are consistent. 
Thanks,
Robb

Desktop Support Specialist
Center for Innovation in Teaching & Learning
citl-techsupport at mx.uillinois.edu (For computer issues, please use the ticket system.)
(217) 333-2146

From: Brett Cannon [mailto:brett at python.org]
Sent: Wednesday, December 16, 2015 2:39 PM
To: Steve Holden ; Mullins, Robb
Cc: webmaster at python.org; python-dev at python.org
Subject: Re: [Python-Dev] [Webmaster] Python keeps installing as 32 bit

[..]
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: image001.jpg
Type: image/jpeg
Size: 27656 bytes
Desc: image001.jpg
URL: 

From steve.dower at python.org  Wed Dec 16 17:12:01 2015
From: steve.dower at python.org (Steve Dower)
Date: Thu, 17 Dec 2015 09:12:01 +1100
Subject: [Python-Dev] New poll about a macro for safe reference replacing
In-Reply-To: 
References: 
Message-ID: 

x2 for all of Victor's votes and reasoning.

Top-posted from my Windows Phone

-----Original Message-----
From: "Victor Stinner"
Sent: 12/17/2015 8:16
To: "Serhiy Storchaka"
Cc: "Python Dev"
Subject: Re: [Python-Dev] New poll about a macro for safe reference replacing

[..]

_______________________________________________
Python-Dev mailing list
Python-Dev at python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: https://mail.python.org/mailman/options/python-dev/steve.dower%40python.org
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From yselivanov.ml at gmail.com  Wed Dec 16 17:40:34 2015
From: yselivanov.ml at gmail.com (Yury Selivanov)
Date: Wed, 16 Dec 2015 17:40:34 -0500
Subject: [Python-Dev] New poll about a macro for safe reference replacing
In-Reply-To: 
References: 
Message-ID: <5671E862.7060705@gmail.com>

> Here are the names that gained the largest numbers of votes, plus names
> proposed during polling.
>
> 1. Py_SETREF
> 2. Py_DECREF_REPLACE
> 3. Py_REPLACE
> 4. Py_SET_POINTER
> 5. Py_SET_ATTR
> 6. Py_REPLACE_REF

I like Py_SETREF, so +1 for it. 0 for other names.

Yury

From brett at python.org  Wed Dec 16 18:23:37 2015
From: brett at python.org (Brett Cannon)
Date: Wed, 16 Dec 2015 23:23:37 +0000
Subject: [Python-Dev] New poll about a macro for safe reference replacing
In-Reply-To: <5671E862.7060705@gmail.com>
References: <5671E862.7060705@gmail.com>
Message-ID: 

On Wed, 16 Dec 2015 at 14:41 Yury Selivanov wrote:

[..]

> I like Py_SETREF, so +1 for it. 0 for other names.

+1 for Py_SETREF.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From yselivanov.ml at gmail.com  Wed Dec 16 21:34:41 2015
From: yselivanov.ml at gmail.com (Yury Selivanov)
Date: Wed, 16 Dec 2015 21:34:41 -0500
Subject: [Python-Dev] async/await behavior on multiple calls
In-Reply-To: 
References: <56707714.2080901@gmail.com> <20151215204137.12761c11@anarchist.wooz.org>
Message-ID: <56721F41.60300@gmail.com>

On 2015-12-16 1:11 AM, Nick Coghlan wrote:
> On 16 December 2015 at 11:41, Barry Warsaw wrote:
>> The asyncio library documentation *really* needs a good overview and/or
>> tutorial. These are difficult concepts to understand and it seems like
>> bringing experience from other languages may not help (and may even hinder)
>> understanding of Python's model. After a while, you get it, but I think it
>> would be good to help folks get there sooner, especially if you're new to the
>> whole area.
>>
>> Maybe those of you who have been steeped in asyncio for a long time could
>> write that up? I don't think I'm the right person to do that, but I'd be very
>> happy to review it.
> One smaller step that may be helpful is changing the titles of a
> couple of the sections from:
>
> * 18.5.4. Transports and protocols (low-level API)
> * 18.5.5. Streams (high-level API)
>
> to:
>
> * 18.5.4. Transports and protocols (callback based API)
> * 18.5.5. Streams (coroutine based API)
>
> That's based on a sample size of one though (a friend for whom light
> dawned once I explained that low-level=callbacks and
> high-level=coroutines), which is why I hadn't written a patch for it.

Nick, I've applied your suggested change in https://hg.python.org/cpython/rev/f02c61f08333

I think it makes sense: at least it gives some useful information about the section. "low-level" and "high-level" only start to mean something when you already understand asyncio pretty well.

Yury

From steve.dower at python.org  Wed Dec 16 21:35:54 2015
From: steve.dower at python.org (Steve Dower)
Date: Thu, 17 Dec 2015 13:35:54 +1100
Subject: [Python-Dev] async/await behavior on multiple calls
In-Reply-To: <94307B8A-BADA-4A27-801B-194F67F440D6@yahoo.com>
References: <56707714.2080901@gmail.com> <20151216132505.08d0bd10@x230> <94307B8A-BADA-4A27-801B-194F67F440D6@yahoo.com>
Message-ID: 

To briefly clarify/correct some of the C# statements that seem to keep being made:

* C# produces a future/promise (spelled Task) for each call to an async function
* awaiting a Task will return the result if it's available, else schedule a continuation in the current loop (spelled synchronization context) - you can have multiple such loops in a single thread, though it makes things confusing (by extension, you can also have multiple per process, which is better)
* results stick around until the Task is garbage collected, so you can await a task multiple times
* async/await in C# is a compile time transform - you could hand-code exactly equivalent behavior in terms of Task if you so desired

As someone who's dealt extensively with using, debugging and implementing C# awaiters, I find Python's approach very similar. The main difference is that async creates something similar to, but lighter weight than, a regular future.
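The asyncio parallel of the "results stick around" bullet, for anyone mapping the two models onto each other (a minimal sketch, assuming asyncio's default event loop):

    import asyncio

    async def compute():
        return 42

    async def main():
        task = asyncio.ensure_future(compute())  # starts running, like a C# Task
        task.add_done_callback(lambda t: print('done:', t.result()))
        print(await task)  # 42
        print(await task)  # 42 again: a finished Task keeps its result

    asyncio.get_event_loop().run_until_complete(main())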
Cheers,
Steve

Top-posted from my Windows Phone

-----Original Message-----
From: "Andrew Barnert via Python-Dev"
Sent: 12/17/2015 6:37
To: "Paul Sokolovsky"
Cc: "Python-Dev"
Subject: Re: [Python-Dev] async/await behavior on multiple calls

[..]
Refusing to look at the differences between Python and C# would mean not noticing that and leaving it for some future language to solve instead of letting future languages copy from Python (which is always the best way to be consistent with everyone else, of course). _______________________________________________ Python-Dev mailing list Python-Dev at python.org https://mail.python.org/mailman/listinfo/python-dev Unsubscribe: https://mail.python.org/mailman/options/python-dev/steve.dower%40python.org -------------- next part -------------- An HTML attachment was scrubbed... URL: From vadmium+py at gmail.com Wed Dec 16 22:29:16 2015 From: vadmium+py at gmail.com (Martin Panter) Date: Thu, 17 Dec 2015 03:29:16 +0000 Subject: [Python-Dev] New poll about a macro for safe reference replacing In-Reply-To: References: Message-ID: On 16/12/2015, Serhiy Storchaka wrote: > Here are names gained the largest numbers of votes plus names proposed > during polling. > > 1. Py_SETREF +0. I can live with it, but SET sounds like a complement to CLEAR, or that it ignores the old value. > 2. Py_DECREF_REPLACE +0.5 > 3. Py_REPLACE +1. Fairly obvious what it does. > 4. Py_SET_POINTER -1 > 5. Py_SET_ATTR -1 ** -1. What?s the attribute name? > 6. Py_REPLACE_REF +0.5 Ryan?s Py_RESET: -1, it sounds too much like CLEAR From ncoghlan at gmail.com Thu Dec 17 01:22:31 2015 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 17 Dec 2015 16:22:31 +1000 Subject: [Python-Dev] New poll about a macro for safe reference replacing In-Reply-To: References: Message-ID: On 17 December 2015 at 00:12, Serhiy Storchaka wrote: > The problem is only in the macro name. There are objections against any > proposed name, and no one name gained convincing majority. > > Here are names gained the largest numbers of votes plus names proposed > during polling. > > 1. Py_SETREF +1 if it always uses Py_XDECREF on the previous value (as I'd expect this to work even if the previous value was NULL) -0 for a Py_SETREF/Py_XSETREF pair (the problem I see is that it's unclear that it's the target location that's allowed to be NULL in the latter case) > 2. Py_DECREF_REPLACE -1: too long > 3. Py_REPLACE +0 if it uses Py_DECREF on the previous value as part of a Py_REPLACE/Py_SETREF pair However, I'm not sure we need the micro-optimisation offering by skipping the "Is the previous value NULL?" check, and it's always easier to add an API later than it is to remove one. > 4. Py_SET_POINTER -1: As Victor says, "pointer" tends to mean "void *" in out C code, not "PyObject *". > 5. Py_SET_ATTR -1: This operation is useful for updating any reachable reference to another object, not just attributes > 6. Py_REPLACE_REF -0: this is like 3, only with a slightly longer name I'm also in favour of Serhiy claiming the casting vote if there's no clear consensus :) Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From leewangzhong+python at gmail.com Thu Dec 17 05:54:39 2015 From: leewangzhong+python at gmail.com (Franklin? Lee) Date: Thu, 17 Dec 2015 05:54:39 -0500 Subject: [Python-Dev] Third milestone of FAT Python In-Reply-To: References: Message-ID: On Wed, Dec 16, 2015 at 2:01 AM, Victor Stinner wrote: > Le mercredi 16 d?cembre 2015, Franklin? Lee > a ?crit : >> >> I am confident that the time overhead and the savings will beat the >> versioning dict. The versioning dict method has to save a reference to >> the variable value and a reference to the name, and regularly test >> whether the dict has changed. 
> The performance of guards matters less than the performance of regular
> usage of dict. If we have to make a choice, I prefer "slow" guards but
> no impact on regular dict methods. It's very important that enabling
> FAT mode doesn't kill performance. Remember that FAT Python is a
> static optimizer and so can only optimize some patterns, not all
> Python code.
>
> In my current implementation, a lookup is only needed when a guard is
> checked, and only if the dict was modified. The dict version doesn't
> change if a mutable object was modified in place, for example. I
> didn't benchmark, but I expect that the lookup is avoided in most
> cases. You should try FAT Python and implement statistics before going
> too far with your idea.

My suggestion should improve *all* function calls which refer to
outside names. Each function keeps an indirect, automagically updated
reference to the current value of the names they use, and will never
need to look things up again.[*] There wouldn't be a need to save
global names as default arguments (`def f(x, list=list):`).

[*]: Not exactly true. Nested scopes can cause an issue. I'm not sure
what happens if you redefine the __builtin__ name after these functions
are defined. My suggestion would not know that __builtin__ was switched
out, since it saves a ref into the original __builtin__.

I'm not sure how to deal with nested scopes (for example, class
inheritance). I think the "chained RefCells" idea works.

Chained RefCell idea:

There are three cases where we can have nested scopes:
1. An object's __dict__ nests its class.
2. A class's __dict__ nests its superclasses.
3. globals() nests __builtin__.
4? Does a package nest its modules?
5? Does `from module import *` nest the module's __dict__?

(Nonlocal variables in nested functions, and probably nested classes,
are not a case of nested scope, since the scope of each name is
determined during compilation of the function/class.)

RefCells of nested scopes will have a pointer to their value (or NULL),
and an array/linked list of pointers to refcells from their parent
dicts (or to their parent dicts, if a refcell wasn't successfully
acquired yet).

When you request a RefCell from a nested scope, it will return its
value if it exists. Otherwise, it requests refcells from each parent
(parents won't create refcells unless there's a value) until it gets
one. When you ask a RefCell to resolve, it will check its own value,
then ask each parent for a value (creating intermediate refcells *if*
a value exists). It will not need to do lookups in parents if it got a
refcell before (though the refcell might be null).

Problem: If you have class A(B, C), and you resolve a refcell for a
name which exists in C but not B, you will look things up through B's
dict every single time. It will fail, every single time. We can't skip
B, since B is allowed to get such a name later, but I don't want to add
refs to names that might never exist. This can be solved either through
versioning or by checking whether a dict is read-only (such as for
built-in types).

In fact, in the code I wrote at the end of this email,
RefCell.resolve() might even look things up in a shared ancestor
multiple times. However, this would be incorrect anyway, since it
doesn't respect Python's MRO resolution. So we can just fix that.
RefCell.resolve would need a `search_parents: bool` parameter.

>> I've read it again. By subclass, I mean that it implements the same
>> interface. But at the C level, I want to have it be a fork(?) of the
>> current dict implementation.
>> As for `exec`, I think it might be okay
>> for it to be slower at the early stages of this game.
>
> Be careful, dict methods are hardcoded in the C code. If your type is
> not a subtype, there is a risk of crashes.

Not exactly, and this is important. Many functions are called via
pointer. It's like C++'s virtual methods, but more dynamic, since they
can be changed per object. See
https://github.com/python/cpython/blob/master/Objects/dict-common.h#L17.

For example, the lookup method for str-only dicts swaps itself for a
general object lookup method if it finds a non-string key. See
https://github.com/python/cpython/blob/master/Objects/dictobject.c#L549.

I'm now suggesting that there be additional space in the dict object
itself to hold more function pointers (get_ref and
get_hash_table_index), which would slightly increase memory cost and
creation time. It won't have extra cost for running normal methods.
When the dict gets a request for a reference, it will swap in methods
that know how to handle metadata, which WILL make (a few) things (a
little) slower upon resizing. You only pay for what you ask for (except
the extra dict API methods, which will slightly increase the cost and
creation time).

A few more pointers shouldn't hurt, since PyObjects are already big
(see the overhead of dicts here:
https://github.com/python/cpython/blob/master/Objects/dictobject.c#L2748).

I'm not sure that get_hash_table_index is necessary. (I misunderstood
something when rereading the lookup functions.) It should be possible
to calculate the index in the hash table by subtracting the base
address from the lookup's return value.

== Some pseudocode ==

Notes:
- A lot of repeated lookups are made in the code below. No repeated
  lookups (in a single call) are necessary in C.
- I made a few style choices which wouldn't be Pythonic (e.g.
  explicitly testing for key) to make it easier to see what the C
  would do.
- I wrote it as a subclass. It doesn't have to be.
- We can ask for getref to become standard. It could be useful for a
  few purposes. (Namely, implementing scope dicts when writing
  interpreters for other languages, and for pass-by-reference in those
  interpreters.)
- `parents` can be a special thing for nested scope dicts (such as
  those listed above).
- Like I said before, we can plug in the more expensive functions the
  first time getref is called. A dict can dynamically become a
  dict_with_refs.
- I'm being sloppy with self.refs. Sorry. Sometimes I write
  `self.refs[key] is NULL` and sometimes I write
  `key not in self.refs`. It's the same thing.
- `NULL` might be `dummy` (which is used in the dict implementation).
- `delete` means `Py_XDECREF` or `Py_DECREF`. It is only used when I
  feel like emphasizing the memory management.
- I remembered that class dicts already use shared keys. I should look
  into that to see if we can leverage the mechanisms there.
- We can decide instead that RefCells only own their value if they
  don't belong to a living scope. Meaning, they don't try to delete
  anything when they're deleted unless their owning scope is dead.
- NestedRefCells can be a subclass. It would save a pointer in some
  cases. (But it's a PyObject, so it wouldn't save much.)
- In C, the KeyErrors would instead return NULL.

The code follows.

class ScopeDict(dict):
    __slots__ = {
        '__inner__': Dict[Any, Nullable[Any]],   # == super().
        'refs': SharedDict[Any, Nullable[RefCells]],
            # Shares keys with __inner__.
        'parents': List[ScopeDict],
    }

class RefCell:
    __slots__ = {
        'key': Nullable[str],   # Or not nullable?
        'value_ptr': Pointer[Nullable[Any]],
            # Pointer to the pointer to the value object.
        'parents': Nullable[ScopeDict | RefCell],
        'indirect': bool,
            # True:
            #     The owning dict is alive.
            #     value_ptr is a reference to the pointer to the value.
            #     This is the normal case.
            # False:
            #     The owning dict is gone.
            #     value_ptr is a counted reference to the value.
            #     The cell owns the reference.
            # This bit can be packed into the value_ptr.
            # In fact, this isn't necessary:
            # the .resolve function can be dynamic.
    }

def ScopeDict.getref(self, key, create_if_none=True):
    """
    Get a ref to a key.

    Raise KeyError if it doesn't exist and not create_if_none.
    """
    if self.refs[key] is not NULL:
        # refcell exists
        return self.refs[key]

    if key in self:
        # value exists, refcell doesn't
        # Create a refcell to the value pointer.
        return self.create_ref(key)

    # Value does not exist. Search direct parents.
    # Stop at the first parent cell, even if it doesn't
    # have a value. One parent cell is enough to justify
    # the creation of a cell.
    for i, parent in enumerate(self.parents if self.parents else ()):
        try:
            ref = parent.getref(key, create_if_none=False)
            index = i
            break
        except KeyError:
            pass
    else:
        # No parent has the key.
        if create_if_none:
            # Create a ref.
            return self.create_ref(key)
        else:
            # Give up.
            raise KeyError(key)

    # Found a parent with a refcell.
    # Save a reference to it.
    cell = self.create_ref(key)
    cell.parents[index] = ref
    return cell

def ScopeDict.create_ref(self, key):
    """
    Create a refcell.
    """
    # Add key to the inner dict if it doesn't exist.
    if key not in self.__inner__:
        self.__inner__[key] = NULL

    # Wrap the address of the value pointer in a refcell.
    cell = RefCell(&(self.__inner__.value_pointer(key)))
    self.refs[key] = cell

    if self.parents:
        # Save the parents.
        # This is in case value == NULL
        # and it needs to look it up in the parents.
        cell.parents = self.parents.copy()
    else:
        cell.parents = NULL
        # Not necessary if no parents.
        # (Except for tracebacks?)

    cell.key = key
    return cell

def RefCell.resolve(self):
    """
    Resolve the cell to a value. Will not return NULL.

    Will raise KeyError on failure.
    (In C, it would return NULL instead.)
    """
    if not self.indirect:
        if self.value_ptr is not NULL:
            return self.value_ptr
    elif self.value_ptr.deref() is not NULL:
        return self.value_ptr.deref()

    # No parents to search.
    if self.parents is NULL:
        raise KeyError

    # Search parents for the value.
    for i, parent in enumerate(self.parents):
        # We want the parent CELL.
        if not isinstance(parent, RefCell):
            # Try to ask for a ref from the parent.
            assert isinstance(parent, ScopeDict)
            try:
                parent = parent.getref(self.key, create_if_none=False)
            except KeyError:
                continue   # Give up on this parent.
            self.parents[i] = parent
        # Try to get the parent cell to resolve.
        try:
            return parent.resolve()
        except KeyError:
            continue
    raise KeyError

Here are some of the wrapper algorithms for the dict methods.

# No change.
ScopeDict.__setitem__ = dict.__setitem__

def ScopeDict.keys(self):
    # Example of iteration.
    # This is already the algorithm, probably,
    # so there's no extra cost.
    for key, value in self.__inner__.items():
        if value is not NULL:
            yield key

def ScopeDict.__getitem__(self, key):
    result = self.__inner__.get(key, NULL)
    # This is an extra check.
    if result is not NULL:
        return result

    if key in self.refs:
        # This is an extra check.
        # Only necessary for nested scopes,
        # so a regular dict doesn't even need this.
        # In fact, for class dicts, you don't want it,
        # since it skips the MRO.
        try:
            return self.refs[key].resolve()
        except KeyError:
            pass
    raise KeyError(key)

def ScopeDict.__delitem__(self, key):
    if self.__inner__.get(key, NULL) is NULL:   # extra check?
        raise KeyError(key)
    delete self.__inner__[key]
    self.__inner__[key] = NULL

def ScopeDict.__del__(self):
    """
    Delete this dict.
    """
    for key in self.__inner__:
        ref = self.refs[key]
        if ref is NULL:
            # No ref (standard dict case).
            delete self.__inner__[key]   # DecRef the value.
        else:
            if ref.__refcount > 1:
                # The ref is exposed.
                # Make it point directly to the value.
                ref.value_ptr = self.__inner__[key]
                ref.indirect = False
            self.refs[key] = NULL
            delete ref   # DecRef, not dict removal.

def ScopeDict.compact(self):
    """
    Compact the dictionary.
    (Similarly for expanding.)
    """
    new_table = {}
    new_refs = {}

    # Remove unnecessary entries.
    # Let dict.__table be the internal entry table.
    for key, value in self.__inner__.items():
        ref = self.refs[key]
        if ref is not NULL and ref.__refcount == 1:
            # A ref exists but is not exposed.
            # Delete the unused reference.
            ref.value_ptr = NULL
            delete ref
            ref = NULL
        if value is not NULL or ref is not NULL:
            # Add it to the new table using the normal dict
            # compacting algorithm. (I don't know it.)
            new_table[key] = value
            new_refs[key] = ref

    # Can add a check here: if there are no live refs,
    # convert to a regular dict.
    self.__inner__ = new_table
    self.refs = new_refs

def RefCell.__del__(self):
    if self.indirect:
        # Pointing at the pointer.
        delete self.value_ptr.deref()
    else:
        # Pointing at the value.
        delete self.value_ptr
    delete self.key
    delete self.parents

From victor.stinner at gmail.com  Thu Dec 17 06:53:13 2015
From: victor.stinner at gmail.com (Victor Stinner)
Date: Thu, 17 Dec 2015 12:53:13 +0100
Subject: [Python-Dev] Idea: Dictionary references
Message-ID:

2015-12-17 11:54 GMT+01:00 Franklin? Lee :
> My suggestion should improve *all* function calls which refer to
> outside names.

Ok, I now think that you should stop hijacking the FAT Python thread,
so I'm starting a new thread. IMHO your dictionary reference idea is
completely unrelated to FAT Python.

FAT Python is about guards and specialized bytecode.

> Each function keeps an indirect, automagically updated
> reference to the current value of the names they use, and will never
> need to look things up again.[*]

Indirections, nested dictionaries, creation of new "reference"
objects... IMHO you are going to have major implementation issues :-/
The design looks *very* complex. I'm quite sure that you are going to
make namespace lookups *slower*.

It reminds me of Python before the with statement, and of the PyPy
garbage collector. Many applications relied on the exact behaviour of
the CPython garbage collector. For example, they expected that a file
is written to disk when the last reference to the file object is
destroyed. In PyPy that wasn't (and isn't) true: the write can be
delayed.

I guess that with all your complex machinery for dict lookups, you
will have similar issues of object lifetime. It's unclear to me when
and how "reference" objects are destroyed, nor when dict values are
destroyed.

What happens if a dict key is removed and a reference object is still
alive? Is the dict value immediately destroyed? Does the reference
object contain a strong or a weak reference to the value?
Victor

From steve at pearwood.info  Thu Dec 17 08:48:25 2015
From: steve at pearwood.info (Steven D'Aprano)
Date: Fri, 18 Dec 2015 00:48:25 +1100
Subject: [Python-Dev] Idea: Dictionary references
In-Reply-To:
References:
Message-ID: <20151217134825.GF1609@ando.pearwood.info>

On Thu, Dec 17, 2015 at 12:53:13PM +0100, Victor Stinner quoted:
> 2015-12-17 11:54 GMT+01:00 Franklin? Lee :
> > Each function keeps an indirect, automagically updated
> > reference to the current value of the names they use,

Isn't that a description of globals()? If you want to look up a name
"spam", you grab an indirect reference to it:

globals()["spam"]

which returns the current value of the name "spam".

> > and will never need to look things up again.[*]

How will this work?

Naively, it sounds to me like Franklin is suggesting that on every
global assignment, the interpreter will have to touch every single
function in the module to update that name. Something like this:

# on a global assignment
spam = 23

# the interpreter must do something like this:
for function in module.list_of_functions:
    if "spam" in function.__code__.__global_names__:
        function.__code__.__global_names__["spam"] = spam

As I said, that's a very naive way to implement this. Unless you have
something much cleverer, I think this will be horribly slow.

And besides, you *still* need to deal with the case that the name isn't
a global at all, but in the built-ins namespace.

--
Steve

From fijall at gmail.com  Thu Dec 17 09:42:41 2015
From: fijall at gmail.com (Maciej Fijalkowski)
Date: Thu, 17 Dec 2015 16:42:41 +0200
Subject: [Python-Dev] Idea: Dictionary references
In-Reply-To: <20151217134825.GF1609@ando.pearwood.info>
References: <20151217134825.GF1609@ando.pearwood.info>
Message-ID:

You can very easily implement this with version tags on the globals
dictionaries - meaning that the dictionaries have versions, and the
guard that checks whether everything is OK just checks the version tag
on globals. Generally speaking, such optimizations have been done in
the past (even in places like PyPy, but also in the literature), and
as soon as we have dynamic compilation (and FAT is a form of it), you
can do such tricks.

On Thu, Dec 17, 2015 at 3:48 PM, Steven D'Aprano wrote:
> On Thu, Dec 17, 2015 at 12:53:13PM +0100, Victor Stinner quoted:
>> 2015-12-17 11:54 GMT+01:00 Franklin? Lee :
>
>> > Each function keeps an indirect, automagically updated
>> > reference to the current value of the names they use,
>
> Isn't that a description of globals()? If you want to look up a name
> "spam", you grab an indirect reference to it:
>
> globals()["spam"]
>
> which returns the current value of the name "spam".
>
>
>> > and will never need to look things up again.[*]
>
> How will this work?
>
> Naively, it sounds to me like Franklin is suggesting that on every
> global assignment, the interpreter will have to touch every single
> function in the module to update that name. Something like this:
>
> # on a global assignment
> spam = 23
>
> # the interpreter must do something like this:
> for function in module.list_of_functions:
>     if "spam" in function.__code__.__global_names__:
>         function.__code__.__global_names__["spam"] = spam
>
> As I said, that's a very naive way to implement this. Unless you have
> something much cleverer, I think this will be horribly slow.
>
> And besides, you *still* need to deal with the case that the name isn't
> a global at all, but in the built-ins namespace.
> > > -- > Steve > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/fijall%40gmail.com From leewangzhong+python at gmail.com Thu Dec 17 10:38:50 2015 From: leewangzhong+python at gmail.com (Franklin? Lee) Date: Thu, 17 Dec 2015 10:38:50 -0500 Subject: [Python-Dev] Idea: Dictionary references In-Reply-To: References: Message-ID: On Thu, Dec 17, 2015 at 6:53 AM, Victor Stinner wrote: > 2015-12-17 11:54 GMT+01:00 Franklin? Lee : >> My suggestion should improve *all* function calls which refer to >> outside names. > > Ok, I now think that you should stop hijacking the FAT Python thread. > I start a new thread. IMHO your dictionary reference idea is completly > unrelated to FAT Python. > > FAT Python is about guards and specialized bytecode. Yeah, maybe it's out of the scope of bytecode optimization. But I think this would make guards unnecessary, since once a name is found, there's a quick way to refer to it. >> Each function keeps an indirect, automagically updated >> reference to the current value of the names they use, and will never >> need to look things up again.[*] > > Indirections, nested dictionaries, creation of new "reference" > objects... IMHO you are going to have major implementation issues :-/ > The design looks *very* complex. I'm quite sure that you are going to > make namespace lookups *slower*. The nested dictionaries are only for nested scopes (and inner functions don't create nested scopes). Nested scopes will already require multiple lookups in parents. I think this is strictly an improvement, except perhaps in memory. Guards would also have an issue with nested scopes. You have a note on your website about it: (https://faster-cpython.readthedocs.org/fat_python.html#call-pure-builtins) "The optimization is disabled when the builtin function is modified or if a variable with the same name is added to the global namespace of the function." With a NestedRefCell, it would check globals() (a simple dereference and `pointer != NULL`) and then check __builtin__. If it finds it in __builtin__, it will save a reference to that. It will only do repeated lookups in __builtin__ if each of the previous lookups fail. As far as I know, these repeated lookups are already necessary, and anything that can be used to avoid them (e.g. guards) can be used for repeated failed lookups, too. For non-nested scopes, it will look things up once, costing an extra RefCell creation if necessary, and the only other costs are on resizing, deletion of the dict, and working with a larger dict in general. The important parts of the design is pretty much in the code that I posted. We keep an extra hash table for refs, and keep it the same size as the original hash table, so that we pay a single lookup cost to get the index in both. > It reminds me Python before the with statement and PyPy garbage > collector. Many applications relied on the exact behaviour of CPython > garbage collector. For example, they expected that a file is written > on disk when the last reference to the file object is destroyed. In > PyPy, it wasn't (it isn't) true, the write can be delayed. It should not affect the semantic. Things should still happen as they used to, as far as I can figure. Or at least as far as the rules of the interpreter are concerned. 
(That is, values might live a little longer in PyPy, but can't be forced to live longer than they were formerly allowed to.) > I guess that with all your complex machinery for dict lookups, The only cost to a normal getitem (again, other than from looking it up in a bigger dict) is to make sure the return value isn't NULL. The machinery is involved in function creation and resolution of names: On function creation, get refs to each name used. When the name is used, try to resolve the refs. > you > will have similar issues of object lifetime. It's unclear to me when > and how "reference" objects are destroyed, nor when dict values are > destroyed. RefCells are ref-counted PyObjects. That is not an issue. A RefCell will live while it is useful (= it has an outside reference) or while it's not useful but its owner dict hasn't been resized/deleted yet (at which time RefCells without outside references will be deleted). RefCells "know" whether they're part of a living dict. (The dict marks them as such upon its death.) If they are not, they will decref their value upon their death. They do not hold a reference to their owner dict. If it's part of a living dict, we have a choice: the dict can be responsible for deletion, or the RefCell can be responsible for deletion. It doesn't really matter which design we go with. > What happens if a dict key is removed and a reference object is still > alive? Is the dict value immediatly destroyed? Does the reference > object contain a strong or a weak reference to the value? If a dict key is removed, the inner dict will still have the key (which is true in the current implementation), but the value will be decref'd and the value pointer will be NULL'd. The RefCell will not need to be updated, since (as part of a living dict) it's pointing at the pointer to the value, not the object itself. If it is detached from its owner dict (due to owner death), it will own a strong reference to the value. This is necessarily the case, since things that have a (strong) reference to the RefCell expect to find the value (or lack of a value) there. From leewangzhong+python at gmail.com Thu Dec 17 10:56:29 2015 From: leewangzhong+python at gmail.com (Franklin? Lee) Date: Thu, 17 Dec 2015 10:56:29 -0500 Subject: [Python-Dev] Idea: Dictionary references In-Reply-To: <20151217134825.GF1609@ando.pearwood.info> References: <20151217134825.GF1609@ando.pearwood.info> Message-ID: (Previous thread was here, by the way: https://mail.python.org/pipermail/python-dev/2015-December/142437.html) On Thu, Dec 17, 2015 at 8:48 AM, Steven D'Aprano wrote: > On Thu, Dec 17, 2015 at 12:53:13PM +0100, Victor Stinner quoted: >> 2015-12-17 11:54 GMT+01:00 Franklin? Lee : > >> > Each function keeps an indirect, automagically updated >> > reference to the current value of the names they use, > > Isn't that a description of globals()? If you want to look up a name > "spam", you grab an indirect reference to it: > > globals()["spam"] > > which returns the current value of the name "spam". The *current value*. I'm proposing that we instead do this: spamref = globals().getref('spam') Every time we want to find the current, updated value of 'spam', we just do spam = spamref.resolve() which will skip the hash lookup and go directly to the value. >> > and will never need to look things up again.[*] > > How will this work? > > Naively, it sounds to me like Franklin is suggesting that on every > global assignment, the interpreter will have to touch every single > function in the module to update that name. 
A refcell holds a pointer to the location in the dict itself where the
value pointer is. When the value is updated in the dict, the refcell
does not need to be updated.

My original proposal wanted to keep cells in the "real" dict, and
update them. Like so:

class RefCell:
    __slots__ = ['value']

class ScopeDict(dict):
    def __getitem__(self, key):
        value = super()[key].value   # may raise
        if value is NULL:
            raise KeyError(key)
        return value

    def __setitem__(self, key, value):
        if key in super():
            super()[key].value = value
        else:
            cell = super()[key] = RefCell()
            cell.value = value

    def __delitem__(self, key):
        cell = super()[key]   # may raise
        if cell.value is NULL:
            raise KeyError(key)
        cell.value = NULL

I realized later that this isn't necessary. Most dict operations don't
need to know about the indirection, so I make the inner dict a normal
dict (with a few more holes than normal). But this shows how you can
avoid manual updates for references.

> And besides, you *still* need to deal with the case that the name isn't
> a global at all, but in the built-ins namespace.

globals() to __builtin__ is a nesting relationship. At the bottom of
the following email, I have a pseudocode implementation which knows how
to deal with nested scopes.
https://mail.python.org/pipermail/python-dev/2015-December/142489.html

From carlos.barera at gmail.com  Thu Dec 17 11:18:27 2015
From: carlos.barera at gmail.com (Carlos Barera)
Date: Thu, 17 Dec 2015 18:18:27 +0200
Subject: [Python-Dev] pypi simple index
Message-ID:

Hi,

I'm using install_requires in setup.py to specify a specific package my
project is dependent on. When running python setup.py install,
apparently the simple index is used, as an older package is taken from
pypi. Meanwhile, in https://pypi.python.org/pypi, there's a newer
package. When installing directly using pip, the latest package is
installed successfully.

Several questions:
1. What's the difference between the pypi simple index and the general
   pypi index?
2. Why is setup.py defaulting to the simple index?
3. How can I make the setup.py triggered install use the main pypi
   index instead of simple?

Thanks!
Carlos

From victor.stinner at gmail.com  Thu Dec 17 11:50:35 2015
From: victor.stinner at gmail.com (Victor Stinner)
Date: Thu, 17 Dec 2015 17:50:35 +0100
Subject: [Python-Dev] Idea: Dictionary references
In-Reply-To:
References:
Message-ID:

2015-12-17 16:38 GMT+01:00 Franklin? Lee :
> Yeah, maybe it's out of the scope of bytecode optimization. But I
> think this would make guards unnecessary, since once a name is found,
> there's a quick way to refer to it.

FAT Python requires and supports various kinds of guards:

* type of a function argument
* dictionary key
* function (func.__code__)

I guess that you are talking about the dictionary key guards. In fact,
there are 4 subtypes of dict guards:

* guard on builtins.__dict__['key']
* guard on globals()['key']
* guard on MyClass.__dict__['key']
* guard on dict['key']

The implementation of the guard check currently relies on a "version"
(a global dict version, incremented at each change) to avoid the
lookup if possible. The guard stores a strong reference to the key and
the value. If the value is different, the guard check returns False.

I don't understand how you plan to avoid guards. The purpose of guards
is to respect the Python semantics by falling back to the "slow"
bytecode if something changes. So I don't understand your idea of
avoiding guards completely.
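For reference, here is roughly what such a guard does, sketched in pure
Python. The dict_version() helper is hypothetical -- the version
counter only exists as a C-level field in the FAT Python patches, and
plain Python has no hook for it:

class DictKeyGuard:
    def __init__(self, ns, key):
        self.ns = ns
        self.key = key
        self.value = ns[key]             # strong ref to the expected value
        self.version = dict_version(ns)  # hypothetical version counter

    def check(self):
        """Return True if the specialized bytecode may still be used."""
        version = dict_version(self.ns)
        if version == self.version:
            return True                  # dict untouched: no lookup at all
        if self.ns.get(self.key) is self.value:
            self.version = version       # harmless change: re-arm the guard
            return True
        return False                     # value changed: take the slow path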
Again, that's why I guess that it's unrelated to FAT Python... Or it's
just that your idea is an alternative to dict versioning to get
efficient guards on dict keys, right?

> Guards would also have an issue
> with nested scopes. You have a note on your website about it:
> (https://faster-cpython.readthedocs.org/fat_python.html#call-pure-builtins)
>
> "The optimization is disabled when the builtin function is
> modified or if a variable with the same name is added to the global
> namespace of the function."

FAT Python doesn't emit a specialized version if it requires a builtin
function, but a local variable with the same name is found.

The check is done in the current function, but also in the parent
namespaces, up to the global namespace. I'm talking about the static
analysis of the code.

If the specialized version is built, a guard is created on the builtin
namespace and another on the global namespace.

Sorry, I don't understand the "problem" you are referring to. Can you
maybe show an example of code where FAT Python doesn't work, or where
your idea can help?

> RefCells "know" whether they're part of a living dict. (The dict marks
> them as such upon its death.) If they are not, they will decref their
> value upon their death. They do not hold a reference to their owner
> dict.

The dict contains a list of all of its "own" RefCell objects, right?

> If it's part of a living dict, we have a choice: the dict can be
> responsible for deletion, or the RefCell can be responsible for
> deletion. It doesn't really matter which design we go with.

I see your RefCell idea as being like dictionary views. Views are
linked to the dict. If the dict is modified, views are "updated" too.
It would be confusing to have a view disconnected from its container.

In short, a RefCell is a view on a single dict entry, to be able to
"watch" a dict entry without the cost of a lookup? And a RefCell can
be created even if the dict entry doesn't exist, right?

Hum, creating many RefCells introduces an overhead somewhere. For
example, deleting a key has to update N RefCell objects linked to this
key, right? So del dict[x] takes O(number of RefCells), right?

> If a dict key is removed, the inner dict will still have the key
> (which is true in the current implementation), but the value will be
> decref'd and the value pointer will be NULL'd. The RefCell will not
> need to be updated, since (as part of a living dict) it's pointing at
> the pointer to the value, not the object itself.

Ok, so deleting a dict key always destroys the value, but the key may
stay alive longer than expected (until all RefCell objects are
destroyed). Usually, dict keys are constants, so keeping them alive
doesn't matter so much. It's rare to have large keys.

> If it is detached from its owner dict (due to owner death), it will
> own a strong reference to the value. This is necessarily the case,
> since things that have a (strong) reference to the RefCell expect to
> find the value (or lack of a value) there.

I don't understand this part.

You said that deleting a key destroys the value. Destroying a dict
means clearing all keys, so destroying all values. No?

What is the use case of having a RefCell no longer connected to the
dict?

Victor

From carlos.barera at gmail.com  Thu Dec 17 12:13:28 2015
From: carlos.barera at gmail.com (Carlos Barera)
Date: Thu, 17 Dec 2015 19:13:28 +0200
Subject: [Python-Dev] pypi simple index
Message-ID:

Hi,

I'm using install_requires in setup.py to specify a specific package my
project is dependent on.
When running python setup.py install, apparently the simple index is
used, as an older package is taken from pypi. Meanwhile, in
https://pypi.python.org/pypi, there's a newer package. When installing
directly using pip, the latest package is installed successfully.

I noticed that the new package is only available as a wheel, and older
versions of setuptools won't install wheels for install_requires.
However, upgrading setuptools didn't help.

Several questions:
1. What's the difference between the pypi simple index and the general
   pypi index?
2. Why is setup.py defaulting to the simple index?
3. How can I make the setup.py triggered install use the main pypi
   index instead of simple?

Thanks!

From abarnert at yahoo.com  Thu Dec 17 12:30:24 2015
From: abarnert at yahoo.com (Andrew Barnert)
Date: Thu, 17 Dec 2015 09:30:24 -0800
Subject: [Python-Dev] Idea: Dictionary references
In-Reply-To:
References:
Message-ID:

On Dec 17, 2015, at 07:38, Franklin? Lee wrote:
>
> The nested dictionaries are only for nested scopes (and inner
> functions don't create nested scopes). Nested scopes will already
> require multiple lookups in parents.

I think I understand what you're getting at here, but it's a really
confusing use of terminology. In Python, and in programming in general,
nested scopes refer to exactly inner functions (and classes) being
lexically nested and doing lookup through outer scopes. The fact that
this is optimized at compile time to FAST vs. CELL vs. GLOBAL/NAME,
that cells are optimized at function-creation time, and that only
global and name have to be resolved at the last second doesn't mean
that there's no scoping, or some other form of scoping besides lexical.
The actual semantics are LEGB, even if L vs. E vs. GB and E vs.
further-out E can be optimized.

What you're talking about here is global lookups falling back to
builtin lookups. There's no more general notion of nesting or scoping
involved, so why use those words?

Also, reading your earlier post, it sounds like you're trying to treat
attribute lookup as a special case of global lookup, only with a chain
of superclasses beyond the class instead of just a single builtins. But
they're totally different. Class lookup doesn't just look in a series
of dicts: it calls __getattribute__ (which may fall back to
__getattr__), which may or may not look in the __dict__s (which may not
even exist) to find a descriptor, and then calls its __get__ method to
get the value. You'd have to somehow handle the case where the search
only went through object.__getattribute__ and __getattr__ and found a
result by looking in a dict, to make a RefCell to that dict which is
marked in some way that says "I'm not a value, I'm a descriptor you
have to call each time", and then apply some guards that will detect
whether that class or any intervening class dict touched that key,
whether the MRO changed, whether that class or any intervening class
added or changed implementations for __getattribute__ or __getattr__,
and probably more things I haven't thought of. What do those guards
look like? (Also, you need a different set of rules to cache, and guard
for, special method lookup--you could just ignore that, but I think
those are the lookups that would benefit most from optimization.)

So, trying to generalize global vs. builtin to a general notion of
"nested scope" that isn't necessary for builtins and doesn't work for
anything else seems like overcomplicating things for no benefit.
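Here's a tiny illustration of why caching what an attribute lookup
*returned* goes wrong as soon as descriptors or __getattr__ are
involved (nothing exotic here, just a property and a proxy):

import random

class Sensor:
    @property
    def reading(self):
        # The descriptor's __get__ runs on *every* access; caching
        # its result would silently freeze the value.
        return random.random()

class Proxy:
    def __getattr__(self, name):
        # A dynamic attribute: there is no dict entry anywhere to
        # take a reference to.
        return name.upper()

s = Sensor()
print(s.reading == s.reading)   # almost certainly False
print(Proxy().spam)             # 'SPAM', never stored in any __dict__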
> I think this is strictly an
> improvement, except perhaps in memory. Guards would also have an issue
> with nested scopes. You have a note on your website about it:
> (https://faster-cpython.readthedocs.org/fat_python.html#call-pure-builtins)

From robertc at robertcollins.net  Thu Dec 17 12:36:31 2015
From: robertc at robertcollins.net (Robert Collins)
Date: Fri, 18 Dec 2015 06:36:31 +1300
Subject: [Python-Dev] pypi simple index
In-Reply-To:
References:
Message-ID:

On 18 December 2015 at 06:13, Carlos Barera wrote:
> Hi,
>
> I'm using install_requires in setup.py to specify a specific package
> my project is dependent on.
> When running python setup.py install, apparently the simple index is
> used, as an older package is taken from pypi.

What's happening here is that easy_install is being triggered - and it
does not support wheels. Use 'pip install .' instead.

> in https://pypi.python.org/pypi, there's a newer package.
> When installing directly using pip, the latest package is installed
> successfully.
> I noticed that the new package is only available as a wheel and older
> versions of setuptools won't install wheels for install_requires.
> However, upgrading setuptools didn't help.
>
> Several questions:
> 1. What's the difference between the pypi simple index and the general
> pypi index?

The '/simple' API is for machine consumption, /pypi is for humans;
other than that, there should not be any difference.

> 2. Why is setup.py defaulting to the simple index?

Because it is the only index :).

> 3. How can I make the setup.py triggered install use the main pypi
> index instead of simple

You can't - the issue is not the index being consulted, but your use
of 'python setup.py install', which does not support wheels.

Cheers,
Rob

From brett at python.org  Thu Dec 17 12:22:08 2015
From: brett at python.org (Brett Cannon)
Date: Thu, 17 Dec 2015 17:22:08 +0000
Subject: [Python-Dev] pypi simple index
In-Reply-To:
References:
Message-ID:

PyPI questions are best directed towards the distutils-sig, as they
manage PyPI, not python-dev.

On Thu, 17 Dec 2015 at 08:20 Carlos Barera wrote:
> Hi,
>
> I'm using install_requires in setup.py to specify a specific package
> my project is dependent on.
> When running python setup.py install, apparently the simple index is
> used, as an older package is taken from pypi. Meanwhile, in
> https://pypi.python.org/pypi, there's a newer package.
> When installing directly using pip, the latest package is installed
> successfully.
>
> Several questions:
> 1. What's the difference between the pypi simple index and the general
> pypi index?
> 2. Why is setup.py defaulting to the simple index?
> 3. How can I make the setup.py triggered install use the main pypi
> index instead of simple
>
> Thanks!
> Carlos
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe:
> https://mail.python.org/mailman/options/python-dev/brett%40python.org

From leewangzhong+python at gmail.com  Thu Dec 17 13:59:32 2015
From: leewangzhong+python at gmail.com (Franklin? Lee)
Date: Thu, 17 Dec 2015 13:59:32 -0500
Subject: [Python-Dev] Idea: Dictionary references
In-Reply-To:
References:
Message-ID:

On Thu, Dec 17, 2015 at 11:50 AM, Victor Stinner wrote:
> I don't understand how you plan to avoid guards.
The purpose of guards > is to respect the Python semantic by falling back to the "slow" > bytecode if something changes. So I don't understand your idea of > avoiding completly guards. Again, that's why I guess that it's > unrelated to FAT Python... Yeah, I guess it is. Maybe we could've moved to python-ideas. As far as I understand, this concept can be put into CPython. (Note to self: Look up how PyPy handles repeated lookups.) >> Guards would also have an issue >> with nested scopes. You have a note on your website about it: >> (https://faster-cpython.readthedocs.org/fat_python.html#call-pure-builtins) >> >> "The optimization is disabled when the builtin function is >> modified or if a variable with the same name is added to the global >> namespace of the function." > > FAT Python doesn't emit a specialized version if it requires a builtin > function, but a local variable with the same name is found. > > The check is done in the current function but also in the parent > namespaces, up to the global namespace. I'm talking about the static > analysis of the code. > > If the specialized version is built, a guard is created on the builtin > namespace and another on the global namespace. > > Sorry, I don't understand the "problem" you are referring to. Can you > maybe show an example of code where FAT Python doesn't work or where > you idea can help? If we have to look in scopes A, B, C in order, where the name is in C but never in B, and there's no nesting relationship between B and C. In that case, I do not create a refcell in B chained to C (because there's no relationship), so I keep doing lookups in B. That's the problem. For that, guards and versioning can prevent unnecessary lookups in B. Though I think a better solution might be: If a name is found in C, create empty refcells in B and A (i.e. in all previous dicts). (There's a nesting relationship between globals() and __builtin__, so that's fine.) >> RefCells "know" whether they're part of a living dict. (The dict marks >> them as such upon its death.) If they are not, they will decref their >> value upon their death. They do not hold a reference to their owner >> dict. > > The dict contains the list all of its "own" RefCell objects, right? It contains a table of pointers. The pointers are to RefCell objects or NULL. The refs table is exactly the same size as the internal hash table. This makes indexing it efficient: to find the pointer to a refcell, find the index of the key in the hash table, then use that SAME index on the refs table. You never need to find a refcell without also finding its hash index, so this is cheap. > In short, RefCell is a view on a single dict entry, to be able to > "watch" a dict entry without the cost of a lookup? And a RefCell can > be created, even if the dict entry doesn't exist right? My "implementation", which had nesting and recursion in mind, had a "create_if_none" parameter, which meant that the requester can ask for it to be created even if the key didn't exist in the table. Pre-creation is useful for functions which refer to globals() names before they're defined. No-creation is useful in... I can only think of nesting as a use (globals() -> __builtin__ shouldn't create empty cells in __builtin__). See `getref` in here: https://mail.python.org/pipermail/python-dev/2015-December/142489.html > Hum, creating many RefCell introduces an overhead somewhere. For > example, deleting a key has to update N RefCell objects linked to this > key, right? So del dict[x] takes O(number of RefCell), right? 
There are no "outside" updates, except when a dict moves to a different
internal table or deletes its internal table. In that case, the dict
has to move and update each exposed RefCell.

For each dict, for each key, there is at most one RefCell. As long as
the dict is alive, that RefCell will hold a pointer to the pointer to
the value (enforced by the dict). When the dict dies, it makes the
RefCell point directly to the object, and tells the RefCell it's free
(so it's in charge of cleaning up its value).

Dict entries look like this:

typedef struct {
    /* Cached hash code of me_key. */
    Py_hash_t me_hash;
    PyObject *me_key;
    PyObject *me_value; /* This field is only meaningful for combined tables */
} PyDictKeyEntry;

The internal table (which the ref table will sync with) is an array of
PyDictKeyEntrys.

(Raymond Hettinger made a design with a smaller table, where the hash
lookup would be into an array of indices. This would make synced
metadata tables both easier and smaller. See
https://mail.python.org/pipermail/python-dev/2012-December/123028.html
and the latest relevant discussion at
https://mail.python.org/pipermail/python-ideas/2015-December/037468.html )

The refcell will hold this:

RefCell(&PyDictKeyEntry.me_value)

That is a pointer to the field, not to the value itself. This means NO
extra updates are necessary, and NO O(n) anything anywhere (except on
resizing and destruction, and nesting can be O(n) in the number of
parents).

>> If a dict key is removed, the inner dict will still have the key
>> (which is true in the current implementation), but the value will be
>> decref'd and the value pointer will be NULL'd. The RefCell will not
>> need to be updated, since (as part of a living dict) it's pointing at
>> the pointer to the value, not the object itself.
>
> Ok, so deleting a dict key always destroys the value, but the key may
> stay alive longer than expected (until all RefCell objects are
> destroyed). Usually, dict keys are constants, so keeping them alive
> doesn't matter so much. It's rare to have large keys.

It MIGHT stay alive longer than in the current implementation, yes. But
that's not necessarily a bad thing.

If there's no RefCell: the current dict implementation removes the key
on deletion. In my "implementation", it doesn't remove the key on
deletion. This isn't necessary: it can safely remove the key if there's
no ref. (This adds an extra pointer check to delitem.)

If there's a RefCell: the current dict removes the key. A refdict
should not remove a key with an exposed RefCell, because that key marks
the spot as "used" (so that no other key can be put inside). This is
okay, because the refs are made to avoid lookups, so while we're
keeping an extra reference to the key, the owner of the RefCell does
NOT need to keep a reference to the key. There can be fewer total
references to the key than with the current implementation, even with
the extra reference. Especially in function objects, which is what I'm
trying to solve.

> ...
> I don't understand this part.
>
> You said that deleting a key destroys the value. Destroying a dict
> means clearing all keys, so destroying all values. No?
>
> What is the use case of having a RefCell no longer connected to the
> dict?

Honestly, I can only think of closures. But it's the right thing to do.
(RefCells with refcount == 1 will be deleted upon dict destruction.)

Consider: if thing X holds a RefCell into a dict, and the dict is
destroyed, what should happen?

Without RefCells: thing X would be looking things up in the dict. That
means thing X would have had a strong reference to the dict.

With RefCells: the dict might be deleted where it wasn't deleted
before. So thing X should have a disconnected RefCell to the value.

(If it looked things up via a weakref to the dict, it should create a
weakref to the RefCell. But if I understood an earlier message
correctly, you can't weakref a dict.)

From leewangzhong+python at gmail.com  Thu Dec 17 14:19:43 2015
From: leewangzhong+python at gmail.com (Franklin? Lee)
Date: Thu, 17 Dec 2015 14:19:43 -0500
Subject: [Python-Dev] Idea: Dictionary references
In-Reply-To:
References:
Message-ID:

On Thu, Dec 17, 2015 at 12:30 PM, Andrew Barnert wrote:
> On Dec 17, 2015, at 07:38, Franklin? Lee wrote:
>>
>> The nested dictionaries are only for nested scopes (and inner
>> functions don't create nested scopes). Nested scopes will already
>> require multiple lookups in parents.
>
> I think I understand what you're getting at here, but it's a really
> confusing use of terminology. In Python, and in programming in
> general, nested scopes refer to exactly inner functions (and classes)
> being lexically nested and doing lookup through outer scopes. The
> fact that this is optimized at compile time to FAST vs. CELL vs.
> GLOBAL/NAME, that cells are optimized at function-creation time, and
> that only global and name have to be resolved at the last second
> doesn't mean that there's no scoping, or some other form of scoping
> besides lexical. The actual semantics are LEGB, even if L vs. E vs.
> GB and E vs. further-out E can be optimized.

Oh, I've never actually read the Python scoping rules spelled out. I
wasn't sure if there were other cases. The other two cases I thought of
as "nesting" were: object to its class, and class to its superclasses.

> Also, reading your earlier post, it sounds like you're trying to
> treat attribute lookup as a special case of global lookup, only with
> a chain of superclasses beyond the class instead of just a single
> builtins. But they're totally different. Class lookup doesn't just
> look in a series of dicts: it calls __getattribute__ (which may fall
> back to __getattr__), which may or may not look in the __dict__s
> (which may not even exist) to find a descriptor, and then calls its
> __get__ method to get the value. You'd have to somehow handle the
> case where the search only went through object.__getattribute__ and
> __getattr__ and found a result by looking in a dict, to make a
> RefCell to that dict which is marked in some way that says "I'm not a
> value, I'm a descriptor you have to call each time", and then apply
> some guards that will detect whether that class or any intervening
> class dict touched that key, whether the MRO changed, whether that
> class or any intervening class added or changed implementations for
> __getattribute__ or __getattr__, and probably more things I haven't
> thought of. What do those guards look like? (Also, you need a
> different set of rules to cache, and guard for, special method
> lookup--you could just ignore that, but I think those are the lookups
> that would benefit most from optimization.)

Doesn't __getattr__ only get called if all the MRO __dict__ lookups
failed? I forgot about __getattribute__. That might be the point at
which refs are optimized.
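(A sketch of what I think the descriptor machinery does, to check my
understanding: a descriptor seems to just be an object whose *type*
defines __get__, and the protocol only fires when the lookup goes
through a class, not through an instance's own __dict__.)

class Ten:
    def __get__(self, obj, objtype=None):
        return 10

class A:
    x = Ten()                  # lives in A.__dict__

a = A()
print(a.x)                     # 10: A.__dict__['x'].__get__(a, A) was called
a.__dict__['y'] = Ten()
print(a.y)                     # a Ten instance, not 10: no __get__ call
                               # for things found in the instance dict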
As for descriptors versus RefCells, I'm guessing that can be resolved, as soon as I figure out how descriptors actually work... If descriptors don't modify the __dict__, then RefCells shouldn't get involved. If they do, then there's some unwrapping going on there, and RefCells should fit right in (though whether they'll improve anything is a different question). RefCells are just a shortcut for dict lookups. For guards, I think Victor Stinner's idea could supplement this. Alternatively, in my other email, I said there could be a rule of, "Create intermediate RefCells for anything BEFORE a successful lookup." So if we look in A, B, C, D, and find it in C, then we create and save RefCells in A, B, C, but not D (where D = object). This MIGHT result in a lot of intermediate RefCells, but I'd guess most things aren't looked up just once, and it's saying, "It's possible for B to gain member B.x and catch me on my way to C.x." > So, trying to generalize global vs. builtin to a general notion of "nested scope" that isn't necessary for builtins and doesn't work for anything else seems like overcomplicating things for no benefit. Probably. The globals() and __builtin__ case is simpler than the class case. From abarnert at yahoo.com Thu Dec 17 16:37:54 2015 From: abarnert at yahoo.com (Andrew Barnert) Date: Thu, 17 Dec 2015 21:37:54 +0000 (UTC) Subject: [Python-Dev] Idea: Dictionary references In-Reply-To: References: Message-ID: <986887788.314598.1450388274907.JavaMail.yahoo@mail.yahoo.com> On Thursday, December 17, 2015 11:19 AM, Franklin? Lee wrote: > ... > as soon as I figure out how descriptors actually work... I think you need to learn what LOAD_ATTR and the machinery around it actually does before I can explain why trying to optimize it like globals-vs.-builtins doesn't make sense. Maybe someone who's better at explaining than me can come up with something clearer than the existing documentation, but I can't. From abarnert at yahoo.com Thu Dec 17 17:17:39 2015 From: abarnert at yahoo.com (Andrew Barnert) Date: Thu, 17 Dec 2015 14:17:39 -0800 Subject: [Python-Dev] Idea: Dictionary references In-Reply-To: <986887788.314598.1450388274907.JavaMail.yahoo@mail.yahoo.com> References: <986887788.314598.1450388274907.JavaMail.yahoo@mail.yahoo.com> Message-ID: On Dec 17, 2015, at 13:37, Andrew Barnert via Python-Dev wrote: > > On Thursday, December 17, 2015 11:19 AM, Franklin? Lee wrote: > > >> ... >> as soon as I figure out how descriptors actually work... > > > I think you need to learn what LOAD_ATTR and the machinery around it actually does before I can explain why trying to optimize it like globals-vs.-builtins doesn't make sense. Maybe someone who's better at explaining than me can come up with something clearer than the existing documentation, but I can't. I take that back. First, it was harsher than I intended. Second, I think I can explain things. First, for non-attribute lookups: (Non-shared) locals just load and save from an array. Free variables and shared locals load and save by going through an extra dereference on a cell object in an array. Globals do a single dict lookup. Builtins do two dict lookups. So, the only thing you can optimize there is builtins. But maybe that's worth it. Next, for attribute lookups (not counting special methods): Everything calls __getattribute__. Assuming that's not overridden and uses the object implementation: Instance attributes do one dict lookup. Class attributes (including normal methods, @property, etc.) 
do two or more dict lookups--first the instance, then the class, then
each class on the class's MRO. Then, if the result has a __get__
method, it's called with the instance and class to get the actual
value. This is how bound methods get created, property lookup functions
get called, etc. The result of the descriptor call can't get cached
(that would mean, for example, that every time you access the same
@property on an instance, you'd get the same value).

Dynamic attributes from a __getattr__ do all that plus whatever
__getattr__ does.

If __getattribute__ is overloaded, it's entirely up to that
implementation to do whatever it wants.

Things are similar for set and del: they call __setattr__/__delattr__,
and the default versions of those look in the instance dict first, then
look for a descriptor the same as with get, except that they call a
different method on the descriptor (and if it's not a descriptor,
instead of using it, they ignore it and go back to the instance dict).

So, your mechanism can't significantly speed up method lookups,
properties, or most other things. It could speed up lookups for class
attributes that aren't descriptors, but only at the cost of increasing
the size of every instance--and how often do those matter anyway?

A different mechanism that cached references to descriptors instead of
to the resulting attributes could speed up method lookups, etc., but
only by a very small amount, and with the same space cost.

A mechanism that didn't try to get involved with the instance dict, and
just flattened out the MRO search once that failed (and was out of the
way before the descriptor call or __getattr__ even entered the
picture), might speed methods up in deeply nested hierarchies, and with
only a per-class rather than a per-instance space cost. But how often
do you have deeply nested hierarchies? And the speedup still isn't
going to be that big: you're basically turning 5 dict lookups plus 2
method calls into 2 dict lookups plus 2 method calls. And it would
still be much harder to guard than the globals dict: if any superclass
changes its __bases__ or adds or removes a __getattribute__ or various
other things, all of your references have to get re-computed. That's
rare enough that the speed may not matter, but the code complexity
probably does.

In short: if you can't cache the bound methods (and as far as I can
tell, in general you can't--even though 99% of the time it would work),
I don't think there's any other significant win here.

So, if the globals->builtins optimization is worth doing, don't tie it
to another optimization that's much more complicated and less useful
like this, or we'll never get your simple and useful idea.

From victor.stinner at gmail.com  Thu Dec 17 18:32:43 2015
From: victor.stinner at gmail.com (Victor Stinner)
Date: Fri, 18 Dec 2015 00:32:43 +0100
Subject: [Python-Dev] Idea: Dictionary references
In-Reply-To:
References:
Message-ID:

2015-12-17 23:17 GMT+01:00 Andrew Barnert via Python-Dev :
> Builtins do two dict lookups.
>
> So, the only thing you can optimize there is builtins. But maybe
> that's worth it.

FYI I implemented an optimization in FAT Python to avoid lookups for
builtin functions: builtin functions are copied to code constants at
runtime:
https://faster-cpython.readthedocs.org/fat_python.html#copy-builtin-functions-to-constants

It's nothing new; it's the generalization of common hacks, like
'def func(len=len): return len(3)'.
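Spelled out, the hack looks like this (a micro-benchmark sketch; FAT
Python copies the builtin into the code object's constants rather than
into a default argument, but both variants skip the same two dict
lookups on every call):

import timeit

def f_global():
    return len("abc")     # LOAD_GLOBAL: miss in globals, hit in builtins

def f_local(len=len):
    return len("abc")     # LOAD_FAST: plain array indexing, no dict lookup

print(timeit.timeit(f_global))
print(timeit.timeit(f_local))   # measurably faster on CPython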
The optimization is restricted to loading builtin symbols which are not expected to be modified ;-) (Right now the optimization is disabled by default, because the optimizer is unable to detect when builtins are modified in the current function, and so it breaks the Python semantic.) > Class attributes (including normal methods, @property, etc.) do two or more dict lookups--first the instance, then the class, then each class on the class's MRO. Note: Types have an efficient cache for name lookups ;-) Thanks for this cache, it's no more an issue to have a deep hierarchy of classes. Victor From leewangzhong+python at gmail.com Thu Dec 17 18:41:29 2015 From: leewangzhong+python at gmail.com (Franklin? Lee) Date: Thu, 17 Dec 2015 18:41:29 -0500 Subject: [Python-Dev] Idea: Dictionary references In-Reply-To: References: <986887788.314598.1450388274907.JavaMail.yahoo@mail.yahoo.com> Message-ID: I already know that we can't use recursion, because it bypasses MRO. I'm also not yet sure whether it makes sense to use refs for classes in the first place. As I understood it, an attribute will resolve in this order: - __getattribute__ up the MRO. (raises AttributeError) - __dict__ up the MRO. (raises KeyError) - __getattr__ up the MRO. (raises AttributeError) My new understanding: - __getattribute__. (raises AttributeError) - (default implementation:) __dict__.__getitem__. (raises KeyError) - __getattr__ up the MRO. (raises AttributeError) If this is the case, then (the default) __getattribute__ will be making the repeated lookups, and might be the one requesting the refcells (for the ones it wants). Descriptors seem to be implemented as: Store a Descriptor object as an attribute. When a Descriptor is accessed, if it is being accessed from its owner, then unbox it and use its methods. Otherwise, it's a normal attribute. Then Descriptors are in the dict, so MIGHT benefit from refcells. The memory cost might be higher, though. On Thu, Dec 17, 2015 at 5:17 PM, Andrew Barnert wrote: > On Dec 17, 2015, at 13:37, Andrew Barnert via Python-Dev wrote: >> >> On Thursday, December 17, 2015 11:19 AM, Franklin? Lee wrote: >> >> >>> ... >>> as soon as I figure out how descriptors actually work... >> >> >> I think you need to learn what LOAD_ATTR and the machinery around it actually does before I can explain why trying to optimize it like globals-vs.-builtins doesn't make sense. Maybe someone who's better at explaining than me can come up with something clearer than the existing documentation, but I can't. > > I take that back. First, it was harsher than I intended. Second, I think I can explain things. I appreciate it! Tracking function definitions in the source can make one want to do something else. > First, for non-attribute lookups: > > (Non-shared) locals just load and save from an array. > > Free variables and shared locals load and save by going through an extra dereference on a cell object in an array. In retrospect, of course they do. It sounds like the idea is what's already used there, except the refs are synced to the locals array instead of a hash table. > Globals do a single dict lookup. A single dict lookup per function definition per name used? That's what I'm proposing. For example, (and I only just remembered that comprehensions and gen expressions create scope) [f(x) for x in range(10000)] would look up the name `f` at most twice (once in globals(), once in builtins() if needed), and will always have the latest version of `f`. 
And if it's in a function, the refcell(s) would be saved by the function.

> Builtins do two dict lookups.

Two?

> Class attributes (including normal methods, @property, etc.) do two or more dict lookups--first the instance, then the class, then each class on the class's MRO. Then, if the result has a __get__ method, it's called with the instance and class to get the actual value. This is how bound methods get created, property lookup functions get called, etc. The result of the descriptor call can't get cached (that would mean, for example, that every time you access the same @property on an instance, you'd get the same value).

Yeah, I would only try to save a dict lookup to get the descriptor,
and I'm not sure it's worth it.

(Victor's response says that class attributes are already efficient, though.)

> So, if the globals->builtins optimization is worth doing, don't tie it to another optimization that's much more complicated and less useful like this, or we'll never get your simple and useful idea.

Sure. I couldn't figure out where to even save the refcells for
attributes, so I only really saw an opportunity for name lookups.
Since locals and nonlocals don't require dict lookups, this means
globals() and __builtin__.

From leewangzhong+python at gmail.com  Thu Dec 17 18:42:33 2015
From: leewangzhong+python at gmail.com (Franklin? Lee)
Date: Thu, 17 Dec 2015 18:42:33 -0500
Subject: [Python-Dev] Idea: Dictionary references
In-Reply-To: 
References: <986887788.314598.1450388274907.JavaMail.yahoo@mail.yahoo.com>
Message-ID: 

On Thu, Dec 17, 2015 at 6:41 PM, Franklin? Lee wrote:
> Then Descriptors are in the dict, so MIGHT benefit from refcells. The
> memory cost might be higher, though.

Might be worse than the savings, I mean.

From abarnert at yahoo.com  Thu Dec 17 22:37:58 2015
From: abarnert at yahoo.com (Andrew Barnert)
Date: Thu, 17 Dec 2015 19:37:58 -0800
Subject: [Python-Dev] Idea: Dictionary references
In-Reply-To: 
References: <986887788.314598.1450388274907.JavaMail.yahoo@mail.yahoo.com>
Message-ID: 

On Dec 17, 2015, at 15:41, Franklin? Lee wrote:
>
> I already know that we can't use recursion, because it bypasses MRO.
> I'm also not yet sure whether it makes sense to use refs for classes
> in the first place.
>
> As I understood it, an attribute will resolve in this order:
> - __getattribute__ up the MRO. (raises AttributeError)
> - __dict__ up the MRO. (raises KeyError)
> - __getattr__ up the MRO. (raises AttributeError)
>
>
> My new understanding:
> - __getattribute__. (raises AttributeError)
> - (default implementation:) __dict__.__getitem__. (raises KeyError)
> - __getattr__ up the MRO. (raises AttributeError)

No, still completely wrong. If __getattribute__ raises an AttributeError
(or isn't found, but that only happens in special cases like somehow
calling a method on a type that hasn't been constructed), that's the end
of the line; there's no fallback. Everything else happens _inside_ the
default __getattribute__ (IIRC: searching MRO dicts for data descriptors,
searching the instance dict, searching MRO dicts for non-data descriptors
or non-descriptors, special-method-lookup-and-call of __getattr__, raising
AttributeError... and then doing the appropriate descriptor call at the
end if needed).
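If it helps, here is a deliberately simplified pure-Python sketch of that
order (it ignores metaclasses, __slots__ and most error handling, so don't
mistake it for the real algorithm):

    _MISSING = object()

    def default_getattribute(obj, name):
        # rough model of object.__getattribute__, simplified
        cls = type(obj)
        meta_attr = _MISSING
        for klass in cls.__mro__:                # search the MRO dicts
            if name in klass.__dict__:
                meta_attr = klass.__dict__[name]
                break
        meta_type = type(meta_attr)
        # data descriptors (with __set__/__delete__) beat the instance dict
        if meta_attr is not _MISSING and hasattr(meta_type, '__set__'):
            return meta_type.__get__(meta_attr, obj, cls)
        inst_dict = getattr(obj, '__dict__', None)
        if inst_dict is not None and name in inst_dict:
            return inst_dict[name]
        if meta_attr is not _MISSING:
            if hasattr(meta_type, '__get__'):    # non-data descriptor
                return meta_type.__get__(meta_attr, obj, cls)
            return meta_attr                     # plain class attribute
        # the machinery only tries __getattr__ after this raises
        raise AttributeError(name)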
I was going to say that the only custom __getattribute__ you'll find in
builtins or stdlib is on type, which does the exact same thing except when
it calls a descriptor it does __get__(None, cls) instead of __get__(obj,
type(obj)), and if you find any third-party __getattribute__ you should
just assume it's going to do something crazy and don't bother trying to
help it. But then I remembered that super must have a custom
__getattribute__, so... you'd probably need to search the code for others.

> If this is the case, then (the default) __getattribute__ will be
> making the repeated lookups, and might be the one requesting the
> refcells (for the ones it wants).

Yes, the default and type __getattribute__ are what you'd want to
optimize, if anything. And maybe special-method lookup.

> Descriptors seem to be implemented as:
> Store a Descriptor object as an attribute. When a Descriptor is
> accessed, if it is being accessed from its owner, then unbox it and
> use its methods. Otherwise, it's a normal attribute.

Depending on what you mean by "owner", I think you have that backward. If
the instance itself stores a descriptor, it's just used as itself; if the
instance's _type_ (or a supertype) stores one, it's called to get the
instance attribute.

> Then Descriptors are in the dict, so MIGHT benefit from refcells. The
> memory cost might be higher, though.

Same memory cost. They're just objects whose type's dicts happen to have a
__get__ method (just like iterables are just objects whose type's dicts
happen to have an __iter__ method). The point is that you can't cache the
result of the descriptor call; you can cache the descriptor itself, but it
will rarely help; and the builtin method cache probably already takes care
of 99% of the cases where it would help, so I don't see what you're going
to get here.

>> On Thu, Dec 17, 2015 at 5:17 PM, Andrew Barnert wrote:
>>> On Dec 17, 2015, at 13:37, Andrew Barnert via Python-Dev wrote:
>>>
>>> On Thursday, December 17, 2015 11:19 AM, Franklin? Lee wrote:
>>>
>>>
>>>> ...
>>>> as soon as I figure out how descriptors actually work...
>>>
>>>
>>> I think you need to learn what LOAD_ATTR and the machinery around it actually does before I can explain why trying to optimize it like globals-vs.-builtins doesn't make sense. Maybe someone who's better at explaining than me can come up with something clearer than the existing documentation, but I can't.
>>
>> I take that back. First, it was harsher than I intended. Second, I think I can explain things.
>
> I appreciate it! Tracking function definitions in the source can make
> one want to do something else.

The documentation is pretty good for this stuff (and getting better every
year). You mainly want the data model chapter of the reference and the
descriptor howto guide; the dis and inspect docs in the library can also
be helpful. Together they'll answer most of what you need. If they don't,
maybe I will try to write up an explanation as a blog post, but I don't
think it needs to get sent to the list (except for the benefit of core
devs calling me out if I screw up, but they have better things to do with
their time).

>> First, for non-attribute lookups:
>>
>> (Non-shared) locals just load and save from an array.
>>
>> Free variables and shared locals load and save by going through an extra dereference on a cell object in an array.
>
> In retrospect, of course they do.
>
> It sounds like the idea is what's already used there, except the refs
> are synced to the locals array instead of a hash table.
Yes, which is already faster than what you want to do.

More importantly, trying to put globals into the locals dict as you've
described isn't going to do any good--first, because the locals dict is
ignored in favor of the fast array (which has to be structured at compile
time), and second, because it wouldn't be any faster than the globals dict
anyway; a dict lookup is a dict lookup.

In case you're wondering why globals don't just work the same way as
nonlocals, as just one more nested scope: I'd guess it's to allow the
global scope to be more dynamic than local scopes. You can create and pass
around functions that reference globals that haven't been defined yet,
exec up new types and functions, modify globals(), execute a module
statement by statement instead of having to do it all at once (which would
make the REPL more painful), use a partially-constructed module (so if a
and b both import each other, that's only a problem if they access each
others' globals at module scope), etc.

>> Globals do a single dict lookup.
>
> A single dict lookup per function definition per name used? That's
> what I'm proposing.

Each LOAD_GLOBAL is a dict lookup at call time. A major point of Victor's
FAT Python, as I understand it, is to change that to a dict lookup at
function build time (I think meaning MAKE_FUNCTION/MAKE_CLOSURE, not
compile time) instead of call time, with guards to restore the original
slow code if the results are out of date, by storing the result in the
code object's constants table (which basically works like the fast-locals
array, but even better, because it's a static part of the code object
rather than copied into the end of the stack frame).

I don't see what other optimizations you can add on top of that that would
help anything, except maybe in some weird edge case where FAT's guards
keep getting tripped but somehow the references could be preserved. For
normal code, you're just adding overhead for no benefit.

But for builtins, FAT apparently has a problem that's not trivial to
solve. So if you could make builtins appear like globals in a way that
makes FAT's globals dict guard actually work correctly with them, that
sounds like it would be a major contribution.

> For example, (and I only just remembered that comprehensions and gen
> expressions create scope)

Yes, because they define and call an anonymous function, and function
definitions create scopes.

> [f(x) for x in range(10000)]
>
> would look up the name `f` at most twice (once in globals(), once in
> builtins() if needed), and will always have the latest version of `f`.
>
> And if it's in a function, the refcell(s) would be saved by the function.

I don't know what this last sentence means.

>> Builtins do two dict lookups.
>
> Two?

Actually, I guess three: first you fail to find the name in globals, then
you find __builtins__ in globals, then you find the name in __builtins__
or __builtins__.__dict__.

>> Class attributes (including normal methods, @property, etc.) do two or more dict lookups--first the instance, then the class, then each class on the class's MRO. Then, if the result has a __get__ method, it's called with the instance and class to get the actual value. This is how bound methods get created, property lookup functions get called, etc. The result of the descriptor call can't get cached (that would mean, for example, that every time you access the same @property on an instance, you'd get the same value).
>
> Yeah, I would only try to save in a dict lookup to get the descriptor,
> and I'm not sure it's worth it.
>
> (Victor's response says that class attributes are already efficient, though.)
>
>> So, if the globals->builtins optimization is worth doing, don't tie it to another optimization that's much more complicated and less useful like this, or we'll never get your simple and useful idea.
>
> Sure. I couldn't figure out where to even save the refcells for
> attributes, so I only really saw an opportunity for name lookups.
> Since locals and nonlocals don't require dict lookups, this means
> globals() and __builtin__.

From rosuav at gmail.com  Thu Dec 17 22:55:13 2015
From: rosuav at gmail.com (Chris Angelico)
Date: Fri, 18 Dec 2015 14:55:13 +1100
Subject: [Python-Dev] Idea: Dictionary references
In-Reply-To: 
References: <986887788.314598.1450388274907.JavaMail.yahoo@mail.yahoo.com>
Message-ID: 

On Fri, Dec 18, 2015 at 2:37 PM, Andrew Barnert via Python-Dev wrote:
> If __getattribute__ raises an AttributeError (or isn't found, but that only happens in special cases like somehow calling a method on a type that hasn't been constructed)

Wow. How do you do that? Is it possible with pure Python?

ChrisA

From victor.stinner at gmail.com  Fri Dec 18 01:49:46 2015
From: victor.stinner at gmail.com (Victor Stinner)
Date: Fri, 18 Dec 2015 07:49:46 +0100
Subject: [Python-Dev] Idea: Dictionary references
In-Reply-To: 
References: <986887788.314598.1450388274907.JavaMail.yahoo@mail.yahoo.com>
Message-ID: 

On Friday, December 18, 2015, Andrew Barnert via Python-Dev
<python-dev at python.org> wrote:
>
>>> Builtins do two dict lookups.
>>
>> Two?
>
> Actually, I guess three: first you fail to find the name in globals, then
> you find __builtins__ in globals, then you find the name in __builtins__ or
> __builtins__.__dict__.

Getting builtins from globals is done when the frame object is created,
and the lookup is skipped in the common case, if I recall correctly.

LOAD_NAME does a lookup in the function globals; if the key doesn't exist
(the common case for builtins), a second lookup is done in the frame
builtins. Open Python/ceval.c and see the code: there is an optimisation
for fast lookup in the two dicts.

Victor
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From steve at pearwood.info  Fri Dec 18 07:56:04 2015
From: steve at pearwood.info (Steven D'Aprano)
Date: Fri, 18 Dec 2015 23:56:04 +1100
Subject: [Python-Dev] Idea: Dictionary references
In-Reply-To: 
References: 
Message-ID: <20151218125604.GI1609@ando.pearwood.info>

On Thu, Dec 17, 2015 at 09:30:24AM -0800, Andrew Barnert via Python-Dev wrote:
> On Dec 17, 2015, at 07:38, Franklin? Lee wrote:
> >
> > The nested dictionaries are only for nested scopes (and inner
> > functions don't create nested scopes). Nested scopes will already
> > require multiple lookups in parents.
>
> I think I understand what you're getting at here, but it's a really
> confusing use of terminology. In Python, and in programming in
> general, nested scopes refer to exactly inner functions (and classes)
> being lexically nested and doing lookup through outer scopes. The fact
> that this is optimized at compile time to FAST vs. CELL vs.
> GLOBAL/NAME, cells are optimized at function-creation time, and only
> global and name have to be resolved at the last second doesn't mean
> that there's no scoping, or some other form of scoping besides
> lexical. The actual semantics are LEGB, even if L vs. E vs. GB and E
> vs. further-out E can be optimized.
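Concretely, the GB part of that behaves roughly like this sketch (the real
code lives in Python/ceval.c, as Victor says, and is far more optimized):

    # rough model of the globals->builtins fallback done by LOAD_GLOBAL
    # (LOAD_NAME does the same after first trying the locals mapping)
    def load_global(name, globals_dict, builtins_dict):
        try:
            return globals_dict[name]
        except KeyError:
            try:
                return builtins_dict[name]
            except KeyError:
                raise NameError("name %r is not defined" % name)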
In Python 2, the LOAD_NAME byte-code can return a local, even though it
normally doesn't:

py> x = "global"
py> def spam():
...     exec "x = 'local'"
...     print x
...
py> spam()
local
py> x == 'global'
True

If we look at the byte-code, we see that the lookup is *not* optimized to
inspect locals only (LOAD_FAST), but uses the regular LOAD_NAME that
normally gets used for globals and builtins:

py> import dis
py> dis.dis(spam)
  2           0 LOAD_CONST               1 ("x = 'local'")
              3 LOAD_CONST               0 (None)
              6 DUP_TOP
              7 EXEC_STMT

  3           8 LOAD_NAME                0 (x)
             11 PRINT_ITEM
             12 PRINT_NEWLINE
             13 LOAD_CONST               0 (None)
             16 RETURN_VALUE

> What you're talking about here is global lookups falling back to
> builtin lookups. There's no more general notion of nesting or scoping
> involved, so why use those words?

I'm not quite sure about this. In principle, every name lookup looks in
four scopes, LEGB as you describe above:

- locals
- non-locals, a.k.a. enclosing or lexical scope(s)
- globals (i.e. the module)
- builtins

although Python can (usually?) optimise away some of those lookups.

The relationship of locals to enclosing scopes, and to globals in turn,
involve actual nesting of indented blocks in Python, but that's not
necessarily the case. One might imagine a hypothetical capability for
manipulating scopes directly, e.g.:

    def spam(): ...

    def ham(): ...

    set_enclosing(ham, spam)
    # like:
    # def spam():
    #     def ham(): ...

The adventurous or fool-hardy can probably do something like that now
with byte-code hacking :-)

Likewise, one might consider that builtins is a scope which in some sense
encloses the global scope. Consider it a virtual code block that is
outdented from the top-level scope :-)

> So, trying to generalize global vs. builtin to a general notion of
> "nested scope" that isn't necessary for builtins and doesn't work for
> anything else seems like overcomplicating things for no benefit.

Well, putting aside the question of whether this is useful or not, and
putting aside efficiency concerns, let's just imagine a hypothetical
implementation where name lookups used ChainMaps instead of using separate
LOAD_* lookups of special dicts. Then a function could set up a ChainMap:

    function.__scopes__ = ChainMap(locals, enclosing, globals, builtins)

and a name lookup for (say) "x" would always be a simple:

    function.__scopes__["x"]

Of course this would be harder to optimize, and hence probably slower,
than the current arrangement, but I think it would allow some interesting
experiments with scoping rules:

    ChainMap(locals, enclosing, globals, application_globals, builtins)

You could implement dynamic scoping by inserting the caller's __scopes__
ChainMap into the front of the called function's ChainMap. And attribute
lookups would be something like this simplified scope:

    ChainMap(self.__dict__, type(self).__dict__)

to say nothing of combinations of the two.

So I think there's something interesting here: even if we don't want to
use it in production code, it would make for some nice experiments.

-- 
Steve

From timlegrand.perso at gmail.com  Fri Dec 18 06:51:11 2015
From: timlegrand.perso at gmail.com (Tim Legrand)
Date: Fri, 18 Dec 2015 12:51:11 +0100
Subject: [Python-Dev] Typo in PEP-0423
Message-ID: 

Hi guys,

It's said on the Python repos page that this mailing list is the official
maintainer of the peps repo, so here I am writing my question.

There is a typo in the PEP-0423 description, in which it is said:

"See Registering with the Package Index [27] for details."

but the provided link is broken (error 404).
In the source file written by Guido van Rossum, the link's placeholder is
"Registering with the Package Index".

What is the right link?

Thanks,
Tim
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From carlos.barera at gmail.com  Fri Dec 18 03:17:16 2015
From: carlos.barera at gmail.com (Carlos Barera)
Date: Fri, 18 Dec 2015 10:17:16 +0200
Subject: [Python-Dev] pypi simple index
In-Reply-To: 
References: 
Message-ID: 

Thanks Rob!

On Thu, Dec 17, 2015 at 7:36 PM, Robert Collins wrote:
>
> On 18 December 2015 at 06:13, Carlos Barera wrote:
>
>> Hi,
>>
>> I'm using install_requires in setup.py to specify a specific package my
>> project is dependant on.
>> When running python setup.py install, apparently the simple index is used
>> as an older package is taken from pypi. While
>
> What's happening here is that easy-install is triggering - which does not
> support wheels. Use 'pip install .' instead.
>
>> in https://pypi.python.org/pypi, there's a newer package.
>> When installing directly using pip, the latest package is installed
>> successfully.
>> I noticed that the new package is only available as a wheel and older
>> versions of setup tools won't install wheels for install_requires.
>> However, upgrading setuptools didn't help.
>>
>> Several questions:
>> 1. What's the difference between the pypi simple index and the general
>> pypi index?
>
> The '/simple' API is for machine consumption, /pypi is for humans, other
> than that there should be not be any difference.
>
>> 2. Why is setup.py defaulting to the simple index?
>
> Because it is the only index :).
>
>> 3. How can I make the setup.py triggered install use the main pypi index
>> instead of simple
>
> You can't - the issue is not the index being consulted, but your use of
> 'python setup.py install' which does not support wheels.
>
> Cheers,
> Rob
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From sziebadam at gmail.com  Fri Dec 18 08:58:55 2015
From: sziebadam at gmail.com (Szieberth Ádám)
Date: Fri, 18 Dec 2015 14:58:55 +0100
Subject: [Python-Dev] Asynchronous context manager in a typical network server
Message-ID: <20151218145855.76a627ea@gmail.com>

Hi Developers!

This is my first post. Please excuse my poor English. If anyone is
interested, I wrote a small introduction on my homepage. Link is at the
bottom.

This post is about how to effectively implement the new asynchronous
context manager in a typical network server.

I would appreciate and welcome any confirmation or criticism of whether my
thinking is right or wrong. Thanks in advance!

So, a typical server main code I used to see around is like this:

    srv = loop.run_until_complete(create_server(handler, host, port))
    try:
        loop.run_forever()
    except KeyboardInterrupt:
        pass
    finally:
        # other tear down code may be here
        srv.close()
        loop.run_until_complete(srv.wait_closed())
        loop.close()

Note that `create_server()` here is not necessarily
`BaseEventLoop.create_server()`.

The above code is not prepared to handle `OSError`s or any other
`Exception`s (including a `KeyboardInterrupt` by a rapid Ctrl+C) when
setting up the server; it just prints the traceback to the console, which
is not user friendly. Moreover, I would expect a server to handle the
SIGTERM signal as well and tell its clients that it stops serving when not
force-killed.
How the main code should create the server, maintain the serving, deal
with errors, and close both the connections and the event loop properly
when exiting, without leaving pending tasks around, is not trivial. There
are many questions on SO and other places on the internet regarding this
problem.

My idea was to provide a simple code which is robust in terms of these
concerns by profiting from the new asynchronous context manager pattern.

The code of the magic methods of a typical awaitable `CreateServer` object
seems rather trivial:

    async def __aenter__(self):
        self.server = await self
        return self.server

    async def __aexit__(self, exc_type, exc_value, traceback):
        # other tear down code may be here
        self.server.close()
        await self.server.wait_closed()

However, to make it work, a task has to be created:

    async def server_task():
        async with CreateServer(handler, host, port) as srv:
            await asyncio.Future()  # wait forever

I write some remarks regarding the above code at the end of this post.
Note that `srv` is unreachable from outside, which could be a problem in
some cases. What is unavoidable: this task has to get cancelled explicitly
by the main code, which should look like this:

    srvtsk = loop.create_task(server_task())

    signal.signal(signal.SIGTERM, lambda si, fr: loop.call_soon(srvtsk.cancel))

    while True:
        try:
            loop.run_until_complete(srvtsk)
        except KeyboardInterrupt:
            srvtsk.cancel()
        except asyncio.CancelledError:
            break
        except Exception as err:
            print(err)
            break
    loop.close()

Note that when `CancelledError` gets raised, the tear down process is
already done.

Remarks:

* It would be nice to have an `asyncio.wait_forever()` coroutine for dummy
  context bodies.
* Moreover, I also imagined a
  `BaseEventLoop.create_context_task(awaitable, body_coro_func=None)`
  method. The `body_coro_func` should default to `asyncio.wait_forever()`;
  otherwise it should get whatever is returned by `__aenter__` as a single
  argument. The returned Task object should also provide a reference to
  that object.

Best regards,
Ádám

(http://szieberthadam.github.io/)

From andrew.svetlov at gmail.com  Fri Dec 18 11:29:35 2015
From: andrew.svetlov at gmail.com (Andrew Svetlov)
Date: Fri, 18 Dec 2015 18:29:35 +0200
Subject: [Python-Dev] Asynchronous context manager in a typical network server
In-Reply-To: <20151218145855.76a627ea@gmail.com>
References: <20151218145855.76a627ea@gmail.com>
Message-ID: 

In my asyncio code, typical initialization/finalization procedures are
much more complicated. I doubt if common code can be extracted into
asyncio.

Personally I don't feel the need for `wait_forever()` or
`loop.create_context_task()`. But even if you need it, you can easily
create it from scratch, can't you?

On Fri, Dec 18, 2015 at 3:58 PM, Szieberth Ádám wrote:
> Hi Developers!
>
> This is my first post. Please excuse my poor English. If anyone is
> interested, I wrote a small introduction on my homepage. Link is at the
> bottom.
>
> This post is about how to effectively implement the new asynchronous
> context manager in a typical network server.
>
> I would appreciate and welcome any confirmation or criticism of whether
> my thinking is right or wrong. Thanks in advance!
> > So, a typical server main code I used to see around is like this: > > srv = loop.run_until_complete(create_server(handler, host, port)) > try: > loop.run_forever() > except KeyboardInterrupt: > pass > finally: > # other tear down code may be here > srv.close() > loop.run_until_complete(srv.wait_closed()) > loop.close() > > Note that `create_server()` here is not necessary > `BaseEventLoop.create_server()`. > > The above code is not prepared to handle `OSError`s or any other `Exception`s > (including a `KeyboardInterrupt` by a rapid Ctr+C) when setting up the server, > it just prints the traceback to the console which is not user friendly. > Moreover, I would expect from a server to handle the SIGTERM signal as well > and tell its clients that it stops serving when not force killed. > > How the main code should create server, maintain the serving, deal with errors > and close properly both the connections and the event loop when exiting > without letting pending tasks around is not trivial. There are many questions > on SO and other places of the internet regarding of this problem. > > My idea was to provide a simple code which is robust in terms of these > concerns by profiting from the new asynchronous context manager pattern. > > The code of the magic methods of a typical awaitable `CreateServer` object > seems rather trivial: > > async def __aenter__(self): > self.server = await self > return self.server > > async def __aexit__(self, exc_type, exc_value, traceback): > # other tear down code may be here > self.server.close() > await self.server.wait_closed() > > However, to make it work, a task has to be created: > > async def server_task(): > async with CreateServer(handler, host, port) as srv: > await asyncio.Future() # wait forever > > I write some remarks regarding the above code to the end of this post. Note > that `srv` is unreachable from outside which could be a problem in some cases. > What is unavoidable: this task has to get cancelled explicitely by the main > code which should look like this: > > srvtsk = loop.create_task(server_task()) > > signal.signal(signal.SIGTERM, lambda si, fr: loop.call_soon(srvtsk.cancel)) > > while True: > try: > loop.run_until_complete(srvtsk) > except KeyboardInterrupt: > srvtsk.cancel() > except asyncio.CancelledError: > break > except Exception as err: > print(err) > break > loop.close() > > Note that when `CancelledError` gets raised, the tear down process is already > done. > > Remarks: > > * It would be nice to have an `asyncio.wait_forever()` coroutine for dummy > context bodies. > * Moreover, I also imagined an `BaseEventLoop.create_context_task(awithable, > body_coro_func=None)` method. The `body_coro_func` should default to > `asyncio.wait_forever()`, otherwise it should get whatever is returned by > `__aenter__` as a single argument. The returned Task object should also > provide a reference to that object. > > Best regards, > ?d?m > > (http://szieberthadam.github.io/) > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: https://mail.python.org/mailman/options/python-dev/andrew.svetlov%40gmail.com -- Thanks, Andrew Svetlov From guido at python.org Fri Dec 18 11:41:09 2015 From: guido at python.org (Guido van Rossum) Date: Fri, 18 Dec 2015 08:41:09 -0800 Subject: [Python-Dev] Typo in PEP-0423 In-Reply-To: References: Message-ID: Which of the top links of this query do you think it should be? 
https://www.google.com/search?q=registering+with+the+package+index+site%3Apython.org&ie=utf-8&oe=utf-8 On Fri, Dec 18, 2015 at 3:51 AM, Tim Legrand wrote: > Hi guys, > > It's said on the Python repos page that this > mailing list is the official maintainer of the peps repo > , so here I am writing my question. > > There's is a typo in the PEP-0423 description, in which it is said: > > "See Registering with the Package Index > [27] for > details." > > but the provided link is broken (error 404). > > In the source file > written > by Guido van Rossum, the link's placeholder is "Registering with the > Package Index". > > What is the right link ? > > Thanks, > Tim > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/guido%40python.org > > -- --Guido van Rossum (python.org/~guido) -------------- next part -------------- An HTML attachment was scrubbed... URL: From guido at python.org Fri Dec 18 11:59:09 2015 From: guido at python.org (Guido van Rossum) Date: Fri, 18 Dec 2015 08:59:09 -0800 Subject: [Python-Dev] Asynchronous context manager in a typical network server In-Reply-To: <20151218145855.76a627ea@gmail.com> References: <20151218145855.76a627ea@gmail.com> Message-ID: I agree with Andrew that there are too many different scenarios and requirements to make this a useful library function. Some notes on the actual code you posted: - Instead of calling signal.signal() yourself, you should use loop.add_signal_handler(). It makes sure your signal handler doesn't run while another handler is already running. - If you add a handler for SIGINT you can control what happens when the user hits ^C (again, ensuring the handler already running isn't interrupted halfway through). - I'm unclear on why you want a wait_forever() instead of using loop.run_forever(). Can you clarify? - In theory, instead of waiting for a Future that is cancelled by a handler, you should be able to use asyncio.sleep() with a very large number (e.g. a million seconds). Your handler could then just call loop.stop(). However, I just tested this and it raises "RuntimeError: Event loop stopped before Future completed." so ignore this until we've fixed it. :-) On Fri, Dec 18, 2015 at 5:58 AM, Szieberth ?d?m wrote: > Hi Developers! > > This is my first post. Please excuse me my poor English. If anyone is > interested, I wrote a small introduction on my homepage. Link is at the > bottom. > > This post is about how to effectively implement the new asynchronous > context > manager in a typical network server. > > I would appreciate and welcome any confirmation or critics whether my > thinking > is right or wrong. Thanks in advance! > > So, a typical server main code I used to see around is like this: > > srv = loop.run_until_complete(create_server(handler, host, port)) > try: > loop.run_forever() > except KeyboardInterrupt: > pass > finally: > # other tear down code may be here > srv.close() > loop.run_until_complete(srv.wait_closed()) > loop.close() > > Note that `create_server()` here is not necessary > `BaseEventLoop.create_server()`. > > The above code is not prepared to handle `OSError`s or any other > `Exception`s > (including a `KeyboardInterrupt` by a rapid Ctr+C) when setting up the > server, > it just prints the traceback to the console which is not user friendly. 
> Moreover, I would expect from a server to handle the SIGTERM signal as well > and tell its clients that it stops serving when not force killed. > > How the main code should create server, maintain the serving, deal with > errors > and close properly both the connections and the event loop when exiting > without letting pending tasks around is not trivial. There are many > questions > on SO and other places of the internet regarding of this problem. > > My idea was to provide a simple code which is robust in terms of these > concerns by profiting from the new asynchronous context manager pattern. > > The code of the magic methods of a typical awaitable `CreateServer` object > seems rather trivial: > > async def __aenter__(self): > self.server = await self > return self.server > > async def __aexit__(self, exc_type, exc_value, traceback): > # other tear down code may be here > self.server.close() > await self.server.wait_closed() > > However, to make it work, a task has to be created: > > async def server_task(): > async with CreateServer(handler, host, port) as srv: > await asyncio.Future() # wait forever > > I write some remarks regarding the above code to the end of this post. Note > that `srv` is unreachable from outside which could be a problem in some > cases. > What is unavoidable: this task has to get cancelled explicitely by the main > code which should look like this: > > srvtsk = loop.create_task(server_task()) > > signal.signal(signal.SIGTERM, lambda si, fr: > loop.call_soon(srvtsk.cancel)) > > while True: > try: > loop.run_until_complete(srvtsk) > except KeyboardInterrupt: > srvtsk.cancel() > except asyncio.CancelledError: > break > except Exception as err: > print(err) > break > loop.close() > > Note that when `CancelledError` gets raised, the tear down process is > already > done. > > Remarks: > > * It would be nice to have an `asyncio.wait_forever()` coroutine for dummy > context bodies. > * Moreover, I also imagined an > `BaseEventLoop.create_context_task(awithable, > body_coro_func=None)` method. The `body_coro_func` should default to > `asyncio.wait_forever()`, otherwise it should get whatever is returned by > `__aenter__` as a single argument. The returned Task object should also > provide a reference to that object. > > Best regards, > ?d?m > > (http://szieberthadam.github.io/) > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/guido%40python.org > -- --Guido van Rossum (python.org/~guido) -------------- next part -------------- An HTML attachment was scrubbed... URL: From rdmurray at bitdance.com Fri Dec 18 12:01:04 2015 From: rdmurray at bitdance.com (R. David Murray) Date: Fri, 18 Dec 2015 12:01:04 -0500 Subject: [Python-Dev] Asynchronous context manager in a typical network server In-Reply-To: References: <20151218145855.76a627ea@gmail.com> Message-ID: <20151218170113.C1219251100@webabinitio.net> On Fri, 18 Dec 2015 18:29:35 +0200, Andrew Svetlov wrote: > I my asyncio code typical initialization/finalization procedures are > much more complicated. > I doubt if common code can be extracted into asyncio. > Personally I don't feel the need for `wait_forever()` or > `loop.creae_context_task()`. > > But even if you need it you may create it from scratch easy, isn't it? 
In my own asyncio code I wrote a generic context manager to hold references to all the top level tasks my ap needs, which automatically handles the teardown when loop.stop() is called from my SIGTERM signal handler. However, (and here we get to the python-dev content of this post :), I think we are too early in the uptake of asyncio to be ready to say what additional high-level features are well defined enough and useful enough to become part of the standard library. In any case discussions like this really belong on the asyncio-specific mailing list, which I gather is the python-tulip Google Group (I suppose I really ought to sign up...) --David From status at bugs.python.org Fri Dec 18 12:08:35 2015 From: status at bugs.python.org (Python tracker) Date: Fri, 18 Dec 2015 18:08:35 +0100 (CET) Subject: [Python-Dev] Summary of Python tracker Issues Message-ID: <20151218170835.14E8F56263@psf.upfronthosting.co.za> ACTIVITY SUMMARY (2015-12-11 - 2015-12-18) Python tracker at http://bugs.python.org/ To view or respond to any of the issues listed below, click on the issue. Do NOT respond to this message. Issues counts and deltas: open 5324 (+27) closed 32341 (+38) total 37665 (+65) Open issues with patches: 2344 Issues opened (45) ================== #7283: test_site failure when .local/lib/pythonX.Y/site-packages hasn http://bugs.python.org/issue7283 reopened by serhiy.storchaka #25591: refactor imaplib tests http://bugs.python.org/issue25591 reopened by maciej.szulik #25843: lambdas on the same line may incorrectly share code objects http://bugs.python.org/issue25843 opened by Tijs Van Oevelen #25844: Pylauncher, launcher.c: Assigning NULL to a pointer instead of http://bugs.python.org/issue25844 opened by Alexander Riccio #25846: Use of Py_ARRAY_LENGTH on pointer in posixmodule.c, win32_wchd http://bugs.python.org/issue25846 opened by Alexander Riccio #25847: CPython not using Visual Studio code analysis! 
http://bugs.python.org/issue25847 opened by Alexander Riccio #25848: Tkinter tests failed on Windows buildbots http://bugs.python.org/issue25848 opened by serhiy.storchaka #25849: files, opened in unicode (text): write() returns symbols count http://bugs.python.org/issue25849 opened by mmarkk #25850: Building extensions with MSVC 2015 Express fails http://bugs.python.org/issue25850 opened by Sami Salonen #25852: smtplib's SMTP.connect() should store the server name in ._hos http://bugs.python.org/issue25852 opened by labrat #25853: Compile error with pytime.h - struct timespec declared inside http://bugs.python.org/issue25853 opened by jamespharvey20 #25856: The __module__ attribute of non-heap classes is not interned http://bugs.python.org/issue25856 opened by serhiy.storchaka #25858: Structure field size/ofs __str__ wrong with large size fields http://bugs.python.org/issue25858 opened by Charles Machalow #25859: EOFError in test_nntplib.NetworkedNNTPTests.test_starttls() http://bugs.python.org/issue25859 opened by martin.panter #25860: os.fwalk() silently skips remaining directories when error occ http://bugs.python.org/issue25860 opened by Samson Lee #25862: TextIOWrapper assertion failure after read() and SEEK_CUR http://bugs.python.org/issue25862 opened by martin.panter #25863: ISO-2022 seeking forgets state http://bugs.python.org/issue25863 opened by martin.panter #25864: collections.abc.Mapping should include a __reversed__ that rai http://bugs.python.org/issue25864 opened by abarnert #25865: 7.2 Assignment statements documentation is vague and slightly http://bugs.python.org/issue25865 opened by abarnert #25866: Reference 3. Data Model: miscellaneous minor cleanups on the w http://bugs.python.org/issue25866 opened by abarnert #25867: os.stat raises exception when using unicode and no locale is s http://bugs.python.org/issue25867 opened by sejvlond #25868: test_eintr.test_sigwaitinfo() hangs on "AMD64 FreeBSD CURRENT http://bugs.python.org/issue25868 opened by haypo #25869: Faster ElementTree deepcopying http://bugs.python.org/issue25869 opened by serhiy.storchaka #25872: multithreading traceback KeyError when modifying file http://bugs.python.org/issue25872 opened by Michael Allen #25873: Faster ElementTree iterating http://bugs.python.org/issue25873 opened by serhiy.storchaka #25874: Add notice that XP is not supported on Python 3.5+ http://bugs.python.org/issue25874 opened by crwilcox #25876: test_gdb: use subprocess._args_from_interpreter_flags() to tes http://bugs.python.org/issue25876 opened by haypo #25878: CPython on Windows builds with /W3, not /W4 http://bugs.python.org/issue25878 opened by Alexander Riccio #25880: u'..'.encode('idna') ??? 
UnicodeError: label empty or too long http://bugs.python.org/issue25880 opened by spaceone #25881: A little faster ElementTree serializing http://bugs.python.org/issue25881 opened by serhiy.storchaka #25882: argparse help error: arguments created by add_mutually_exclusi http://bugs.python.org/issue25882 opened by balage #25883: python 2.7.11 mod_wsgi regression on windows http://bugs.python.org/issue25883 opened by stephan #25884: inspect.getmro() fails when base class lacks __bases__ attribu http://bugs.python.org/issue25884 opened by billyziege #25887: awaiting on coroutine more than once should be an error http://bugs.python.org/issue25887 opened by yselivanov #25888: awaiting on coroutine that is being awaited should be an error http://bugs.python.org/issue25888 opened by yselivanov #25894: unittest subTest failure causes result to be omitted from list http://bugs.python.org/issue25894 opened by zach.ware #25895: urllib.parse.urljoin does not handle WebSocket URLs http://bugs.python.org/issue25895 opened by imrehg #25896: array.array accepting byte-order codes in format strings http://bugs.python.org/issue25896 opened by Zoinkity.. #25898: Check for subsequence inside a sequence http://bugs.python.org/issue25898 opened by seblin #25900: unittest ignores the first ctrl-c when it shouldn't http://bugs.python.org/issue25900 opened by mgedmin #25901: make test crash http://bugs.python.org/issue25901 opened by fluyy #25902: Fixed various refcount issues in ElementTree iteration http://bugs.python.org/issue25902 opened by serhiy.storchaka #25905: IDLE fails to display the README file http://bugs.python.org/issue25905 opened by serhiy.storchaka #25906: Worker stall in multiprocessing.Pool http://bugs.python.org/issue25906 opened by chroxvi #25907: Documentation i18n: Added trans tags in sphinx templates http://bugs.python.org/issue25907 opened by sizeof Most recent 15 issues with no replies (15) ========================================== #25907: Documentation i18n: Added trans tags in sphinx templates http://bugs.python.org/issue25907 #25906: Worker stall in multiprocessing.Pool http://bugs.python.org/issue25906 #25905: IDLE fails to display the README file http://bugs.python.org/issue25905 #25902: Fixed various refcount issues in ElementTree iteration http://bugs.python.org/issue25902 #25901: make test crash http://bugs.python.org/issue25901 #25900: unittest ignores the first ctrl-c when it shouldn't http://bugs.python.org/issue25900 #25896: array.array accepting byte-order codes in format strings http://bugs.python.org/issue25896 #25876: test_gdb: use subprocess._args_from_interpreter_flags() to tes http://bugs.python.org/issue25876 #25872: multithreading traceback KeyError when modifying file http://bugs.python.org/issue25872 #25866: Reference 3. 
Data Model: miscellaneous minor cleanups on the w http://bugs.python.org/issue25866 #25863: ISO-2022 seeking forgets state http://bugs.python.org/issue25863 #25862: TextIOWrapper assertion failure after read() and SEEK_CUR http://bugs.python.org/issue25862 #25860: os.fwalk() silently skips remaining directories when error occ http://bugs.python.org/issue25860 #25834: getpass falls back when sys.stdin is changed http://bugs.python.org/issue25834 #25830: _TypeAlias: Discrepancy between docstring and behavior http://bugs.python.org/issue25830 Most recent 15 issues waiting for review (15) ============================================= #25907: Documentation i18n: Added trans tags in sphinx templates http://bugs.python.org/issue25907 #25902: Fixed various refcount issues in ElementTree iteration http://bugs.python.org/issue25902 #25900: unittest ignores the first ctrl-c when it shouldn't http://bugs.python.org/issue25900 #25898: Check for subsequence inside a sequence http://bugs.python.org/issue25898 #25895: urllib.parse.urljoin does not handle WebSocket URLs http://bugs.python.org/issue25895 #25888: awaiting on coroutine that is being awaited should be an error http://bugs.python.org/issue25888 #25887: awaiting on coroutine more than once should be an error http://bugs.python.org/issue25887 #25881: A little faster ElementTree serializing http://bugs.python.org/issue25881 #25878: CPython on Windows builds with /W3, not /W4 http://bugs.python.org/issue25878 #25876: test_gdb: use subprocess._args_from_interpreter_flags() to tes http://bugs.python.org/issue25876 #25874: Add notice that XP is not supported on Python 3.5+ http://bugs.python.org/issue25874 #25873: Faster ElementTree iterating http://bugs.python.org/issue25873 #25869: Faster ElementTree deepcopying http://bugs.python.org/issue25869 #25860: os.fwalk() silently skips remaining directories when error occ http://bugs.python.org/issue25860 #25859: EOFError in test_nntplib.NetworkedNNTPTests.test_starttls() http://bugs.python.org/issue25859 Top 10 most discussed issues (10) ================================= #25843: lambdas on the same line may incorrectly share code objects http://bugs.python.org/issue25843 34 msgs #19475: Add timespec optional flag to datetime isoformat() to choose t http://bugs.python.org/issue19475 18 msgs #25847: CPython not using Visual Studio code analysis! http://bugs.python.org/issue25847 15 msgs #25849: files, opened in unicode (text): write() returns symbols count http://bugs.python.org/issue25849 14 msgs #25864: collections.abc.Mapping should include a __reversed__ that rai http://bugs.python.org/issue25864 13 msgs #1753718: base64 "legacy" functions violate RFC 3548 http://bugs.python.org/issue1753718 11 msgs #25846: Use of Py_ARRAY_LENGTH on pointer in posixmodule.c, win32_wchd http://bugs.python.org/issue25846 8 msgs #25880: u'..'.encode('idna') ??? 
UnicodeError: label empty or too long http://bugs.python.org/issue25880 8 msgs #25878: CPython on Windows builds with /W3, not /W4 http://bugs.python.org/issue25878 7 msgs #25823: Speed-up oparg decoding on little-endian machines http://bugs.python.org/issue25823 6 msgs Issues closed (40) ================== #6478: time.tzset does not reset _strptime's locale time cache http://bugs.python.org/issue6478 closed by serhiy.storchaka #19771: runpy should check ImportError.name before wrapping it http://bugs.python.org/issue19771 closed by martin.panter #20837: Ambiguity words in base64 documentation http://bugs.python.org/issue20837 closed by martin.panter #20954: Bug in subprocess._args_from_interpreter_flags causes MemoryEr http://bugs.python.org/issue20954 closed by gregory.p.smith #21436: Consider leaving importlib.abc.Loader.load_module() http://bugs.python.org/issue21436 closed by berker.peksag #23788: test_urllib2_localnet.test_bad_address fails: OSError not rais http://bugs.python.org/issue23788 closed by martin.panter #25272: asyncio tests are getting noisy http://bugs.python.org/issue25272 closed by yselivanov #25495: binascii documentation incorrect http://bugs.python.org/issue25495 closed by r.david.murray #25580: async and await missing from token list http://bugs.python.org/issue25580 closed by yselivanov #25608: ascynio readexactly() should raise ValueError if passed length http://bugs.python.org/issue25608 closed by yselivanov #25610: Add typing.Awaitable http://bugs.python.org/issue25610 closed by gvanrossum #25683: __context__ for yields inside except clause http://bugs.python.org/issue25683 closed by yselivanov #25696: "make -j9 install" fails because bininstall target requires th http://bugs.python.org/issue25696 closed by haypo #25755: Test test_property failed if run twice http://bugs.python.org/issue25755 closed by berker.peksag #25773: Deprecate deleting with PyObject_SetAttr, PyObject_SetAttrStri http://bugs.python.org/issue25773 closed by serhiy.storchaka #25809: "Invalid" tests on locales http://bugs.python.org/issue25809 closed by martin.panter #25838: Lib/httplib.py: Resend http request on server close connection http://bugs.python.org/issue25838 closed by r.david.murray #25842: Installer does not set permissions correctly? http://bugs.python.org/issue25842 closed by zach.ware #25845: _ctypes\cfield.c identical subexpressions in Z_set http://bugs.python.org/issue25845 closed by martin.panter #25851: installing 3.5 on windows server 2003 x86 R2 Standard Edition http://bugs.python.org/issue25851 closed by eryksun #25854: rest in _interpolate_some is a list not str http://bugs.python.org/issue25854 closed by rhettinger #25855: str.title() http://bugs.python.org/issue25855 closed by ezio.melotti #25857: csv: unexpected result http://bugs.python.org/issue25857 closed by r.david.murray #25861: Can't use Pickle. AttributeError: 'module' object has no attri http://bugs.python.org/issue25861 closed by serhiy.storchaka #25870: textwrap is very slow on long words without spaces http://bugs.python.org/issue25870 closed by r.david.murray #25871: textwrap.dedent doesn't find common substring when spaces and http://bugs.python.org/issue25871 closed by Chris Tozer #25875: PYODBC talk to Oracle under Windows 10. 
http://bugs.python.org/issue25875 closed by r.david.murray #25877: python av docs has broken links http://bugs.python.org/issue25877 closed by r.david.murray #25879: Code objects from same line can compare equal http://bugs.python.org/issue25879 closed by KirkMcDonald #25885: ast Str type does not annotate the string type when it parses http://bugs.python.org/issue25885 closed by brett.cannon #25886: ast module is combining string literals that are concatenated http://bugs.python.org/issue25886 closed by SilentGhost #25889: Find_BOM accepts a char*, but is passed an unsigned char*; and http://bugs.python.org/issue25889 closed by serhiy.storchaka #25890: PyObject *po in _listdir_windows_no_opendir is initialized but http://bugs.python.org/issue25890 closed by serhiy.storchaka #25891: Stray variable meth_idx in enable_symlink http://bugs.python.org/issue25891 closed by serhiy.storchaka #25892: PyObject *exc in encode_code_page_strict is initialized but no http://bugs.python.org/issue25892 closed by serhiy.storchaka #25893: Second variable DWORD reqdSize in getpythonregpath is initiali http://bugs.python.org/issue25893 closed by serhiy.storchaka #25897: Python 3.5.1 and Active Tcl/Tk 8.6.4.1 http://bugs.python.org/issue25897 closed by ned.deily #25899: Unnecessary non-ASCII characters in standard library http://bugs.python.org/issue25899 closed by serhiy.storchaka #25903: SUGGESTION: Optimize code in PYO http://bugs.python.org/issue25903 closed by r.david.murray #25904: SUGGESTION: New Datatypes http://bugs.python.org/issue25904 closed by mark.dickinson From timlegrand.perso at gmail.com Fri Dec 18 12:34:23 2015 From: timlegrand.perso at gmail.com (Tim Legrand) Date: Fri, 18 Dec 2015 18:34:23 +0100 Subject: [Python-Dev] Typo in PEP-0423 In-Reply-To: References: Message-ID: Well, this looks like a rhetorical question :) As I am totally new to Python packaging and publication, I had no precise idea of what I should get from this link. So my guess would be https://docs.python.org/2/distutils/packageindex.html (since I was expecting Python 2.7 resources, not 3.x, but I didn't mention that before). Let me know if you want/I am allowed to fix the link in the original page . I have no idea how to contribute to these repo too :) Thanks, Tim 2015-12-18 17:41 GMT+01:00 Guido van Rossum : > Which of the top links of this query do you think it should be? > > > https://www.google.com/search?q=registering+with+the+package+index+site%3Apython.org&ie=utf-8&oe=utf-8 > > On Fri, Dec 18, 2015 at 3:51 AM, Tim Legrand > wrote: > >> Hi guys, >> >> It's said on the Python repos page that this >> mailing list is the official maintainer of the peps repo >> , so here I am writing my question. >> >> There's is a typo in the PEP-0423 description, in which it is said: >> >> "See Registering with the Package Index >> [27] for >> details." >> >> but the provided link is broken (error 404). >> >> In the source file >> written >> by Guido van Rossum, the link's placeholder is "Registering with the >> Package Index". >> >> What is the right link ? >> >> Thanks, >> Tim >> >> _______________________________________________ >> Python-Dev mailing list >> Python-Dev at python.org >> https://mail.python.org/mailman/listinfo/python-dev >> Unsubscribe: >> https://mail.python.org/mailman/options/python-dev/guido%40python.org >> >> > > > -- > --Guido van Rossum (python.org/~guido) > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: 

From guido at python.org  Fri Dec 18 12:44:48 2015
From: guido at python.org (Guido van Rossum)
Date: Fri, 18 Dec 2015 09:44:48 -0800
Subject: [Python-Dev] Typo in PEP-0423
In-Reply-To: 
References: 
Message-ID: 

On Fri, Dec 18, 2015 at 9:34 AM, Tim Legrand wrote:

> Well, this looks like a rhetorical question :)

It wasn't; I was hoping you'd be quicker at picking one than me (I don't
publish packages on PyPI much myself so the docs all look like Greek to
me :-).

> As I am totally new to Python packaging and publication, I had no precise
> idea of what I should get from this link.

Ah, so it was Greek to you too. :-)

> So my guess would be https://docs.python.org/2/distutils/packageindex.html
> (since I was expecting Python 2.7 resources, not 3.x, but I didn't mention
> that before).

Hm, but we are really trying to nudge people towards Python 3.

> Let me know if you want/I am allowed to fix the link in the original page
> . I have no idea how to
> contribute to these repo too :)

This particular repo is managed by the "PEP editors":
https://www.python.org/dev/peps/pep-0001/#id29

In this case I've just pushed the fix. Thanks for reporting it!

--Guido

> Thanks,
> Tim
>
> 2015-12-18 17:41 GMT+01:00 Guido van Rossum :
>
>> Which of the top links of this query do you think it should be?
>>
>> https://www.google.com/search?q=registering+with+the+package+index+site%3Apython.org&ie=utf-8&oe=utf-8
>>
>> On Fri, Dec 18, 2015 at 3:51 AM, Tim Legrand
>> wrote:
>>
>>> Hi guys,
>>>
>>> It's said on the Python repos page that this
>>> mailing list is the official maintainer of the peps repo
>>> , so here I am writing my question.
>>>
>>> There's is a typo in the PEP-0423 description, in which it is said:
>>>
>>> "See Registering with the Package Index
>>> [27] for
>>> details."
>>>
>>> but the provided link is broken (error 404).
>>>
>>> In the source file
>>> written
>>> by Guido van Rossum, the link's placeholder is "Registering with the
>>> Package Index".
>>>
>>> What is the right link ?
>>>
>>> Thanks,
>>> Tim
>>>
>>> _______________________________________________
>>> Python-Dev mailing list
>>> Python-Dev at python.org
>>> https://mail.python.org/mailman/listinfo/python-dev
>>> Unsubscribe:
>>> https://mail.python.org/mailman/options/python-dev/guido%40python.org
>>>
>>
>> --
>> --Guido van Rossum (python.org/~guido)

-- 
--Guido van Rossum (python.org/~guido)
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From sziebadam at gmail.com  Fri Dec 18 13:25:24 2015
From: sziebadam at gmail.com (Szieberth Ádám)
Date: Fri, 18 Dec 2015 19:25:24 +0100
Subject: [Python-Dev] Asynchronous context manager in a typical network server
In-Reply-To: 
References: <20151218145855.76a627ea@gmail.com>
Message-ID: <20151218192524.68ae468f@gmail.com>

Thanks for your reply Guido!

> - Instead of calling signal.signal() yourself, you should use
> loop.add_signal_handler(). It makes sure your signal handler doesn't run
> while another handler is already running.

I opted for the signal module because the `signal` documentation suggests
that it also supports Windows, while the asyncio documentation states that
`loop.add_signal_handler()` is UNIX-only.

> - I'm unclear on why you want a wait_forever() instead of using
> loop.run_forever(). Can you clarify?

As I see it, `loop.run_forever()` is an issue from _outside_, while an
`await wait_forever()` would be an _inside_ declaration, making explicit
what the task does (serving forever).
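What I have in mind is tiny -- something like this sketch (my proposal
only, not an existing API):

    import asyncio

    async def wait_forever():
        # suspend the current task until it gets cancelled;
        # the bare Future is never given a result
        await asyncio.Future()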
My OP suggests that it seemed to me quite helpful inside async context.
However, I wanted to share my approach to get confirmation that I am not
on a totally wrong track with this.

> - In theory, instead of waiting for a Future that is cancelled by a
> handler, you should be able to use asyncio.sleep() with a very large
> number (e.g. a million seconds).

I was thinking about this too, but it seemed less explicit to me than
awaiting a pure Future with a short comment. Moreover, even millions of
seconds can pass.

> Your handler could then just call loop.stop().

For some reason I don't like bothering with the event loop from inside
awaitables. It seems hacky to me since it breaks the hierarchy of who
controls whom.

> However, I just tested this and it raises "RuntimeError: Event loop
> stopped before Future completed." so ignore this until we've fixed it. :-)

This is the exception I saw so many times while trying to close an asyncio
program! I guess I am not the only one. This may be one of the most
frustrating aspects of the library. Yet, it inspired me to figure out a
plain pattern to avoid it, which may not be the right one. However, I
would like to signal that it would be nice to help developers with useful
patterns and documentation to avoid RuntimeErrors and the frustration that
goes with them.

Ádám
(http://szieberthadam.github.io/)

PS: I will reply to others as well, but first I had to play with my son. :)

From guido at python.org  Fri Dec 18 13:36:29 2015
From: guido at python.org (Guido van Rossum)
Date: Fri, 18 Dec 2015 10:36:29 -0800
Subject: [Python-Dev] Asynchronous context manager in a typical network server
In-Reply-To: <20151218192524.68ae468f@gmail.com>
References: <20151218145855.76a627ea@gmail.com> <20151218192524.68ae468f@gmail.com>
Message-ID: 

On Fri, Dec 18, 2015 at 10:25 AM, Szieberth Ádám wrote:

> Thanks for your reply Guido!
>
>> - Instead of calling signal.signal() yourself, you should use
>> loop.add_signal_handler(). It makes sure your signal handler doesn't run
>> while another handler is already running.
>
> I opted for the signal module because the `signal` documentation suggests
> that it also supports Windows, while the asyncio documentation states
> that `loop.add_signal_handler()` is UNIX-only.

Unfortunately that's true, but using the signal module with asyncio the
way you did is *not* safe. The only safe way is to use the
loop.add_signal_handler() interface.

>> - I'm unclear on why you want a wait_forever() instead of using
>> loop.run_forever(). Can you clarify?
>
> As I see it, `loop.run_forever()` is an issue from _outside_, while an
> `await wait_forever()` would be an _inside_ declaration, making explicit
> what the task does (serving forever).
>
> My OP suggests that it seemed to me quite helpful inside async context.
> However, I wanted to share my approach to get confirmation that I am not
> on a totally wrong track with this.

Well, if you look at the toy servers in the asyncio examples directory,
they all use run_forever(). I agree that from within the loop that's not
possible, but I don't think it's such a common thing (you typically write
a framework for creating servers once and that's the only place where you
would need this). IOW I think your solution of waiting for a Future is the
right way.

>> - In theory, instead of waiting for a Future that is cancelled by a
>> handler, you should be able to use asyncio.sleep() with a very large
>> number (e.g. a million seconds).
> > I was thinking on this too but it seemed less explicit to me than awaiting > a > pure Future with a short comment. Moreover, even millions of seconds can > pass. > 11 years. That's quite some trust you put in your hardware... But you can use a billion. I think by 11000 years from now you can retire your server. :-) > > Your handler could then just call loop.stop(). > > For some reason I don't like bothering with the event loop from inside > awaitables. It seems hacky to me since it breaks the hierarhy of who > controlls > who. > Fair enough -- you've actually internalized the asyncio philosophy quite well. > > However, I just tested this and it raises "RuntimeError: Event loop > stopped > > before Future completed." so ignore this until we've fixed it. :-) > > This is the exception I saw so many times by trying to close an asyncio > program! I guess I am not the only one. This may be one of the most > frustrating aspects of the library. Yet, it inspired me to figure out a > plain > pattern to avoid it, which may not the right one. However, I would like to > signal that it would be nice to help developers with useful patterns and > documentation to avoid RuntimeErrors and the frustration that goes with > them. > Maybe you can help by submitting a patch that prevents this error! Are you interested? > ?d?m > (http://szieberthadam.github.io/) > > PS: I will replay to others as well, but first I had to play with my son. > :) > -- --Guido van Rossum (python.org/~guido) -------------- next part -------------- An HTML attachment was scrubbed... URL: From abarnert at yahoo.com Fri Dec 18 14:32:53 2015 From: abarnert at yahoo.com (Andrew Barnert) Date: Fri, 18 Dec 2015 11:32:53 -0800 Subject: [Python-Dev] Idea: Dictionary references In-Reply-To: <20151218125604.GI1609@ando.pearwood.info> References: <20151218125604.GI1609@ando.pearwood.info> Message-ID: <09A0F7CE-EAAE-47B7-98CD-972F865A80F9@yahoo.com> > On Dec 18, 2015, at 04:56, Steven D'Aprano wrote: > >>> On Thu, Dec 17, 2015 at 09:30:24AM -0800, Andrew Barnert via Python-Dev wrote: >>> On Dec 17, 2015, at 07:38, Franklin? Lee wrote: >>> >>> The nested dictionaries are only for nested scopes (and inner >>> functions don't create nested scopes). Nested scopes will already >>> require multiple lookups in parents. >> >> I think I understand what you're getting at here, but it's a really >> confusing use of terminology. In Python, and in programming in >> general, nested scopes refer to exactly inner functions (and classes) >> being lexically nested and doing lookup through outer scopes. The fact >> that this is optimized at compile time to FAST vs. CELL vs. >> GLOBAL/NAME, cells are optimized at function-creation time, and only >> global and name have to be resolved at the last second doesn't mean >> that there's no scoping, or some other form of scoping besides >> lexical. The actual semantics are LEGB, even if L vs. E vs. GB and E >> vs. further-out E can be optimized. > > In Python 2, the LOAD_NAME byte-code can return a local, even though it > normally doesn't: > > py> x = "global" > py> def spam(): > ... exec "x = 'local'" > ... print x > ... 
> py> spam() > local > py> x == 'global' > True > > > If we look at the byte-code, we see that the lookup is *not* optimized > to inspect locals only (LOAD_FAST), but uses the regular LOAD_NAME that > normally gets used for globals and builtins: > > py> import dis > py> dis.dis(spam) > 2 0 LOAD_CONST 1 ("x = 'local'") > 3 LOAD_CONST 0 (None) > 6 DUP_TOP > 7 EXEC_STMT > > 3 8 LOAD_NAME 0 (x) > 11 PRINT_ITEM > 12 PRINT_NEWLINE > 13 LOAD_CONST 0 (None) > 16 RETURN_VALUE > > > >> What you're talking about here is global lookups falling back to >> builtin lookups. There's no more general notion of nesting or scoping >> involved, so why use those words? > > I'm not quite sure about this. In principle, every name lookup looks in > four scopes, LEGB as you describe above: > > - locals > - non-locals, a.k.a. enclosing or lexical scope(s) > - globals (i.e. the module) > - builtins > > > although Python can (usually?) optimise away some of those lookups. I think it kind of _has_ to optimize away, or at least tweak, some of those things, rather than just acting as if globals and builtins were just two more enclosing scopes. For example, global to builtins has to go through globals()['__builtins__'], or act as if it does, or code that relies on, say, the documented behavior of exec can be broken. And you have to be able to modify the global scope after compile time and have that modification be effective, which means you'd have to allow the same things on locals and closures if they were to act the same. > The > relationship of locals to enclosing scopes, and to globals in turn, > involve actual nesting of indented blocks in Python, but that's not > necessarily the case. One might imagine a hypothetical capability for > manipulating scopes directly, e.g.: > > def spam(): ... > def ham(): ... > set_enclosing(ham, spam) > # like: > # def spam(): > # def ham(): ... But that doesn't work; a closure has to link to a particular invocation of its outer function, not just to the function. Consider a trivial example: def spam(): x=time() def ham(): return x set_enclosing(ham, spam) ham() There's no actual x value in scope. So you need something like this if you want to actually be able to call it: def spam(helper): x=time() helper = bind_closure(helper, sys._getframe()) return helper() def ham(): return x set_enclosing(ham, spam) spam(ham) Of course you could make that getframe implicit; the point is there has to be a frame from an invocation of spam, not just the function itself, to make lexical scoping (errr... dynamically-generated fake-lexical scoping?) useful. > The adventurous or fool-hardy can probably do something like that now > with byte-code hacking :-) Yeah; I actually played with something like this a few years ago. I did it directly in terms of creating cell and free vars, not circumventing the existing LEGB system, which means you have to modify not just ham, but spam, in that set_enclosing. (In fact, you also have to modify all functions lexically or faux-lexically enclosing spam or enclosed by ham, which my code didn't do, but there were lots of other ways to fake it...). You need a bit of ctypes.pythonapi, not just bytecode hacks, to do the bind_closure() hack (the cell constructor isn't callable from Python, and you can't even fake it with a wrapper around a cell because cell_contents is immutable from Python...), but it's all doable. 
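(The invocation-specific part is easy to see in plain Python, without any
of these hacks -- a tiny demonstration:)

    def spam():
        x = object()              # a distinct x per invocation
        def ham():
            return x
        return ham

    h1, h2 = spam(), spam()
    assert h1.__closure__[0].cell_contents is h1()   # ham's cell belongs to
    assert h1() is not h2()                          # one particular call of spam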
Anyway, my original goal was to make it possible to get the effect of
nonlocal in Python 2, by calling "set_enclosing(spam, ham,
force_cells=('eggs',))", which converted "eggs" to a freevar even if it was
local (normally it only tried to convert globals), which mostly worked and
only occasionally segfaulted. :)

At any rate, even in languages designed for this kind of hacking (like
Scheme), the scopes are still described as nested scopes, and the intended
behavior can even be defined in terms of "as if you took the
sexpr/AST/whatever for ham and spam and nested and re-compiled them", so
whatever hacks you actually do are just optimizations over that re-compile.

> Likewise, one might consider that builtins is a scope which in some
> sense encloses the global scope. Consider it a virtual code block that
> is outdented.

>> So, trying to generalize global vs. builtin to a general notion of
>> "nested scope" that isn't necessary for builtins and doesn't work for
>> anything else seems like overcomplicating things for no benefit.
>
> Well, putting aside the question of whether this is useful or not, and
> putting aside efficiency concerns, let's just imagine a hypothetical
> implementation where name lookups used ChainMaps instead of using
> separate LOAD_* lookups of special dicts. Then a function could set up a
> ChainMap:

This is basically one of the two original ways of doing lexical scoping in
Lisp: when a function is constructed, it stores a reference to the stack
frame, and when that function is called, its frame stores a reference to
the function, so you always have a linked chain of stack frames to walk
through to do your lookups (and assignments).

The first problem with this is that using closures keeps alive a ton of
garbage that can't be reclaimed for a long time. One solution to that is to
lift out the variables, and only keep alive the ones that are actually
referenced--but then you need some rule to decide which variables are
actually referenced, and the easiest place to do that is at function
compile time. Which means that if you eval up new bindings or manipulate
frame environments, they may or may not get closures, and it gets very
confusing. It's simpler just to make them not work at all, at which point
you've got pretty much the same rules cellvars have in Python.

But you don't want to apply those rules at global scope; you need to be
able to build that scope iteratively. (Even a simple recursive function--or
a function that references itself to call set_enclosing--needs to be able
to defer looking for the name's scope until call time. Which, besides the
trivial "making the basics work", allows Python to do all kinds of other
fun stuff, like write a function that calls itself, then replace it with an
optimized version, and now all existing invocations of that function--like,
say, generators--recurse to the optimized version.)

The traditional solution is to provide a way to declare two kinds of
bindings: one, lexically scoped through the chain, and the other either
flat-global or dynamically scoped. Then almost everything at the top level
ends up global (or ends up don't-care) anyway. What Python does
(special-case global lookup to be completely late and completely flat,
assume the whole globals dict can live forever, and make top-level code
actually use the globals dict as its locals) is just a simplification of
that, which handles 99% of what you normally want.
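(Concretely, the "completely late, completely flat" lookup is what makes
the replace-a-function-under-itself trick work -- a small, standard-Python
sketch:)

    def fib(n):
        # "fib" is looked up in globals at call time, not at definition time
        return n if n < 2 else fib(n - 1) + fib(n - 2)

    slow = fib

    def fast_fib(n, _cache={0: 0, 1: 1}):
        if n not in _cache:
            _cache[n] = fast_fib(n - 1) + fast_fib(n - 2)
        return _cache[n]

    fib = fast_fib               # rebinding the global redirects existing callers
    assert slow(30) == 832040    # slow's recursive calls now go through fast_fib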
Then you add a module system, and you effectively need two levels of global, and then you've got Python's behavior exactly, so you might as well optimize it the same way as Python. :) But maybe if you went back and found a different solution to the first problem, you could come up with different results. If you start with unlinkable stacks instead of adding them on later (or decouple the environments from the stacks, as you already did, and make them unlinkable ChainMaps), maybe there's a better way. Or maybe you could just decide that the garbage isn't a problem--or, rather, that it's an application-level problem; at compile time, users can just say "bind everything", but they can also provide a list of names to bind (and can also bind things by copy instead), so they have a way to manage the garbage growth. (This is C++'s solution to closures; I'm not sure how well it would transplant to Python, or Lisp, but it might work--maybe with hooks to do some things at runtime that C++ can only do at compile time?) > function.__scopes__ = ChainMap(locals, enclosing, globals, builtins) > > and a name lookup for (say) "x" would always be a simple: > > function.__scopes__["x"] > > Of course this would be harder to optimize, and hence probably slower, > than the current arrangement, but I think it would allow some > interesting experiments with scoping rules: > > ChainMap(locals, enclosing, globals, application_globals, builtins) > > > You could implement dynamic scoping by inserting the caller's __scopes__ > ChainMap into the front of the called function's ChainMap. I'd think you'd normally want to dynamically scope on a per-variable basis, not a per-scope basis--but maybe that's because most languages that have optional dynamic scoping do it that way, so I never imagined what I could do with the other? Any cool ideas? > And attribute > lookups would be something like this simplified scope: > > ChainMap(self.__dict__, type(self).__dict__) What about inheritance? You still need to get (base.__dict__ for base in type(self).__mro__) in there (and, for class attributes, the same for self.__mro__). But that would make things less dynamic (if a type's bases change at runtime, its subtypes aren't affected). You could link to your bases' ChainMaps instead of your entire MRO's dicts, but then I'm not sure how you linearize multiple inheritance. (Can you write a "C3ChainMap" that relies on a flag on each dict that says "if you see this flag you have to recompute the chain from your original inputs", which you set on __bases__ change?) > to say nothing of combinations of the two. You mean implicit self, where your chain the self and its type in there ahead of locals? The hard part there isn't lookups, but bindings. Unless you want a "self x" declaration akin to "nonlocal x" (or, if that's the default for methods, a "local x"--maybe implicit declarations are less important a feature than a single lookup chain?). > So I think there's something interesting here, even if we don't want to > use it in production code, it would make for some nice experiments. Most this seems like it would be easier to experiment with by building a new Python-like language than by hacking on Python. (Also, either way, it seems more like a thread for -ideas than -dev...) 
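(For anyone who wants to play along, the thought experiment is easy to mock
up with the stdlib's ChainMap -- a toy that ignores descriptors and all the
optimization questions above:)

    from collections import ChainMap

    builtins_ns = {'len': len}
    globals_ns = {'x': 'global'}
    scopes = ChainMap({'x': 'local'}, {'x': 'enclosing'}, globals_ns, builtins_ns)

    assert scopes['x'] == 'local'
    del scopes['x']                    # writes and deletes touch only the first map
    assert scopes['x'] == 'enclosing'
    assert scopes['len'] is len        # falls all the way through to "builtins"

    # Attribute lookup with inheritance folded in, per the __mro__ point above:
    class A:
        spam = 1

    class B(A):
        pass

    b = B()
    attrs = ChainMap(b.__dict__, *(c.__dict__ for c in type(b).__mro__))
    assert attrs['spam'] == 1          # found in A.__dict__ via the chain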
From v+python at g.nevcal.com Fri Dec 18 14:29:23 2015
From: v+python at g.nevcal.com (Glenn Linderman)
Date: Fri, 18 Dec 2015 11:29:23 -0800
Subject: [Python-Dev] Asynchronous context manager in a typical network server
In-Reply-To: References: <20151218145855.76a627ea@gmail.com> <20151218192524.68ae468f@gmail.com>
Message-ID: <56745E93.709@g.nevcal.com>

On 12/18/2015 10:36 AM, Guido van Rossum wrote:
>
> I opted for the signal module because the `signal` documentation
> suggests that it also supports Windows, while the asyncio
> documentation states that `loop.add_signal_handler()` is UNIX only.
>
> Unfortunately that's true, but using the signal module with asyncio
> the way you did is *not* safe. The only safe way is to use the
> loop.add_signal_handler() interface.

Does this mean Windows users should not bother trying to use asyncio?

(I haven't yet, due to lack of time, but I'd hate to think of folks,
including myself in the future, investing a lot of time developing
something and then discover it can never be reliable, due to this sort of
"unsafe" or "not-available-on-Windows" feature.)

From guido at python.org Fri Dec 18 14:44:27 2015
From: guido at python.org (Guido van Rossum)
Date: Fri, 18 Dec 2015 11:44:27 -0800
Subject: [Python-Dev] Asynchronous context manager in a typical network server
In-Reply-To: <56745E93.709@g.nevcal.com>
References: <20151218145855.76a627ea@gmail.com> <20151218192524.68ae468f@gmail.com> <56745E93.709@g.nevcal.com>
Message-ID:

No, it just means Windows users should not try to catch signals on
Windows. Signals don't really exist there, and the simulation supporting
only a few signals is awful (last I tried, ^C was only processed when the
process was waiting for input from stdin, and I had to use the BREAK key
to stop runaway processes, which killed my shell window as well as the
Python process). If you want orderly shutdown of a server process on
Windows, you should probably listen for connections on a dedicated port on
localhost and use that as an indication to stop the server.

On Fri, Dec 18, 2015 at 11:29 AM, Glenn Linderman wrote:

> On 12/18/2015 10:36 AM, Guido van Rossum wrote:
>
> I opted for the signal module because the `signal` documentation suggests
>> that it also supports Windows, while the asyncio documentation states
>> that `loop.add_signal_handler()` is UNIX only.
>>
>
> Unfortunately that's true, but using the signal module with asyncio the
> way you did is *not* safe. The only safe way is to use the
> loop.add_signal_handler() interface.
>
> Does this mean Windows users should not bother trying to use asyncio?
>
> (I haven't yet, due to lack of time, but I'd hate to think of folks,
> including myself in the future, investing a lot of time developing
> something and then discover it can never be reliable, due to this sort of
> "unsafe" or "not-available-on-Windows" feature.)
>
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe:
> https://mail.python.org/mailman/options/python-dev/guido%40python.org
>

--
--Guido van Rossum (python.org/~guido)
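(A rough sketch of the localhost-control-port approach Guido describes --
the port number and names here are made up, and error handling is omitted:)

    import asyncio

    async def main():
        stopped = asyncio.Future()

        def on_connect(reader, writer):
            # any connection to the control port requests shutdown
            writer.close()
            if not stopped.done():
                stopped.set_result(None)

        control = await asyncio.start_server(on_connect, '127.0.0.1', 8765)
        # ... start the real server(s) here ...
        try:
            await stopped
        finally:
            control.close()
            await control.wait_closed()

    loop = asyncio.get_event_loop()
    loop.run_until_complete(main())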
From abarnert at yahoo.com Fri Dec 18 15:29:25 2015
From: abarnert at yahoo.com (Andrew Barnert)
Date: Fri, 18 Dec 2015 12:29:25 -0800
Subject: [Python-Dev] Asynchronous context manager in a typical network server
In-Reply-To: <20151218192524.68ae468f@gmail.com>
References: <20151218145855.76a627ea@gmail.com> <20151218192524.68ae468f@gmail.com>
Message-ID: <755EF80C-B878-4FD2-A538-E42B676E656F@yahoo.com>

On Dec 18, 2015, at 10:25, Szieberth Ádám wrote:
>
>> - In theory, instead of waiting for a Future that is cancelled by a
>> handler, you should be able to use asyncio.sleep() with a very large number
>> (e.g. a million seconds).
>
> I was thinking about this too, but it seemed less explicit to me than
> awaiting a pure Future with a short comment. Moreover, even millions of
> seconds can pass.

Yes, and these are really fun to debug. When a customer comes to you with
"it was running fine for a few months and then suddenly it started going
crazy, but I can't reproduce it", unless you happen to remember that you
decided 10 million seconds was "forever" and ask whether "a few months"
specifically means a few days short of 4 months... (At least with 24 and 49
days I know to look for which library used a C integer for milliseconds.)

Really, I don't see anything wrong with the way the OP wrote it. Is that
just because I have bad C habits (/* Useless select because there's no
actual sleep function that allows SIGUSR to wake us without allowing all
signals to wake us that works on both Solaris and IRIX */) and it really
does look misleading to people who aren't warped like that?

If so, would it be worth having an actual way to say "sleep forever (until
canceled)"? Even if, under the covers, this only sleeps for 50000 years or
so, a Y52K problem that can be solved by just pushing a new patch release
for Python instead of for every separate server written in Python is
probably a bit nicer. :)

From abarnert at yahoo.com Fri Dec 18 15:45:47 2015
From: abarnert at yahoo.com (Andrew Barnert)
Date: Fri, 18 Dec 2015 12:45:47 -0800
Subject: [Python-Dev] Asynchronous context manager in a typical network server
In-Reply-To: References: <20151218145855.76a627ea@gmail.com> <20151218192524.68ae468f@gmail.com>
Message-ID:

On Dec 18, 2015, at 10:36, Guido van Rossum wrote:
>
>> On Fri, Dec 18, 2015 at 10:25 AM, Szieberth Ádám wrote:
>> Thanks for your reply Guido!
>>
>> > - In theory, instead of waiting for a Future that is cancelled by a
>> > handler, you should be able to use asyncio.sleep() with a very large
>> > number (e.g. a million seconds).
>>
>> I was thinking about this too, but it seemed less explicit to me than
>> awaiting a pure Future with a short comment. Moreover, even millions of
>> seconds can pass.
>
> 11 years.

It's 11 days. Which is pretty reasonable server uptime. And probably just
outside the longest test you're ever going to run. I don't trust myself to
pick "a big number" when the numbers get this big. But I still sometimes
sneak one past myself somehow. Hence my suggestion for a way to actually
say "forever".
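(For the record, the arithmetic, and the repr that makes it easy to
misread -- output as printed by Python 3.5; later versions spell out the
field names:)

    >>> import datetime
    >>> datetime.timedelta(seconds=10**6)
    datetime.timedelta(11, 49600)      # 11 days and change, not 11 years
    >>> datetime.timedelta(seconds=10**9)
    datetime.timedelta(11574, 6400)    # a billion seconds is about 31.7 years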
From guido at python.org Fri Dec 18 16:09:31 2015
From: guido at python.org (Guido van Rossum)
Date: Fri, 18 Dec 2015 13:09:31 -0800
Subject: [Python-Dev] Asynchronous context manager in a typical network server
In-Reply-To: References: <20151218145855.76a627ea@gmail.com> <20151218192524.68ae468f@gmail.com>
Message-ID:

On Fri, Dec 18, 2015 at 12:45 PM, Andrew Barnert wrote:

> On Dec 18, 2015, at 10:36, Guido van Rossum wrote:
>
> On Fri, Dec 18, 2015 at 10:25 AM, Szieberth Ádám wrote:
>
>> Thanks for your reply Guido!
>>
>> > - In theory, instead of waiting for a Future that is cancelled by a
>> > handler, you should be able to use asyncio.sleep() with a very large
>> > number (e.g. a million seconds).
>>
>> I was thinking about this too, but it seemed less explicit to me than
>> awaiting a pure Future with a short comment. Moreover, even millions of
>> seconds can pass.
>>
>
> 11 years.
>
> It's 11 days. Which is pretty reasonable server uptime.
>

Oops, blame the repr() of datetime.timedelta. I'm sorry I so rashly
thought I could do better than the OP.

> And probably just outside the longest test you're ever going to run. I
> don't trust myself to pick "a big number" when the numbers get this big.
> But I still sometimes sneak one past myself somehow. Hence my suggestion
> for a way to actually say "forever".
>

I guess we could make the default arg to sleep() 1e9. Or make it None and
special-case it. I don't feel strongly about this -- I'm not sure how
baffling it would be to accidentally leave out the delay and find your
code sleeps forever rather than raising an error (since if you don't
expect the infinite default you may not expect this kind of behavior). But
I do feel it's not important enough to add a new function or method.

However, I don't think "forever" and "until cancelled" are really the same
thing. "Forever" can only be interrupted by loop.stop(); "until cancelled"
requires indicating how to cancel it, and there the OP's approach is about
the best you can do. (Or you could use the Event class, but that's really
just a wrapper on top of a Future made to look more like threading.Event
in its API.)

--
--Guido van Rossum (python.org/~guido)

From sziebadam at gmail.com Fri Dec 18 16:13:02 2015
From: sziebadam at gmail.com (Szieberth =?UTF-8?B?w4Fkw6Ft?=)
Date: Fri, 18 Dec 2015 22:13:02 +0100
Subject: [Python-Dev] Asynchronous context manager in a typical network server
In-Reply-To: References: <20151218145855.76a627ea@gmail.com>
Message-ID: <20151218221302.3c3322f4@gmail.com>

Thanks for your reply Andrew!

> Personally I don't feel the need for `wait_forever()` or
> `loop.create_context_task()`.
>
> But even if you need it you may create it from scratch easily, can't you?

Indeed. I was prepared for such opinions, which is OK. It is better to
think it through several times before introducing a new feature to an API.

I myself feel that `loop.create_context_task()` may be too specific. The
`asyncio.wait_forever()` coro seems much simpler. Surely it must get
investigated whether there is a significant number of patterns where this
coro could take part. I introduced one, but surely that is not enough,
only if it is so awesome that everyone starts using it, which I doubt.
:)

Ádám
(http://szieberthadam.github.io/)

From sziebadam at gmail.com Fri Dec 18 16:21:35 2015
From: sziebadam at gmail.com (Szieberth =?UTF-8?B?w4Fkw6Ft?=)
Date: Fri, 18 Dec 2015 22:21:35 +0100
Subject: [Python-Dev] Asynchronous context manager in a typical network server
In-Reply-To: References: <20151218145855.76a627ea@gmail.com> <20151218192524.68ae468f@gmail.com>
Message-ID: <20151218222135.2f9e4fef@gmail.com>

> Maybe you can help by submitting a patch that prevents this error! Are you
> interested?

I'd be honored.

Ádám
(http://szieberthadam.github.io/)

P.S.: I was thinking about a longer answer, but finally I ended up with
this one :)

From sziebadam at gmail.com Fri Dec 18 16:39:12 2015
From: sziebadam at gmail.com (Szieberth =?UTF-8?B?w4Fkw6Ft?=)
Date: Fri, 18 Dec 2015 22:39:12 +0100
Subject: [Python-Dev] Asynchronous context manager in a typical network server
In-Reply-To: References: <20151218145855.76a627ea@gmail.com> <20151218192524.68ae468f@gmail.com>
Message-ID: <20151218223912.6d322fb8@gmail.com>

> I guess we could make the default arg to sleep() 1e9. Or make it None and
> special-case it.

When writing the OP, I considered suggesting this approach and rejected it.
I would have suggested using Ellipsis (`...`) for the special case, which
seemed to explain better what is done, plus it can hardly be given
unintentionally. I ended up suggesting `wait_forever()` though.

Ádám
(http://szieberthadam.github.io/)

From abarnert at yahoo.com Fri Dec 18 16:42:54 2015
From: abarnert at yahoo.com (Andrew Barnert)
Date: Fri, 18 Dec 2015 21:42:54 +0000 (UTC)
Subject: [Python-Dev] Asynchronous context manager in a typical network server
In-Reply-To: References: Message-ID: <1344286986.701953.1450474974651.JavaMail.yahoo@mail.yahoo.com>

On Friday, December 18, 2015 1:09 PM, Guido van Rossum wrote:

> I guess we could make the default arg to sleep() 1e9. Or make it None and
> special-case it. I don't feel strongly about this -- I'm not sure how
> baffling it would be to accidentally leave out the delay and find your
> code sleeps forever rather than raising an error (since if you don't
> expect the infinite default you may not expect this kind of behavior).

Yeah, that is a potential problem.

The traditional C solution is to just allow passing -1 to mean "forever",*
ideally with a constant so you can just say "sleep(FOREVER)". Which, in
Python terms, would presumably mean "asyncio.sleep(asyncio.forever)", and
it could be a unique object or an enum value or something instead of
actually being -1.

* Or at least "until this rolls over 31/32/63/64 bits", which is where you
get those 49-day bugs from... but that wouldn't be an issue in Python

> But I do feel it's not important enough to add a new function or method.

Definitely agreed.

> However, I don't think "forever" and "until cancelled" are really the
> same thing. "Forever" can only be interrupted by loop.stop(); "until
> cancelled" requires indicating how to cancel it, and there the OP's
> approach is about the best you can do. (Or you could use the Event class,
> but that's really just a wrapper on top of a Future made to look more
> like threading.Event in its API.)

OK, I thought the OP's code looked pretty clear as written: he wants to
wait until cancelled, so he waits on something that pretty clearly won't
ever finish until he's cancelled. If that (or an Event or whatever) is the
best way to spell this, then I can't really think of any good uses for
sleep(forever).
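(For concreteness, the whole helper under discussion is only a few lines --
a sketch that ends in exactly one way, by cancellation:)

    import asyncio

    async def wait_forever():
        # a bare Future that nobody resolves: waits until the task is cancelled
        await asyncio.Future()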
From guido at python.org Fri Dec 18 16:48:19 2015 From: guido at python.org (Guido van Rossum) Date: Fri, 18 Dec 2015 13:48:19 -0800 Subject: [Python-Dev] Asynchronous context manager in a typical network server In-Reply-To: <1344286986.701953.1450474974651.JavaMail.yahoo@mail.yahoo.com> References: <1344286986.701953.1450474974651.JavaMail.yahoo@mail.yahoo.com> Message-ID: Using an Event is slightly better because you just wait for it -- you don't have to catch an exception. It's just not one of the better-known parts of asyncio. On Fri, Dec 18, 2015 at 1:42 PM, Andrew Barnert wrote: > On Friday, December 18, 2015 1:09 PM, Guido van Rossum > wrote: > > > >I guess we could make the default arg to sleep() 1e9. Or make it None and > special-case it. I don't feel strongly about this -- I'm not sure how > baffling it would be to accidentally leave out the delay and find your code > sleeps forever rather than raising an error (since if you don't expect the > infinite default you may not expect this kind of behavior). > > Yeah, that is a potential problem. > > The traditional C solution is to just allow passing -1 to mean "forever",* > ideally with a constant so you can just say "sleep(FOREVER)". Which, in > Python terms, would presumably mean "asyncio.sleep(asyncio.forever)", and > it could be a unique object or an enum value or something instead of > actually being -1. > > * Or at least "until this rolls over 31/32/63/64 bits", which is where you > get those 49-day bugs from... but that wouldn't be an issue in Python > > > But I do feel it's not important enough to add a new function or method. > > Definitely agreed. > >However, I don't think "forever" and "until cancelled" are really the > same thing. "Forever" can only be interrupted by loop.stop(); "until > cancelled" requires indicating how to cancel it, and there the OP's > approach is about the best you can do. (Or you could use the Event class, > but that's really just a wrapper on top of a Future made to look more like > threading.Event in its API.) > > > OK, I thought the OP's code looked pretty clear as written: he wants to > wait until cancelled, so he waits on something that pretty clearly won't > ever finish until he's cancelled. If that (or an Event or whatever) is the > best way to spell this, then I can't really think of any good uses for > sleep(forever). > -- --Guido van Rossum (python.org/~guido) -------------- next part -------------- An HTML attachment was scrubbed... URL: From leewangzhong+python at gmail.com Fri Dec 18 17:44:00 2015 From: leewangzhong+python at gmail.com (Franklin? Lee) Date: Fri, 18 Dec 2015 17:44:00 -0500 Subject: [Python-Dev] Idea: Dictionary references In-Reply-To: <09A0F7CE-EAAE-47B7-98CD-972F865A80F9@yahoo.com> References: <20151218125604.GI1609@ando.pearwood.info> <09A0F7CE-EAAE-47B7-98CD-972F865A80F9@yahoo.com> Message-ID: On Fri, Dec 18, 2015 at 2:32 PM, Andrew Barnert via Python-Dev wrote: > (Also, either way, it seems more like a thread for -ideas than -dev...) I said this early on in this thread! Should I try to write up my idea as a single thing, instead of a bunch of responses, and post it in -ideas? Should I call them "parent scope" and "parent refcell"? On Fri, Dec 18, 2015 at 7:56 AM, Steven D'Aprano wrote: > I'm not quite sure about this. In principle, every name lookup looks in > four scopes, LEGB as you describe above: > > - locals > - non-locals, a.k.a. enclosing or lexical scope(s) > - globals (i.e. the module) > - builtins > > > although Python can (usually?) 
optimise away some of those lookups. The > relationship of locals to enclosing scopes, and to globals in turn, > involve actual nesting of indented blocks in Python, but that's not > necessarily the case. As I understand, L vs E vs GB is known at compile-time. That is, your exec example doesn't work for me in Python 3, because all names are scoped at compile-time. x = 5 def f(): exec('x = 111') print(x) f() #prints 5 print(x) #prints 5 This means that my idea only really works for GB lookups. > On Thu, Dec 17, 2015 at 09:30:24AM -0800, Andrew Barnert via Python-Dev wrote: >> So, trying to generalize global vs. builtin to a general notion of >> "nested scope" that isn't necessary for builtins and doesn't work for >> anything else seems like overcomplicating things for no benefit. > > Well, putting aside the question of whether this is useful or not, and > putting aside efficiency concerns, let's just imagine a hypothetical > implementation where name lookups used ChainMaps instead of using > separate LOAD_* lookups of special dicts. Then a function could set up a > ChainMap: > > function.__scopes__ = ChainMap(locals, enclosing, globals, builtins) > > and a name lookup for (say) "x" would always be a simple: > > function.__scopes__["x"] > > Of course this would be harder to optimize, and hence probably slower, > than the current arrangement, This is where the ChainRefCell idea comes in. If a ChainRefCell is empty, it would ask its parent dicts for a value. If it finds a value in parent n, it would replace parent n with a refcell into parent n, and similarly for parents 0, 1, ... n-1. It won't need to do hash lookups in those parents again, while allowing for those parents to acquire names. (This means parent n+1 won't need to create refcells, so we don't make unnecessary refcells in `object` and `__builtin__`.) Unfortunately, classes are more complicated than nested scopes. 1. We skip MRO if we define classes as having their direct supers as parents. (Solution: Define classes as having all supers as parents, and make non-recursive Refcell.resolve() requests.) (Objects have their class as a parent, always.) 2. Classes can replace their bases. (I have some ideas for this, but see #3.) 3. I get the impression that attribute lookups are already pretty optimized. On Fri, Dec 18, 2015 at 2:32 PM, Andrew Barnert via Python-Dev wrote: > I think it kind of _has_ to optimize away, or at least tweak, some of those things, rather than just acting as if globals and builtins were just two more enclosing scopes. For example, global to builtins has to go through globals()['__builtins__'], or act as if it does, or code that relies on, say, the documented behavior of exec can be broken. It would or could, in my idea of __builtins__ being a parent scope of globals() (though I'm not sure whether it'd be the case for any other kind of nesting). Each refcell in globals() will hold a reference to __builtins__ (if they didn't successfully look it up yet) or to a refcell in __builtins__ (if there was once a successful lookup). Since globals() knows when globals()['__builtins__'] is modified, it can invalidate all its refcells' parent cells (by making them hold references to the new __builtins__). This will be O(len(table) + (# of refcells)), but swapping out __builtins__ shouldn't be something you keep doing. Even if it is a concern, I have More Ideas to remove the "len(table) +" (but with Raymond Hettinger's compact dicts, it wouldn't be necessary). 
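To make this concrete, here is a rough pure-Python mock-up of how I picture
the refcell chaining (ScopeDict and RefCell are invented names, the toy
ignores deletion, and the real thing would live in the C dict
implementation):

    class RefCell:
        EMPTY = object()

        def __init__(self, key, parent=None):
            self.key = key
            self.value = self.EMPTY
            self.parent = parent        # a parent ScopeDict, or its cell once seen

        def get(self):
            if self.value is not self.EMPTY:
                return self.value
            if isinstance(self.parent, ScopeDict):
                # first successful-path lookup: swap the scope for its cell,
                # so no further hash lookups happen on this chain
                self.parent = self.parent.ref(self.key)
            if self.parent is None:
                raise NameError(self.key)
            return self.parent.get()

    class ScopeDict(dict):
        def __init__(self, parent=None):
            super().__init__()
            self.parent = parent
            self._cells = {}

        def ref(self, key):             # what compiled code would hold on to
            cell = self._cells.get(key)
            if cell is None:
                cell = self._cells[key] = RefCell(key, self.parent)
                if key in self:
                    cell.value = self[key]
            return cell

        def __setitem__(self, key, value):
            super().__setitem__(key, value)
            if key in self._cells:
                self._cells[key].value = value

    builtins_ns = ScopeDict()
    globals_ns = ScopeDict(parent=builtins_ns)
    cell = globals_ns.ref('print')      # requested "at compile time", still empty
    builtins_ns['print'] = print        # the name only shows up later
    assert cell.get() is print          # resolved through the cached parent cell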
It would be worse for classes, because it would require potentially many notifications. (But it would also save future lookups. And I have More Ideas.) This idea (of the owner dict "knowing" about its changed parent) also applies to general chained scopes, but flattenings like MRO would mess it up. Again, though, More Ideas. And more importantly, from what I understand of Victor's response, the current implementation would probably be efficient enough, or more efficient. > And you have to be able to modify the global scope after compile time and have that modification be effective, which means you'd have to allow the same things on locals and closures if they were to act the same. Not sure what you mean, but since I demand (possibly empty) refcells from globals() at compile time, they will always have the most updated value from globals. Not so much from __builtins__, but each refcell in globals will only have to make one successful lookup in __builtins__ (until it's swapped out). > The first problem with this is that using closures keeps alive a ton of garbage that can't be reclaimed for a long time. One solution to that is to lift out the variables, and only keep alive the ones that are actually referenced--but then you need some rule to decide variables are actually referenced, and the easiest place to do that is at function compile time. Which means that if you eval up new bindings or manipulate frame environments, they may or may not get closures, and it gets very confusing. It's simpler just to make them not work at all, at which point you've got pretty much the same rules cellvars have in Python. I don't know enough to confidentally say whether it would be an improvement to closures, but the refs concept I want for dict works for pretty much any data structure. You just keep a second container of pointers to RefCells, synced to the size of the original container. For a dict, that means syncing a second table with the same hash indices. For a resizable array, it means keeping an array of pointers of the same size. When an internal function refers to a local, it requests a refcell. When the external function call dies, the array cleans up its unexposed variables and releases its ref'd variables to the refcells (which might be held by an unexposed variable and thus later get DecRef'd anyway). The logic is pretty simple and doesn't need to "know" about closures. It just piggybacks onto Python's refcounting. But it would mean that inner functions create Python objects where they didn't used to (but this might be solvable at compile-time). And again, I don't know enough to say it's an improvement. > But you don't want to apply those rules at global scope; you need to be able to build that scope iteratively. (Even a simple recursive function--or a function that references itself to call set_enclosing--needs to be able to defer looking for the name's scope until call time. Which, besides the trivial "making the basics work", allows Python to do all kinds of other fun stuff, like write a function that calls itself, then replace it with an optimized version, and now all existing invocations of that function--like, say, generators--recurse to the optimized version.) My idea would allow that, with only one lookup at compile-time. It just creates cells that might never be used. (But by requesting such a cell, you're saying that it INTENDS to be used.) >> So I think there's something interesting here, even if we don't want to >> use it in production code, it would make for some nice experiments. 
>
> Most this seems like it would be easier to experiment with by building a
> new Python-like language than by hacking on Python.

I think it would be pretty much the same difficulty.

From rmullins at illinois.edu Fri Dec 18 16:34:16 2015
From: rmullins at illinois.edu (Mullins, Robb)
Date: Fri, 18 Dec 2015 21:34:16 +0000
Subject: [Python-Dev] [Webmaster] Python keeps installing as 32 bit
References: Message-ID:

Hi,

Please remove these posts/listservs, etc. if possible, or strip my contact
info/name/phone/email off the posts please. I'm getting calls from people
trying to help with my Python install issue.

http://code.activestate.com/lists/python-dev/138936/
http://blog.gmane.org/gmane.comp.python.devel

This was not supposed to be posted online. I just wanted to know if there
was a trick to forcing x64 Python to install on x64 machines.

Thanks,
RM
Desktop Support Specialist
Center for Innovation in Teaching & Learning
citl-techsupport at mx.uillinois.edu (For computer issues, please use the
ticket system.)
(217) 333-xxxx

From: Mullins, Robb
Sent: Wednesday, December 16, 2015 2:49 PM
To: 'Brett Cannon' ; Steve Holden
Cc: webmaster at python.org; python-dev at python.org
Subject: RE: [Python-Dev] [Webmaster] Python keeps installing as 32 bit

Yeah, I was using the Windows x86-64 executable installer from that page.
I tried unzipping it just in case, no luck. I'm thinking I'll probably
just use 32-bit though. I found a post saying 64-bit might have issues
compiling. I don't think users will know or care. And the x86 installers
are there.

http://www.howtogeek.com/197947/how-to-install-python-on-windows/

The only other thing I was thinking was something with the chip maybe. I
ran into this about a year ago. (Or more now?) I had Python down for 32 vs
64-bit. Then I noticed some 64-bit machines were still doing 32-bit, but I
only have the x86-64.exe. I can't force x64 on it. It's not a huge issue
at this point. Once I figure it out, it will save time. I'm planning on
manually uninstalling versions of Python and then installing the current
one (leaning toward x86 now) so all the user machines are consistent.

Thanks,
Robb
Desktop Support Specialist
Center for Innovation in Teaching & Learning
citl-techsupport at mx.uillinois.edu (For computer issues, please use the
ticket system.)
(217) 333-xxxx

From: Brett Cannon [mailto:brett at python.org]
Sent: Wednesday, December 16, 2015 2:39 PM
To: Steve Holden >; Mullins, Robb >
Cc: webmaster at python.org; python-dev at python.org
Subject: Re: [Python-Dev] [Webmaster] Python keeps installing as 32 bit

I can say for certain that Python 3.5.1 will install as 64-bit as that's
what I'm personally running on the Windows 10 laptop that I'm writing this
email on. If you look at https://www.python.org/downloads/release/python-351/
you will notice there are explicit 64-bit installers that you can use. Did
you get your copy of Python by going straight to python.org/download and
clicking the yellow "Download Python 3.5.1" button?

On Wed, 16 Dec 2015 at 12:33 Steve Holden > wrote:
Hi Robb,

This address is really for web site issues, but we are mostly old hands,
and reasonably well-connected, so we try to act as a helpful channel when
we can. In this case I can't personally help (though another webmaster
may, if available, be able to offer advice).
I stopped doing system administration for anything but my own machines a
long time ago, having done far too much :-) The many mailing list channels
available are listed at https://mail.python.org/mailman/listinfo. I would
recommend that you try the distutils list at
https://mail.python.org/mailman/listinfo/distutils-sig; they don't
actually build the Python installers (the dev who does that lives on
python-dev, so that would be the place to go to get the scoop, and your
email shows enough signs of competence that you need not fear adverse
reactions). It seems like a reasonable enquiry to me, and I'm sorry I
can't answer it.

I've Cc'd this email to python-dev on the off-chance that someone will
recognise my name and let it through, but I don't know how many people are
working on the Windows installer or how busy they are. There are plenty of
people smart enough to answer your question out there now, it's just a
question of finding them. stackoverflow.com has a pretty good Python
channel too.

In any case, good luck, and thanks for reaching out to Python.

regards
Steve

On Wed, Dec 16, 2015 at 7:29 PM, Mullins, Robb > wrote:
Hi,

Not quite sure where to ask this. I don't use Python myself. I keep user
desktops updated. Everything's 64-bit. In the past I was able to install
32-bit Python on 32-bit machines and 64-bit Python on 64-bit machines. Now
it's just the one msi file to install, at least for 3.5.1. I do have a
couple Python 2.7.9 users. We're all 64-bit for machines, but I keep
having Python install as 32-bit. I'm not sure if it recognizes something
on the machine and matches it for being 32-bit that I'm not aware of. It
can be tricky to uninstall, so it becomes a slight issue. I just want to
get 64-bit Python on my user machines, unless it's not possible. Is there
a better place to ask this?

Thanks,
RM
Desktop Support Specialist
Center for Innovation in Teaching & Learning
citl-techsupport at mx.uillinois.edu (For computer issues, please use the
ticket system.)
(217) 333-xxxx

_______________________________________________
Webmaster mailing list
Webmaster at python.org
https://mail.python.org/mailman/listinfo/webmaster

_______________________________________________
Python-Dev mailing list
Python-Dev at python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: https://mail.python.org/mailman/options/python-dev/brett%40python.org

From ncoghlan at gmail.com Sat Dec 19 05:55:26 2015
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Sat, 19 Dec 2015 20:55:26 +1000
Subject: [Python-Dev] Typo in PEP-0423
In-Reply-To: References: Message-ID:

On 19 December 2015 at 03:44, Guido van Rossum wrote:
> On Fri, Dec 18, 2015 at 9:34 AM, Tim Legrand
> wrote:
>>
>> Well, this looks like a rhetorical question :)
>
> It wasn't, I was hoping you'd be quicker at picking one than me (I don't
> publish packages on PyPI much myself so the docs all look like Greek to me
> :-).
There's an effort currently underway to significantly improve the
getting started tutorials on packaging.python.org, but it's
unfortunately going to be a long time before we can retire the legacy
docs completely - while parts of them have aged badly (and are
entirely superseded by packaging.python.org), other parts
unfortunately aren't covered anywhere else yet :(

Even once the new docs are in place, getting them to the top of search
results ahead of archived material that may be years out of date is
likely to still be a challenge - for example, even considering just
the legacy distutils docs, the "3.1" and "2" docs appear in the
results at https://www.google.com/search?q=registering+with+the+package+index+site%3Apython.org&ie=utf-8&oe=utf-8,
but the latest "3" docs don't.

Cheers,
Nick.

--
Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From guido at python.org Sat Dec 19 11:26:54 2015
From: guido at python.org (Guido van Rossum)
Date: Sat, 19 Dec 2015 08:26:54 -0800
Subject: [Python-Dev] Typo in PEP-0423
In-Reply-To: References: Message-ID:

Maybe we need to find a S.E.O. expert. I betcha some are lurking on this
list.

On Sat, Dec 19, 2015 at 2:55 AM, Nick Coghlan wrote:

> On 19 December 2015 at 03:44, Guido van Rossum wrote:
> > On Fri, Dec 18, 2015 at 9:34 AM, Tim Legrand
> > wrote:
> >>
> >> Well, this looks like a rhetorical question :)
> >
> > It wasn't, I was hoping you'd be quicker at picking one than me (I don't
> > publish packages on PyPI much myself so the docs all look like Greek to
> > me :-).
>
> There's an effort currently underway to significantly improve the
> getting started tutorials on packaging.python.org, but it's
> unfortunately going to be a long time before we can retire the legacy
> docs completely - while parts of them have aged badly (and are
> entirely superseded by packaging.python.org), other parts
> unfortunately aren't covered anywhere else yet :(
>
> Even once the new docs are in place, getting them to the top of search
> results ahead of archived material that may be years out of date is
> likely to still be a challenge - for example, even considering just
> the legacy distutils docs, the "3.1" and "2" docs appear in the
> results at
> https://www.google.com/search?q=registering+with+the+package+index+site%3Apython.org&ie=utf-8&oe=utf-8,
> but the latest "3" docs don't.
>
> Cheers,
> Nick.
>
> --
> Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia
>

--
--Guido van Rossum (python.org/~guido)

From matthias at urlichs.de Sat Dec 19 10:43:55 2015
From: matthias at urlichs.de (Matthias Urlichs)
Date: Sat, 19 Dec 2015 15:43:55 +0000 (UTC)
Subject: [Python-Dev] asyncio: how to interrupt an async def w/ finally: ( e.g. Condition.wait() )
Message-ID:

The following code has a problem: the generator returned by .wait() has a
finally: section. When self.stopped is set, it still needs to run. As it is
asynchronous (it needs to re-acquire the lock), I need to come up with a
reliable way to wait for it. If I don't, .release() will throw an exception
because the lock is still unlocked.

The best method to do this that I've come up with is the five marked lines.
I keep thinking there must be a better way to do this (taking into account
that I have no idea whether the 'await r' part of this is even necessary).

```
class StopMe(BaseException):
    pass
class Foo:
    async def some_method(self):
        self.uptodate = asyncio.Condition()
        self.stopped = asyncio.Future()
        ...
await self.uptodate.acquire() try: while self.some_condition(): w = self.uptodate.wait() await asyncio.wait([w,self.stopped], loop=self.conn._loop, return_when=asyncio.FIRST_COMPLETED) with contextlib.suppress(StopMe): # FIXME? r = w.throw(StopMe()) # FIXME? if r is not None: # FIXME? await r # FIXME? await w # FIXME? finally: self.uptodate.release() ``` From tjreedy at udel.edu Sat Dec 19 13:40:49 2015 From: tjreedy at udel.edu (Terry Reedy) Date: Sat, 19 Dec 2015 13:40:49 -0500 Subject: [Python-Dev] Typo in PEP-0423 In-Reply-To: References: Message-ID: On 12/19/2015 5:55 AM, Nick Coghlan wrote: > Even once the new docs are in place, getting them to the top of search > of results ahead of archived material that may be years out of date is > likely to still be a challenge - for example, even considering just > the legacy distutils docs, the "3.1" and "2" docs appear in the > results at https://www.google.com/search?q=registering+with+the+package+index+site%3Apython.org&ie=utf-8&oe=utf-8, > but the latest "3" docs don't. Can we retroactively modify old docs by adding a link to newer docs at the top? "UPDATE 2016: This doc is obsolete. You probably should be looking at https: ...." -- Terry Jan Reedy From tjreedy at udel.edu Sat Dec 19 14:01:38 2015 From: tjreedy at udel.edu (Terry Reedy) Date: Sat, 19 Dec 2015 14:01:38 -0500 Subject: [Python-Dev] [Webmaster] Python keeps installing as 32 bit In-Reply-To: References: Message-ID: On 12/18/2015 4:34 PM, Mullins, Robb wrote: > Please remove these posts/liservs, etc. if possible, or strip my contact > info/name/phone/email off the posts please. I?m getting calls from > people trying to help with my Python install issue. > http://code.activestate.com/lists/python-dev/138936/ > http://blog.gmane.org/gmane.comp.python.devel There is no connection between PSF and either Activestate or Gmane.org. We have no control over their mirrors. We occasionally remove defamatory material posted to python-list, but even that is somewhat futile as python list is mirrored on gmane, usenet, google-groups, and many other places. pydev has fewer mirrors, but as you noticed, there are at least 2. > This was not supposed to be posted online. I just wanted to know if > there was a trick to forcing x64 Python to install on x64 machines. There is no easy way to edit and forward at the same time. > Thanks, > > RM > > Desktop Support Specialist > > Center for Innovation in Teaching & Learning > > citl-techsupport at mx.uillinois.edu > /(For computer issues, please > use the ticket system.)/ > > (217) 333-xxxx This time, *you* publicly posted your phone number, which I obscured. Perhaps you should have two signatures or two accounts with different signatures, depending on your mail/news software. -- Terry Jan Reedy From amk at amk.ca Sat Dec 19 14:02:53 2015 From: amk at amk.ca (A.M. Kuchling) Date: Sat, 19 Dec 2015 14:02:53 -0500 Subject: [Python-Dev] Typo in PEP-0423 In-Reply-To: References: Message-ID: <20151219190253.GA3963@DATLANDREWK.local> On Sat, Dec 19, 2015 at 08:55:26PM +1000, Nick Coghlan wrote: > Even once the new docs are in place, getting them to the top of search > of results ahead of archived material that may be years out of date is > likely to still be a challenge - for example, even considering just > the legacy distutils docs, the "3.1" and "2" docs appear ... We probably need to update https://docs.python.org/robots.txt, which currently contains: # Prevent development and old documentation from showing up in search results. 
User-agent: * # Disallow: /dev Disallow: /release The intent was to allow the latest version of the docs to be crawled. Unfortunately, with the current hierarchy we'd have to disallow each version, e.g. Disallow: /2.6/* Disallow: /3.0/* Disallow: /3.1/* And we'd need to update it for each new major release. --amk From guido at python.org Sat Dec 19 14:25:16 2015 From: guido at python.org (Guido van Rossum) Date: Sat, 19 Dec 2015 11:25:16 -0800 Subject: [Python-Dev] asyncio: how to interrupt an async def w/ finally: ( e.g. Condition.wait() ) In-Reply-To: References: Message-ID: Perhaps you can add a check for a simple boolean 'stop' flag to your condition check, and when you want to stop the loop you set that flag and then call notify() on the condition. Then you can follow the standard condition variable protocol instead of all this nonsense. :-) class Foo: async def some_method(self): self.uptodate = asyncio.Condition() self.stopped = False ? await self.uptodate.acquire() try: while (not self.stopped) and self.some_condition(): await self.uptodate.wait() finally: self.uptodate.release() def stop_it(self): self.stopped = True self.uptodate.notify() On Sat, Dec 19, 2015 at 7:43 AM, Matthias Urlichs wrote: > The following code has a problem: the generator returned by .wait() has a > finally: section. When self.stopped is set, it still needs to run. As it is > asynchronous (it needs to re-acquire the lock), I need to come up with a > reliable way to wait for it. If I don't, .release() will throw an exception > because the lock is still unlocked. > > The best method to do this that I've come up with is the five marked lines. > I keep thinking there must be a better way to do this (taking into account > that I have no idea whether the 'await r' part of this is even necessary). > > > ``` > class StopMe(BaseException): > pass > class Foo: > async dev some_method(self): > self.uptodate = asyncio.Condition() > self.stopped = asyncio.Future() > ? > await self.uptodate.acquire() > try: > while self.some_condition(): > w = self.uptodate.wait() > await asyncio.wait([w,self.stopped], loop=self.conn._loop, > return_when=asyncio.FIRST_COMPLETED) > with contextlib.suppress(StopMe): # FIXME? > r = w.throw(StopMe()) # FIXME? > if r is not None: # FIXME? > await r # FIXME? > await w # FIXME? > finally: > self.uptodate.release() > ``` > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/guido%40python.org > -- --Guido van Rossum (python.org/~guido) -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthias at urlichs.de Sat Dec 19 16:40:58 2015 From: matthias at urlichs.de (Matthias Urlichs) Date: Sat, 19 Dec 2015 22:40:58 +0100 Subject: [Python-Dev] asyncio: how to interrupt an async def w/ finally: ( e.g. Condition.wait() ) In-Reply-To: References: Message-ID: <5675CEEA.5090801@urlichs.de> On 19.12.2015 20:25, Guido van Rossum wrote: > Perhaps you can add a check for a simple boolean 'stop' flag to your > condition check, and when you want to stop the loop you set that flag > and then call notify() on the condition. Then you can follow the > standard condition variable protocol instead of all this nonsense. :-) Your example does not work. > def stop_it(self): > self.stopped = True > self.uptodate.notify() self.uptodate needs to be locked before I can call .notify() on it. 
Creating a new task just for that seems like overkill, and I'd have to add a generation counter to prevent a race condition. Doable, but ugly. However, this doesn't fix the generic problem; Condition.wait() was just what bit me today. When a non-async generator goes out of scope, its finally: blocks will execute. An async procedure call whose refcount reaches zero without completing simply goes away; finally: blocks are *not* called and there is *no* warning. I consider that to be a bug. -- -- Matthias Urlichs From gjcarneiro at gmail.com Sat Dec 19 17:59:10 2015 From: gjcarneiro at gmail.com (Gustavo Carneiro) Date: Sat, 19 Dec 2015 22:59:10 +0000 Subject: [Python-Dev] asyncio: how to interrupt an async def w/ finally: ( e.g. Condition.wait() ) In-Reply-To: <5675CEEA.5090801@urlichs.de> References: <5675CEEA.5090801@urlichs.de> Message-ID: I tried to reproduce the problem you describe, but failed. Here's my test program (forgive the awful tab indentation, long story): -------------- import asyncio async def foo(): print("resource acquire") try: await asyncio.sleep(100) finally: print("resource release") async def main(): task = asyncio.ensure_future(foo()) print("task created") await asyncio.sleep(0) print("about to cancel task") task.cancel() print("task cancelled, about to wait for it") try: await task except asyncio.CancelledError: pass print("waited for cancelled task") if __name__ == '__main__': loop = asyncio.get_event_loop() loop.run_until_complete(main()) loop.close() ----------- I get this output: ---------------- 10:54:28 ~/Documents$ python3.5 foo.py task created resource acquire about to cancel task task cancelled, about to wait for it resource release waited for cancelled task ---------------- Which seems to indicate that the finally clause is correctly executed when the task is waited for, after being cancelled. But maybe I completely misunderstood your problem... On 19 December 2015 at 21:40, Matthias Urlichs wrote: > On 19.12.2015 20:25, Guido van Rossum wrote: > > Perhaps you can add a check for a simple boolean 'stop' flag to your > > condition check, and when you want to stop the loop you set that flag > > and then call notify() on the condition. Then you can follow the > > standard condition variable protocol instead of all this nonsense. :-) > Your example does not work. > > > def stop_it(self): > > self.stopped = True > > self.uptodate.notify() > > self.uptodate needs to be locked before I can call .notify() on it. > Creating a new task just for that seems like overkill, and I'd have to > add a generation counter to prevent a race condition. Doable, but ugly. > > However, this doesn't fix the generic problem; Condition.wait() was just > what bit me today. > When a non-async generator goes out of scope, its finally: blocks will > execute. An async procedure call whose refcount reaches zero without > completing simply goes away; finally: blocks are *not* called and there > is *no* warning. > I consider that to be a bug. > > -- > -- Matthias Urlichs > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > https://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > https://mail.python.org/mailman/options/python-dev/gjcarneiro%40gmail.com > -- Gustavo J. A. M. Carneiro Gambit Research "The universe is always one step beyond logic." -- Frank Herbert -------------- next part -------------- An HTML attachment was scrubbed... 
From kevinjacobconway at gmail.com  Sat Dec 19 19:26:03 2015
From: kevinjacobconway at gmail.com (Kevin Conway)
Date: Sun, 20 Dec 2015 00:26:03 +0000
Subject: [Python-Dev] asyncio: how to interrupt an async def w/ finally: ( e.g. Condition.wait() )
In-Reply-To:
References: <5675CEEA.5090801@urlichs.de>
Message-ID:

> An async procedure call whose refcount reaches zero without completing
> simply goes away; finally: blocks are *not* called and there is *no*
> warning.

I believe OP is looking at these two scenarios:

    def generator():
        try:
            yield None
            yield None
        finally:
            print('finally')

    gen = generator()
    gen.send(None)
    del gen  # prints finally on GC

    class Awaitable:
        def __await__(self):
            return self
        def __next__(self):
            return self

    async def coroutine():
        try:
            await Awaitable()
            await Awaitable()
        finally:
            print('finally')

    coro = coroutine()
    coro.send(None)
    del coro  # prints finally on GC

I don't see any difference in the behaviour between the two. My guess is that OP's code is not hitting a zero refcount.

On Sat, Dec 19, 2015 at 5:00 PM Gustavo Carneiro wrote:

> I tried to reproduce the problem you describe, but failed. Here's my test
> program (forgive the awful tab indentation, long story):
>
> --------------
> import asyncio
>
> async def foo():
>     print("resource acquire")
>     try:
>         await asyncio.sleep(100)
>     finally:
>         print("resource release")
>
> async def main():
>     task = asyncio.ensure_future(foo())
>     print("task created")
>     await asyncio.sleep(0)
>     print("about to cancel task")
>     task.cancel()
>     print("task cancelled, about to wait for it")
>     try:
>         await task
>     except asyncio.CancelledError:
>         pass
>     print("waited for cancelled task")
>
> if __name__ == '__main__':
>     loop = asyncio.get_event_loop()
>     loop.run_until_complete(main())
>     loop.close()
> -----------
>
> I get this output:
>
> ----------------
> 10:54:28 ~/Documents$ python3.5 foo.py
> task created
> resource acquire
> about to cancel task
> task cancelled, about to wait for it
> resource release
> waited for cancelled task
> ----------------
>
> Which seems to indicate that the finally clause is correctly executed when
> the task is waited for, after being cancelled.
>
> But maybe I completely misunderstood your problem...
>
> On 19 December 2015 at 21:40, Matthias Urlichs wrote:
>
>> On 19.12.2015 20:25, Guido van Rossum wrote:
>> > Perhaps you can add a check for a simple boolean 'stop' flag to your
>> > condition check, and when you want to stop the loop you set that flag
>> > and then call notify() on the condition. Then you can follow the
>> > standard condition variable protocol instead of all this nonsense. :-)
>> Your example does not work.
>>
>> > def stop_it(self):
>> >     self.stopped = True
>> >     self.uptodate.notify()
>>
>> self.uptodate needs to be locked before I can call .notify() on it.
>> Creating a new task just for that seems like overkill, and I'd have to
>> add a generation counter to prevent a race condition. Doable, but ugly.
>>
>> However, this doesn't fix the generic problem; Condition.wait() was just
>> what bit me today.
>> When a non-async generator goes out of scope, its finally: blocks will
>> execute. An async procedure call whose refcount reaches zero without
>> completing simply goes away; finally: blocks are *not* called and there
>> is *no* warning.
>> I consider that to be a bug.
>>
>> --
>> -- Matthias Urlichs
>
> --
> Gustavo J. A. M. Carneiro
> Gambit Research
> "The universe is always one step beyond logic." -- Frank Herbert
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From guido at python.org  Sat Dec 19 20:00:00 2015
From: guido at python.org (Guido van Rossum)
Date: Sat, 19 Dec 2015 17:00:00 -0800
Subject: [Python-Dev] asyncio: how to interrupt an async def w/ finally: ( e.g. Condition.wait() )
In-Reply-To: <5675CEEA.5090801@urlichs.de>
References: <5675CEEA.5090801@urlichs.de>
Message-ID:

On Sat, Dec 19, 2015 at 1:40 PM, Matthias Urlichs wrote:

> On 19.12.2015 20:25, Guido van Rossum wrote:
> > Perhaps you can add a check for a simple boolean 'stop' flag to your
> > condition check, and when you want to stop the loop you set that flag
> > and then call notify() on the condition. Then you can follow the
> > standard condition variable protocol instead of all this nonsense. :-)
> Your example does not work.
>
> > def stop_it(self):
> >     self.stopped = True
> >     self.uptodate.notify()
>
> self.uptodate needs to be locked before I can call .notify() on it.

Fair enough.

> Creating a new task just for that seems like overkill, and I'd have to
> add a generation counter to prevent a race condition. Doable, but ugly.

I guess that's due to some application logic, but whatever. You don't really seem to care about finding a solution for this problem anyways:

> However, this doesn't fix the generic problem; Condition.wait() was just
> what bit me today.
> When a non-async generator goes out of scope, its finally: blocks will
> execute. An async procedure call whose refcount reaches zero without
> completing simply goes away; finally: blocks are *not* called and there
> is *no* warning.
> I consider that to be a bug.

If that's so, can you demonstrate that without invoking all these other things? Other traffic in this thread seems to indicate it may not be as simple as that.

--
--Guido van Rossum (python.org/~guido)
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From matthias at urlichs.de  Sun Dec 20 00:29:40 2015
From: matthias at urlichs.de (Matthias Urlichs)
Date: Sun, 20 Dec 2015 06:29:40 +0100
Subject: [Python-Dev] asyncio: how to interrupt an async def w/ finally: ( e.g. Condition.wait() )
In-Reply-To:
References: <5675CEEA.5090801@urlichs.de>
Message-ID: <56763CC4.7010102@urlichs.de>

On 20.12.2015 01:26, Kevin Conway wrote:
> async def coroutine():
>     try:
>         await Awaitable()
>         await Awaitable()
>     finally:
>         print('finally')

Try adding another "await Awaitable()" after the "finally:".

I have to take back my "doesn't print an error" comment, however; there's another reference to the Condition.wait() generator (the task asyncio.wait() creates to wrap the generator in), and the "Task was destroyed but it is pending!" message got delayed sufficiently that I missed it. (Dying test cases tend to spew many of these.)
Testcase:

    import asyncio
    import gc

    cond = asyncio.Condition()
    loop = asyncio.get_event_loop()

    async def main():
        async with cond:
            # asyncio.wait() does this, if we don't
            w = asyncio.ensure_future(cond.wait())
            await asyncio.wait([w], timeout=1)
            # w is still pending here; show who is holding on to it
            print(gc.get_referrers(w))

    loop.run_until_complete(main())

Time to refactor my code to do the wait/timeout outside the "async with cond".

--
-- Matthias Urlichs

From stephane at wirtel.be  Sun Dec 20 09:15:13 2015
From: stephane at wirtel.be (Stephane Wirtel)
Date: Sun, 20 Dec 2015 15:15:13 +0100
Subject: [Python-Dev] Deadline for PythonFOSDEM 2016 is today.
Message-ID: <20151220141513.GA2026@sg1>

Just to let you know that the deadline for the CfP of the PythonFOSDEM closes this evening. If you have a last talk to submit, please do it.

Call For Proposals
==================

This is the official call for sessions for the Python devroom at FOSDEM 2016.

FOSDEM is the Free and Open source Software Developers' European Meeting, a free and non-commercial two-day weekend event that offers open source contributors a place to meet, share ideas and collaborate. FOSDEM is in Brussels, Belgium, on 30th January. It's the biggest event in Europe, with 5000+ hackers and 400+ speakers.

For this edition, Python will be represented by its community. If you want to talk with a lot of Python users, it's the place to be!

Important dates
===============

* Submission deadline: 2015-12-20
* Acceptance notifications: 2015-12-24

Practical
=========

* Talks will be 30 minutes, including the presentation and questions and answers.
* Presentations can be recorded and streamed; sending your proposal implies giving permission to be recorded.
* A mailing list for the Python devroom is available for discussions about devroom organisation. You can register at this address: https://lists.fosdem.org/listinfo/python-devroom

How to submit
=============

All submissions are made in the Pentabarf event planning tool at https://penta.fosdem.org/submission/FOSDEM16

When submitting your talk in Pentabarf, make sure to select the Python devroom as the Track. Of course, if you already have a user account, please reuse it.

Questions
=========

Any questions, please send an email to info AT python-fosdem DOT org

Thank you for submitting your sessions, and see you soon in Brussels to talk about Python. If you want to stay informed about this edition, you can follow our Twitter account @PythonFOSDEM.

* FOSDEM 2016: https://fosdem.org/2016
* Python Devroom: http://python-fosdem.org
* Twitter: https://twitter.com/PythonFOSDEM

Thank you so much,

Stephane

--
Stéphane Wirtel - http://wirtel.be - @matrixise

From srkunze at mail.de  Sun Dec 20 17:01:16 2015
From: srkunze at mail.de (Sven R. Kunze)
Date: Sun, 20 Dec 2015 23:01:16 +0100
Subject: [Python-Dev] Asynchronous context manager in a typical network server
In-Reply-To:
References: <20151218145855.76a627ea@gmail.com> <20151218192524.68ae468f@gmail.com>
Message-ID: <5677252C.8030303@mail.de>

On 18.12.2015 22:09, Guido van Rossum wrote:
> I guess we could make the default arg to sleep() 1e9. Or make it None
> and special-case it. I don't feel strongly about this -- I'm not sure
> how baffling it would be to accidentally leave out the delay and find
> your code sleeps forever rather than raising an error (since if you
> don't expect the infinite default you may not expect this kind of
> behavior). But I do feel it's not important enough to add a new
> function or method.

Why are we still guessing at the best surrogate for infinity?
Seems like Python is just missing int('inf'). :/

Best,
Sven

From alexander.belopolsky at gmail.com  Sun Dec 20 17:02:07 2015
From: alexander.belopolsky at gmail.com (Alexander Belopolsky)
Date: Sun, 20 Dec 2015 17:02:07 -0500
Subject: [Python-Dev] Asynchronous context manager in a typical network server
In-Reply-To:
References: <20151218145855.76a627ea@gmail.com> <20151218192524.68ae468f@gmail.com>
Message-ID:

On Fri, Dec 18, 2015 at 4:09 PM, Guido van Rossum wrote:

>> It's 11 days. Which is pretty reasonable server uptime.
>
> Oops, blame the repr() of datetime.timedelta. I'm sorry I so rashly
> thought I could do better than the OP.

A helpful trivia: a year is approximately π times 10 million seconds.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From rosuav at gmail.com  Sun Dec 20 17:28:21 2015
From: rosuav at gmail.com (Chris Angelico)
Date: Mon, 21 Dec 2015 09:28:21 +1100
Subject: [Python-Dev] Change the repr for datetime.timedelta (was Re: Asynchronous context manager in a typical network server)
Message-ID:

On Mon, Dec 21, 2015 at 9:02 AM, Alexander Belopolsky wrote:
> On Fri, Dec 18, 2015 at 4:09 PM, Guido van Rossum wrote:
>>>
>>> It's 11 days. Which is pretty reasonable server uptime.
>>
>> Oops, blame the repr() of datetime.timedelta. I'm sorry I so rashly
>> thought I could do better than the OP.
>
> A helpful trivia: a year is approximately π times 10 million seconds.

Sadly doesn't help here, as the timedelta for a number of years looks like this:

>>> datetime.timedelta(days=365*11)
datetime.timedelta(4015)

Would there be value in changing the repr to use keyword arguments? Positional arguments might well not correspond to the way they were created, and unless you happen to know what the fields mean, they're a little obscure:

>>> datetime.timedelta(weeks=52, minutes=1488)
datetime.timedelta(365, 2880)

Worse, help(datetime.timedelta) in 3.6 doesn't document the constructor at all. There's no mention of __init__ at all, and __new__ has this useless information:

 |  __new__(*args, **kwargs) from builtins.type
 |      Create and return a new object.  See help(type) for accurate signature.

and aside from there being three data descriptors, there's nothing to suggest that you construct these things with timedelta(days, seconds, microseconds). Definitely no indication that you can use other keyword args.

Is this something worth fixing, or is it acceptable to drive people to fuller documentation than help()?

ChrisA

From guido at python.org  Sun Dec 20 17:33:46 2015
From: guido at python.org (Guido van Rossum)
Date: Sun, 20 Dec 2015 14:33:46 -0800
Subject: [Python-Dev] Change the repr for datetime.timedelta (was Re: Asynchronous context manager in a typical network server)
In-Reply-To:
References:
Message-ID:

On Sun, Dec 20, 2015 at 2:28 PM, Chris Angelico wrote:

> On Mon, Dec 21, 2015 at 9:02 AM, Alexander Belopolsky wrote:
> > On Fri, Dec 18, 2015 at 4:09 PM, Guido van Rossum wrote:
> >>>
> >>> It's 11 days. Which is pretty reasonable server uptime.
> >>
> >> Oops, blame the repr() of datetime.timedelta. I'm sorry I so rashly
> >> thought I could do better than the OP.
> >
> > A helpful trivia: a year is approximately π times 10 million seconds.
>
> Sadly doesn't help here, as the timedelta for a number of years looks like
> this:
>
> >>> datetime.timedelta(days=365*11)
> datetime.timedelta(4015)
>
> Would there be value in changing the repr to use keyword arguments?
> Positional arguments might well not correspond to the way they were
> created, and unless you happen to know what the fields mean, they're a
> little obscure:
>
> >>> datetime.timedelta(weeks=52, minutes=1488)
> datetime.timedelta(365, 2880)
>
> Worse, help(datetime.timedelta) in 3.6 doesn't document the
> constructor at all. There's no mention of __init__ at all, and __new__ has
> this useless information:
>
>  |  __new__(*args, **kwargs) from builtins.type
>  |      Create and return a new object.  See help(type) for accurate
>  signature.
>
> and aside from there being three data descriptors, there's nothing to
> suggest that you construct these things with timedelta(days, seconds,
> microseconds). Definitely no indication that you can use other keyword
> args.
>
> Is this something worth fixing, or is it acceptable to drive people to
> fuller documentation than help()?

That fix occurred to me too. However, I didn't propose it, since it's always a little too easy to blame one's own mistakes on the software. Still, I can't be the only one ever to have been fooled by this, and it is definitely pretty arcane knowledge what the positional arguments to timedelta() are. I'm just curious about the backward compatibility impact.

--
--Guido van Rossum (python.org/~guido)
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From guido at python.org  Sun Dec 20 18:15:25 2015
From: guido at python.org (Guido van Rossum)
Date: Sun, 20 Dec 2015 15:15:25 -0800
Subject: [Python-Dev] Change the repr for datetime.timedelta (was Re: Asynchronous context manager in a typical network server)
In-Reply-To:
References:
Message-ID:

On Sun, Dec 20, 2015 at 3:05 PM, Emanuel Barry wrote:

> From: guido at python.org
> > I'm just curious about the backward compatibility impact.
>
> I'm just curious on the number of programs depending on the repr() of any
> object at all in production (not counting tests). I could be wrong, but it
> seems foolish to rely on that, especially since this is something that we
> *can* change on an (almost) arbitrary basis. IMO, the repr() is meant to
> aid the programmer - not specifying keyword arguments here does quite the
> opposite of that :)

Not sure if you meant that as a rhetorical question or sarcastically. While you're right that ideally changing the repr() of an object shouldn't affect production work, in practice it can break any number of things, for example over-specified unit tests or poor integrations that end up parsing the string (perhaps in a different language than Python). We've encountered such issues many times in the past (for example, massive doctest breakage when we randomized the string hash) so we have to at least consider the possibility. In this case I expect there will be little effect, but it doesn't hurt asking around -- who knows what someone reading this remembers (besides asking pedantic questions :-).

--
--Guido van Rossum (python.org/~guido)
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From alexander.belopolsky at gmail.com  Sun Dec 20 20:00:53 2015
From: alexander.belopolsky at gmail.com (Alexander Belopolsky)
Date: Sun, 20 Dec 2015 20:00:53 -0500
Subject: [Python-Dev] Change the repr for datetime.timedelta (was Re: Asynchronous context manager in a typical network server)
In-Reply-To:
References:
Message-ID:

On Sun, Dec 20, 2015 at 5:28 PM, Chris Angelico wrote:

> > A helpful trivia: a year is approximately π times 10 million seconds.
>
> Sadly doesn't help here, as the timedelta for a number of years looks like
> this:
>
> >>> datetime.timedelta(days=365*11)
> datetime.timedelta(4015)

The original issue was how long a million seconds is. The bit of trivia that I suggested helps to establish that it cannot be a multiple of years.

> Would there be value in changing the repr to use keyword arguments?

I don't think translating from seconds to years will be any simpler with any alternative repr, but I would really like to see a change in the repr of negative timedeltas:

>>> timedelta(minutes=-1)
datetime.timedelta(-1, 86340)

And str() is not much better:

>>> print(timedelta(minutes=-1))
-1 day, 23:59:00

The above does not qualify as a human readable representation IMO.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From vgr255 at live.ca  Sun Dec 20 18:05:07 2015
From: vgr255 at live.ca (Emanuel Barry)
Date: Sun, 20 Dec 2015 18:05:07 -0500
Subject: [Python-Dev] Change the repr for datetime.timedelta (was Re: Asynchronous context manager in a typical network server)
In-Reply-To:
References:
Message-ID:

> From: guido at python.org
> I'm just curious about the backward compatibility impact.

I'm just curious on the number of programs depending on the repr() of any object at all in production (not counting tests). I could be wrong, but it seems foolish to rely on that, especially since this is something that we *can* change on an (almost) arbitrary basis. IMO, the repr() is meant to aid the programmer - not specifying keyword arguments here does quite the opposite of that :)

-Emanuel
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From vgr255 at live.ca  Sun Dec 20 18:30:43 2015
From: vgr255 at live.ca (Emanuel Barry)
Date: Sun, 20 Dec 2015 18:30:43 -0500
Subject: [Python-Dev] Change the repr for datetime.timedelta (was Re: Asynchronous context manager in a typical network server)
In-Reply-To:
References:
Message-ID:

Half-rhetorical, half-genuine; you know better than me the history of breakage due to such changes, anyway. I can't really think of anything you haven't, so I'll just sit back.

> From: guido at python.org
> Date: Sun, 20 Dec 2015 15:15:25 -0800
>
> Not sure if you meant that as a rhetorical question or sarcastically.
> While you're right that ideally changing the repr() of an object shouldn't
> affect production work, in practice it can break any number of things, for
> example over-specified unit tests or poor integrations that end up parsing
> the string (perhaps in a different language than Python). We've encountered
> such issues many times in the past (for example, massive doctest breakage
> when we randomized the string hash) so we have to at least consider the
> possibility. In this case I expect there will be little effect, but it
> doesn't hurt asking around -- who knows what someone reading this remembers
> (besides asking pedantic questions :-).
>
> --
> --Guido van Rossum (python.org/~guido)
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From rosuav at gmail.com  Sun Dec 20 20:46:52 2015
From: rosuav at gmail.com (Chris Angelico)
Date: Mon, 21 Dec 2015 12:46:52 +1100
Subject: [Python-Dev] Change the repr for datetime.timedelta (was Re: Asynchronous context manager in a typical network server)
In-Reply-To:
References:
Message-ID:

On Mon, Dec 21, 2015 at 12:00 PM, Alexander Belopolsky wrote:
> On Sun, Dec 20, 2015 at 5:28 PM, Chris Angelico wrote:
>>
>> > A helpful trivia: a year is approximately π times 10 million seconds.
>>
>> Sadly doesn't help here, as the timedelta for a number of years looks like
>> this:
>>
>> >>> datetime.timedelta(days=365*11)
>> datetime.timedelta(4015)
>
> The original issue was how long a million seconds is. The bit of trivia
> that I suggested helps to establish that it cannot be a multiple of years.

Ah, true. Still, it would deal with the confusion here, which I think is what Guido was referring to:

>>> datetime.timedelta(seconds=1000000)
datetime.timedelta(11, 49600)

It's eleven somethings and some loose change. What if it came out like this, instead?

>>> datetime.timedelta(seconds=1000000)
datetime.timedelta(days=11, seconds=49600)

Much more obviously eleven and a half days.

>> Would there be value in changing the repr to use keyword arguments?
>
> I don't think translating from seconds to years will be any simpler with
> any alternative repr...

A timedelta can't actually cope with years, per se, but for back-of-the-envelope calculations, 1000 days = 3 years (and round down).

>>> datetime.timedelta(seconds=1e9)
datetime.timedelta(11574, 6400)

A billion seconds is thirty-odd years. That's about as good as timedelta's ever going to do for us. Changing the repr won't change this at all, except that it'll be obvious that the 11K figure is measured in days.

> but I would really like to see a change in the repr of
> negative timedeltas:
>
> >>> timedelta(minutes=-1)
> datetime.timedelta(-1, 86340)
>
> And str() is not much better:
>
> >>> print(timedelta(minutes=-1))
> -1 day, 23:59:00
>
> The above does not qualify as a human readable representation IMO.

There are two plausible ways of describing negative intervals.

1) Show the largest unit in negative, and all others in positive.
2) Take the absolute value, generate a repr, and then stick a hyphen in front.

If Python picks the former, you can easily request the latter:

    ZERO = datetime.timedelta(0)

    def display(td):
        if td < ZERO:
            return "-" + repr(-td)
        return repr(td)

The converse isn't as easy.
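The converse has to re-derive every component from the total -- a sketch (not from the thread), using floor division of one timedelta by another to recover total microseconds:

    import datetime

    def sign_consistent(td):
        # Recover the "all nonzero components share one sign" form from
        # the normalized, sign-carried-by-days representation.
        total = td // datetime.timedelta(microseconds=1)
        sign = -1 if total < 0 else 1
        seconds, microseconds = divmod(abs(total), 1000000)
        days, seconds = divmod(seconds, 86400)
        return (sign * days, sign * seconds, sign * microseconds)

    # sign_consistent(datetime.timedelta(minutes=-1)) == (0, -60, 0)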
And both formats maintain certain invariants; the second has invariants regarding the magnitude of the delta (a movement of less than one day will never include the word 'day'), but the first has the rather useful invariant that arithmetic on datetimes doesn't affect units smaller than those changed:

>>> td = datetime.timedelta(minutes=75)
>>> td + datetime.timedelta(days=1)
datetime.timedelta(1, 4500)
>>> td + datetime.timedelta(days=-1)
datetime.timedelta(-1, 4500)

Also, it's consistent with the way Python handles modulo arithmetic elsewhere. If you think of a timedelta as a number of microseconds, the partitioning into days and seconds follows the normal rules for divmod with 86400 and 1000000. Yes, it looks a little strange in isolation, but I think it's justifiable for the .days, .seconds, .microseconds attributes.

Should repr switch the display around? Perhaps, but I'm highly dubious. Should str? That's a bit more plausible - but since both formats are justifiable, I'd be more inclined to separate it out; or maybe do this in __format__ as an alternative formatting style:

>>> class timedelta(datetime.timedelta):
...     def __format__(self, fmt):
...         if fmt == "-" and self.total_seconds() < 0:
...             return "-" + str(-self)
...         return str(self)
...
>>> td = timedelta(minutes=-1)
>>> f"Default: {td}  By magnitude: {td:-}"
'Default: -1 day, 23:59:00  By magnitude: -0:01:00'

ChrisA

From guido at python.org  Sun Dec 20 21:00:20 2015
From: guido at python.org (Guido van Rossum)
Date: Sun, 20 Dec 2015 18:00:20 -0800
Subject: [Python-Dev] Change the repr for datetime.timedelta (was Re: Asynchronous context manager in a typical network server)
In-Reply-To:
References:
Message-ID:

On Sun, Dec 20, 2015 at 5:00 PM, Alexander Belopolsky wrote:

> On Sun, Dec 20, 2015 at 5:28 PM, Chris Angelico wrote:
>
>> > A helpful trivia: a year is approximately π times 10 million seconds.
>>
>> Sadly doesn't help here, as the timedelta for a number of years looks
>> like this:
>>
>> >>> datetime.timedelta(days=365*11)
>> datetime.timedelta(4015)
>
> The original issue was how long a million seconds is. The bit of trivia
> that I suggested helps to establish that it cannot be a multiple of years.

But it's entirely arbitrary, which makes it not that easy to remember.

>> Would there be value in changing the repr to use keyword arguments?
>
> I don't think translating from seconds to years will be any simpler with
> any alternative repr,

Well it would have saved me an embarrassing moment -- I typed `datetime.timedelta(seconds=1e6)` at the command prompt and when the response came as `datetime.timedelta(11, 49600)` I mistook that as 11 years (I was in a hurry and trying hard not to have to think :-).

> but I would really like to see a change in the repr of negative timedeltas:
>
> >>> timedelta(minutes=-1)
> datetime.timedelta(-1, 86340)
>
> And str() is not much better:
>
> >>> print(timedelta(minutes=-1))
> -1 day, 23:59:00
>
> The above does not qualify as a human readable representation IMO.

I'm sure that one often catches people by surprise. However, I don't think we can fix that one without also fixing the values of the attributes -- in that example days is -1 and seconds is 86340 (which will *also* catch people by surprise). And changing that would be much, much harder for backwards compatibility reasons -- we'd have to set days to 0 and seconds to -60, and suddenly we have a much murkier invariant, instead of the crisp

    0 <= microseconds < 1000000
    0 <= seconds < 60

(There is no such invariant for days -- they hold the sign bit.)

In essence, you'd have to look at all three attributes to figure out on which side of 0 it was (or think of the right way to do it, which is to compare to timedelta(0)). I might still go for it, if it wasn't too late by over a decade (as Tim says).

--
--Guido van Rossum (python.org/~guido)
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From stephen at xemacs.org  Sun Dec 20 22:06:11 2015
From: stephen at xemacs.org (Stephen J. Turnbull)
Date: Mon, 21 Dec 2015 12:06:11 +0900
Subject: [Python-Dev] [OT] Without thinking! [was: Change the repr for datetime.timedelta]
In-Reply-To:
References:
Message-ID: <22135.27811.886349.323380@turnbull.sk.tsukuba.ac.jp>

Guido van Rossum writes:

> (I was in a hurry and trying hard not to have to think :-).

That makes me feel much better! There *are* things that *aren't* obvious, even to those born Dutch! :-)

Happy Holidays!

From tim.peters at gmail.com  Sun Dec 20 22:25:03 2015
From: tim.peters at gmail.com (Tim Peters)
Date: Sun, 20 Dec 2015 21:25:03 -0600
Subject: [Python-Dev] Change the repr for datetime.timedelta (was Re: Asynchronous context manager in a typical network server)
In-Reply-To:
References:
Message-ID:

[Alexander Belopolsky]
>> ...
>> but I would really like to see a change in the repr of negative
>> timedeltas:
>>
>> >>> timedelta(minutes=-1)
>> datetime.timedelta(-1, 86340)
>>
>> And str() is not much better:
>>
>> >>> print(timedelta(minutes=-1))
>> -1 day, 23:59:00
>>
>> The above does not qualify as a human readable representation IMO.

[Guido]
> I'm sure that one often catches people by surprise. However, I don't think
> we can fix that one without also fixing the values of the attributes -- in
> that example days is -1 and seconds is 86340 (which will *also* catch people
> by surprise). And changing that would be much, much harder for backwards
> compatibility reasons -- we'd have to set days to 0 and seconds to -60, and
> suddenly we have a much murkier invariant, instead of the crisp
>
>     0 <= microseconds < 1000000
>     0 <= seconds < 60
>
> (There is no such invariant for days -- they hold the sign bit.)
>
> In essence, you'd have to look at all three attributes to figure out on
> which side of 0 it was (or think of the right way to do it, which is to
> compare to timedelta(0)). I might still go for it, if it wasn't too late by
> over a decade (as Tim says).

Seems timedelta is over-specified, yes? For example, those invariants apply to CPython's internal representation, but have no direct effect on the set of representable timedeltas, and the constructor couldn't care less about them (other than to bash its inputs into those ranges; BTW, note that the invariant on `seconds` is actually < 86400, not < 60); e.g.,

>>> datetime.timedelta(days=1, seconds=1000000, microseconds=-3847384738473)
datetime.timedelta(-32, 3815, 261527)

So perhaps it would be better to document the practical truth ;-) That is, a timedelta is an integer number of microseconds in range(-86399999913600000000, 86400000000000000000), and all the rest is just more-or-less artificial complication due to choosing to _represent_ that range in a funky mixed-radix days/seconds/microseconds format.
For

>>> print(timedelta(minutes=-1))

I'd like to see:

    -00:01:00

But I wouldn't change repr() - the internal representation is fully documented, and it's appropriate for repr() to reflect documented internals as directly as possible. Spelling out the units with keyword days/seconds/microseconds arguments would be fine, though.

If I had it to do over, I'd _require_ keyword arguments on timedelta(). I don't know how often I've seen, e.g., timedelta(1), and didn't remember that "the first" argument is fortnights ;-)

From alexander.belopolsky at gmail.com  Sun Dec 20 22:25:26 2015
From: alexander.belopolsky at gmail.com (Alexander Belopolsky)
Date: Sun, 20 Dec 2015 22:25:26 -0500
Subject: [Python-Dev] Change the repr for datetime.timedelta (was Re: Asynchronous context manager in a typical network server)
In-Reply-To:
References:
Message-ID:

On Sun, Dec 20, 2015 at 9:00 PM, Guido van Rossum wrote:

>> but I would really like to see a change in the repr of negative timedeltas:
>>
>> >>> timedelta(minutes=-1)
>> datetime.timedelta(-1, 86340)
>>
>> And str() is not much better:
>>
>> >>> print(timedelta(minutes=-1))
>> -1 day, 23:59:00
>>
>> The above does not qualify as a human readable representation IMO.
>
> I'm sure that one often catches people by surprise. However, I don't think
> we can fix that one without also fixing the values of the attributes

I don't see why we have to change td.days for, say, td = timedelta(minutes=-1) if we change its repr to "timedelta(minutes=-1)". For me an important invariant is td == eval(repr(td)), which will be preserved.

> -- in that example days is -1 and seconds is 86340 (which will *also*
> catch people by surprise). And changing that would be much, much harder for
> backwards compatibility reasons -- we'd have to set days to 0 and seconds
> to -60, and suddenly we have a much murkier invariant, instead of the crisp
>
>     0 <= microseconds < 1000000
>     0 <= seconds < 60
>
> (There is no such invariant for days -- they hold the sign bit.)
>
> In essence, you'd have to look at all three attributes to figure out on
> which side of 0 it was (or think of the right way to do it, which is to
> compare to timedelta(0)). I might still go for it, if it wasn't too late by
> over a decade (as Tim says).
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From alexander.belopolsky at gmail.com  Sun Dec 20 22:30:22 2015
From: alexander.belopolsky at gmail.com (Alexander Belopolsky)
Date: Sun, 20 Dec 2015 22:30:22 -0500
Subject: [Python-Dev] Change the repr for datetime.timedelta (was Re: Asynchronous context manager in a typical network server)
In-Reply-To:
References:
Message-ID:

On Sun, Dec 20, 2015 at 10:25 PM, Tim Peters wrote:

> For
>
> >>> print(timedelta(minutes=-1))
>
> I'd like to see:
>
>     -00:01:00
>
> But I wouldn't change repr() - the internal representation is fully
> documented, and it's appropriate for repr() to reflect documented
> internals as directly as possible.

Note that in the case of float repr, the consideration of user convenience did win over "reflect documented internals as directly as possible."
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
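As an aside, the round-trip invariant from Alexander's earlier message is easy to check -- a quick sketch (assuming the module is imported as `datetime`, so the qualified repr evaluates):

    import datetime

    td = datetime.timedelta(minutes=-1)
    print(repr(td))              # datetime.timedelta(-1, 86340)
    assert td == eval(repr(td))  # round-trips today; a keyword repr would too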
From tim.peters at gmail.com  Sun Dec 20 22:35:06 2015
From: tim.peters at gmail.com (Tim Peters)
Date: Sun, 20 Dec 2015 21:35:06 -0600
Subject: [Python-Dev] Change the repr for datetime.timedelta (was Re: Asynchronous context manager in a typical network server)
In-Reply-To:
References:
Message-ID:

[Tim]
>> But I wouldn't change repr() - the internal representation is fully
>> documented, and it's appropriate for repr() to reflect documented
>> internals as directly as possible.

[Alex]
> Note that in the case of float repr, the consideration of user convenience
> did win over "reflect documented internals as directly as possible."

? Nothing is documented about float internals, beyond "whatever a platform C double is" in CPython.

From guido at python.org  Sun Dec 20 22:39:59 2015
From: guido at python.org (Guido van Rossum)
Date: Sun, 20 Dec 2015 19:39:59 -0800
Subject: [Python-Dev] Change the repr for datetime.timedelta (was Re: Asynchronous context manager in a typical network server)
In-Reply-To:
References:
Message-ID:

On Sun, Dec 20, 2015 at 7:25 PM, Alexander Belopolsky wrote:

> I don't see why we have to change td.days for, say, td =
> timedelta(minutes=-1) if we change its repr to "timedelta(minutes=-1)".
> For me an important invariant is td == eval(repr(td)), which will be
> preserved.

Then please just trust me. If the repr() shows different numbers than the attributes, things are worse than now. People will casually look at the repr() and assume they've seen what the attributes will return, and spend hours debugging code that relies on that incorrect assumption.

--
--Guido van Rossum (python.org/~guido)
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From larry at hastings.org  Mon Dec 21 01:36:46 2015
From: larry at hastings.org (Larry Hastings)
Date: Sun, 20 Dec 2015 22:36:46 -0800
Subject: [Python-Dev] [RELEASED] Python 3.4.4 is now available
Message-ID: <56779DFE.9040709@hastings.org>

On behalf of the Python development community and the Python 3.4 release team, I'm pleased to announce the availability of Python 3.4.4.

Python 3.4.4 is the last version of Python 3.4 with binary installers, and it marks the end of "bugfix" support. After this release, Python 3.4 moves into "security fixes only" mode, and future releases will be source-code-only.

You can see what's changed in Python 3.4.4 (as compared to previous versions of 3.4) here:

    https://docs.python.org/3.4/whatsnew/changelog.html#python-3-4-4

And you can download Python 3.4.4 here:

    https://www.python.org/downloads/release/python-344/

Windows and Mac users: please read the important platform-specific "Notes on this release" section near the end of that page.

One final note. 3.4.4 final marks the end of an era: it contains the last Windows installers that will be built by Martin von Loewis.
Martin has been the Windows release "Platform Expert" since the Python 2.4 release cycle started more than twelve years ago -- in other words, for more than half of Python's entire existence! On behalf of the Python community, and particularly on behalf of the Python release managers, I'd like to thank Martin for his years of service to the community, and for the care and professionalism he brought to his role. It was a pleasure working with him, and we wish him the very best in his future projects.

We hope you enjoy Python 3.4.4!

Happy holidays,

//arry/
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From storchaka at gmail.com  Mon Dec 21 08:46:00 2015
From: storchaka at gmail.com (Serhiy Storchaka)
Date: Mon, 21 Dec 2015 15:46:00 +0200
Subject: [Python-Dev] New poll about a macro for safe reference replacing
In-Reply-To:
References:
Message-ID:

On 16.12.15 16:12, Serhiy Storchaka wrote:
> Please put your vote (a floating number from -1 to 1 including) for
> every of proposed name. You also can propose new name.

Thank you all for your votes.

Results of the poll:

Py_SETREF:         +5   = +5 (Victor, Steve, Yury, Brett, Nick) +0 (Ryan, Martin)
Py_REPLACE_REF:    +2.5 = +2.5 (Ryan, Victor, Steve, Martin) -0 (Nick)
Py_REPLACE:        +0   = +1 (Martin) -1 (Ryan) +0 (Nick)
Py_RESET:          0    = +1 (Ryan) -1 (Martin)
Py_DECREF_REPLACE: -2   = +1 (Ryan, Martin) -3 (Victor, Steve, Nick)
Py_SET_POINTER, Py_SET_ATTR: -5 (Ryan, Victor, Steve, Martin, Nick)

Therefore Py_SETREF is the winner.

But I also want to recall the objections to it raised in the previous discussion:

1) By analogy with Py_INCREF and Py_DECREF, which increment and decrement the reference counter of the object, Py_SETREF looks as if it *sets* the reference counter of the object.

2) By analogy with PyList_SET_ITEM, PyTuple_SET_ITEM, PyCell_SET, etc., it is not expected that Py_SETREF decrements the refcounter of the old value before overwriting it.

From ncoghlan at gmail.com  Mon Dec 21 10:37:48 2015
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Tue, 22 Dec 2015 01:37:48 +1000
Subject: [Python-Dev] New poll about a macro for safe reference replacing
In-Reply-To:
References:
Message-ID:

On 21 December 2015 at 23:46, Serhiy Storchaka wrote:
> On 16.12.15 16:12, Serhiy Storchaka wrote:
>> Please put your vote (a floating number from -1 to 1 including) for
>> every of proposed name. You also can propose new name.
>
> Thank you all for your votes.
>
> Results of the poll:
>
> Py_SETREF:         +5   = +5 (Victor, Steve, Yury, Brett, Nick) +0 (Ryan, Martin)
> Py_REPLACE_REF:    +2.5 = +2.5 (Ryan, Victor, Steve, Martin) -0 (Nick)
> Py_REPLACE:        +0   = +1 (Martin) -1 (Ryan) +0 (Nick)
> Py_RESET:          0    = +1 (Ryan) -1 (Martin)
> Py_DECREF_REPLACE: -2   = +1 (Ryan, Martin) -3 (Victor, Steve, Nick)
> Py_SET_POINTER, Py_SET_ATTR: -5 (Ryan, Victor, Steve, Martin, Nick)
>
> Therefore Py_SETREF is the winner.
>
> But I also want to recall the objections to it raised in the previous
> discussion:
>
> 1) By analogy with Py_INCREF and Py_DECREF, which increment and decrement
> the reference counter of the object, Py_SETREF looks as if it *sets* the
> reference counter of the object.
>
> 2) By analogy with PyList_SET_ITEM, PyTuple_SET_ITEM, PyCell_SET, etc., it
> is not expected that Py_SETREF decrements the refcounter of the old value
> before overwriting it.
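For concreteness, the replace-and-decref semantics the poll is naming -- a sketch of the macro shape under discussion (not necessarily the exact definition that will land in CPython):

    /* Store a new reference in a slot, then safely drop the old one.
       Going through a temporary means the slot never points at a
       half-destroyed object, even if the old object's destructor
       runs arbitrary code. */
    #define Py_SETREF(op, op2)                      \
        do {                                        \
            PyObject *_py_tmp = (PyObject *)(op);   \
            (op) = (op2);                           \
            Py_DECREF(_py_tmp);                     \
        } while (0)

    /* X-variant for slots that may legitimately hold NULL, mirroring
       Py_XDECREF (cf. the Py_XREPLACE spelling suggested below). */
    #define Py_XSETREF(op, op2)                     \
        do {                                        \
            PyObject *_py_tmp = (PyObject *)(op);   \
            (op) = (op2);                           \
            Py_XDECREF(_py_tmp);                    \
        } while (0)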
Avoiding those misleading associations is a good argument in favour of Py_REPLACE over Py_SETREF - they didn't occur to me before casting my votes, and I can definitely see them causing confusion in the future.

So perhaps the combination that makes the most sense is to add Py_REPLACE (uses Py_DECREF on destination) & Py_XREPLACE (uses Py_XDECREF on destination) to the existing Py_CLEAR?

Regards,
Nick.

--
Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From random832 at fastmail.com  Mon Dec 21 10:39:09 2015
From: random832 at fastmail.com (Random832)
Date: Mon, 21 Dec 2015 10:39:09 -0500
Subject: [Python-Dev] Change the repr for datetime.timedelta (was Re: Asynchronous context manager in a typical network server)
References:
Message-ID: <87h9jbdd5u.fsf@fastmail.com>

Guido van Rossum writes:

> I'm sure that one often catches people by surprise. However, I don't
> think we can fix that one without also fixing the values of the
> attributes -- in that example days is -1 and seconds is 86340 (which
> will *also* catch people by surprise). And changing that would be
> much, much harder for backwards compatibility reasons -- we'd have to
> set days to 0 and seconds to -60, and suddenly we have a much murkier
> invariant, instead of the crisp
>
>     0 <= microseconds < 1000000
>     0 <= seconds < 60

I don't really see it as murky:

    0 <= abs(microseconds) < 1000000
    0 <= abs(seconds) < 60
    (days <= 0) == (seconds <= 0) == (microseconds <= 0)
    (days >= 0) == (seconds >= 0) == (microseconds >= 0)

The latter are more easily phrased in English as "all nonzero attributes have the same sign". I think the current behavior is rather as if -1.1 were represented as "-2+.9". The attributes probably can't be fixed without breaking backwards compatibility, though. How about "-timedelta(0, 60)"?

From guido at python.org  Mon Dec 21 10:47:03 2015
From: guido at python.org (Guido van Rossum)
Date: Mon, 21 Dec 2015 07:47:03 -0800
Subject: [Python-Dev] Change the repr for datetime.timedelta (was Re: Asynchronous context manager in a typical network server)
In-Reply-To: <87h9jbdd5u.fsf@fastmail.com>
References: <87h9jbdd5u.fsf@fastmail.com>
Message-ID:

We're now thoroughly in python-ideas land.

On Mon, Dec 21, 2015 at 7:39 AM, Random832 wrote:

> Guido van Rossum writes:
> > I'm sure that one often catches people by surprise. However, I don't
> > think we can fix that one without also fixing the values of the
> > attributes -- in that example days is -1 and seconds is 86340 (which
> > will *also* catch people by surprise). And changing that would be
> > much, much harder for backwards compatibility reasons -- we'd have to
> > set days to 0 and seconds to -60, and suddenly we have a much murkier
> > invariant, instead of the crisp
> >
> >     0 <= microseconds < 1000000
> >     0 <= seconds < 60
>
> I don't really see it as murky:
>
>     0 <= abs(microseconds) < 1000000
>     0 <= abs(seconds) < 60
>     (days <= 0) == (seconds <= 0) == (microseconds <= 0)
>     (days >= 0) == (seconds >= 0) == (microseconds >= 0)
>
> The latter are more easily phrased in English as "all nonzero
> attributes have the same sign". I think the current behavior is
> rather as if -1.1 were represented as "-2+.9". The attributes
> probably can't be fixed without breaking backwards
> compatibility, though. How about "-timedelta(0, 60)"?
--
--Guido van Rossum (python.org/~guido)
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From shettyrajneesh at yahoo.com.au  Mon Dec 21 12:51:21 2015
From: shettyrajneesh at yahoo.com.au (Rajneesh N. Shetty)
Date: Mon, 21 Dec 2015 17:51:21 +0000 (UTC)
Subject: [Python-Dev] Python-Dev Digest, Vol 149, Issue 38
References: <1806113239.1867472.1450720282090.JavaMail.yahoo.ref@mail.yahoo.com>
Message-ID: <1806113239.1867472.1450720282090.JavaMail.yahoo@mail.yahoo.com>

hello everybody, I am new to this group. Always knew that you were good for a long time, but never really used your technology as such, because my father who passed away recently never wanted me to own a computer in his lifetime. He relented in 2003-04 (cannot remember exactly), due to my pestering him (he was a mechanical+electrical, village boy by background (farmers)), but he bought me a HP Pavilion with two Intel CPU's (not dual-core) running MS Media Centre edition (full NTFS implementation was only released in this version & first in India). I took it to Australia with me & finally gave it away after it refused to run certain versions of Linux. Attached is my profile & I can assure you I will need lots of help to familiarise myself with your engine.

regards,
rajneesh

tel : +61402350315
weblog : www.vishagotra.wordpress.com
         www.rns-thoughts.blogspot.com
-------------- next part --------------
A non-text attachment was scrubbed...
Name: rajneesh.pdf
Type: application/pdf
Size: 278094 bytes
Desc: not available
URL:

From steve.dower at python.org  Mon Dec 21 16:57:14 2015
From: steve.dower at python.org (Steve Dower)
Date: Tue, 22 Dec 2015 08:57:14 +1100
Subject: [Python-Dev] New poll about a macro for safe reference replacing
In-Reply-To:
References:
Message-ID:

Was Py_MOVEREF (or MOVE_REF) ever suggested?

Those are valid objections, and now they're raised I remember them from last time. But I don't think they're a huge concern - setting a ref count directly doesn't seem useful anyway, and the compiler/IDE will let you know pretty quick if you put an integer vs a PyObject* there.

Cheers,
Steve

Top-posted from my Windows Phone

-----Original Message-----
From: "Nick Coghlan"
Sent: 12/22/2015 2:39
To: "Serhiy Storchaka"
Cc: "python-dev at python.org"
Subject: Re: [Python-Dev] New poll about a macro for safe reference replacing

On 21 December 2015 at 23:46, Serhiy Storchaka wrote:
> On 16.12.15 16:12, Serhiy Storchaka wrote:
>> Please put your vote (a floating number from -1 to 1 including) for
>> every of proposed name. You also can propose new name.
>
> Thank you all for your votes.
>
> Results of the poll:
>
> Py_SETREF:         +5   = +5 (Victor, Steve, Yury, Brett, Nick) +0 (Ryan, Martin)
> Py_REPLACE_REF:    +2.5 = +2.5 (Ryan, Victor, Steve, Martin) -0 (Nick)
> Py_REPLACE:        +0   = +1 (Martin) -1 (Ryan) +0 (Nick)
> Py_RESET:          0    = +1 (Ryan) -1 (Martin)
> Py_DECREF_REPLACE: -2   = +1 (Ryan, Martin) -3 (Victor, Steve, Nick)
> Py_SET_POINTER, Py_SET_ATTR: -5 (Ryan, Victor, Steve, Martin, Nick)
>
> Therefore Py_SETREF is the winner.
>
> But I also want to recall the objections to it raised in the previous
> discussion:
>
> 1) By analogy with Py_INCREF and Py_DECREF, which increment and decrement the
> reference counter of the object, Py_SETREF looks as if it *sets* the reference
> > 2) By analogy with PyList_SET_ITEM, PyTuple_SET_ITEM, PyCell_SET, etc, it is > not expected that Py_SETREF decrement the refcounter of the old value before > overwriting it. Avoiding those misleading associations is a good argument in favour of Py_REPLACE over Py_SETREF - they didn't occur to me before casting my votes, and I can definitely see them causing confusion in the future. So perhaps the combination that makes the most sense is to add Py_REPLACE (uses Py_DECREF on destination) & Py_XREPLACE (uses Py_XDECREF on destination) to the existing Py_CLEAR? Regards, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia _______________________________________________ Python-Dev mailing list Python-Dev at python.org https://mail.python.org/mailman/listinfo/python-dev Unsubscribe: https://mail.python.org/mailman/options/python-dev/steve.dower%40python.org -------------- next part -------------- An HTML attachment was scrubbed... URL: From chris.barker at noaa.gov Mon Dec 21 17:07:56 2015 From: chris.barker at noaa.gov (Chris Barker) Date: Mon, 21 Dec 2015 14:07:56 -0800 Subject: [Python-Dev] Change the repr for datetime.timedelta (was Re: Asynchronous context manager in a typical network server) In-Reply-To: References: Message-ID: On Sun, Dec 20, 2015 at 2:28 PM, Chris Angelico wrote: > > Would there be value in changing the repr to use keyword arguments? > this thread got long, but it sounds like that won't be worth the backwards compatibility... > Worse, help(datetime.timedelta) in 3.6 doesn't document the > constructor at all. There's no mention of __init__ at all, __new__ has > this useless information: > but this seems to have gotten lost in the shuffle. and aside from there being three data descriptors, there's nothing to > suggest that you construct these things with timedelta(days, seconds, > microseconds). Definitely no indication that you can use other keyword > args. > > Is this something worth fixing, or is it acceptable to drive people to > fuller documentation than help()? > Absolutlye worht fixing! maybe it' sjsut my weird workflow, but I find it very, very useful to use iPython's ? : In [10]: datetime.timedelta? Docstring: Difference between two datetime values.File: ~/miniconda2/lib/python2.7/lib-dynload/datetime.so Type: type and there are a LOT of next-to worthless docstrings in the stdlib -- it would be nice to clean them all up. Is there any reason not to, other than someone having to do the work? -Chris -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov -------------- next part -------------- An HTML attachment was scrubbed... URL: From gvanrossum at gmail.com Mon Dec 21 17:14:13 2015 From: gvanrossum at gmail.com (Guido van Rossum) Date: Mon, 21 Dec 2015 14:14:13 -0800 Subject: [Python-Dev] Change the repr for datetime.timedelta (was Re: Asynchronous context manager in a typical network server) In-Reply-To: References: Message-ID: Would you be able to submit a patch to address the docstring issues? --Guido (mobile) On Dec 21, 2015 2:09 PM, "Chris Barker" wrote: > On Sun, Dec 20, 2015 at 2:28 PM, Chris Angelico wrote: > >> >> Would there be value in changing the repr to use keyword arguments? >> > > this thread got long, but it sounds like that won't be worth the backwards > compatibility... > > >> Worse, help(datetime.timedelta) in 3.6 doesn't document the >> constructor at all. 
From gvanrossum at gmail.com  Mon Dec 21 17:14:13 2015
From: gvanrossum at gmail.com (Guido van Rossum)
Date: Mon, 21 Dec 2015 14:14:13 -0800
Subject: [Python-Dev] Change the repr for datetime.timedelta (was Re:
 Asynchronous context manager in a typical network server)
In-Reply-To:
References:
Message-ID:

Would you be able to submit a patch to address the docstring issues?

--Guido (mobile)

On Dec 21, 2015 2:09 PM, "Chris Barker" wrote:

> and there are a LOT of next-to-worthless docstrings in the stdlib -- it
> would be nice to clean them all up.
>
> Is there any reason not to, other than someone having to do the work?

From gvanrossum at gmail.com  Mon Dec 21 17:15:02 2015
From: gvanrossum at gmail.com (Guido van Rossum)
Date: Mon, 21 Dec 2015 14:15:02 -0800
Subject: [Python-Dev] Change the repr for datetime.timedelta (was Re:
 Asynchronous context manager in a typical network server)
In-Reply-To:
References:
Message-ID:

I still think the repr change to use keywords has a good chance for 3.6.

--Guido (mobile)

On Dec 21, 2015 2:09 PM, "Chris Barker" wrote:

> This thread got long, but it sounds like that won't be worth the
> backwards compatibility...
From abarnert at yahoo.com  Mon Dec 21 19:20:54 2015
From: abarnert at yahoo.com (Andrew Barnert)
Date: Mon, 21 Dec 2015 16:20:54 -0800
Subject: [Python-Dev] Change the repr for datetime.timedelta (was Re:
 Asynchronous context manager in a typical network server)
In-Reply-To:
References:
Message-ID: <974F8BF9-A011-4BEC-A2AE-D9B2EEB6513A@yahoo.com>

On Dec 21, 2015, at 14:07, Chris Barker wrote:
>
> and there are a LOT of next-to-worthless docstrings in the stdlib -- it
> would be nice to clean them all up.
>
> Is there any reason not to, other than someone having to do the work?

Is this just a matter of _datetimemodule.c (and various other things in
the stdlib) not being (completely) argclinicified? Or is there something
hairy about this type (and various other things in the stdlib) that makes
them still useless even with argclinic?

From breamoreboy at yahoo.co.uk  Mon Dec 21 19:43:01 2015
From: breamoreboy at yahoo.co.uk (Mark Lawrence)
Date: Tue, 22 Dec 2015 00:43:01 +0000
Subject: [Python-Dev] New poll about a macro for safe reference replacing
In-Reply-To:
References:
Message-ID:

On 21/12/2015 21:57, Steve Dower wrote:
> Was Py_MOVEREF (or MOVE_REF) ever suggested?
>
> Those are valid objections, and now they're raised I remember them from
> last time. But I don't think they're a huge concern - setting a ref
> count directly doesn't seem useful anyway, and the compiler/IDE will
> let you know pretty quickly if you put an integer vs a PyObject* there.
>
> Cheers,
> Steve

Or Py_SAFEREF or SAFE_REF?

--
My fellow Pythonistas, ask not what our language can do for you, ask
what you can do for our language.

Mark Lawrence

From chris.barker at noaa.gov  Mon Dec 21 21:34:20 2015
From: chris.barker at noaa.gov (Chris Barker - NOAA Federal)
Date: Mon, 21 Dec 2015 18:34:20 -0800
Subject: [Python-Dev] Change the repr for datetime.timedelta (was Re:
 Asynchronous context manager in a typical network server)
In-Reply-To: <974F8BF9-A011-4BEC-A2AE-D9B2EEB6513A@yahoo.com>
References: <974F8BF9-A011-4BEC-A2AE-D9B2EEB6513A@yahoo.com>
Message-ID: <-638490489501544928@unknownmsgid>

>> and there are a LOT of next-to-worthless docstrings in the stdlib --
>> it would be nice to clean them all up.
>>
>> Is there any reason not to, other than someone having to do the work?

And yes, I'd be willing to submit a patch.

> Is this just a matter of _datetimemodule.c (and various other things in
> the stdlib) not being (completely) argclinicified?

But clearly I'll need some help knowing where to add the docs...

-CHB

> Or is there something hairy about this type (and various other things
> in the stdlib) that makes them still useless even with argclinic?
From victor.stinner at gmail.com  Tue Dec 22 03:39:20 2015
From: victor.stinner at gmail.com (Victor Stinner)
Date: Tue, 22 Dec 2015 09:39:20 +0100
Subject: [Python-Dev] Change the repr for datetime.timedelta (was Re:
 Asynchronous context manager in a typical network server)
In-Reply-To:
References:
Message-ID:

repr() with keywords is called a method, no? Like isoformat()

Victor

On Monday, December 21, 2015, Guido van Rossum wrote:

> I still think the repr change to use keywords has a good chance for 3.6.
>
> --Guido (mobile)
From rosuav at gmail.com  Tue Dec 22 03:49:13 2015
From: rosuav at gmail.com (Chris Angelico)
Date: Tue, 22 Dec 2015 19:49:13 +1100
Subject: [Python-Dev] Change the repr for datetime.timedelta (was Re:
 Asynchronous context manager in a typical network server)
In-Reply-To:
References:
Message-ID:

On Tue, Dec 22, 2015 at 7:39 PM, Victor Stinner wrote:
> On Monday, December 21, 2015, Guido van Rossum wrote:
>>
>> I still think the repr change to use keywords has a good chance for
>> 3.6.
>
> repr() with keywords is called a method, no? Like isoformat()

Not keyword arguments - the proposal is to change the repr from one
format to another. Currently, the repr indicates a constructor call
using positional arguments:

>>> datetime.timedelta(1)
datetime.timedelta(1)
>>> datetime.timedelta(1,2)
datetime.timedelta(1, 2)
>>> datetime.timedelta(1,2,3)
datetime.timedelta(1, 2, 3)
>>> datetime.timedelta(1,2,3,4)
datetime.timedelta(1, 2, 4003)

The proposal is to make it show keyword args instead:

>>> datetime.timedelta(days=1)
datetime.timedelta(days=1)
>>> datetime.timedelta(days=1,seconds=2)
datetime.timedelta(days=1, seconds=2)
>>> datetime.timedelta(days=1,seconds=2,microseconds=3)
datetime.timedelta(days=1, seconds=2, microseconds=3)
>>> datetime.timedelta(days=1,seconds=2,microseconds=3,milliseconds=4)
datetime.timedelta(days=1, seconds=2, microseconds=4003)

ChrisA

From storchaka at gmail.com  Tue Dec 22 04:50:22 2015
From: storchaka at gmail.com (Serhiy Storchaka)
Date: Tue, 22 Dec 2015 11:50:22 +0200
Subject: [Python-Dev] New poll about a macro for safe reference replacing
In-Reply-To:
References:
Message-ID:

On 21.12.15 17:37, Nick Coghlan wrote:
> Avoiding those misleading associations is a good argument in favour of
> Py_REPLACE over Py_SETREF - they didn't occur to me before casting my
> votes, and I can definitely see them causing confusion in the future.
>
> So perhaps the combination that makes the most sense is to add
> Py_REPLACE (uses Py_DECREF on destination) & Py_XREPLACE (uses
> Py_XDECREF on destination) to the existing Py_CLEAR?

And we return to where we started. Although I personally prefer
Py_REPLACE/Py_XREPLACE, I'm afraid that using them would look like I
just ignored the results of the poll. Because Py_SETREF looks good to
most developers at first glance, I hope it will not lead to confusion
in the future.

If there are no new objections, I will commit the trivial auto-generated
patch today and will provide a patch that covers more non-trivial cases.
Now is better than never, and we have been bikeshedding this too long
for "right now".

From storchaka at gmail.com  Tue Dec 22 04:58:23 2015
From: storchaka at gmail.com (Serhiy Storchaka)
Date: Tue, 22 Dec 2015 11:58:23 +0200
Subject: [Python-Dev] New poll about a macro for safe reference replacing
In-Reply-To:
References:
Message-ID:

On 21.12.15 23:57, Steve Dower wrote:
> Was Py_MOVEREF (or MOVE_REF) ever suggested?

That would be a nice name. The macro moves the ownership. But I think
it's too late. Otherwise we'll never finish the bikeshedding.

From random832 at fastmail.com  Tue Dec 22 11:03:03 2015
From: random832 at fastmail.com (Random832)
Date: Tue, 22 Dec 2015 11:03:03 -0500
Subject: [Python-Dev] New poll about a macro for safe reference replacing
References:
Message-ID: <874mfatqrs.fsf@fastmail.com>

Nick Coghlan writes:
> Avoiding those misleading associations is a good argument in favour of
> Py_REPLACE over Py_SETREF - they didn't occur to me before casting my
> votes, and I can definitely see them causing confusion in the future.
>
> So perhaps the combination that makes the most sense is to add
> Py_REPLACE (uses Py_DECREF on destination) & Py_XREPLACE (uses
> Py_XDECREF on destination) to the existing Py_CLEAR?

Is there a strong reason to have an X/plain pair? Py_CLEAR doesn't seem
to have one. This wasn't a subject of the poll.
From meadori at gmail.com  Tue Dec 22 11:36:23 2015
From: meadori at gmail.com (Meador Inge)
Date: Tue, 22 Dec 2015 10:36:23 -0600
Subject: [Python-Dev] New poll about a macro for safe reference replacing
In-Reply-To:
References:
Message-ID:

On Tue, Dec 22, 2015 at 3:58 AM, Serhiy Storchaka wrote:

> On 21.12.15 23:57, Steve Dower wrote:
>
>> Was Py_MOVEREF (or MOVE_REF) ever suggested?
>
> That would be a nice name. The macro moves the ownership. But I think
> it's too late. Otherwise we'll never finish the bikeshedding.

FWIW, I like this name the best. It is increasingly popular for
languages to talk about moving ownership (e.g. move semantics in C++,
Rust, etc...).

--
# Meador

From benjamin at python.org  Tue Dec 22 19:35:01 2015
From: benjamin at python.org (Benjamin Peterson)
Date: Tue, 22 Dec 2015 16:35:01 -0800
Subject: [Python-Dev] Typo in PEP-0423
In-Reply-To: <20151219190253.GA3963@DATLANDREWK.local>
References: <20151219190253.GA3963@DATLANDREWK.local>
Message-ID: <1450830901.2884827.474506697.57D6A18B@webmail.messagingengine.com>

We've played around with robots.txt, but it's still useful for old docs
to be indexed (e.g., for removed features); we just need to figure out
how to get them deprecated in results. I wonder if <link ref="canonical">
in the old docs would help.

On Sat, Dec 19, 2015, at 11:02, A.M. Kuchling wrote:
> On Sat, Dec 19, 2015 at 08:55:26PM +1000, Nick Coghlan wrote:
> > Even once the new docs are in place, getting them to the top of search
> > results ahead of archived material that may be years out of date is
> > likely to still be a challenge - for example, even considering just
> > the legacy distutils docs, the "3.1" and "2" docs appear ...
>
> We probably need to update https://docs.python.org/robots.txt, which
> currently contains:
>
> # Prevent development and old documentation from showing up in search
> # results.
> User-agent: *
> # Disallow: /dev
> Disallow: /release
>
> The intent was to allow the latest version of the docs to be crawled.
> Unfortunately, with the current hierarchy we'd have to disallow each
> version, e.g.
>
> Disallow: /2.6/*
> Disallow: /3.0/*
> Disallow: /3.1/*
>
> And we'd need to update it for each new major release.
>
> --amk

From storchaka at gmail.com  Wed Dec 23 09:50:21 2015
From: storchaka at gmail.com (Serhiy Storchaka)
Date: Wed, 23 Dec 2015 16:50:21 +0200
Subject: [Python-Dev] New poll about a macro for safe reference replacing
In-Reply-To:
References:
Message-ID:

On 22.12.15 18:36, Meador Inge wrote:
> On Tue, Dec 22, 2015 at 3:58 AM, Serhiy Storchaka wrote:
>
>     On 21.12.15 23:57, Steve Dower wrote:
>
>         Was Py_MOVEREF (or MOVE_REF) ever suggested?
>
>     That would be a nice name. The macro moves the ownership. But I
>     think it's too late. Otherwise we'll never finish the bikeshedding.
>
> FWIW, I like this name the best. It is increasingly popular for
> languages to talk about moving ownership (e.g. move semantics in C++,
> Rust, etc...).

Oh, I'm confused. Should I make a new poll? With new voters Py_MOVEREF
could get more votes than Py_SETREF.
From rosuav at gmail.com Wed Dec 23 09:52:15 2015 From: rosuav at gmail.com (Chris Angelico) Date: Thu, 24 Dec 2015 01:52:15 +1100 Subject: [Python-Dev] New poll about a macro for safe reference replacing In-Reply-To: References: Message-ID: On Thu, Dec 24, 2015 at 1:50 AM, Serhiy Storchaka wrote: > Oh, I'm confused. Should I make a new poll? With new voters Py_MOVEREF can > get more votes than Py_SETREF. I suggest cutting off the bikeshedding. Both of these options have reasonable support. Pick either and run with it, and don't worry about another vote. ChrisA From storchaka at gmail.com Wed Dec 23 10:02:20 2015 From: storchaka at gmail.com (Serhiy Storchaka) Date: Wed, 23 Dec 2015 17:02:20 +0200 Subject: [Python-Dev] New poll about a macro for safe reference replacing In-Reply-To: References: Message-ID: On 23.12.15 16:52, Chris Angelico wrote: > On Thu, Dec 24, 2015 at 1:50 AM, Serhiy Storchaka wrote: >> Oh, I'm confused. Should I make a new poll? With new voters Py_MOVEREF can >> get more votes than Py_SETREF. > > I suggest cutting off the bikeshedding. Both of these options have > reasonable support. Pick either and run with it, and don't worry about > another vote. This would be a voluntarism. From ncoghlan at gmail.com Wed Dec 23 10:08:05 2015 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 24 Dec 2015 01:08:05 +1000 Subject: [Python-Dev] New poll about a macro for safe reference replacing In-Reply-To: References: Message-ID: On 24 December 2015 at 00:50, Serhiy Storchaka wrote: > On 22.12.15 18:36, Meador Inge wrote: >> >> On Tue, Dec 22, 2015 at 3:58 AM, Serhiy Storchaka > > wrote: >> >> On 21.12.15 23:57, Steve Dower wrote: >> >> Was Py_MOVEREF (or MOVE_REF) ever suggested? >> >> >> This would be nice name. The macro moves the ownership. But I think >> it's too late. Otherwise we'll never finish the bikeshedding. >> >> >> FWIW, I like this name the best. It is increasingly popular for >> languages to talk about moving ownership (e.g. move semantics in C++, >> Rust, etc...). > > > Oh, I'm confused. Should I make a new poll? With new voters Py_MOVEREF can > get more votes than Py_SETREF. Within the Python context, the analogy from setattr and setitem at the Python level to Py_SETREF at the C level is pretty solid, so it likely makes sense to run with that as "good enough". In regards to Py_MOVEREF, while other languages are starting to pay more attention to "MOVE" semantics, we haven't really done so in Python yet (moving references in Rust isn't the same thing we're talking about here - this is just normal runtime reference counting). Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From meadori at gmail.com Wed Dec 23 12:29:40 2015 From: meadori at gmail.com (Meador Inge) Date: Wed, 23 Dec 2015 11:29:40 -0600 Subject: [Python-Dev] New poll about a macro for safe reference replacing In-Reply-To: References: Message-ID: On Wed, Dec 23, 2015 at 9:08 AM, Nick Coghlan wrote: > Within the Python context, the analogy from setattr and setitem at the > Python level to Py_SETREF at the C level is pretty solid, so it likely > makes sense to run with that as "good enough". > > In regards to Py_MOVEREF, while other languages are starting to pay > more attention to "MOVE" semantics, we haven't really done so in > Python yet (moving references in Rust isn't the same thing we're > talking about here - this is just normal runtime reference counting). > Oh. I misunderstood the intent of the macro before (from "The macro moves the ownership"). You are right. 
Move semantics in C++ and Rust are different. In this case the ownership
is not being moved in the same sense as in those languages.

I withdraw my vote for Py_MOVEREF. Py_SETREF is fine.

-- Meador

From chris.jerdonek at gmail.com  Wed Dec 23 16:04:09 2015
From: chris.jerdonek at gmail.com (Chris Jerdonek)
Date: Wed, 23 Dec 2015 13:04:09 -0800
Subject: [Python-Dev] Typo in PEP-0423
In-Reply-To: <1450830901.2884827.474506697.57D6A18B@webmail.messagingengine.com>
References: <20151219190253.GA3963@DATLANDREWK.local>
 <1450830901.2884827.474506697.57D6A18B@webmail.messagingengine.com>
Message-ID:

On Tue, Dec 22, 2015 at 4:35 PM, Benjamin Peterson wrote:
> We've played around with robots.txt, but it's still useful for old docs
> to be indexed (e.g., for removed features); we just need to figure out
> how to get them deprecated in results. I wonder if
> <link ref="canonical"> in the old docs would help.

Yes, this is probably the correct approach (though it's rel="canonical"):

https://support.google.com/webmasters/answer/139066?hl=en

It's always been an inconvenience when Google displays the docs for
different, old versions (3.2, 3.3, etc) -- seemingly at random, and
sometimes instead of the newest version. Fortunately, this seems to be
improving over time.

By using rel="canonical", you would have control over this and can
signal to Google to display only the newest, stable version of a given
doc. This would probably have other positive benefits, like
consolidating the "search juice" onto one page so it's no longer spread
thinly across multiple versions.

There would still be a question of how you want to handle 2 versus 3.

--Chris

From stephen at xemacs.org  Thu Dec 24 00:13:01 2015
From: stephen at xemacs.org (Stephen J. Turnbull)
Date: Thu, 24 Dec 2015 14:13:01 +0900
Subject: [Python-Dev] New poll about a macro for safe reference replacing
In-Reply-To:
References:
Message-ID: <22139.32477.735780.999755@turnbull.sk.tsukuba.ac.jp>

Serhiy Storchaka writes:

> This would be a voluntarism.
You did due diligence, took the poll, and got additional information as
well. It is *very* clear to me at least that you are paying full
attention to the poll. Yes, the bikeshedding should end, but I think you
should do as you think best in light of all the information. That is,
don't worry about the *exact numerical* results of the poll if they
conflict with your best judgment.

He-who-does-the-work-makes-the-decisions-ly y'rs,

From chris at simplistix.co.uk  Thu Dec 24 06:17:58 2015
From: chris at simplistix.co.uk (Chris Withers)
Date: Thu, 24 Dec 2015 11:17:58 +0000
Subject: [Python-Dev] dynamic linking, libssl.1.0.0.dylib,
 libcrypto.1.0.0.dylib and Mac OS X
Message-ID: <567BD466.4020308@simplistix.co.uk>

Hi All,

I hit this every time I install packages on Mac OS X that use libssl; it
looks like extensions are built linking to .dylibs that are not
resolvable when the library is actually used:

>>> from OpenSSL import SSL
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "python2.7/site-packages/OpenSSL/__init__.py", line 8, in <module>
    from OpenSSL import rand, crypto, SSL
  File "python2.7/site-packages/OpenSSL/rand.py", line 11, in <module>
    from OpenSSL._util import (
  File "python2.7/site-packages/OpenSSL/_util.py", line 6, in <module>
    from cryptography.hazmat.bindings.openssl.binding import Binding
  File "python2.7/site-packages/cryptography/hazmat/bindings/openssl/binding.py", line 13, in <module>
    from cryptography.hazmat.bindings._openssl import ffi, lib
ImportError: dlopen(python2.7/site-packages/cryptography/hazmat/bindings/_openssl.so, 2): Library not loaded: libssl.1.0.0.dylib
  Referenced from: python2.7/site-packages/cryptography/hazmat/bindings/_openssl.so
  Reason: image not found

Looking at what this links to, I see:

$ otool -L lib/python2.7/site-packages/cryptography/hazmat/bindings/_openssl.so
lib/python2.7/site-packages/cryptography/hazmat/bindings/_openssl.so:
        libssl.1.0.0.dylib (compatibility version 1.0.0, current version 1.0.0)

Whereas the functional _ssl that ships with Python distributions on Mac
OS X looks like this:

$ otool -L .../lib/python2.7/lib-dynload/_ssl.so
.../lib/python2.7/lib-dynload/_ssl.so:
        @loader_path/../../libssl.1.0.0.dylib (compatibility version 1.0.0, current version 1.0.0)

What's going wrong here, and what can be done differently to have
'pip install package_using_libssl' build a usable binary installation?

Here are a couple of examples of this problem in the wild:

https://github.com/alekstorm/backports.ssl/issues/9
http://stackoverflow.com/questions/32978365/how-do-i-run-psycopg2-on-el-capitan-without-hitting-a-libssl-error
https://github.com/psycopg/psycopg2/issues/385

I'm well out of my depth here, I just want to use these libraries, but
I'm happy to try and do the work to make the world a better place for
Mac users of these libraries...
Chris

From cory at lukasa.co.uk  Thu Dec 24 09:36:32 2015
From: cory at lukasa.co.uk (Cory Benfield)
Date: Thu, 24 Dec 2015 14:36:32 +0000
Subject: [Python-Dev] dynamic linking, libssl.1.0.0.dylib,
 libcrypto.1.0.0.dylib and Mac OS X
In-Reply-To: <567BD466.4020308@simplistix.co.uk>
References: <567BD466.4020308@simplistix.co.uk>
Message-ID: <0451A2DF-1067-4ECA-A298-F590881DE68D@lukasa.co.uk>

> On 24 Dec 2015, at 11:17, Chris Withers wrote:
>
> Hi All,
>
> Here are a couple of examples of this problem in the wild:
>
> https://github.com/alekstorm/backports.ssl/issues/9
> http://stackoverflow.com/questions/32978365/how-do-i-run-psycopg2-on-el-capitan-without-hitting-a-libssl-error
> https://github.com/psycopg/psycopg2/issues/385
>
> I'm well out of my depth here, I just want to use these libraries, but
> I'm happy to try and do the work to make the world a better place for
> Mac users of these libraries...
>
> Chris

Chris,

I think this is actually nothing to do with Python itself, and
everything to do with Mac OS X and the neutered way it ships OpenSSL.
Given that the library you're actually having difficulty with is
cryptography, I recommend using their mailing list[0] to ask your
question again. I happen to know that there have been a few problems
with OS X and OpenSSL since El Capitan, so you're probably not the first
to encounter them.

Cory

[0]: https://mail.python.org/mailman/listinfo/cryptography-dev

From chris at simplistix.co.uk  Thu Dec 24 09:40:15 2015
From: chris at simplistix.co.uk (Chris Withers)
Date: Thu, 24 Dec 2015 14:40:15 +0000
Subject: [Python-Dev] dynamic linking, libssl.1.0.0.dylib,
 libcrypto.1.0.0.dylib and Mac OS X
In-Reply-To: <0451A2DF-1067-4ECA-A298-F590881DE68D@lukasa.co.uk>
References: <567BD466.4020308@simplistix.co.uk>
 <0451A2DF-1067-4ECA-A298-F590881DE68D@lukasa.co.uk>
Message-ID: <567C03CF.1070700@simplistix.co.uk>

On 24/12/2015 14:36, Cory Benfield wrote:
> I think this is actually nothing to do with Python itself, and
> everything to do with Mac OS X and the neutered way it ships OpenSSL.
> Given that the library you're actually having difficulty with is
> cryptography, I recommend using their mailing list[0] to ask your
> question again. I happen to know that there have been a few problems
> with OS X and OpenSSL since El Capitan, so you're probably not the
> first to encounter them.

Hi Cory,

I'm not so sure: the _ssl included in a Python distribution works and
does the right thing; it's third-party packages built on the machines
that appear to have the problem.

How does Python itself "get it right", and how could psycopg2 and
cryptography mirror that?

This feels like a dynamic linking problem rather than something
ssl-specific.

cheers,

Chris
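As an aside, the copy of OpenSSL a given interpreter is actually bound
to can be checked from the ssl module itself, which helps when comparing
a python.org build with a source-built extension. The version string
below is illustrative of a python.org build with its bundled 1.0.2
library; a build against Apple's system copy reports the ancient 0.9.8
line mentioned in the next message:

>>> import ssl
>>> ssl.OPENSSL_VERSION
'OpenSSL 1.0.2e 3 Dec 2015'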
From cory at lukasa.co.uk  Thu Dec 24 11:27:43 2015
From: cory at lukasa.co.uk (Cory Benfield)
Date: Thu, 24 Dec 2015 16:27:43 +0000
Subject: [Python-Dev] dynamic linking, libssl.1.0.0.dylib,
 libcrypto.1.0.0.dylib and Mac OS X
In-Reply-To: <567C03CF.1070700@simplistix.co.uk>
References: <567BD466.4020308@simplistix.co.uk>
 <0451A2DF-1067-4ECA-A298-F590881DE68D@lukasa.co.uk>
 <567C03CF.1070700@simplistix.co.uk>
Message-ID: <2442CEA0-A7D1-41DE-B996-E0BADC149DF1@lukasa.co.uk>

> On 24 Dec 2015, at 14:40, Chris Withers wrote:
>
> Hi Cory,
>
> I'm not so sure: the _ssl included in a Python distribution works and
> does the right thing; it's third-party packages built on the machines
> that appear to have the problem.
>
> How does Python itself "get it right", and how could psycopg2 and
> cryptography mirror that?
>
> This feels like a dynamic linking problem rather than something
> ssl-specific.

Chris,

Nope, it's SSL-specific. OS X El Capitan ships a version of OpenSSL
(specifically, OpenSSL 0.9.8zg). The library for this is where you'd
expect to find it (/usr/lib/libssl.dylib): however, it ships without
header files (that is, there is no /usr/include/ssl directory).
Python distributions from python.org get around this problem by
compiling and linking against, and including in the distribution, their
own copy of libssl. This in principle works fine.

Cryptography ordinarily does this too. If you use a remotely modern pip,
"pip install cryptography" on OS X will install a Python wheel. The
wheel is a binary distribution, and it too includes a compiled copy of
libssl. For this reason, I'd argue that cryptography *does* get it right
in the mainline case: a modern Python installation should get a
perfectly functional copy of cryptography without requiring a compiler
or encountering any problems like the one you're discussing.

The situations where it can go wrong are where cryptography is installed
as a source distribution. This will require compilation on install, and
here things start to get really tricky. The basic upshot of it, though,
is that the OpenSSL shipped with OS X itself is simply not supported by
cryptography: it's ancient, and Apple doesn't want people to use it, as
shown by the fact that they don't ship development headers for it. If
you insist on installing cryptography from source, you'll need to follow
their installation instructions to do that:
https://cryptography.io/en/latest/installation/#building-cryptography-on-os-x

The TL;DR is: for cryptography on OS X, you either need a modern enough
Python to support wheels, or you need to provide your own OpenSSL.

Cory

From status at bugs.python.org  Fri Dec 25 12:08:35 2015
From: status at bugs.python.org (Python tracker)
Date: Fri, 25 Dec 2015 18:08:35 +0100 (CET)
Subject: [Python-Dev] Summary of Python tracker Issues
Message-ID: <20151225170835.80ED156672@psf.upfronthosting.co.za>

ACTIVITY SUMMARY (2015-12-18 - 2015-12-25)
Python tracker at http://bugs.python.org/

To view or respond to any of the issues listed below, click on the issue.
Do NOT respond to this message.
Issues counts and deltas: open 5345 (+21) closed 32365 (+24) total 37710 (+45) Open issues with patches: 2351 Issues opened (38) ================== #24840: implement bool conversion for enums to prevent odd edge case http://bugs.python.org/issue24840 reopened by gregory.p.smith #25908: ProcessPoolExecutor deadlock on KeyboardInterrupt http://bugs.python.org/issue25908 opened by jacksontj #25909: Incorrect documentation for PyMapping_Items and like http://bugs.python.org/issue25909 opened by serhiy.storchaka #25910: Fixing links in documentation http://bugs.python.org/issue25910 opened by SilentGhost #25911: Regression: os.walk now using os.scandir() breaks bytes filena http://bugs.python.org/issue25911 opened by mont29 #25912: Use __spec__.__name__ instead of __name__ in the docs where ap http://bugs.python.org/issue25912 opened by Antony.Lee #25913: base64.a85decode adobe flag incorrectly utilizes <~ as a marke http://bugs.python.org/issue25913 opened by Soren Solari #25916: wave module readframes now returns bytes not str http://bugs.python.org/issue25916 opened by Iber Parodi Siri #25917: Fixing howto links in docs http://bugs.python.org/issue25917 opened by SilentGhost #25918: AssertionError in lib2to3 on 2.7.11 Windows http://bugs.python.org/issue25918 opened by schlamar #25919: htp.client PUT method ignores error responses sent immediatly http://bugs.python.org/issue25919 opened by Wiktor Niesiobedzki #25920: PyOS_AfterFork should reset socketmodule's lock http://bugs.python.org/issue25920 opened by emptysquare #25922: canceling a repair install breaks the ability to uninstall, re http://bugs.python.org/issue25922 opened by Kyle S (MrStonedOne) #25923: More const char http://bugs.python.org/issue25923 opened by serhiy.storchaka #25924: investigate if getaddrinfo(3) on OSX is thread-safe http://bugs.python.org/issue25924 opened by ronaldoussoren #25925: Coverage support for CPython 2 http://bugs.python.org/issue25925 opened by alecsandru.patrascu #25926: problems with "times" keyword in itertools.repeat http://bugs.python.org/issue25926 opened by Thomas Feldmann #25927: add dir_fd for mkstemp, and also maybe to all tempfile.* http://bugs.python.org/issue25927 opened by mmarkk #25928: Add Decimal.as_integer_ratio() http://bugs.python.org/issue25928 opened by johnwalker #25930: Document that os.remove is semantically identical to os.unlink http://bugs.python.org/issue25930 opened by Anthony Sottile #25931: os.fork() command distributed in windows Python27 (in SocketSe http://bugs.python.org/issue25931 opened by Sam Lobel #25933: Unhandled exception (TypeError) with ftplib in function retrbi http://bugs.python.org/issue25933 opened by Sam Adams #25934: ICC compiler: ICC treats denormal floating point numbers as 0. http://bugs.python.org/issue25934 opened by skrah #25935: OrderedDict prevents garbage collection if a circulary referen http://bugs.python.org/issue25935 opened by charettes #25936: Improve FastChildWatcher with WNOWAIT? http://bugs.python.org/issue25936 opened by WGH #25937: DIfference between utf8 and utf-8 when i define python source http://bugs.python.org/issue25937 opened by ?????? 
#25939: _ssl.enum_certificates() fails with ERROR_ACCESS_DENIED if pyt http://bugs.python.org/issue25939 opened by Chi Hsuan Yen #25940: SSL tests failed due to expired svn.python.org SSL certificate http://bugs.python.org/issue25940 opened by Chi Hsuan Yen #25941: Add 'How to Review a Patch' section to devguide http://bugs.python.org/issue25941 opened by Winterflower #25942: subprocess.call SIGKILLs too liberally http://bugs.python.org/issue25942 opened by Mike Pomraning #25943: Integer overflow in _bsddb leads to heap corruption http://bugs.python.org/issue25943 opened by Ned Williamson #25945: Type confusion in partial_setstate and partial_call leads to m http://bugs.python.org/issue25945 opened by Ned Williamson #25946: configure should pick /usr/bin/g++ automatically if present http://bugs.python.org/issue25946 opened by krichter #25947: Installation problem http://bugs.python.org/issue25947 opened by camilleri.jon at gmail.com #25948: Invalid MIME encoding generated by email.mime (line too long) http://bugs.python.org/issue25948 opened by vog #25949: Lazy creation of __dict__ in OrderedDict http://bugs.python.org/issue25949 opened by serhiy.storchaka #25951: SSLSocket.sendall() does not return None on success like socke http://bugs.python.org/issue25951 opened by ProgVal #25952: code_context not available in exec() http://bugs.python.org/issue25952 opened by Grzegorz Kraso?? Most recent 15 issues with no replies (15) ========================================== #25952: code_context not available in exec() http://bugs.python.org/issue25952 #25951: SSLSocket.sendall() does not return None on success like socke http://bugs.python.org/issue25951 #25949: Lazy creation of __dict__ in OrderedDict http://bugs.python.org/issue25949 #25948: Invalid MIME encoding generated by email.mime (line too long) http://bugs.python.org/issue25948 #25946: configure should pick /usr/bin/g++ automatically if present http://bugs.python.org/issue25946 #25943: Integer overflow in _bsddb leads to heap corruption http://bugs.python.org/issue25943 #25942: subprocess.call SIGKILLs too liberally http://bugs.python.org/issue25942 #25937: DIfference between utf8 and utf-8 when i define python source http://bugs.python.org/issue25937 #25936: Improve FastChildWatcher with WNOWAIT? http://bugs.python.org/issue25936 #25935: OrderedDict prevents garbage collection if a circulary referen http://bugs.python.org/issue25935 #25934: ICC compiler: ICC treats denormal floating point numbers as 0. 
http://bugs.python.org/issue25934 #25924: investigate if getaddrinfo(3) on OSX is thread-safe http://bugs.python.org/issue25924 #25923: More const char http://bugs.python.org/issue25923 #25917: Fixing howto links in docs http://bugs.python.org/issue25917 #25910: Fixing links in documentation http://bugs.python.org/issue25910 Most recent 15 issues waiting for review (15) ============================================= #25949: Lazy creation of __dict__ in OrderedDict http://bugs.python.org/issue25949 #25945: Type confusion in partial_setstate and partial_call leads to m http://bugs.python.org/issue25945 #25942: subprocess.call SIGKILLs too liberally http://bugs.python.org/issue25942 #25941: Add 'How to Review a Patch' section to devguide http://bugs.python.org/issue25941 #25939: _ssl.enum_certificates() fails with ERROR_ACCESS_DENIED if pyt http://bugs.python.org/issue25939 #25933: Unhandled exception (TypeError) with ftplib in function retrbi http://bugs.python.org/issue25933 #25925: Coverage support for CPython 2 http://bugs.python.org/issue25925 #25923: More const char http://bugs.python.org/issue25923 #25919: htp.client PUT method ignores error responses sent immediatly http://bugs.python.org/issue25919 #25917: Fixing howto links in docs http://bugs.python.org/issue25917 #25916: wave module readframes now returns bytes not str http://bugs.python.org/issue25916 #25913: base64.a85decode adobe flag incorrectly utilizes <~ as a marke http://bugs.python.org/issue25913 #25911: Regression: os.walk now using os.scandir() breaks bytes filena http://bugs.python.org/issue25911 #25910: Fixing links in documentation http://bugs.python.org/issue25910 #25909: Incorrect documentation for PyMapping_Items and like http://bugs.python.org/issue25909 Top 10 most discussed issues (10) ================================= #4709: Mingw-w64 and python on windows x64 http://bugs.python.org/issue4709 13 msgs #25911: Regression: os.walk now using os.scandir() breaks bytes filena http://bugs.python.org/issue25911 12 msgs #25928: Add Decimal.as_integer_ratio() http://bugs.python.org/issue25928 10 msgs #19475: Add timespec optional flag to datetime isoformat() to choose t http://bugs.python.org/issue19475 7 msgs #21579: Python 3.4: tempfile.close attribute does not work http://bugs.python.org/issue21579 7 msgs #25848: Tkinter tests failed on Windows buildbots http://bugs.python.org/issue25848 7 msgs #25919: htp.client PUT method ignores error responses sent immediatly http://bugs.python.org/issue25919 6 msgs #25930: Document that os.remove is semantically identical to os.unlink http://bugs.python.org/issue25930 6 msgs #12484: The Py_InitModule functions no longer exist, but remain in the http://bugs.python.org/issue12484 5 msgs #25933: Unhandled exception (TypeError) with ftplib in function retrbi http://bugs.python.org/issue25933 5 msgs Issues closed (23) ================== #17868: pprint long non-printable bytes as hexdump http://bugs.python.org/issue17868 closed by serhiy.storchaka #20782: base64 module docs do not use the terms 'bytes' and 'string' c http://bugs.python.org/issue20782 closed by r.david.murray #22227: Simplify tarfile iterator http://bugs.python.org/issue22227 closed by serhiy.storchaka #24103: Use after free in xmlparser_setevents (1) http://bugs.python.org/issue24103 closed by serhiy.storchaka #24580: Wrong or missing exception when compiling regexes with recursi http://bugs.python.org/issue24580 closed by serhiy.storchaka #25421: Make __sizeof__ for builtin types more subclass friendly 
http://bugs.python.org/issue25421  closed by serhiy.storchaka

#25766: __bytes__ doesn't work for str subclasses
http://bugs.python.org/issue25766  closed by serhiy.storchaka

#25827: Support ICC in configure
http://bugs.python.org/issue25827  closed by zach.ware

#25844: Pylauncher, launcher.c: Assigning NULL to a pointer instead of
http://bugs.python.org/issue25844  closed by serhiy.storchaka

#25860: os.fwalk() silently skips remaining directories when error occ
http://bugs.python.org/issue25860  closed by serhiy.storchaka

#25869: Faster ElementTree deepcopying
http://bugs.python.org/issue25869  closed by serhiy.storchaka

#25873: Faster ElementTree iterating
http://bugs.python.org/issue25873  closed by serhiy.storchaka

#25902: Fixed various refcount issues in ElementTree iteration
http://bugs.python.org/issue25902  closed by serhiy.storchaka

#25905: IDLE fails to display the README file
http://bugs.python.org/issue25905  closed by terry.reedy

#25914: Fix OrderedDict.__sizeof__
http://bugs.python.org/issue25914  closed by serhiy.storchaka

#25915: file.write() after file.read() adds text to the end of the fil
http://bugs.python.org/issue25915  closed by Adam Wasik

#25921: project files for wininst-14.0*.exe don't exist
http://bugs.python.org/issue25921  closed by zach.ware

#25929: When doing string.replace, it uses the entire 'find' string an
http://bugs.python.org/issue25929  closed by r.david.murray

#25932: Windows installer ships an outdated and insecure curl.exe
http://bugs.python.org/issue25932  closed by zach.ware

#25938: if sentence doesn't work with input()
http://bugs.python.org/issue25938  closed by benjamin.peterson

#25944: Type confusion in partial_setstate and partial_repr leads to c
http://bugs.python.org/issue25944  closed by serhiy.storchaka

#25950: svn.python.org SSL certificate expired, causing test failures
http://bugs.python.org/issue25950  closed by berker.peksag

#1753718: base64 "legacy" functions violate RFC 3548
http://bugs.python.org/issue1753718  closed by r.david.murray

From brett at python.org  Fri Dec 25 12:46:23 2015
From: brett at python.org (Brett Cannon)
Date: Fri, 25 Dec 2015 17:46:23 +0000
Subject: [Python-Dev] Thanks for your hard work and my New Years resolutions
Message-ID:

I just wanted to quickly thank everyone for the work they put into this
project. I realize most of us either get only a little bit of paid time
to work on Python or none at all, so contributing easily ends up using
personal time, which I know is a precious thing. So thank you for caring
enough about this project to put in your valuable time and effort while
trying to keep it enjoyable for everyone else.

As we go into 2016, I hope to do my part to indirectly thank everyone by
making our developer workflow easier to work with, so that not only do
the lives of the core developers become easier but we once again become
a project that is viewed as welcoming to outside contribution (instead
of our current reputation of having patches sit in the issue tracker,
languishing; join core-workflow if you want to help out with that). I
also hope to see zipimport rewritten -- either by someone else or me if
necessary -- so that my importlib.resources idea can land in time for
Python 3.6 (join the import-sig if that interests you). Otherwise I plan
to keep promoting Python 3 as we get ever closer to 2020. :)
From python at lucidity.plus.com  Sat Dec 26 17:20:28 2015
From: python at lucidity.plus.com (Erik)
Date: Sat, 26 Dec 2015 22:20:28 +0000
Subject: [Python-Dev] Is there a reference manual for Python bytecode?
Message-ID: <567F12AC.8070802@lucidity.plus.com>

Hi.

Looking at ceval.c and peephole.c, there is - of course - lots of
specific hard-coded knowledge of the bytecode (e.g., number of operands
and other attributes). I'd like to experiment at this level, but I can't
seem to find a reference for the bytecode.

Is there the equivalent of something like the ARM ARM(*) for Python
bytecode? I can read Python or C code if it's encoded that way, but I'm
looking for something that's a bit more immediate than deciphering what
an interpreter or optimizer is trying to do (i.e., some sort of table
layout or per-opcode set of attributes).

BR,
E.

(*) http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.ddi0406c/index.html

From jjevnik at quantopian.com  Sat Dec 26 17:36:08 2015
From: jjevnik at quantopian.com (Joe Jevnik)
Date: Sat, 26 Dec 2015 17:36:08 -0500
Subject: [Python-Dev] Is there a reference manual for Python bytecode?
In-Reply-To: <567F12AC.8070802@lucidity.plus.com>
References: <567F12AC.8070802@lucidity.plus.com>
Message-ID:

The number and meaning of the arguments are documented in the dis
module: https://docs.python.org/3.6/library/dis.html

From guido at python.org  Sat Dec 26 17:49:49 2015
From: guido at python.org (Guido van Rossum)
Date: Sat, 26 Dec 2015 15:49:49 -0700
Subject: [Python-Dev] Is there a reference manual for Python bytecode?
In-Reply-To:
References: <567F12AC.8070802@lucidity.plus.com>
Message-ID:

Also there's a great talk by Allison Kaptur on YouTube about this topic:
https://www.youtube.com/watch?v=HVUTjQzESeo

--
--Guido van Rossum (python.org/~guido)

From guido at python.org  Sat Dec 26 17:59:08 2015
From: guido at python.org (Guido van Rossum)
Date: Sat, 26 Dec 2015 15:59:08 -0700
Subject: [Python-Dev] Thanks for your hard work and my New Years resolutions
In-Reply-To:
References:
Message-ID:

+1

Thanks to everyone who has contributed to Python! And thanks everyone
for being such an awesome community. Oh, and thanks to Brett for taking
on those unpopular jobs.

--Guido

On Fri, Dec 25, 2015 at 10:46 AM, Brett Cannon wrote:

> I just wanted to quickly thank everyone for the work they put into this
> project.
--
--Guido van Rossum (python.org/~guido)

From python at lucidity.plus.com  Sat Dec 26 17:51:17 2015
From: python at lucidity.plus.com (Erik)
Date: Sat, 26 Dec 2015 22:51:17 +0000
Subject: [Python-Dev] Is there a reference manual for Python bytecode?
In-Reply-To:
References: <567F12AC.8070802@lucidity.plus.com>
Message-ID: <567F19E5.8050107@lucidity.plus.com>

Hi Joe,

On 26/12/15 22:36, Joe Jevnik wrote:
> The number and meaning of the arguments are documented in the dis
> module: https://docs.python.org/3.6/library/dis.html

OK - I *did* find that, but perhaps didn't immediately understand what
it was telling me.

So, something documented as "OP_CODE" is a 1-byte op, something
documented as "OP_CODE(foo)" is a 2-byte op - and unless I missed one,
there are no 3-byte ops?

Thanks,
E.

From python at lucidity.plus.com  Sat Dec 26 18:13:47 2015
From: python at lucidity.plus.com (Erik)
Date: Sat, 26 Dec 2015 23:13:47 +0000
Subject: [Python-Dev] Is there a reference manual for Python bytecode?
In-Reply-To:
References: <567F12AC.8070802@lucidity.plus.com>
 <567F19E5.8050107@lucidity.plus.com>
Message-ID: <567F1F2B.9000603@lucidity.plus.com>

On 26/12/15 23:10, Joe Jevnik wrote:
> All arguments are 2 bytes, if there needs to be more, EXTENDED_ARG is
> used

OK, got it - many thanks.

E.

From ned at nedbatchelder.com  Sat Dec 26 19:38:00 2015
From: ned at nedbatchelder.com (Ned Batchelder)
Date: Sat, 26 Dec 2015 19:38:00 -0500
Subject: [Python-Dev] Is there a reference manual for Python bytecode?
In-Reply-To: <567F1F2B.9000603@lucidity.plus.com>
References: <567F12AC.8070802@lucidity.plus.com>
 <567F19E5.8050107@lucidity.plus.com> <567F1F2B.9000603@lucidity.plus.com>
Message-ID: <567F32E8.8060709@nedbatchelder.com>

On 12/26/15 6:13 PM, Erik wrote:
> On 26/12/15 23:10, Joe Jevnik wrote:
>> All arguments are 2 bytes, if there needs to be more, EXTENDED_ARG is
>> used
>
> OK, got it - many thanks.

One thing to understand that may not be immediately apparent: the byte
code can (and does) change between versions, so Python 2.7 doesn't have
the exact same byte code as 3.4, which is also different from 3.5.

--Ned.
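To make Joe's and Ned's points concrete, here is a session on CPython
3.5. The opcode module publishes the cutoff: everything at or above
HAVE_ARGUMENT carries a 2-byte operand (so instructions are 1 or 3
bytes, with EXTENDED_ARG prefixed when an operand won't fit in 16 bits),
and the offsets in the dis output show the 3-byte LOAD_CONST followed by
the 1-byte RETURN_VALUE:

>>> import dis, opcode
>>> opcode.HAVE_ARGUMENT
90
>>> def f():
...     return 1
...
>>> dis.dis(f)
  2           0 LOAD_CONST               1 (1)
              3 RETURN_VALUE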
From brett at python.org  Sat Dec 26 20:06:57 2015
From: brett at python.org (Brett Cannon)
Date: Sun, 27 Dec 2015 01:06:57 +0000
Subject: [Python-Dev] Is there a reference manual for Python bytecode?
In-Reply-To: <567F32E8.8060709@nedbatchelder.com>
References: <567F12AC.8070802@lucidity.plus.com>
 <567F19E5.8050107@lucidity.plus.com> <567F1F2B.9000603@lucidity.plus.com>
 <567F32E8.8060709@nedbatchelder.com>
Message-ID:

Ned also neglected to mention his byterun project, which is a pure
Python implementation of the CPython eval loop:
https://github.com/nedbat/byterun

On Sat, 26 Dec 2015, 16:38 Ned Batchelder wrote:

> One thing to understand that may not be immediately apparent: the byte
> code can (and does) change between versions, so Python 2.7 doesn't have
> the exact same byte code as 3.4, which is also different from 3.5.

From jjevnik at quantopian.com  Sat Dec 26 18:10:41 2015
From: jjevnik at quantopian.com (Joe Jevnik)
Date: Sat, 26 Dec 2015 18:10:41 -0500
Subject: [Python-Dev] Is there a reference manual for Python bytecode?
In-Reply-To: <567F19E5.8050107@lucidity.plus.com>
References: <567F12AC.8070802@lucidity.plus.com>
 <567F19E5.8050107@lucidity.plus.com>
Message-ID:

All arguments are 2 bytes, if there needs to be more, EXTENDED_ARG is used

On Sat, Dec 26, 2015 at 5:51 PM, Erik wrote:

> So, something documented as "OP_CODE" is a 1-byte op, something
> documented as "OP_CODE(foo)" is a 2-byte op - and unless I missed one,
> there are no 3-byte ops?

From guido at python.org  Sat Dec 26 21:23:01 2015
From: guido at python.org (Guido van Rossum)
Date: Sat, 26 Dec 2015 19:23:01 -0700
Subject: [Python-Dev] Is there a reference manual for Python bytecode?
In-Reply-To: 
References: <567F12AC.8070802@lucidity.plus.com> <567F19E5.8050107@lucidity.plus.com> <567F1F2B.9000603@lucidity.plus.com> <567F32E8.8060709@nedbatchelder.com>
Message-ID: 

On Sat, Dec 26, 2015 at 6:06 PM, Brett Cannon wrote:

> Ned also neglected to mention his byterun project, which is a pure Python
> implementation of the CPython eval loop: https://github.com/nedbat/byterun

From the commit log it looks like it's a co-production between Ned and Allison Kaptur (who gave the talk I mentioned).

--
--Guido van Rossum (python.org/~guido)

From ncoghlan at gmail.com Sat Dec 26 22:19:43 2015
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Sun, 27 Dec 2015 13:19:43 +1000
Subject: [Python-Dev] Is there a reference manual for Python bytecode?
In-Reply-To: 
References: <567F12AC.8070802@lucidity.plus.com> <567F19E5.8050107@lucidity.plus.com> <567F1F2B.9000603@lucidity.plus.com> <567F32E8.8060709@nedbatchelder.com>
Message-ID: 

On 27 December 2015 at 12:23, Guido van Rossum wrote:
> On Sat, Dec 26, 2015 at 6:06 PM, Brett Cannon wrote:
>>
>> Ned also neglected to mention his byterun project, which is a pure Python
>> implementation of the CPython eval loop: https://github.com/nedbat/byterun
>
> From the commit log it looks like it's a co-production between Ned and
> Allison Kaptur (who gave the talk I mentioned).

It occurred to me that "byterun" would make a good see-also link from the dis module docs, and looking into that idea brought me to this article Allison wrote about it for the "500 lines" project: http://aosabook.org/en/500L/a-python-interpreter-written-in-python.html

For a detailed semantic reference, byterun's eval loop is likely one of the most readable sources of information: https://github.com/nedbat/byterun/blob/master/byterun/pyvm2.py

In terms of formal documentation, the main problem with providing reference bytecode tables is keeping them up to date as the eval loop changes. However, it would theoretically be possible to create a custom Sphinx directive that uses the dis module to generate the tables automatically during the docs build process, rather than maintaining them by hand - something like that could be experimented with outside CPython, and potentially incorporated into the dis module docs if folks are able to figure out something that works well.

Regards,
Nick.

--
Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From ned at nedbatchelder.com Sun Dec 27 06:01:15 2015
From: ned at nedbatchelder.com (Ned Batchelder)
Date: Sun, 27 Dec 2015 06:01:15 -0500
Subject: [Python-Dev] Is there a reference manual for Python bytecode?
In-Reply-To: 
References: <567F12AC.8070802@lucidity.plus.com> <567F19E5.8050107@lucidity.plus.com> <567F1F2B.9000603@lucidity.plus.com> <567F32E8.8060709@nedbatchelder.com>
Message-ID: <567FC4FB.80301@nedbatchelder.com>

On 12/26/15 10:19 PM, Nick Coghlan wrote:
> On 27 December 2015 at 12:23, Guido van Rossum wrote:
>> On Sat, Dec 26, 2015 at 6:06 PM, Brett Cannon wrote:
>>> Ned also neglected to mention his byterun project, which is a pure Python
>>> implementation of the CPython eval loop: https://github.com/nedbat/byterun
>> From the commit log it looks like it's a co-production between Ned and
>> Allison Kaptur (who gave the talk I mentioned).
Yes, Allison was very helpful in pushing it forward. And I should also mention that I started with a dormant project that Paul Swartz wrote. And: it doesn't work completely.
There are things it doesn't handle properly, and I turned to other projects some time ago. If someone wants to pick it up, that would be cool.

> It occurred to me that "byterun" would make a good see-also link from
> the dis module docs, and looking into that idea brought me to this
> article Allison wrote about it for the "500 lines" project:
> http://aosabook.org/en/500L/a-python-interpreter-written-in-python.html
>
> For a detailed semantic reference, byterun's eval loop is likely one
> of the most readable sources of information:
> https://github.com/nedbat/byterun/blob/master/byterun/pyvm2.py

I started working on byterun because I knew I didn't understand bytecode well enough to implement branch coverage measurement in coverage.py. Its primary goal was teaching (me!) how the bytecode worked.

Recently though, I've started a new implementation of branch coverage based on the ast rather than the bytecode. This was prompted by the "async" keywords in 3.5. "async for" and "for" compile very differently to bytecode, which caused headaches for a bytecode-based understanding of flow, so I'm trying out an ast-based understanding.

--Ned.

> In terms of formal documentation, the main problem with providing
> reference bytecode tables is keeping them up to date as the eval loop
> changes. However, it would theoretically be possible to create a
> custom Sphinx directive that uses the dis module to generate the
> tables automatically during the docs build process, rather than
> maintaining them by hand - something like that could be experimented
> with outside CPython, and potentially incorporated into the dis module
> docs if folks are able to figure out something that works well.
>
> Regards,
> Nick.

From python at lucidity.plus.com Sun Dec 27 18:49:32 2015
From: python at lucidity.plus.com (Erik)
Date: Sun, 27 Dec 2015 23:49:32 +0000
Subject: [Python-Dev] Is there a reference manual for Python bytecode?
In-Reply-To: <567FC4FB.80301@nedbatchelder.com>
References: <567F12AC.8070802@lucidity.plus.com> <567F19E5.8050107@lucidity.plus.com> <567F1F2B.9000603@lucidity.plus.com> <567F32E8.8060709@nedbatchelder.com> <567FC4FB.80301@nedbatchelder.com>
Message-ID: <5680790C.5050701@lucidity.plus.com>

Thanks for your help so far (I'm experimenting with the peephole optimizer - hence my question before, as I was trying to work out what the small integer hard-coded offsets should be when looking ahead in the bytecode).

I've successfully added a new opcode (generated by the optimizer and understood by the interpreter loop), but when adding a second I unexpectedly got the following error. I'm not doing anything different to what I did with the first opcode as far as I can tell (I have a TARGET(FOO) in ceval.c and have obviously defined the new opcode's value in opcode.h).

"""
./python -E -S -m sysconfig --generate-posix-vars ;\
if test $? -ne 0 ; then \
        echo "generate-posix-vars failed" ; \
        rm -f ./pybuilddir.txt ; \
        exit 1 ; \
fi
XXX lineno: 241, opcode: 1
Fatal Python error: Py_Initialize: can't import _frozen_importlib
Traceback (most recent call last):
  File "<frozen importlib._bootstrap>", line 698, in <module>
  File "<frozen importlib._bootstrap>", line 751, in BuiltinImporter
  File "<frozen importlib._bootstrap>", line 241, in _requires_builtin
SystemError: unknown opcode
Aborted (core dumped)
generate-posix-vars failed
make: *** [pybuilddir.txt] Error 1
"""

If I #ifdef out the code in peephole.c which generates my new (2nd) opcode, then the error does not occur. I tried a "make clean" first, but that didn't help (I realise that does not necessarily rule out a makefile dependency issue).

Does anyone know if this is a well-known symptom of forgetting to add something somewhere when adding a new opcode, or do I need to track it down some more myself? I did not have this problem when introducing my first new opcode.

Thanks, E.
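A rough way to cross-check a code object against the opcode tables the interpreter was built with - just a sketch, and only a partial one: it assumes 3.5's 1-byte/3-byte instruction layout, and it can only spot opcode values missing from Lib/opcode.py, not ones missing from the eval loop itself:

import dis

def scan_for_undefined_ops(code):
    # dis.opname holds '<NNN>' placeholders for opcode values that
    # Lib/opcode.py does not define.
    raw, i = code.co_code, 0
    while i < len(raw):
        op = raw[i]
        if dis.opname[op].startswith('<'):
            print("offset %d: undefined opcode %d" % (i, op))
        i += 3 if op >= dis.HAVE_ARGUMENT else 1

scan_for_undefined_ops(scan_for_undefined_ops.__code__)  # silent on a healthy build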
From gvanrossum at gmail.com Sun Dec 27 19:41:08 2015
From: gvanrossum at gmail.com (Guido van Rossum)
Date: Sun, 27 Dec 2015 17:41:08 -0700
Subject: [Python-Dev] Is there a reference manual for Python bytecode?
In-Reply-To: <5680790C.5050701@lucidity.plus.com>
References: <567F12AC.8070802@lucidity.plus.com> <567F19E5.8050107@lucidity.plus.com> <567F1F2B.9000603@lucidity.plus.com> <567F32E8.8060709@nedbatchelder.com> <567FC4FB.80301@nedbatchelder.com> <5680790C.5050701@lucidity.plus.com>
Message-ID: 

Can you show the diffs you have so far? Somebody's got to look at your code.

--Guido (mobile)

On Dec 27, 2015 16:51, "Erik" wrote:

> Thanks for your help so far (I'm experimenting with the peephole optimizer
> - hence my question before, as I was trying to work out what the small
> integer hard-coded offsets should be when looking ahead in the bytecode).
>
> I've successfully added a new opcode (generated by the optimizer and
> understood by the interpreter loop), but when adding a second I unexpectedly
> got the following error. I'm not doing anything different to what I did
> with the first opcode as far as I can tell (I have a TARGET(FOO) in ceval.c
> and have obviously defined the new opcode's value in opcode.h).
>
> """
> ./python -E -S -m sysconfig --generate-posix-vars ;\
> if test $? -ne 0 ; then \
>         echo "generate-posix-vars failed" ; \
>         rm -f ./pybuilddir.txt ; \
>         exit 1 ; \
> fi
> XXX lineno: 241, opcode: 1
> Fatal Python error: Py_Initialize: can't import _frozen_importlib
> Traceback (most recent call last):
>   File "<frozen importlib._bootstrap>", line 698, in <module>
>   File "<frozen importlib._bootstrap>", line 751, in BuiltinImporter
>   File "<frozen importlib._bootstrap>", line 241, in _requires_builtin
> SystemError: unknown opcode
> Aborted (core dumped)
> generate-posix-vars failed
> make: *** [pybuilddir.txt] Error 1
> """
>
> If I #ifdef out the code in peephole.c which generates my new (2nd)
> opcode, then the error does not occur. I tried a "make clean" first, but
> that didn't help (I realise that does not necessarily rule out a makefile
> dependency issue).
>
> Does anyone know if this is a well-known symptom of forgetting to add
> something somewhere when adding a new opcode, or do I need to track it down
> some more myself? I did not have this problem when introducing my first new
> opcode.
>
> Thanks, E.

From brett at python.org Sun Dec 27 19:51:29 2015
From: brett at python.org (Brett Cannon)
Date: Mon, 28 Dec 2015 00:51:29 +0000
Subject: [Python-Dev] Is there a reference manual for Python bytecode?
In-Reply-To: 
References: <567F12AC.8070802@lucidity.plus.com> <567F19E5.8050107@lucidity.plus.com> <567F1F2B.9000603@lucidity.plus.com> <567F32E8.8060709@nedbatchelder.com> <567FC4FB.80301@nedbatchelder.com> <5680790C.5050701@lucidity.plus.com>
Message-ID: 

You can look at https://docs.python.org/devguide/compiler.html to see if you missed something.
As for the _frozen_importlib problem, that typically manifests itself when you have invalid bytecode (that module is frozen bytecode that gets compiled into the interpreter and is the first bit of Python code that gets run).

On Sun, 27 Dec 2015, 16:41 Guido van Rossum wrote:

> Can you show the diffs you have so far? Somebody's got to look at your
> code.
>
> --Guido (mobile)
>
> On Dec 27, 2015 16:51, "Erik" wrote:
>
>> Thanks for your help so far (I'm experimenting with the peephole
>> optimizer - hence my question before, as I was trying to work out what
>> the small integer hard-coded offsets should be when looking ahead
>> in the bytecode).
>>
>> I've successfully added a new opcode (generated by the optimizer and
>> understood by the interpreter loop), but when adding a second I unexpectedly
>> got the following error. I'm not doing anything different to what I did
>> with the first opcode as far as I can tell (I have a TARGET(FOO) in ceval.c
>> and have obviously defined the new opcode's value in opcode.h).
>>
>> """
>> ./python -E -S -m sysconfig --generate-posix-vars ;\
>> if test $? -ne 0 ; then \
>>         echo "generate-posix-vars failed" ; \
>>         rm -f ./pybuilddir.txt ; \
>>         exit 1 ; \
>> fi
>> XXX lineno: 241, opcode: 1
>> Fatal Python error: Py_Initialize: can't import _frozen_importlib
>> Traceback (most recent call last):
>>   File "<frozen importlib._bootstrap>", line 698, in <module>
>>   File "<frozen importlib._bootstrap>", line 751, in BuiltinImporter
>>   File "<frozen importlib._bootstrap>", line 241, in _requires_builtin
>> SystemError: unknown opcode
>> Aborted (core dumped)
>> generate-posix-vars failed
>> make: *** [pybuilddir.txt] Error 1
>> """
>>
>> If I #ifdef out the code in peephole.c which generates my new (2nd)
>> opcode, then the error does not occur. I tried a "make clean" first, but
>> that didn't help (I realise that does not necessarily rule out a makefile
>> dependency issue).
>>
>> Does anyone know if this is a well-known symptom of forgetting to add
>> something somewhere when adding a new opcode, or do I need to track it down
>> some more myself? I did not have this problem when introducing my first new
>> opcode.
>>
>> Thanks, E.

From python at lucidity.plus.com Sun Dec 27 20:00:24 2015
From: python at lucidity.plus.com (Erik)
Date: Mon, 28 Dec 2015 01:00:24 +0000
Subject: [Python-Dev] Is there a reference manual for Python bytecode?
In-Reply-To: 
References: <567F12AC.8070802@lucidity.plus.com> <567F19E5.8050107@lucidity.plus.com> <567F1F2B.9000603@lucidity.plus.com> <567F32E8.8060709@nedbatchelder.com> <567FC4FB.80301@nedbatchelder.com> <5680790C.5050701@lucidity.plus.com>
Message-ID: <568089A8.3080503@lucidity.plus.com>

On 28/12/15 00:41, Guido van Rossum wrote:
> Can you show the diffs you have so far? Somebody's got to look at your code.

Sounds like it's not a well-known symptom then. I agree, but that Somebody should be me (initially, at least) - I don't want to waste other people's time if I made a silly mistake.
I'm happy to post my diffs once I'm done (if only to document that what I tried is not worth spending time on).

E.

From sturla.molden at gmail.com Mon Dec 28 13:33:20 2015
From: sturla.molden at gmail.com (Sturla Molden)
Date: Mon, 28 Dec 2015 18:33:20 +0000 (UTC)
Subject: [Python-Dev] Is there a reference manual for Python bytecode?
References: <567F12AC.8070802@lucidity.plus.com> <567F19E5.8050107@lucidity.plus.com> <567F1F2B.9000603@lucidity.plus.com> <567F32E8.8060709@nedbatchelder.com>
Message-ID: <1572604188473019951.684207sturla.molden-gmail.com@news.gmane.org>

Brett Cannon wrote:
> Ned also neglected to mention his byterun project, which is a pure Python
> implementation of the CPython eval loop: https://github.com/nedbat/byterun

I would also encourage you to take a look at Numba. It is an LLVM based JIT compiler for Python bytecode, written for hardcore numerical algorithms in Python. It can often achieve the same performance as -O2 in C after a short burn-in while inferring the types of the arguments and variables. Using it is mostly as easy as adding an @numba.jit decorator to the function we want to accelerate. Numba is rapidly becoming what Google's long-dead swallow (Unladen Swallow) should have been. :-)

Sturla

From ncoghlan at gmail.com Mon Dec 28 21:46:28 2015
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Tue, 29 Dec 2015 12:46:28 +1000
Subject: [Python-Dev] Is there a reference manual for Python bytecode?
In-Reply-To: <568089A8.3080503@lucidity.plus.com>
References: <567F12AC.8070802@lucidity.plus.com> <567F19E5.8050107@lucidity.plus.com> <567F1F2B.9000603@lucidity.plus.com> <567F32E8.8060709@nedbatchelder.com> <567FC4FB.80301@nedbatchelder.com> <5680790C.5050701@lucidity.plus.com> <568089A8.3080503@lucidity.plus.com>
Message-ID: 

On 28 December 2015 at 11:00, Erik wrote:
> On 28/12/15 00:41, Guido van Rossum wrote:
>>
>> Can you show the diffs you have so far? Somebody's got to look at your
>> code.
>
> Sounds like it's not a well-known symptom then.

The symptom is well known (at least to folks that have worked on the compiler and eval loop since the switch to importlib as the import system implementation), but the circumstances where it can arise are *very* limited. Specifically, being unable to load the import system while working on CPython is usually a sign that:

1. The interpreter's bytecode generation is inconsistent with the
implementation of the eval loop
2. importlib._bootstrap includes code that triggers the inconsistent
bytecode processing path
3. Freezing importlib._bootstrap to create _frozen_importlib thus
produces a frozen module that won't load with the given eval loop
implementation

If you're not hacking on bytecode generation or the eval loop (1), or your changes to the bytecode generator and/or eval loop don't impact the code in importlib._bootstrap (2), then you won't see this kind of bug (3).

> I agree, but that Somebody
> should be me (initially, at least) - I don't want to waste other people's
> time if I made a silly mistake.

In this particular case, it's hard to help debug the error without being able to see both the new code generation changes and the corresponding eval loop changes.

It's also the case that to rule out the bootstrapping cycle as a potential source of problems, you can try the following manual dance:

1. Revert to a clean checkout and rebuild
2. Apply the eval loop changes, and rebuild
3. Apply the code generation changes, and rebuild

That generally *shouldn't* be necessary (it's why there's a separate build step to freeze the import system), but it can be a useful exercise to help figure out the source of the "unknown opcode" problem.

Cheers,
Nick.

--
Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia

From python at lucidity.plus.com Tue Dec 29 05:32:40 2015
From: python at lucidity.plus.com (Erik)
Date: Tue, 29 Dec 2015 10:32:40 +0000
Subject: [Python-Dev] Is there a reference manual for Python bytecode?
In-Reply-To: 
References: <567F12AC.8070802@lucidity.plus.com> <567F19E5.8050107@lucidity.plus.com> <567F1F2B.9000603@lucidity.plus.com> <567F32E8.8060709@nedbatchelder.com> <567FC4FB.80301@nedbatchelder.com> <5680790C.5050701@lucidity.plus.com> <568089A8.3080503@lucidity.plus.com>
Message-ID: <56826148.4040505@lucidity.plus.com>

Hi Nick,

On 29/12/15 02:46, Nick Coghlan wrote:
> 1. The interpreter's bytecode generation is inconsistent with the
> implementation of the eval loop

Essentially, this was my problem. I'd neglected to add the reference to TARGET_NEW_OP2 to Python/opcode_targets.h (so staring hard at the op generation and ceval implementation did not help me: they were both fine). I'd forgotten that I'd added the first op to that array, and section 24.8 of https://docs.python.org/devguide/compiler.html doesn't mention that file either. I will look at raising a docs bug on that.

> It's also the case that to rule out the bootstrapping cycle as a
> potential source of problems, you can try the following manual dance:
>
> 1. Revert to a clean checkout and rebuild
> 2. Apply the eval loop changes, and rebuild
> 3. Apply the code generation changes, and rebuild

Thanks - this is useful to know. It's a bit chicken-and-egg if one has introduced a bug which stops the build-time python auto-generation scripts from executing correctly :)

E.

From wes.turner at gmail.com Tue Dec 29 08:59:25 2015
From: wes.turner at gmail.com (Wes Turner)
Date: Tue, 29 Dec 2015 07:59:25 -0600
Subject: [Python-Dev] Is there a reference manual for Python bytecode?
In-Reply-To: 
References: <567F12AC.8070802@lucidity.plus.com> <567F19E5.8050107@lucidity.plus.com> <567F1F2B.9000603@lucidity.plus.com> <567F32E8.8060709@nedbatchelder.com> <1572604188473019951.684207sturla.molden-gmail.com@news.gmane.org>
Message-ID: 

numba
* http://numba.pydata.org/numba-doc/0.16.0/modules/numba.html#module-numba.bytecode
* https://github.com/numba/numba/blob/master/numba/bytecode.py

pypy
* http://doc.pypy.org/en/latest/interpreter.html
* http://aosabook.org/en/pypy.html

...
http://compilers.pydata.org/ #Bytecode Utilities

On Dec 28, 2015 1:33 PM, "Sturla Molden" wrote:

Brett Cannon wrote:
:-) Sturla _______________________________________________ Python-Dev mailing list Python-Dev at python.org https://mail.python.org/mailman/listinfo/python-dev Unsubscribe: https://mail.python.org/mailman/options/python-dev/wes.turner%40gmail.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From facundobatista at gmail.com Tue Dec 29 13:27:30 2015 From: facundobatista at gmail.com (Facundo Batista) Date: Tue, 29 Dec 2015 15:27:30 -0300 Subject: [Python-Dev] PEP 257 and __init__ Message-ID: Hola! (I was doubting in sending this mail to this list or to the normal one, but as it affects a "style recommendation" we propose for the whole community, I finally sent it here) I was reading PEP 257 and it says that all public methods from a class (including __init__) should have a docstring. Why __init__? It's behaviour is well defined (inits the instance), and the initialization parameters should be described in the class' docstring itself, right? Or I am missing something? Should we remove "__init__" (the class method, *not* the package file) as to require docstrings in the PEP? Thanks! -- . Facundo Blog: http://www.taniquetil.com.ar/plog/ PyAr: http://www.python.org/ar/ Twitter: @facundobatista From abarnert at yahoo.com Tue Dec 29 14:38:53 2015 From: abarnert at yahoo.com (Andrew Barnert) Date: Tue, 29 Dec 2015 11:38:53 -0800 Subject: [Python-Dev] PEP 257 and __init__ In-Reply-To: References: Message-ID: On Dec 29, 2015, at 10:27, Facundo Batista wrote: > I was reading PEP 257 and it says that all public methods from a class > (including __init__) should have a docstring. > > Why __init__? > > It's behaviour is well defined (inits the instance), and the > initialization parameters should be described in the class' docstring > itself, right? Isn't the same thing true for every special method? There are lots of classes where __add___ just says "a.__add__(b) = a + b" or (better following the PEP) "Return self + value." But, in the rare case where the semantics of "a + b" are a little tricky (think of "a / b" for pathlib.Path), where else could you put it but __add__? Similarly, for most classes, there's only one of __init__ or __new__, and the construction/initialization semantics are simple enough to describe in one line of the class docstring--but when things are more complicated and need to be documented, where else would you put it? Meanwhile, the useless one-liner docstrings for these methods aren't usually a problem except in trivial classes--and in trivial classes, I usually just don't bother. You can violate PEP 257 when it makes sense, just like PEP 8. They're just guidelines, not iron-clad rules. Unless you're working on a project that insists that we must follow those guidelines, usually for some good reason like having lots of devs who are more experienced in other languages and whose instinctive "taste" isn't sufficiently Pythonic. And for that use case, keeping the rules as simple as possible is probably helpful. Better to have one wasted line in every file than to have an extra rule that all those JS developers have to remember when they're working in Python. From fred at fdrake.net Tue Dec 29 14:40:31 2015 From: fred at fdrake.net (Fred Drake) Date: Tue, 29 Dec 2015 14:40:31 -0500 Subject: [Python-Dev] PEP 257 and __init__ In-Reply-To: References: Message-ID: On Tue, Dec 29, 2015 at 1:27 PM, Facundo Batista wrote: > I was reading PEP 257 and it says that all public methods from a class > (including __init__) should have a docstring. 
> > Why __init__? > > It's behaviour is well defined (inits the instance), and the > initialization parameters should be described in the class' docstring > itself, right? __init__ is not always the only constructor for a class; each constructor's arguments should be documented as part of the constructor. The class docstring should provide summary information for the class as a whole. I can also imagine cases where the __init__ isn't considered public, though I suspect that's exceedingly rare in practice. (Can't think of a case I've actually run across like that.) > Should we remove "__init__" (the class method, *not* the package file) > as to require docstrings in the PEP? I don't think so. The advice seems sound to me. -Fred -- Fred L. Drake, Jr. "A storm broke loose in my mind." --Albert Einstein From facundobatista at gmail.com Tue Dec 29 16:03:54 2015 From: facundobatista at gmail.com (Facundo Batista) Date: Tue, 29 Dec 2015 18:03:54 -0300 Subject: [Python-Dev] PEP 257 and __init__ In-Reply-To: References: Message-ID: On Tue, Dec 29, 2015 at 4:38 PM, Andrew Barnert wrote: > Isn't the same thing true for every special method? There are lots of classes where __add___ just says "a.__add__(b) = a + b" or (better following the PEP) "Return self + value." But, in the rare case where the semantics of "a + b" are a little tricky (think of "a / b" for pathlib.Path), where else could you put it but __add__? > > Similarly, for most classes, there's only one of __init__ or __new__, and the construction/initialization semantics are simple enough to describe in one line of the class docstring--but when things are more complicated and need to be documented, where else would you put it? Yeap. Note that I'm ok to include a docstring when the actual behaviour would deviate from the expected one as per Reference Docs. My point is to not make it mandatory. > I usually just don't bother. You can violate PEP 257 when it makes sense, just like PEP 8. They're just guidelines, not iron-clad rules. Yeap, but pep257 (the tool [0]) complains for __init__, and wanted to know how serious was it. [0] https://pypi.python.org/pypi/pep257 -- . Facundo Blog: http://www.taniquetil.com.ar/plog/ PyAr: http://www.python.org/ar/ Twitter: @facundobatista From tjreedy at udel.edu Tue Dec 29 16:32:06 2015 From: tjreedy at udel.edu (Terry Reedy) Date: Tue, 29 Dec 2015 16:32:06 -0500 Subject: [Python-Dev] PEP 257 and __init__ In-Reply-To: References: Message-ID: On 12/29/2015 2:40 PM, Fred Drake wrote: > On Tue, Dec 29, 2015 at 1:27 PM, Facundo Batista > wrote: >> I was reading PEP 257 and it says that all public methods from a class >> (including __init__) should have a docstring. >> >> Why __init__? >> >> It's behaviour is well defined (inits the instance), and the >> initialization parameters should be described in the class' docstring >> itself, right? > > __init__ is not always the only constructor for a class; each > constructor's arguments should be documented as part of the > constructor. I agree. >>> help(Someclass) first gives the class docstring, which should explain what the class is about, and then each method, with signature and docstring. The explanation of signatures for __new__, __init__, and any other constructor methods should follow the name and signature of the method. 
--
Terry Jan Reedy

From ben+python at benfinney.id.au Tue Dec 29 16:53:37 2015
From: ben+python at benfinney.id.au (Ben Finney)
Date: Wed, 30 Dec 2015 08:53:37 +1100
Subject: [Python-Dev] PEP 257 and __init__
References: 
Message-ID: <85k2nwvs4e.fsf@benfinney.id.au>

Facundo Batista writes:

> Note that I'm ok to include a docstring when the actual behaviour
> would deviate from the expected one as per the Reference Docs. My point is
> to not make it mandatory.

I disagree with the exception you're making for "__init__". The parameters to that function (and how the function behaves in response) should be documented in the docstring, just as for any other function.

> Yeap, but pep257 (the tool [0]) complains about __init__, and I wanted to
> know how serious that was.

Omitting a docstring violates PEP 257, regardless of which function we're talking about. So the tool is correct to complain.

--
 \        "If we don't believe in freedom of expression for people we |
  `\        despise, we don't believe in it at all." --Noam Chomsky, |
_o__)                                                     1992-11-25 |
Ben Finney

From abarnert at yahoo.com Tue Dec 29 18:41:58 2015
From: abarnert at yahoo.com (Andrew Barnert)
Date: Tue, 29 Dec 2015 15:41:58 -0800
Subject: [Python-Dev] PEP 257 and __init__
In-Reply-To: 
References: 
Message-ID: <1FFB727F-1D01-4B4B-8C85-CEABEF28941F@yahoo.com>

On Dec 29, 2015, at 13:03, Facundo Batista wrote:
>
>> On Tue, Dec 29, 2015 at 4:38 PM, Andrew Barnert wrote:
>> I usually just don't bother. You can violate PEP 257 when it makes sense,
>> just like PEP 8. They're just guidelines, not iron-clad rules.
>
> Yeap, but pep257 (the tool [0]) complains about __init__, and I wanted to
> know how serious that was.

Of course. It's telling you that you're not following the standard, which is correct. It's also expected in this case, and if you think you have a good reason for breaking from the standard, that's perfectly fine.

You probably want to configure the tool to meet your own standards. (I've worked on multiple projects that used custom pep8 configurations. I haven't used pep257 as much, but I believe I've seen configurations for the slightly different conventions of scientific/numerical programming and Django programming, so presumably coming up with your own configuration shouldn't be too hard--don't require docstrings on __init__, or on all special methods, or only when there's no __new__, or whatever.)

From carlos.barera at gmail.com Wed Dec 30 13:25:02 2015
From: carlos.barera at gmail.com (Carlos Barera)
Date: Wed, 30 Dec 2015 20:25:02 +0200
Subject: [Python-Dev] subprocess check_output
Message-ID: 

Hi,

Trying to run a specific command (ibstat) installed in /usr/sbin on an Ubuntu 15.04 machine, using subprocess.check_output, I'm getting "/bin/sh: /usr/sbin/ibstat: No such file or directory".

I tried the following:
- running the command providing the full path
- running with executable=bash
- running with (['/bin/bash', '-c', "/usr/sbin/ibstat"])

Nothing worked ... Any idea?

-carlos

From eric at trueblade.com Wed Dec 30 14:07:49 2015
From: eric at trueblade.com (Eric V. Smith)
Date: Wed, 30 Dec 2015 20:07:49 +0100
Subject: [Python-Dev] subprocess check_output
In-Reply-To: 
References: 
Message-ID: <56842B85.2080406@trueblade.com>

This mailing list is for the development of future versions of Python. For questions about using Python, please use python-list: https://mail.python.org/mailman/listinfo/python-list

Eric.
On 12/30/2015 07:25 PM, Carlos Barera wrote:
> Hi,
>
> Trying to run a specific command (ibstat) installed in /usr/sbin on an
> Ubuntu 15.04 machine, using subprocess.check_output, I'm getting
> "/bin/sh: /usr/sbin/ibstat: No such file or directory".
>
> I tried the following:
> - running the command providing the full path
> - running with executable=bash
> - running with (['/bin/bash', '-c', "/usr/sbin/ibstat"])
>
> Nothing worked ... Any idea?
>
> -carlos
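For the archives, a sketch of the usual first diagnostic for that particular error (paths taken from the question above): "/bin/sh: ...: No such file or directory" is the shell talking, and the same message can also appear for a script whose interpreter line is broken, or for a binary of the wrong architecture, even though the file itself exists.

import os
import subprocess

cmd = "/usr/sbin/ibstat"
print(os.path.exists(cmd))   # False would make the shell's complaint literal

# With shell=True, a command string is run via /bin/sh; passing a list
# (and no shell=True) execs the program directly, which takes the shell
# and its error message out of the picture.
out = subprocess.check_output([cmd])
print(out.decode())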