From storchaka at gmail.com Sat Jun 1 03:02:13 2019
From: storchaka at gmail.com (Serhiy Storchaka)
Date: Sat, 1 Jun 2019 10:02:13 +0300
Subject: [Python-Dev] Expected stability of PyCode_New() and types.CodeType() signatures
In-Reply-To: <67fbae0d-0155-a21d-cd4b-6e16628debb6@gmail.com>
References: <67fbae0d-0155-a21d-cd4b-6e16628debb6@gmail.com>
Message-ID:

31.05.19 11:46, Petr Viktorin wrote:
> PEP 570 (Positional-Only Parameters) changed the signatures of PyCode_New() and types.CodeType(), adding a new argument for "posonlyargcount".
> Our policy for such changes seems to be fragmented tribal knowledge. I'm writing to check if my understanding is reasonable, so I can apply it and document it explicitly.
>
> There is a surprisingly large ecosystem of tools that create code objects. The expectation seems to be that these tools will need to be adapted for each minor version of Python.

I have a related proposition. Yesterday I reported two bugs (which Pablo quickly fixed) related to handling positional-only arguments. These bugs occurred due to a subtle change in the meaning of co_argcount. When we make some existing parameters positional-only, we do not add new parameters, we merely mark existing ones. But co_argcount now counts only the positional-or-keyword parameters, so most code that used co_argcount needs to be changed to use co_posonlyargcount+co_argcount.

I propose to make co_argcount mean the number of positional parameters (i.e. positional-only + positional-or-keyword). This would remove the need to change code that uses co_argcount.

As for the code object constructor, I propose to make posonlyargcount an optional parameter (default 0) added after the existing parameters. PyCode_New() can be kept unchanged, but we can add a new PyCode_New2() or PyCode_NewEx() with a different signature.

From solipsis at pitrou.net Sat Jun 1 05:30:32 2019
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Sat, 1 Jun 2019 11:30:32 +0200
Subject: [Python-Dev] PEP 595: Improving bugs.python.org
In-Reply-To:
References: <20190531102224.0bf2229f@fsol> <43AB66F6-37CA-475F-BE78-E3D1FA726F43@python.org>
Message-ID: <20190601113032.2103819c@fsol>

On Fri, 31 May 2019 11:58:22 -0700 Nathaniel Smith wrote:
> On Fri, May 31, 2019 at 11:39 AM Barry Warsaw wrote:
> > On May 31, 2019, at 01:22, Antoine Pitrou wrote:
> > > I second this.
> > >
> > > There are currently ~7000 bugs open on bugs.python.org. The Web UI does a good job of actually making it possible to navigate through these bugs, search through them, etc.
> > >
> > > Did the Steering Council conduct a usability study of Github Issues with those ~7000 bugs open? If not, then I think the acceptance of migrating to Github is a rushed job. Please reconsider.
> >
> > Thanks for your feedback Antoine.
> >
> > This is a tricky issue, with many factors and tradeoffs to consider. I really appreciate Ezio and Berker working on PEP 595, so we can put all these issues on the table.
> >
> > I think one of the most important tradeoffs is balancing the needs of existing developers (those who actively triage bugs today), and future contributors. But this and other UX issues are difficult to compare on our actual data right now. I fully expect that just as with the switch to git, we'll do lots of sample imports and prototyping to ensure that GitHub issues will actually work for us (given our unique requirements), and to help achieve the proper balance.
> > It does us no good to switch if we just anger all the existing devs.
> >
> > IMHO, if the switch to GH doesn't improve our workflow, then it definitely warrants a reevaluation. I think things will be better, but let's prove it.
>
> Perhaps we should put an explicit step on the transition plan, after the prototyping, that's "gather feedback from prototypes, re-evaluate, make final go/no-go decision"? I assume we'll want to do that anyway, and having it formally written down might reassure people. It might also encourage more people to actually try out the prototypes if we make it very clear that they're going to be asked for feedback.

Indeed, regardless of the exact implementation details, I think "try first, decide after" is the right procedure here.

Regards

Antoine.

From pablogsal at gmail.com Sat Jun 1 09:09:34 2019
From: pablogsal at gmail.com (Pablo Galindo Salgado)
Date: Sat, 1 Jun 2019 14:09:34 +0100
Subject: [Python-Dev] Expected stability of PyCode_New() and types.CodeType() signatures
Message-ID:

> I propose to make co_argcount mean the number of positional parameters (i.e. positional-only + positional-or-keyword). This would remove the need to change code that uses co_argcount.

I like the proposal: it will certainly make handling the normal cases downstream much easier, because if you do not care about positional-only arguments you can keep inspecting co_argcount and it will give you what you expect. Note that if we choose to do this, it has to be done now-ish IMHO to avoid making the change painful, because it will change the semantics of co_argcount.

> As for the code object constructor, I propose to make posonlyargcount an optional parameter (default 0) added after the existing parameters. PyCode_New() can be kept unchanged, but we can add a new PyCode_New2() or PyCode_NewEx() with a different signature.

I am not convinced about having a default argument in the code constructor. The code constructor is kept with all arguments positional for efficiency, and adding defaults will make it slower or give it a more confusing, asymmetrical interface. Also, this would be misaligned with how keyword-only parameters are provided. This is far from the first time this constructor has changed. On the Python side, the new code.replace() should cover most use cases for creating code objects.

From stefan_ml at behnel.de Sat Jun 1 09:28:09 2019
From: stefan_ml at behnel.de (Stefan Behnel)
Date: Sat, 1 Jun 2019 15:28:09 +0200
Subject: [Python-Dev] Expected stability of PyCode_New() and types.CodeType() signatures
In-Reply-To:
References: <67fbae0d-0155-a21d-cd4b-6e16628debb6@gmail.com>
Message-ID:

Serhiy Storchaka wrote on 01.06.19 at 09:02:
> I have a related proposition. Yesterday I reported two bugs (which Pablo quickly fixed) related to handling positional-only arguments. These bugs occurred due to a subtle change in the meaning of co_argcount. When we make some existing parameters positional-only, we do not add new parameters, we merely mark existing ones. But co_argcount now counts only the positional-or-keyword parameters, so most code that used co_argcount needs to be changed to use co_posonlyargcount+co_argcount.
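To make the breakage concrete, this is roughly what consumers have to write today to count positional parameters portably (a sketch; the helper name is made up):

    def positional_param_count(code):
        # Under the 3.8 beta1 semantics, positional-only parameters are
        # *not* included in co_argcount, so the two counts must be added.
        # getattr() keeps this working on Pythons that have no
        # co_posonlyargcount attribute at all.
        return getattr(code, "co_posonlyargcount", 0) + code.co_argcount

    def f(a, b, /, c): ...

    print(positional_param_count(f.__code__))  # 3 under the beta1 semantics

Under the proposal below, f.__code__.co_argcount would itself already be 3, so consumer code that only looks at co_argcount would keep working unchanged.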
> I propose to make co_argcount mean the number of positional parameters (i.e. positional-only + positional-or-keyword). This would remove the need to change code that uses co_argcount.

Sounds reasonable to me. The main distinction points are positional arguments vs. keyword arguments vs. local variables. Whether the positional ones are positional-only or positional-or-keyword is irrelevant in many cases.

> PyCode_New() can be kept unchanged, but we can add a new PyCode_New2() or PyCode_NewEx() with a different signature.

It's not a commonly used function, and it's easy for C code to adapt. I don't think it's worth adding a new function to the C-API here, compared to just changing the signature. Very few users would benefit, at the cost of added complexity.

Stefan

From pablogsal at gmail.com Sat Jun 1 11:55:32 2019
From: pablogsal at gmail.com (Pablo Galindo Salgado)
Date: Sat, 1 Jun 2019 16:55:32 +0100
Subject: [Python-Dev] Expected stability of PyCode_New() and types.CodeType() signatures
Message-ID:

Opened https://bugs.python.org/issue37122 to track this in the bug tracker.

From tim.peters at gmail.com Sun Jun 2 01:56:52 2019
From: tim.peters at gmail.com (Tim Peters)
Date: Sun, 2 Jun 2019 00:56:52 -0500
Subject: [Python-Dev] obmalloc (was Have a big machine and spare time? Here's a possible Python bug.)
In-Reply-To: <20190526112415.33e4a02d@fsol>
References: <64d3f69a-b900-d17d-679e-aa748d0a23ab@python.org> <20190526112415.33e4a02d@fsol>
Message-ID:

[Antoine Pitrou, replying to Thomas Wouters]
> Interesting that a 20-year-old simple allocator (obmalloc) is able to do better than the sophisticated TCMalloc.

It's very hard to beat obmalloc (O) at what it does. TCMalloc (T) is actually very similar where they overlap, but has to be more complex because it's trying to do more than O.

In either case, for small objects "the fast path" consists merely of taking the first block of memory off a singly-linked size-segregated free list. For freeing, the fast path is just linking the block back to the front of the appropriate free list. What _could_ be faster? A "bump allocator" allocates faster (just increment a highwater mark), but creates worlds of problems when freeing.

But because O is only trying to deal with small (<= 512 bytes) requests, it can use a very fast method based on trivial address arithmetic to find the size of an allocated block by just reading it up from the start of the (4K) "pool" the address belongs to. T can't do that - it appears to need to look up the address in a more elaborate radix tree, to find info recording the size of the block (which may be just about anything - no upper limit).

> (well, of course, obmalloc doesn't have to worry about concurrent scenarios, which explains some of the simplicity)

Right, T has a different collection of free lists for each thread, so on each entry it has to figure out which collection to use (and so doesn't need to lock). That's not free. O only has one collection, and relies on the GIL.

Against that, O burns cycles worrying about something else: because it was controversial when it was new, O thought it was necessary to handle free/realloc calls even when passed addresses that had actually been obtained from the system malloc/realloc. The T docs I saw said "don't do that - things will blow up in mysterious ways".

That's where O's excruciating "address_in_range()" logic comes from.
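To give the flavor of that address trick, here is a rough Python rendering of the idea (the real thing is C, and the names here are made up):

    POOL_SIZE = 4 * 1024  # pools are 4 KiB, and 4 KiB-aligned

    def pool_base(block_addr):
        # Mask off the low bits of a block's address to recover the start
        # of the pool it lives in: constant-time arithmetic, no radix tree
        # or other lookup structure required.
        return block_addr & ~(POOL_SIZE - 1)

    # The pool header stored at pool_base(addr) records the pool's size
    # class, so "how big is this block?" reduces to a mask plus a read.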
While that's zippy and scales extremely well (it doesn't depend on how many objects/arenas/pools exist), it's not free, and is a significant part of the "fast path" expense for both allocation and deallocation.

It also limits us to a maximum pool size of 4K (to avoid possible segfaults when reading up memory that was actually obtained from the system malloc/realloc), and that's become increasingly painful: on 64-bit boxes the bytes lost to pool headers increased, and O changed to handle requests up to 512 bytes instead of its original limit of 256. O was intended to supply "a bunch" of usable blocks per pool, not just a handful. We "should" really at least double the pool and arena sizes now.

I don't think we need to cater anymore to careless code that mixes system memory calls with O calls (e.g., if an extension gets memory via `malloc()`, it's its responsibility to call `free()`), and if not then `address_in_range()` isn't really necessary anymore either, and then we could increase the pool size. O would, however, need a new way to recognize when its version of malloc punted to the system malloc.

BTW, one more: last I saw, T never returns memory to "the system", but O does - indeed, the parent thread here was all about _enormous_ time waste due to that in O ;-) That's not free either, but doesn't affect O's fast paths.

From ezio.melotti at gmail.com Sun Jun 2 04:24:42 2019
From: ezio.melotti at gmail.com (Ezio Melotti)
Date: Sun, 2 Jun 2019 10:24:42 +0200
Subject: [Python-Dev] PEP 595: Improving bugs.python.org
In-Reply-To: <20190601113032.2103819c@fsol>
References: <20190531102224.0bf2229f@fsol> <43AB66F6-37CA-475F-BE78-E3D1FA726F43@python.org> <20190601113032.2103819c@fsol>
Message-ID:

On Sat, Jun 1, 2019 at 11:50 AM Antoine Pitrou wrote:
> On Fri, 31 May 2019 11:58:22 -0700 Nathaniel Smith wrote:
> > On Fri, May 31, 2019 at 11:39 AM Barry Warsaw wrote:
> > > On May 31, 2019, at 01:22, Antoine Pitrou wrote:
> > > > I second this.
> > > >
> > > > There are currently ~7000 bugs open on bugs.python.org. The Web UI does a good job of actually making it possible to navigate through these bugs, search through them, etc.
> > > >
> > > > Did the Steering Council conduct a usability study of Github Issues with those ~7000 bugs open? If not, then I think the acceptance of migrating to Github is a rushed job. Please reconsider.
> > >
> > > Thanks for your feedback Antoine.
> > >
> > > This is a tricky issue, with many factors and tradeoffs to consider. I really appreciate Ezio and Berker working on PEP 595, so we can put all these issues on the table.
> > >
> > > I think one of the most important tradeoffs is balancing the needs of existing developers (those who actively triage bugs today), and future contributors.

These can be further divided into several groups: from core devs and release managers, to triagers, to regular and occasional contributors, to people that just want to report an issue and be done with it, to people that think the error they just got is a Python bug, each of them with different goals and needs. I think that rather than discussing whether GitHub Issues is better or worse than Roundup, we should first try to understand who is facing what issues now, and who will face what issues after the switch. This can be done both by gathering feedback from different types of people and by testing and comparing the solutions (see below).
Once we know what the issues are, we should evaluate if and how we can address them, and also -- if we can't make everyone happy -- what groups of people we want to prioritize (e.g. do we want core devs to be more effective at dealing with the thousands of already existing issues, or do we want to make it easier for users to report new bugs?).

> > > But this and other UX issues are difficult to compare on our actual data right now. I fully expect that just as with the switch to git, we'll do lots of sample imports and prototyping to ensure that GitHub issues will actually work for us (given our unique requirements), and to help achieve the proper balance. It does us no good to switch if we just anger all the existing devs.
> > >
> > > IMHO, if the switch to GH doesn't improve our workflow, then it definitely warrants a reevaluation. I think things will be better, but let's prove it.
> >
> > Perhaps we should put an explicit step on the transition plan, after the prototyping, that's "gather feedback from prototypes, re-evaluate, make final go/no-go decision"? I assume we'll want to do that anyway, and having it formally written down might reassure people. It might also encourage more people to actually try out the prototypes if we make it very clear that they're going to be asked for feedback.
>
> Indeed, regardless of the exact implementation details, I think "try first, decide after" is the right procedure here.

Testing a change of this magnitude is not trivial. I can see several possible options:

* using the on-demand approach proposed by PEP 588, a full migration, or some other solution (e.g. parallel, synced trackers);
* doing a throwaway test migration (import zero/some/all existing issues, then discard any new message/issue at the end of the test) or using real issues directly (import zero/some/all issues and keep adding real messages/issues);
* if we do a test migration and it works, we might need to do a second, real migration, possibly involving the GH staff twice; if it doesn't work, we discard everything and that's it;
* if we use real issues, we might need to migrate things back to Roundup if GH doesn't fit our needs, and it might be confusing for users;
* starting from scratch on GH with new issues (at least initially, for testing purposes) or porting some/all issues from bpo;
* if we start from scratch we don't need to write the migration tools, but we won't have feedback about searching/navigating through lots of issues;
* if we port some/all of the issues, we need to write the tools to do it, even if it's just for testing purposes and we end up going back to Roundup;
* limiting the test to triagers/core-devs, or involving regular users;
* if we involve regular users we might get better feedback, but there's a risk of confusion (afaik the only way to inform users on GitHub Issues is writing another bot that adds messages) and backlash;
* doing separate specific tests (e.g. having a read-only repo with all the issues to test search/navigation, and a separate read-write repo to test issue creation) or a "real-world" test;
* some specific tests might be easier to set up (e.g. an issue creation test using templates), but for others we still need to import some/all of the issues.

If we agree on testing, I think we need to discuss the options, define and document a list of steps, and start working on it.

Best Regards,
Ezio Melotti

> Regards
>
> Antoine.
From solipsis at pitrou.net Sun Jun 2 05:37:06 2019
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Sun, 2 Jun 2019 11:37:06 +0200
Subject: [Python-Dev] obmalloc (was Have a big machine and spare time? Here's a possible Python bug.)
References: <64d3f69a-b900-d17d-679e-aa748d0a23ab@python.org> <20190526112415.33e4a02d@fsol>
Message-ID: <20190602113706.01c14820@fsol>

On Sun, 2 Jun 2019 00:56:52 -0500 Tim Peters wrote:
> But because O is only trying to deal with small (<= 512 bytes) requests, it can use a very fast method based on trivial address arithmetic to find the size of an allocated block by just reading it up from the start of the (4K) "pool" the address belongs to. T can't do that - it appears to need to look up the address in a more elaborate radix tree, to find info recording the size of the block (which may be just about anything - no upper limit).

The interesting thing here is that in many situations, the size is known up front when deallocating - it is simply not communicated to the deallocator because the traditional free() API takes a sole pointer, not a size. But CPython could communicate that size easily if we would like to change the deallocation API. Then there's no need to look up the allocated size in sophisticated lookup structures.

I'll note that jemalloc provides such APIs:
http://jemalloc.net/jemalloc.3.html

"""The dallocx() function causes the memory referenced by ptr to be made available for future allocations. The sdallocx() function is an extension of dallocx() with a size parameter to allow the caller to pass in the allocation size as an optimization."""

Regards

Antoine.

> > (well, of course, obmalloc doesn't have to worry about concurrent scenarios, which explains some of the simplicity)
>
> Right, T has a different collection of free lists for each thread, so on each entry it has to figure out which collection to use (and so doesn't need to lock). That's not free. O only has one collection, and relies on the GIL.
>
> Against that, O burns cycles worrying about something else: because it was controversial when it was new, O thought it was necessary to handle free/realloc calls even when passed addresses that had actually been obtained from the system malloc/realloc. The T docs I saw said "don't do that - things will blow up in mysterious ways".
>
> That's where O's excruciating "address_in_range()" logic comes from. While that's zippy and scales extremely well (it doesn't depend on how many objects/arenas/pools exist), it's not free, and is a significant part of the "fast path" expense for both allocation and deallocation.
>
> It also limits us to a maximum pool size of 4K (to avoid possible segfaults when reading up memory that was actually obtained from the system malloc/realloc), and that's become increasingly painful: on 64-bit boxes the bytes lost to pool headers increased, and O changed to handle requests up to 512 bytes instead of its original limit of 256. O was intended to supply "a bunch" of usable blocks per pool, not just a handful. We "should" really at least double the pool and arena sizes now.
>
> I don't think we need to cater anymore to careless code that mixes system memory calls with O calls (e.g., if an extension gets memory via `malloc()`, it's its responsibility to call `free()`), and if not then `address_in_range()` isn't really necessary anymore either, and then we could increase the pool size.
> O would, however, need a new way to recognize when its version of malloc punted to the system malloc.
>
> BTW, one more: last I saw, T never returns memory to "the system", but O does - indeed, the parent thread here was all about _enormous_ time waste due to that in O ;-) That's not free either, but doesn't affect O's fast paths.

From armin.rigo at gmail.com Sun Jun 2 07:03:32 2019
From: armin.rigo at gmail.com (Armin Rigo)
Date: Sun, 2 Jun 2019 13:03:32 +0200
Subject: [Python-Dev] [PEP 558] thinking through locals() semantics
In-Reply-To: <5CEE2124.4020409@canterbury.ac.nz>
References: <20190528022848.GF4221@ando.pearwood.info> <5CEE2124.4020409@canterbury.ac.nz>
Message-ID:

Hi,

On Wed, 29 May 2019 at 08:07, Greg Ewing wrote:
> Nick Coghlan wrote:
> > Having a single locals() call de-optimize an entire function would be far from ideal.
>
> I don't see what would be so bad about that. The vast majority of functions have no need for locals().

You have the occasional big function that benefits a lot from being JIT-compiled but which contains ``.format(**locals())``. That occurs in practice, and that's why PyPy is happy that there is a difference between ``locals()`` and ``sys._getframe().f_locals``. PyPy could be made to support the full mutable view, but that's extra work that hasn't been done so far and is a bit unlikely to happen at this point. It also significantly raises the effort for other JIT implementations of Python if they have to support a full-featured ``locals()``; supporting ``_getframe().f_locals`` is to some extent optional, but supporting ``locals()`` is not.

À bientôt,

Armin.

From greg.ewing at canterbury.ac.nz Sun Jun 2 07:52:02 2019
From: greg.ewing at canterbury.ac.nz (Greg Ewing)
Date: Sun, 02 Jun 2019 23:52:02 +1200
Subject: [Python-Dev] [PEP 558] thinking through locals() semantics
In-Reply-To:
References: <20190528022848.GF4221@ando.pearwood.info> <5CEE2124.4020409@canterbury.ac.nz>
Message-ID: <5CF3B862.4080401@canterbury.ac.nz>

Armin Rigo wrote:
> You have the occasional big function that benefits a lot from being JIT-compiled but which contains ``.format(**locals())``.

There should be a lot less need for that now that we have f-strings.

-- Greg

From steve at pearwood.info Sun Jun 2 08:51:17 2019
From: steve at pearwood.info (Steven D'Aprano)
Date: Sun, 2 Jun 2019 22:51:17 +1000
Subject: [Python-Dev] [PEP 558] thinking through locals() semantics
In-Reply-To: <5CF3B862.4080401@canterbury.ac.nz>
References: <20190528022848.GF4221@ando.pearwood.info> <5CEE2124.4020409@canterbury.ac.nz> <5CF3B862.4080401@canterbury.ac.nz>
Message-ID: <20190602125117.GV4221@ando.pearwood.info>

On Sun, Jun 02, 2019 at 11:52:02PM +1200, Greg Ewing wrote:
> Armin Rigo wrote:
> > You have the occasional big function that benefits a lot from being JIT-compiled but which contains ``.format(**locals())``.
>
> There should be a lot less need for that now that we have f-strings.

I think you're forgetting that a lot of code (especially libraries) either has to support older versions of Python, and so cannot use f-strings at all, or was written using **locals() before f-strings came along, and hasn't been touched since.

Another case where f-strings don't help is when the template is dynamically generated.

It may be that there will be less new code written using **locals(), but I don't think that the **locals() trick will disappear any time before Python 5000.
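For instance, a template assembled at runtime can't be turned into an f-string (a toy sketch):

    # The template is built from a runtime list of field names, so it
    # can't be written as an f-string (f-strings are fixed at compile time).
    fields = ["host", "port", "user"]
    template = ", ".join("%s={%s}" % (name, name) for name in fields)

    def describe(host, port, user):
        return template.format(**locals())

    print(describe("example.org", 8080, "guido"))
    # host=example.org, port=8080, user=guido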
-- Steven

From python at mrabarnett.plus.com Sun Jun 2 11:13:33 2019
From: python at mrabarnett.plus.com (MRAB)
Date: Sun, 2 Jun 2019 16:13:33 +0100
Subject: [Python-Dev] [PEP 558] thinking through locals() semantics
In-Reply-To: <20190602125117.GV4221@ando.pearwood.info>
References: <20190528022848.GF4221@ando.pearwood.info> <5CEE2124.4020409@canterbury.ac.nz> <5CF3B862.4080401@canterbury.ac.nz> <20190602125117.GV4221@ando.pearwood.info>
Message-ID: <88cac4a6-c083-7578-4044-f04b68b26817@mrabarnett.plus.com>

On 2019-06-02 13:51, Steven D'Aprano wrote:
> On Sun, Jun 02, 2019 at 11:52:02PM +1200, Greg Ewing wrote:
>> Armin Rigo wrote:
>> >You have the occasional big function that benefits a lot from being JIT-compiled but which contains ``.format(**locals())``.
>>
>> There should be a lot less need for that now that we have f-strings.
>
> I think you're forgetting that a lot of code (especially libraries) either has to support older versions of Python, and so cannot use f-strings at all, or was written using **locals() before f-strings came along, and hasn't been touched since.
>
> Another case where f-strings don't help is when the template is dynamically generated.
>
> It may be that there will be less new code written using **locals(), but I don't think that the **locals() trick will disappear any time before Python 5000.

We've had .format_map since Python 3.2, so why use ``.format(**locals())`` instead of ``.format_map(locals())``?

From random832 at fastmail.com Sun Jun 2 13:33:10 2019
From: random832 at fastmail.com (Random832)
Date: Sun, 02 Jun 2019 13:33:10 -0400
Subject: [Python-Dev] [PEP 558] thinking through locals() semantics
In-Reply-To:
References: <20190528022848.GF4221@ando.pearwood.info>
Message-ID: <94b2c4fe-5a56-432c-ac7f-06f74ae97ecb@www.fastmail.com>

On Wed, May 29, 2019, at 01:25, Nick Coghlan wrote:
> Having a single locals() call de-optimize an entire function would be far from ideal.

What if there were a way to explicitly de-optimize a function, rather than guessing the user's intent by looking for locals and exec calls (both of which are builtins that could be shadowed or assigned to other variables)?

Also, regardless of anything else, maybe in an optimized function locals() should return a read-only mapping?

From vstinner at redhat.com Sun Jun 2 13:52:59 2019
From: vstinner at redhat.com (Victor Stinner)
Date: Sun, 2 Jun 2019 19:52:59 +0200
Subject: [Python-Dev] Expected stability of PyCode_New() and types.CodeType() signatures
In-Reply-To:
References: <67fbae0d-0155-a21d-cd4b-6e16628debb6@gmail.com>
Message-ID:

On Friday, 31 May 2019, Simon Cross wrote:
> As the maintainer of Genshi, ...
> The new CodeType.replace will remove some potential sources of breakage in the future, so thank you very much for adding that.

Hi Simon,

You're welcome :-) Genshi was one of my motivations for adding CodeType.replace() ;-)

Victor

-- Night gathers, and now my watch begins. It shall not end until my death.

From pablogsal at gmail.com Mon Jun 3 06:38:58 2019
From: pablogsal at gmail.com (Pablo Galindo Salgado)
Date: Mon, 3 Jun 2019 11:38:58 +0100
Subject: [Python-Dev] Recent buildbot reports and asyncio test failures
Message-ID:

Hi everyone,

Just a heads-up regarding some messages you will see in your pull requests.
There is an intermittent failure on some buildbots regarding asyncio:

https://buildbot.python.org/all/#/builders/21

As the builds do not fail all the time, the systems assume that if your (merged) commits fail to build, they may be the cause of the failure, and they post a report to the pull request.

I am investigating a way to improve the report mechanism to make it less noisy in this case, but bear in mind that the correct way to solve this is fixing the asyncio bug in the test suite, and the noise likely won't go away completely until it is solved.

We are doing all that we can to solve all the recent leaks and failures in the test suite, but there is a noticeable increase in the number of merged pull requests because of the imminent feature freeze, and because this happens across several timezones it is very difficult to catch them all.

Thanks to everyone who is helping solve these bugs :)

Regards from sunny London,
Pablo Galindo Salgado

From vstinner at redhat.com Mon Jun 3 11:05:19 2019
From: vstinner at redhat.com (Victor Stinner)
Date: Mon, 3 Jun 2019 17:05:19 +0200
Subject: [Python-Dev] Recent buildbot reports and asyncio test failures
In-Reply-To:
References:
Message-ID:

Many tests are failing randomly on buildbots.

*MANY* failures: "test_asyncio: test_cancel_gather_2() dangling thread"
https://bugs.python.org/issue37137

Some failures on specific buildbots. multiprocessing crashes randomly on Windows:
https://bugs.python.org/issue37143
and on FreeBSD:
https://bugs.python.org/issue37135
I bet on a regression caused by:
https://bugs.python.org/issue33608#msg344240

One failure: "test_asyncio timed out on AMD64 FreeBSD CURRENT Shared 3.x"
https://bugs.python.org/issue37142

I failed to reproduce any of these bugs.

Victor

On Mon, 3 Jun 2019 at 12:42, Pablo Galindo Salgado wrote:
> Hi everyone,
>
> Just a heads-up regarding some messages you will see in your pull requests. There is an intermittent failure on some buildbots regarding asyncio:
>
> https://buildbot.python.org/all/#/builders/21
>
> As the builds do not fail all the time, the systems assume that if your (merged) commits fail to build, they may be the cause of the failure, and they post a report to the pull request.
>
> I am investigating a way to improve the report mechanism to make it less noisy in this case, but bear in mind that the correct way to solve this is fixing the asyncio bug in the test suite, and the noise likely won't go away completely until it is solved.
>
> We are doing all that we can to solve all the recent leaks and failures in the test suite, but there is a noticeable increase in the number of merged pull requests because of the imminent feature freeze, and because this happens across several timezones it is very difficult to catch them all.
>
> Thanks to everyone who is helping solve these bugs :)
>
> Regards from sunny London,
> Pablo Galindo Salgado
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: https://mail.python.org/mailman/options/python-dev/vstinner%40redhat.com

-- Night gathers, and now my watch begins. It shall not end until my death.
From vstinner at redhat.com Mon Jun 3 21:35:10 2019
From: vstinner at redhat.com (Victor Stinner)
Date: Tue, 4 Jun 2019 03:35:10 +0200
Subject: [Python-Dev] Recent buildbot reports and asyncio test failures
In-Reply-To:
References:
Message-ID:

Hi,

Update: I'm collaborating with Pablo to fix these recent regressions. We had to revert 2 changes to fix asyncio and multiprocessing test failures and crashes.

New issue: "opcode cache for LOAD_GLOBAL emits false alarm in memory leak hunting"
https://bugs.python.org/issue37146

INADA-san worked around this issue by disabling the new opcode cache (the LOAD_GLOBAL optimization) when Python is compiled in debug mode. We should try to find a longer-term solution after the beta1 release.

On Mon, 3 Jun 2019 at 17:05, Victor Stinner wrote:
> *MANY* failures: "test_asyncio: test_cancel_gather_2() dangling thread"
> https://bugs.python.org/issue37137

Andrew Svetlov identified the root issue: it's a side effect of a new feature:
https://bugs.python.org/issue37137#msg344488
He reverted his change to fix the bug. We may reconsider the feature after beta1 (that's up to our Release Manager, I guess).

Note: I pushed 2 changes attempting to fix the issue, but they didn't fix it. We can consider reverting them after beta1.

> Some failures on specific buildbots. multiprocessing crashes randomly on Windows:
> https://bugs.python.org/issue37143
> and on FreeBSD:
> https://bugs.python.org/issue37135
> I bet on a regression caused by:
> https://bugs.python.org/issue33608#msg344240

Pablo and I identified the root issue:
https://bugs.python.org/issue37135#msg344509
I reverted a change to fix the regression. Sadly, I had previously also reverted a different change which wasn't the root cause. We can consider reapplying it after beta1.

(!) "Night gathers, and now my watch begins. It shall not end until my death." (!)

Victor

From lukasz at langa.pl Tue Jun 4 18:44:04 2019
From: lukasz at langa.pl (Łukasz Langa)
Date: Wed, 5 Jun 2019 00:44:04 +0200
Subject: [Python-Dev] [RELEASE] Python 3.8.0b1 is now available for testing
Message-ID: <2FEBF8A0-7D2D-4BB8-BE37-6B0E6D9F75B8@langa.pl>

The time has come for Python 3.8.0b1:
https://www.python.org/downloads/release/python-380b1/

This release is the first of four planned beta release previews. Beta release previews are intended to give the wider community the opportunity to test new features and bug fixes and to prepare their projects to support the new feature release. The next pre-release of Python 3.8 will be 3.8.0b2, currently scheduled for 2019-07-01.

Call to action

We strongly encourage maintainers of third-party Python projects to test with 3.8 during the beta phase and report issues found to the Python bug tracker as soon as possible. While the release is planned to be feature complete entering the beta phase, it is possible that features may be modified or, in rare cases, deleted up until the start of the release candidate phase (2019-09-30). Our goal is to have no ABI changes after beta 3 and no code changes after 3.8.0rc1, the release candidate. To achieve that, it will be extremely important to get as much exposure for 3.8 as possible during the beta phase.

Please keep in mind that this is a preview release and its use is not recommended for production environments.

A new challenger has appeared!

With the release of Python 3.8.0b1, development started on Python 3.9. The "master"
branch in the cpython repository now tracks development of 3.9, while Python 3.8 received its own branch, called simply "3.8".

Acknowledgments

As you might expect, creating new branches triggers a lot of changes in configuration for all sorts of tooling that we're using. Additionally, the inevitable deadline for new features caused a flurry of activity that tested the buildbots to the max. The revert hammer got used more than once.

I would not have been able to make this release available alone. Many thanks to the fearless duo of Pablo Galindo Salgado and Victor Stinner for spending tens of hours during the past week working on getting the buildbots green for release. Seriously, that took a lot of effort. We are all so lucky to have you both.

Thanks to Andrew Svetlov for his swift fixes to asyncio and to Yury Selivanov for code reviews, even when jetlagged. Thanks to Julien Palard for untangling the documentation configs. Thank you to Zachary Ware for help with buildbot and CI configuration. Thanks to Mariatta for helping with the bots. Thank you to Steve Dower for delivering the Windows installers.

Most importantly though, huge thanks to Ned Deily, who not only helped me understand the scope of this special release but also did some of the grunt work involved.

Last but not least, thanks to you for making this release more meaty than I expected. There's plenty of super exciting changes in there. Just take a look at "What's New"!

One more thing

Hey, fellow Core Developer, Beta 2 is in four weeks. If your important new feature got reverted last minute, or you decided not to merge due to inadequate time, I have a one-time offer for you (restrictions apply). If you:

* find a second core developer champion for your change; and
* in tandem you finish your change, complete with tests and documentation, before Beta 2

then I will let it in. I'm asking for a champion because it's too late now for changes with hasty design or code review. And as I said, restrictions apply. For instance, at this point changes to existing APIs are unlikely to be accepted. Don't start new work with 3.8 in mind. 3.9 is going to come sooner than you think!

- Ł

From vstinner at redhat.com Tue Jun 4 20:21:29 2019
From: vstinner at redhat.com (Victor Stinner)
Date: Wed, 5 Jun 2019 02:21:29 +0200
Subject: [Python-Dev] PEP 594: update 1
In-Reply-To: <0acd0f02-ecff-8ea0-757b-30884b4ea8b0@python.org>
References: <0acd0f02-ecff-8ea0-757b-30884b4ea8b0@python.org>
Message-ID:

So what is happening with this PEP now that Python 3.8 beta1 has been released? Is it too late for Python 3.8 or not?

It seems like most people are confused by the intent of the PEP. IMHO it would be better to rewrite "Remove packages from the stdlib" as "Move some stdlib modules to PyPI". But that would require rewriting some parts of the PEP to explain how modules are moved, who the new maintainers would be, and how to support modules both in the stdlib (old Python versions) and on PyPI (new Python versions), etc.
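For instance, consumers of a moved module would presumably need the usual fallback dance (a sketch; the PyPI distribution name here is made up):

    # Prefer the stdlib module where it still exists; fall back to a
    # hypothetical PyPI package providing the same API on newer Pythons.
    try:
        import nntplib
    except ImportError:
        import nntplib_from_pypi as nntplib  # made-up distribution name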
Victor

From tjreedy at udel.edu Tue Jun 4 22:07:17 2019
From: tjreedy at udel.edu (Terry Reedy)
Date: Tue, 4 Jun 2019 22:07:17 -0400
Subject: [Python-Dev] PEP 594: update 1
In-Reply-To:
References: <0acd0f02-ecff-8ea0-757b-30884b4ea8b0@python.org>
Message-ID:

On 6/4/2019 8:21 PM, Victor Stinner wrote:
> So what is happening with this PEP now that Python 3.8 beta1 has been released? Is it too late for Python 3.8 or not?

The only action proposed for 3.8 was soft deprecation in the docs, which I presume can be done later in the beta process.

> It seems like most people are confused by the intent of the PEP. IMHO it would be better to rewrite "Remove packages from the stdlib" as "Move some stdlib modules to PyPI". But that would require rewriting some parts of the PEP to explain how modules are moved, who the new maintainers would be, and how to support modules both in the stdlib (old Python versions) and on PyPI (new Python versions), etc.
>
> Victor

-- Terry Jan Reedy