From bcannon at gmail.com Mon Jul 20 21:49:50 2015 From: bcannon at gmail.com (Brett Cannon) Date: Mon, 20 Jul 2015 19:49:50 +0000 Subject: [core-workflow] Starting the improved workflow discussion again Message-ID: In my ideal workflow scenario, these are the steps a patch would take: 1. Issue is created 2. Issue is triaged to have right affected versions, etc. 3. Patch is uploaded 4. CI kicks the patch off for *all* branches and OSs that are affected 5. CI flags what branches and OSs did (not) pass or apply cleanly to 6. If necessary, another patch that works in a specific branch that is affected is uploaded (obviously requires some way to flag that a patch applies to a specific branch, deciding how to deal with Misc/NEWS, etc.) 7. Code review -- with a tool other than Rietveld -- from a core developer with feedback 8. New version of patch uploaded, usual CI kicked off 9. If everything looks good and CI is green, get patch approval from a core dev 10. Approval submits the patch(es) to the appropriate branches 11. CI triggered yet again, and if tests fail then patch(es) are automatically rolled back Now I realize this is not about to launch immediately. There are changes to Roundup in there, a reliable test suite that actually fails only on failures and not because it's flaky, etc. But the key point here is that everything that can be automated is, and code reviews can occur entirely through a browser. The independent parts I see here are (which probably all require some Roundup integration to be effective): - CI for every patch - A new code review tool - Automated/browser-based handling of VCS (e.g., submission, rollback) Once the pieces are in place then they can be tied together and drive each other (e.g., code review tool submitting a patch, CI tool automatically handling rollbacks, etc.), but that is not necessary to make forward progress. I'll let Nick and Donald chime on what exactly their proposals can do today and what they will need to make the magical workflow happen. =) -------------- next part -------------- An HTML attachment was scrubbed... URL: From rdmurray at bitdance.com Mon Jul 20 22:35:52 2015 From: rdmurray at bitdance.com (R. David Murray) Date: Mon, 20 Jul 2015 16:35:52 -0400 Subject: [core-workflow] Starting the improved workflow discussion again In-Reply-To: References: Message-ID: <20150720203552.B1368250FA2@webabinitio.net> On Mon, 20 Jul 2015 19:49:50 -0000, Brett Cannon wrote: > In my ideal workflow scenario, these are the steps a patch would take: > > 1. Issue is created > 2. Issue is triaged to have right affected versions, etc. > 3. Patch is uploaded > 4. CI kicks the patch off for *all* branches and OSs that are affected This may be a non-starter. Instead, I believe it will be much more practical to have core dev review first, with a way for the core dev to trigger the CI run. Specifically, I as a buildbot owner do not want arbitrary patch uploads to be able to run on my servers. Nor will infrastructure allow this on any platform they control (we asked). If there is a CI system out there that will allow this and whose free (or donated) tier will support our test suite, then it might be viable, but I doubt very much that it will cover all our platforms. That may not be a blocker, though...this CI could just be a "basic check" run, with the buildbots continuing to provide the all-supported-platforms (and then some) post-commit check they do now. On the other hand, steps 1 to 3 are a problem regardless. 
It often happens that a patch is uploaded before triage is done, and the branches are not set correctly. And you'd need some way to re-trigger a build anyway. So, I think really we want triggered CI builds, not automatic ones. We already have something that Kushal built that will do the triggered build for the linux platform...I haven't played with it yet because I haven't had time to do any full review/commit cycles since he made it available. It does not yet report back to the tracker, but I don't think that will be hard to add (he may have already written the code, in fact). I think it just does default and not all branches, but I'm not sure. Regardless, it is a place to start. --David From ncoghlan at gmail.com Tue Jul 21 04:15:18 2015 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 21 Jul 2015 12:15:18 +1000 Subject: [core-workflow] Starting the improved workflow discussion again In-Reply-To: References: Message-ID: On 21 July 2015 at 05:49, Brett Cannon wrote: > In my ideal workflow scenario, these are the steps a patch would take: > > Issue is created > Issue is triaged to have right affected versions, etc. > Patch is uploaded > CI kicks the patch off for all branches and OSs that are affected > CI flags what branches and OSs did (not) pass or apply cleanly to > If necessary, another patch that works in a specific branch that is affected > is uploaded (obviously requires some way to flag that a patch applies to a > specific branch, deciding how to deal with Misc/NEWS, etc.) > Code review -- with a tool other than Rietveld -- from a core developer with > feedback > New version of patch uploaded, usual CI kicked off > If everything looks good and CI is green, get patch approval from a core dev > Approval submits the patch(es) to the appropriate branches > CI triggered yet again, and if tests fail then patch(es) are automatically > rolled back > > Now I realize this is not about to launch immediately. There are changes to > Roundup in there, a reliable test suite that actually fails only on failures > and not because it's flaky, etc. But the key point here is that everything > that can be automated is, and code reviews can occur entirely through a > browser. I think you're conflating some different issues here, at least two of which are worth separating out from each other: 1. Completely online workflow for documentation editing 2. Pre-commit CI for CPython Both of the current forge.python.org proposals are aimed primarily at the first problem, since they start out with purely documentation repos like the developer guide and the PEPs. Hopefully we can also eventually separate out "version independent" repos for the how to guides and the tutorial. In addition to a completely online process for documentation editing, review, and approval, the other key benefit to these repos would be that *access management* would also be online, rather than requiring shell access to hg.python.org. Documentation projects are a good starting point for this side of things, as they're inherently lower risk. The worst thing documentation can do is give readers bad advice, it can't force them to follow it. This means that for forge.python.org, I think "What about CPython?" should be something we take into account as a "What's next?" for the service, but our near term focus should be on making things like the developer guide and PEPs trivial to suggest edits to, trivial to approve edits to, and trivial to grant approval rights over. 
Those levels of access (who can suggest edits, who can approve edits, who can approve edit rights for others) should also all be completely transparent and changes in them should be tracked automatically rather than requiring manual updates to a text file. Pre-commit CI for CPython is a different story - as David points out, it is *not* OK to run code on the Buildbot fleet that hasn't been approved by a core developer. Folks are trusting *us* to run code on their systems, not random developers posting patches to bugs.python.org. Noah (quite sensibly) isn't interested in getting the PSF Infrastructure team involved in running random code from the internet either. That's where the system Kushal set up in collaboration with the CentOS folks potentially comes in: https://mail.python.org/pipermail/python-dev/2015-May/140050.html That's just a simple "smoke test" to say "Does the proposed change pass on x86_64 systems running CentOS 7?". If we could combine it with a similar system for running Windows smoke tests in Appveyor, I think we'd flush out a lot of basic cross-platform compatibility issues pre-commit, regardless of whether folks are working locally on a *nix system or a Windows one. (We wouldn't catch *everything*, because Linux is not FreeBSD is not Mac OS X, etc, but we'd catch a lot of them). At the moment, running those requires that we be logged into IRC, be approved to trigger test runs, and indicate which issue we'd like to test. If we instead had a "test" link next to patch files in Roundup, then a core developer, completely online, could: 1. Read over the patch to see if we think its reasonable to smoke test 2. Trigger the smoke test directly from Roundup 3. Receive the results back as Roundup comments, with links to the run logs As we gained further familiarity and confidence with the system, we could extend the trust for running pre-commit test runs to anyone that has been granted Developer privileges on the issue tracker, rather than restricting it specifically to core developers. (BTW, we should probably come up with an icon for folks with elevated tracker privileges - at the moment they're just marked as having signed the CLA if they aren't also CPython core developers) Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From soltysh at gmail.com Tue Jul 21 15:07:43 2015 From: soltysh at gmail.com (Maciej Szulik) Date: Tue, 21 Jul 2015 15:07:43 +0200 Subject: [core-workflow] Starting the improved workflow discussion again In-Reply-To: References: Message-ID: On Tue, Jul 21, 2015 at 4:15 AM, Nick Coghlan wrote: > On 21 July 2015 at 05:49, Brett Cannon wrote: > > In my ideal workflow scenario, these are the steps a patch would take: > > > > Issue is created > > Issue is triaged to have right affected versions, etc. > > Patch is uploaded > > CI kicks the patch off for all branches and OSs that are affected > > CI flags what branches and OSs did (not) pass or apply cleanly to > > If necessary, another patch that works in a specific branch that is > affected > > is uploaded (obviously requires some way to flag that a patch applies to > a > > specific branch, deciding how to deal with Misc/NEWS, etc.) 
> > Code review -- with a tool other than Rietveld -- from a core developer > with > > feedback > > New version of patch uploaded, usual CI kicked off > > If everything looks good and CI is green, get patch approval from a core > dev > > Approval submits the patch(es) to the appropriate branches > > CI triggered yet again, and if tests fail then patch(es) are > automatically > > rolled back > > > > Now I realize this is not about to launch immediately. There are changes > to > > Roundup in there, a reliable test suite that actually fails only on > failures > > and not because it's flaky, etc. But the key point here is that > everything > > that can be automated is, and code reviews can occur entirely through a > > browser. > > I think you're conflating some different issues here, at least two of > which are worth separating out from each other: > > 1. Completely online workflow for documentation editing > 2. Pre-commit CI for CPython > > Both of the current forge.python.org proposals are aimed primarily at > the first problem, since they start out with purely documentation > repos like the developer guide and the PEPs. Hopefully we can also > eventually separate out "version independent" repos for the how to > guides and the tutorial. In addition to a completely online process > for documentation editing, review, and approval, the other key benefit > to these repos would be that *access management* would also be online, > rather than requiring shell access to hg.python.org. > > Documentation projects are a good starting point for this side of > things, as they're inherently lower risk. The worst thing > documentation can do is give readers bad advice, it can't force them > to follow it. > > This means that for forge.python.org, I think "What about CPython?" > should be something we take into account as a "What's next?" for the > service, but our near term focus should be on making things like the > developer guide and PEPs trivial to suggest edits to, trivial to > approve edits to, and trivial to grant approval rights over. Those > levels of access (who can suggest edits, who can approve edits, who > can approve edit rights for others) should also all be completely > transparent and changes in them should be tracked automatically rather > than requiring manual updates to a text file. > > Pre-commit CI for CPython is a different story - as David points out, > it is *not* OK to run code on the Buildbot fleet that hasn't been > approved by a core developer. Folks are trusting *us* to run code on > their systems, not random developers posting patches to > bugs.python.org. Noah (quite sensibly) isn't interested in getting the > PSF Infrastructure team involved in running random code from the > internet either. > > That's where the system Kushal set up in collaboration with the CentOS > folks potentially comes in: > https://mail.python.org/pipermail/python-dev/2015-May/140050.html > > That's just a simple "smoke test" to say "Does the proposed change > pass on x86_64 systems running CentOS 7?". If we could combine it with > a similar system for running Windows smoke tests in Appveyor, I think > we'd flush out a lot of basic cross-platform compatibility issues > pre-commit, regardless of whether folks are working locally on a *nix > system or a Windows one. (We wouldn't catch *everything*, because > Linux is not FreeBSD is not Mac OS X, etc, but we'd catch a lot of > them). 
> > At the moment, running those requires that we be logged into IRC, be > approved to trigger test runs, and indicate which issue we'd like to > test. > > If we instead had a "test" link next to patch files in Roundup, then a > core developer, completely online, could: > > 1. Read over the patch to see if we think its reasonable to smoke test > 2. Trigger the smoke test directly from Roundup > 3. Receive the results back as Roundup comments, with links to the run logs > > We (openshift) have similar technique developed around vagrant + jenkins where we can kick of by commenting on a github PR to either run the tests or perform actual merge. Both of these operation run exactly the same suite of tests, but only the merge performs actual, well merging. Obviously only certain group of core devs has the rights to tag the PRs. Additionally when such tag/comment already exists all the new changes (eg. additional changes after review) are tested/merged and this is something that should be considered in this case as well (there are some slight nuances, but I don't want to go too much into details). Does the tag applies always or only once, and if once how you know it was applied already? So having a tag on/off future would be desirable imho. I'd be happy to help triaging this problem as much as my time allows me ;) Maciej > As we gained further familiarity and confidence with the system, we > could extend the trust for running pre-commit test runs to anyone that > has been granted Developer privileges on the issue tracker, rather > than restricting it specifically to core developers. (BTW, we should > probably come up with an icon for folks with elevated tracker > privileges - at the moment they're just marked as having signed the > CLA if they aren't also CPython core developers) > > Cheers, > Nick. > > -- > Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia > _______________________________________________ > core-workflow mailing list > core-workflow at python.org > https://mail.python.org/mailman/listinfo/core-workflow > This list is governed by the PSF Code of Conduct: > https://www.python.org/psf/codeofconduct > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bcannon at gmail.com Tue Jul 21 19:54:28 2015 From: bcannon at gmail.com (Brett Cannon) Date: Tue, 21 Jul 2015 17:54:28 +0000 Subject: [core-workflow] Starting the improved workflow discussion again In-Reply-To: <20150720203552.B1368250FA2@webabinitio.net> References: <20150720203552.B1368250FA2@webabinitio.net> Message-ID: On Mon, Jul 20, 2015 at 1:36 PM R. David Murray wrote: > On Mon, 20 Jul 2015 19:49:50 -0000, Brett Cannon > wrote: > > In my ideal workflow scenario, these are the steps a patch would take: > > > > 1. Issue is created > > 2. Issue is triaged to have right affected versions, etc. > > 3. Patch is uploaded > > 4. CI kicks the patch off for *all* branches and OSs that are affected > > This may be a non-starter. Instead, I believe it will be much more > practical to have core dev review first, with a way for the core dev to > trigger the CI run. Specifically, I as a buildbot owner do not want > arbitrary patch uploads to be able to run on my servers. Nor will > infrastructure allow this on any platform they control (we asked). > > If there is a CI system out there that will allow this and whose free > (or donated) tier will support our test suite, then it might be viable, > but I doubt very much that it will cover all our platforms. 
That may > not be a blocker, though...this CI could just be a "basic check" run, > with the buildbots continuing to provide the all-supported-platforms > (and then some) post-commit check they do now. > > On the other hand, steps 1 to 3 are a problem regardless. It often > happens that a patch is uploaded before triage is done, and the branches > are not set correctly. And you'd need some way to re-trigger a build > anyway. So, I think really we want triggered CI builds, not automatic > ones. > That's all very convincing and I'm happy to let CI be a privileged, triggered event if we stick with buildbots for our testing fleet. > > We already have something that Kushal built that will do the triggered > build for the linux platform...I haven't played with it yet because I > haven't had time to do any full review/commit cycles since he made it > available. It does not yet report back to the tracker, but I don't > think that will be hard to add (he may have already written the code, in > fact). I think it just does default and not all branches, but I'm not > sure. Regardless, it is a place to start. > I haven't played with it myself for the same reasons as you, David. I'm very appreciative to have the tool available today, although I would like to see it integrated with Roundup so I don't have to log into IRC just to fire off a CI run. Plus Linux obviously doesn't cover everything. Ideally we would then gate committal on passing CI and then commit it. The issue with that, though, is possible race conditions on commits where you would need to run the CI again to verify that the tests all still pass once the commit landed. So I don't think we can avoid at least two CI runs per patch (making sure it's basically sound and then verifying that it doesn't require a rollback). -------------- next part -------------- An HTML attachment was scrubbed... URL: From bcannon at gmail.com Tue Jul 21 20:03:36 2015 From: bcannon at gmail.com (Brett Cannon) Date: Tue, 21 Jul 2015 18:03:36 +0000 Subject: [core-workflow] Starting the improved workflow discussion again In-Reply-To: References: Message-ID: On Mon, Jul 20, 2015 at 7:15 PM Nick Coghlan wrote: > On 21 July 2015 at 05:49, Brett Cannon wrote: > > In my ideal workflow scenario, these are the steps a patch would take: > > > > Issue is created > > Issue is triaged to have right affected versions, etc. > > Patch is uploaded > > CI kicks the patch off for all branches and OSs that are affected > > CI flags what branches and OSs did (not) pass or apply cleanly to > > If necessary, another patch that works in a specific branch that is > affected > > is uploaded (obviously requires some way to flag that a patch applies to > a > > specific branch, deciding how to deal with Misc/NEWS, etc.) > > Code review -- with a tool other than Rietveld -- from a core developer > with > > feedback > > New version of patch uploaded, usual CI kicked off > > If everything looks good and CI is green, get patch approval from a core > dev > > Approval submits the patch(es) to the appropriate branches > > CI triggered yet again, and if tests fail then patch(es) are > automatically > > rolled back > > > > Now I realize this is not about to launch immediately. There are changes > to > > Roundup in there, a reliable test suite that actually fails only on > failures > > and not because it's flaky, etc. But the key point here is that > everything > > that can be automated is, and code reviews can occur entirely through a > > browser. 
> > I think you're conflating some different issues here, at least two of > which are worth separating out from each other: > > 1. Completely online workflow for documentation editing > 2. Pre-commit CI for CPython > I wasn't conflating them so much as not worrying about #1 as I know that's not a hard problem to solve like the CPython-specific workflow is. > > Both of the current forge.python.org proposals are aimed primarily at > the first problem, since they start out with purely documentation > repos like the developer guide and the PEPs. Hopefully we can also > eventually separate out "version independent" repos for the how to > guides and the tutorial. In addition to a completely online process > for documentation editing, review, and approval, the other key benefit > to these repos would be that *access management* would also be online, > rather than requiring shell access to hg.python.org. > > Documentation projects are a good starting point for this side of > things, as they're inherently lower risk. The worst thing > documentation can do is give readers bad advice, it can't force them > to follow it. > > This means that for forge.python.org, I think "What about CPython?" > should be something we take into account as a "What's next?" for the > service, but our near term focus should be on making things like the > developer guide and PEPs trivial to suggest edits to, trivial to > approve edits to, and trivial to grant approval rights over. Those > levels of access (who can suggest edits, who can approve edits, who > can approve edit rights for others) should also all be completely > transparent and changes in them should be tracked automatically rather > than requiring manual updates to a text file. > OK, then let's choose the devguide or the PEPs to test Kalithea out on and see how it goes since no one has experience with the service while I bet everyone has experience with at least GitHub. If you can get whomever has done the most amount of work on the devguide lately to sign off on it -- probably Carol Willing -- then I say get a test instance of forge.python.org up for the devguide and let's see what working with Kalithea is like (and it also gets us more new contributor feedback than the PEPs would while also not frustrating Guido when he deals with peps@ =). > > Pre-commit CI for CPython is a different story - as David points out, > it is *not* OK to run code on the Buildbot fleet that hasn't been > approved by a core developer. Folks are trusting *us* to run code on > their systems, not random developers posting patches to > bugs.python.org. Noah (quite sensibly) isn't interested in getting the > PSF Infrastructure team involved in running random code from the > internet either. > > That's where the system Kushal set up in collaboration with the CentOS > folks potentially comes in: > https://mail.python.org/pipermail/python-dev/2015-May/140050.html > > That's just a simple "smoke test" to say "Does the proposed change > pass on x86_64 systems running CentOS 7?". If we could combine it with > a similar system for running Windows smoke tests in Appveyor, I think > we'd flush out a lot of basic cross-platform compatibility issues > pre-commit, regardless of whether folks are working locally on a *nix > system or a Windows one. (We wouldn't catch *everything*, because > Linux is not FreeBSD is not Mac OS X, etc, but we'd catch a lot of > them). > Right. Basic coverage is better than no coverage for initial patch testing (after core dev approval). 
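To be concrete about what I mean by "basic coverage": the smoke test job really only needs to apply the patch, build, and run the test suite on a single Linux box. A rough sketch of that job in Python (illustrative only -- the paths and commands are my assumptions, not what Kushal's service literally runs):

    import subprocess
    import sys

    def smoke_test(patch_path, repo_dir="cpython"):
        """Apply an uploaded patch, rebuild, and run the test suite."""
        steps = [
            ["hg", "import", "--no-commit", patch_path],  # apply the patch
            ["./configure"],
            ["make", "-j4"],
            ["./python", "-m", "test", "-w"],  # -w: re-run failing tests verbosely
        ]
        for cmd in steps:
            if subprocess.call(cmd, cwd=repo_dir) != 0:
                return False
        return True

    if __name__ == "__main__":
        sys.exit(0 if smoke_test(sys.argv[1]) else 1)

Everything beyond that -- reporting the result back to Roundup, fanning out to more platforms -- is plumbing we can layer on incrementally.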
> > At the moment, running those requires that we be logged into IRC, be > approved to trigger test runs, and indicate which issue we'd like to > test. > > If we instead had a "test" link next to patch files in Roundup, then a > core developer, completely online, could: > > 1. Read over the patch to see if we think its reasonable to smoke test > 2. Trigger the smoke test directly from Roundup > 3. Receive the results back as Roundup comments, with links to the run logs > SGTM > > As we gained further familiarity and confidence with the system, we > could extend the trust for running pre-commit test runs to anyone that > has been granted Developer privileges on the issue tracker, rather > than restricting it specifically to core developers. (BTW, we should > probably come up with an icon for folks with elevated tracker > privileges - at the moment they're just marked as having signed the > CLA if they aren't also CPython core developers) > Definitely wouldn't hurt, and it does raise their profiles on the issue tracker. -------------- next part -------------- An HTML attachment was scrubbed... URL: From bcannon at gmail.com Tue Jul 21 20:05:54 2015 From: bcannon at gmail.com (Brett Cannon) Date: Tue, 21 Jul 2015 18:05:54 +0000 Subject: [core-workflow] Starting the improved workflow discussion again In-Reply-To: References: Message-ID: On Tue, Jul 21, 2015 at 6:07 AM Maciej Szulik wrote: > On Tue, Jul 21, 2015 at 4:15 AM, Nick Coghlan wrote: > >> On 21 July 2015 at 05:49, Brett Cannon wrote: >> > In my ideal workflow scenario, these are the steps a patch would take: >> > >> > Issue is created >> > Issue is triaged to have right affected versions, etc. >> > Patch is uploaded >> > CI kicks the patch off for all branches and OSs that are affected >> > CI flags what branches and OSs did (not) pass or apply cleanly to >> > If necessary, another patch that works in a specific branch that is >> affected >> > is uploaded (obviously requires some way to flag that a patch applies >> to a >> > specific branch, deciding how to deal with Misc/NEWS, etc.) >> > Code review -- with a tool other than Rietveld -- from a core developer >> with >> > feedback >> > New version of patch uploaded, usual CI kicked off >> > If everything looks good and CI is green, get patch approval from a >> core dev >> > Approval submits the patch(es) to the appropriate branches >> > CI triggered yet again, and if tests fail then patch(es) are >> automatically >> > rolled back >> > >> > Now I realize this is not about to launch immediately. There are >> changes to >> > Roundup in there, a reliable test suite that actually fails only on >> failures >> > and not because it's flaky, etc. But the key point here is that >> everything >> > that can be automated is, and code reviews can occur entirely through a >> > browser. >> >> I think you're conflating some different issues here, at least two of >> which are worth separating out from each other: >> >> 1. Completely online workflow for documentation editing >> 2. Pre-commit CI for CPython >> >> Both of the current forge.python.org proposals are aimed primarily at >> the first problem, since they start out with purely documentation >> repos like the developer guide and the PEPs. Hopefully we can also >> eventually separate out "version independent" repos for the how to >> guides and the tutorial. 
In addition to a completely online process >> for documentation editing, review, and approval, the other key benefit >> to these repos would be that *access management* would also be online, >> rather than requiring shell access to hg.python.org. >> >> Documentation projects are a good starting point for this side of >> things, as they're inherently lower risk. The worst thing >> documentation can do is give readers bad advice, it can't force them >> to follow it. >> >> This means that for forge.python.org, I think "What about CPython?" >> should be something we take into account as a "What's next?" for the >> service, but our near term focus should be on making things like the >> developer guide and PEPs trivial to suggest edits to, trivial to >> approve edits to, and trivial to grant approval rights over. Those >> levels of access (who can suggest edits, who can approve edits, who >> can approve edit rights for others) should also all be completely >> transparent and changes in them should be tracked automatically rather >> than requiring manual updates to a text file. >> >> Pre-commit CI for CPython is a different story - as David points out, >> it is *not* OK to run code on the Buildbot fleet that hasn't been >> approved by a core developer. Folks are trusting *us* to run code on >> their systems, not random developers posting patches to >> bugs.python.org. Noah (quite sensibly) isn't interested in getting the >> PSF Infrastructure team involved in running random code from the >> internet either. >> >> That's where the system Kushal set up in collaboration with the CentOS >> folks potentially comes in: >> https://mail.python.org/pipermail/python-dev/2015-May/140050.html >> >> That's just a simple "smoke test" to say "Does the proposed change >> pass on x86_64 systems running CentOS 7?". If we could combine it with >> a similar system for running Windows smoke tests in Appveyor, I think >> we'd flush out a lot of basic cross-platform compatibility issues >> pre-commit, regardless of whether folks are working locally on a *nix >> system or a Windows one. (We wouldn't catch *everything*, because >> Linux is not FreeBSD is not Mac OS X, etc, but we'd catch a lot of >> them). >> >> At the moment, running those requires that we be logged into IRC, be >> approved to trigger test runs, and indicate which issue we'd like to >> test. >> >> If we instead had a "test" link next to patch files in Roundup, then a >> core developer, completely online, could: >> >> 1. Read over the patch to see if we think its reasonable to smoke test >> 2. Trigger the smoke test directly from Roundup >> 3. Receive the results back as Roundup comments, with links to the run >> logs >> >> > We (openshift) have similar technique developed around vagrant + jenkins > where > we can kick of by commenting on a github PR to either run the tests or > perform > actual merge. Both of these operation run exactly the same suite of tests, > but only the merge performs actual, well merging. Obviously only certain > group > of core devs has the rights to tag the PRs. Additionally when such > tag/comment already exists all the new changes (eg. additional changes > after review) > are tested/merged and this is something that should be considered in this > case > as well (there are some slight nuances, but I don't want to go too much > into details). > Does the tag applies always or only once, and if once how you know > it was applied already? So having a tag on/off future would be desirable > imho. 
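The comment/tag-triggered flow you describe maps pretty naturally onto a small webhook service on our side. Purely as a hypothetical sketch (the endpoint, trigger phrase, and allowed-user list below are all made up, and nothing like this exists for us today):

    from flask import Flask, request

    app = Flask(__name__)
    ALLOWED = {"core-dev-1", "core-dev-2"}  # who may trigger runs (placeholder)
    TRIGGER = "!test"  # magic phrase in a review comment

    @app.route("/pr-webhook", methods=["POST"])
    def on_comment():
        event = request.get_json()
        comment = event.get("comment", {})
        author = comment.get("user", {}).get("login", "")
        if author in ALLOWED and TRIGGER in comment.get("body", ""):
            queue_test_run(event["issue"]["number"])
        return "", 204

    def queue_test_run(pr_number):
        # placeholder: enqueue a CI build for the PR's current head
        print("queueing smoke test for PR", pr_number)

    if __name__ == "__main__":
        app.run(port=8080)

As for your "once vs. always" question, I think that mostly comes down to whether the service also listens for the event GitHub sends when new commits are pushed to an already-tagged PR and re-queues the run automatically.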
> I'd be happy to help triaging this problem as much as my time allows me ;) > Thanks for the offer to help. The input of people with experience with a wide-range of systems will help make sure we don't botch this. =) -Brett > > Maciej > > >> As we gained further familiarity and confidence with the system, we >> could extend the trust for running pre-commit test runs to anyone that >> has been granted Developer privileges on the issue tracker, rather >> than restricting it specifically to core developers. (BTW, we should >> probably come up with an icon for folks with elevated tracker >> privileges - at the moment they're just marked as having signed the >> CLA if they aren't also CPython core developers) >> >> Cheers, >> Nick. >> >> -- >> Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia >> > _______________________________________________ >> core-workflow mailing list >> core-workflow at python.org >> https://mail.python.org/mailman/listinfo/core-workflow >> This list is governed by the PSF Code of Conduct: >> https://www.python.org/psf/codeofconduct >> > _______________________________________________ > core-workflow mailing list > core-workflow at python.org > https://mail.python.org/mailman/listinfo/core-workflow > This list is governed by the PSF Code of Conduct: > https://www.python.org/psf/codeofconduct -------------- next part -------------- An HTML attachment was scrubbed... URL: From ezio.melotti at gmail.com Tue Jul 21 20:14:36 2015 From: ezio.melotti at gmail.com (Ezio Melotti) Date: Tue, 21 Jul 2015 21:14:36 +0300 Subject: [core-workflow] Starting the improved workflow discussion again In-Reply-To: References: Message-ID: On Mon, Jul 20, 2015 at 10:49 PM, Brett Cannon wrote: > In my ideal workflow scenario, these are the steps a patch would take: > > Issue is created > Issue is triaged to have right affected versions, etc. > Patch is uploaded > CI kicks the patch off for all branches and OSs that are affected > CI flags what branches and OSs did (not) pass or apply cleanly to Checking if a patch applies cleanly on the active branches can be done with a Roundup detector. The detector can also add this information in the patch metadata. We currently have two GSoC students working on Roundup: 1) one is adding a REST API that will make a lot of these things simpler; 2) the other so far worked on an hg extension that talks with Roundup and is currently working on a patch analysis feature that figures out which files are affected (and could also check which branches the patch applies to). The patch analysis shouldn't be too expensive, and can probably been done for each patch as soon as it's uploaded. These and other tracker improvements will likely get integrated around the end of GSoC. Best Regards, Ezio Melotti > If necessary, another patch that works in a specific branch that is affected > is uploaded (obviously requires some way to flag that a patch applies to a > specific branch, deciding how to deal with Misc/NEWS, etc.) > Code review -- with a tool other than Rietveld -- from a core developer with > feedback > New version of patch uploaded, usual CI kicked off > If everything looks good and CI is green, get patch approval from a core dev > Approval submits the patch(es) to the appropriate branches > CI triggered yet again, and if tests fail then patch(es) are automatically > rolled back > > Now I realize this is not about to launch immediately. 
There are changes to > Roundup in there, a reliable test suite that actually fails only on failures > and not because it's flaky, etc. But the key point here is that everything > that can be automated is, and code reviews can occur entirely through a > browser. > > The independent parts I see here are (which probably all require some > Roundup integration to be effective): > > CI for every patch > A new code review tool > Automated/browser-based handling of VCS (e.g., submission, rollback) > > Once the pieces are in place then they can be tied together and drive each > other (e.g., code review tool submitting a patch, CI tool automatically > handling rollbacks, etc.), but that is not necessary to make forward > progress. > > I'll let Nick and Donald chime on what exactly their proposals can do today > and what they will need to make the magical workflow happen. =) > > _______________________________________________ > core-workflow mailing list > core-workflow at python.org > https://mail.python.org/mailman/listinfo/core-workflow > This list is governed by the PSF Code of Conduct: > https://www.python.org/psf/codeofconduct From ncoghlan at gmail.com Wed Jul 22 06:16:51 2015 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 22 Jul 2015 14:16:51 +1000 Subject: [core-workflow] Starting the improved workflow discussion again In-Reply-To: References: Message-ID: On 22 July 2015 at 04:14, Ezio Melotti wrote: > On Mon, Jul 20, 2015 at 10:49 PM, Brett Cannon wrote: >> In my ideal workflow scenario, these are the steps a patch would take: >> >> Issue is created >> Issue is triaged to have right affected versions, etc. >> Patch is uploaded >> CI kicks the patch off for all branches and OSs that are affected >> CI flags what branches and OSs did (not) pass or apply cleanly to > > Checking if a patch applies cleanly on the active branches can be done > with a Roundup detector. > The detector can also add this information in the patch metadata. > > We currently have two GSoC students working on Roundup: > 1) one is adding a REST API that will make a lot of these things simpler; > 2) the other so far worked on an hg extension that talks with Roundup > and is currently working on a patch analysis feature that figures out > which files are affected (and could also check which branches the > patch applies to). > > The patch analysis shouldn't be too expensive, and can probably been > done for each patch as soon as it's uploaded. > These and other tracker improvements will likely get integrated around > the end of GSoC. \o/ Thank you for driving that. Regards, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From ncoghlan at gmail.com Fri Jul 24 18:07:54 2015 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sat, 25 Jul 2015 02:07:54 +1000 Subject: [core-workflow] Starting the improved workflow discussion again In-Reply-To: References: Message-ID: On 22 July 2015 at 04:03, Brett Cannon wrote: > On Mon, Jul 20, 2015 at 7:15 PM Nick Coghlan wrote: >> I think you're conflating some different issues here, at least two of >> which are worth separating out from each other: >> >> 1. Completely online workflow for documentation editing >> 2. Pre-commit CI for CPython > > > I wasn't conflating them so much as not worrying about #1 as I know that's > not a hard problem to solve like the CPython-specific workflow is. 
Weelll, it's harder than I'd like, because software is software, and companies are companies, and I have some pretty major trust issues when it comes to the latter :) >> This means that for forge.python.org, I think "What about CPython?" >> should be something we take into account as a "What's next?" for the >> service, but our near term focus should be on making things like the >> developer guide and PEPs trivial to suggest edits to, trivial to >> approve edits to, and trivial to grant approval rights over. Those >> levels of access (who can suggest edits, who can approve edits, who >> can approve edit rights for others) should also all be completely >> transparent and changes in them should be tracked automatically rather >> than requiring manual updates to a text file. > > > OK, then let's choose the devguide or the PEPs to test Kalithea out on and > see how it goes since no one has experience with the service while I bet > everyone has experience with at least GitHub. If you can get whomever has > done the most amount of work on the devguide lately to sign off on it -- > probably Carol Willing -- then I say get a test instance of forge.python.org > up for the devguide and let's see what working with Kalithea is like (and it > also gets us more new contributor feedback than the PEPs would while also > not frustrating Guido when he deals with peps@ =). This is actually where the "What about BitBucket as an interim solution?" thread that spawned Donald's GitHub+Phabricator counter-proposal came from. I already know there are two major limitations of Kallithea that we'd likely want to address before adopting it as our primary repo hosting solution, even for the support repos: 1. Social auth support, so folks can log in with GitHub/BitBucket/Twitter/Facebook/Google et al credentials rather than having to create yet another account. 2. Online creation and acceptance of change proposals from third parties (If I understand Kallithea's current capabilities correctly, you can edit directly on your own repos, but there's no counterpart to the "fork->edit->PR" workflow GitHub & BitBucket offer for submitting online-only changes to other people's repos) Addressing the first one may also involve a Pylons -> Pyramid upgrade for Kallithea. I'd be prepared to coordinate a grant proposal to the PSF to fund that work (hopefully in collaboration with the folks from Agendaless, since I spoke to them about the prospect at PyCon US), but I wouldn't want to commit funds to it without some way of ensuring we're happy with a pull request based workflow first. Even beyond that, though, I'm also looking at the workflow the Kallithea team *themselves* are currently using, and thinking "Hmm, I quite like that approach". What they're doing is using BitBucket as their "working repo", and https://kallithea-scm.org/repos/ as their "repository of record". Since the long term outcome I'd like to see us get to is "able to accept PRs on both GitHub and BitBucket, repository of record on PSF hosted infrastructure", the transition plan that is starting to make sense to me is: 1. Move one or two support repos to the PSF BitBucket account, automatically update from there back to the existing repos on hg.python.org (we'd revoke direct commit access to the latter, but the build processes hanging off them would still trigger when commits were pushed by the automated sync) 2. Work with that model for a while, establish that folks are happy with the PR based workflow 3. 
Approach forge.python.org as an easier to manage hg.python.org, rather than as the key enabler in offering pull request based workflows for third party contributors to the support repos 4. The main capability of interest with Kallithea would then be gaining support for "remote PRs" where folks actually submitted the PR through GitHub or BitBucket, but it could be processed by reviewers in Kallithea. That capability doesn't exist yet (in any tool), but it's one Mozilla are also interested in. In this model, the main rationale for "Why BitBucket rather than GitHub?" is purely and simply "because migrating from Hg to git doesn't buy us enough for the cost in time and effort given that most repos are going to be remaining on hg.python.org". I guess I should update my workflow PEP to reflect the above changes in my thinking over the past several months... Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From bcannon at gmail.com Sat Jul 25 06:47:48 2015 From: bcannon at gmail.com (Brett Cannon) Date: Sat, 25 Jul 2015 04:47:48 +0000 Subject: [core-workflow] Starting the improved workflow discussion again In-Reply-To: References: Message-ID: On Fri, Jul 24, 2015, 09:07 Nick Coghlan wrote: On 22 July 2015 at 04:03, Brett Cannon wrote: > On Mon, Jul 20, 2015 at 7:15 PM Nick Coghlan wrote: >> I think you're conflating some different issues here, at least two of >> which are worth separating out from each other: >> >> 1. Completely online workflow for documentation editing >> 2. Pre-commit CI for CPython > > > I wasn't conflating them so much as not worrying about #1 as I know that's > not a hard problem to solve like the CPython-specific workflow is. Weelll, it's harder than I'd like, because software is software, and companies are companies, and I have some pretty major trust issues when it comes to the latter :) >> This means that for forge.python.org, I think "What about CPython?" >> should be something we take into account as a "What's next?" for the >> service, but our near term focus should be on making things like the >> developer guide and PEPs trivial to suggest edits to, trivial to >> approve edits to, and trivial to grant approval rights over. Those >> levels of access (who can suggest edits, who can approve edits, who >> can approve edit rights for others) should also all be completely >> transparent and changes in them should be tracked automatically rather >> than requiring manual updates to a text file. > > > OK, then let's choose the devguide or the PEPs to test Kalithea out on and > see how it goes since no one has experience with the service while I bet > everyone has experience with at least GitHub. If you can get whomever has > done the most amount of work on the devguide lately to sign off on it -- > probably Carol Willing -- then I say get a test instance of forge.python.org > up for the devguide and let's see what working with Kalithea is like (and it > also gets us more new contributor feedback than the PEPs would while also > not frustrating Guido when he deals with peps@ =). This is actually where the "What about BitBucket as an interim solution?" thread that spawned Donald's GitHub+Phabricator counter-proposal came from. I already know there are two major limitations of Kallithea that we'd likely want to address before adopting it as our primary repo hosting solution, even for the support repos: 1. 
Social auth support, so folks can log in with GitHub/BitBucket/Twitter/Facebook/Google et al credentials rather than having to create yet another account. 2. Online creation and acceptance of change proposals from third parties (If I understand Kallithea's current capabilities correctly, you can edit directly on your own repos, but there's no counterpart to the "fork->edit->PR" workflow GitHub & BitBucket offer for submitting online-only changes to other people's repos) Addressing the first one may also involve a Pylons -> Pyramid upgrade for Kallithea. I'd be prepared to coordinate a grant proposal to the PSF to fund that work (hopefully in collaboration with the folks from Agendaless, since I spoke to them about the prospect at PyCon US), but I wouldn't want to commit funds to it without some way of ensuring we're happy with a pull request based workflow first. Even beyond that, though, I'm also looking at the workflow the Kallithea team *themselves* are currently using, and thinking "Hmm, I quite like that approach". What they're doing is using BitBucket as their "working repo", and https://kallithea-scm.org/repos/ as their "repository of record". Since the long term outcome I'd like to see us get to is "able to accept PRs on both GitHub and BitBucket, repository of record on PSF hosted infrastructure", the transition plan that is starting to make sense to me is: 1. Move one or two support repos to the PSF BitBucket account, automatically update from there back to the existing repos on hg.python.org (we'd revoke direct commit access to the latter, but the build processes hanging off them would still trigger when commits were pushed by the automated sync) 2. Work with that model for a while, establish that folks are happy with the PR based workflow 3. Approach forge.python.org as an easier to manage hg.python.org, rather than as the key enabler in offering pull request based workflows for third party contributors to the support repos 4. The main capability of interest with Kallithea would then be gaining support for "remote PRs" where folks actually submitted the PR through GitHub or BitBucket, but it could be processed by reviewers in Kallithea. That capability doesn't exist yet (in any tool), but it's one Mozilla are also interested in. Basically Kalithea becomes a common frontend to bitbucket and github where we generate patches from PRs on either site and work with them on our end for reviews, applying them to our repo of record, etc. How would we handle changes that require custom fixes in two branches? They just fix it in both and we automatically handle the merge/revert steps? In this model, the main rationale for "Why BitBucket rather than GitHub?" is purely and simply "because migrating from Hg to git doesn't buy us enough for the cost in time and effort given that most repos are going to be remaining on hg.python.org". I guess I should update my workflow PEP to reflect the above changes in my thinking over the past several months... Yes please. :) -Brett Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ncoghlan at gmail.com Sun Jul 26 12:22:32 2015 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sun, 26 Jul 2015 20:22:32 +1000 Subject: [core-workflow] Starting the improved workflow discussion again In-Reply-To: References: Message-ID: On 25 July 2015 at 14:47, Brett Cannon wrote: > Basically Kalithea becomes a common frontend to bitbucket and github where > we generate patches from PRs on either site and work with them on our end > for reviews, applying them to our repo of record, etc. Yep, that's my hoped for outcome - that way, folks can develop and submit changes using their preferred tools (whether that's git or hg, GitHub or BitBucket), while we only have to deal with a single set of tooling on the backend. With the data models of git and hg being isomorphic to each other, it's technically feasible to provide that choice to contributors, and I think it's worth aiming to do so rather than forcing them to adapt to one or the other before we'll accept their contributions. I currently expect triagers and core developers would need to go back to the original services if we wanted to actually discuss the change with the submitter (at least in the near term), but I think there's actually potential value in offering that kind of split conversation - while the forge discussion would still be public if contributors wanted to go look at it (and even participate), the internal discussion between triagers and core developers would be separated from the third party contributor facing discussion. That's a standard feature of service-oriented ticketing systems such that the requestor doesn't get spammed about updates relating to internal implementation details that aren't relevant to them. Services like http://gerrithub.io/ show that this kind of external review service integration is already possible with GitHub, and the Gerrit plugin for that is open source: https://gerrit.googlesource.com/plugins/github/+/master/README.md We'd also hopefully be able to sync this up with bugs.python.org in a way that made it possible to check incoming change requests against the CLA records held there. That would likely require the ability to voluntarily link bugs.python.org accounts with accounts on the services where we decided to accept pull requests. > How would we handle > changes that require custom fixes in two branches? They just fix it in both > and we automatically handle the merge/revert steps? I don't think it's a coincidence that there's a correlation between "project uses a pull request based workflow" and "project doesn't provide maintenance releases for past feature releases" :) Beyond CPython, the main DVCS based workflows I'm familiar with that need to provide maintenance releases are Beaker and OpenStack, and those both rely on Gerrit. Quibbles with the busy nature of the Gerrit web UI aside, I think the underlying workflow is well designed for the task (at least the way we had it configured for Beaker), and it should be adaptable to Mercurial as well (especially with appropriate use of changeset evolution to manage the "still in review" commit stacks). 
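The core server-side primitive in that model is pretty small: once a change has been approved on one branch, graft it onto each additional target branch and let CI run there before anything is pushed. As a purely illustrative sketch (a hypothetical helper, not something Gerrit or Kallithea actually provides today):

    import subprocess

    def propose_forward_ports(repo, rev, branches):
        """Graft an approved changeset onto each maintenance branch.

        Returns the branches where the graft hit conflicts and needs a human.
        """
        needs_human = []
        for branch in branches:
            subprocess.check_call(["hg", "update", "--clean", branch], cwd=repo)
            if subprocess.call(["hg", "graft", "--rev", rev], cwd=repo) != 0:
                # discard the partial graft and flag the branch for manual porting
                subprocess.call(["hg", "graft", "--abort"], cwd=repo)
                needs_human.append(branch)
        return needs_human

    # e.g. propose_forward_ports("/srv/repos/cpython", "a1b2c3", ["3.4", "3.5"])

The interesting policy questions (auto-push the graft when CI is green, or just open a fresh review on the maintenance branch) then sit on top of that primitive rather than inside it.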
While it's sheer vaporware, I speculated on how that model could potentially be adapted to Kallithea earlier this year: http://lists.sfconservancy.org/pipermail/kallithea-general/2015q1/000231.html Once "accepted change proposals" are their own distinct entity that exists within the target repo (rather than being produced as a live diff against a separate clone), it becomes more feasible to offer the option to cherry-pick/graft them to other branches, rather than merging the heads. That way, the multiple branch workflow *always* involves independent commits, but in the trivial cases, the forward ports are a push-button exercise (with each branch getting an independent CI run prior to being merged). The current model of "commit locally, merge forward locally, push to remote" goes away in favour duplicating change proposals on the server so they can be readily applied to additional branches. You do pick up a new risk where the forward porting step is missed, but it's possible to deal with that at the issue tracker level by checking if the code commits match the affected branches. My experience with Beaker is that this risk is worth it, as you actually get a big gain from the fact that "needs tweaks to work on the newer branch" and "doesn't need to be forward ported at all" are better accounted for in the default workflow - the "common" case isn't common enough that the exceptional cases can be so readily ignored. That said, it would be good to have server side automation for the default case (forward porting an unmodified change), and that could potentially be done if we finally picked one of the conflict-free NEWS file automation ideas and pushed it through to completion. This would technically be the topic of a "Migrate CPython to forge.python.org" follow-up PEP rather than the current PEP 474 proposal, and I'm not sure when I'm going to have a chance to write that - PyCon Australia is next weekend, and then the weekend after I'm off to the US for a couple of weeks to attend Fedora's Flock conference and PyGotham. I can at least get the PEP 474 update done before I leave for the US, though, and I can include some of these ideas in there as "possible future work". Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia From donald at stufft.io Mon Jul 27 00:45:06 2015 From: donald at stufft.io (Donald Stufft) Date: Sun, 26 Jul 2015 18:45:06 -0400 Subject: [core-workflow] Starting the improved workflow discussion again In-Reply-To: References: Message-ID: On July 26, 2015 at 6:22:44 AM, Nick Coghlan (ncoghlan at gmail.com) wrote: > On 25 July 2015 at 14:47, Brett Cannon wrote: > > How would we handle > > changes that require custom fixes in two branches? They just fix it in both > > and we automatically handle the merge/revert steps? > > I don't think it's a coincidence that there's a correlation between > "project uses a pull request based workflow" and "project doesn't > provide maintenance releases for past feature releases" :) I think it is a coincidence (and in fact there are projects that use PRs and multiple maintenance series). No matter what any system requires some(one|thing) to trigger a patch against the other branches. There is basically no difference between sending a PR in GitHub to a particular series branch or submitting a CR in Gerrit to a different branch. GitHub lets you do it entirely in the web interface for the simple case where there are no merge conflicts even where you can create a PR that forward merges the old branches in the UI. 
Of course there's also nothing preventing automation from being added here that looks for changes to the old branches and automatically proposes a forward merge as well. Can you explain what differences you see between the various systems that would make this harder with PRs than with the others? --- Donald Stufft PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA From ncoghlan at gmail.com Mon Jul 27 15:27:26 2015 From: ncoghlan at gmail.com (Nick Coghlan) Date: Mon, 27 Jul 2015 23:27:26 +1000 Subject: [core-workflow] Starting the improved workflow discussion again In-Reply-To: References: Message-ID: On 27 July 2015 at 08:45, Donald Stufft wrote: > On July 26, 2015 at 6:22:44 AM, Nick Coghlan (ncoghlan at gmail.com) wrote: >> On 25 July 2015 at 14:47, Brett Cannon wrote: >> > How would we handle >> > changes that require custom fixes in two branches? They just fix it in both >> > and we automatically handle the merge/revert steps? >> >> I don't think it's a coincidence that there's a correlation between >> "project uses a pull request based workflow" and "project doesn't >> provide maintenance releases for past feature releases" :) > > I think it is a coincidence (and in fact there are projects that use PRs > and multiple maintenance series). No matter what any system requires > some(one|thing) to trigger a patch against the other branches. There is > basically no difference between sending a PR in GitHub to a particular > series branch or submitting a CR in Gerrit to a different branch. GitHub > lets you do it entirely in the web interface for the simple case where > there are no merge conflicts even where you can create a PR that forward > merges the old branches in the UI. Oh, nice - I didn't know GitHub could do that. That reduces the meaningful workflow differences I am aware of to Gerrit's support for "multiple approvals required for merge" and the more fine-grained access control model around things like who can submit change requests, who can mark them as verified, who can approve them, etc. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia