[core-workflow] Can core developers bypass PR checks?

Donald Stufft donald at stufft.io
Sun Feb 19 18:54:01 EST 2017


> On Feb 19, 2017, at 6:29 PM, Steve Dower <steve.dower at python.org> wrote:
> 
> On 19Feb2017 1035, Donald Stufft wrote:
>>> 
>>> On Feb 18, 2017, at 4:38 PM, Steve Dower <steve.dower at python.org
>>> <mailto:steve.dower at python.org>> wrote:
>>> 
>>> (I'm not subscribed to this list, so please keep me on CC)
>>> 
>>> Was there any discussion about allowing core developers to bypass any
>>> of the PR checks? Or was there an assumption that we'd just push
>>> directly to the main repo?
>> 
>> See: https://mail.python.org/pipermail/core-workflow/2017-February/000884.html
> 
> Thanks. The outcome of that thread (let's wait for a month) is the right one. Like Barry, I was just impatient.
> 
>>> For example, most of my contributions are to do with the Windows
>>> installer. Due to the nature of installers, virtually nobody else is
>>> confident to review anything complex, or even simple ones (such as
>>> https://github.com/python/cpython/pull/160) due to not being able to
>>> predict the flow-on effects. As a result, I make many small changes
>>> that would take a long time to get reviewed.
>> 
>> 
>> So as I said to Barry in the linked thread, part of the goal here is to
>> force the process for committers and non-committers to be as similar
>> as possible, which addresses the very real problem (one the GH
>> migration was meant to help with) that when you have two processes, a
>> streamlined one for those in power and a bulky slow one for those not,
>> it is hard to get the bulky slow one fixed because nobody in
>> power ever experiences it. By aligning the two processes as much as
>> possible, we identify pain points and are forced to solve them or deal
>> with them.
> 
> I have absolutely no problem with requiring core devs to send PRs - I am 100% against *anyone* directly pushing to the main repo - it's just the PR approval process that is the problem.

Yea, GitHub doesn’t give us a mechanism to require PRs to be sent other than requiring reviews or requiring status checks. I’m not personally ideologically opposed to letting people merge their own patches (in fact, when it was decided to turn on required reviews I thought it allowed people to self-approve; dunno if Brett did or not). I *think* it was primarily because we were worried that the status checks weren’t going to be stable enough to make them required.

Of course we can always just say it’s “required” as a policy, without any technical enforcement.
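
For what it’s worth, the two technical knobs GitHub does expose both live under branch protection. A rough sketch of what flipping them looks like via the REST API (the token is a placeholder, the status-check context name is a guess, and the payload field names may not match the current API exactly):

    # Rough sketch of configuring branch protection on the main repo.
    # Needs a token with admin rights; field names follow the GitHub
    # branch-protection REST API and may have changed since this was written.
    import requests

    TOKEN = "..."                     # placeholder personal access token
    REPO = "python/cpython"
    BRANCH = "master"

    payload = {
        # Require the listed CI contexts to pass before a PR can be merged.
        "required_status_checks": {
            "strict": False,
            "contexts": ["continuous-integration/travis-ci/pr"],
        },
        # Require at least one approving review on every PR.
        "required_pull_request_reviews": {
            "dismiss_stale_reviews": False,
        },
        # Apply the same rules to administrators (i.e. no bypass for core devs).
        "enforce_admins": True,
        # No additional push restrictions.
        "restrictions": None,
    }

    resp = requests.put(
        "https://api.github.com/repos/%s/branches/%s/protection" % (REPO, BRANCH),
        json=payload,
        headers={"Authorization": "token " + TOKEN,
                 "Accept": "application/vnd.github+json"},
    )
    resp.raise_for_status()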

> 
>> This sounds like one such problem. If you’re the only person capable
>> (for skill or confidence) of reviewing these changes, what happens if
>> you’re unavailable for some reason? (Burnout, illness, whatever). Is
>> there any effort to mentor or attract additional contributors who are
>> able to review these changes? I’m guessing there is not, and the risk to
>> allowing bypassing the requirement is that there never *is* a plan to
>> fix it.
> 
> I'd love to attract additional contributors for this. Unfortunately, working on installers on Windows is even less popular than working on OpenSSL :)

Yea, I can feel that pain, since packaging isn’t something a ton of people are enthused to work on either. It’s possible that it’s just really hard to find someone to do this and that it’s something we’ll have to live with. Hopefully not, though :)

> 
>> On the other hand, much like I said with Barry, it’s also possible that
>> these problems are special enough that we can or should just relax
>> restrictions in order to allow forward movement to still happen.
>> 
>> 
>>> 
>>> Similarly, many of my changes are not touched by the automated builds.
>>> Particularly anything to do with the installer is certainly not going
>>> to be built on a Linux box, and most (probably all) Windows buildbots
>>> don't build the installer right now. I've seen some systems where tags
>>> can be applied to a PR to skip build validation - I would certainly
>>> find that valuable.
>> 
>> This also sounds like a problem to me :) Obviously not the Linux part
>> (but then, burning some CPU on running the test suite on Linux isn’t
>> really that big of a deal is it?), but probably the installer should be
>> getting tested in some way?
> 
> It's manually tested, and typically on the build machine where it will be actually built. Getting it into a build somewhere may be useful, but adding 5-10 minutes to *every* checkin for something that basically only I ever change seems like a waste.
> 
> And you never regret burning some CPU on tests until you have more tests running than available machines. Previously we would batch test commits - I'm not sure if that's still happening or not, but that indicates some value in not spending the full time on each individual commit.
> 
> But maybe there's an easy way to only trigger tests when certain files are modified? It may not make sense to set that up for each individual module, but it might be worthwhile to trigger certain extra builds when PCBuild/* or tools/msi/* are modified.

We don’t actually have Windows tests being run at all on pull requests (largely because AppVeyor took too long to run), though we obviously still have the buildbot fleet running post-commit. This is something we should ideally rectify, though!
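
If we do wire up a Windows check on PRs, one way to keep it cheap would be to gate the expensive installer build on which files actually changed. A rough sketch of that gate (the path list and the base branch are assumptions about how a CI script might be wired up, not how anything is configured today):

    # Rough sketch: decide whether a PR touches anything installer-related.
    # The paths and the base branch name are assumptions for illustration.
    import subprocess
    import sys

    INSTALLER_PATHS = ("PCbuild/", "Tools/msi/")
    BASE = "origin/master"

    changed = subprocess.check_output(
        ["git", "diff", "--name-only", BASE + "...HEAD"],
        universal_newlines=True,
    ).splitlines()

    if any(path.startswith(INSTALLER_PATHS) for path in changed):
        print("Installer-related changes detected; run the Windows build.")
        sys.exit(0)

    print("No installer-related changes; the Windows build can be skipped.")
    sys.exit(1)   # the CI wrapper would treat a non-zero exit as "skip"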

On the Travis builders, we skip running the unit tests altogether if the PR is for something that couldn’t possibly change the outcome. I’m not familiar enough with the capabilities of the Python test runner to know whether it can mark tests to be excluded by default and only run when a certain flag is passed; if so, it shouldn’t be very hard to make something that only runs those tests on changes to PCBuild/* or tools/msi/*. Otherwise it might need a separate set of tests that can be run independently.
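
I don’t know offhand whether regrtest can do this already, but a crude version is easy to hand-roll with plain unittest. A minimal sketch, where the CPYTHON_TEST_MSI environment variable and the test class are made up for illustration:

    # Minimal sketch: tests that are skipped by default and only run when an
    # opt-in flag is set. CPYTHON_TEST_MSI is a made-up variable name.
    import os
    import unittest

    msi_tests_enabled = os.environ.get("CPYTHON_TEST_MSI") == "1"


    @unittest.skipUnless(msi_tests_enabled, "installer tests not requested")
    class InstallerSmokeTests(unittest.TestCase):
        def test_msi_layout_builds(self):
            # Placeholder for an actual installer build/validation step.
            self.assertTrue(True)


    if __name__ == "__main__":
        unittest.main()

CI would then only export that variable when something like the path check above says the installer bits changed.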

> 
>>> But essentially, I believe it should be within the power of core
>>> developers to bypass the checkin requirements (review + builds) and
>>> force a merge. Currently this is not possible - could it be enabled?
>>> Obviously it's placing trust in our core dev team, but that's the way
>>> it has always been, and we don't lose any traceability from this
>>> (unlike allowing people to delete branches from the repo, which can
>>> permanently lose code).
>> 
>> 
>> I mentioned it above, but to my mind (and this is just a personal
>> opinion) it’s not really anything about *trust*. If we have two
>> different processes, we are going to make the one that we, the people in
>> power, use be as streamlined as possible and the other process for non
>> committers will *likely* end up slowly bitrotting and not keeping up
>> with new tooling, changes, and ideas that can make it better. Forcing
>> everyone onto the same process means that as we figure out and
>> continuously improve the process for us, it also improves it for other
>> people who do not have a commit bit.
> 
> And as I said above, I don't think this is a different process. It's an escape hatch for the single process that's only available to certain trusted people (i.e. those who we trust to say whether code is good enough to be committed). But I'm happy to wait out the month and see if it's an issue.
> 
> (Was about to write "we can all complain at the PyCon US language summit", which made me worry that if that's *all* we complain about it may be useless. For those of us who will be there, should we organise a separate workflow summit? Preferably somewhere that serves strong drinks? ;) )
> 

—
Donald Stufft


