[Python-Dev] Rework nntplib?

Jesse Noller jnoller at gmail.com
Wed Sep 15 20:39:43 CEST 2010


On Wed, Sep 15, 2010 at 2:29 PM, Barry Warsaw <barry at python.org> wrote:
> On Sep 15, 2010, at 02:02 PM, Jesse Noller wrote:
>
>>And who do you get to maintain all the new tests and buildbots you
>>spawn from running hundreds of community projects' unittests? How do
>>you know those tests are real and actually work? You quickly outstrip
>>the ability of the core team to stay on top of it, and you run into
>>the same issues the community buildbots project did in the past.
>
> This was something we were thinking about as part of the snakebite project.  I
> don't know if that's still alive though.  I would love to have *some* kind of
> health/QA metric show up next to packages on the Cheeseshop for example.

My GSoC student this past year worked on a testing backend for PyPI.
I think there's a strong desire for this, but a lack of
person-resources. Also, the onus has to be on the maintainers of the
package, not on core.

> It's also something I've been mulling over as part of QA for the large number
> of Python packages available in Debian/Ubuntu.  This was in the context of
> trying to judge the health of those packages for Python 2.7.
>
> At our last UDS I spoke to a number of people and thought that it actually
> wouldn't be infeasible to set up some kind of automated Hudson-based farm to
> run test suites for all the packages we make available.  I think all the basic
> pieces are there, it's mostly a matter of finding the time to put it all
> together.  I of course did not find the time :/ so it hasn't happened yet.

Yeah, we have a plethora of options - Hudson, pony-build, Buildbot,
pyti (http://bitbucket.org/mouad/pypi-testing-infrastructure-pyti) and
many more. We also have isolation tools (such as virtualenv) and
awesome little utilities like tox (http://pypi.python.org/pypi/tox)
for doing all of this now.

What prohibits it is a lack of manpower and time.
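Just to make "the pieces are already there" concrete, something like
the following is roughly what gluing virtualenv and a per-version test
run together looks like. It's only a rough sketch - the interpreter
paths and package directory are invented, and it assumes the package
wires its tests up to "setup.py test":

    # Hypothetical sketch, not real infrastructure: build one isolated
    # environment per interpreter and run a package's test suite in
    # each, collecting a simple pass/fail per version.
    import os
    import subprocess

    INTERPRETERS = {                       # whatever the farm has installed
        "2.6": "/usr/bin/python2.6",
        "2.7": "/usr/bin/python2.7",
        "3.1": "/usr/bin/python3.1",
    }
    SOURCE_DIR = "src/some-pypi-package"   # assumes an unpacked sdist here

    def run_suite(version, interpreter):
        env = os.path.abspath("env-%s" % version)
        # the same isolation trick virtualenv/tox already give us
        subprocess.call(["virtualenv", "-p", interpreter, env])
        python = os.path.join(env, "bin", "python")
        # assumes the package exposes its tests via "setup.py test"
        return subprocess.call([python, "setup.py", "test"],
                               cwd=SOURCE_DIR) == 0

    results = dict((v, run_suite(v, p)) for v, p in INTERPRETERS.items())
    for version in sorted(results):
        print("%s: %s" % (version, "pass" if results[version] else "FAIL"))

The hard part isn't a script like that; it's keeping thousands of
those runs healthy, which is exactly the person-resource problem.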

> Of course, the set of platforms, Python versions, and packages we care about
> is much narrower than the full Python community, but it would still be an
> interesting exercise.

If we had an existing back end for this - say Python 2.6, 2.7 and 3.1
- and package maintainers could use that infrastructure, on PyPI, to
run their tests so we could see green dots (pass) for those packages,
I think it's a short jump from there to having a "dev" section where
we take test suites that pass on 3.1 and run them on a new
experimental interpreter.

If we know in advance that a suite passes on the old interpreter, then
when it fails on the new one we can at least assume that we may have
broken something.
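Put differently, a failure on the experimental interpreter only means
something for suites that were already green on the stable one. As a
tiny illustration (the result data here is invented):

    # Hypothetical sketch: only report a package as "possibly broken by
    # the experimental interpreter" when its suite passed on the stable
    # one; suites that were already red tell us nothing new.
    stable_results = {"foo": True, "bar": True, "baz": False}        # e.g. 3.1
    experimental_results = {"foo": True, "bar": False, "baz": False} # new build

    def likely_regressions(stable, experimental):
        return sorted(pkg for pkg, ok in stable.items()
                      if ok and not experimental.get(pkg, False))

    print(likely_regressions(stable_results, experimental_results))
    # -> ['bar']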

