[Catalog-sig] Package Quality Measurement for packages on Pypi

David Lyon david.lyon at preisshare.net
Thu Nov 19 00:02:33 CET 2009


On Wed, 18 Nov 2009 14:33:27 -0600, Robert Kern <robert.kern at gmail.com>
wrote:
> Personally, I don't want to see any aggregates of incommensurable
> observations ever. I don't mind seeing a dashboard of individual
> observations (even if I disagree with many of the individual
> measurements), but aggregating them with arbitrary weights into a
> single score is simply wrong. I disagree with including user ratings,
> too, for much the same reasons.

I'm not sure whether CPANTS displays its findings/ratings to package users
on CPAN either. I think you have to navigate to a separate site to see
the grade.

The purpose of testing packages isn't to warn users off a package, say,
because it has no docstrings. It's about taking their package, running its
internal test suite on a number of different platforms (Windows, Linux,
Mac), and checking that it installs properly with
distribute/setuptools/distutils/pip.
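
The install check, at least, is easy enough to automate. Something along
these lines would do (just a rough sketch; the function name and the
throwaway-virtualenv approach are only illustrative, not an existing PyPI
service):

    import os
    import subprocess
    import tempfile
    import venv

    def installs_cleanly(sdist_path):
        """Return True if pip can install the sdist into a fresh venv."""
        with tempfile.TemporaryDirectory() as envdir:
            # Build a throwaway virtual environment to install into.
            venv.create(envdir, with_pip=True)
            # Layout differs: "bin" on Linux/Mac, "Scripts" on Windows.
            bindir = "Scripts" if os.name == "nt" else "bin"
            pip = os.path.join(envdir, bindir, "pip")
            result = subprocess.run([pip, "install", sdist_path],
                                    capture_output=True, text=True)
            return result.returncode == 0

Run that over the sdist on each of Windows, Linux and Mac and you have
the cross-platform install part of the report.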

After that, the idea is to probe the package and put some numbers
(ratings) on what is and isn't done: documentation, tests, pylint, pep8.
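
Those probes don't have to be elaborate either. Here is a sketch of two
of them, docstring coverage and style checking (it assumes the
pycodestyle checker, the modern name for the pep8 tool, is installed;
the function names are only illustrative):

    import importlib
    import inspect
    import subprocess
    import sys

    def docstring_coverage(module_name):
        """Return (documented, total) counts for public members."""
        module = importlib.import_module(module_name)
        members = [obj for name, obj in inspect.getmembers(module)
                   if not name.startswith("_")
                   and (inspect.isfunction(obj) or inspect.isclass(obj))]
        documented = sum(1 for obj in members if inspect.getdoc(obj))
        return documented, len(members)

    def style_complaints(path):
        """Count warnings reported by the pycodestyle (pep8) checker."""
        result = subprocess.run([sys.executable, "-m", "pycodestyle", path],
                                capture_output=True, text=True)
        return len(result.stdout.splitlines())

Raw numbers like those could sit on a dashboard as individual
observations without being rolled up into a single score.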

Any new package writer would expect to submit a package and get a rating
in the C or D range (if graded with letters). With some extra polishing,
you'd expect them to be interested in moving their package up into the
A or B range.

I can't see why it would be so wrong to give them tools that would let
them do something like that. Otherwise there's no incentive to try to
make things good, because it looks like nobody cares.

Ratings that help a package developer identify weaknesses are a good
thing, both for the developer and for the Python community at large.

David
