[Catalog-sig] simple index and urls extracted from metadata text fields

Tarek Ziadé ziade.tarek at gmail.com
Sun Sep 13 14:21:55 CEST 2009


2009/9/13 "Martin v. Löwis" <martin at v.loewis.de>:
>> $ easy_install hachoir-core
>> Searching for hachoir-core
>> Reading http://pypi.python.org/simple/hachoir-core/
>> Reading http://hachoir.org/wiki/hachoir-core   <- this page doesn't
>> exist anymore; that's an old home URL
>>
>>      page, you're blocked for a while !!
>>
>> If we keep this behavior, the client side should be smarter.
>
> I disagree. It's the package maintainer's task to make sure the
> published URLs actually work.
>

They do, as a matter of fact. But once a URL is published, it's
published "forever".

Take hachoir-core as an example. The home URL was changed in 1.2.1 to:

"http://bitbucket.org/haypo/hachoir/wiki/hachoir-core"

The 1.2 version's home URL was:

"http://bitbucket.org/haypo/hachoir/wiki/hachoir-core"

But the PyPI simple API will keep track of both:

http://pypi.python.org/simple/hachoir-core

This leads to the problem described above, because the script visits
all the URLs before it decides which tarball to pick.
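Roughly, what the client does with that page is something like the
sketch below (not the actual setuptools code; the regex link extraction
and the crawl() helper are just for illustration):

import re
import urllib.request

SIMPLE_INDEX = "http://pypi.python.org/simple/hachoir-core/"

def extract_links(html):
    # Naive link extraction, only for the sketch; a real client would
    # use an HTML parser on the simple index page.
    return re.findall(r'href="([^"]+)"', html)

def crawl(index_url):
    page = urllib.request.urlopen(index_url)
    links = extract_links(page.read().decode("utf-8", "replace"))
    page.close()
    for link in links:
        # Every recorded URL gets visited, including the home pages of
        # *old* releases.  A dead host here means waiting for the full
        # OS socket timeout before moving on to the next link.
        try:
            urllib.request.urlopen(link).close()
        except OSError as exc:
            print("could not read", link, "->", exc)

crawl(SIMPLE_INDEX)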

So what should the maintainer do?

Recreate a new version of an old release so that the old URL is removed
from PyPI? Just re-register the metadata, knowing that the metadata
contained in the tarball is no longer the same?

I mean, if I change my home URL in the 25th version of my distribution,
do I need to re-release the 24 previous versions of the distribution?

We can handle timeouts on the client side, of course, but I don't
understand why you don't see the consistency problem in the simple API
that I am describing here.
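To be clear, handling it on the client side would look roughly like
this (a sketch only; the 10-second value is arbitrary). It avoids the
long hang, but it does nothing about the stale URL still being listed:

import socket
import urllib.request

# Cap how long we wait on any dead home URL instead of relying on the
# OS default socket timeout.
socket.setdefaulttimeout(10)

def probe(url):
    # Return True if the page answers, False if it is dead or
    # unreachable (e.g. the old hachoir.org home URL), so the client
    # can skip it quickly and try the next candidate link.
    try:
        urllib.request.urlopen(url, timeout=10).close()
        return True
    except OSError:
        return False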

Maybe the solution would be to list only the latest home URL link on that page.
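Something along these lines on the server side, assuming the releases
and their metadata are available as a simple mapping (the data
structure and the naive version ordering here are made up for
illustration, not how PyPI actually stores releases):

def links_for_simple_page(releases):
    # releases: {"1.2":   {"home_page": "http://hachoir.org/wiki/hachoir-core",
    #                      "download_urls": [...]},
    #            "1.2.1": {"home_page": "http://bitbucket.org/haypo/hachoir/wiki/hachoir-core",
    #                      "download_urls": [...]}}
    # Keep every download URL, but expose only the home page of the
    # latest release instead of one per historical release.
    latest = max(releases)  # naive version ordering, good enough here
    links = []
    for version, meta in releases.items():
        links.extend(meta.get("download_urls", []))
    links.append(releases[latest]["home_page"])
    return links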

Tarek

