[Python-Dev] Tracker archeology

Daniel (ajax) Diniz ajaksu at gmail.com
Fri Feb 13 07:34:35 CET 2009


Daniel (ajax) Diniz wrote:
> "Martin v. Löwis" wrote:
>>> Now, getting into pie-in-the-sky territory, if someone (not logged in)
>>> were to download all issues for scraping and feeding into a local
>>> database, what time of day would be least disastrous for the server? :)
>>
>> I think HTML scraping is a really bad idea. What is it that you
>> specifically want to do with these data?
>
> For starters, free-form searches, aggregation, and filtering of
> results. The web interface is pretty good for handling individual
> issues, but not so good for adding someone as nosy to lots of issues.
>
> With some more time and effort, I'd be able to:
> Organize a local workflow with tweaked UI
> Send emails before they were done :D
> Use a VCS for in-progress activities
> Figure out how to serialize and submit the work done locally
> Share results with interested parties off-tracker (e.g., the
> auto-nosy mentioned up-thread)

Right now, more efficient searching and aggregation, along with some
(local, flexible) UI tweaks, sum it up.
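
To make that concrete, here is a rough sketch of the kind of local
queries I have in mind, assuming the issues have already been dumped
into a local SQLite file. The table layout and column names below are
invented for illustration, as is the 'my_user' tracker name:

import sqlite3

# Assumed layout: a local issues.db with one row per tracker issue;
# the columns (id, title, status, component, nosy) are made up.
conn = sqlite3.connect("issues.db")

# Free-form search plus aggregation that the web UI makes tedious:
# count open issues per component whose title mentions "unicode".
rows = conn.execute("""
    SELECT component, COUNT(*) AS n
    FROM issues
    WHERE status = 'open' AND title LIKE '%unicode%'
    GROUP BY component
    ORDER BY n DESC
""").fetchall()
for component, n in rows:
    print("%s: %d" % (component, n))

# The bulk nosy change that is painful through the web forms:
# mark myself nosy on every match locally, to be submitted later.
conn.execute("""
    UPDATE issues
    SET nosy = nosy || ',my_user'
    WHERE status = 'open' AND title LIKE '%unicode%'
      AND nosy NOT LIKE '%my_user%'
""")
conn.commit()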

Maybe I can offer a patch for something like PyPI's 'simple' interface?
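
By 'simple' I mean a bare, link-only page in the spirit of PyPI's
/simple/ index: cheap to serve and trivial for scripts to parse. A toy
WSGI sketch of the idea (the get_open_issues() helper and its rows are
invented placeholders; a real patch would hook into the tracker's own
backend instead):

def get_open_issues():
    # Placeholder data; real code would query the tracker database.
    return [(1, "first example issue"), (2, "second example issue")]

def simple_index(environ, start_response):
    # One plain HTML page: an anchor per open issue and nothing else.
    lines = ["<html><body>"]
    for issue_id, title in get_open_issues():
        lines.append('<a href="issue%d">%d: %s</a><br/>'
                     % (issue_id, issue_id, title))
    lines.append("</body></html>")
    start_response("200 OK", [("Content-Type", "text/html")])
    return ["\n".join(lines)]

if __name__ == "__main__":
    # wsgiref is enough to try it locally.
    from wsgiref.simple_server import make_server
    make_server("localhost", 8000, simple_index).serve_forever()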

Cheers,
Daniel

