RSS aggregator with curses and feedparser

Roberto Bechtlufft robertobech at gmail.com
Sun Sep 24 10:43:45 EDT 2006


And another thing: feedparser returns the result entries as
dictionaries. What's the best approach to creating my cache file? I
see that the cache file in liferea is an XML file. Should I try to
create my own XML file based on the results from feedparser?
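
For what it's worth, here's the sort of thing I've been experimenting
with instead of rolling my own XML: just pickling the entry list.
(The file name is only a placeholder.) Pickle copes with the nested
structures and parsed dates inside the entries, which I'd otherwise
have to serialize by hand. No idea if this is the "right" answer, it
was just the path of least resistance:

import pickle

CACHE_FILE = 'feed_cache.pkl'  # placeholder name

def load_cache():
    """Return the previously stored entries, or [] if there are none."""
    try:
        with open(CACHE_FILE, 'rb') as f:
            return pickle.load(f)
    except (IOError, pickle.PickleError):
        return []

def save_cache(entries):
    """Write the full entry list back to disk."""
    with open(CACHE_FILE, 'wb') as f:
        pickle.dump(entries, f)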

Thanks for your help.

Roberto Bechtlufft wrote:
> Hi, I'm new around here... I'm a Python hobbyist, and I'm far from
> being a professional programmer, so please be patient with me...
>
> I'm working on my first Python program: a curses-based RSS
> aggregator. It's basically a clone of snownews, one of my very
> favorite programs, but I want to add some functionality. Whenever
> you update a feed in snownews, it discards all previous topics,
> even the ones you haven't read, and keeps only the current ones. I
> want my program to actually aggregate feeds. I also want it to be
> able to read Atom feeds, which is easy since I'm using feedparser.
> I see that liferea keeps a cache file for each feed it downloads,
> storing all of its topics, and I'm doing the same here.
>
> A question: how do I tell whether a certain entry has already been
> downloaded? Should I use the entry's date tag or its link tag?
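
Following up on my own question there: the naive duplicate check I've
been playing with keys on the entry's id when the feed supplies one
(feedparser maps RSS <guid> and Atom <id> to entry.id) and falls back
to the link, on the assumption that dates can change whenever a feed
is regenerated. Here, seen_keys is just a set I rebuild from the
cache at startup. Does that sound sane?

import feedparser

def entry_key(entry):
    """Prefer the feed-supplied unique id; fall back to the link."""
    return entry.get('id') or entry.get('link')

def new_entries(url, seen_keys):
    """Yield entries from the feed at url not seen before."""
    feed = feedparser.parse(url)
    for entry in feed.entries:
        key = entry_key(entry)
        if key and key not in seen_keys:
            seen_keys.add(key)
            yield entry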



