nntplib, huge xover object

Ben Hutchings do-not-spam-ben.hutchings at businesswebsoftware.com
Thu Apr 3 11:51:25 EST 2003


In article <bmqn8vonc4qkmnr2kcl573k4l58c2t70fq at 4ax.com>, carroll at tjc.com wrote:
> On Tue, 1 Apr 2003 23:52:52 -0500, David Sfiligoi
><webmaster at quanta1.world--vr.com> wrote:
> 
>>I built a small script that uses the xover function in the nntplib
>>module.  The problem I came across is that xover can return a huge tuple
>>when there are thousands of articles in a newsgroup (which is frequent).
>>
>>testxover_resp, testxover_subs = s.xover(start, end)
>>
>>On my system it's not an issue; I have 768 MB of RAM... but I have to
>>believe that there is a way to optimise this while keeping it all simple.
>>
>>How can I limit the amount of memory xover takes?  The other day I did
>>an xover of a huge group and the python process was taking about 650 MB
>>of resident memory.
> 
> This is a tough one; the problem is, if you were issuing the XOVER
> command to the NNTP server yourself, you could handle the output
> line-by-line as it comes in.  But nntplib does that on your behalf and
> hands you the whole tuple, filled with all the headers.
>
> So as long as you're using nntplib instead of rolling your own (and I
> think that using nntplib is the way to go), you're going to have the
> constraint that you need the resources to handle that big tuple.
<snip>

Or you can edit nntplib to add the option of storing XOVER output to a
file.  It doesn't look like this would be too hard.  The 'file' could
be an object that processes each line as it comes in, rather than a
real file.  Then send your change back to the nntplib author and hope
it's accepted into the official version.
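For illustration, here's a minimal sketch of that idea done as a
subclass rather than an edit to nntplib itself.  It leans on the
internal helpers (putcmd, getresp, getline) found in the Python 2.x
nntplib source, which are undocumented and could change between
versions; the StreamingNNTP name, the xover_each method, and the
callback protocol are my own invention, not anything nntplib provides:

    import nntplib

    class StreamingNNTP(nntplib.NNTP):
        """NNTP subclass whose XOVER variant hands each overview line
        to a callback as it arrives, instead of building one huge list."""

        def xover_each(self, start, end, callback):
            # Same command string the stock xover() sends.
            self.putcmd('XOVER ' + start + '-' + end)
            resp = self.getresp()
            if resp[:3] not in nntplib.LONGRESP:
                raise nntplib.NNTPReplyError(resp)
            while 1:
                line = self.getline()
                if line == '.':          # lone dot ends the response
                    break
                if line[:2] == '..':     # undo RFC 977 dot-stuffing
                    line = line[1:]
                callback(line)           # process now, store nothing
            return resp

Used much like the xover() call quoted above, except a handler replaces
the result tuple (the server name and handler here are placeholders):

    def handle(line):
        # Overview fields are tab-separated: number, subject, from, ...
        artnum, subject = line.split('\t')[:2]
        print artnum, subject

    s = StreamingNNTP('news.example.com')
    resp, count, first, last, name = s.group('comp.lang.python')
    s.xover_each(first, last, handle)

Memory use then stays flat no matter how many articles the group has,
since each line is discarded as soon as the callback returns.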



