iostream-like lib?

Jude Venn jude-venn at blueyonder.co.uk
Fri May 16 17:48:47 EDT 2003


file.readline has an optional size parameter that limits the number of bytes returned; it sounds like it might help your cause.
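
For example, something along these lines ought to come close (untested; the
chunk size, and the carry-over of a possibly incomplete trailing word, are my
own guesses at what you need):

def read_tokens(filename, chunk_size=1024):
    # Yield whitespace-delimited tokens without holding more than roughly
    # chunk_size bytes of the file in memory at a time.
    f = open(filename)
    try:
        pending = ''                         # partial token from the last read
        while True:
            chunk = f.readline(chunk_size)   # read at most chunk_size bytes
            if not chunk:                    # end of file
                if pending:
                    yield pending
                break
            chunk = pending + chunk
            pending = ''
            tokens = chunk.split()
            # If the read stopped in the middle of a word, keep the tail and
            # glue it onto the next chunk instead of yielding it now.
            if tokens and not chunk[-1].isspace():
                pending = tokens.pop()
            for token in tokens:
                yield token
    finally:
        f.close()

Used as, say, "for token in read_tokens('huge.txt'): ...", it should never
pull in more than a chunk's worth of data at once, however long the lines are.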

hth,
Jude


On Thu, 15 May 2003 18:40:18 GMT
"Max Khesin" <max at cNOvSisiPonAtecMh.com> wrote:

> Efficiency is a possible reason. My files may be very large, and I would not
> like to read a couple of megabytes (if the line is that long) and then call
> split on it just to get the next word.
> 
> --
> ========================================
> Max Khesin, software developer -
> max at cNvOiSsPiAoMntech.com
> [check out our image compression software at www.cvisiontech.com, JBIG2-PDF
> compression @
> www.cvisiontech.com/cvistapdf.html]
> 
> 
> "Anton Muhin" <antonmuhin at sendmail.ru> wrote in message
> news:ba0mj3$1vns$1 at news.peterlink.ru...
> > Max Khesin wrote:
> > > The trouble is that readline() reads more than I have to in the first
> > > place, even before I call split().
> > > I did hack it along the lines you suggested with a generator (limiting
> > > readline() to a number of bytes and accounting for the last character
> > > being possibly whitespace). I was just wondering if (and why not) there
> > > is/is not direct support for whitespace-delimited input.
> >
> > I don't know :)
> >
> > The only thing I want to add: I see no reason why you should limit
> > readline to a number of bytes. Doesn't the following code (untested) work?
> >
> > def read_tokens(filename):
> >      f = file(filename)
> >      for l in f:
> >           for token in l.split():
> >                yield token
> >      f.close()
> >
> > Best regards,
> > anton.
> >
> 
> 
