[Python-Dev] Rationale behind lazy map/filter

Graham Gower graham.gower at gmail.com
Tue Oct 13 19:42:09 EDT 2015


On 14 October 2015 at 09:59, Steven D'Aprano <steve at pearwood.info> wrote:
> On Tue, Oct 13, 2015 at 11:26:09AM -0400, Random832 wrote:
>> "R. David Murray" <rdmurray at bitdance.com> writes:
>>
>> > On Tue, 13 Oct 2015 14:59:56 +0300, Stefan Mihaila
>> > <stefanmihaila91 at gmail.com> wrote:
>> >> Maybe it's just python2 habits, but I assume I'm not the only one
>> >> carelessly thinking that "iterating over an input a second time will
>> >> result in the same thing as the first time (or raise an error)".
>> >
>> > This is the way iterators have always worked.
>>
>> It does raise the question though of what working code it would actually
>> break to have "exhausted" iterators raise an error if you try to iterate
>> them again rather than silently yield no items.
>
> Anything which looks like this:
>
>
> for item in iterator:
>     if condition:
>         break
>     do_this()
> ...
> for item in iterator:
>     do_that()
>
>
> If the condition is never true, the iterator is completely processed by
> the first loop, and the second loop is a no-op by design.
>
> I don't know how common it is, but I've written code like that.
>

I wrote code like this yesterday, to parse a file where there were
multiple lines of one type of data, followed by multiple lines of
another type of data.
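That pattern can be sketched as follows. This is a minimal illustration, not the actual parser; the "---" delimiter and the two-section file layout are hypothetical:

```python
def parse_sections(lines):
    """Split a line sequence into two sections at a delimiter.

    Uses ONE iterator consumed by two for-loops: the first loop
    breaks at the delimiter, and the second resumes exactly where
    the first stopped. If the delimiter never appears, the first
    loop exhausts the iterator and the second loop is a no-op --
    which is the behavior being discussed in this thread.
    """
    first, second = [], []
    it = iter(lines)
    for line in it:
        if line.strip() == "---":  # hypothetical section delimiter
            break
        first.append(line.strip())
    for line in it:  # continues from where the first loop left off
        second.append(line.strip())
    return first, second
```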

I can think of more complex examples involving two (or more) iterators
where one might reasonably do similar things. E.g. file 1 contains
data, some of which is a subset of the data in file 2; both files are
sorted, and during parsing one wishes to match up the common
elements.
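One way to sketch that matching step is a classic sorted-merge walk over two iterators, advancing whichever one is behind. This is only an illustration of the idea, not any particular parser; it assumes both inputs are sorted and element-wise comparable:

```python
def common_sorted(xs, ys):
    """Yield elements common to two sorted iterables.

    Advances each iterator independently, so neither input needs
    to be materialized in memory -- the kind of situation where
    lazy, single-pass iterators earn their keep.
    """
    it1, it2 = iter(xs), iter(ys)
    try:
        a = next(it1)
        b = next(it2)
        while True:
            if a == b:
                yield a
                a = next(it1)
                b = next(it2)
            elif a < b:
                a = next(it1)  # xs is behind; advance it
            else:
                b = next(it2)  # ys is behind; advance it
    except StopIteration:
        return  # either input exhausted; no further matches possible
```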

-Graham
