Fast recursive generators?

Michael McGlothlin michaelm at plumbersstock.com
Fri Oct 28 20:49:15 EDT 2011


>> I'm trying to generate a list of values
>
> Better to think of a sequence of values, whether materialized as a 'list' or
> not.

The final value will actually be a string, but it seems it is usually
faster to build a list of strings and join them once at the end than
to concatenate them one by one.
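
E.g. something like this (just a sketch; piece() stands in for
whatever produces each fragment):

parts = []
for i in range(n):
    parts.append(piece(i))    # collect the fragments in a list
s = ''.join(parts)            # one join at the end

# rather than:
s = ''
for i in range(n):
    s += piece(i)             # can re-copy the growing string each pass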

>> where each value is dependent
>> on the previous value in the list, and this bit of code needs to be
>> run repeatedly so I'd like it to be fast. It doesn't seem that
>> comprehensions will work as each pass needs to take the result of the
>> previous pass as its argument. map() doesn't seem likely. filter() or
>
> Comprehensions combine map and filter, both of which conceptually work on
> each item of a pre-existing list independently. (I am aware that the
> function passed can stash away values to create dependence.)

The problem is I don't really have a pre-existing list. I just want a
tight loop that generates the list from the initial value, as sketched
below. I could probably do something similar to my filter() method
further down, but it seems really convoluted. I thought maybe there
was a better way I just didn't know about. Python is full of neat
tricks, but it can be hard to remember them all.
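
Something like this is the plain loop I mean (just a sketch; func,
start, and big_number as above, and whether start itself belongs in
the list depends on the real code):

l = [start]
append = l.append            # bind the method once, since this loop is hot
value = start
for _ in range(big_number):
    value = func(value)      # each value depends on the previous one
    append(value)

Binding l.append to a local is just a small CPython trick to skip the
attribute lookup on every pass.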

>> reduce() seems workable but not very clean. Is there a good way to do
>> this? About the best I can get is this:
>>
>> l = [ func ( start ) ]
>> f = lambda a: l.append ( func ( l[-1] ) ) or a
>> filter ( f, range ( big_number, -1, -1 ) )
>>
>>
>> I guess I'm looking for something more like:
>>
>> l = do ( lambda a: func ( a ), big_number, start )
>
> Something like
>
> def do(func, N, value):
>     yield value
>     for i in range(1, N):
>         value = func(value)
>         yield value
>
> ?
> For more generality, make func a function of both value and i.
> If you need a list, "l = list(do(f,N,x))", but if you do not, you can do
> "for item in do(f,N,x):" and skip making the list.

Generators seem considerably slower than using a comprehension or
map/filter/reduce. I have tons of memory to work in (~100 GB of
actual RAM) and the resulting data isn't that big, so I'm more
concerned about CPU than memory.
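
For what it's worth, this is roughly how I've been timing the two
approaches (a rough sketch with timeit; the integer func here is just
a stand-in for the real transformation):

import timeit

setup = '''
def func(v):
    return v + 1             # stand-in for the real per-step function

def do(func, N, value):
    yield value
    for i in range(1, N):
        value = func(value)
        yield value

N = 100000
'''

gen_version = "l = list(do(func, N, 0))"

loop_version = '''
l = [0]
value = 0
for _ in range(N - 1):
    value = func(value)
    l.append(value)
'''

print(timeit.timeit(gen_version, setup=setup, number=100))
print(timeit.timeit(loop_version, setup=setup, number=100))

The plain loop is what keeps coming out ahead for me, hence the
concern about generator overhead.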


Thanks.