[Python-ideas] Iterable function calls and unpacking [was: PEP for issue2292, "Missing *-unpacking generalizations"]

Joshua Landau joshua at landau.ws
Mon Jul 15 22:32:08 CEST 2013


I've moved this to a different thread, as I agree with Guido that it's
a different PEP.

On 15 July 2013 21:06, Guido van Rossum <guido at python.org> wrote:
> On Mon, Jul 15, 2013 at 1:01 PM, Oscar Benjamin
> <oscar.j.benjamin at gmail.com> wrote:
>> On 15 July 2013 12:08, Joshua Landau <joshua.landau.ws at gmail.com> wrote:
>>> On 15 July 2013 11:40, Oscar Benjamin <oscar.j.benjamin at gmail.com> wrote:
>>>
>>> In fact, I'd much like it if there was an iterable "unpacking" method
>>> for functions, too, so "chain.from_iterable()" could use the same
>>> interface as "chain" (and str.format with str.format_map, etc.). I
>>> feel we already have a good deal of redundancy due to this.
>>
>> I've also considered this before. I don't know what a good spelling
>> would be but let's say that it uses *args* so that you have a function
>> signature like:
>>
>>     def chain(*iterables*):
>>        for iterable in iterables:
>>             yield from iterable

Personally, I think some form of decorator would be simpler:

@lazy_unpack()
def chain(*iterables):
    ...
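Such a decorator can't actually defer the caller's *-unpacking in
today's Python (the argument tuple is already built by the time the
decorator's wrapper runs), but a sketch of the intended shape, using
the hypothetical name lazy_unpack, might look like:

```python
from functools import wraps

def lazy_unpack(func):
    # Hypothetical decorator: by the time wrapper() runs, the caller's
    # *-unpacking has already built the argument tuple, so this only
    # demonstrates the intended API, not the laziness itself.
    @wraps(func)
    def wrapper(*iterables):
        return func(iterables)
    return wrapper

@lazy_unpack
def chain(iterables):
    # The callee sees a single iterable-of-iterables, as proposed.
    for iterable in iterables:
        yield from iterable
```

so that chain([1, 2], *[[3], [4]]) yields 1, 2, 3, 4 without a
separate chain.from_iterable() entry point.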

(and it could theoretically work for mappings too, by just "bundling"
them; useful for circumstances where the mapping is left untouched and
just passed to the next function in line.)

>> And then if the function is called with
>>
>>     for line in chain(first_line, *inputfile):
>>         # do stuff
>>
>> then iterables would be bound to a lazy generator that chains
>> [first_line] and inputfile. Then you could create the unpacking
>> iterator I wanted by just using chain e.g.:
>>
>>     chain(prepend, *iterable, append)
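For what it's worth, the lazy behaviour described above can already be
spelt with itertools.chain by wrapping the lone leading argument in a
list (the io.StringIO below is just a stand-in for the thread's
inputfile):

```python
from itertools import chain
import io

inputfile = io.StringIO("line2\nline3\n")
first_line = "line1\n"

# Today's spelling of the proposed lazy chain(first_line, *inputfile):
# chain a one-element list with the file object, so inputfile is never
# eagerly unpacked into an argument tuple.
lines = chain([first_line], inputfile)
```

The prepend/append case is chain([prepend], iterable, [append]) in the
same style.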
>
> But how could you do this without generating different code depending
> on how the function you are calling is declared? Python's compiler
> doesn't have access to that information.

You could simply make the compiled code do a run-time check whenever
the call contains an unpacking. While this would be slower for the
false positives, the number of times *args is passed straight through
(saving a redundant copy of the argument tuple) or is a simple
loop-once construct makes it plausible that those losses would be
outweighed.

It doesn't even reduce efficiency that much, either, as the worst-case
scenario is immediately falling back after checking a single C-level
attribute of the function, and the function doesn't need to be fetched
again or anything of the sort.

Then again, I'm guessing.
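As a rough Python-level model of that run-time check (lazy_call, the
segment convention, and the _lazy_star_args flag are all invented
names for illustration, not anything the compiler actually does):

```python
from itertools import chain

def lazy_call(func, *segments):
    # Hypothetical model of the proposed protocol: the "compiler" turns
    # a call like f(a, *b, c) into lazy_call(f, (a,), b, (c,)), so each
    # segment is an iterable of positional arguments.
    if getattr(func, "_lazy_star_args", False):
        # Opted-in callee: hand over one lazily chained iterator,
        # never materialising the argument tuple.
        return func(chain.from_iterable(segments))
    # Fallback after checking a single attribute: build the usual tuple.
    return func(*chain.from_iterable(segments))

def eager_sum(*args):
    return sum(args)

def lazy_sum(args):  # receives the chained iterator directly
    return sum(args)

lazy_sum._lazy_star_args = True
```

Both lazy_call(eager_sum, (1,), [2, 3], (4,)) and
lazy_call(lazy_sum, (1,), [2, 3], (4,)) give 10, but only the latter
avoids building the intermediate tuple.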

You'd also need to add a call that exhausts the iterator at the end of
every function using this (transparently, probably) so that it has no
obvious externally-visible effects. There would still be a change in
call order, but that's much more minor.

