The Cost of Dynamism (was Re: Python 2.x or 3.x, which is faster?)

Steven D'Aprano steve at pearwood.info
Thu Mar 24 13:56:37 EDT 2016


On Fri, 25 Mar 2016 02:03 am, Jon Ribbens wrote:

> On 2016-03-24, BartC <bc at freeuk.com> wrote:
>> On 24/03/2016 14:08, Jon Ribbens wrote:
>>> if you thought you needed to do that then most likely what you
>>> should actually be doing is re-writing your code so you no longer
>>> need to. However, you could do:
>>>
>>>    L[:] = [0] * len(L)
>>
>> OK, but that's just building a new list as I've already mentioned.
> 
> No it isn't, it's replacing the elements in-place, that's what the
> "L[:]" bit means. Replacing the list object itself would be:
> L = [0] * len(L)

Yes it is: a new list is built, copied, then discarded.

Python builds the [0]*len(L) list first, then copies the elements from it
into the existing L (which in CPython is just a fast memcpy of a whole
bunch of pointers, plus a resize of L if the lengths differ), then garbage
collects the temporary list.

A *sufficiently smart* Python interpreter might be able to optimize that
even further, but since it's just copying pointers from one array to
another, it's already pretty quick.

I imagine that, if you had enough memory, a *sufficiently enormous* list
would actually be faster to modify in place using the Pascal-ish idiom:

for i in range(len(L)):
    L[i] = 0


rather than building a temporary list and copying it. But I've never been
able to demonstrate that: no matter how big a list I created, it was always
faster to create a second one and then do slice assignment than to iterate
over it in a for loop changing each item in place.

This demonstrates that what logically ought to be more efficient (changing
values in place) is not necessarily *actually* more efficient.
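The comparison is easy to try with timeit. A rough sketch (list size and
repeat count are arbitrary; absolute numbers will vary by machine, though
on CPython the slice version typically wins, matching the observation
above):

```python
import timeit

setup = "L = [1] * 1_000_000"

# Build a temporary list, then slice-assign it into L.
slice_t = timeit.timeit("L[:] = [0] * len(L)", setup=setup, number=20)

# Pascal-ish in-place loop: assign each element individually.
loop_t = timeit.timeit(
    "for i in range(len(L)):\n    L[i] = 0", setup=setup, number=20)

print(f"slice assignment: {slice_t:.3f}s   explicit loop: {loop_t:.3f}s")
```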



-- 
Steven



