Eliminating duplicate entries from a list efficiently
phil hunt
zen19725 at zen.co.uk
Sat Jul 3 11:30:34 EDT 2004
On 3 Jul 2004 00:40:00 GMT, Paul <paul at oz.net> wrote:
>Can anyone suggest an efficient way to eliminate duplicate entries
>in a list? The naive approach below works fine, but is very slow
>with lists containing tens of thousands of entries:
>
>def uniq(list):
>    u = []
>    for x in list:
>        if x not in u: u.append(x)
>    return u
>
>print uniq(['a','b','c','a'])
>['a', 'b', 'c']
How about:
def uniq(list):
    if not list:
        return []       # avoid IndexError on an empty list
    list.sort()         # put duplicate entries together
    lastValue = "different from 0th value" + str(list[0])
    u = []
    for item in list:
        if item != lastValue:
            u.append(item)
            lastValue = item
    return u
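For hashable items there is an even faster option (not from the thread; a sketch in modern Python 3): track seen values in a set, which makes each membership test O(1) on average, giving O(n) overall without sorting and while preserving the original order:

```python
def uniq(items):
    """Return a new list with duplicates removed, keeping first-seen order.

    Runs in O(n) expected time; requires the items to be hashable.
    """
    seen = set()
    result = []
    for x in items:
        if x not in seen:   # set lookup is O(1) on average
            seen.add(x)
            result.append(x)
    return result

print(uniq(['a', 'b', 'c', 'a']))  # ['a', 'b', 'c']
```

Unlike the sort-based version, this does not mutate the input list, but it does assume every element can be put in a set.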
--
"It's easier to find people online who openly support the KKK than
people who openly support the RIAA" -- comment on Wikipedia
(Email: zen19725 at zen dot co dot uk)