Eliminating duplicate entries from a list efficiently
Larry Bates
lbates at swamisoft.com
Tue Jul 6 12:39:50 EDT 2004
Take a look at the recipe here:
http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/204297
Another way would be to create a dictionary (which would
eliminate the duplicates) and then convert back to a list
(using `lst` as the input name here, since `list` shadows the builtin):
uniq_list = dict(zip(lst, lst)).keys()
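The same dictionary trick, written as an explicit loop (a sketch; `uniq_dict` is an illustrative name, and the entries must be hashable):

```python
def uniq_dict(items):
    # Dictionary keys are unique, so loading the items as keys
    # drops the duplicates.
    seen = {}
    for x in items:
        seen[x] = True
    # On CPython 3.7+ dicts preserve insertion order, so this keeps
    # first-seen order; on 2004-era Pythons the order was arbitrary.
    return list(seen)
```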
FYI,
Larry Bates
Syscon, Inc.
"phil hunt" <zen19725 at zen.co.uk> wrote in message
news:slrncedk8p.jr.zen19725 at cabalamat.cabalamat.org...
> On 3 Jul 2004 00:40:00 GMT, Paul <paul at oz.net> wrote:
> >Can anyone suggest an efficient way to eliminate duplicate entries
> >in a list? The naive approach below works fine, but is very slow
> >with lists containing tens of thousands of entries:
> >
> >def uniq(list):
> >    u = []
> >    for x in list:
> >        if x not in u: u.append(x)
> >    return u
> >
> >print uniq(['a','b','c','a'])
> >['a', 'b', 'c']
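For comparison, the usual fast fix for this naive version keeps the same first-seen order but replaces the linear scan of `u` with a set lookup (a sketch, assuming the entries are hashable; `uniq_fast` is an illustrative name):

```python
def uniq_fast(items):
    # Same result as the quoted uniq(), but the membership test uses
    # a set (average O(1)) instead of scanning the output list (O(n)),
    # so the whole pass is roughly O(n) overall instead of O(n^2).
    seen = set()
    out = []
    for x in items:
        if x not in seen:
            seen.add(x)
            out.append(x)
    return out
```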
>
>
> How about:
>
> def uniq(list):
>     list.sort()    # put duplicate entries together
>     u = []
>     lastValue = object()    # sentinel: compares unequal to any entry,
>                             # and works on an empty list
>     for item in list:
>         if item != lastValue:
>             u.append(item)
>             lastValue = item
>     return u
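A compact equivalent of this sort-then-compare loop uses itertools.groupby (a sketch; like the loop above it discards the original order, and it requires sortable entries):

```python
from itertools import groupby

def uniq_sorted(items):
    # sorted() puts equal items next to each other; groupby then
    # yields one key per run of equal items, dropping the duplicates.
    return [key for key, _ in groupby(sorted(items))]
```

Unlike the in-place `list.sort()` above, `sorted()` also leaves the caller's list unmodified.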
>
>
>
> --
> "It's easier to find people online who openly support the KKK than
> people who openly support the RIAA" -- comment on Wikipedia
> (Email: zen19725 at zen dot co dot uk)
>
>