Can I do this faster?
Horst Gassner
horst at proceryon.at
Thu Aug 10 01:19:23 EDT 2000
Hello!
Thanks a lot to all of you!
The best approach is the one with the cache / lookup table / index,
whichever you want to call it.
Here are the results (using profiler):
old approach: 17033 calls - cumtime 2.49 (20000 - 2.92)
new approach: 13894 calls - cumtime 0.17 (20000 - 0.245)
I used Kevin's approach, but I think all the other code snippets
would produce similar results.
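For readers following along, the lookup-table idea from the thread can be sketched as below. This is a minimal illustration in modern Python, not the exact code from the posts; the names `row_dict`, `build_index`, and `get_row_id` are illustrative.

```python
def build_index(row_dict):
    """Build a reverse index mapping each entry's 'data' value to its key.

    Assumes the 'data' values are hashable and unique, as discussed
    in the thread.
    """
    return {entry['data']: key for key, entry in row_dict.items()}

def get_row_id(index, row):
    """O(1) average-case dict lookup instead of scanning every entry."""
    return index.get(row)  # returns None if the row is not present

# Hypothetical data for demonstration:
row_dict = {
    1: {'data': 'alpha'},
    2: {'data': 'beta'},
    3: {'data': 'gamma'},
}
index = build_index(row_dict)
print(get_row_id(index, 'beta'))   # 2
print(get_row_id(index, 'delta'))  # None
```

As Alain notes below, the index must be kept in sync whenever the underlying dictionary is modified, which is the usual trade-off of maintaining a database-style index.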
Nice greetings
Horst
Alain TESIO wrote:
>
> Hello,
>
> I think there is no magical trick to speed it up with this structure.
> You should create (and modify it when s.__rowDict is modified) a dictionary having
> s.__rowDict[key]['data'] as a key.
>
> It's similar to creating an index in a database (unique, since you don't seem to worry about
> duplicates):
>
> index = {}
> for key in s.__rowDict.keys():
>     index[s.__rowDict[key]['data']] = key
>
> Once you have this index, the lookup uses a hash and is immediate:
>
> def __GetRowID (index, row):
>     if index.has_key(row):
>         return index[row]
>
> Alain
>
> On Wed, 09 Aug 2000 08:35:58 +0200, Horst Gassner <horst at proceryon.at> wrote:
>
> >Hello!
> >
> >The following code is executed very often in my program and I would be
> >happy if someone could help me to speed this up.
> >
> >def __GetRowID (s, row):
> >    for key in s.__rowDict.keys():
> >        if s.__rowDict[key]['data'] == row:
> >            return key
> >
> >Thanx in advance
> >Horst
--
---------------------------
ProCeryon Biosciences GmbH.
Ing. Horst Gassner
Software Engineer