Can you make this faster?

Kamilche klachemin at home.com
Sun Jun 27 22:50:14 EDT 2004


Roy Smith <roy at panix.com> wrote in message news:<roy-00797E.15144527062004 at reader2.panix.com>...

> The first thing is to profile your code and make sure this really is 
> significant in the overall scheme of things.  From your description, it 
> sounds like you've already done that, so I'll go with that assumption.

Yeah, it makes a difference. I did run profile on it, plus custom
timing tests. Sorry I didn't mention the typical arguments!

This routine gets passed anywhere from 5 to 20 smallish arguments,
each less than 20 bytes in size. The most likely type is an int, then a
string, and so on. I already test for them in order of likelihood.

I tried a dictionary lookup for the argument types already, and it
slowed it down, so I took it out.
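
Roughly what I mean by the dictionary lookup, versus the ordered
isinstance chain I kept (the helper names here are invented for the
sketch, not my actual code):

    import struct

    def _pack_int(value):
        # network-order 32-bit int
        return struct.pack('!i', value)

    def _pack_str(value):
        # length-prefix the string so the receiver knows where it ends
        data = value.encode('utf-8')
        return struct.pack('!i', len(data)) + data

    # dict dispatch: one lookup per argument (the version that was slower for me)
    _PACKERS = {int: _pack_int, str: _pack_str}

    def pack_arg_dict(arg):
        return _PACKERS[type(arg)](arg)

    # isinstance chain, ordered by likelihood: int first, then string
    def pack_arg_chain(arg):
        if isinstance(arg, int):
            return _pack_int(arg)
        if isinstance(arg, str):
            return _pack_str(arg)
        raise TypeError('unhandled argument type: %r' % type(arg))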

The best I was able to do was change the 'list append' to a simple
string += . That sped it up by 50%... only because the format
strings are relatively short, I imagine.
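
For the curious, the two versions look roughly like this (pack_one
stands in for the per-argument packing; the details are invented for
the sketch):

    import struct

    def pack_one(arg):
        # stand-in per-argument packer: int, or length-prefixed string
        if isinstance(arg, int):
            return struct.pack('!i', arg)
        data = str(arg).encode('utf-8')
        return struct.pack('!i', len(data)) + data

    def build_record_join(args):
        # the 'list append' version: collect pieces, join once at the end
        parts = []
        for arg in args:
            parts.append(pack_one(arg))
        return b''.join(parts)

    def build_record_concat(args):
        # the string += version that measured about 50% faster for me,
        # presumably because the records are so short
        record = b''
        for arg in args:
            record += pack_one(arg)
        return record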

It's the need to calculate the format strings separately that is
slowing me down! If struct pack/unpack didn't require a different
format string for strings of varying lengths, I'd have my
optimization - I'd put these function signatures in a hash lookup
table once I'd calculated them, so I'd never have to calculate them
again!
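
Here's the shape of the cache I'm after (a sketch only - pack_args
and the cache are invented names). With ints alone the format string
depends only on the argument types, so it can be computed once and
reused; the moment a str shows up, the 's' code needs the actual
length, and the format has to be rebuilt on every call:

    import struct

    _FORMAT_CACHE = {}  # tuple of argument types -> struct format string

    def pack_args(*args):
        key = tuple(type(a) for a in args)
        fmt = _FORMAT_CACHE.get(key)
        if fmt is None:
            codes = []
            for a in args:
                if isinstance(a, int):
                    codes.append('i')
                elif isinstance(a, str):
                    # 's' needs the exact length, so the format depends
                    # on the value, not just the type
                    codes.append('%ds' % len(a.encode('utf-8')))
                else:
                    raise TypeError('unhandled type: %r' % type(a))
            fmt = '!' + ''.join(codes)
            if str not in key:
                # only signatures without strings can be cached safely
                _FORMAT_CACHE[key] = fmt
        values = [a.encode('utf-8') if isinstance(a, str) else a
                  for a in args]
        return struct.pack(fmt, *values)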

Too bad the struct pack/unpack routines require custom massaging for
strings. :-( I wish it had an option to specify a 'string delimiter'
for those of us who aren't passing binary data in the string, so it
could 'automagically' unpack the record, even with strings.
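
In the meantime, the closest I can get is length-prefixing each
string myself - a delimiter would break as soon as a string contained
it. Something like this (pack_record/unpack_record are made-up names;
nothing in the struct module does this for you):

    import struct

    def pack_record(args):
        parts = []
        for a in args:
            if isinstance(a, int):
                # one-byte type code, then the int itself
                parts.append(struct.pack('!ci', b'i', a))
            elif isinstance(a, str):
                # one-byte type code, then the length, then the data
                data = a.encode('utf-8')
                parts.append(struct.pack('!ci', b's', len(data)) + data)
            else:
                raise TypeError('unhandled type: %r' % type(a))
        return b''.join(parts)

    def unpack_record(buf):
        args, offset = [], 0
        head = struct.calcsize('!ci')
        while offset < len(buf):
            code, value = struct.unpack_from('!ci', buf, offset)
            offset += head
            if code == b'i':
                args.append(value)  # value is the int itself
            else:
                # code b's': value is the string length, data follows
                args.append(buf[offset:offset + value].decode('utf-8'))
                offset += value
        return args

    # round trip: unpack_record(pack_record([5, 'hello', 12])) == [5, 'hello', 12]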


