My script is taking 12 hours+ any suggestions?
Mel Wilson
mwilson at the-wire.com
Sun Aug 31 00:03:29 EDT 2003
In article <3f4ff9f8$0$23588$5a62ac22 at freenews.iinet.net.au>,
Ideasman <cpbarton at pacific.net.au> wrote:
>Hi, I have made a script that processes normals for flat-shaded 3D meshes.
>It compares every vert with every other vert to look for verts that can
>share normals, and it takes ages.
>
>I'm not asking anyone to rewrite the script- just have a look for any
>stupid errors that might be sucking up time.
I don't know for sure, but a few things look suspicious.
- Keeping a list of the line numbers that have already been
looked at. When you do this, an expression like
if newCompareLoopindex not in listOfDoneLines:
means a sequential search of the whole list every time. A set
of processed line numbers, or a dictionary keyed by those
line numbers, would be faster. Another alternative, if
you can spare the memory, would be a list
isProcessed = [0]*fileLen
where you set
isProcessed[lineIndex] = 1
as the line is processed, and change the test mentioned
above to simply
if isProcessed[newCompareLoopindex]:
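As a minimal sketch of the set-based bookkeeping (the names `doneLines`, `fileLen`, and `newCompareLoopindex` are borrowed or guessed from the snippet above; the loop body is hypothetical):

```python
# Sketch: replace the O(n) "not in list" search with O(1) set
# membership. fileLen is a stand-in for the real file length.
fileLen = 5
doneLines = set()                     # constant-time membership tests

for newCompareLoopindex in range(fileLen):
    if newCompareLoopindex in doneLines:
        continue                      # already handled, skip cheaply
    # ... the real per-line processing would happen here ...
    doneLines.add(newCompareLoopindex)
```

With a plain list, each `not in` test walks the list from the front, so the total cost grows quadratically with the number of processed lines; a set (or dict) keeps each test at roughly constant cost.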
- keeping all the input data in string form and splitting
the line each time it's used must be taking time. (The
alternative has dangers too: if unpacking all the lines
exceeds your memory, you could trade 12 hours of
splitting strings for 12 hours on the swap file.)
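A minimal sketch of splitting each line once up front, assuming whitespace-separated float fields (the input format and `rawLines` contents here are made up):

```python
# Sketch: parse every line exactly once, instead of calling
# split() on the same string each time it is compared.
rawLines = ["1.0 0.0 0.0",            # stand-in for file.readlines()
            "0.0 1.0 0.0"]

# One pass: each line becomes a list of floats, ready to compare.
parsed = [[float(field) for field in line.split()] for line in rawLines]

# Later comparisons use parsed[i] directly; no repeated splitting.
```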
- the line
if [comp1[0], comp1[1], comp1[2]] == [comp2[0], comp2[1], comp2[2]]:
results in building two brand new lists, comparing their
contents, then throwing them away. It might be better to
code
if comp1[0] == comp2[0] and comp1[1] == comp2[1] and comp1[2] == comp2[2]:
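To see the difference, here is a sketch with made-up vertex records (`comp1` and `comp2` are assumptions; only the first three fields are compared, as in the snippet above):

```python
# Sketch: both forms give the same answer, but the second avoids
# constructing two throwaway lists on every single comparison.
comp1 = ["0.5", "0.5", "0.0", "extra"]   # hypothetical vertex record
comp2 = ["0.5", "0.5", "0.0", "other"]

slow = [comp1[0], comp1[1], comp1[2]] == [comp2[0], comp2[1], comp2[2]]
fast = comp1[0] == comp2[0] and comp1[1] == comp2[1] and comp1[2] == comp2[2]
# The and-chain also short-circuits: it stops at the first mismatch,
# which helps when most vertex pairs differ in the first field.
```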
- many str calls to convert things that are already strings
(as far as I can tell), although this will probably be a
very small saving.
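A tiny sketch of why those calls are wasted work (the value here is made up):

```python
# Sketch: str() applied to something that is already a string just
# hands back an equal string, so inside a tight loop the call is
# pure overhead.
s = "0.5"                  # hypothetical field, already a string
same = str(s)              # no conversion actually happens
```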
- as somebody else said, we don't know what kind of code is
being `eval`ed, maybe your problem just takes 12 hours to
solve.
Good Luck. Mel.