My script is taking 12 hours+ any suggestions?

Jeff Epler jepler at unpythonic.net
Sat Aug 30 10:02:26 EDT 2003


As one poster mentioned, you should convert to Python floats once, not
many times.
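
For instance, if the coordinates come in as strings, something like this
(the names and input layout here are made up purely for illustration):

	# Hypothetical sketch: do the float() conversions once, up front,
	# instead of calling float() on the same values inside the hot loops.
	raw_rows = [("1.5", "2.25", "0.0"), ("3.0", "4.5", "1.0")]
	points = [tuple(float(v) for v in row) for row in raw_rows]
	for x, y, z in points:
		pass    # the smoothing work sees plain Python floats here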

Second, you can achieve some speed increase by putting everything inside a
function.  For instance, this program:
	for i in range(1000): j = math.sin(i)
will run just a bit faster if you write it as
	def f():
		for i in range(1000): j = math.sin(i)
	f()
because access to "i" and "j" will be a C array indexing operation
instead of a Python hash table lookup.  You can get a little bit more
speed if you make 'math.sin' into a local variable, for instance with
	def f():
		sin = math.sin
		for i in range(1000): j = sin(i)
	f()
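
If you want to see what this buys you on your machine, a quick throwaway
benchmark along these lines works (the repeat counts are arbitrary):

	import math
	import time

	def f_global():
		# 'math' is looked up in globals, and 'sin' via an attribute
		# lookup, on every single iteration.
		for i in range(1000):
			j = math.sin(i)

	def f_local():
		sin = math.sin          # bind once; 'sin' is now a fast local
		for i in range(1000):
			j = sin(i)

	def bench(fn, repeats=1000):
		start = time.time()
		for _ in range(repeats):
			fn()
		return time.time() - start

	print("math.sin each time: %.3fs" % bench(f_global))
	print("local sin:          %.3fs" % bench(f_local))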

As to the smoothing algorithm itself: I haven't decoded exactly what
you're doing, but I once wrote a polygon smoothing algorithm modeled
after http://www.xmission.com/~nate/smooth.html -- here is his summary
of the method:
	 * First builds a list of all the triangles each vertex is in.
	 * Then loops through each vertex in the list averaging
	 * all the facet normals of the triangles each vertex is in.
	 * Finally, sets the normal index in the triangle for the vertex
	 * to the generated smooth normal.  If the dot product of a facet
	 * normal and the facet normal associated with the first triangle
	 * in the list of triangles the current vertex is in is greater
	 * than the cosine of the angle parameter to the function, that
	 * facet normal is not added into the average normal calculation
	 * and the corresponding vertex is given the facet normal.
	 * This tends to preserve hard edges.  The angle to use depends
	 * on the model, but 90 degrees is usually a good start.
In my application, I had a flag available which said whether a particular
face should participate in smooth normal generation or not, so I ignored
that part of the algorithm.
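
For what it's worth, here is a rough Python sketch of that averaging step.
The data layout (triangles as vertex-index triples plus one unit facet
normal per triangle) is my own assumption, it produces a single normal per
vertex instead of per-corner normal indices, and I've read the hard-edge
test as "include a facet normal only when its dot product with the
reference normal is at least cos(angle)":

	import math

	def smooth_normals(triangles, facet_normals, angle_deg=90.0):
		# triangles:     list of (i, j, k) vertex-index triples (assumed)
		# facet_normals: one unit normal (x, y, z) per triangle (assumed)
		cos_angle = math.cos(math.radians(angle_deg))

		# 1. Build a list of the triangles each vertex is in.
		tris_of_vertex = {}
		for t, tri in enumerate(triangles):
			for v in tri:
				tris_of_vertex.setdefault(v, []).append(t)

		# 2. Average the facet normals, skipping facets whose normal
		#    differs from the first facet's normal by more than the
		#    angle threshold (this is what preserves hard edges).
		smoothed = {}
		for v, tris in tris_of_vertex.items():
			ref = facet_normals[tris[0]]
			ax = ay = az = 0.0
			for t in tris:
				nx, ny, nz = facet_normals[t]
				if nx * ref[0] + ny * ref[1] + nz * ref[2] >= cos_angle:
					ax += nx; ay += ny; az += nz
			length = math.sqrt(ax * ax + ay * ay + az * az) or 1.0
			smoothed[v] = (ax / length, ay / length, az / length)
		return smoothed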

The step that takes some time is grouping the vertices.  My models were
small, so an O(N^2) approach in vertex count was not a problem even
though I had to do the smoothing "online".  The naive way, comparing
each vertex to every other vertex, is O(N^2).
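
In case it helps, the naive grouping looks roughly like this (the epsilon
is a placeholder; pick it to match your model's precision):

	def group_vertices_naive(vertices, eps=1e-6):
		# O(N^2): compare each vertex against the representative of
		# every group found so far.
		groups = []    # list of (representative, [vertex indices])
		for i, (x, y, z) in enumerate(vertices):
			for rep, members in groups:
				if (abs(x - rep[0]) <= eps and
						abs(y - rep[1]) <= eps and
						abs(z - rep[2]) <= eps):
					members.append(i)
					break
			else:
				groups.append(((x, y, z), [i]))
		return groups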

You can do better than O(N^2) in a couple of ways.  For instance, you
could use a tree structure to partition space, just as octrees are used
for color reduction; this probably gives O(N lg N) behavior.  Or you
could use a Python dict to place points into bins of a carefully chosen
size, so that points the other algorithms would consider identical
almost always end up in the same bin, and points they would consider
different almost always end up in different bins.  I think this would
give nearly O(N) behavior.
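
A minimal sketch of the dict-binning idea, with a placeholder bin size
you would tune to your model:

	def group_vertices_binned(vertices, bin_size=1e-4):
		# Nearly O(N): hash each vertex into a coarse grid cell keyed
		# by a tuple in a dict.  Nearly-identical points usually land
		# in the same cell; distinct points usually land in different
		# cells.  (Points straddling a cell boundary can still be
		# split -- that's the "carefully chosen size" caveat above.)
		bins = {}
		for i, (x, y, z) in enumerate(vertices):
			key = (int(round(x / bin_size)),
					int(round(y / bin_size)),
					int(round(z / bin_size)))
			bins.setdefault(key, []).append(i)
		return bins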

Jeff




