3D and floating point optimization
Mike C. Fletcher
mcfletch at rogers.com
Mon Jan 20 23:10:03 EST 2003
Using fixed-point to represent floating point wouldn't be the first
place I'd look for optimisation. Extensive use of Numeric Python
wherever possible, binding core code with Psyco, liberal use of
display lists (preferably with dynamically updated compilation) and/or
array-based geometry, and traditional OpenGL optimisation strategies
(aggressive scene culling, state-change minimisation) will probably get
you farther faster.
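To illustrate the array-based approach: the sketch below uses NumPy (the modern successor to the Numeric module mentioned above) to transform a whole vertex array in single C-level operations rather than a per-vertex Python loop. The vertex data and the particular transform are hypothetical, just for illustration.

```python
import numpy  # modern successor to Numeric Python

# Hypothetical geometry: 10,000 vertices as one (N, 3) array.
vertices = numpy.random.random((10000, 3))

# A uniform scale applied to every vertex in one C-level operation,
# instead of a per-vertex Python loop.
scaled = vertices * 2.0

# A full 4x4 transform works the same way: pad to homogeneous
# coordinates, then one matrix multiply covers all vertices at once.
transform = numpy.eye(4)
transform[:3, 3] = (1.0, 2.0, 3.0)  # translation component
homogeneous = numpy.hstack([vertices, numpy.ones((len(vertices), 1))])
transformed = homogeneous @ transform.T  # (N, 4) result, no Python loop
```

The same pattern (one array operation over the whole data set) is what makes Numeric/NumPy fast: the loop runs in compiled code, not the interpreter.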
AFAIK, Numeric Python doesn't do any parallelisation tricks for e.g.
3DNow or SSE, so adding extension support for using those for
array-processing (normal calculation or the like) would probably give a
significant speedup for some applications (though at least in my apps, I
only do those tasks during startup).
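As a concrete example of the kind of array-processing task meant here, per-face normal calculation can be done for all triangles at once with a vectorised cross product; this is a sketch with made-up triangle data, not code from any particular app.

```python
import numpy

# Hypothetical geometry: an (N, 3, 3) array of N triangles, each with
# three 3D vertices.
triangles = numpy.array([
    [[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]],
    [[0.0, 0.0, 0.0], [0.0, 1.0, 0.0], [1.0, 0.0, 0.0]],
])

# Normals for every face at once: two edge arrays, one vectorised
# cross product, then normalisation -- no per-face Python loop.
edge1 = triangles[:, 1] - triangles[:, 0]
edge2 = triangles[:, 2] - triangles[:, 0]
normals = numpy.cross(edge1, edge2)
lengths = numpy.sqrt((normals ** 2).sum(axis=1, keepdims=True))
unit_normals = normals / lengths
```

An SSE/3DNow-aware extension would speed up exactly this sort of bulk arithmetic, though as noted, it often only matters at startup.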
Re-coding a few core loops in C or Pyrex would also likely be a decent
optimisation approach. After you've tried that and found the code
still too slow, I'd maybe consider fixed-point optimisations (well,
okay, no, _I_ wouldn't ;) ). Giving some idea of your application type
(visualisation, VR, gaming), your general approach, your
scale-of-operation, and the profiling results for your apps might yield
more directed suggestions.
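For the record, here is what 16.16 fixed point looks like in Python; the point of the sketch is that every step is still a boxed-object operation in the interpreter, so the classic demo-scene win over the FPU doesn't materialise.

```python
# 16.16 fixed point: a value is stored as int(value * 65536).
SHIFT = 16
ONE = 1 << SHIFT

def to_fixed(x):
    """Convert a float to 16.16 fixed point."""
    return int(round(x * ONE))

def from_fixed(f):
    """Convert 16.16 fixed point back to a float."""
    return f / ONE

def fixed_mul(a, b):
    """Multiply two 16.16 fixed-point values, rescaling the result."""
    return (a * b) >> SHIFT

product = fixed_mul(to_fixed(1.5), to_fixed(2.5))  # represents 3.75
# Correct, but each call and shift above goes through the interpreter,
# so there is no speedup over a plain float multiply the way there was
# in hand-tuned assembly on FPU-less hardware.
```
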
Good luck,
Mike
Simon Wittber (Maptek) wrote:
>Many years ago, using fixed point numbers to speed up calculations (esp.
>in the demo-scene) was a standard optimization, which would speed up
>code by orders of magnitude.
>
>I have been experimenting with Python+OpenGL and have run into a few
>speed bumps.
>
>My question is, considering the speed of today's FPUs, is using fixed
>point math still a valid optimization, in Python?
>
>simon.
_______________________________________
Mike C. Fletcher
Designer, VR Plumber, Coder
http://members.rogers.com/mcfletch/