[Numpy-discussion] Evaluating performance of f2py extensions with gprof: why is time spent in _gfortran_compare_string?
Åsmund Hjulstad
asmund.hjulstad at gmail.com
Wed Aug 18 09:17:51 EDT 2010
I am calling a few functions in a Fortran library. All parameters are short
(the longest array has 20 elements), and I make three calls to the Fortran
library per iteration. According to the Python profiler (running the script
as %run -p in IPython), all the time is spent in the Python extension.
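
For reference, the call pattern is roughly as below (a minimal sketch
only; the module name "mylib" and the wrapper signatures are
placeholders, not the real API):

import cProfile
import numpy as np
import mylib  # hypothetical f2py-generated extension module

def iterate(n):
    x = np.zeros(20)            # the longest array passed is 20 elements
    for _ in range(n):
        mylib.rdxhmx(x)         # three Fortran calls per iteration
        mylib.phimix(x)
        mylib.phifeq(x)

# cProfile only sees the extension calls as opaque built-ins, so it
# cannot show which Fortran routine dominates; hence gprof below.
cProfile.run("iterate(100000)", sort="cumulative")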
I built the extension with the options -pg -O, ran a test script, and
evaluated the result with

gprof <libraryname>.py -b

which produced the following output:
Flat profile:

Each sample counts as 0.01 seconds.
  %   cumulative   self              self     total
 time   seconds   seconds    calls  Ts/call  Ts/call  name
 41.64      5.03     5.03                             _gfortran_compare_string
 27.40      8.34     3.31                             rdxhmx_
 19.21     10.66     2.32                             phimix_
  5.88     11.37     0.71                             phifeq_
  2.32     11.65     0.28                             phihmx_
  0.66     11.73     0.08                             phiderv_
[...]
and this call graph:
Call graph

granularity: each sample hit covers 4 byte(s) for 0.08% of 11.83 seconds

index % time    self  children    called     name
                                                 <spontaneous>
[1]     42.9    5.07    0.00                 _gfortran_compare_string [1]
-----------------------------------------------
What can this mean?
When I run a simple test program that exercises many of the same
routines, _gfortran_compare_string does not show up in the output at all.
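
In case it helps with diagnosis, a quick way to see which call carries
the overhead is to time each wrapped routine on its own (again just a
sketch; "mylib" and the call signatures are placeholders):

import timeit
import numpy as np
import mylib  # hypothetical f2py-generated extension module

x = np.zeros(20)
for name in ("rdxhmx", "phimix", "phifeq"):
    func = getattr(mylib, name)
    # 100000 calls of each routine in isolation; if one routine accounts
    # for most of the wall time, the string comparisons likely live there
    t = timeit.timeit(lambda: func(x), number=100000)
    print("%s: %.3f s" % (name, t))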
Suggestions most welcome.
--
Åsmund Hjulstad, asmund at hjulstad.com