Doubt in line_profiler documentation

Alain Ketterlin alain at universite-de-strasbourg.fr.invalid
Sat Jan 27 03:25:20 EST 2018


Abhiram R <abhi.darkness at gmail.com> writes:

[...]
> https://github.com/rkern/line_profiler
>
> The definition for the time column says -
>
> "Time: The total amount of time spent executing the line in the timer's
> units. In the header information before the tables, you will see a line
> 'Timer unit:' giving the conversion factor to seconds. It may be different
> on different systems."

> For example, if the timer unit is 3.20802e-07 s
> and a particular instruction's time column says its value is 83.0, is the
> time taken 83.0*3.20802e-07 s? Or is there more to it?

That's it.
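
For instance, with the numbers you quoted, the conversion is a single
multiplication (a quick sketch in Python; the figures come straight
from your example):

    timer_unit = 3.20802e-07      # from the "Timer unit:" header, in seconds
    ticks = 83.0                  # value shown in the "Time" column
    print(ticks * timer_unit)     # ~2.66e-05, i.e. roughly 26.6 microseconds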

> If my understanding is correct however, why would there be a need for
> this? What could be the cause of this - " It may be different on
> different systems "?

Time is a complicated thing on a computer, and it can only be measured
with a certain precision (or "resolution"). This precision may vary
from system to system. It is customary to state the resolution when
profiling, because it is usually coarse relative to the processor
frequency (typically 1 microsecond, i.e., around 3000 processor cycles
at 3 GHz). So profiling very short-running pieces of code is highly
inaccurate.
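
If you want to see what your own machine reports, Python's time module
exposes this (a small sketch; the exact figures depend entirely on your
OS and hardware):

    import time

    # Print the claimed resolution (in seconds) and the underlying
    # implementation of the clocks Python exposes.
    for name in ("time", "monotonic", "perf_counter", "process_time"):
        info = time.get_clock_info(name)
        print(name, info.resolution, info.implementation)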

You can look at the timer code rkern's line_profiler uses, at

https://github.com/rkern/line_profiler/blob/master/timers.c

You'll see that on Windows it uses QueryPerformanceCounter() [*] and
QueryPerformanceFrequency(). On Unix it uses gettimeofday(), whose
interface caps the resolution at one microsecond by convention (struct
timeval only carries seconds and microseconds).

By the way, the use of gettimeofday() is odd, since this function is
now deprecated (POSIX.1-2008 marks it obsolescent); clock_gettime()
should be used instead. It has an associated clock_getres() as well,
which reports the clock's actual resolution.
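
For the record, Python exposes that same pair on Unix since 3.3 (a
minimal sketch; POSIX systems only):

    import time

    # clock_getres() reports the clock's claimed resolution,
    # clock_gettime() reads the clock itself; both wrap the C calls.
    res = time.clock_getres(time.CLOCK_MONOTONIC)
    now = time.clock_gettime(time.CLOCK_MONOTONIC)
    print("CLOCK_MONOTONIC: resolution", res, "s; reading", now, "s")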

-- Alain.

[*] WTF is wrong with these Microsoft developers? Clocks and
performance counters are totally different things. What's the need for
confusing terms in the API?
