Why they use this: duration = time.time() - self.start_time + 1

Richard Damon Richard at Damon-Family.org
Sun Aug 4 08:25:57 EDT 2019


On 8/3/19 10:33 PM, Hongyi Zhao wrote:
> Hi, 
>
> I read the code here:
>
> https://github.com/shichao-an/homura/blob/master/homura.py
>
>
> It says at line 244:
>
>   duration = time.time() - self.start_time + 1
>
> I'm very confused about why it is written this way instead of the following:
>
>   duration = time.time() - self.start_time 
>
>
> Any hints?
>
> Regards

Not sure if it is the reason here (since time() returns a float), but
many time-like functions return a count of how many 'tick' intervals
have passed. Depending on where in each interval you take a reading,
the actual duration between the two calls can be off by plus or minus
one tick due to quantization error. Adding 1 tick gives you a maximal
estimate of the duration, and it also avoids calling a time period 0
ticks long, so it has become a common idiom.
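
As a rough illustration (my own sketch, not code from homura), imagine a
clock that only reports whole ticks; the names and the one-second
resolution below are made up for the example:

    import time

    TICK = 1.0  # hypothetical tick size in seconds

    def ticks():
        # coarse clock: callers only ever see whole ticks
        return int(time.time() // TICK)

    start = ticks()
    # ... do some work ...
    elapsed = ticks() - start          # can be off by up to one tick either way
    elapsed_max = ticks() - start + 1  # pessimistic upper bound, and never 0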

This doesn't directly apply to time(), since it doesn't return raw
ticks but scales them into real time in seconds, so +1 isn't quite
right (you would add the quantum of the timer to the value instead).
It is still a quick and dirty way to avoid a 0 duration, though, and
after a human-scale duration it doesn't perturb the value enough to
make much of a difference.
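
To see why avoiding a 0 duration matters in a downloader, here is a
hedged sketch of the kind of progress calculation involved; the
variable names are mine, not necessarily homura's, and I'm only
assuming the duration ends up as a divisor for a rate:

    import time

    start_time = time.time()
    downloaded = 0  # bytes received so far

    # On the very first progress update time.time() - start_time can be 0.0
    # (or vanishingly small) on platforms with a coarse clock, which would
    # make the rate undefined or absurdly large.  The fudge factor sidesteps
    # that:
    duration = time.time() - start_time + 1
    speed = downloaded / duration  # bytes per second, well-defined from the start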

-- 
Richard Damon



