Floating point equality [was Re: What exactly is "exact" (was Clean Singleton Docstrings)]

Marko Rauhamaa marko at pacujo.net
Thu Jul 21 05:09:13 EDT 2016


Chris Angelico <rosuav at gmail.com>:

> On Thu, Jul 21, 2016 at 5:52 PM, Marko Rauhamaa <marko at pacujo.net> wrote:
>> A couple of related anecdotes involving integer errors.
>>
>> 1. I worked on a (video) product that had to execute a piece of code
>>    every 7 µs or so. A key requirement was that the beat must not drift
>>    far apart from the ideal over time. At first I thought the
>>    traditional nanosecond resolution would be sufficient for the purpose
>>    but then made a calculation:
>>
>>         maximum rounding error = 0.5 ns/7 µs
>>                                = 70 µs/s
>>                                = 6 s/day
>>
>>    That's why I decided to calculate the interval down to a femtosecond,
>>    whose error was well within our tolerance.
>
> I'd be curious to know whether, had you used nanosecond resolution,
> you ever would have seen anything like that +/- 6s/day error. One
> convenient attribute of the real world [1] is that, unless there's a
> good reason for it to do otherwise [2], random error will tend to
> cancel out rather than accumulate. With error of +/- 0.5 ns, assume
> (for the sake of argument) that the actual error at each measurement
> is random.choice((-0.4, -0.3, -0.2, -0.1, 0.1, 0.2, 0.3, 0.4)) ns,

No, this is a systematic error due to the rounding of a rational number
to the nearest nanosecond interval. Systematic errors of this kind are a
real thing and not even all that uncommon.
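To see how a same-sign rounding error accumulates, here is a small sketch using exact rational arithmetic. The actual interval from the product isn't given in the post ("7 µs or so"), so the 1/144000 s figure below is purely illustrative:

```python
from fractions import Fraction

# Hypothetical interval: 1/144000 s (~6.94 us), standing in for the
# "7 us or so" beat in the post; the real figure isn't stated.
ideal = Fraction(1, 144_000)                   # seconds per tick, exact

ns = round(ideal * 10**9)                      # nearest whole nanosecond
error_per_tick = ideal - Fraction(ns, 10**9)   # same sign on every tick

ticks_per_day = 86_400 / ideal                 # exact tick count per day
drift = float(error_per_tick * ticks_per_day)  # cumulative drift per day
print(f"{drift:+.2f} s/day")                   # roughly +5.5 s/day here
```

Because the per-tick error never changes sign, it adds up linearly instead of cancelling out, which is exactly the systematic-vs-random distinction being made above.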

> (I don't believe I've ever actually used a computer that's capable of
> nanosecond-accurate time calculations. Generally they return time in
> nanoseconds for consistency, but they won't return successive integer
> values. You must have been on some seriously high-end hardware -
> although that doesn't surprise me much, given that you were working on
> a video product.)

Nanosecond, even femtosecond, calculations are trivially simple integer
arithmetic that can be performed with pencil and paper. The hardware
was nothing fancy, and the timing error due to various hardware,
software and network realities was on the order of ±100 µs. That was
tolerable and could be handled through buffering. However, a cumulative
error of 6 seconds per day was *not* tolerable.
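The femtosecond trick can be sketched as follows: keep the running schedule as an exact integer count of femtoseconds and derive each nanosecond deadline from that total, so per-tick rounding never accumulates. The interval value below is illustrative, not from the post:

```python
# Accumulate the schedule in integer femtoseconds; round each deadline
# down to nanoseconds only when handing it to the clock hardware.
FS_PER_NS = 10**6

interval_fs = 6_944_444_444_444   # ~6.94 us expressed in femtoseconds
next_fs = 0
deadlines_ns = []
for _ in range(5):
    next_fs += interval_fs                     # exact integer arithmetic
    deadlines_ns.append(next_fs // FS_PER_NS)  # truncate to whole ns
print(deadlines_ns)
```

Each deadline is computed from the exact running total, so the error at any tick stays below one nanosecond forever instead of growing without bound.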

The calculations needed to be extremely simple because the interrupt
routine had to execute every 7 microseconds. You had to let the hardware
RTC do most of the work and do just a couple of integer operations in
the interrupt routine. In particular, you didn't have time to
recalculate and reprogram the RTC at every interrupt.
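A scheme that fits that budget is a Bresenham-style error accumulator: the handler does one integer add and one comparison, choosing between a "short" and a "long" RTC period so the long-run average exactly matches the ideal interval. The code below is a hypothetical sketch, not the routine from the product, and the interval (6944 + 4/9 ns) is invented for illustration:

```python
PERIOD_NS = 6944          # floor of the exact interval in whole ns
REM_NUM, REM_DEN = 4, 9   # fractional remainder: ideal = 6944 + 4/9 ns

acc = 0
def next_period_ns():
    """Per-interrupt work: one add, one compare, no division."""
    global acc
    acc += REM_NUM
    if acc >= REM_DEN:
        acc -= REM_DEN
        return PERIOD_NS + 1   # occasionally stretch the period by 1 ns
    return PERIOD_NS

periods = [next_period_ns() for _ in range(9)]
print(sum(periods))  # 9 ticks sum to exactly 9 * (6944 + 4/9) = 62500 ns
```

Over any 9 consecutive ticks the accumulator returns to zero, so the beat tracks the ideal rate exactly with no cumulative drift.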


Marko
