Floating point equality [was Re: What exactly is "exact" (was Clean Singleton Docstrings)]

Chris Angelico rosuav at gmail.com
Thu Jul 21 04:46:44 EDT 2016


On Thu, Jul 21, 2016 at 5:52 PM, Marko Rauhamaa <marko at pacujo.net> wrote:
> A couple of related anecdotes involving integer errors.
>
> 1. I worked on a (video) product that had to execute a piece of code
>    every 7 µs or so. A key requirement was that the beat must not drift
>    far apart from the ideal over time. At first I thought the
>    traditional nanosecond resolution would be sufficient for the purpose
>    but then made a calculation:
>
>         maximum rounding error = 0.5 ns/7 µs
>                                = 70 µs/s
>                                = 6 s/day
>
>    That's why I decided to calculate the interval down to a femtosecond,
>    whose error was well within our tolerance.

I'd be curious to know whether, had you used nanosecond resolution,
you ever would have seen anything like that +/- 6 s/day error. One
convenient attribute of the real world [1] is that, unless there's a
good reason for it to do otherwise [2], random error will tend to
cancel out rather than accumulate. With error of +/- 0.5 ns, assume
(for the sake of argument) that the actual error at each measurement
is random.choice((-0.4, -0.3, -0.2, -0.1, 0.1, 0.2, 0.3, 0.4)) ns,
with zero and the extremes omitted to make the calculations simpler.
In roughly twelve billion randomizations (86400 seconds divided by
7 µs), the chances of having more than one billion more positives
than negatives are astronomically small: the positive/negative
imbalance across N fair coin-flips has a standard deviation of
roughly sqrt(N), which here is only about 110,000, so a billion-step
imbalance sits thousands of standard deviations out. So you're going
to have at least 5.5 billion negatives to offset your positives (or
positives to offset your negatives, same diff); more likely they'll
be far closer. Even with (say) a 5.5-to-6.5 billion split of signs,
what you're actually working with is at most half a second per day of
accumulated error - and the probability of even *that* extreme a
result is effectively nil. If it's more like 5.9 to 6.1, you'd have
0.1 seconds per day of error, at most. Plus, the same probabilistic
argument applies to the days across a month, so even though the
theory would let you drift by three minutes a month, the chances of
shifting by even an entire second over that time are vanishingly slim.
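
(To make that hand-waving concrete: the accumulated drift of N
independent zero-mean errors grows like sqrt(N), not N. Here's a
minimal sketch using the same toy error distribution as above - the
distribution is illustrative, not a model of any real clock:)

    import math
    import random

    CHOICES_NS = (-0.4, -0.3, -0.2, -0.1, 0.1, 0.2, 0.3, 0.4)
    TICKS_PER_DAY = int(86400 / 7e-6)   # ~12.3 billion 7 µs ticks

    # Central limit theorem: the drift after N independent zero-mean
    # steps has standard deviation sigma * sqrt(N), not sigma * N.
    sigma_ns = math.sqrt(sum(c * c for c in CHOICES_NS) / len(CHOICES_NS))
    print("per-tick sigma: %.3f ns" % sigma_ns)
    print("expected daily drift (one sd): %.1f us"
          % (sigma_ns * math.sqrt(TICKS_PER_DAY) / 1000))

    # Sanity-check with a scaled-down simulation; the full twelve
    # billion ticks would take hours in pure Python.
    n = 10 ** 6
    drift_ns = sum(random.choice(CHOICES_NS) for _ in range(n))
    print("simulated drift over %d ticks: %.1f ns (sd predicts ~%.1f ns)"
          % (n, drift_ns, sigma_ns * math.sqrt(n)))

That puts the expected random-walk drift around 30 µs per day - five
orders of magnitude under the 6 s/day worst case.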

This is something where I'd be more worried about systematic bias in
the code than anything from measurement or rounding error.

(I don't believe I've ever actually used a computer that's capable of
nanosecond-accurate time calculations. Generally the clock APIs
return time in nanoseconds for consistency, but successive calls
won't yield successive integer values. You must have been on some
seriously high-end hardware - although that doesn't surprise me much,
given that you were working on a video product.)
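
(For what it's worth, you can probe a clock's effective granularity
with a tight loop - a minimal sketch using time.monotonic(), which
has been available since Python 3.3:)

    import time

    # Record every distinct increment seen across a tight loop of
    # clock reads; the smallest one approximates the real granularity.
    steps = set()
    prev = time.monotonic()
    for _ in range(1000000):
        now = time.monotonic()
        if now != prev:
            steps.add(now - prev)
            prev = now
    print("reported resolution:",
          time.get_clock_info("monotonic").resolution, "s")
    if steps:
        print("smallest observed step: %.0f ns" % (min(steps) * 1e9))

On commodity hardware the smallest step is typically tens to hundreds
of nanoseconds, not 1 - the nanosecond is a unit, not a promise of
accuracy.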

ChrisA

[1] Believe you me, it has no shortage of INconvenient attributes, so
it's nice to have one swing the balance back a bit!
[2] If there's systematic error - if your 7 µs is actually averaging
7.25 µs - you need to deal with that separately.
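    (For scale, using that hypothetical 0.25 µs bias: it accumulates
    linearly at 0.25/7 of elapsed time, i.e. about
    0.25 µs / 7 µs * 86400 s ~= 3100 s, or roughly 51 minutes per
    day - and no amount of random cancellation will hide it.)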


