[SciPy-User] understanding machine precision

Robert Kern robert.kern at gmail.com
Tue Dec 14 14:07:39 EST 2010


On Tue, Dec 14, 2010 at 12:57,  <josef.pktd at gmail.com> wrote:
> On Tue, Dec 14, 2010 at 1:47 PM, Robert Kern <robert.kern at gmail.com> wrote:
>> On Tue, Dec 14, 2010 at 12:42, Keith Goodman <kwgoodman at gmail.com> wrote:
>>> On Tue, Dec 14, 2010 at 9:42 AM,  <josef.pktd at gmail.com> wrote:
>>>> I thought we would get deterministic results, with identical
>>>> machine-precision errors, but instead I get (with some random a0, b0):
>>>>
>>>>>>> for i in range(5):
>>>>        x = scipy.linalg.lstsq(a0,b0)[0]
>>>>        x2 = scipy.linalg.lstsq(a0,b0)[0]
>>>>        print np.max(np.abs(x-x2))
>>>>
>>>>
>>>> 9.99200722163e-016
>>>> 9.99200722163e-016
>>>> 0.0
>>>> 0.0
>>>> 9.99200722163e-016
>>>
>>> I've started a couple of threads in the past on repeatability. Most of
>>> the discussion ends up being about ATLAS. I suggest repeating the test
>>> without ATLAS.
>
> Is there a way to turn ATLAS off without recompiling?

No.
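
(For what it's worth, you can at least check which BLAS/LAPACK your
NumPy and SciPy were built against. The sketch below only uses the
standard show_config() introspection helpers; the section names that
show up in the output, e.g. atlas_info or blas_opt_info, depend on how
your particular packages were built.)

    # Print the BLAS/LAPACK build configuration for NumPy and SciPy.
    # If ATLAS was used, sections such as atlas_info appear in the output.
    import numpy as np
    import scipy

    np.show_config()     # configuration of the NumPy build
    scipy.show_config()  # configuration of the SciPy build (provides linalg.lstsq)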

>> On OS X, with numpy linked against the built-in Accelerate.framework
>> (which is based on ATLAS), I get the same result every time.
>
> When I run the script from the command line (with a new Python each
> time), I get the same results across runs, but within the loop the
> results still differ by up to 1.55431223448e-015. In IDLE, when I stay
> in the same session, the results differ with each run.

I mean that I get "0.0" for each iteration of each loop even if I push
the number of iterations up to 500 or so.
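
For reference, here is a self-contained version of that loop. The
original a0 and b0 aren't given ("some random a0, b0"), so the random
matrices below are just stand-in placeholders; any fixed least-squares
problem will do.

    # Repeatability check for scipy.linalg.lstsq: solve the same problem
    # twice per iteration and print the largest elementwise difference.
    # a0 and b0 are hypothetical stand-ins for the unspecified originals.
    import numpy as np
    import scipy.linalg

    np.random.seed(0)
    a0 = np.random.randn(100, 10)
    b0 = np.random.randn(100)

    for i in range(500):
        x = scipy.linalg.lstsq(a0, b0)[0]
        x2 = scipy.linalg.lstsq(a0, b0)[0]
        print(np.max(np.abs(x - x2)))  # 0.0 when the two solves agree exactly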

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
  -- Umberto Eco


