Maths error

Nick Maclaren nmm1 at cus.cam.ac.uk
Mon Jan 15 04:02:41 EST 2007


In article <f4mlq29t80pvvqbfm2ch2r2uf6tfnhlmnl at 4ax.com>,
Tim Roberts <timr at probo.com> writes:
|> "Hendrik van Rooyen" <mail at microcorp.co.za> wrote:
|> 
|> >> What I don't know is how much precision this approximation loses when
|> >> used in real applications, and I have never found anyone else who has
|> >> much of a clue, either.
|> >> 
|> >I would suspect that this is one of those questions which are simple
|> >to ask, but horribly difficult to answer - I mean - if the hardware has 
|> >thrown it away, how do you study it - you need somehow two
|> >different parallel engines doing the same stuff, and comparing the 
|> >results, or you have to write a big simulation, and then you bring 
|> >your simulation errors into the picture - There be Dragons...
|> 
|> Actually, this is a very well studied part of computer science called
|> "interval arithmetic".  As you say, you do every computation twice, once to
|> compute the minimum, once to compute the maximum.  When you're done, you
|> can be confident that the true answer lies within the interval.
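[The mechanics Tim describes can be sketched in a few lines of Python. This is a toy model only, with invented values, and it omits the directed rounding (lower bound rounded down, upper bound rounded up) that a rigorous implementation needs:]

```python
# Toy interval arithmetic: a value is a (lo, hi) pair, and each operation
# is defined so that the true mathematical result always lies inside.
# (A real implementation also rounds lo down and hi up at every step.)

def iadd(a, b):
    return (a[0] + b[0], a[1] + b[1])

def imul(a, b):
    # The extreme products can come from any endpoint pair, so take all four.
    p = (a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1])
    return (min(p), max(p))

x = (0.99, 1.01)                       # an interval enclosing the true value
print(iadd(x, x))                      # (1.98, 2.02): the true sum is inside
print(imul((-1.0, 2.0), (3.0, 4.0)))   # (-4.0, 8.0)
```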

The problem with it is that it is an unrealistically pessimal model,
and there are huge classes of algorithms that it can't handle at all:
anything involving iterative convergence, for a start.  It has been
around for yonks (I first dabbled with it 30+ years ago), and it has
never reached viability for most real applications.  In 30 years, it
has got almost nowhere.

Don't confuse interval methods with interval arithmetic, because you
don't need the latter for the former, despite the claims that you do.

|> For people just getting into it, it can be shocking to realize just how
|> wide the interval can become after some computations.

Yes.  Even when you can prove (mathematically) that the bounds are
actually quite tight :-)
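[A toy Python illustration of that blow-up, with invented numbers and no directed rounding, so a sketch of the effect rather than a real implementation:]

```python
# Two ways intervals widen far beyond the mathematically tight bounds.

def isub(a, b):
    return (a[0] - b[1], a[1] - b[0])

def imul(a, b):
    p = (a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1])
    return (min(p), max(p))

x = (0.99, 1.01)

# The "dependency problem": mathematically x - x is exactly 0, but the
# arithmetic cannot see that both operands are the same variable, so the
# result is roughly (-0.02, 0.02) instead of (0, 0).
print(isub(x, x))

# Iteration compounds it: five interval squarings of a 2%-wide interval
# around 1 grow the width to roughly 0.65.
for _ in range(5):
    x = imul(x, x)
print(x[1] - x[0])
```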


Regards,
Nick Maclaren.


