[Python-ideas] PEP 485: A Function for testing approximate equality

Chris Barker chris.barker at noaa.gov
Mon Jan 26 18:33:02 CET 2015


On Sun, Jan 25, 2015 at 10:39 PM, Steven D'Aprano <steve at pearwood.info>
wrote:

> On Sun, Jan 25, 2015 at 05:21:53PM -0800, Chris Barker wrote:
> > But adding a relative tolerance to unittest makes a lot of sense -- would
> > "assertCloseTo" sound entirely too much like assertAlmostEqual? I think it
> > may be OK if the docs for each pointed to the other.
>
> CloseTo assumes an asymmetric test, which isn't a given :-)
>
> I prefer ApproxEqual, although given that it is confusingly similar to
> AlmostEqual, IsClose would be my second preference.


indeed -- I did add the "to" to imply the asymmetric test -- so I say
IsClose if we go with the symmetric test, and IsCloseTo if we go with the
asymmetric test.

> > Well it requires the tolerance values to be set on the instance, and they
> > default to zero. So if we were to add this to unittest.TestCase, would
> > you make those instance attributes of TestCase?
>
> No, I would modify it to do something like this:
>
>     if tol is None:
>         tol = getattr(self, "tol", 0.0)  # or some other default
>
> and similar for rel.
>

so the attributes would not be there by default but users could add them if
they want:

    class my_tst(unittest.TestCase):
        tol = 1e-8

That would work, but seems like a pretty unclear API to me -- is there a
precedent in unittest for this already?
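
To spell out the combination, it might look something like this (a rough
sketch only -- the method name "assertIsClose" and the attribute names are
all still up for debate):

    import unittest

    class ToleranceTestCase(unittest.TestCase):

        def assertIsClose(self, actual, expected,
                          rel_tol=None, abs_tol=None):
            # fall back to class/instance attributes when the
            # tolerances aren't passed in explicitly
            if rel_tol is None:
                rel_tol = getattr(self, "rel_tol", 1e-8)
            if abs_tol is None:
                abs_tol = getattr(self, "abs_tol", 0.0)
            diff = abs(expected - actual)
            if not (diff <= rel_tol * abs(expected) or diff <= abs_tol):
                self.fail("%r is not close to %r (rel_tol=%g, abs_tol=%g)"
                          % (actual, expected, rel_tol, abs_tol))

    class MyTest(ToleranceTestCase):
        rel_tol = 1e-12   # override the default for this whole class

        def test_sqrt(self):
            self.assertIsClose(2 ** 0.5, 1.4142135623730951)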

But I'll leave further discussion on that to others -- I don't like the
unittest API anyway ;-)

> I recommend using short names for the two error tolerances, tol and rel,
> because if people are going to be writing a lot of tests, having to
> write:
>
>     self.assertIsClose(x, y, absolute_tolerance=0.001)
>
> will get tiresome.


Isn't that why you set an attribute on your class?

But if short, then at least rel_tol and abs_tol -- a plain "tol" could be too
confusing (even though I did that in my first draft...)

My current draft has rel_tolerance and abs_tolerance -- perhaps a bit too
long to type often, but a few people asked for longer, more descriptive
names.

> > - since there are considerable disagreements about the right way to
> > >   handle a fuzzy comparison when *both* an absolute and relative error
> > >   are given,


Is there? In this discussion, no one had any issue with the proposed
approach:

    result = (difference <= rel_tolerance * scaling_value
              or difference <= abs_tolerance)

The only issue brought up is that we might want to do it the numpy way for
the sake of compatibility with numpy. That's why I didn't add it to my
list of issues to resolve.
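
For reference, the numpy test (numpy.isclose) adds the two tolerances
together rather than or-ing the two checks, and always scales by the second
argument:

    import numpy as np

    # numpy's documented criterion: abs(a - b) <= atol + rtol * abs(b)
    print(np.isclose(1.0 + 1e-9, 1.0))   # True (well within rtol=1e-05)
    print(np.isclose(1e-10, 0.0))        # True, but only because atol
                                         # defaults to 1e-08

So a value can pass numpy's test on the strength of the two tolerances
combined, even when it would fail each check on its own.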


> I was motivated by assertEqual and the various sequence/list methods.


yup -- good to keep that trend going.

> > But you could also add an optional parameter to pass in an alternate
> > comparison function, rather than have it be a method of TestCase. As I
> > said, I think it's better to have it available, and discoverable, for use
> > outside of unittest.
>
> That's an alternative too. I guess it boils down to whether you prefer
> inheritance or the strategy design pattern :-)
>

Now that I think about it -- we could easily do both.

Define a math.is_close_to(), and then in TestCase:

    @staticmethod
    def is_close_to(*args, **kwargs):
        return math.is_close_to(*args, **kwargs)

Best of both worlds: I've got my stand-alone function outside unittest, and
folks can still override TestCase.is_close_to if they want.
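
For instance (hypothetical -- supposing you'd rather have numpy's semantics
in your own tests):

    import unittest
    import numpy as np

    class MyNumpyTests(unittest.TestCase):

        @staticmethod
        def is_close_to(actual, expected, rel_tolerance=1e-8,
                        abs_tolerance=0.0):
            # swap in numpy's comparison (note: it combines the
            # tolerances differently than the proposed test)
            return np.isclose(actual, expected,
                              rtol=rel_tolerance, atol=abs_tolerance)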

I do think there are two distinct use-cases that should be included in
> the PEP:
>
> (1) Unit testing, and a better alternative to assertAlmostEqual.
>
> (2) Approximate equality comparisons, as per Guido's example.
>



> Note that those two are slightly different: in the unit testing case,
> you usually have an known expected value (not necessarily mathematically
> exact, but at least known) while in Guido's example neither value is
> necessarily better than the other, you just want to stop when they are
> close enough.
>

Yes, but in an iterative solution you generally compute a solution, then
use that to compute a new solution, and you want to know if the new one is
significantly different from the previous one -- so an asymmetric test does
make some sense. But again, either would work, and in pretty much the same
way.

Example forthcoming....
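
(In the meantime, a rough sketch of the kind of loop I mean, with the
proposed test written out inline -- the function name and signature are
still provisional:)

    def is_close_to(actual, expected, rel_tolerance=1e-12):
        # the proposed asymmetric test (relative part only),
        # scaled by the expected value
        return abs(expected - actual) <= rel_tolerance * abs(expected)

    def newton_sqrt(x, rel_tolerance=1e-12):
        """sqrt(x) by Newton's method, for x > 0: stop when the new
        estimate is close to the previous one."""
        guess = x
        while True:
            new = (guess + x / guess) / 2.0
            if is_close_to(new, guess, rel_tolerance):
                return new
            guess = new

    print(newton_sqrt(2.0))   # 1.414213562373095...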

> Like Nick, I think the first is the more important one. In the second
> case, anyone writing a numeric algorithm is probably copying an
> algorithm which already incorporates a fuzzy comparison, or they know
> enough to write their own. The benefits of a standard solution are
> convenience and correctness. Assuming unittest provides a well-tested
> is_close/approx_equal function, why not use it?


Exactly.

> I can see we're going to have to argue about the "Close To" versus
> "Close" distinction :-)
>

I think we both understand and agree on the distinction. My take is:

 - Either will work fine in most instances.
 - The asymmetric one is a bit clearer, and maybe better for the testing
use-case.
 - I'd be perfectly happy with either one in the standard library.

Maybe not consensus, but the majority on this thread seem to prefer the
asymmetric test.

We could, of course, add a flag to turn on the symmetric test (probably the
Boost "strong" case), but I'd rather not have more flags, and as you
indicate above, the people for whom it matters will probably write their
own comparison criteria anyway.
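
(For the record, Boost's "strong" requirement is that the difference be
small relative to *both* values -- a minimal sketch, with a provisional
name:)

    def is_close_strong(a, b, rel_tolerance=1e-8):
        # symmetric "strong" test: within tolerance relative to
        # both a and b (equivalent to scaling by min(abs(a), abs(b)))
        diff = abs(a - b)
        return (diff <= rel_tolerance * abs(a) and
                diff <= rel_tolerance * abs(b))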

It looks like we need to add a bunch of text to the PEP about incorporating
this into unittest -- I'd love it if someone else wrote that -- I'm not
much of a unittest user anyway.

Pull requests accepted:

https://github.com/PythonCHB/close_pep


-Chris


-- 

Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R            (206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115       (206) 526-6317   main reception

Chris.Barker at noaa.gov

