math.nroot [was Re: A brief question.]

Tom Anderson twic at urchin.earth.li
Sun Jul 3 10:46:35 EDT 2005


On Mon, 4 Jul 2005, Steven D'Aprano wrote:

> On Sun, 03 Jul 2005 10:56:42 +0100, Tom Anderson wrote:
>
>> On Sun, 3 Jul 2005, Steven D'Aprano wrote:
>>
>>> On Sun, 03 Jul 2005 02:22:23 +0200, Fredrik Johansson wrote:
>>>
>>>> On 7/3/05, Tom Anderson <twic at urchin.earth.li> wrote:
>>>>> That's one way. I'd do:
>>>>>
>>>>> root = value ** 0.5
>>>>>
>>>>> Does that mean we can expect Guido to drop math.sqrt in py3k? :)
>>>>
>>>> I'd rather like to see a well implemented math.nthroot. 64**(1/3.0)
>>>> gives 3.9999999999999996, and this error could be avoided.
>>>
>>> py> math.exp(math.log(64)/3.0)
>>> 4.0
>>>
>>> Success!!!
>>
>> Eeeeeeenteresting. I have no idea why this works. Given that math.log is
>> always going to be approximate for numbers which aren't rational powers of
>> e (which, since e is transcendental, includes all rational numbers, and
>> therefore all python floats, doesn't it?), i'd expect to get the same
>> roundoff errors here as with exponentiation. Is it just that the errors
>> are sufficiently smaller that it looks exact?
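[A sketch of the two routes being compared, for anyone following along; the 3.9999999999999996 figure is the one reported upthread, and the exp/log result depends on the platform's libm, so treat the exact digits as illustrative:]

```python
import math

# Two routes to the cube root of 64. Both accumulate rounding error,
# but the errors need not be the same size.
via_pow = 64 ** (1 / 3.0)                 # reported upthread as 3.9999999999999996
via_log = math.exp(math.log(64) / 3.0)    # reported upthread as exactly 4.0

print(via_pow, via_log)
print(abs(via_pow - 4.0), abs(via_log - 4.0))
```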
>
> I have no idea :-)
>
> Unfortunately, floating point maths is a bit of a black art. Try this
> simple calculation:
>
> py> 4.0/3 - 5.0/6
> 0.49999999999999989
>
> Should be 0.5 exactly.
>
> Many numbers which you can write exactly in decimal cannot be
> stored exactly in floating point. Even something as simple as one tenth
> 0.1 doesn't have an exact representation in binary:
>
> py> 1.0/10
> 0.10000000000000001

I realise that - but all that argues that your example shouldn't work 
either!

I think there would be a lot less confusion over the alleged inaccuracy of 
floating point if everyone wrote in hex - indeed, i believe that C99 has 
hex floating-point literals. C has always been such a forward-thinking 
language!
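[Python did eventually grow an equivalent of C99's hex literals: float.hex and float.fromhex, added in 2.6, show a float's exact binary value. A quick sketch:]

```python
# float.hex exposes the exact bits of a double, so the "inaccuracy"
# stops being mysterious: 0.1 simply isn't representable in binary.
x = 0.1
print(x.hex())                       # '0x1.999999999999ap-4'
assert float.fromhex(x.hex()) == x   # the hex form round-trips exactly

# The 4.0/3 - 5.0/6 example from above is one ulp short of a half:
print((4.0 / 3 - 5.0 / 6).hex(), (0.5).hex())
```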

>>> Note how much simpler this would be if we could guarantee proper
>>> infinities and NaNs in the code. We could turn a 23-line block to a
>>> one-liner.
>>
>> YES! This is something that winds me up no end; as far as i can tell,
>> there is no clean programmatic way to make an inf or a NaN; in code i
>> write which cares about such things, i have to start:
>>
>> inf = 1e300 ** 1e300
>> nan = inf - inf
>>
>> Every bloody time. I'm going to be buggered if python ever rolls out some
>> sort of bigfloat support.
>
> It fails for me:
>
> py> inf = 1e300 ** 1e300
> Traceback (most recent call last):
>  File "<stdin>", line 1, in ?
> OverflowError: (34, 'Numerical result out of range')

Oops. I meant to write "1e300 * 1e300" - multiplication, not 
exponentiation. My bad.
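[With the correction, the recipe works: multiplication overflows quietly to inf instead of raising. A sketch, using math.isinf/math.isnan (which arrived later, in Python 2.6) only to check the results:]

```python
import math

# Building inf and nan from plain arithmetic, as described above:
inf = 1e300 * 1e300   # overflows the double range -> inf (no exception)
nan = inf - inf       # inf - inf is undefined -> nan

print(inf, nan)
assert math.isinf(inf)
assert math.isnan(nan)
```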

> But this works:
>
> py> inf = float("inf")
> py> inf
> inf

True. Still, i'd rather not have to rely on string parsing to generate a 
fairly fundamental arithmetic quantity.

>> And then god forbid i should actually want to test if a number is NaN,
>> since, bizarrely, (x == nan) is true for every x;
>
> Well dip me in chocolate and call me a Tim Tam, you're right. That is
> bizarre. No, that's not the word I want... that behaviour is broken.
>
>> instead, i have to
>> write:
>>
>> def isnan(x):
>>  	return (x == 0.0) and (x == 1.0)
>>
>> The IEEE spec actually says that (x == nan) should be *false* for every x,
>> including nan. I'm not sure if this is more or less stupid than what
>> python does!
>
> Well, no, the IEEE standard is correct. NaNs aren't equal to anything, 
> including themselves, since they aren't numbers.

I don't buy that. Just because something isn't a number doesn't mean it 
can't be equal to something else, does it? I mean, we even say x == None 
if x is indeed None. Moreover, this freaky rule about NaNs means that this 
is an exception to the otherwise absolutely inviolate law that x == x. I'd 
rather have that simple, fundamental logical consistency than some IEEE 
rocket scientist's practical-value-free idea of mathematical consistency.
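[For what it's worth, the IEEE rule does buy one practical thing: a NaN is the only value unequal to itself, which gives a portable NaN test wherever comparisons follow the standard (as later Pythons do). A sketch:]

```python
def isnan(x):
    # Under IEEE 754 semantics, only a NaN compares unequal to itself.
    return x != x

nan = float("inf") - float("inf")
assert isnan(nan)
assert not isnan(1.0)
```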

> Apple Computer had a maths library that implemented a superset of IEEE 
> arithmetic. It allowed for 254 different NaNs (twice that if you looked 
> at the sign bit, which you weren't supposed to do). This wasn't a 
> deliberate feature as such, it merely fell out naturally from the 
> bit-patterns that represented NaNs, but it was useful since Apple took 
> advantage of it by specifying certain particular NaNs for certain 
> failure modes. Eg NaN(1) might represent the square root of a negative 
> number, but NaN(2) might be INF-INF.
>
> Furthermore, you could easily query the class of a number (zero,
> normalised, denormalised, infinity, NaN), so you would test for a NaN by
> doing something like:
>
> if Kind(x) == NanKind: ...
>
> or
>
> if isNaN(x): ...

We should add that to python - although it would be one small step for 
NaN, one giant leap for NaN-kind.

Sorry.
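[Something like it did get added: math.isnan and math.isinf landed in Python 2.6. A rough analogue of the Kind() query could then be written like this - the kind() helper below is illustrative, not any standard API:]

```python
import math
import sys

def kind(x):
    """Classify a float roughly the way the Apple library's Kind() did.
    (Illustrative helper, not a standard API.)"""
    if math.isnan(x):
        return "nan"
    if math.isinf(x):
        return "infinity"
    if x == 0.0:
        return "zero"
    # sys.float_info.min is the smallest *normal* positive double
    if abs(x) < sys.float_info.min:
        return "denormalised"
    return "normalised"

print(kind(float("nan")), kind(float("inf")), kind(5e-324), kind(1.0))
```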

>> And while i'm ranting, how come these expressions aren't the same:
>>
>> 1e300 * 1e300
>> 1e300 ** 2
>
> Because this is floating point, not real maths :-)
>
> I get inf and Overflow respectively. What do you get?

The same. They really ought to give the same answer.
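[The asymmetry is in the implementation rather than the maths: CPython lets the hardware's float multiply overflow quietly to inf, while the ** operator checks the range error reported by C's pow() and raises. A sketch of the behaviour as observed in this thread (still current CPython behaviour, as far as i can tell):]

```python
import math

product = 1e300 * 1e300    # hardware overflow passes through quietly -> inf
try:
    power = 1e300 ** 2     # C pow() reports a range error -> OverflowError
except OverflowError:
    power = "OverflowError"

print(product, power)
```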

>> And finally, does Guido know something about arithmetic that i don't, or
>> is this expression:
>>
>> -1.0 ** 0.5
>>
>> Evaluated wrongly?
>
> No, it is evaluated according to the rules of precedence. It is 
> equivalent to -(1.0**0.5). You are confusing it with (-1.0)**0.5 which 
> fails as expected.

Ah. My mistake. I submit that this is also a bug in python's grammar. 
There's probably some terribly good reason for it, though.
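[Concretely, ** binds tighter than unary minus, so the literal parses as the negation of a root. Note that what (-1.0) ** 0.5 itself does has varied: Python 2 raised ValueError, while Python 3 returns a complex result.]

```python
# ** binds tighter than unary minus, so this is -(1.0 ** 0.5):
assert -1.0 ** 0.5 == -(1.0 ** 0.5) == -1.0

# Parenthesise the base to actually take the root of a negative number
# (ValueError on Python 2; a complex value on Python 3):
print((-1.0) ** 0.5)
```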

tom

-- 
Links are content.
