math.nroot [was Re: A brief question.]

Tom Anderson twic at urchin.earth.li
Sun Jul 3 15:53:22 EDT 2005


On Mon, 4 Jul 2005, Steven D'Aprano wrote:

> On Sun, 03 Jul 2005 15:46:35 +0100, Tom Anderson wrote:
>
>> I think there would be a lot less confusion over the alleged inaccuracy of
>> floating point if everyone wrote in hex - indeed, i believe that C99 has
>> hex floating-point literals. C has always been such a forward-thinking
>> language!
>
> No, all that would do is shift the complaint from "Python has a bug when
> you divide 10 into 1.0" to "Python has a bug when you convert 0.1 into hex".

Ah, but since the programmer would have to do that conversion themself, 
they wouldn't be able to blame python when they got it wrong!
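
(As an aside, later Pythons make that hex view easy to get at: 
float.hex() arrived in 2.6, so the exact binary value hiding behind 0.1 
can be inspected directly. A quick illustration, not something the 
Pythons of this thread had:)

x = 0.1
print(x.hex())    # '0x1.999999999999ap-4' -- not exactly one tenth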

>>> But this works:
>>>
>>> py> inf = float("inf")
>>> py> inf
>>> inf
>>
>> True. Still, i'd rather not have to rely on string parsing to generate a
>> fairly fundamental arithmetic quantity.
>
> What happens when your Python script is running on a platform that can
> deal with 1e300*1e300 giving 1e600 without overflow?

Then i lose.

> Okay, maybe no such platform exists today (or does it?), but it could 
> exist, and your code will fail on those systems.
>
> I believe that the IEEE standard specifies that float("inf") should give
> an infinity, just as float("0.0") should give a zero.

I think it's been pointed out that this fails on (some versions of?) 
windows.

> For production code, I'd wrap float("inf") in a try...except and only 
> fall back on your method if it raised an exception, and then I'd 
> actually test that your result was a real inf (eg by testing that 
> inf+inf=inf).

Okay, let's try this ...

def mkinf():
    try:
        return float("inf")
    except ValueError:
        # fall back to forcing an overflow: keep squaring until the
        # value stops changing under addition, i.e. until it is inf
        x = 1e300
        while (x + x) != x:
            x = x * x
        return x

inf = mkinf()

Is that a portable solution?
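
As a belt-and-braces check along the lines Steven suggests, something 
like this could confirm that whatever comes back really behaves like an 
infinity (a sketch only; the name checkinf is made up here):

def checkinf(candidate):
    # a genuine infinity absorbs addition and dwarfs the largest
    # finite double (about 1.8e308); a NaN would fail both tests
    assert candidate + candidate == candidate
    assert candidate > 1e308

checkinf(inf)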

>>>> The IEEE spec actually says that (x == nan) should be *false* for 
>>>> every x, including nan. I'm not sure if this is more or less stupid 
>>>> than what python does!
>>>
>>> Well, no, the IEEE standard is correct. NaNs aren't equal to anything, 
>>> including themselves, since they aren't numbers.
>>
>> I don't buy that. Just because something isn't a number doesn't mean it
>> can't be equal to something else, does it? I mean, we even say x == None
>> if x is indeed None.
>
> Yes, but None does equal None, since there is only one None, and by 
> definition, a thing is equal to itself.

Yes.

> But NaNs are _not_ things.

I disagree. A NaN _is_ a thing; it's not a floating-point number, for 
sure, but it is a symbol which means "there is no answer", or "i don't 
know", and as such, it should follow the universal rules which apply to 
all things.

> That is the whole point! Yes, we _represent_ INF-INF as a particular 
> bit-pattern and call it NaN, but mathematically subtracting infinity 
> from infinity is not defined. There is no answer to the question "what 
> is infinity subtracted from infinity?".

There is a value at large in my programs, represented however, meaning 
whatever, and with whatever name, and it should follow the same 
fundamental rules as every single other value in the entire programmatic 
universe. Yes, NaN is a special case, but special cases aren't special 
enough to break the rules.

> We pretend that the answer is NaN, but that isn't right. The NaN is just 
> there as a placeholder for "there is no answer", so that we don't have 
> to sprinkle our code with a thousand and one tests.

In the same way as None is a placeholder for "there is no thing". These 
placeholders are themselves things!

> Since INF-INF doesn't have an answer, we can't do this:
>
> x = inf - inf
> y = inf - inf
>
> and expect that x == y.

I think we can. Both x and y have the same value, a value of 
indeterminacy. NaN is a rigidly defined value of doubt and uncertainty!
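
(For the record, on a platform that does follow the IEEE rules, the 
exchange above plays out like this; a sketch, since behaviour varied 
across platforms and Python versions at the time:)

inf = float("inf")
x = inf - inf       # a NaN, under the IEEE rules
y = inf - inf       # another NaN
print(x == y)       # False: a NaN compares unequal to everything...
print(x == x)       # False: ...including itself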

>> Moreover, this freaky rule about NaNs means that this is an exception 
>> to the otherwise absolutely inviolate law that x == x.
>
> Yes. So what? Remove the NaN shorthand:
>
> "The non-existent answer to this question is the same non-existent answer
> to this other question."

Makes sense to me.

> It doesn't make sense to say that a non-thing is equal to anything -- even
> to itself, since itself doesn't exist.
>
> (Remember, a NaN is just an illusionary placeholder, not a number.)

If you think it's illusionary, i invite you to inspect the contents of my 
variables - i have a real, live NaN trapped in one of them!

>> I'd rather have that simple, fundamental logical consistency than some IEEE
>> rocket scientist's practical-value-free idea of mathematical consistency.
>
> Ah, from a _programmer's_ perspective, there is an argument that the 
> simplicity of just testing NaNs with equality outweighs the logical 
> silliness of doing such a thing.

Yes. It may not be mathematically pure (although i argue that it is, in 
fact, as long as you don't think of floats as being real numbers), but it 
is practical, and practicality beats purity.

> But, apart from testing whether a float is a NaN, why would you ever 
> want to do an equality test?

By definition, never. Isn't that usage reason enough?

> The only usage case I can think of is would be something like this:
>
> def isNaN(x):
>    return x == SOME_KNOWN_NAN
>
> But that won't work, because there are lots of different NaNs. 254 of 
> them, or twice that if you include signed NaNs (which you shouldn't, but 
> you do have to allow for them in equality testing).

Ah, well. There we have the question of whether python should implement 
full-blown IEEE arithmetic. This is somewhat heretical, but i think it 
shouldn't; i think it would be much better to adopt Java's noddy-IEEE 
approach, where there's exactly one NaN (although with well-behaved 
equality comparison). I realise this isn't going to happen, though.

> Any good IEEE compliant system should already have a built-in function
> that tests for NaNs.

Agreed.
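
(Python itself eventually grew one: math.isnan() arrived in 2.6. On 
anything older, the x != x trick is the usual portable stand-in; a 
sketch:)

def isnan(x):
    # a NaN is the only float value that is unequal to itself,
    # so this works even without math.isnan()
    return x != x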

>>>> And while i'm ranting, how come these expressions aren't the same:
>>>>
>>>> 1e300 * 1e300
>>>> 1e300 ** 2
>>>
>>> Because this is floating point, not real maths :-)
>>>
>>> I get inf and Overflow respectively. What do you get?
>>
>> The same. They really ought to give the same answer.
>
> In real numbers, yes they should. In floating point, that is too much to
> expect.

Why on earth is that? All the code handling exponentiation has to do is 
trap the OverflowError and return an inf instead!
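
Something along these lines would do it at the application level, at 
least for positive bases (a sketch only; the name powinf is invented and 
the sign handling for negative bases is waved away):

def powinf(x, y):
    # give ** the same overflow behaviour as repeated *:
    # hand back an infinity instead of raising OverflowError
    # (assumes the true result would have been positive)
    try:
        return x ** y
    except OverflowError:
        return float("inf")    # or mkinf(), where the string form fails

print(powinf(1e300, 2) == 1e300 * 1e300)    # True: both are inf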

> In mathematics, the order you do your calculation shouldn't matter. But 
> in floating point, where there is rounding errors and finite precision 
> issues, it does.

True but, in this case, quite irrelevant.

>>>> And finally, does Guido know something about arithmetic that i don't,
>>>> or is this expression:
>>>>
>>>> -1.0 ** 0.5
>>>>
>>>> Evaluated wrongly?
>>>
>>> No, it is evaluated according to the rules of precedence. It is
>>> equivalent to -(1.0**0.5). You are confusing it with (-1.0)**0.5 which
>>> fails as expected.
>>
>> Ah. My mistake. I submit that this is also a bug in python's grammar.
>> There's probably some terribly good reason for it, though.
>
> Yes. You generally want exponentiation to have the highest precedence.
>
> 2*3**4 should give 162, not 1296. Think about how you would write
> that mathematically, with pencil and paper: the 4 is written as a
> superscript over the 3, and is applied to that before multiplying by the 2.
>
> Unary minus is equivalent to multiplying by -1, so -3**4 is equivalent to
> -1*3**4.
>
> These are the rules of precedence mathematicians have worked out over 
> many centuries. Yes, some of them are arbitrary choices, but they do 
> work, and changing them for another arbitrary choice doesn't give us any 
> benefit.

I am utterly baffled. Three people so far have told me that exponentiation 
has higher precedence than unary minus *in conventional notation*. Are you 
really telling me that you think this expression:

   -1²

evaluates to -1?
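
(For anyone replaying this at a prompt, the parse Python actually uses 
comes out as follows; purely illustrative:)

print(2 * 3 ** 4)      # 162: ** binds tighter than *
print(-3 ** 4)         # -81: parsed as -(3 ** 4), not (-3) ** 4
print(-1.0 ** 0.5)     # -1.0: likewise -(1.0 ** 0.5)
# (-1.0) ** 0.5 is the spelling i had in mind; on the Pythons of this
# era that one raises ValueError rather than quietly returning -1.0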

tom

-- 
When you mentioned INSERT-MIND-INPUT ... did they look at you like this?


