[Python-ideas] Floating point contexts in Python core

Oscar Benjamin oscar.j.benjamin at gmail.com
Thu Oct 11 19:17:33 CEST 2012


On 11 October 2012 15:54, Guido van Rossum <guido at python.org> wrote:
> I think you're mistaking my suggestion. I meant to recommend that
> there should be a way to control the behavior (e.g. whether to
> silently return Nan/Inf or raise an exception) of floating point
> operations, using the capabilities of the hardware as exposed through
> C, using Python's existing float type. I did not for a second consider
> reimplementing IEEE 754 from scratch. Therein lies insanity.
>
> That's also why I recommended you look at the fpectl module.

I would like to have precisely the functionality you are suggesting,
and I don't want to reimplement anything (I assume this message is
intended for me since it was addressed to me).

I don't know enough about the implementation details to agree on the
hardware capabilities part. From a quick glance at the fpectl module I
see that it has problems with portability:

http://docs.python.org/library/fpectl.html#fpectl-limitations

    Setting up a given processor to trap IEEE-754 floating point errors
    currently requires custom code on a per-architecture basis. You may
    have to modify fpectl to control your particular hardware.

This presumably explains why I don't have the module in my Windows
build or on the Linux machines in the HPC cluster I use. Are these
problems that can be overcome? If it is necessary to have this
hardware-specific support for floating point exceptions, is it
reasonable to expect implementations other than CPython to match the
semantics of floating point contexts without a significant
degradation in performance?

I was expecting the implementation to be some checks for invalid
values in straightforward C code. I would expect this to cause a
small degradation in performance (the kind that you wouldn't notice
unless you went out of your way to measure it). Python already does
this by checking for a zero value on every division. As far as I can
tell from the numpy codebase, this is how it works there.
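
To illustrate with stock CPython (this is just the current behaviour):
the zero check on division is the only one of these, so division
raises, but overflow and invalid operations pass through silently:

>>> 1.0 / 0.0                      # checked: raises
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ZeroDivisionError: float division by zero
>>> 1e308 * 1e308                  # overflow: silently returns inf
inf
>>> float('inf') - float('inf')    # invalid operation: silently gives nan
nan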

This function seems to be responsible for the integer division by zero
result in numpy:
https://github.com/numpy/numpy/blob/master/numpy/core/src/scalarmathmodule.c.src#L271

>>> import numpy as np
>>> np.seterr()
{'over': 'warn', 'divide': 'warn', 'invalid': 'warn', 'under': 'ignore'}
>>> np.int32(1) / np.int32(0)
__main__:1: RuntimeWarning: divide by zero encountered in long_scalars
0
>>> np.seterr(divide='ignore')
{'over': 'warn', 'divide': 'warn', 'invalid': 'warn', 'under': 'ignore'}
>>> np.int32(1) / np.int32(0)
0
>>> np.seterr(divide='raise')
{'over': 'warn', 'divide': 'ignore', 'invalid': 'warn', 'under': 'ignore'}
>>> np.int32(1) / np.int32(0)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
FloatingPointError: divide by zero encountered in long_scalars
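
For completeness, numpy also provides this as a context manager
(np.errstate), so the settings can be scoped to a block instead of set
globally with seterr (from memory, so the exact message may differ):

>>> with np.errstate(divide='raise'):
...     np.float64(1) / np.float64(0)
...
Traceback (most recent call last):
  File "<stdin>", line 2, in <module>
FloatingPointError: divide by zero encountered in double_scalars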

This works perfectly well in numpy, and also in decimal, so I see no
reason why it couldn't work for float/int. But what would be even
better is if you could control all of them with a single context
manager. Typically I don't care whether the error occurred as a
result of operations on ints/floats/ndarrays/decimals; I just know
that I got a NaN from somewhere and I need to debug it.
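
To make that concrete, here is a rough sketch of what I have in mind,
built only out of what already exists today (the name errcontext is
made up, and plain float/int are exactly the part it cannot cover
without core support):

from contextlib import contextmanager
import decimal
import numpy as np

@contextmanager
def errcontext(action='raise'):
    # Hypothetical unified context: make numpy and decimal both raise
    # (action='raise') or silently continue (action='ignore').
    with np.errstate(all=action):
        with decimal.localcontext() as ctx:
            # Turn the corresponding decimal traps on or off.
            for signal in (decimal.DivisionByZero,
                           decimal.InvalidOperation,
                           decimal.Overflow,
                           decimal.Underflow):
                ctx.traps[signal] = (action == 'raise')
            yield

With core support, the same block could also make something like
1e308 * 1e308 raise instead of silently returning inf.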


Oscar
