[Numpy-discussion] sinpi/cospi trigonometric functions

Robert Kern robert.kern at gmail.com
Wed Jul 14 12:15:46 EDT 2021


On Wed, Jul 14, 2021 at 11:39 AM Tom Programming <tom.programs at gmail.com>
wrote:

> Hi all,
>
> (I am very new to this mailing list, so please cut me some slack.)
>
> Trigonometric functions like sin(x) are usually implemented as:
>
> 1. Some very complicated function that does bit twiddling and basically
> computes the remainder of x modulo pi/2. An example is
> http://www.netlib.org/fdlibm/e_rem_pio2.c (which calls
> http://www.netlib.org/fdlibm/k_rem_pio2.c ), i.e. ~500 lines of branching
> C code. The complexity arises in part because for big values of x the
> reduction becomes more and more ill-conditioned: x is represented in
> binary, an irrational number has to be subtracted from it, and
> consecutive floating point values get further and further apart as the
> absolute value grows.
> 2. A Taylor series for small values of x,
> 3. Plus some manipulation to get the correct branch, deal with subnormal
> numbers, deal with -0, etc...
>
> If we used a function like sinpi(x) = sin(pi*x), part (1) could be greatly
> simplified, since the reduction modulo pi/2 becomes a trivial reduction of
> x modulo 1/2. There are gains in both accuracy and performance.
>
> In much code there is a pi inside the argument of sin anyway, since it is
> common to compute something like sin(2*pi*f*t). So I wonder if it is
> feasible to implement those functions in numpy.
>
> To strengthen my argument I'll note that the IEEE standard, too, defines (
> https://irem.univ-reunion.fr/IMG/pdf/ieee-754-2008.pdf ) the functions
> sinPi, cosPi, tanPi, atanPi, atan2Pi. And there are existing
> implementations, for example, in Julia (
> https://github.com/JuliaLang/julia/blob/6aaedecc447e3d8226d5027fb13d0c3cbfbfea2a/base/special/trig.jl#L741-L745
> ) and the Boost C++ Math library (
> https://www.boost.org/doc/libs/1_54_0/boost/math/special_functions/sin_pi.hpp
> )
>
> And issues caused by these apparently inexact calculations have been
> raised in the past in various forums (
> https://stackoverflow.com/questions/20903384/numpy-sinpi-returns-negative-value
> https://stackoverflow.com/questions/51425732/how-to-make-sinpi-and-cospi-2-zero
> https://www.reddit.com/r/Python/comments/2g99wa/why_does_python_not_make_sinpi_0_just_really/
> ... )
>
> PS: to be nitpicky, I see that most implementations compute sinpi as
> sin(pi*x) for small values of x, i.e. they multiply x by pi and then use
> the same Taylor series coefficients as the canonical sin. A multiply
> instruction could be spared, in my opinion, by storing different Taylor
> expansion coefficients tailored to the sinpi function. It is not clear to
> me whether this is not done because the performance gain is small, because
> I am wrong about something, or because those 6 constants of the Taylor
> expansion have a "sacred aura" about them and nobody wants to go deeply
> into this.
>
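
The effect those links describe is easy to reproduce (a minimal check with
numpy; the printed values below are approximate):

    import numpy as np

    # np.pi is only the double closest to pi, so sin() sees a slightly
    # perturbed argument and returns a slightly nonzero result.
    print(np.sin(np.pi))      # ~1.2246e-16, not 0.0
    print(np.cos(np.pi / 2))  # ~6.1232e-17, not 0.0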

The main value of the sinpi(x) formulation is that you can do the reduction
on x more accurately than on pi*x (reduce-then-multiply rather than
multiply-then-reduce) for people who particularly care about the special
locations of half-integer x. sin() and cos() are often not implemented in
software, but by CPU instructions, so you don't want to reimplement them.
There is likely not a large accuracy win by removing the final
multiplication.
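
To make the reduce-then-multiply idea concrete, here is a minimal sketch
(the name sinpi_sketch is mine; a real implementation would also pin down
signed zeros, infinities, NaN, and so on):

    import numpy as np

    def sinpi_sketch(x):
        # sin(pi*x) via reduce-then-multiply (illustration only).
        x = np.asarray(x, dtype=float)
        n = np.round(x)        # nearest integer
        r = x - n              # remainder in [-0.5, 0.5]; this subtraction is exact
        s = np.sin(np.pi * r)  # pi*r stays small, so its rounding error stays small
        # sin(pi*(n + r)) = (-1)**n * sin(pi*r)
        return np.where(n % 2 == 0, s, -s)

    print(sinpi_sketch(2.0))    # 0.0 exactly
    print(np.sin(np.pi * 2.0))  # ~ -2.4493e-16 with multiply-then-reduce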

We do have sindg(), cosdg(), and tandg() in scipy.special that work
similarly for inputs in degrees rather than radians. They also follow the
reduce-then-multiply strategy. scipy.special would be a good place for
sinpi() and friends.
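
For example (the degree-based reduction should give exact zeros at the
special angles; printed values are approximate):

    import numpy as np
    from scipy.special import sindg, cosdg

    print(sindg(180.0))               # 0.0, reduced in degrees
    print(np.sin(np.radians(180.0)))  # ~1.2246e-16, reduced after multiplying by pi/180
    print(cosdg(90.0))                # 0.0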

-- 
Robert Kern

