[Numpy-discussion] Log Arrays

Anne Archibald peridot.faceted at gmail.com
Thu May 8 13:46:43 EDT 2008


2008/5/8 Charles R Harris <charlesr.harris at gmail.com>:
>
> On Thu, May 8, 2008 at 10:56 AM, Robert Kern <robert.kern at gmail.com> wrote:
> >
> > When you're running an optimizer over a PDF, you will be stuck in the
> > region of exp(-1000) for a substantial amount of time before you get
> > to the peak. If you don't use the log representation, you will never
> > get to the peak because all of the gradient information is lost to
> > floating point error. You can consult any book on computational
> > statistics for many more examples. This is a long-established best
> > practice in statistics.
>
> But IEEE is already a log representation. You aren't gaining precision, you
> are gaining more bits in the exponent at the expense of fewer bits in the
> mantissa, i.e., less precision. As I say, it may be convenient, but if cpu
> cycles matter, it isn't efficient.

Efficiency is not the point here. IEEE doubles simply cannot represent
the difference between exp(-1000) and exp(-1001): both underflow to
exactly zero, so their ratio of e is lost entirely. This difference can
matter in many contexts.
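To illustrate (a minimal NumPy sketch; the printed values assume
standard IEEE double precision):

    import numpy as np

    # Both quantities underflow to zero; the ratio information is gone.
    print(np.exp(-1000.0))                      # 0.0
    print(np.exp(-1000.0) == np.exp(-1001.0))   # True -- both are 0.0

    # Working with the logs directly keeps the difference trivially:
    log_p, log_q = -1000.0, -1001.0
    print(log_p - log_q)                        # 1.0, i.e. p/q = e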

For example, suppose I have observed a source in X-rays and I want to
fit a blackbody spectrum. I have, say, hundreds of spectral bins, with
a number of photons in each. For any temperature T and amplitude A, I
can compute the distribution of photons in each bin (Poisson, with
mean depending on T and A), and hence the probability of the observed
number of photons in each bin. Even if a blackbody with temperature T
and amplitude A is a perfect fit, I should expect this number to be
very small, since the chance of obtaining *exactly* this sequence of
integers is quite small. But when I start the process I need to guess
a T and an A and evaluate the probability. If my values are far off,
the probability will almost certainly be lower than exp(-1000). But I
need to know whether, if I increase T, this probability will increase
or decrease, so I cannot afford to treat it as zero, or throw up my
hands and say "This is smaller than one over the number of baryons in
the universe! My optimization problem doesn't make any sense!".
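Concretely, here is a sketch of the log-space version of such a fit
statistic (the function name and arguments are illustrative, not an
existing scipy API): one sums the per-bin Poisson log-probabilities
instead of multiplying the probabilities, so a joint likelihood of
exp(-1000) poses no problem at all.

    import numpy as np
    from scipy.special import gammaln

    def poisson_loglike(observed, expected):
        # log P(k | mu) = k*log(mu) - mu - log(k!), summed over bins.
        # Summing logs replaces multiplying hundreds of tiny
        # probabilities, so the result stays finite even when the
        # product itself would underflow to zero.
        return np.sum(observed * np.log(expected)
                      - expected - gammaln(observed + 1))

    # An optimizer can compare two guesses of (T, A) via the
    # *difference* of their log-likelihoods, which is well-defined
    # even when both likelihoods are far below the smallest
    # representable double.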

I could also point out that frequently when one obtains these crazy
numbers, one is not working with probabilities at all, but probability
densities. A probability density of exp(-1000) means nothing special:
unlike a probability, a density is not bounded by one, and its scale
depends on the units of the variable.
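Since densities are often exactly what gets evaluated, here is a
sketch of computing a Gaussian log-density directly in log space
(norm_logpdf is just an illustrative helper, not an existing
function):

    import numpy as np

    def norm_logpdf(x, mu=0.0, sigma=1.0):
        # log of the Gaussian density, computed directly in log space
        return (-0.5 * ((x - mu) / sigma) ** 2
                - np.log(sigma * np.sqrt(2 * np.pi)))

    x = -40.0
    print(np.exp(norm_logpdf(x)))  # 0.0 -- the density underflows
    print(norm_logpdf(x))          # -800.918..., fully informative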

Finally, the fact that *you* don't think this is a useful technique
doesn't affect the fact that there is a substantial community of users
who use it daily and who would like some support for it in scipy.

Anne


