[Numpy-discussion] Random int64 and float64 numbers

josef.pktd at gmail.com
Thu Nov 5 21:04:38 EST 2009


On Thu, Nov 5, 2009 at 6:36 PM, Charles R Harris
<charlesr.harris at gmail.com> wrote:
>
>
> On Thu, Nov 5, 2009 at 4:26 PM, David Warde-Farley <dwf at cs.toronto.edu>
> wrote:
>>
>> On 5-Nov-09, at 4:54 PM, David Goldsmith wrote:
>>
>> > Interesting thread, which leaves me wondering two things: is it
>> > documented
>> > somewhere (e.g., at the IEEE site) precisely how many *decimal*
>> > mantissae
>> > are representable using the 64-bit IEEE standard for float
>> > representation
>> > (if that makes sense);
>>
>> IEEE-754 says nothing about decimal representations aside from how to
>> round when converting to and from strings. You have to provide/accept
>> *at least* 9 decimal digits in the significand for single-precision
>> and 17 for double-precision (section 5.6). AFAIK implementations will
>> vary in how they handle cases where a binary significand would yield
>> more digits than that.
>>
>
> I believe that was the argument for the extended precision formats. The
> given number of decimal digits is sufficient to recover the same float that
> produced them if a slightly higher precision is used in the conversion.
>
> Chuck
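
A quick check illustrates the round-trip claim (just a sketch, assuming
numpy and Python's %g formatting for significant decimal digits):

import numpy as np

x = np.float64(0.1) * np.float64(3.0)   # a double with a "long" decimal expansion

s17 = "%.17g" % x                # 17 significant decimal digits
print(np.float64(s17) == x)      # True: 17 digits recover the same double

s15 = "%.15g" % x                # only 15 significant digits
print(np.float64(s15) == x)      # False here: too few digits to round-trip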

From the discussion of the floating point representation, it seems that
a uniform random number generator would have a very coarse grid over a
range like -1e30 to +1e30, compared to the interval [-0.5, 0.5].

How many points can be represented by a float in [-0.5, 0.5] compared
to [1e30, 1e30 + 1]?
If I interpret this correctly, there are roughly as many floating point
numbers in [0, 1] as in [1, inf), or am I misinterpreting this?
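
For comparison, here is a rough sketch (assuming a numpy new enough to
have np.spacing, which gives the gap from a value to the next
representable double; the numbers in the comments are approximate):

import numpy as np

print(np.spacing(0.5))     # ~1.1e-16: gap between adjacent doubles near 0.5
print(np.spacing(1e30))    # ~1.4e14:  gap between adjacent doubles near 1e30

# So [1e30, 1e30 + 1] contains no double other than 1e30 itself, while
# (0, 0.5] holds on the order of 2**62 distinct doubles.  Counting by
# exponent, [0, 1) and [1, inf) each contain roughly 1000 * 2**52 finite
# doubles, so the two halves are indeed comparable in size.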

So how does a PRNG handle a huge interval of uniform numbers?
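
For concreteness, the usual scaling approach (assuming uniform(low, high)
is computed as low + (high - low) * u, with u drawn from [0, 1) at 53-bit
resolution) would give something like:

import numpy as np

low, high = -1e30, 1e30
u = np.random.random_sample(5)    # u lies on a grid of at most 2**53 points
x = low + (high - low) * u        # scaled to [-1e30, 1e30)

# The step of the underlying grid, magnified by the interval width:
print((high - low) * 2.0**-53)    # ~2.2e14

# i.e. the achievable values are roughly 1e14 apart, which is about the
# same as the double spacing near 1e30 but enormously coarser than the
# spacing near zero.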

Josef
