Working with the set of real numbers

Chris Angelico rosuav at gmail.com
Tue Mar 4 18:01:27 EST 2014


On Wed, Mar 5, 2014 at 9:54 AM, Oscar Benjamin
<oscar.j.benjamin at gmail.com> wrote:
>> Let's compare two
>> versions. In the first, you set the precision (I'm talking in terms of
>> REXX's "NUMERIC DIGITS" statement
>
> I have no idea what that is.
>
>> - anything beyond this many digits
>> will be rounded (and represented exponentially, if necessary); I'm not
>> sure if decimal.Decimal precision works this way) such that you get 10
>> digits.
>
> With the decimal module, if you set the precision to 5 digits then it
> basically represents the number in "standard form" with 5 digits, e.g.
> 1.2345 x 10**21.

That's how NUMERIC DIGITS works, so we're on the same page. I'm not
familiar enough with decimal.Decimal to know exactly how its precision
is configured, but it sounds like it functions the same way.
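
For instance, I'd expect something like this (untested sketch; if I
understand the docs, the unary plus forces a rounding to the current
context's precision):

    from decimal import Decimal, getcontext

    getcontext().prec = 5                    # five significant digits
    x = +Decimal("1234567890123456789012")   # unary plus rounds to context
    print(x)                                 # expect 1.2346E+21,
                                             # i.e. 1.2346 x 10**21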

>> Each iteration requires division by a 10-digit number, which
>> is an operation that takes a certain amount of time; and it's going to
>> take some number of iterations to get to the final answer.
>>
>> Second version, you set the precision so you get 20 digits.
>
> If we're talking about 10-20 digits then the decimal module is
> overkill: just use float. The speed up from hardware arithmetic will
> massively out-weigh any other performance considerations.

Yeah, I'm just digging into the algorithm. The same concept applies
when going from 100 to 200 digits, or 1000 to 2000, and in each case,
the division will get way slower, but the number of iterations won't
go up as fast as I thought it would.
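
For concreteness, here's the kind of loop I'm picturing - Newton's
method for sqrt(2) at a fixed decimal precision. (Just a sketch of the
idea, not necessarily the algorithm I've actually been testing.)

    from decimal import Decimal, getcontext

    def sqrt2(digits):
        getcontext().prec = digits + 5       # a few guard digits
        eps = Decimal(10) ** -digits
        guess, steps = Decimal(1), 0
        while True:
            # One full-precision division per iteration - this is the
            # expensive part as the precision grows.
            new = (guess + Decimal(2) / guess) / 2
            steps += 1
            if abs(new - guess) < eps:
                return new, steps
            guess = new

    for d in (100, 200, 1000, 2000):
        print(d, sqrt2(d)[1])

If the iteration converges quadratically, as Newton's method does, each
step roughly doubles the number of correct digits, so the step count
should grow only logarithmically with the digit count - which would
explain what I'm seeing.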

In theory, it should be possible to do the first few divisions at
lower precision and scale up as needed. In practice, would the churn
of constantly changing the precision eat up all the benefit?
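
Something like this, maybe (a rough sketch, assuming the usual trick of
seeding from the hardware sqrt and doubling the precision each step;
localcontext() is just decimal's way of switching precision temporarily):

    from decimal import Decimal, localcontext
    from math import sqrt

    def sqrt2_escalating(digits):
        guess = Decimal(repr(sqrt(2)))   # hardware seed: ~16 digits correct
        target = digits + 5              # guard digits
        prec = 32
        while prec < target:
            prec = min(prec * 2, target)
            with localcontext() as ctx:
                ctx.prec = prec          # early divisions stay cheap
                guess = (guess + Decimal(2) / guess) / 2
        # One last step at full precision to settle the final digits.
        with localcontext() as ctx:
            ctx.prec = target
            guess = (guess + Decimal(2) / guess) / 2
        return guess

Each division would then run at roughly the precision it can actually
contribute, so the low-precision steps ought to cost next to nothing
compared with the final one or two.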

ChrisA


