What is considered an "advanced" topic in Python?

Ben Bacarisse ben.usenet at bsb.me.uk
Mon Jun 1 09:22:49 EDT 2015


Laura Creighton <lac at openend.se> writes:

> In a message of Mon, 01 Jun 2015 19:45:31 +1000, Chris Angelico writes:
>>On Mon, Jun 1, 2015 at 5:58 PM, Laura Creighton <lac at openend.se> wrote:
>>> If you are giving a talk about Decimal -- and trying to stamp out the
>>> inappropriate use of floats you have to first inform people that
>>> what they learned as 'decimals' as children was not floating point,
>>> despite the fact that we write them the same way.

That may be focusing on the wrong aspect because what you learn in
school (at least in the UK) is not so dissimilar to floating point
arithmetic.  The differences between that and machine floating point are
that (a) machines typically use base 2 (so the set of fractions that can
be represented exactly is a surprise); (b) machines use fixed-size
floating point whereas at school the representation, whilst always
finite, is more elastic; and (c) the base you calculate in is the base
you are given the data in and also the base you have to give the result
in.
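
To make (a) concrete -- this is just my own quick illustration at the
Python prompt, not anything from Laura's talk -- 0.1 has no exact base-2
representation, so what actually gets stored is a nearby binary fraction:

  from decimal import Decimal

  print(Decimal(0.1))      # 0.1000000000000000055511151231257827021...
  print(0.1 + 0.2 == 0.3)  # False -- the two rounding errors don't cancel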

For example, when a pupil is asked to "calculate 1.5 x 10^3 + 1.01 x
10^2 to two decimal places" what they will do is very similar to what a
machine would do if it had decimal floating point.  The surprises with
typical machine floating point come from the two base conversions and
(possibly) from rounding at every stage rather than just at the end.
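
Python's decimal module is decimal floating point, so the pupil's working
can be sketched almost directly (again my own example, rounding only at
the end):

  from decimal import Decimal, ROUND_HALF_UP

  total = Decimal("1.5E3") + Decimal("1.01E2")
  print(total.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP))  # 1601.00

  # Binary floats must convert the inputs to base 2 and the result back
  # to base 10 for display, which is where the oddities creep in:
  print(1.1 * 100)  # 110.00000000000001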

<snip>
> You have missed my point.  What I want is for floats never to be
> represented as '.' or ',' notation.  That way, when each naive
> user writes his or her first program that deals with money, when
> they look at their computer manual they will come to the section on
> floating point numbers and they will all look like something they
> have never seen before.

I agree that the key is to learn enough about what is going on.  Maybe
the solution is to teach people fixed-width binary floating point
arithmetic in schools!

> So they will read the section carefully
> to see if this is what they want or need, and the section can nicely
> say NEVER USE THIS FOR MONEY and they will know they are in the wrong
> place.

I know what you mean by this but it still bothers me a bit!  My first
job was writing programs for an economist.  It was all about money but,
being macro economics, was all in floating-point.  (In those days you
could not solve sets of non-linear differential equations in reasonable
time using anything else, but even if you could there would be no
point.)

Even some kinds of accounting can be done using floating point.  You can
use "scaled integers" held in floating point to get some advantages over
integer arithmetic.  You might get a wider range (~53 bits rather than,
say, 32) and you get very simple checks for many overflow conditions.
You can also arrange a simple check for whether you've failed to round
correctly at some step -- any non-zero fractional part will usually
indicate this.
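
Something along these lines is what I mean -- a minimal sketch, with the
helper name and the 2**53 cut-off purely for illustration:

  def add_pence(a_pence, b_pence):
      # Amounts are whole pence held in floats; doubles represent
      # integers exactly up to 2**53.
      total = a_pence + b_pence
      assert abs(total) <= 2.0**53, "outside the exact integer range"
      assert float(total).is_integer(), "fractional part: rounding slipped"
      return total

  price = float(round(19.99 * 100))  # scale and round explicitly -> 1999.0
  print(add_pence(price, 1.0))       # 2000.0
  # add_pence(19.99 * 100, 1.0) would trip the check, because
  # 19.99 * 100 is 1998.9999999999998 in binary floating point.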

I'm not averse to a blanket warning as the simplest way to get the
message across to beginners, but it's slowly becoming a "fact" that
nothing to do with money can be done correctly using floating point.

<snip>
-- 
Ben.


