strings (dollar.cents) into floats

sturlamolden sturlamolden at yahoo.no
Fri Aug 31 11:44:04 EDT 2007


On 31 Aug, 02:12, Wildemar Wildenburger
<lasses_w... at klapptsowieso.net> wrote:

> I've heard (ok, read) that several times now and I understand the
> argument. But what use is there for floats, then? When is it OK to use them?

There are fractions that can be exactly represented by floats that
cannot be exactly represented by decimals. There are fractions that
can be exactly represented by decimals that cannot be exactly
represented by floats.
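
To make both directions concrete, here is a small sketch using the
decimal module from the standard library (the exact digits printed may
vary slightly between platforms):

from decimal import Decimal, getcontext

# 1/10 has no finite binary expansion, so the float is only an
# approximation; printing it with enough digits exposes the error.
print('%.20f' % 0.1)            # 0.10000000000000000555 or similar

# As a Decimal, 0.1 is stored exactly.
print(Decimal('0.1'))           # 0.1

# The opposite case: 2**-50 is an exact float, but its decimal
# expansion has 35 significant digits, more than the module's
# default precision of 28, so the Decimal division below must round.
print(getcontext().prec)        # 28
print(Decimal(1) / Decimal(2 ** 50))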

Which one is better? Which do we prefer?

What a float cannot do is represent a decimal fractional number
(e.g. 1.1) exactly. If we need that, we cannot use floats. A notable
example is monetary computation, which covers 99% of the use of decimal
numbers in computers. For this reason, we should never use floats to
add 10 cents to a dollar. For monetary calculations, the use of
decimals is mandatory.
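
Here is roughly what goes wrong, and how the decimal module avoids it
(a sketch, not production code):

from decimal import Decimal

# With binary floats, adding 10 cents to a dollar does not give
# exactly 1.20, so a straight comparison fails.
print(1.10 + 0.10 == 1.20)      # False (the sum is 1.2000000000000002)

# With decimals the cents are represented exactly and the sum
# compares equal.
print(Decimal('1.10') + Decimal('0.10') == Decimal('1.20'))   # True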

Floats are written as decimal fractional numbers in program code, and
they are printed as decimal fractional numbers on the screen. A novice
programmer may for this reason easily mistake floats for decimals. But
inside the computer, floats are something very different. Floats do not
represent decimal fractional numbers; they represent binary fractional
numbers. Decimal numbers are base-10, binary numbers are base-2. Floats
and decimals are similar but different.
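
One way to see what is really stored is to ask the float for the binary
fraction it holds (a sketch; float.as_integer_ratio() is available from
Python 2.6 on):

# The float literal 1.1 is actually the nearest binary fraction,
# i.e. an integer divided by a power of two.
num, den = (1.1).as_integer_ratio()
print(num)                      # 2476979795053773
print(den)                      # 2251799813685248, which is 2**51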

Binary numbers are much easier for computers to manipulate than
decimal numbers. Digital electronics by convention has only two
states, the TTL voltage levels 0.0 V and 5.0 V, which are taken to
mean 0 and 1 in binary (or false and true in boolean) respectively.
Computers work internally with binary numbers. Floats are preferred to
decimals in e.g. numerical mathematics and other scientific computing
because of their better speed. Knowledge of the behaviour of floats,
e.g. how they cause rounding and truncation errors, is pivotal in that
field of computer science. For most purposes we could replace floats
with decimals, but then we would suffer a major speed penalty.
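
A rough, unscientific way to see that penalty (the exact figures depend
on the interpreter and the hardware):

from timeit import Timer

n = 100000

# Time the same arithmetic with floats and with Decimals.
float_time = Timer('x * y + x / y', 'x = 1.1; y = 3.3').timeit(n)
decimal_time = Timer('x * y + x / y',
                     'from decimal import Decimal; '
                     'x = Decimal("1.1"); y = Decimal("3.3")').timeit(n)

print('float:   %.4f s' % float_time)
print('Decimal: %.4f s' % decimal_time)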

Decimals are only "precise" if the input values can be represented
precisely by decimal fractions. If we typed 1.1 and took that to mean
"one dollar and 10 cents", then certainly "exactly 1.1" was what we
meant. But if we meant "1.1 degrees centigrade read from a mercury
thermometer", we would actually (by convention) mean "something
between 1.05 and 1.15 degrees centigrade". Decimals also fail to be
exact if we try to compute things like "1.0/3.0": 1/3 does not have an
exact representation in decimal, and we get an approximation from the
computer.
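
For example (a sketch; the default Decimal context rounds to 28
significant digits):

from decimal import Decimal

# 1/3 has no finite decimal expansion, so Decimal has to round it to
# the current precision, just as a float has to round it in binary.
third = Decimal(1) / Decimal(3)
print(third)                    # 0.3333333333333333333333333333
print(third * 3 == Decimal(1))  # False: the rounding error is still there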

For purposes where exact precision is not needed, decimals are neither
better nor worse than floats. This accounts for 99% of the cases where
fractional numbers are used on a computer.

More about floating point numbers here:
http://www.math.byu.edu/~schow/work/IEEEFloatingPoint.htm


Sturla Molden




