float / double support in Python?

Martijn Faassen m.faassen at vet.uu.nl
Wed Feb 12 11:45:52 EST 2003


Brandon Van Every <vanevery at 3dprogrammer.com> wrote:
> Martijn Faassen wrote:
>> Brandon Van Every <vanevery at 3dprogrammer.com> wrote:
>>>
>>> How are you "forced" to care?
>>
>> I'd say in C++ you're forced to care. You get things like "Large-Scale
>> C++ Software Design", a great book, but one dealing with lots of low-level
>> details ("don't use unsigned in the interface",
> 
> Hm, I use unsigned in my interfaces all the time.  But of course, it's low
> level game code.  For some problems, I want all the storage I can get out of
> my bitfields.
> 
>> "don't use short in the interface")
> 
> I started out using some shorts in my game.  Lately I've been thinking
> better of it and deprecating them in favor of ints.  16-bit shorts can make some sense
> on an Intel architecture, but they don't make any sense anywhere else.
> Although, oddly enough, some other poster thought we should all be using
> 2-byte chars, and of course those are 16 bits.  :-)
> 
> I don't see that I've been "forced to care" about this however.  I *chose*
> to care.  Big difference.

You want to care about this because you're programming at a low level. But
even if you weren't programming at a low level, you'd still have to care
about it: the feature is there, so you have to make a decision about it.
The considerations you made above would still have to be made at that higher
level, where they don't make much sense. I mean, if Lakos cares about this
enough to write it up in a book, I'd say C++ programmers are forced to
consider it.

>> A language feature that is there will have to be considered for use.
>> Float versus double will make people wonder about what to use,
> 
> I don't see what the big deal is here.  The issues are:
> - one's bigger than the other
> - one's more accurate than the other
> - one's faster for certain operations than the other, but they're the same
> for many operations
> - conversions between the two can be a performance hit
> 
> These aren't terribly tricky issues.  It's not like they have radically
> different encodings or purposes.

You forget the potential loss of accuracy in conversions. And if you
want to change your APIs from one to the other, you have to go in and change
a whole lot of code, with room for errors. Anyway, that's already six things
you should be thinking about when dealing with floats versus doubles!
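
To make that concrete, here's a quick sketch (just the standard struct
module, nothing fancy) of the kind of accuracy you silently give up when
a value makes a round trip through a 4-byte float:

    import struct

    # A Python float is a C double; round-tripping it through a 4-byte
    # IEEE 754 single shows what the conversion throws away.
    x = 0.1
    as_single = struct.unpack('f', struct.pack('f', x))[0]

    print(repr(x))             # 0.1, stored as a double
    print(repr(as_single))     # roughly 0.10000000149011612
    print(abs(x - as_single))  # conversion error of about 1.5e-09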

Sure, still not a big deal. The point is that if you add up all those 'small
deals' you are suddenly mired in a mass of details that, for a huge amount
of software, don't matter much at all.

>> and
>> what happens on the intersection when converting the one to the other,
> 
> But you're already forcing this issue when you cross from Python to C++
> anyways.  Some 3D APIs like DirectX only take 4-byte floats.  You only get
> to ignore the issue if you code in pure Python.

Most Python programmers are not crossing the Python/C++ boundary that often,
let alone talking to 3D APIs like DirectX. You're complaining because this is
*your* use case. If Python gave in to everybody's particular use case, it
would soon become an awful mess. So things were weighed, and it was decided
that supporting only 'double' in the Python core is enough for most uses of
floating point numbers. I think that was the right decision.

Note that it is still entirely possible to support 'float' or whatever
data type you like in extension modules. You could make a 'float'
type in Python, and there are only two disadvantages compared to having
it built-in: 

  * you have to distribute your extension

  * you don't have direct syntax for constructing one; instead you
    write:

    from float import float

    myfloat = float(1.2)

    or something along those lines (a rough sketch follows below).
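
The 'Float32' name and this pure-Python stand-in are made up for
illustration (a real extension module would do the storage in C), but the
idea would look roughly like:

    import struct

    class Float32(object):
        # Wrap a number, truncating it to IEEE 754 single precision,
        # the way a C-level 'float' extension type would store it.
        def __init__(self, value):
            self.value = struct.unpack('f', struct.pack('f', value))[0]

        def __float__(self):
            return self.value

        def __repr__(self):
            return 'Float32(%r)' % self.value

    myfloat = Float32(1.2)
    print(myfloat)    # Float32(1.2000000476837158), approximately

You lose a bit of syntactic convenience, but nothing more.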

>> and what happens on a library boundary where the library writer used
>> float throughout and I used double, is anything going to go wrong,
>> etc.
> 
> I find it pretty irritating that Python, knowing full well it would be using
> C as a lower level component, chose to call the 8-byte floating point type a
> "float."  In C this is a "double."

Python is not C.

> Single precision vs. double precision
> are well defined IEEE 754 / 854 standard terms, and were so long before
> Python.

So ask Tim Peters about Python's float support then. :) 

> If you're worried about people getting confused, this is the most
> confusing thing the Python designers could have done.

No, it is not. It is only a cause for confusion if you're regularly crossing
between C or C++ and Python, using both float and double, and are new to this
issue. Most people are not doing this. Python here works for the common case,
where you write Python code. Calling a 'floating point number' a 'float'
makes sense; calling it a 'double' only makes sense if you're coming from
C.

>> Obviously Python has mechanisms (like the struct and array modules)
>> to get to basic machine types. They're just not a core part of the
>> language but are in libraries, where they belong. :)
> 
> "Basic machine types belong in libraries" is not real world enough for me.

What is this 'real world' thingy you talk about? Did you forget already
that your "real world" is not my "real world" here? That this is a matter
of perspective? Invoking a magic 'real world' that is exactly what you
want it to be is not very helpful in this discussion.

In Python, print is in the core. Machine types are in the standard library
and in other extension libraries. In C and C++, print is in the library,
and basic machine types are in the core. This demonstrates a different
perspective and a different set of priorities. Python is a high level
programming language, whereas C and C++ are low(er) level programming
languages. C was designed as a portable assembler, after all, and C++ was
designed as an object oriented programming language that doesn't trade
away any efficiency.
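
For instance, the array module, which has been in the standard library for
ages, already gives you storage in basic machine types when you need it.
A small sketch, nothing more:

    import array

    # typecode 'f' stores C floats (4 bytes each), 'd' stores C doubles
    singles = array.array('f', [1.0, 2.5, 3.75])
    doubles = array.array('d', [1.0, 2.5, 3.75])

    print(singles.itemsize)   # 4
    print(doubles.itemsize)   # 8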

>> Remember that Python is often used as a glue language.
> 
> Pretty weak glue if it doesn't understand single-precision floats natively
> and I need to deal with an API that takes only single-precision floats.  I
> thank the makers for the busywork.

We've already had masses of people complaining about this over the years in
this newsgroup, too. Endless amounts of debate! Actually, no: I think you're
one of the few ever to complain about it, as far as I can remember. I don't
think this shows that most of the "real world"s out there urgently need it.
And you're complaining about it without having written *any* Python code yet,
to boot, and you think you can win this argument? :)

I wonder what will happen when you discover that Python uses indentation for
block structure! ;)

Regards,

Martijn
-- 
History of the 20th Century: WW1, WW2, WW3?
No, WWW -- Could we be going in the right direction?



