How about adding rational fraction to Python?

Lie Lie.1296 at gmail.com
Sun Mar 2 01:33:21 EST 2008


On Mar 2, 2:32 am, Paul Rubin <http://phr...@NOSPAM.invalid> wrote:
> Lie <Lie.1... at gmail.com> writes:
> > I see, but the same arguments still hold true: the second line has
> > an implicit side-effect of redefining x's type into a Fractional type.
> > If I were the designer of the language, I'd leave x's type as it is
> > (as Num) and coerce x for current calculation only.
>
> Num in Haskell is not a type, it is a class of types, i.e. if all you
> know about x is that it is a Num, then its actual type is
> indeterminate.  Types get inferred, they do not get coerced or
> "redefined".  The compiler looks at expressions referring to x, to
> deduce what x's type actually is.  If it is not possible to satisfy
> all constraints simultaneously, then the compiler reports an error.

So basically it refuses to satisfy constraints that are each
satisfiable individually but conflict when taken together. (I know
nothing of Haskell beyond its name, so I don't understand the
rationale behind the language design at all.) But I'm interested in
how it handles these cases:

x = 1
a = x + 1    # decides it's an int
b = x + 1.0  # error? or is x redefined to be a float?
c = x + 1    # would this now be an error, though it worked two lines up?
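For contrast, here is what Python itself does with those same lines: x
keeps its type, and each + coerces its operands for that one operation
only (the behaviour I said I'd prefer). A minimal sketch using plain
built-in numbers:

```python
x = 1
a = x + 1    # int + int -> int
b = x + 1.0  # x is coerced to float for this expression only
c = x + 1    # x is still an int, so this is an int again
print(type(a).__name__, type(b).__name__, type(c).__name__)
```

Run it and a and c come out as ints while only b is a float; x itself
is never redefined.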

A slightly obfuscated example:
l = [1, 1.0, 1]
x = 1
for n in l:
  c = x + n
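The same per-operation rule would extend naturally to a rational type.
A sketch assuming the Fraction class from the proposed fractions
module (the PEP 3141 numeric tower): c's type is decided afresh at
each addition, while x stays an int for the whole loop.

```python
from fractions import Fraction

x = 1
for n in [1, 1.0, Fraction(1, 2)]:
    c = x + n
    print(type(c).__name__)  # int, then float, then Fraction
```

No binding gets "redefined" here; the result type simply follows from
the operands of each individual + operation.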
