Python from Wise Guy's Viewpoint

Matthias Blume find at my.address.elsewhere
Wed Oct 22 22:16:08 EDT 2003


Pascal Costanza <costanza at web.de> writes:

> The set of programs that are useful but cannot be checked by a static
> type system is by definition bigger than the set of useful programs
> that can be statically checked.

By whose definition?  What *is* your definition of "useful"?  It is
clear to me that static typing improves maintainability and
scalability, and that it helps with the overall design of software.
(At least that's my personal experience, and as others can attest, I
do have reasonably extensive experience with both approaches.)

A 100,000-line program in an untyped language is useless to me if I
am trying to make modifications -- unless it is written in a highly
stylized way that is extensively documented (and that usually means
you could have captured this style in static types).  So under this
definition of "useful" it may very well be that there are fewer
programs which are useful under dynamic typing than there are under
(modern) static typing.

> So dynamically typed languages allow
> me to express more useful programs than statically typed languages.

There are also programs which I cannot express at all in a purely
dynamically typed language.  (By "program" I mean not only the
executable code itself but also the things that I know about this
code.)  Those are the programs which are protected against certain
bad things happening without having to perform dynamic tests to that
effect themselves.  (Some of these "bad things" are, in fact, not
dynamically testable at all.)
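
To make that concrete, here is a small OCaml sketch (my own
illustration, nothing either of us posted): the signature alone
guarantees that every value of type Nat.t satisfies its invariant, so
clients get that protection statically, with no dynamic re-checks
downstream.

  (* The invariant "n >= 0" is checked once, at the boundary; the
     abstract type prevents any other module from constructing a bad
     value, so no further runtime tests are needed. *)
  module Nat : sig
    type t                        (* representation hidden *)
    val make : int -> t option    (* the only way in *)
    val to_int : t -> int
    val add : t -> t -> t         (* closed under addition *)
  end = struct
    type t = int
    let make n = if n >= 0 then Some n else None
    let to_int n = n
    let add a b = a + b
  end

In a purely dynamically typed setting I can re-check the invariant at
run time wherever it matters, but the knowledge that it *cannot* be
violated is exactly the part of the "program" that gets lost.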

> I don't question that. If this works well for you, keep it up. ;)

Don't fear.  I will.

> > (And where are _your_ empirical studies which show that "working around
> > language restrictions increases the potential for bugs"?)
> 
> I don't need a study for that statement because it's a simple
> argument: if the language doesn't allow me to express something in a
> direct way, but requires me to write considerably more code then I
> have considerably more opportunities for making mistakes.

This assumes that there is a monotone function which maps token count
to error-proneness and that the latter depends on nothing else.  This
is a highly dubious assumption.  In many cases the few extra tokens
you write are exactly the ones that let the compiler verify that your
thinking process was accurate (to the degree that this fact is
captured by types).  If you get them wrong, *or* if you get the
original code wrong, the compiler can tell you.  Without the extra
tokens, the compiler is helpless in this regard.
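
A tiny OCaml sketch of what I mean (again my own illustration): the
extra tokens are the type annotations, and the compiler holds the
body of the function to them.

  (* The annotations are the "extra tokens".  If I accidentally use
     integer (+) instead of floating-point (+.), or return the length
     instead of the mean, the compiler rejects the definition instead
     of leaving the mistake for a test run to discover. *)
  let average (xs : float list) : float =
    List.fold_left (+.) 0.0 xs /. float_of_int (List.length xs)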

To make a (not so far-fetched, btw :) analogy: Consider logical
statements and formal proofs.  Making a logical statement is easy and
can be very short.  It is also easy to make mistakes without
noticing; after all, saying something that is false while still
believing it to be true is extremely easy.  Just by looking at a
statement it is also often hard to tell whether it is right.  In
fact, computers have a hard time with this task, too.
Theorem-proving is hard.

On the other hand, writing down the statement together with a formal
proof is impossible to get wrong without anyone noticing, because
checking the proof for validity is trivial compared to coming up with
it in the first place.  So even though writing the statement with a
proof seems harder, once you have done it and it passes the proof
checker you can rest assured that you got it right.  The longer
"program" will have fewer "bugs" on average.

Matthias



