"Strong typing vs. strong testing"

rustom rustompmody at gmail.com
Fri Oct 1 01:42:45 EDT 2010


Some points that seem to have been missed (or have I missed them?)

1. A dichotomy is being drawn between 'static' languages like C and
'dynamic' languages like Python/Lisp. This dichotomy was valid 30
years ago but not today.  In Haskell, for example:

- static checking is stronger than in C/C++ -- it is very hard, if not
impossible, to make Haskell core-dump except through memory exhaustion

- the feel is almost as dynamic as Python/Lisp -- one can write
significant Haskell programs without type-declaring a single variable
or function (see the sketch below)
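
To make this concrete, a minimal sketch (the names below are just
illustrative): not one type is declared, yet GHC infers a type for
everything and checks it all at compile time -- replace a number with
a string and the program is rejected before it ever runs:

    square x = x * x

    average xs = sum xs / fromIntegral (length xs)

    main = print (average (map square [1, 2, 3, 4]))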

Much more mainstream, C# is almost as 'managed' as the dynamic
languages and has efficiency comparable to C.

2. The dichotomy above misses a more pervasive dichotomy -- hardware
vs software -- as real today as 30 years ago.

To see this, let us lift the discussion from *languages* (C vs
Python/Lisp) to philosophies:
-- C-philosophy: the purpose of type-checking is to maximize (runtime)
efficiency
-- Lisp-philosophy: the purpose of type-checking is zero errors (no
seg-faults) via continuous checks at all levels.

If one is honest (and not polemical :-) ) it would be admitted that
both sides are needed in different contexts.
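
One can even watch both philosophies at work inside one language. A
minimal sketch, using two indexing operations that GHC's standard
array library really provides: (!) checks the index on every access
(the Lisp-philosophy); unsafeAt skips the check for speed (the
C-philosophy):

    import Data.Array (listArray, (!))
    import Data.Array.Base (unsafeAt)

    main :: IO ()
    main = do
      let a = listArray (0, 2) [10, 20, 30 :: Int]
      print (a ! 1)           -- checked: (a ! 5) dies with a clean error
      print (a `unsafeAt` 1)  -- unchecked: offset 5 reads arbitrary memory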

Now Dijkstra pointed out, over 30 years ago in A Discipline of
Programming, that this unfortunate dilemma arises from a lack of
hardware support. I am unable to reproduce the elegance and
succinctness of his prose, but the argument runs as follows:

Let us say that, on a typical execution profile, for every one
instruction of the pathological kind (typified by the maximum function
discussed earlier in this thread) there are a trillion 'normal'
instructions.  This is what he calls a very-skew test -- an
if-then-else guarding against the pathological case would go the
if-way a trillion times for every one time it goes the else-way.  It
is natural for a programmer to feel the pinch of these trillion checks
and be inclined to throw them away.
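
Here is a sketch of what such a skewed check looks like, assuming the
maximum-over-an-array shape discussed earlier in this thread: each (!)
below performs a bounds check that goes the if-way on all n accesses
and the else-way essentially never. Summed over a program's lifetime,
these are the trillion checks in question:

    import Data.Array

    arrayMax :: Array Int Int -> Int
    arrayMax a = foldl1 max [ a ! i | i <- indices a ]  -- every access bounds-checked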

If however the check were put into hardware, there would be no such
dilemma. If every arithmetic operation were always checked for
overflow *by hardware*, even languages committed to efficiency like C
could trap on errors at no extra cost.
Likewise, Lisp/Python-like languages could easily be made more
efficient.
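
To make the cost concrete, here is a minimal sketch of checked
addition done entirely in software: every add pays two extra
compare-and-branch steps. With a hardware trap on overflow, both
guards below would simply disappear:

    -- checked two's-complement addition, in software
    checkedAdd :: Int -> Int -> Int
    checkedAdd x y
      | y > 0 && x > maxBound - y = error "overflow"  -- rare else-way
      | y < 0 && x < minBound - y = error "overflow"  -- rare else-way
      | otherwise                 = x + y             -- common if-way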

The difference arises from the fact that software costs per use
whereas hardware costs per installation -- a transistor, unlike an if,
costs no more whether it is used once or a trillion times.

In short, the problem is not C vs Lisp/Python but architectures like
Intel's, wherein:

1. the overflow flag, harmlessly set by a compare instruction, is
indistinguishable from one set by a signed arithmetic instruction --
almost certainly a problem

2. the INTO instruction (interrupt on overflow) must be explicitly
inserted into the instruction stream rather than the overflow being
raised as a hardware trap.


