What is Expressiveness in a Computer Language

Chris F Clark cfc at shell01.TheWorld.com
Sun Jun 25 12:22:44 EDT 2006


Chris F Clark (I) wrote:

> I'm particularly interested if something unsound (and perhaps
> ambiguous) could be called a type system.  I definitely consider such
> things type systems.

"Marshall" <marshall.spight at gmail.com> wrote:

> I don't understand. You are saying you prefer to investigate the
> unsound over the sound?
...
> Again, I cannot understand this. In a technical realm, vagueness
> is the opposite of understanding.

At the risk of injecting too much irrelevant philosophy into the
discussion, I will with great trepidation reply.

First, in the abstract: No, understanding is approximating.  The world
is inherently vague.  We make false symbolic models of the world which
are consistent, but at some level they do not reflect reality, because
reality isn't consistent.  Only by abstracting away the inherent
infinite amount of subtlety present in the real universe can we come to
comprehensible models.  Those models can be consistent, but they are
not the universe.  The models, in their consistency, prove things which
are not true about the real universe.

Now, in the concrete: In my work, productivity is ultimately important.
Therefore, we live by the 80-20 rule in numerous variations.  One of
the things we do to achieve productivity is simplify things.  In fact,
we are more interested in an unsound simplification that is right 80%
of the time but takes only 20% of the effort to compute than in a
completely sound (100% right) model which takes 100% of the time to
compute (five times as long).  We are playing the probabilities.

It's not that we don't respect the sound underpinning, the model which
is consistent and establishes certain truths.  However, what we want
is the rule of thumb which is far simpler and approximates the sound
model with reasonable accuracy.  In particular, we accept two types of
unsoundness in the model.  One, we accept a model which gives wrong
answers that are detectable.  We code tests to catch those cases, and
use a different model to get the right answer.  Two, we accept a model
which gets the right answer only when over-provisioned.  For example,
if we know a loop will occasionally not converge in the n steps that
it theoretically should, we will run the loop for n+m steps until the
approximation also converges--even if that requires allocating m more
copies of some small element than are theoretically necessary.  A
small waste of a small resource is OK if it saves a waste of a more
critical resource--the most critical resource of all being our project
timeliness.
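
A minimal sketch of that second, over-provisioned pattern (the solver,
tolerance, and margin below are hypothetical stand-ins, not anything
from a real project):

import math

# The loop "theoretically" converges in n steps, but we budget n + m
# iterations so the occasional slow case still converges.
def iterate_until_converged(step, state, n, m=10, tol=1e-9):
    for _ in range(n + m):
        new_state = step(state)
        if abs(new_state - state) < tol:
            return new_state
        state = new_state
    raise RuntimeError("did not converge within the provisioned budget")

# Example: fixed-point iteration for cos(x) = x, which needs roughly
# fifty steps from this starting point; the margin m absorbs slow cases.
print(iterate_until_converged(math.cos, 1.0, n=50, m=10))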

We do the same things in our abstract reasoning (including our type
model).  The problems we are solving are too large for us to
understand.  Therefore, we make simplifications, simplifications we
know are inaccurate and potentially unsound.  Our algorithm then
solves the simplified model, which we think covers the main portion of
the problem.  It also attempts to detect when it has encountered a
problem that is outside of the simplified region, on the edge so to
speak.  Now, in general, we can figure out how to handle certain edge
cases, and we do.  However, sometimes the edge of one part of the
algorithm interacts with an edge of another part of the algorithm and
we get a "corner case".  Because our model is too simple and the
reasoning it allows is thus unsound, the algorithm will occasionally
compute the wrong results for some of those corners.  We use the same
techniques for dealing with them.  Dynamic tagging is one way of
catching when we have gotten the wrong answer in our type
simplification.
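
As a toy illustration of that last point (the records and the tagging
scheme are my own invention, purely for illustration): the static
simplification below assumes every "amount" is a number, and the
dynamic tag on each value is what catches the corner case where one
arrives as a string.

def total(records):
    result = 0
    for r in records:
        amount = r["amount"]
        # Dynamic tag check: the simplified (static) model assumed a
        # number; fall back to a more careful path when it was wrong.
        if not isinstance(amount, (int, float)):
            amount = float(amount)
        result += amount
    return result

print(total([{"amount": 3}, {"amount": "4.5"}, {"amount": 2.5}]))  # 10.0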

Marshall's last point:

> I flipped a coin to see who would win the election; it came
> up "Bush". Therefore I *knew* who was going to win the
> election before it happened. See the problem?

Flipping one coin to determine an election is not playing the
probabilities correctly.  You need a plausible explanation for why the
coin should predict the right answer and a track record of it being
correct.  If you had a coin that had correctly predicted the previous
42 presidencies and you had an explanation for why the coin was right,
then it would be credible, and I would be willing to wager that the
coin could also tell us who the 44th president would be.  One flip and
no explanation is not sufficient.  (And to the abstract point, to me
that is all knowledge is: some convincing amount of examples and a
plausible explanation--anyone who thinks they have more is appealing
to a "knowledge" of the universe that I don't accept.)

I do not want incredibly unsound reasoning, just reasoning that is
marginal, within a margin of safety that I can control.  Everyone I
know of has, as far as I can tell, made simplifying assumptions to
make the world tractable.  Very rarely do we deal with the world
completely formally.  Look at where that got Russell and Whitehead.
I'm just trying to be "honest" about that fact and find ways to
compensate for my own failures.

-Chris
