status of Programming by Contract (PEP 316)?

Alex Martelli aleax at mac.com
Sun Sep 2 00:15:28 EDT 2007


Ricardo Aráoz <ricaraoz at gmail.com> wrote:
   ...
> >> We should remember that the level
> >> of security of a 'System' is the same as the level of security of its
> >> weakest component,
   ...
> You win the argument, and thanks, you prove my point. You typically
> concerned yourself with the technical part of the matter, yet you
> completely ignored the point I was trying to make.

That's because I don't particularly care about "the point you were
trying to make" (either for or against -- as I said, it's a case of ROI
for different investments [in either security, or, more germanely to
this thread, reliability] rather than of useful/useless classification
of the investments), while I care deeply about proper system thinking
(which you keep failing badly on, even in this post).

> In the third part of your post, regarding security, I think you went off
> the road. The weakest component would not be one of the requisites of
> access, the weakest component I was referring to would be an actual
> APPLICATION,

Again, F- at system thinking: a system's components are NOT just
"applications" (what's the alternative to their being "actual", btw?),
nor is it necessarily likely that an application would be the weakest
one of the system's components (these wrong assertions are in addition
to your original error, which you keep repeating afterwards).

For example, in a system where access is gained *just* by knowing a
secret (e.g., a password), the "weakest component" is quite likely to be
that handy but very weak architectural choice -- or, seen from another
viewpoint, the human beings that are supposed to know that password,
remember it, and keep it secret.  If you let them choose their password,
it's too likely to be "fred" or other easily guessable short word; if
you force them to make it at least 8 characters long, it's too likely to
be "fredfred"; if you force them to use length, mixed case and digits,
it's too likely to be "Fred2Fred".  If you therefore decide that
passwords chosen by humans are too weak and generate one for them,
obtaining, say, "FmZACc2eZL", they'll write it down (perhaps on a
post-it attached to their screen...) because they just can't commit to
memory a lot of long really-random strings (and nowadays the poor users
are all too likely to need to memorize far too many passwords).  A
clever attacker has many other ways to try to steal passwords, from
"social engineering" (pose as a repair person and ask the user to reveal
their password as a prerequisite of obtaining service), to keystroke
sniffers of several sorts, fake applications that imitate real ones and
steal the password before delegating to the real apps, etc, etc.
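
To put rough numbers on the guessability argument, here's a
back-of-the-envelope sketch in Python (the naive per-character
arithmetic gives only an UPPER bound; human-chosen passwords fall far
short of it, which is exactly the problem):

    from math import log

    def naive_bits(length, alphabet_size):
        # Bits of entropy IF every character were picked independently
        # and uniformly from the alphabet -- an upper bound, not a
        # measure of real-world strength.
        return length * log(alphabet_size, 2)

    print(naive_bits(4, 26))   # "fred"       -> ~18.8 bits (and a name!)
    print(naive_bits(8, 26))   # "fredfred"   -> ~37.6 bits on paper; the
                               #   repetition adds almost nothing real
    print(naive_bits(10, 62))  # "FmZACc2eZL" -> ~59.5 bits, and here the
                               #   formula is honest, since it IS random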

Similarly, if all that's needed is a physical token (say, some sort of
electronic key), that's relatively easy to purloin by traditional means,
such as pickpocketing and breaking-and-entering; certain kinds of
electronic keys (such as the passive unencrypted RFID chips that are
often used e.g. to control access to buildings) are, in addition,
trivially easy to "steal" by other (technological) means.

Refusing to admit that certain components of a system ARE actually part
of the system is weak, blinkered thinking that just can't allow halfway
decent system design -- be that for purposes of reliability, security,
availability, or whatever else.  Indeed, if certain parts of the system's
architecture are OUTSIDE your control (because you can't redesign the
human mind, for example;-), all the more important then to make them the
focus of the whole design (since you must design AROUND them, and any
amelioration of their weaknesses is likely to have great ROI -- e.g., if
you can make the users take a short 30-minute course in password
security, and accompany that with a password generator that makes
reasonably memorable though random ones, you're going to get substantial
returns on investment in any password-using system's security).
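
As a toy illustration of the kind of generator I mean (the six-word
list below is a placeholder; a real one would hold a few thousand
common words):

    import random

    # Placeholder word list: a real deployment would load a few
    # thousand common words, e.g. from /usr/share/dict/words or a
    # Diceware list.
    WORDS = ["tiger", "maple", "orbit", "cloud", "pearl", "waltz"]

    def memorable_password(nwords=3, rng=random.SystemRandom()):
        # Each word drawn uniformly from a list of N words contributes
        # log2(N) bits: three words from a 6000-word list give ~37.6
        # bits, yet "MapleOrbitPearl" is far easier to memorize than
        # "FmZACc2eZL".
        return "".join(rng.choice(WORDS).capitalize()
                       for _ in range(nwords))

    print(memorable_password())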

> e.g. an ftp server. In that case, if you have several
> applications running your security will be the security of the weakest
> of them.

Again, false as usual, and for the same reason I already explained: if
your system can be broken by breaking any one of several components,
then it's generally WEAKER than the weakest of the components.  Say that
you're running on the system two servers, an FTP one that can be broken
into by 800 hackers in the world, and a SSH one that can only be broken
into by 300 hackers in the world; unless every single one of the hackers
who are able to break into the SSH server is *also* able to break into
the FTP one (a very special case indeed!), there are now *MORE* than 800
hackers in the world that can break into your system as a whole -- in
other words, again and no matter how often you repeat falsities to the
contrary without a shred of supporting argument, your assertion is
*FALSE*, and in this case your security is *WEAKER* than the security of
the weaker of the two components.
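
The set arithmetic is easy to check with a toy model (the attacker
"identities" are made up, only the counts matter):

    # Hypothetical attacker pools matching the counts above: 800 can
    # break the FTP server, 300 can break the SSH one, 100 can do both.
    ftp_attackers = set(range(800))
    ssh_attackers = set(range(700, 1000))

    # The system falls if EITHER server falls, so the pool of attackers
    # able to break the whole system is the UNION of the two sets:
    print(len(ftp_attackers | ssh_attackers))  # 900 -- more than 800

    # Only if ssh_attackers were a subset of ftp_attackers would the
    # union stay at 800; in every other case the whole system is
    # strictly weaker than its weakest component.
    print(ssh_attackers <= ftp_attackers)      # False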

I do not really much care what point(s) you are trying to make through
your glib and false assertions: I *DO* care that these falsities, these
extremely serious errors that stand in the way of proper system
thinking, never be left unchallenged and uncorrected.  Unfortunately a
*LOT* of people (including, shudder, ones who are responsible for
architecting, designing and implementing some systems) are under very
serious misapprehensions that impede "system thinking", some of the same
ilk as your falsities (looking at only PART of the system and never the
whole, using far-too-simplified rules of thumb to estimate system
properties, and so forth), some nearly reversed (missing opportunities
to make systems *simpler*, overly focusing on separate components, &c).

As to your specific point about "program proofs" being likely overkill
(which doesn't mean "useless", but rather means "low ROI" compared to
spending comparable resources in other reliability enhancements), that's
probably true in many cases.  But when a probably-true thesis is being
"defended" by tainted means, such as false assertions and excessive
simplifications that may cause serious damage if generally accepted and
applied to other issues, debunking the falsities in question is and
remains priority number 1 for me.


Alex


