status of Programming by Contract (PEP 316)?

Alex Martelli aleax at mac.com
Sat Sep 1 21:38:59 EDT 2007


Ricardo Aráoz <ricaraoz at gmail.com> wrote:
   ...
> We should remember that the level
> of security of a 'System' is the same as the level of security of its
> weakest component,

Not true (not even for security, much less for reliability, which is
what's being discussed here).

It's easy to see how this assertion of yours is totally wrong in many
ways...

Example 1: a toy system made up of subsystem A (which has a probability
of 90% of working right) whose output feeds into subsystem B (which has
a probability of 80% of working right).  A's failures and B's failures
are statistically independent (no common-mode failures, &c).

The ``level of goodness'' (probability of working right) of the weakest
component, B, is 80%; but the whole system has a ``level of goodness''
(probability of working right) of just 72%, since BOTH subsystems must
work right for the whole system to do so.  72 != 80 and thus your
assertion is false.
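
A quick interpreter check of that arithmetic (rounding away
floating-point representation noise):

>>> round(0.9 * 0.8, 2)
0.72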

More generally: subsystems "in series" with independent failures can
produce a system that's weaker than its weakest component.


Example 2: another toy system made up of subsystems A1, A2 and A3, each
trying to transform the same input supplied to all of them into a 1-bit
result; each of these subsystems works right 80% of the time,
statistically independently (no common-mode failures, &c).  The three subsystems'
results are reconciled by a simple majority-voting component M which
emits as the system's result the bit value that's given by two out of
three of the Ai subsystems (or, of course, the value given unanimously
by all) and has extremely high reliability thanks to its utter
simplicity (say 99.9%, high enough that we can ignore M's contribution
to system failures in a first-order analysis).

The whole system will fail when all three Ai fail together (probability
0.2**3) or when two of them fail while the third one is working
(probability 3*0.8*0.2**2):

>>> 0.2**3+3*0.2**2*0.8
0.10400000000000004

So, the system as a whole has a "level of goodness" (probability of
working right) of 89.6% -- almost 90%, and again different from the
"weakest component" (each of the three Ai's), in this case higher.

More generally: subsystems "in parallel" (arranged so as to be able to
survive the failure of some subset) with independent failures can
produce a system that's stronger than its weakest component.
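
Both generalizations boil down to two little formulas; here's a minimal
sketch of them (hypothetical helper names, and it assumes modern
Python's math.comb, so take it as illustration, not as anything from
PEP 316):

import math

def series_reliability(ps):
    # A series system works only if EVERY subsystem works:
    # multiply the individual probabilities of working right.
    result = 1.0
    for p in ps:
        result *= p
    return result

def k_of_n_reliability(p, n, k):
    # Probability that at least k of n independent subsystems
    # (each working with probability p) work: a binomial tail sum.
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(k, n + 1))

>>> round(series_reliability([0.9, 0.8]), 3)   # Example 1
0.72
>>> round(k_of_n_reliability(0.8, 3, 2), 3)    # Example 2
0.896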


Even in the field of security, which (changing the subject...) you
specifically refer to, similar considerations apply.  If your assertion
were correct, then removing one component would never WEAKEN a system's
security -- it might increase it if the removed component was the
weakest, and otherwise it would leave security intact.  And yet, a
strong and sound tradition in security is to require MULTIPLE components
to all be satisfied for access to secret information: e.g., the one
wanting access must prove their identity (say by retinal scan), possess
a physical token (say a key) AND know a certain secret (say a password).
Do you really think
that, e.g., removing the need for the retinal scan would make the
system's security *STRONGER*...?  It would clearly weaken it, as a
would-be breaker would now need only to purloin the key and trick the
secret password out of the individual knowing it, without the further
problem of falsifying a retinal scan successfully.  Again, such security
systems exist and are traditional exactly because they're STRONGER than
their weakest component!
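
To put some (entirely made-up) numbers on that: say the attacker's
chances of independently defeating each factor were 10% for the key,
20% for the password, and 1% for the retinal scan.  Since ALL factors
must be defeated, the chances multiply -- and removing any factor
raises the product:

>>> p_key, p_password, p_retina = 0.10, 0.20, 0.01  # illustrative only
>>> round(p_key * p_password, 4)             # key + password only
0.02
>>> round(p_key * p_password * p_retina, 6)  # all three factors
0.0002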


So, the implication accompanying your assertion, that strengthening a
component that's not the weakest one is useless, is also false.  It may
indeed have an extremely low return on investment, depending on the
system's structure and exact circumstances, but then again, it may not;
nothing can be inferred about this ROI issue from the consideration in
question.


Alex


