Does Python really follow its philosophy of "Readability counts"?

Luis Zarrabeitia kyrie at uh.cu
Tue Jan 20 01:33:00 EST 2009


Quoting "Russ P." <Russ.Paielli at gmail.com>:

> On Jan 19, 9:21 pm, Paul Rubin <http://phr...@NOSPAM.invalid> wrote:
> > Bruno Desthuilliers <bdesth.quelquech... at free.quelquepart.fr> writes:
> > > The failure was because a module tested, QA'd and certified within a
> > > given context (in which it was ok to drop the builtin error handling)
> > > was reused in a context where it was not ok. And the point is exactly
> > > that : no *technology* can solve this kind of problem, because it is a
> > > *human* problem (in that case, not taking time to repass the whole
> > > specs / tests / QA process given context change).
> >
> 
> He says that "no *technology* can solve this kind of problem."
> 
> First of all, I'm not sure that's true. I think technology *could*
> have solved the problem -- e.g., Spark Ada, had it been properly
> applied. But that's beside the point. The point is that the problem
> had nothing to do with encapsulation. The rocket failed because a
> conversion was attempted to a data type that could not hold the
> required value. Am I missing something? I don't see what that has to
> do with encapsulation.
> 
> The logic seems to be as follows:
> 
> 1. Ada enforces data hiding.
> 2. Ada was used.
> 3. A major failure occurred.
> 
> Therefore:
> 
> Enforced data hiding is useless.


I don't think that was the logic... At least, it wasn't what I understood from
the example. We were talking at the time (or rather, you both were, as I think I
was still away from the thread) about QA versus static checks (as a superset of
'enforcement of data hiding').

It's not that those checks are useless; it's that they make you believe you are
safe. Granted, you may be safer, but it may be better to know beforehand "this
is unsafe, I will check every assumption". The Ariane 5 example cuts both ways:
messing with "internals" (not that internal in this case, but still) without
fully understanding the implications, and overly trusting the checks put in
place by the language. I don't know SPARK Ada, but the only thing I can think of
that would have prevented it is not purely technological: better documentation,
where that particular assumption was written down. A language can help with
that; perhaps SPARK Ada or Eiffel or who-knows-what could've forced the original
developers to write that piece of documentation and made the Ariane 5 engineers
notice it.
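To make the failure mode concrete, here's a rough Python sketch (obviously not
the actual Ada, and the names are mine) of the kind of unguarded
64-bit-to-16-bit conversion that bit Ariane 5 once the value left its original
flight envelope:

```python
import struct

def to_int16(value: float) -> int:
    """Convert a float to a 16-bit signed integer, raising on overflow."""
    n = int(value)
    # struct.pack raises struct.error if n falls outside [-32768, 32767],
    # analogous to the Ada operand error the reused module no longer guarded.
    struct.pack('>h', n)
    return n

# Within the old envelope, the conversion is fine:
print(to_int16(1234.5))   # 1234

# A reading like Ariane 5's horizontal bias blows past 16 bits:
try:
    to_int16(40000.0)
except struct.error as exc:
    print("conversion failed:", exc)
```

The point being: the check fires, but whether anyone handles it (or even knows
the assumption existed) is a documentation and process problem, not a language
one.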

So, Ariane 5's problem had nothing to do with _enforced data hiding_. (Why do
you keep calling it 'encapsulation'?) It was a counterexample to your trust in
compiler checks.

Btw, what do you have against using pylint to detect those 'access violations'?
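For instance (a toy example of my own), Python lets you poke at an underscored
attribute from outside the class, but pylint reports it as W0212
(protected-access), which is exactly the kind of violation we're arguing about:

```python
class Account:
    def __init__(self, balance):
        self._balance = balance  # "protected" by convention only

    def balance(self):
        return self._balance

acct = Account(100)
# Python happily allows this, but pylint flags the line with
# W0212 (protected-access): access to a protected member _balance.
acct._balance = -50
print(acct.balance())  # -50
```

You get the warning at check time without the language forbidding anything at
run time, which is the trade-off I'm pointing at.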

> If that reasoning is sound, think about what else is useless.

I _hope_ that wasn't Bruno's reasoning. I don't want to switch sides on this
discussion :D.

Cheers,

-- 
Luis Zarrabeitia
Facultad de Matemática y Computación, UH
http://profesores.matcom.uh.cu/~kyrie


