Python from Wise Guy's Viewpoint
Andrew Dalke
adalke at mindspring.com
Mon Oct 20 18:07:31 EDT 2003
Pascal Bourguignon:
> We all agree that it would be better to have a perfect world and
> perfect, bug-free, software. But since that's not the case, I'm
> saying that instead of having software that behaves like simple unix C
> tools, where as soon as there is an unexpected situation, it calls
> perror() and exit(), it would be better to have smarter software that
> can try and handle UNEXPECTED error situations, including its own
> bugs. I would feel safer in an AI rocket.
Since it was written in Ada and not C, and since it properly raised
an exception at that point (as originally designed), which wasn't
caught at a recoverable point, ending up in the default "better blow
up than kill people" handler ... what would your AI rocket have
done with that exception? How does it decide that an UNEXPECTED
error situation can be recovered? How would you implement it?
How would you test it? (Note that the above software wasn't
tested under realistic conditions; I assume in part because of cost.)
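To make the question concrete, here is a toy Python sketch (not the actual Ariane code, which was Ada) of the distinction at issue: raising the exception is easy; the hard, unanswered part is on what basis a catch site decides the situation is recoverable. All names and the clamping policy here are hypothetical illustrations.

```python
class GuidanceError(Exception):
    """Raised when a sensor conversion overflows, as on Ariane 5."""

def convert_horizontal_bias(value):
    # Mirrors the Ariane 5 failure mode: a 64-bit float squeezed
    # into a 16-bit signed integer range.
    if not -32768 <= value <= 32767:
        raise GuidanceError("operand overflow: %r" % value)
    return int(value)

def flight_loop(readings):
    results = []
    for r in readings:
        try:
            results.append(convert_horizontal_bias(r))
        except GuidanceError:
            # The open question from the post: on what basis do we
            # decide this is safe to substitute rather than escalate?
            # Here we just clamp to the representable range.
            results.append(max(-32768, min(32767, int(r))))
    return results

print(flight_loop([100.0, 1e6, -5.5]))  # → [100, 32767, -5]
```

The clamp is exactly the kind of "smarter" recovery being proposed, and it shows the difficulty: whether substituting a saturated value is safer than aborting depends on knowledge the handler doesn't have.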
I agree it would be better to have software which can do that.
I have no good idea of how that's done. (And bear in mind that
my XEmacs session dies about once a year, e.g., once when NFS
was acting flaky underneath it and a couple of times because it
couldn't handle something X threw at it. ;)
The best examples of resilient architectures I've seen come from
genetic algorithms and other sorts of feedback training, e.g.,
subsumption architectures for robotics and evolvable hardware.
There was a great article in CACM on programming an FPGA
via GAs, in 1998/'99 (link, anyone?). It worked quite well (as
I recall), but it pointed out the hard parts of this approach:
the evolved circuit is hard to understand, and it exploited various
defects peculiar to that chip (part of the circuit appeared unused,
yet the design wouldn't work without it), which makes the result
harder to mass-produce.
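For readers unfamiliar with the feedback loop behind such experiments, here is a minimal GA sketch in Python: evolve a bit string toward a target behavior by mutation and selection. This illustrates the general technique only, not the FPGA experiment from the CACM article; the target, rates, and population size are all arbitrary toy choices.

```python
import random

random.seed(0)
TARGET = [1, 0, 1, 1, 0, 0, 1, 0]

def fitness(candidate):
    # Score = number of bits matching the desired "behavior".
    return sum(c == t for c, t in zip(candidate, TARGET))

def mutate(candidate, rate=0.1):
    # Flip each bit independently with the given probability.
    return [(1 - b) if random.random() < rate else b for b in candidate]

# Random initial population of 20 candidates.
population = [[random.randint(0, 1) for _ in TARGET] for _ in range(20)]

for generation in range(500):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break
    # Keep the fittest half, refill with mutated copies of survivors.
    survivors = population[:10]
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(10)]

best = max(population, key=fitness)
print(fitness(best), len(TARGET))
```

Nothing in the loop says *how* the solution works, which is exactly the opacity problem the article described: selection only sees the fitness score, so any quirk of the substrate that raises the score gets used.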
Andrew
dalke at dalkescientific.com
More information about the Python-list mailing list