Python from Wise Guy's Viewpoint
Pascal Bourguignon
spam at thalassa.informatimago.com
Mon Oct 20 16:08:30 EDT 2003
Steve Schafer <see at reply.to.header> writes:
> On 20 Oct 2003 19:03:10 +0200, Pascal Bourguignon
> <spam at thalassa.informatimago.com> wrote:
>
> >Even in case of hardware failure, there's no reason to shut down the
> >mind; just go on with what you have.
>
> When the thing that failed is a very large rocket having a very large
> momentum, and containing a very large amount of very volatile fuel, it
> makes sense to give up and shut down in the safest possible way.
You have to define what counts as a "dangerous" situation. Remember
that this "safest possible way" usually means blowing the rocket up.
AFAIK, while this parameter was out of range, there was no instability
and the rocket was not uncontrollable.
> Also keep in mind that this was a "can't possibly happen" failure
> scenario. If you've deemed that it is something that can't possibly
> happen, you are necessarily admitting that you have no idea how to
> respond in a meaningful way if it somehow does happen.
Exactly my point. This "can't possibly happen" failure did happen, so
clearly it was not physically a "can't possibly happen" event, which
means the problem was with the software. We know that now, but what
I'm saying is that smarter software could have deduced it on the fly.
We all agree that it would be better to have a perfect world and
perfect, bug-free software. But since that's not the case, I'm saying
that instead of having software that behaves like simple Unix C tools,
which call perror() and exit() as soon as an unexpected situation
arises, it would be better to have smarter software that can try to
handle UNEXPECTED error situations, including its own bugs. I would
feel safer in an AI rocket.
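To make the contrast concrete, here is a minimal sketch (all names and
the out-of-range check are hypothetical, loosely inspired by the Ariane 5
conversion overflow): the "Unix C tool" style lets an unexpected error
propagate and kill the process, while a supervising wrapper logs it and
carries on with the last known-good value.

```python
def read_horizontal_bias(raw):
    """Hypothetical sensor conversion: rejects values that do not
    fit a signed 16-bit integer, mimicking a conversion overflow."""
    value = int(raw)
    if not -32768 <= value <= 32767:
        raise OverflowError("conversion out of range: %r" % raw)
    return value

def unix_tool_style(raw):
    # perror()-and-exit() behaviour: any unexpected condition
    # propagates and aborts the whole program.
    return read_horizontal_bias(raw)

def supervised(raw, last_good=0):
    # "Go on with what you have": an UNEXPECTED error is logged and
    # the system continues with the last known-good value instead of
    # shutting down.
    try:
        return read_horizontal_bias(raw)
    except Exception as err:
        print("unexpected error, continuing with last good value:", err)
        return last_good

print(supervised(70000, last_good=12345))  # survives the bad input
```

This is only the crudest form of the idea; real fault-tolerant designs
(redundant channels, voting, degraded modes) go much further, but the
point stands: catching at the top level beats aborting.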
--
__Pascal_Bourguignon__
http://www.informatimago.com/
Do not adjust your mind, there is a fault in reality.
Lying for having sex or lying for making war? Trust US presidents :-(