Question(s)

avi.e.gross at gmail.com
Wed Oct 25 21:25:58 EDT 2023


Just want to add that even when you can prove that an algorithm works absolutely, positively, it will often fail on our very finite computers. Consider families of algorithms that do hill climbing to find a minimum or maximum and are guaranteed to get ever closer to a solution given infinite time. In real life, using something like 64-bit or 128-bit floating-point representations, you can end up with all kinds of rounding errors and inexactness on some inputs, so the algorithm falls into a loop, constantly bouncing back and forth between the two sides of the peak and never getting any closer to it. It may work 99.99% of the time and then mysteriously lock up on some new data.
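
As a rough sketch of the kind of failure I mean (a contrived toy, not any real optimizer), here is a bisection search whose stopping test is exact instead of using a tolerance. In exact arithmetic it would terminate; in 64-bit floats the midpoint eventually rounds back onto an endpoint and, without the guard, the loop would spin forever:

    # Bisection on f(x) = x*x - 2 with an "exact" stopping test.
    def f(x):
        return x * x - 2.0

    lo, hi = 1.0, 2.0
    steps = 0
    while hi - lo > 0.0:               # too strict a test for floats
        mid = (lo + hi) / 2.0
        if mid == lo or mid == hi:     # no float fits between lo and hi
            print("stuck after", steps, "steps; width is still", hi - lo)
            break                      # without this guard, we loop forever
        if f(mid) < 0.0:
            lo = mid
        else:
            hi = mid
        steps += 1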

Or consider an algorithm some use in places like Vegas that is absolutely 100% guaranteed to win. If you bet $1 and lose, simply double your bet as many times as needed and eventually you should win. Of course, one little problem is that all you ever win is $1. And you might lose 50 times in a row, spending hours and risking ever larger amounts just to win that dollar. Sure, you could start by betting a larger amount, like a million dollars, and eventually win a million dollars, but how long can anyone keep doubling before they have to stop and lose it all? After enough doublings the bet is for billions of dollars, and soon thereafter more than the worth of everything on the planet.
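
To see how fast the doubling gets out of hand (just the arithmetic, no casino specifics):

    # After n straight losses, the next bet in the doubling scheme is 2**n
    # dollars, and everything wagered so far is 2**n - 1 -- all to win $1.
    bet = 1
    total_risked = 0
    for losses in range(51):
        total_risked += bet
        bet *= 2

    print(f"bet needed after 51 straight losses: ${bet:,}")
    print(f"already wagered by then:             ${total_risked:,}")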

The algorithm is mathematically sound but the result given other realities is not.

A last analogy is the division-by-zero issue. If your algorithm deals with ever smaller numbers, a value that is mathematically nonzero may simply be rounded or truncated to exactly zero. The next time the algorithm divides by it, you get a serious error.
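
A minimal sketch of that failure mode:

    # A positive value that keeps shrinking eventually underflows to
    # exactly 0.0 in 64-bit floats, and the next division blows up.
    x = 1.0
    while x > 0.0:
        smallest = x
        x = x / 10.0              # mathematically never reaches zero...

    print("smallest nonzero value seen:", smallest)
    print(1.0 / x)                # ...but x is now exactly 0.0: ZeroDivisionError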

So perhaps a PROOF that a real computer program will work would require quite a few constraints. Python, by default, supports integers limited in size only by available memory. This avoids some of the overflow problems you hit when all you are allowed is 64 bits, but it remains a constraint and a danger. Even a fairly simple algorithm you can PROVE will work will still fail if your program juggles many such huge integers on machines that cannot extend their memory across whole farms of machines, or if it generates numbers like googolplex factorial divided by googolplex raised to the log(googolplex) power.
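
A quick illustration of both the benefit and the limit (the googolplex line is left commented out on purpose):

    # Python integers grow as needed, so 64-bit overflow is not the issue...
    big = 2 ** 10_000
    print(big.bit_length())        # 10001 bits, about 3,011 decimal digits

    googol = 10 ** 100             # still fine
    # googolplex = 10 ** googol    # ...but this would need ~10**100 bits of
    #                              # memory, far beyond any machine or farm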

Some problems like the above are manageable, for example by setting limits and simply returning failure without crashing. Many well-designed programs can be trusted to work well as long as certain assumptions are honored. But often that simply is not true, and things can change.
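
One way to set such a limit (a made-up example, not from any library): refuse inputs whose result would be unreasonably large and report failure instead of crashing or running away:

    import math

    MAX_N = 100_000   # arbitrary cap chosen for this sketch

    def safe_factorial(n):
        """Return n! or None if n is outside the limits we will handle."""
        if not isinstance(n, int) or n < 0 or n > MAX_N:
            return None            # signal failure; the caller decides what to do
        return math.factorial(n)

    print(safe_factorial(10))       # 3628800
    print(safe_factorial(10**100))  # None -- too big to even attempt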

Python actually is a good choice for quite a bit and often will not fail where some other environment might, but there are few guarantees, and thus people often program defensively even in places where they expect no problems.

As an example, I have written programs that ran for DAYS and generated many millions of files as they chugged along in a simulation, and then mysteriously died. I did not bother trying to find out why one program it called would, on rare occasions, fail to produce a result. I simply wrapped the section that called the occasional offender in the equivalent of a try/catch for that language and, when it happened, did something appropriate and continued. The few occasional errors were a bug in someone else's code that should have handled whatever weird data I threw at it gracefully but didn't, so I added my own overhead so I could catch it. The rare event did not matter much given the millions of runs that gave me the analysis I wanted. But the point is that even if my code had been certified as guaranteed to be bug free, any time I stepped outside it by calling someone else's library code or an external program, that guarantee no longer held.
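
Roughly what that wrapper looked like, sketched in Python (the program and function names here are invented for illustration; the real setup was not this simple):

    import subprocess

    def run_one_case(case_file):
        """Run the external tool on one input; return its output or None."""
        try:
            result = subprocess.run(
                ["external_tool", case_file],   # hypothetical external program
                capture_output=True, text=True, check=True, timeout=60,
            )
            return result.stdout
        except (subprocess.CalledProcessError, subprocess.TimeoutExpired):
            # The occasional weird input makes the tool blow up. Note it,
            # skip this case, and keep the multi-day run alive.
            print("skipping", case_file)
            return None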

We are now spending quite a bit of time educating someone who seems to have taken on a task without being generally knowledgeable about much outside the core language, and much of the answer to making the code as reliable as it can be may lie somewhat outside the program as the interpreter sees it.

And unless this is a one-shot deal, in the real world programs keep getting modified, new features often get added, and just fixing one bug can break other parts, so you would need to verify things over and over and then freeze.


-----Original Message-----
From: Python-list <python-list-bounces+avi.e.gross=gmail.com at python.org> On Behalf Of Michael F. Stemper via Python-list
Sent: Wednesday, October 25, 2023 9:34 AM
To: python-list at python.org
Subject: Re: Question(s)

On 24/10/2023 18.15, o1bigtenor wrote:


> What is interesting about this is the absolute certainty that it is impossible
> to program so that that program is provably correct.

Not entirely true. If I was to write a program to calculate Fibonacci
numbers, or echo back user input, that program could be proven correct.
But, there is a huge set of programs for which it is not possible to
prove correctness.
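
For instance, a function as small as this (just a sketch) can be proven correct by induction on its loop invariant:

    def fib(n):
        """Return the n-th Fibonacci number, with fib(0) == 0, fib(1) == 1."""
        a, b = 0, 1
        for _ in range(n):
            a, b = b, a + b   # invariant: a == fib(i), b == fib(i+1)
        return a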

In fact, there is a huge (countably infinite) set of programs for which it
is not even possible to prove that the program will halt.

Somebody already pointed you at a page discussing "The Halting Problem".
You really should read up on this.
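
The core of the argument can even be sketched in Python, assuming (for the sake of contradiction) that someone hands us a perfect halting oracle:

    def make_paradox(halts):
        """Given a purported oracle halts(f) -> bool, build a contradiction."""
        def paradox():
            if halts(paradox):   # if the oracle says paradox() halts...
                while True:      # ...then loop forever
                    pass
            return "done"        # ...otherwise halt immediately
        return paradox

    # For any claimed oracle, make_paradox(oracle)() halts exactly when the
    # oracle says it does not -- so no correct halts() can be written.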

> Somehow - - - well - - to me that sounds that programming is illogical.
> 
> If I set up a set of mathematical problems (linked) I can prove that the
> logic structure of my answer is correct.

Exactly the same situation. There are many (again, countably infinite)
mathematical statements where it is not possible to prove that the statement
is either true or false. I want to be clear that this is not the same as
"we haven't figured out how to do it yet." It is a case of "it is mathematically
possible to show that we can't either prove or disprove statement <X>."

Look up Kurt Gödel's work on mathematical incompleteness, and some of the
statements known to be independent of the usual axioms, such as the Continuum
Hypothesis (independent of standard set theory) or the Parallel Postulate
(independent of Euclid's other axioms).

As I said at the beginning, there are a lot of programs that can be
proven correct or incorrect. But, there are a lot more that cannot.

-- 
Michael F. Stemper
Outside of a dog, a book is man's best friend.
Inside of a dog, it's too dark to read.

-- 
https://mail.python.org/mailman/listinfo/python-list


