random

Alex Martelli aleaxit at yahoo.com
Sun Jun 3 04:53:27 EDT 2001


"Nick Perkins" <nperkins7 at home.com> wrote in message
news:kfkS6.122583$eK2.28480687 at news4.rdc1.on.home.com...
    ...
> >     http://www.cs.umaine.edu/~chaitin/cmu.html

Nice pointer indeed, and it contains others that are just as nice.

> I may be mangling this a tiny bit, not being a mathematician, but I can
> see where some of the confusion may have set in.  It is easy to
> mis-understand Chaitan's work as giving an algorithm for producing a
> 'truly' random number.

Chaitin.  I point it out because you spell this "Chaitan" in a few more
places further on.

> In fact, what I think he shows, is that there is a number that is 'truly
> random', but producing the Nth bit of that number requires a program
> that is at least N bits long.

He gives the program-size requirement as the _definition_ of randomness,
and what he proves is that such a number exists -- defines it and explores
it.  Shows the purely arithmetic (Diophantine) equation which it solves, &c.
http://www.cs.umaine.edu/~chaitin/sciamer2.html, his Scientific American
article, goes into this more than the above-quoted URL, which is more about
how his work fits in 20th century mathematics.


> ...anyway, my brain is going all loopy at this point, and I may have
> mis-understood Chaitan, but I think I understood that Chaitan's ideas
> support (or conclude, or rely on) the idea that no finite algorithm can
> produce 'truly random' numbers.
>
> Have I mis-understood anything?

I would say "define" rather than 'support' or whatever.  Having defined
the randomness of a number as the size of the smallest program that
generates it as output (and gone on to prove that you cannot get a
lower-bound measure on randomness for all save a finite subset of
numbers), it's trivial that a program of finite size can only
generate a number of finite randomness, and what people seem to
mean by 'truly' or 'perfect' random is infinite randomness.
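That program-size measure is uncomputable, but you can get a feel for it with an ordinary compressor: the length of the compressed form is a computable *upper bound* on the size of "a program that outputs this data".  A rough sketch (my illustration, using zlib as a crude stand-in -- not Chaitin's actual measure):

```python
import os
import zlib

def description_length(data: bytes) -> int:
    # Length of zlib's compressed form: a crude, computable UPPER
    # bound on the size of a program that outputs `data`.  The true
    # smallest-program size (algorithmic complexity) is uncomputable.
    return len(zlib.compress(data, 9))

patterned = b"01" * 500   # highly regular: a tiny "program" suffices
noisy = os.urandom(1000)  # near-incompressible with high probability

# patterned compresses far below its raw length; noisy barely at all
```

A patterned kilobyte compresses to a few dozen bytes; a random one compresses to roughly its own length -- which is exactly the "no program much shorter than the number itself" flavor of the definition.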

Omega, aka the Chaitin number (but he doesn't call it by his name
himself:-), is defined as the probability that a randomly generated
program will halt.  There are some technical issues, key one being
to ensure the program is self-limiting -- it asks for its next bit in
sequence and self-determines algorithmically whether it has already
received all of its bits -- and of course one has to fix a specific binary
Universal Turing Machine encoding, but that's not the deep part:-).
And the 'randomly' in the definition doesn't specify how the random,
independent bits of the program are in turn generated -- assume
classical coin tosses.  Again nothing deep, as in principle the same
Omega could be obtained without any randomness to start with, by
enumerating all programs of N bits for all naturals N and summing
the weights of those that halt (of course that enumeration could not
run in finite time or space:-).
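The self-limiting requirement is what makes "probability" meaningful here: over a prefix-free (self-delimiting) set of programs the Kraft sum of the weights 2**-len(p) stays at or below 1, while over all bit strings it diverges.  A small sketch, using a toy prefix-free program set of my own invention ('0'*k + '1'), not any real UTM encoding:

```python
from fractions import Fraction

def weight_all_strings(n: int) -> Fraction:
    # Summing 2**-len(p) over ALL bit strings up to length n diverges:
    # each length L contributes 2**L strings of weight 2**-L, i.e. 1.
    return sum((Fraction(2 ** L, 2 ** L) for L in range(1, n + 1)),
               Fraction(0))

def weight_prefix_free(n: int) -> Fraction:
    # Over a prefix-free (self-delimiting) set -- here the toy
    # programs '0'*k + '1', none a prefix of another -- the Kraft
    # sum stays below 1, so "the probability that a randomly
    # generated program halts" is a well-defined probability.
    return sum((Fraction(1, 2 ** (k + 1)) for k in range(n)),
               Fraction(0))
```

With n=10 the all-strings sum is already 10, while the prefix-free sum is 1023/1024 and never exceeds 1.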

But the method summarily given in the Sci.Am paper does show how
to program the computation of any given bit of Omega (given a
machine without bounds on either the size of the program or the size
of numbers storable in its registers).  The size of the program grows
with the index k of the bit you want to compute.  But it seems to me
(still from the layman-level presentation in the paper) that these
programs are in turn generable by a finite program -- if you give
up the finiteness constraint along the time axis, or, in other words,
the guarantee that a program will halt.  "Hence the diophantine
equation actually has infinitely many solutions for a particular value
of its parameter k if the kth bit of Omega turns out to be a 1, and for
similar reasons it has only finitely many solutions if the kth bit of
Omega turns out to be a 0".  Here any program size is finite (if the
register-sizes aren't, at least:-), but some would say the finite
program is not "an algorithm" because it's not finite in time -- it
may never halt (and we know about trying to prove whether it halts...:-).
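The "finite program, infinite time" idea can be sketched by dovetailing: interleave every program with every step bound, and whenever a program is first seen to halt, add its weight.  The result is a finite piece of code that yields ever-better lower bounds on Omega forever, without ever halting itself.  Again a toy self-delimiting machine of my own devising stands in for a real UTM (its Omega happens to be exactly 2/3):

```python
from fractions import Fraction
from itertools import count

def toy_run(program: str, steps: int) -> bool:
    """Toy self-delimiting machine (an assumption for illustration):
    the only valid programs are '0'*k + '1', and such a program
    halts after k steps iff k is even."""
    if program != "0" * (len(program) - 1) + "1":
        return False
    k = len(program) - 1
    return k % 2 == 0 and k <= steps

def omega_lower_bounds():
    """Finite program, unbounded time: dovetail over (program, steps),
    yielding non-decreasing lower bounds on this toy machine's Omega."""
    halted = set()
    total = Fraction(0)
    for bound in count(1):
        for k in range(bound):  # all toy programs of length <= bound
            prog = "0" * k + "1"
            if prog not in halted and toy_run(prog, bound):
                halted.add(prog)
                total += Fraction(1, 2 ** len(prog))
        yield total
```

The successive yields converge to 2/3 from below, but the generator never finishes -- the "not finite in time" loophole in a nutshell.  For the real Omega no rate of convergence can be computed, which is why this doesn't contradict its randomness.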

So, summing up: Chaitin has surely not shown that a finite program
running in finite time can generate infinite randomness -- that would
contradict his definition of randomness.  He HAS shown that infinite
(true, perfect, whatever) randomness can be obtained in arithmetic
if one drops the finiteness constraint on the definition of 'algorithm'.
He also shows (handwaving, but apparently there are solid proofs
in his 'Exploring Randomness' book -- it will take some days before
I can read it though:-) that this mathematical randomness has all
the typical characteristics of classical-physics random numbers:
"""
In my approach I bring both notions of independence to bear. The
answer to my question for one value of k is logically independent of
the answer for another value of k. The reason is that the individual
bits of Omega, which determine the answers, are statistically
independent.

Although it is easy to show that for about half of the values of k the
number of solutions is finite and for the other half the number of
solutions is infinite, there is no possible way to compress the answers
in a formula or set of rules; they mimic the results of coin tosses.
Because Omega is algorithmically random, even knowing the answers
for 1,000 values of k would not help one to give the correct answer for
another value of k. A mathematician could do no better than a gambler
tossing a coin in deciding whether a particular equation had a finite or
an infinite number of solutions.
"""

So where's the "state of sin" in this arithmetical and almost-algorithmical-
except-for-the-little-detail-of-finiteness approach to randomness?


Alex






More information about the Python-list mailing list