random

David C. Ullrich ullrich at math.okstate.edu
Tue Jun 5 11:09:25 EDT 2001


On Mon, 4 Jun 2001 21:25:48 -0400, "Tim Peters" <tim.one at home.com>
wrote:

>[David C. Ullrich]
>> Um, actually I'm pretty sure that Omega _is_ what I said it is, and
>> is also what you say it is.
>
>I'm not <wink>.

Posts crossed in the mail - I'm certain I was wrong.

>> ...
>> The first thing you need to note is that "probability that a random
>> program halts" is _meaningless_ until we have specified how we are
>> _choosing_ our "random program".
>
>Sure.
>
>> ...
[my fascinating definition of "random program" that
turned out not to be Chaitin's snipped]
>
>Chaitin's construction is actually based on the bit-length p of programs P.
>A specific program P of bit-length p that halts contributes (additively)
>1/2**p to Omega -- all programs of length p carry the same weight.  You may
>object that there are 2**p programs of length p alone, so that this isn't
>well-defined (i.e., that the programs of length p alone *may* contribute
>2**p*(1/2**p) = 1 to the total, and that the sum over all P may be
>unbounded).  Indeed, that was a long-standing hang-up!  "The trick" is that
>Chaitin's twist on universal Turing machines requires "self-delimiting"
>programs:
>
>    A key technical point that must be stipulated in order for Omega
>    to make sense is that an input program must be self-delimiting:
>    its total length (in bits) must be given within the program itself.
>    (This seemingly minor point, which paralyzed progress in the field
>    for nearly a decade, is what entailed the redefinition of
>    algorithmic randomness.)
>    ...
>    If programs were not self-delimiting, they could not be constructed
>    from subprograms, and summing the halting probabilities for all
>    programs would yield an infinite number. If one considers only self-
>delimiting programs, not only is Omega limited to the range between
>0 and 1 but also it can be explicitly calculated ``in the limit from
>    below.''
>
>IOW, there are slippery subtleties here (subtle enough to trip up serious
>researchers for years) -- if you want to pursue this, it's better to stop
>guessing at the gaps in popular overviews and go to the sources.  The way he
>actually approximates Omega from below is to compute the values of omega_n()
>for increasing n, where for a fixed n omega_n() looks at all programs P of
>bitlength <= n, and for each one that halts within n seconds, adds
>1/2**len(P) to a running total (where len(P) is my spontaneous <wink>
>notation for the bitlength of P).  This gives, for increasing n, a
>non-decreasing sequence of rationals that converge to Omega (but with a
>non-computable convergence rate).
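
[The scheme can be sketched in a few lines of Python. Everything here is
a toy, not Chaitin's actual UTM: `toy_halts_within` is a made-up stand-in
for "run program P on the UTM for n steps", and the only "programs" are
the strings 0^(k-1)1 -- but the shape of the approximation is the same.]

```python
from fractions import Fraction

def toy_halts_within(program: str, steps: int) -> bool:
    """Toy stand-in for 'run program P for `steps` steps on the UTM'.
    Here a program is 0^(k-1) 1, and we pretend it halts after 2*k steps."""
    k = len(program) - 1          # number of leading zeros
    return 2 * k <= steps

def omega_n(n: int) -> Fraction:
    """Chaitin-style lower bound: sum 1/2**len(P) over every
    (self-delimiting) program P of length <= n that halts within n steps."""
    total = Fraction(0)
    for length in range(1, n + 1):
        p = "0" * (length - 1) + "1"   # the only valid program of this length
        if toy_halts_within(p, n):
            total += Fraction(1, 2 ** length)
    return total

# The approximations form a non-decreasing sequence of rationals
# converging to (this toy's) Omega from below.
approx = [omega_n(n) for n in range(1, 8)]
assert all(a <= b for a, b in zip(approx, approx[1:]))
```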
>
>The relation to coin-tossing is this:  *each* time he flips a coin he
>appends a 0 or 1 bit (depending on whether heads or tails comes up).  The
>coin-tossing stops when the program is complete; the "self-delimiting"
>business is crucial here, else you couldn't know when to stop; then there
>are technical details about strings that aren't well-formed programs, etc.
>But the finer a point you want to draw the more the details become
>important, and I think we're already beyond what Usenet is any good for.
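
[The coin-flipping procedure is also easy to sketch. The names
`flip_until_complete` and `max_bits` are mine, and the completeness test
used below -- "last bit is a 1" -- is just a placeholder for a real
self-delimiting syntax; it's the toy syntax I ask about further down.]

```python
import random

def flip_until_complete(is_complete, max_bits=10_000):
    """Append fair coin flips (0 for heads, 1 for tails, say) until the
    bit string is a complete program; the self-delimiting property is
    what tells us when to stop flipping."""
    bits = ""
    while len(bits) < max_bits:
        bits += random.choice("01")
        if is_complete(bits):
            return bits
    raise RuntimeError("no complete program within max_bits flips")

# With the toy syntax 'zero or more 0s followed by a 1', completeness
# is simply 'last bit is a 1'; the process then stops with probability 1,
# and P(program == '0'*(k-1) + '1') is exactly 1/2**k -- the weight that
# program contributes to Omega.
program = flip_until_complete(lambda s: s.endswith("1"))
```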
>
>> ...
>> The previous paragraph is a precise/rigorous
>> specification of "Omega is the probability that
>> a random program halts"
>
>It's some sort of probability measure, yes, but not Chaitin's.

Yup. (Although whether it's Chaitin's measure or not, it _does_
show that "Omega is the probability a random program halts"
does _not_ imply that the n-th bit of Omega cannot be 1 iff
the n-th TM halts, as someone whose name I forget said <wink wink>.)

>> (except it doesn't quite pin down what Omega _is_ since it does
>> not specify exactly what order we put the Turing machines in at
>> the start. To actually _define_ the sequence T1, T2,... we'd need
>> to get into details of the syntax we were using to define TM's.)
>
>Yes, and "Chaitin's constant" too is really a function of the specific
>universal Turing machine in question.  Without qualification, it usually
>refers to the Omega of a specific UTM he coded in Lisp and published.

Indeed the exact value of Omega depends on the details of the
encodings. What I'm wondering today is how many of his conclusions
also depend on the details of the encoding.

You say there are details regarding what to do when the next bit
gives a syntax error. I'd assumed it was a magical encoding so
that there were no syntax errors.

You may be right about us having reached the limits of Usenet.
I'll give up soon. But I'm not going to take the time to actually
learn all of Chaitin's stuff (unless I do), and you're obviously
much more familiar with it than I am (this follows from the
fact that you're obviously somewhat familiar with it). Before
I give up on accomplishing anything here, answer me a question:

Suppose I have a syntax where a valid program is precisely
a sequence of 0's terminated by a 1. Any such string is a
valid program. Does this count as "self-delimiting"? 
It seems to me that we can keep asking for one more bit - there
are never any syntax errors, and as soon as we get a 1 we say
we're finished, and then we ask whether the program halts.

>However, Chaitin's defn doesn't involve a bijection between natural numbers
>and machines, except in the incidental sense that programs are viewed as
>bitstrings which *could* be viewed as integers -- but the defn makes no use
>of that, only of the *lengths* of programs, and those are independent of
>arbitrary orderings.

The lengths also depend on the particular syntax being used. (Which
is not just a quibble - I do or do not have an interesting point here,
depending on what the answer to the question in the previous
paragraph is.)

>> I don't think so. And in any case if you _do_ choose your program
>> by flipping a coin as above that specifies a distribution, and with
>> that distribution the probability that a random program halts _is_
>> a number with n-th bit = 1 if and only if the n-th TM halts.
>
>Can only repeat that it's a classic example, and, again, IIRC it doesn't
>meet the criteria for randomness. 

Things must be crossing in the mail. No it does not meet the 
criteria for randomness, that was my point.

> Could be I'm wrong about that, but this
>is far enough removed from what I'm supposed to be doing right now that I'm
>not going to go searching one way or the other.  If you do, let me know what
>you find out.
>
>> ...
>> (And in fact when I think about how to do that
>> and I think about how one would "naturally"
>> order the TM's it's not clear to me that such
>> non-randomness does not creep into Omega
>> defined using a "natural" ordering...)
>
>As above, Chaitin's defn. is independent of ordering.
>
>> If I have the "definition" straight (and
>> note that nothing I've said contradicts
>> what you said about the definition!) then
>> the bits of Omega are not going to be much
>> use for an RNG.
>> ...
>
>I agree with the conclusion regardless.
>
>> But it turns out that Omega _does_ solve a
>> _different_ practical problem! What's the
>> other pressing problem? Determining which
>> reals have exact floating-point representations.
>>
>> Thm1. Omega does not have an exact representation
>> in finite-but-unbounded-precision floating point.
>>
>> Thm2. Thm1 is all you can say about Omega, given
>> the definition above: If x is any real (between
>> 0 and 1) and x does not have an exact representation
>> in finite-but-unbounded-precision floating point
>> then there is an ordering T1, T2,... of our
>> Turing machines that makes Omega = x.
>
>Since it's not a computable ordering, I'm not sure "solves" was the right
>word <wink>.  Note that recent results have shown that every non-computable
>*random* real (in (0, 1)) is the (Chaitin-style) Omega for some
>self-delimiting UTM. 

Where "random" means "normal", or at least implies normal?

> That's a paraphrase.  A precise statement requires
>pages of defns, and again people interested enough should go the sources (I
>gave a good link earlier in this thread).
>
>> ...
>> (Devil's advocate: But if Omega can be 0.10101010...
>> then how can it be incompressible? You're not actually
>> saying Chatin's wrong about that, are you?
>
>No.  I believe instead that there's no self-delimiting UTM (etc -- insert
>paragraphs of defns here) s.t. 0.1010... is its Omega.  All Omegas are known
>to be random.

Well right. It was a rhetorical question for myself, in a context
where I was assuming things that turned out not to be so. (Note
that the ordering that led to 0.10101010... was in fact
non-computable.)

********************************

But this question is not meant to be rhetorical: We have a syntax
where the valid programs are exactly finite sequences of 0's
terminated by a 1. Does this count as "self-delimiting"?

(On the one hand I don't see why not - we keep asking for
bits until we get a 1. On the other hand Chaitin makes
a remark about how these "self-delimiting" programs
are not delimited by an EOF. But I don't see by what
"natural" criterion "last bit is a 1" is not one of
the allowable ways to look at what we have so far and
decide it's finished.)
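
[A quick sanity check, for what it's worth. One standard formal
criterion for "self-delimiting" is that the set of valid programs be
prefix-free -- no valid program is a proper prefix of another -- so a
bit-by-bit reader always knows when to stop. The 0*1 syntax passes that
test, and the Kraft sum of the weights 1/2**len(P) comes out to exactly 1.
Whether that settles the question for Chaitin's definition I leave open.]

```python
from fractions import Fraction

# The first 12 programs in the proposed syntax: 0*1
programs = ["0" * k + "1" for k in range(12)]

# Prefix-free: no valid program is a proper prefix of another valid one.
assert not any(a != b and a.startswith(b) for a in programs for b in programs)

# Kraft sum: the weights 1/2**len(P) of the first N programs total
# 1 - 1/2**N, converging to exactly 1 as N grows.
kraft = sum(Fraction(1, 2 ** len(p)) for p in programs)
assert kraft == 1 - Fraction(1, 2 ** len(programs))
```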


David C. Ullrich
*********************
"Sometimes you can have access violations all the 
time and the program still works." (Michael Caracena, 
comp.lang.pascal.delphi.misc 5/1/01)


