random

David C. Ullrich ullrich at math.okstate.edu
Sun Jun 10 09:00:13 EDT 2001


On Sun, 10 Jun 2001 00:50:50 -0400, "Tim Peters" <tim.one at home.com>
wrote:

[...]
>>
>> So I lied, but it's not as obvious as you say that my lie
>> _must_ have been wrong - as long as we're just talking
>> about "a random program" without specifying the distribution
>> what I said could have been right regardless of questions of how
>> smart the first program is.
>
>All agreed.  I wasn't bothered that *a* program could "be that smart", but
>"the 0th" program specifically:  it was pointing at the essential
>arbitrariness of a measure that depends on how you just happen to tag the
>machines with ordinals.  "Interesting" results in this (and related) field
>are independent of labeling, so if Chaitin had come up with a result that
>wasn't so independent, this thread never would have started <wink>.

Yes, when I thought that the definition depended on the ordering,
that did make it seem a little silly - I mean a little less
interesting.

Of course it _does_ depend on the UTM, and there _is_ a UTM which
will make Chaitin's Omega equal to the number that has its n-th bit
set iff the n-th TM in some arbitrary (computable) ordering halts,
namely the silly one I've described. That gives an Omega which
definitely does not "pass all computable tests for statistical
randomness", or however he puts it.
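To make the "silly" UTM concrete: suppose (hypothetically) its only valid programs are the prefix-free strings 0^n 1, where program 0^n 1 simply simulates the n-th TM in the chosen ordering. Each such program has measure 2^-(n+1), and because the code is prefix-free and each term is a distinct power of two there are no carries, so Omega's binary expansion reads off the halting set directly. A minimal sketch, with a stand-in halting predicate since the real one is uncomputable:

```python
from fractions import Fraction

def silly_omega(halts, n_machines):
    """Halting probability of the hypothetical 'silly' UTM whose only
    valid programs are the prefix-free strings 0^n 1.  Program 0^n 1
    simulates machine n, so it contributes 2^-(n+1) iff machine n
    halts.  `halts` is a stand-in for the (uncomputable) halting
    predicate, truncated to the first n_machines machines."""
    return sum(Fraction(1, 2 ** (n + 1))
               for n in range(n_machines) if halts(n))

# Toy stand-in: pretend machine n halts iff n is even.
omega = silly_omega(lambda n: n % 2 == 0, 8)
# Bit n+1 of omega is exactly "does machine n halt" - no carries,
# since each halting machine contributes a distinct power of 2.
```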
By which I positively do not mean to suggest that anything Chaitin
said was wrong, just that I'm still not quite certain exactly what
the _hypotheses_ are under which a UTM gives a perfectly magical
Omega. He says something (which I think you've quoted above) about
the technical part being to come up with a UTM that makes his
probability measure work out right, and absent a definition of
"right" I still wonder to what extent he's defining "right" as
"allowing me to prove what I want to prove".

But I suspect that he has a more natural definition of
"right", or "self-delimiting" means something other than
what I take it to mean, or something. And as you've suggested,
it's not reasonable to expect to get that sort of technical
detail straight on usenet. The UTM that appears to contradict
him is certainly _very_ far from being an _efficient_
encoding - presumably the hypotheses have to do with
that, as someone suggested. (For a while I thought that
if the hypotheses included something about efficiency
then that made his results less "natural". But since
he's specifically studying algorithmic complexity that's
not fair. In fact in real programming languages it will
be true that the length of the shortest program
computing (X,Y) will not be much larger than the
sum of the lengths of the shortest programs computing
X and Y, so a hypothesis to that effect is really not
all that unnatural. If that or something similar
is what the hypothesis actually is.)
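That subadditivity claim is easy to illustrate with a crude, computable stand-in for shortest-program length: the compressed size of a string. With zlib, the compressed size of a pair (a separated concatenation) is typically not much larger than the sum of the individually compressed sizes; the constant overhead is the "glue". This is only an analogy, not Chaitin's actual definition:

```python
import zlib

def c(s: bytes) -> int:
    """Compressed length: a crude, computable proxy for the length
    of the shortest program printing s."""
    return len(zlib.compress(s, 9))

x = b"abracadabra" * 100
y = b"mississippi" * 100
pair = x + b"\x00" + y   # encode the pair as a separated concatenation

# Analogue of C(X,Y) <= C(X) + C(Y) + constant glue.
print(c(pair), c(x) + c(y))
```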


>>...
>> That's the best excuse I can contrive. Oh well.
>
>You don't need an excuse!  I wasn't careful with definitions, and was too
>keen to keep this all at "fuzzy overview" level.  So that's my excuse.

Ok.

>> Thought I had the problem of inexact floating point solved.
>
>Python beat you to it:  A real number is computable iff Python can compute
>it.  Therefore you can enumerate the set of non-computable reals just by
>picking them from the gaps in the ones Python can compute <wink>.

There you go, getting all condescending again. No wait, there's
nothing condescending about it<g>...

>for-example-0.1-ly y'rs  - tim



David C. Ullrich
*********************
"Sometimes you can have access violations all the 
time and the program still works." (Michael Caracena, 
comp.lang.pascal.delphi.misc 5/1/01)


