random

Darren New dnew at san.rr.com
Mon Jun 4 12:35:49 EDT 2001


Alex Martelli wrote:
> Even in a finitistic definition of algorithm, there may still be
> parameters needed to describe it completely.

Right. That would be the "seed" I was talking about. So?
 
> > > > > I still haven't heard your definition of
> > > > > 'perfect' randomness, for example.
> > > >
> > > > Generally, this is something along the lines of:
> > > > Given all available information about the mechanism used to generate
> the
> > >
> > > "available" seems to play a rather crucial role here, then.
> >
> > No. "available" means "possible to be obtained in any way." What you
> 
> So that the randomness of an existing system can suddenly
> change if somebody dies, for example?  Because, e.g., part
> of the information about the system was only "available" by
> asking him (he had memorized it) -- with that information,
> in this thought-experiment, things were predictable -- now
> the designer dies, the information is not available any more,
> and it's not predictable anymore, and thus becomes 'random'?

No. Because it was possible for you to ask the designer before he died,
which would be "a way". Or you could take the system apart and measure
it to reverse engineer it.

> I would rather not use a definition of "perfect" that requires
> accounting for such factors -- considering human beings as
> parts of the system, knowing if they ever wrote somewhere
> down that information (it might still be 'obtained in some
> way' if I could root through their papers and maybe if I was
> able to decipher some private code they used -- or maybe
> not) or communicated it to someone else (maybe to their
> priest during confession?)... seems a rather shifty base for
> a mathematical definition.

That's because you're making up strawmen.
 
> Why don't we follow De Finetti's lead?  P(x) does not 'exist'
> per se, it's just a shorthand for P(x|y) where y is a 'mindset',
> a sum total of information about the world, that is being
> implicitly assumed.  All probabilities are 'subjective' in the
> specific sense of depending on that y -- there is no 'perfect
> reference y' (God's all-knowing mind) that needs to be taken
> as an assumption -- as long as we fix y (ideally, explicitly)
> we can do all of our probability computations equally well.
> 
> Now if you believe there is an ideal y such that all P(x|y)
> are 0 or 1 for that y, or if you believe it does not exist --
> whether God plays dice or not:-) -- becomes irrelevant
> enough so we can do maths without first having to solve
> theological questions as a prerequisite.  Isn't randomness
> (or "perfect" randomness) akin to probability in this?  Or
> IS it just a theological issue, so that any hope of tackling
> it apart from theology is doomed?

No. That would be something else.

Also, I'm not sure where you keep coming up with the term "perfect
randomness".

> > > That would only leave the arbitrarily long sequence of generated
> > > bits and no other info?  Fine, then:
> >
> > > > I don't think Chaitin's Omega meets this definition, since it is by
> > > > definition a single number that anyone can calculate equally well.
> 
> It's one number PER each Universal Turing Machine, I believe.  I
> do not know if from the first N bits of Omega you get enough
> information to reconstruct all parameters of the UTM in question
> so as to be able to compute the N+1-th bit, but I believe that
> part of the point is that you *don't*.
> 
> > > But from Omega's first N bits, you get no hint at all that lets
> > > you predict its N+1-th bit "with > 50% accuracy".
> >
> > How did you get the first N bits? If you can calculate them, so can I.
> 
> Say I have an idealized black box that was made for me by a
> demented but genial Transylvanian dwarf, who, unfortunately,
> has since deceased.  I do know with certainty (he was a VERY
> trustworthy demented but genial Transylvanian dwarf) that
> what the box does each time I press the button is compute
> the next bit of Omega for some UTM -- alas, I don't know WHAT
> Universal Turing Machine encoding it uses, and the box with
> the "algorithm" (or effective procedure, whatever) is rigged so
> that any attempt to examine its innards will destroy it (I think
> my deceased friend didn't believe humanity was ready to learn
> how he made the infinite-precision registers that are such a
> crucial part of the machine's operation, and, you know, he may
> have been right after all).
> 
> So, "all available information" is: this computes Omega for SOME
> UTM -- sadly, we don't know WHAT UTM that is. 

Sorry. That's not "all available information". Your assumption that it
is impossible to know what the box is doing turns it into an opaque
physical instantiation of a random number generator, not an algorithm.
Your opaque box is now a random number generator, not a deterministic
computer. You may assert all you want that it's running a deterministic
algorithm, but you have zero proof of that. "Trust the word of this dead
guy" is not a mathematical argument.

Making up an example like "I have enough proof to know that X is true,
but there's no way you can get the same amount of evidence" is making up
a straw man. If you are going to postulate such a black box and call it
an algorithm, you would have to *prove* to me in a mathematical sense
that it's generating bits from an actual Omega calculation. There's no
room in mathematics to say "Trust me, this really does work, and given
that, I can prove X." Instead, you have to say "Assume X; now X implies
Y." But you can't prove anything about *X* that way. 

You're saying "assume I have this box that calculates Omega but nobody
can look at it. Therefore, you're wrong because you can't predict its
output." You haven't proven anything about the Omega box, nor my ability
to predict Omega in general. You've only proven things *assuming* you
have such a box. But since you *don't* have such a box, you've proven
nothing. Once you *do* have such a box, it's no longer an algorithm, but
a physical RNG.

> There, you have
> all _available_ information. 

Nope. First you'd have to prove mathematically that it's impossible for
me to examine the box. To do that, you'd have to actually have a box, or
tell me precisely how to construct such a thing.

> Now (assuming you can't come and
> steal my treasured little black box -- it's well-guarded, I assure
> you... you don't want to brave the killer rabbit, for example, and
> that's just the start of it!), how are you going to calculate?  I'll
> send you the first N bits, for any N, and you just need to give
> me the N+1-th one.

Again, "assume you can't come and steal my black box". Fine. "Assume I
have a magic ray built by the drawf's cousin that makes an exact
duplicate of any device." See how silly such an argument is?

> Note that 'all available info' does include the algorithm, net of
> some numeric parameters.  

Well, if you're talking about UTMs, then "algorithm" must include all
the numeric parameters.

> > > So, if the
> > > definition does not include the first condition, from the bits
> > > only you're not going to get any farther than from N tosses
> > > of an idealized coin.
> >
> > Sure. *I* can write an algorithm to calculate the bits of Omega. Then I
> 
> For what UTM encoding, pray?

The same one you're using.

> > run my algorithm, see that the first N bits of its output matches the
> > first N bits of your sequence, and predict with very high probability
> > what the next bit is, right?
> 
> I don't believe so -- I don't think those N bits let you reconstruct
> the UTM encoding being used.  I hope I'll be more positive about
> it once I _have_ received and studied 'Exploring Randomness'.
> 
> Until then, I can only hypothesize -- IF some amount N of bits
> was sufficient to let you reconstruct the UTM parameters, and
> therefore calculate the N+1-th bit, THEN I would agree that the
> Omega's randomness was bounded by N (<=N).  IF no amount
> N of bits has this property, would you agree in turn as to the
> 'perfection' of the randomness (at least once the dwarf has
> died, or has irretrievably forgotten how the heck he DID build
> the darned algorithmic black box in the first place)?

No. First you have to prove he didn't build two.
 
> > If you calculate the first 100,000 bits of
> > Omega and give it to me, and you don't tell me that it's the first
> > 100,000 bits of Omega, and I think to try calculating the first 100,000
> > bits of Omega and they match, then I guess the 100,001'st bit of Omega,
> > then I have a >50% chance of getting the next bit right.
> 
> Again the UTM parameters loom large.  At least until some ISO
> subcommittee comes up with a standardized Universal Turing
> Machine (or maybe the IEEE could do it, as for floating point?-)...

Using the same UTM as you do, of course. Or I could just search thru all
possible UTMs until I found one that matched your output.
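
To make that concrete, here's a toy sketch in Python of that kind of
search. The candidate generators below are a small, made-up family of
linear congruential bit generators standing in for "all possible UTM
encodings", and the search bounds are arbitrary; it's an illustration of
the strategy, not a claim about real UTM enumeration.

    def lcg_bits(multiplier, increment, seed, n, modulus=2**31):
        # A deterministic candidate generator: a linear congruential
        # generator whose low-order bit we take as the output stream.
        out, x = [], seed
        for _ in range(n):
            x = (multiplier * x + increment) % modulus
            out.append(x & 1)
        return out

    def predict_next_bit(observed):
        # Search the (tiny) candidate space for a generator whose output
        # matches the observed prefix, then use it to guess the next bit.
        n = len(observed)
        for mult in (1103515245, 22695477, 69069):
            for inc in (12345, 1):
                for seed in range(1000):
                    if lcg_bits(mult, inc, seed, n) == observed:
                        return lcg_bits(mult, inc, seed, n + 1)[-1]
        return None    # no match found; fall back to a 50/50 guess

    # If the observed prefix really did come from one of the candidates,
    # the guess is right with much better than 50% probability.
    mystery = lcg_bits(69069, 1, 42, 30)
    print(predict_next_bit(mystery))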

> > If, on the other hand, you run a random number generator based on
> > quantum radioactive decay, or you flip a coin 100,000 times, and it just
> > *happens* to match the first 100,000 bits of Omega, and I notice that,
> > and I guess the 100,001'st bit of Omega, then I have still only a 50%
> > probability of being right.
> 
> I think the situation, looking at the bits only, would be the same
> in both cases. 

You're mistaken.

> Now I'm starting to wonder if a UTM encoding (or
> an infinite set of possible such encodings, actually) could be found
> to "account" for ANY such given first-100,000-bits.  Maybe not --
> perhaps I cannot encode an UTM so that random self-delimiting
> programs for it have _any_ prespecified probability of halting
> (or at least to N bits' approximation).

I think part of the problem is that Omega is unknowable, not just
"random". That is, you can't build a machine to calculate what its value
is. Since its construction is based on whether or not each program halts,
I can't imagine how you could build an algorithm to figure out what its
bits are.
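
For concreteness: Omega is the halting probability of a prefix-free
universal machine, i.e. the sum of 2**-len(p) over every program p that
halts, and the best any computation can do is push a lower bound on it
upward. Here's a minimal toy sketch of that idea; the "machine language"
and halting test below are made-up stand-ins, not Chaitin's actual
construction.

    def valid_programs(max_len):
        # Toy prefix-free "machine language": a program is a run of 1s
        # terminated by a single 0, so no program is a prefix of another
        # (that's what keeps the 2**-len(p) weights summing to at most 1).
        return ["1" * k + "0" for k in range(max_len)]

    def halts_within(program, step_budget):
        # Stand-in for "does the universal machine halt on this program
        # within step_budget steps?".  Checkable for any finite budget,
        # but a still-running program might always halt later.
        return program.count("1") <= step_budget

    def omega_lower_bound(max_len, step_budget):
        # Monotone lower bound on the halting probability:
        #   Omega = sum of 2**-len(p) over all halting programs p.
        return sum(2.0 ** -len(p)
                   for p in valid_programs(max_len)
                   if halts_within(p, step_budget))

    # The bounds only ever increase as the budgets grow, but no finite
    # budget tells you how close you are; knowing that would amount to
    # solving the halting problem, which is why not even the N-th bit of
    # the real Omega is computable.
    print(omega_lower_bound(8, 3))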

> > This is why some people say that any given finite string cannot be
> > considered random or non-random by itself, without knowing how it's
> > generated.
> 
> The 'infinity' aspects with which 'perfect randomness' seems to be
> imbued might surely account for that unwillingness to bestow the
> now-hallowed monicker 'random' onto any mere finite structure, I
> guess.

I'm not sure where 'perfect randomness' comes from in that statement. 

>  I still think it's very useful to study the randomness of such
> finite sets,

Sure. Using a different meaning for the word "random", tho.

> and apparently so do Chaitin and all other scholars in
> that field. 

Again, using a different meaning for the term "random".

> However it was generated, that bitstring now has its
> own intrinsic properties.  A man in the street might well call it
> 'pretty well-ordered' or 'rather random' -- and why should the word
> 'random' be pre-empted, so it has no meaning at all in this context,
> when a useful technical sense, that happens to match pretty well
> the way the man in the street would use the word, can be applied
> instead?

Because we already have other words that mean the same thing, like
"uncompressible". Besides, you're still arguing a different point than I
am. The original JVN quote was something equivalent to "deterministic
algorithms don't produce randomness". If you want to define "randomness"
as "the uneducated guy walking down the street doesn't see a pattern",
then obviously JVN is incorrect and would have known it. If you want to
know what he meant, pick a meaning for "randomness" that makes the quote
make sense, assuming JVN had a good idea of how algorithms work: i.e.,
use the meaning of "random" that is "unpredictable" rather than
"uncompressible" or "uncalculable."

-- 
Darren New / Senior MTS & Free Radical / Invisible Worlds Inc.
       San Diego, CA, USA (PST).  Cryptokeys on demand.
     This is top-quality raw fish, the Rolls-Rice of Sushi!


