The smallest unit of randomness is arguably a uniform random bit, in other words a symmetric Bernoulli random variable taking the values 0 and 1 each with probability 1/2. If
$$U=\sum_{n=1}^{\infty} b_n 2^{-n}$$
is the binary expansion of a random variable $U$ taking values in the interval $[0,1]$, then $U$ follows the uniform law on $[0,1]$ if and only if the bits $(b_n)_{n\geq1}$ are i.i.d. symmetric Bernoulli. As explained in a previous post, this observation allows one, for instance, to extract from $U$ a whole infinite sequence of i.i.d. uniform random variables on $[0,1]$. But actual real numbers are not accessible in current computers.
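The correspondence above can be sketched numerically: drawing i.i.d. fair bits and summing the truncated series $\sum_n b_n 2^{-n}$ yields (up to floating-point precision) a uniform variable on $[0,1]$. Below is a minimal illustration; the helper name `uniform_from_bits` is ours, not from the post.

```python
import random

def uniform_from_bits(n_bits=53):
    # Build U = sum_{n>=1} b_n 2^{-n} from i.i.d. symmetric Bernoulli
    # bits b_n, truncated at n_bits terms (a double holds 53 significant
    # bits, so further terms would be lost anyway).
    u = 0.0
    for n in range(1, n_bits + 1):
        b = random.getrandbits(1)  # one fair bit
        u += b * 2.0 ** (-n)
    return u

random.seed(0)
samples = [uniform_from_bits() for _ in range(2000)]
print(sum(samples) / len(samples))  # empirical mean, close to 1/2
```

Truncation at 53 bits is exactly the precision loss one accepts with standard double-precision uniforms, which is the point of the next paragraph.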
In a computer, everything is finite, and we use a finite sequence of bits to produce a discrete uniform random variable up to a fixed precision, typically 32 or 64 bits, which corresponds to $2^{32}$ and $2^{64}$ different values respectively. More generally, simulating the discrete uniform law on $n$ different values amounts to coding the integer $n$ in bits, namely $\log_2(n)$ bits. In particular, simulating a uniform random permutation $\sigma_n$ of $\{1,\dots,n\}$ costs approximately $\log_2(n!)\sim n\log_2(n)$ bits. If we use the Fisher-Yates-Knuth shuffle, we simulate $\sigma_n$ via its representation as a product of transpositions $\sigma_n=(1,U_1)\cdots(n,U_n)$, where $U_1,\dots,U_n$ are independent and $U_k$ follows the discrete uniform law on $\{1,\dots,k\}$ for every $1\leq k\leq n$. The total cost is then $\sim\sum_{k=1}^{n}\log_2(k)\sim n\log_2(n)$ bits, as expected.
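A short sketch of this shuffle, with the bit cost tallied alongside, may make the accounting concrete. This is our own illustrative code (the function name and the cost bookkeeping are not from the post); each step $k$ applies the transposition $(k,U_k)$ and contributes $\log_2(k)$ bits.

```python
import math
import random

def fisher_yates(n):
    # Simulate a uniform permutation of {1,...,n} as the product of
    # transpositions (k, U_k), U_k uniform on {1,...,k}, and tally the
    # information cost sum_{k=1}^{n} log2(k) = log2(n!) in bits.
    perm = list(range(1, n + 1))
    bits = 0.0
    for k in range(2, n + 1):  # k = 1 swaps with itself, cost log2(1) = 0
        u = random.randint(1, k)  # U_k uniform on {1,...,k}
        perm[k - 1], perm[u - 1] = perm[u - 1], perm[k - 1]
        bits += math.log2(k)
    return perm, bits

random.seed(0)
perm, bits = fisher_yates(50)
print(bits, math.log2(math.factorial(50)))  # both equal log2(50!)
```

The tallied cost matches $\log_2(n!)$ exactly, which is the information-theoretic price of a uniform permutation.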
If we start from a fixed deterministic matrix $(x_{jk})_{1\leq j,k\leq n}$ and produce a random matrix
$$(x_{\sigma(j,k)})_{1\leq j,k\leq n}$$
by shuffling the $n^2$ entries with a uniform random permutation $\sigma$ of $\{1,\dots,n^2\}$, then we use approximately $2n^2\log_2(n)$ bits. On the other hand, producing an $n\times n$ random matrix with i.i.d. entries following the uniform distribution on $\{1,\dots,N\}$ needs only $n^2\log_2(N)$ bits ($N=2$ for Bernoulli entries).
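The two costs can be compared numerically. Below is a small sketch (the helper names are ours): the shuffle cost is the exact $\log_2((n^2)!)$, which is $\sim 2n^2\log_2(n)$ by Stirling, while the i.i.d. cost is $n^2\log_2(N)$.

```python
import math

def shuffle_bit_cost(n):
    # Bits to pick a uniform permutation of the n^2 entries:
    # log2((n^2)!) ~ 2 n^2 log2(n) by Stirling's formula.
    return math.log2(math.factorial(n * n))

def iid_bit_cost(n, N):
    # Bits for an n x n matrix of i.i.d. uniforms on N values.
    return n * n * math.log2(N)

n = 100
print(shuffle_bit_cost(n))              # exact log2((n^2)!)
print(2 * n * n * math.log2(n))         # Stirling equivalent
print(iid_bit_cost(n, 2))               # n^2 bits for Bernoulli entries
```

Note that the Stirling equivalent overshoots the exact value at moderate $n$ (the ratio is about 0.89 at $n=100$), since $\log_2(m!) = m\log_2(m) - m/\ln 2 + o(m)$.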