Omega

If you say “Let $X$ be a random variable, bla bla bla, and let $Y$ be another random variable independent of $X$…”, then you might be in trouble: $X$ is defined on some uncontrolled and implicit probability space $(\Omega,\mathcal{A},\mathbb{P})$, and this space is not necessarily large enough to allow the definition of $Y$. The definition of $Y$ may require an enlargement of the initial probability space, which implicitly and sneakily breaks the flow of the mathematical reasoning. Of course this is not a problem in general, since we are often interested in (joint) distributions rather than in probability spaces. But it may produce serious bugs sometimes. The funny thing is that this is done silently everywhere, and many are not aware of the danger.
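To see the issue on a minimal example, take $\Omega=\{0,1\}$ with $\mathcal{A}=2^\Omega$ and $\mathbb{P}$ uniform, and let $X(\omega)=\omega$, a Bernoulli random variable of parameter $1/2$. Then $\sigma(X)=\mathcal{A}$, so every random variable $Y$ on $\Omega$ is $\sigma(X)$-measurable; if moreover $Y$ is independent of $X$ then $Y$ is independent of itself, hence almost surely constant. To carry a non-degenerate $Y$ independent of $X$ one must enlarge the space, for instance to the product $$(\Omega\times\Omega',\mathcal{A}\otimes\mathcal{A}',\mathbb{P}\otimes\mathbb{P}'),$$ redefining $X$ on it via $X(\omega,\omega')=\omega$.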

Regarding probability space glitches, another common subtlety is the misuse of the Skorokhod representation theorem. This nice theorem states that if $(X_n)$ is a sequence of random variables taking values in, say, a separable metric space, and such that $X_n\to X$ in law, then there exists a probability space $\Omega^*$ carrying a sequence $(X^*_n)$ and a random variable $X^*$, such that $X^*_n$ has the law of $X_n$ for all $n$, $X^*$ has the law of $X$, and $X^*_n\to X^*$ almost surely. This theorem is dangerous because it does not control the law of the sequence $(X^*_n)$ itself, in other words the correlations between the $X^*_n$. Its proof plays with these correlations in order to produce almost sure convergence! In particular $(X^*_1,\ldots,X^*_n)$ and $(X_1,\ldots,X_n)$ do not have the same law in general when $n>1$. Moreover, even if the initial $X_n$ are independent, the $X^*_n$ are not independent in general. It is customary to say that if you prove something with the Skorokhod representation theorem, then it is likely that either your statement is wrong or you can find another proof.
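To see how the proof plays with correlations, consider the real-valued case: the standard construction takes $\Omega^*=(0,1)$ equipped with the Lebesgue measure, a single uniform random variable $U(\omega)=\omega$, and sets $$X^*_n=F_n^{-1}(U)\quad\text{and}\quad X^*=F^{-1}(U),$$ where $F_n$ and $F$ are the cumulative distribution functions of $X_n$ and $X$ and $F^{-1}$ denotes the generalized inverse. Each $X^*_n$ has the law of $X_n$ and $X^*_n\to X^*$ almost surely, but all the $X^*_n$ are deterministic functions of the same $U$. In particular, if the $X_n$ are independent and identically distributed, then the $X^*_n$ are all equal, which is about as far from independence as one can get.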

Note. The idea behind the proof of the Skorokhod representation theorem is that proximity of distributions implies the existence of a coupling close to the diagonal. For instance, one can easily check that if $\mu$ and $\nu$ are probability measures on, say, $\mathbb{Z}$, then $$\mathrm{d}_{\mathrm{TV}}(\mu,\nu)=\inf_{(X,Y)}\mathbb{P}(X\neq Y)$$ where the infimum runs over all pairs of random variables $(X,Y)$ with $X\sim\mu$ and $Y\sim\nu$.
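Here is a sketch of the maximal coupling achieving this infimum, with the convention $\mathrm{d}_{\mathrm{TV}}(\mu,\nu)=\sup_A(\mu(A)-\nu(A))$. Set $$p=\sum_{k\in\mathbb{Z}}\min(\mu_k,\nu_k)=1-\mathrm{d}_{\mathrm{TV}}(\mu,\nu).$$ If $0<p<1$, then with probability $p$ draw $X=Y$ from $\min(\mu,\nu)$ normalized by $p$, while with probability $1-p$ draw $X$ from $(\mu-\nu)_+$ and $Y$ from $(\nu-\mu)_+$, both normalized by $1-p$. One checks that $X\sim\mu$ and $Y\sim\nu$, and since $(\mu-\nu)_+$ and $(\nu-\mu)_+$ have disjoint supports, $\mathbb{P}(X\neq Y)=1-p=\mathrm{d}_{\mathrm{TV}}(\mu,\nu)$. Conversely, for every coupling and every $A$, $$\mu(A)-\nu(A)=\mathbb{P}(X\in A)-\mathbb{P}(Y\in A)\leq\mathbb{P}(X\in A,Y\not\in A)\leq\mathbb{P}(X\neq Y),$$ which gives the lower bound on the infimum.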

Note. The idea of writing this micro-post came from a discussion with a PhD student.
