# Month: December 2010

I very much like this picture from Wikipedia, which shows that correlation measures linear dependence.

Let ${\mathcal{P}}$ be the set of probability measures ${\mu}$ on ${\mathbb{R}}$ such that ${\mathbb{R}[X]\subset\mathrm{L}^1(\mu)}$. Let us consider the equivalence relation ${\sim}$ on ${\mathcal{P}}$ given by ${\mu_1\sim\mu_2}$ if and only if for every ${P\in\mathbb{R}[X]}$,

$\int\!P\,d\mu_1=\int\!P\,d\mu_2$

(i.e. ${\mu_1}$ and ${\mu_2}$ share the same sequence of moments). We say that ${\mu\in\mathcal{P}}$ is characterized by its moments when its equivalence class is a singleton. Every compactly supported probability measure on ${\mathbb{R}}$ belongs to ${\mathcal{P}}$ and is indeed characterized by its moments, thanks to the Weierstrass density theorem. Beyond compactly supported probability measures, one may use the Carleman criterion. The following classical lemma is useful when using the method of moments, for instance when proving the Wigner semicircle theorem.
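For the record, the Carleman criterion states that ${\mu\in\mathcal{P}}$ with moments ${m_k:=\int\!x^k\,d\mu(x)}$ is characterized by its moments as soon as

$\sum_{k\geq1}m_{2k}^{-1/(2k)}=\infty.$

This is satisfied for instance by the standard Gaussian distribution, for which ${m_{2k}=(2k-1)!!}$ and thus ${m_{2k}^{-1/(2k)}}$ is of order ${k^{-1/2}}$.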

Lemma 1 (From moments convergence to weak convergence) If ${\mu,\mu_1,\mu_2,\ldots}$ belong to ${\mathcal{P}}$ with

$\lim_{n\rightarrow\infty}\int\!P\,d\mu_n=\int\!P\,d\mu$

for every ${P\in\mathbb{R}[X]}$ and if ${\mu}$ is characterized by its moments then for every $f\in\mathcal{C}_b(\mathbb{R},\mathbb{R})$

$\lim_{n\rightarrow\infty}\int\!f\,d\mu_n=\int\!f\,d\mu.$

Proof: The convergence assumption implies that for every polynomial ${P}$

$C_P:=\sup_{n\geq1}\int\!P\,d\mu_n<\infty.$

Thus, by Markov’s inequality, for every real ${R>0}$,

$\mu_n([-R,R]^c)\leq \frac{C_{X^2}}{R^2}$

and therefore ${(\mu_n)_{n\geq1}}$ is tight. As a consequence, by Prohorov’s theorem, it suffices now to show that if ${(\mu_{n_k})_{k\geq1}}$ converges weakly to ${\nu}$ then ${\nu=\mu}$. Recall that the weak convergence here is also known as the narrow convergence and corresponds to the convergence for continuous bounded functions.

Let us show that ${\mu=\nu}$. Let us fix some ${P\in\mathbb{R}[X]}$ and a real number ${R>0}$. Let ${\varphi_R:\mathbb{R}\rightarrow[0,1]}$ be continuous with ${\mathbf{1}_{[-R,R]}\leq\varphi_R\leq\mathbf{1}_{[-R-1,R+1]}}$. We start from the decomposition

$\int\!P\,d\mu_{n_k}=\int\!\varphi_RP\,d\mu_{n_k}+\int\!(1-\varphi_R)P\,d\mu_{n_k}.$

Since ${(\mu_{n_k})_{k\geq1}}$ tends weakly to ${\nu}$ we have

$\lim_{k\rightarrow\infty}\int\!\varphi_RP\,d\mu_{n_k}=\int\!\varphi_RP\,d\nu.$

Additionally, by the Cauchy-Schwarz and Markov inequalities,

$\left|\int\!(1-\varphi_R)P\,d\mu_{n_k}\right|^2 \leq \mu_{n_k}([-R,R]^c)\int\!P^2\,d\mu_{n_k} \leq \frac{C_{X^2}C_{P^2}}{R^2}.$

On the other hand, we know that

$\lim_{k\rightarrow\infty}\int\!P\,d\mu_{n_k}=\int\!P\,d\mu.$

Therefore, we obtain

$\lim_{R\rightarrow\infty}\int\!\varphi_RP\,d\nu=\int\!P\,d\mu.$

Using this for ${P^2}$ we obtain by monotone convergence that ${P\in\mathrm{L}^2(\nu)\subset\mathrm{L}^1(\nu)}$ and then by dominated convergence that

$\int\!P\,d\nu=\int\!P\,d\mu.$

Since ${P}$ is arbitrary and ${\mu}$ is characterized by its moments, it follows that ${\mu=\nu}$. $\Box$
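As a toy numerical sketch (not part of the proof, with the grid measure chosen purely for illustration), take ${\mu_n}$ uniform on ${\{0,1/n,\ldots,(n-1)/n\}}$: its moments converge to those of the uniform distribution on ${[0,1]}$, which is compactly supported and hence characterized by its moments, and the integrals of bounded continuous test functions converge as the lemma predicts.

```python
import math

def moment_grid(n, k):
    # k-th moment of mu_n, the uniform probability measure on {0, 1/n, ..., (n-1)/n}
    return sum((j / n) ** k for j in range(n)) / n

n = 10_000

# Moment convergence: the k-th moment of the uniform law on [0, 1] is 1/(k+1).
for k in range(1, 6):
    assert abs(moment_grid(n, k) - 1 / (k + 1)) < 1e-3

# Weak convergence against a bounded continuous test function f = cos:
# the integral of cos over [0, 1] is sin(1).
riemann = sum(math.cos(j / n) for j in range(n)) / n
exact = math.sin(1.0)
assert abs(riemann - exact) < 1e-3
print("moments and test integrals both converge")
```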

If $X$ and $Y$ are independent real random variables with densities $f$ and $g$, then $X+Y$ has density $f*g$. This density is bounded as soon as $f$ or $g$ is bounded, since $\left\Vert f*g\right\Vert_\infty\leq \min(\left\Vert f\right\Vert_\infty,\left\Vert g\right\Vert_\infty)$. One may ask whether $XY$ similarly has a bounded density. The answer is unfortunately negative in general. To see this, note first that when $X$ and $Y$ are nonnegative, $XY$ has density $t\in\mathbb{R}_+\mapsto \int_0^\infty\!\frac{f(x)}{x}g\left(\frac{t}{x}\right) dx.$ Now if for instance $X$ and $Y$ are uniform on $[0,1]$, then $XY$ has density $t\mapsto -\log(t)\mathbf{1}_{[0,1]}(t)$, which is unbounded…

If $X$ is nonnegative with density $f$, then $X^2$ has density $t\in\mathbb{R}_+\mapsto \frac{f(\sqrt{t})}{2\sqrt{t}}.$ For instance, when $X$ is uniform, $X^2$ has (unbounded) density $t\mapsto \frac{1}{2\sqrt{t}}\mathbf{1}_{[0,1]}(t).$ The density of $X^2$ is bounded if $f$ is bounded and $f(t)=O(t)$ as $t\to0$ (which imposes $f(0)=0$).
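The unbounded density of the product of two independent uniforms can be checked by simulation (a quick Monte Carlo sketch, with sample size and test points chosen arbitrarily): integrating the density $-\log t$ gives the CDF $t\mapsto t(1-\log t)$ on $(0,1]$, which the empirical CDF should match.

```python
import math
import random

random.seed(0)

# Monte Carlo check: if X, Y are independent uniform on [0, 1], then XY has
# density -log(t) on (0, 1], hence CDF F(t) = t * (1 - log t).
N = 200_000
samples = [random.random() * random.random() for _ in range(N)]

for t in (0.1, 0.3, 0.5, 0.9):
    empirical = sum(s <= t for s in samples) / N
    exact = t * (1.0 - math.log(t))
    assert abs(empirical - exact) < 0.01
print("empirical CDF of XY matches t * (1 - log t)")
```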

We have already mentioned in a previous post an amusing property of the exponential distribution. Here is another one: if $X$ and $Y$ are two independent exponential random variables with means $1/\lambda$ and $1/\mu$ respectively, then $X-Y$ follows the double exponential distribution on $\mathbb{R}$ with density

$x\in\mathbb{R}\mapsto \frac{\lambda\mu}{\lambda+\mu}\left(e^{\mu x}\mathbf{1}_{\mathbb{R}_-}(x)+e^{-\lambda x}\mathbf{1}_{\mathbb{R}_+}(x)\right).$

In other words, we have the mixture $\mathcal{L}(X-Y)=\mathcal{L}(X)*\mathcal{L}(-Y)=\frac{\mu}{\lambda+\mu}\mathcal{L}(X)+\frac{\lambda}{\lambda+\mu}\mathcal{L}(-Y).$

In particular, when $\lambda=\mu$ we get the symmetric double exponential (Laplace distribution) with density $x\mapsto \frac{\lambda}{2} e^{-\lambda|x|}$.  Another way to state the property is to say that the double exponential is the image of the product distribution $\mathcal{E}(\lambda)\otimes\mathcal{E}(\mu)$ by the linear map $(x,y)\mapsto x-y$. Note that the density of $X+Y$ is $x\mapsto \lambda\mu\frac{e^{-\mu x}-e^{-\lambda x}}{\lambda-\mu}\mathbf{1}_{\mathbb{R}_+}(x)$

(when $\lambda=\mu$ we recover by continuity the Gamma density $x\mapsto \lambda^2x e^{-\lambda x} \mathbf{1}_{\mathbb{R}_+}(x)$).
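The mixture identity above is easy to probe by simulation (a sketch with arbitrarily chosen rates $\lambda=2$, $\mu=3$): the weight of the $\mathcal{L}(X)$ component, which is supported on $\mathbb{R}_+$, gives $\mathbb{P}(X-Y>0)=\mu/(\lambda+\mu)$, and of course $\mathbb{E}[X-Y]=1/\lambda-1/\mu$.

```python
import random

random.seed(0)

# X ~ Exp(lam), Y ~ Exp(mu) independent; rates chosen arbitrarily.
lam, mu = 2.0, 3.0
N = 200_000
diffs = [random.expovariate(lam) - random.expovariate(mu) for _ in range(N)]

# The positive part of the double exponential mixture has weight mu / (lam + mu).
p_pos = sum(d > 0 for d in diffs) / N
assert abs(p_pos - mu / (lam + mu)) < 0.01

# Sanity check on the mean: E[X - Y] = 1/lam - 1/mu.
mean = sum(diffs) / N
assert abs(mean - (1 / lam - 1 / mu)) < 0.01
print("X - Y behaves like the double exponential mixture")
```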
