Press "Enter" to skip to content


Concentration without moments

Wassily Hoeffding (1914 - 1991)

This post presents an inequality for self-normalized sums without moment assumptions.

Symmetric laws. Recall that the law of a real random variable \( {X} \) is symmetric when \( {X} \) and \( {-X} \) have the same distribution. In this case, provided that \( {\mathbb{P}(X=0)=0} \), the random variables \( {\varepsilon=\mathrm{sign}(X)} \) and \( {|X|} \) are independent and \( {\varepsilon} \) follows the symmetric Rademacher distribution: \( {\mathbb{P}(\varepsilon=\pm1)=1/2} \).

Concentration. Let \( {X_1,\ldots,X_n} \) be independent real random variables with symmetric law and without atom at \( {0} \). Then for any real \( {r>0} \),

\[ \mathbb{P}\left(\frac{X_1+\cdots+X_n}{\sqrt{X_1^2+\cdots+X_n^2}}\geq r\right) \leq\mathrm{e}^{-\frac{r^2}{2}}. \]

Note that this is available without any moment assumption on the random variables.
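As a quick numerical illustration, not part of the original argument, here is a minimal Monte Carlo sketch in Python (assuming NumPy is available) that estimates the left-hand side for iid standard Cauchy variables, which are symmetric, without atom at \( {0} \), and without any finite moment, and compares it with the bound \( {\mathrm{e}^{-r^2/2}} \).

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 50, 200_000  # sample size and number of Monte Carlo replicas

# iid standard Cauchy variables: symmetric, no atom at 0, and no finite moments
X = rng.standard_cauchy(size=(m, n))
T = X.sum(axis=1) / np.sqrt((X ** 2).sum(axis=1))  # self-normalized sums

for r in (1.0, 1.5, 2.0, 2.5):
    empirical = (T >= r).mean()
    bound = np.exp(-r ** 2 / 2)
    print(f"r = {r:.1f}   empirical tail = {empirical:.4f}   bound exp(-r^2/2) = {bound:.4f}")
```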

Proof. Thanks to the independence and symmetry assumptions, the random variables \( {\varepsilon_1=\mathrm{sign}(X_1),\ldots,\varepsilon_n=\mathrm{sign}(X_n)} \) are iid, follow the symmetric Rademacher distribution, and are independent of \( {|X_1|,\ldots,|X_n|} \). Now by conditioning we get

\[ \mathbb{P}\left(\frac{X_1+\cdots+X_n}{\sqrt{X_1^2+\cdots+X_n^2}}\geq r\right) =\mathbb{E}(\varphi_r(|X_1|,\ldots,|X_n|)) \]

where \( {\varphi_r(c_1,\ldots,c_n)=\mathbb{P}((\varepsilon_1c_1+\cdots+\varepsilon_nc_n)/\sqrt{c_1^2+\cdots+c_n^2}\geq r)} \). We can assume that \( {c_i>0} \) since \( {\mathbb{P}(X_i=0)=0} \). It remains to use the Hoeffding inequality, which states that if \( {Z_1,\ldots,Z_n} \) are independent centered and bounded real random variables then for any real \( {r>0} \),

\[ \mathbb{P}\left(Z_1+\cdots+Z_n\geq r\right) \leq\exp\left(-\frac{2r^2}{\mathrm{osc}(Z_1)^2+\cdots+\mathrm{osc}(Z_n)^2}\right) \]

where \( {\mathrm{osc}(Z)=\max(Z)-\min(Z)} \). Here we use it with, for any \( {i=1,\ldots,n} \),

\[ Z_i=\frac{c_i}{\sqrt{c_1^2+\cdots+c_n^2}}\varepsilon_i \quad\text{for which}\quad \mathrm{osc}(Z_i)^2=\frac{4c_i^2}{c_1^2+\cdots+c_n^2}. \]

Indeed, here \( {\mathrm{osc}(Z_1)^2+\cdots+\mathrm{osc}(Z_n)^2=4} \), and this gives \( { \varphi_r(c_1,\ldots,c_n) =\mathbb{P}\left(Z_1+\cdots+Z_n\geq r\right) \leq\mathrm{e}^{-\frac{2r^2}{4}} =\mathrm{e}^{-\frac{r^2}{2}}} \).
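In particular, when all the \( {c_i} \) are equal, this reduces to the classical Gaussian tail bound for the simple random walk:

\[ \mathbb{P}\left(\frac{\varepsilon_1+\cdots+\varepsilon_n}{\sqrt{n}}\geq r\right) \leq\mathrm{e}^{-\frac{r^2}{2}}. \]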

Probabilistic interpretation. When \( {X_1,\ldots,X_n} \) are moreover iid and in \( {L^2} \), their symmetry forces their mean to be zero, and their variance is say \( {\sigma^2>0} \). The law of large numbers gives \( {\sqrt{X_1^2+\cdots+X_n^2}=\sqrt{n}(\sigma+o_{n\rightarrow\infty}(1))} \) almost surely. Therefore by the central limit theorem and Slutsky’s lemma we get \( {(X_1+\cdots+X_n)/\sqrt{X_1^2+\cdots+X_n^2}\overset{\text{law}}{\longrightarrow}\mathcal{N}(0,1)} \) as \( {n\rightarrow\infty} \).
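To visualize this Gaussian limit numerically, here is a small Monte Carlo sketch (again assuming NumPy), using iid symmetric Laplace variables, which are in \( {L^2} \), and comparing the empirical tail of the self-normalized sum with the standard Gaussian tail.

```python
import numpy as np
from math import erfc, sqrt

rng = np.random.default_rng(1)
n, m = 200, 100_000  # sample size and number of Monte Carlo replicas

# iid symmetric Laplace variables: symmetric, no atom at 0, finite variance
X = rng.laplace(size=(m, n))
T = X.sum(axis=1) / np.sqrt((X ** 2).sum(axis=1))  # self-normalized sums

for r in (0.5, 1.0, 2.0):
    gaussian_tail = 0.5 * erfc(r / sqrt(2))  # P(N(0,1) >= r)
    print(f"r = {r:.1f}   empirical tail = {(T >= r).mean():.4f}   N(0,1) tail = {gaussian_tail:.4f}")
```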

Geometric interpretation. If \( {X_1,\ldots,X_n} \) are iid standard Gaussian, then

\[ \frac{X_1+\cdots+X_n}{\sqrt{X_1^2+\cdots+X_n^2}} =\langle U_n,\theta_n\rangle \]

where

\[ U_n=\sqrt{n}\frac{(X_1,\ldots,X_n)}{\sqrt{X_1^2+\cdots+X_n^2}} \quad\text{and}\quad \theta_n=\frac{(1,\ldots,1)}{\sqrt{n}}. \]

The random vector \( {U_n} \) is uniformly distributed on the sphere of \( {\mathbb{R}^n} \) of radius \( {\sqrt{n}} \), while the vector \( {\theta_n} \) belongs to the unit sphere. Note that \( {\langle U_n,\theta_n\rangle} \) has the same law as the sum of the coordinates of a row or of a column of a uniform random orthogonal matrix.

Relation to Studentization. The result above can be related to the Studentized version of the empirical mean. Indeed, if one defines the empirical mean and the empirical variance

\[ \overline{X}_n=\frac{X_1+\cdots+X_n}{n} \quad\text{and}\quad \widehat\sigma^2_n=\frac{(X_1-\overline{X}_n)^2+\cdots+(X_n-\overline{X}_n)^2}{n-1} \]

then using

\[ (n-1)\widehat\sigma_n^2 =X_1^2+\cdots+X_n^2-\frac{(X_1+\cdots+X_n)^2}{n} \]

we get, for any \( {r\geq0} \), after some algebra,

\[ \left\{\sqrt{n}\frac{\overline{X}_n}{\widehat\sigma_n}\geq r\right\} =\left\{\frac{X_1+\cdots+X_n}{\sqrt{X_1^2+\cdots+X_n^2}} \geq r\sqrt{\frac{n}{n-1+r^2}}\right\}. \]
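Here is the algebra: writing \( {S_n=X_1+\cdots+X_n} \), \( {V_n^2=X_1^2+\cdots+X_n^2} \), and \( {T_n=S_n/V_n} \), the identity above for \( {\widehat\sigma_n^2} \) gives

\[ \sqrt{n}\frac{\overline{X}_n}{\widehat\sigma_n} =\frac{S_n/\sqrt{n}}{\sqrt{(V_n^2-S_n^2/n)/(n-1)}} =T_n\sqrt{\frac{n-1}{n-T_n^2}}, \]

and the map \( {t\mapsto t\sqrt{(n-1)/(n-t^2)}} \) is increasing on \( {[0,\sqrt{n})} \) and sends \( {r\sqrt{n/(n-1+r^2)}} \) to \( {r} \).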

It follows then from the concentration inequality above that if \( {X_1,\ldots,X_n} \) are independent, with symmetric law without atom at \( {0} \), then for any \( {r\geq0} \),

\[ \mathbb{P}\left(\sqrt{n}\frac{\overline{X}_n}{\widehat\sigma_n}\geq r\right) \leq\exp\left(-\frac{nr^2}{2(n-1+r^2)}\right). \]

If \( {X_1,\ldots,X_n} \) are iid standard Gaussian then \( {\sqrt{n}\,\overline{X}_n\sim\mathcal{N}(0,1)} \) and \( {(n-1)\widehat{\sigma}^2_n\sim\chi^2(n-1)} \) are independent and the ratio \( {\sqrt{n}\overline{X}_n/\widehat\sigma_n} \) follows the Student \( {t(n-1)} \) law, of density proportional to \( {x\mapsto 1/(1+x^2/(n-1))^{n/2}} \), which is in particular heavy tailed.

Extension and further reading. On this subject, one may take a look for instance at the article by Sergey Bobkov and Friedrich Götze entitled Concentration inequalities and limit theorems for randomized sums (2007). In another direction, it is also possible to show that a Cramér large deviation principle for self-normalized sums is available without any moment assumption, and we refer for instance to the probability survey by Qi-Man Shao and Qiying Wang entitled Self-normalized limit theorems: a survey (2013).

Many thanks to the mysterious L.


Around the circular law: erratum

O’Rourke Arms

Sean O’Rourke pointed out on December 30, 2017 that a notation should be corrected in the statement of Lemma A.1 in the probability survey Around the circular law (2012) that I wrote years ago in collaboration with Charles Bordenave.

Indeed the definition of \( {\sigma^2} \) should be corrected to

\[ \sigma^2 :=\min_{1\leq i,j\leq n}\mathrm{Var}(X_{ij}\mid|X_{ij}|\leq a)>0. \]

It was erroneously written

\[ \sigma^2 :=\min_{1\leq i,j\leq n}\mathrm{Var}(X_{ij}\mathbf{1}_{|X_{ij}|\leq a})>0. \]

Let us take this occasion for a back-to-basics about conditional variance and the variance of a truncation. Let \( {X} \) be a real random variable on \( {(\Omega,\mathcal{F},\mathbb{P})} \) and \( {A\in\mathcal{F}} \) be an event. First, the real number \( {\mathbb{E}(X\mid A)=\mathbb{E}(X\mid\mathbf{1}_A=1)} \) is not the random variable \( {\mathbb{E}(X\mid\mathbf{1}_A)} \). We have

\[ \mathbb{E}(X\mid\mathbf{1}_A) =\underbrace{\frac{\mathbb{E}(X\mathbf{1}_A)}{\mathbb{P}(A)}}_{\mathbb{E}(X\mid A)}\mathbf{1}_A +\underbrace{\frac{\mathbb{E}(X\mathbf{1}_{A^c})}{\mathbb{P}(A^c)}}_{\mathbb{E}(X\mid A^c)}\mathbf{1}_{A^c}. \]

Note that this formula still makes sense when \( {\mathbb{P}(A)=0} \) or \( {\mathbb{P}(A)=1} \), since the ill-defined ratio is then multiplied by an indicator that vanishes almost surely.

The quantity \( {\mathbb{E}(X\mid A)} \) makes sense only if \( {\mathbb{P}(A)>0} \), and in this case, the conditional variance of \( {X} \) given the event \( {A} \) is the real number given by

\[ \begin{array}{rcl} \mathrm{Var}(X\mid A) &=&\mathbb{E}((X-\mathbb{E}(X\mid A))^2\mid A)\\ &=&\mathbb{E}(X^2\mid A)-\mathbb{E}(X\mid A)^2\\ &=&\frac{\mathbb{E}(X^2\mathbf{1}_A)}{\mathbb{P}(A)} -\frac{\mathbb{E}(X\mathbf{1}_A)^2}{\mathbb{P}(A)^2}\\ &=& \frac{\mathbb{E}(X^2\mathbf{1}_A)\mathbb{P}(A)-\mathbb{E}(X\mathbf{1}_A)^2}{\mathbb{P}(A)^2}\\ &=&\mathbb{E}_A(X^2)-\mathbb{E}_A(X)^2=:\mathrm{Var}_A(X) \end{array} \]

where \( {\mathbb{E}_A} \) is the expectation with respect to the probability measure with density \( {\mathbf{1}_A/\mathbb{P}(A)} \) with respect to \( {\mathbb{P}} \). In particular, by the Cauchy–Schwarz inequality,

\[ \mathrm{Var}(X\mid A) \geq 0 \]

with equality if and only if \( {X\mathbf{1}_A} \) and \( {\mathbf{1}_A} \) are collinear, in other words if and only if \( {X} \) is almost surely constant on \( {A} \).

Of course \( {\mathrm{Var}(X\mid A)=0} \) if \( {X} \) is constant. However \( {\mathrm{Var}(X\mid A)} \) may vanish for a non-constant \( {X} \). Indeed if \( {a>0} \), \( {A=\{|X|\leq a\}} \), and \( {X\sim\frac{1}{2}\delta_{a/2}+\frac{1}{2}\delta_{2a}} \), then \( {X} \) given \( {A} \) is constant and equal to \( {a/2} \). In this example, since \( {X\mathbf{1}_A} \) is not constant, this shows also that one cannot lower bound \( {\mathrm{Var}(X\mid A)} \) by the variance of the truncation

\[ \mathrm{Var}(X\mathbf{1}_A)=\mathbb{E}(X^2\mathbf{1}_A)-\mathbb{E}(X\mathbf{1}_A)^2. \]
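Explicitly, in this example \( {X\mathbf{1}_A} \) takes the values \( {a/2} \) and \( {0} \) each with probability \( {1/2} \), so that

\[ \mathrm{Var}(X\mathbf{1}_A) =\frac{a^2}{8}-\frac{a^2}{16} =\frac{a^2}{16}>0 \quad\text{while}\quad \mathrm{Var}(X\mid A)=0. \]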

Another notable correction. Mylène Maïda pointed out to me on February 27, 2018 that at the bottom of page 14, just before the statement

\[ \lim_{n\rightarrow\infty}\sup_{z\in C}|n\varphi_{n,1}(\sqrt{n}z)-\pi^{-1}\mathbf{1}_{[0,1]}(|z|)|=0 \]

the compact set \( {C} \) must be taken in \( {\{z\in\mathbb{C}:|z|\neq1\}} \) and not in the whole complex plane \( {\mathbb{C}} \). Indeed, when \( {|z|=1} \), \( {n\varphi_{n,1}(\sqrt{n}z)} \) tends as \( {n\rightarrow\infty} \) to \( {1/2} \), and not to \( {\pi^{-1}} \), see for instance this former post for a one-formula proof based on the central limit theorem for Poisson random variables. Anyway this is really not surprising since a sequence of continuous functions cannot converge uniformly to a discontinuous function.
