
Author: Djalil Chafaï

Back to basics - Bits of fluctuations

[Photo: Evgeny Evgenievich Slutsky (1880 – 1948).]

This tiny back to basics post is devoted to a couple of bits of Probability and Statistics.

The central limit theorem cannot hold in probability. Let \((X_n)_{n\geq1}\) be iid real random variables with zero mean and unit variance. The central limit theorem (CLT) states that as \(n\to\infty\),

\[ Z_n=\frac{X_1+\cdots+X_n}{\sqrt{n}}\xrightarrow{\text{law}}\mathcal{N}(0,1). \]

A question frequently asked by good students is whether one can replace the convergence in law by the (stronger) convergence in probability. The answer is negative, and in particular the convergence cannot hold almost surely or in \(L^p\). Let us examine why. Recall that convergence in probability is stable under linear combinations and under subsequence extraction.

We proceed by contradiction. Suppose that \(Z_n\to Z\) in probability. Then necessarily \(Z\sim\mathcal{N}(0,1)\). Now, on the one hand, \(Z_{2n}-Z_n\to0\) in probability, while

\[ Z_{2n}-Z_n=\frac{1-\sqrt{2}}{\sqrt{2}}\,Z_n+\frac{X_{n+1}+\cdots+X_{2n}}{\sqrt{2n}}=\frac{1-\sqrt{2}}{\sqrt{2}}\,Z_n+\frac{1}{\sqrt{2}}\,Z_n'. \]

But \(Z_n':=(X_{n+1}+\cdots+X_{2n})/\sqrt{n}\) is an independent copy of \(Z_n\). Thus the CLT used twice gives \(Z_{2n}-Z_n\xrightarrow{\text{law}}\mathcal{N}(0,\sigma^2)\) with \(\sigma^2=(1-\sqrt{2})^2/2+1/2=2-\sqrt{2}\neq0\), hence the contradiction.
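This can be checked numerically. The following sketch in plain Python (the sample size and number of trials are arbitrary illustrative choices) estimates the variance of \(Z_{2n}-Z_n\): if \(Z_n\) converged in probability, this variance would shrink to zero, whereas it stabilizes near \(2-\sqrt{2}\approx0.586\).

```python
import math
import random

random.seed(1)

def z(xs, n):
    """Normalized partial sum Z_n = (X_1 + ... + X_n) / sqrt(n)."""
    return sum(xs[:n]) / math.sqrt(n)

n, trials = 500, 4000  # arbitrary illustration sizes
diffs = []
for _ in range(trials):
    # iid Rademacher variables: zero mean, unit variance
    xs = [random.choice((-1.0, 1.0)) for _ in range(2 * n)]
    diffs.append(z(xs, 2 * n) - z(xs, n))

# Empirical variance of Z_{2n} - Z_n: stays near 2 - sqrt(2), not 0
var = sum(d * d for d in diffs) / trials
print(var)  # typically close to 2 - sqrt(2) ≈ 0.586
```

Note that the value \(2-\sqrt{2}\) is exact for every \(n\) here, since the two terms in the decomposition above are independent with explicit variances.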

Alternative proof. Set \(S_n=X_1+\cdots+X_n\), and observe that

\[ \frac{S_{2n}-S_n}{\sqrt{n}}=\sqrt{2}\,Z_{2n}-Z_n. \]

Now, if the CLT held in probability, the right hand side would converge in probability to \(\sqrt{2}\,Z-Z\), which follows the law \(\mathcal{N}(0,(\sqrt{2}-1)^2)\). On the other hand, since \(S_{2n}-S_n\) has the law of \(S_n\), by the CLT, the left hand side converges in law towards \(Z\sim\mathcal{N}(0,1)\), hence the contradiction, since \((\sqrt{2}-1)^2\neq1\). This "reversed" proof was kindly suggested by Michel Ledoux.

Yet another proof. If \(Z_n\to Z\) in probability then \(Z_n\to Z\) in \(L^1\) since \((Z_n)_n\) is uniformly integrable (it is bounded in \(L^2\)), but this convergence in \(L^1\) is impossible since \((Z_n)_n\) does not satisfy the Cauchy criterion (consider \(Z_{2n}-Z_n\) as above!).

Intermezzo: Slutsky lemma. The Slutsky lemma asserts that if

\[ X_n\xrightarrow{\text{law}}X \quad\text{and}\quad Y_n\xrightarrow{\text{law}}c \]

with \(c\) constant, then

\[ (X_n,Y_n)\xrightarrow{\text{law}}(X,c), \]

and in particular, \(f(X_n,Y_n)\xrightarrow{\text{law}}f(X,c)\) for every continuous \(f\).

Let us prove it. Since \(Y_n\xrightarrow{\text{law}}c\) and \(c\) is constant, we have \(Y_n\to c\) in probability, and since for all \(t\in\mathbb{R}\) the function \(y\mapsto e^{ity}\) is uniformly continuous on \(\mathbb{R}\), we have that for all \(s,t\in\mathbb{R}\) and all \(\varepsilon>0\), there exists \(\eta>0\) such that for large enough \(n\),

\[ |\mathbb{E}(e^{isX_n+itY_n})-\mathbb{E}(e^{isX_n+itc})| \leq \mathbb{E}\bigl(|e^{itY_n}-e^{itc}|\mathbf{1}_{|Y_n-c|\leq\eta}\bigr)+2\,\mathbb{P}(|Y_n-c|>\eta) \leq \varepsilon+2\varepsilon. \]

Alternatively we can use the Lipschitz property instead of the uniform continuity:

\[ |\mathbb{E}(e^{isX_n+itY_n})-\mathbb{E}(e^{isX_n+itc})| \leq \mathbb{E}\bigl(|e^{itY_n}-e^{itc}|\mathbf{1}_{|Y_n-c|\leq\eta}\bigr)+2\,\mathbb{P}(|Y_n-c|>\eta) \leq |t|\eta+2\varepsilon. \]

On the other hand, since \(X_n\xrightarrow{\text{law}}X\), we have, for all \(s,t\in\mathbb{R}\), as \(n\to\infty\),

\[ \mathbb{E}(e^{isX_n+itc})=e^{itc}\,\mathbb{E}(e^{isX_n})\to e^{itc}\,\mathbb{E}(e^{isX})=\mathbb{E}(e^{isX+itc}). \]
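A classical use of the Slutsky lemma in Statistics is the Student t-statistic: the CLT handles the numerator, the law of large numbers sends the sample standard deviation to the true one, and Slutsky combines them into asymptotic normality. A simulation sketch (the Exp(1) distribution and the sample sizes are arbitrary illustrative choices):

```python
import math
import random

random.seed(2)

def t_stat(xs):
    """sqrt(n) * (sample mean - 1) / sample std, for Exp(1) data (mean 1, variance 1)."""
    n = len(xs)
    m = sum(xs) / n
    s = math.sqrt(sum((x - m) ** 2 for x in xs) / (n - 1))
    return math.sqrt(n) * (m - 1.0) / s

n, trials = 200, 3000  # arbitrary illustration sizes
ts = [t_stat([random.expovariate(1.0) for _ in range(n)]) for _ in range(trials)]

# By the Slutsky lemma, ts should look like N(0,1):
# empirical mean close to 0, empirical variance close to 1
mean = sum(ts) / trials
var = sum((t - mean) ** 2 for t in ts) / trials
print(mean, var)
```

Here \(X_n=\sqrt{n}(\bar X_n-1)\xrightarrow{\text{law}}\mathcal{N}(0,1)\) and \(Y_n=S_n\to1\) in probability, so the ratio converges in law to \(\mathcal{N}(0,1)\).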

The delta-method. Bizarrely, this basic result, very useful in Statistics, appears to be unknown to many young probabilists. Suppose that, as \(n\to\infty\),

\[ a_n(Z_n-b_n)\xrightarrow{\text{law}}L, \]

where \((Z_n)_{n\geq1}\) is a sequence of real random variables, \(L\) a probability distribution, and \((a_n)_{n\geq1}\) and \((b_n)_{n\geq1}\) deterministic sequences such that \(a_n\to\infty\) and \(b_n\to b\). Then for any \(C^1\) function \(f:\mathbb{R}\to\mathbb{R}\) such that \(f'(b)\neq0\), we have

\[ \frac{a_n}{f'(b)}\bigl(f(Z_n)-f(b_n)\bigr)\xrightarrow{\text{law}}L. \]

The typical usage in Statistics is for the fluctuations of estimators, say for \(a_n(Z_n-b_n)=\sqrt{n}(\hat\theta_n-\theta)\). Note that the rate in \(n\) and the fluctuation law are not modified! Let us give a proof. By a Taylor formula, or here the mean value theorem,

\[ f(Z_n)-f(b_n)=f'(W_n)(Z_n-b_n), \]

where \(W_n\) is a random variable lying between \(b_n\) and \(Z_n\). Since \(a_n\to\infty\), the Slutsky lemma gives \(Z_n-b_n\to0\) in law, and thus in probability since the limit is deterministic. As a consequence, \(W_n-b_n\to0\) in probability, and thus \(W_n\to b\) in probability. The continuity of \(f'\) at the point \(b\) provides \(f'(W_n)\to f'(b)\) in probability, hence \(f'(W_n)/f'(b)\to1\) in probability, and, again by the Slutsky lemma,

\[ \frac{a_n}{f'(b)}\bigl(f(Z_n)-f(b_n)\bigr)=\frac{f'(W_n)}{f'(b)}\,a_n(Z_n-b_n)\xrightarrow{\text{law}}L. \]
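A simulation sketch of the delta-method (all choices here are illustrative: Exp(1) samples, \(f(x)=x^2\)). Take \(Z_n=\bar X_n\), \(b_n=b=1\), \(a_n=\sqrt{n}\) and \(L=\mathcal{N}(0,1)\); since \(f'(1)=2\), the quantity \(\sqrt{n}(\bar X_n^2-1)/2\) should be approximately standard Gaussian.

```python
import math
import random

random.seed(3)

n, trials = 400, 3000  # arbitrary illustration sizes
vals = []
for _ in range(trials):
    xs = [random.expovariate(1.0) for _ in range(n)]  # Exp(1): mean 1, variance 1
    m = sum(xs) / n                                   # Z_n, with b_n = b = 1
    # Delta-method normalization with f(x) = x^2, f'(1) = 2, a_n = sqrt(n)
    vals.append(math.sqrt(n) * (m * m - 1.0) / 2.0)

# Should look like the limit law L = N(0,1)
mean = sum(vals) / trials
var = sum((v - mean) ** 2 for v in vals) / trials
print(mean, var)
```

As stated above, squaring the estimator changes neither the rate \(\sqrt{n}\) nor the Gaussian fluctuation law, only the scaling constant \(f'(b)\).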

If \(f'(b)=0\) then one has to use a higher order Taylor formula, and the rate and the fluctuation law are deformed by a power. Namely, suppose that \(f^{(1)}(b)=\cdots=f^{(r-1)}(b)=0\) while \(f^{(r)}(b)\neq0\). Then, denoting by \(L_r\) the push forward of \(L\) by \(x\mapsto x^r\), we get

\[ \frac{a_n^r\,r!}{f^{(r)}(b)}\bigl(f(Z_n)-f(b_n)\bigr)\xrightarrow{\text{law}}L_r. \]
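The degenerate case can also be checked by simulation (illustrative choices again: \(\mathcal{N}(0,1)\) samples, \(f(x)=x^2\), \(b=0\)). Here \(f'(0)=0\), \(f''(0)=2\), \(r=2\), \(a_n=\sqrt{n}\), so \(a_n^2\,2!/f''(0)\cdot(\bar X_n^2-0)=n\bar X_n^2\) should follow approximately \(L_2\), the law of the square of a standard Gaussian, i.e. a chi-squared law with one degree of freedom, of mean 1.

```python
import math
import random

random.seed(4)

n, trials = 300, 4000  # arbitrary illustration sizes
vals = []
for _ in range(trials):
    xs = [random.gauss(0.0, 1.0) for _ in range(n)]  # N(0,1): b_n = b = 0
    m = sum(xs) / n                                  # Z_n, the sample mean
    # Second-order delta method: f(x) = x^2, r = 2, f''(0) = 2, a_n = sqrt(n)
    vals.append(n * m * m)  # a_n^2 * r! / f''(0) * (f(Z_n) - f(0))

mean = sum(vals) / trials
print(mean)  # approximately 1, the mean of L_2 (chi-squared with 1 degree of freedom)
```

In this Gaussian example \(n\bar X_n^2\) is in fact exactly chi-squared distributed for every \(n\), so only the sampling error of the simulation is at play.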

The delta-method can of course be generalized to sequences of random vectors, etc.
