{"id":14367,"date":"2020-11-25T15:37:08","date_gmt":"2020-11-25T14:37:08","guid":{"rendered":"http:\/\/djalil.chafai.net\/blog\/?p=14367"},"modified":"2020-12-03T15:38:55","modified_gmt":"2020-12-03T14:38:55","slug":"sub-gaussian-tail-bound-for-local-martingales","status":"publish","type":"post","link":"https:\/\/djalil.chafai.net\/blog\/2020\/11\/25\/sub-gaussian-tail-bound-for-local-martingales\/","title":{"rendered":"Sub-Gaussian tail bound for local martingales"},"content":{"rendered":"<figure id=\"attachment_14376\" aria-describedby=\"caption-attachment-14376\" style=\"width: 288px\" class=\"wp-caption aligncenter\"><a href=\"http:\/\/djalil.chafai.net\/blog\/wp-content\/uploads\/2020\/11\/HermanChernoff.jpeg\"><img loading=\"lazy\" class=\"wp-image-14376 size-full\" src=\"http:\/\/djalil.chafai.net\/blog\/wp-content\/uploads\/2020\/11\/HermanChernoff.jpeg\" alt=\"Herman Chernoff\" width=\"288\" height=\"326\" srcset=\"https:\/\/djalil.chafai.net\/blog\/wp-content\/uploads\/2020\/11\/HermanChernoff.jpeg 288w, https:\/\/djalil.chafai.net\/blog\/wp-content\/uploads\/2020\/11\/HermanChernoff-265x300.jpeg 265w\" sizes=\"(max-width: 288px) 100vw, 288px\" \/><\/a><figcaption id=\"caption-attachment-14376\" class=\"wp-caption-text\"><a href=\"https:\/\/en.wikipedia.org\/wiki\/Herman_Chernoff\">Herman Chernoff<\/a> (1923 - ) One of the first to play with exponential Markov inequalities in the 1950s. 
He was not aware of the work of <a href=\"https:\/\/djalil.chafai.net\/blog\/2018\/03\/09\/tutorial-on-large-deviation-principles\/\">Harald Cram\u00e9r<\/a> in the 1930s!<\/figcaption><\/figure>\n<p style=\"text-align: justify;\">This post is devoted to a sub-Gaussian tail bound and exponential square integrability for local martingales, taken from my master course on stochastic calculus.<\/p>\n<p style=\"text-align: justify;\"><b>Sub-Gaussian tail bound and exponential square integrability for local martingales.<\/b> Let \\( {M={(M_t)}_{t\\geq0}} \\) be a continuous local martingale started from the origin. Then for all \\( {t,K,r\\geq0} \\),<\/p>\n<p style=\"text-align: center;\">\\[ \\mathbb{P}\\Bigr(\\sup_{s\\in[0,t]}|M_s|\\geq r,\\ \\langle M\\rangle_t\\leq K\\Bigr) \\leq2\\mathrm{e}^{-\\frac{r^2}{2K}}, \\]<\/p>\n<p style=\"text-align: justify;\">and in particular, if \\( {\\langle M\\rangle_t\\leq Ct} \\) then<\/p>\n<p style=\"text-align: center;\">\\[ \\mathbb{P}\\Bigr(\\sup_{s\\in[0,t]}|M_s|\\geq r\\Bigr) \\leq2\\mathrm{e}^{-\\frac{r^2}{2Ct}} \\]<\/p>\n<p style=\"text-align: justify;\">and, for all \\( {\\alpha&lt;\\frac{1}{2Ct}} \\),<\/p>\n<p style=\"text-align: center;\">\\[ \\mathbb{E}\\Bigr(\\mathrm{e}^{\\alpha\\sup_{s\\in[0,t]}|M_s|^2}\\Bigr)&lt;\\infty. \\]<\/p>\n<p style=\"text-align: justify;\">The condition \\( {\\langle M\\rangle_t\\leq Ct} \\) is a comparison to Brownian motion, for which equality holds.<\/p>\n<p style=\"text-align: justify;\"><b>Proof.<\/b> For all \\( {\\lambda,t\\geq0} \\), the Dol\u00e9ans-Dade exponential<\/p>\n<p style=\"text-align: center;\">\\[ X^\\lambda ={\\Bigr(\\mathrm{e}^{\\lambda M_t-\\frac{\\lambda^2}{2}\\langle M\\rangle_t}\\Bigr)}_{t\\geq0} \\]<\/p>\n<p style=\"text-align: justify;\">is a positive super-martingale with \\( {X^\\lambda_0=1} \\) and \\( {\\mathbb{E}X^\\lambda_t\\leq1} \\) for all \\( {t\\geq0} \\). 
For all \\( {t,\\lambda,r,K\\geq0} \\), by using the maximal inequality for the super-martingale \\( {X^\\lambda} \\) in the last step,<\/p>\n<p style=\"text-align: center;\">\\[ \\begin{array}{rcl} \\mathbb{P}\\Bigr(\\langle M\\rangle_t\\leq K,\\sup_{0\\leq s\\leq t}M_s\\geq r\\Bigr) &amp;\\leq&amp;\\mathbb{P}\\Bigr(\\langle M\\rangle_t\\leq K,\\sup_{0\\leq s\\leq t}X^\\lambda_s\\geq\\mathrm{e}^{\\lambda r-\\frac{\\lambda^2}{2}K}\\Bigr) \\\\ &amp;\\leq&amp;\\mathbb{P}\\Bigr(\\sup_{0\\leq s\\leq t}X^\\lambda_s\\geq\\mathrm{e}^{\\lambda r-\\frac{\\lambda^2}{2}K}\\Bigr)\\\\ &amp;\\leq&amp;\\mathbb{E}(X^\\lambda_0)\\mathrm{e}^{-\\lambda r+\\frac{\\lambda^2}{2}K} =\\mathrm{e}^{-\\lambda r+\\frac{\\lambda^2}{2}K}. \\end{array} \\]<\/p>\n<p style=\"text-align: justify;\">Taking \\( {\\lambda=r\/K} \\), which optimizes the exponent, gives<\/p>\n<p style=\"text-align: center;\">\\[ \\mathbb{P}\\Bigr(\\langle M\\rangle_t\\leq K,\\sup_{0\\leq s\\leq t}M_s\\geq r\\Bigr) \\leq\\mathrm{e}^{-\\frac{r^2}{2K}}. \\]<\/p>\n<p style=\"text-align: justify;\">The same reasoning applied to \\( {-M} \\) instead of \\( {M} \\) gives (note that \\( {\\langle -M\\rangle=\\langle M\\rangle} \\))<\/p>\n<p style=\"text-align: center;\">\\[ \\mathbb{P}\\Bigr(\\langle M\\rangle_t\\leq K,\\sup_{0\\leq s\\leq t}(-M_s)\\geq r\\Bigr) \\leq\\mathrm{e}^{-\\frac{r^2}{2K}}. \\]<\/p>\n<p style=\"text-align: justify;\">The union bound (hence the prefactor \\( {2} \\)) then gives the first desired inequality. 
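<p style=\"text-align: justify;\">As a sanity check of the first inequality, not part of the proof, one can test it numerically on standard Brownian motion, for which \\( {\\langle B\\rangle_t=t} \\), so that taking \\( {K=t} \\) gives \\( {\\mathbb{P}(\\sup_{s\\in[0,t]}|B_s|\\geq r)\\leq2\\mathrm{e}^{-r^2\/(2t)}} \\). The following minimal Monte Carlo sketch, written for this post and using only the Python standard library, discretizes \\( {B} \\) by a scaled Gaussian random walk; the function names and parameters are ours, and the discrete maximum underestimates the true supremum, so the empirical frequency should fall below the bound, up to Monte Carlo error.<\/p>

```python
import math
import random

def sup_abs_bm(t, n_steps, rng):
    # Discretize standard Brownian motion on [0, t] by a Gaussian random
    # walk with steps of variance dt; track the running max of |B|.
    dt = t / n_steps
    b, m = 0.0, 0.0
    for _ in range(n_steps):
        b += rng.gauss(0.0, math.sqrt(dt))
        m = max(m, abs(b))
    return m

def empirical_tail(t, r, n_paths=2000, n_steps=200, seed=42):
    # Monte Carlo estimate of P(sup_{[0,t]} |B_s| >= r).
    rng = random.Random(seed)
    hits = sum(sup_abs_bm(t, n_steps, rng) >= r for _ in range(n_paths))
    return hits / n_paths

t, r = 1.0, 2.0
bound = 2 * math.exp(-r ** 2 / (2 * t))  # sub-Gaussian bound with K = t
p_hat = empirical_tail(t, r)
print(p_hat, bound)  # empirical tail vs. sub-Gaussian bound
```

<p style=\"text-align: justify;\">The bound is far from sharp here: the reflection principle gives the exact value \\( {\\mathbb{P}(\\sup_{s\\in[0,1]}B_s\\geq2)=2\\mathbb{P}(B_1\\geq2)\\approx0.0455} \\) for the one-sided supremum, while the bound with the prefactor \\( {2} \\) is about \\( {0.27} \\).<\/p>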
The exponential square integrability comes from the usual link between tail bounds and integrability, namely if \\( {X=\\sup_{s\\in[0,t]}|M_s|} \\), \\( {U(x)=\\mathrm{e}^{\\alpha x^2}} \\), \\( {\\alpha&lt;\\frac{1}{2Ct}} \\), then, by the Fubini-Tonelli theorem,<\/p>\n<p style=\"text-align: center;\">\\[ \\begin{array}{rcl} \\mathbb{E}(U(X)) &amp;=&amp;1+\\mathbb{E}\\Bigr(\\int_0^XU'(x)\\mathrm{d}x\\Bigr)\\\\ &amp;=&amp;1+\\mathbb{E}\\Bigr(\\int_0^\\infty\\mathbf{1}_{x\\leq X}U'(x)\\mathrm{d}x\\Bigr)\\\\ &amp;=&amp;1+\\int_0^\\infty U'(x)\\mathbb{P}(X\\geq x)\\mathrm{d}x\\\\ &amp;\\leq&amp;1+\\int_0^\\infty4\\alpha x\\mathrm{e}^{(\\alpha-\\frac{1}{2Ct})x^2}\\mathrm{d}x &lt;\\infty, \\end{array} \\]<\/p>\n<p style=\"text-align: justify;\">where we used \\( {U(0)=1} \\), \\( {U'(x)=2\\alpha x\\mathrm{e}^{\\alpha x^2}} \\), the tail bound \\( {\\mathbb{P}(X\\geq x)\\leq2\\mathrm{e}^{-\\frac{x^2}{2Ct}}} \\), and the fact that \\( {\\alpha-\\frac{1}{2Ct}&lt;0} \\).<\/p>\n<p style=\"text-align: justify;\"><b>Doob maximal inequality for super-martingales.<\/b> If \\( {M} \\) is a continuous super-martingale, then for all \\( {t\\geq0} \\) and \\( {\\lambda&gt;0} \\), denoting \\( {M^-=\\max(0,-M)} \\),<\/p>\n<p style=\"text-align: center;\">\\[ \\mathbb{P}\\Bigr(\\max_{s\\in[0,t]}|M_s|\\geq\\lambda\\Bigr) \\leq\\frac{\\mathbb{E}(M_0)+2\\mathbb{E}(M^-_t)}{\\lambda}. \\]<\/p>\n<p style=\"text-align: justify;\">In particular when \\( {M} \\) is non-negative then \\( {\\mathbb{E}(M^-_t)=0} \\) and the upper bound is \\( {\\frac{\\mathbb{E}(M_0)}{\\lambda}} \\).<\/p>\n<p style=\"text-align: justify;\"><b>Proof.<\/b> Let us define the bounded stopping time<\/p>\n<p style=\"text-align: center;\">\\[ T=t\\wedge \\inf\\{s\\in[0,t]:M_s\\geq \\lambda\\}. \\]<\/p>\n<p style=\"text-align: justify;\">We have \\( {M_T\\in\\mathrm{L}^1} \\) since \\( {|M_T|\\leq\\max(|M_0|,|M_t|,\\lambda)} \\). 
By the Doob stopping theorem for the sub-martingale \\( {-M} \\) and the bounded stopping times \\( {0} \\) and \\( {T} \\) that satisfy \\( {M_0\\in\\mathrm{L}^1} \\) and \\( {M_T\\in\\mathrm{L}^1} \\), we get<\/p>\n<p style=\"text-align: center;\">\\[ \\mathbb{E}(M_0) \\geq\\mathbb{E}(M_T) \\geq \\lambda\\mathbb{P}(\\max_{s\\in[0,t]}M_s\\geq \\lambda) +\\mathbb{E}(M_t\\mathbf{1}_{\\max_{s\\in[0,t]}M_s&lt;\\lambda}) \\]<\/p>\n<p style=\"text-align: justify;\">hence, recalling that \\( {M^-=\\max(-M,0)} \\),<\/p>\n<p style=\"text-align: center;\">\\[ \\lambda\\mathbb{P}(\\max_{s\\in[0,t]}M_s\\geq \\lambda) \\leq \\mathbb{E}(M_0)+\\mathbb{E}(M^-_t). \\]<\/p>\n<p style=\"text-align: justify;\">This produces the desired inequality when \\( {M} \\) is non-negative. For the general case, we observe that the Jensen inequality for the nondecreasing convex function \\( {u\\in\\mathbb{R}\\mapsto\\max(u,0)} \\) and the sub-martingale \\( {-M} \\) shows that \\( {M^-} \\) is a sub-martingale. Thus, by the Doob maximal inequality for non-negative sub-martingales,<\/p>\n<p style=\"text-align: center;\">\\[ \\lambda\\mathbb{P}(\\max_{s\\in[0,t]}M^-_s\\geq \\lambda) \\leq\\mathbb{E}(M^-_t). \\]<\/p>\n<p style=\"text-align: justify;\">Finally, putting both inequalities together gives<\/p>\n<p style=\"text-align: center;\">\\[ \\lambda\\mathbb{P}(\\max_{s\\in[0,t]}|M_s|\\geq \\lambda) \\leq \\lambda\\mathbb{P}(\\max_{s\\in[0,t]}M_s\\geq \\lambda) +\\lambda\\mathbb{P}(\\max_{s\\in[0,t]}M^-_s\\geq \\lambda) \\leq\\mathbb{E}(M_0)+2\\mathbb{E}(M^-_t). \\]<\/p>\n<p style=\"text-align: justify;\"><b>Doob maximal inequalities.<\/b> Let \\( {M} \\) be a continuous process.<\/p>\n<ol>\n<li>If \\( {M} \\) is a martingale or a non-negative sub-martingale then for all \\( {p\\geq1} \\), \\( {t\\geq0} \\), \\( {\\lambda&gt;0} \\),\n<p style=\"text-align: center;\">\\[ \\mathbb{P}\\Bigr(\\max_{s\\in[0,t]}|M_s|\\geq\\lambda\\Bigr) \\leq\\frac{\\mathbb{E}(|M_t|^p)}{\\lambda^p}. 
\\]<\/p>\n<\/li>\n<li>If \\( {M} \\) is a martingale then for all \\( {p&gt;1} \\) and \\( {t\\geq0} \\),\n<p style=\"text-align: center;\">\\[ \\mathbb{E}\\Bigr(\\max_{s\\in[0,t]}|M_s|^p\\Bigr) \\leq\\Bigr(\\frac{p}{p-1}\\Bigr)^p\\mathbb{E}(|M_t|^p) \\]<\/p>\n<p>in other words<\/p>\n<p style=\"text-align: center;\">\\[ \\Bigr\\|\\max_{s\\in[0,t]}|M_s|\\Bigr\\|_p\\leq\\frac{p}{p-1}\\|M_t\\|_p. \\]<\/p>\n<p>In particular if \\( {M_t\\in\\mathrm{L}^p} \\) then \\( {M^*_t=\\max_{s\\in[0,t]}|M_s|\\in\\mathrm{L}^p} \\).<\/li>\n<\/ol>\n<p style=\"text-align: justify;\"><b>Comments.<\/b> This inequality makes it possible to control the tail of the supremum by the moment at the terminal time. It is a continuous time martingale version of the simpler Kolmogorov maximal inequality for sums of independent random variables. Note that \\( {q=1\/(1-1\/p)=p\/(p-1)} \\) is the H\u00f6lder conjugate of \\( {p} \\), namely \\( {1\/p+1\/q=1} \\). The inequality is often used with \\( {p=2} \\), for which \\( {(p\/(p-1))^p=4} \\).<\/p>\n<p style=\"text-align: justify;\"><b>Proof.<\/b> We can always assume that the right-hand side is finite, otherwise the inequalities are trivial.<\/p>\n<ol>\n<li>If \\( {M} \\) is a martingale, then by the Jensen inequality for the convex function \\( {u\\in\\mathbb{R}\\mapsto |u|^p} \\), the process \\( {|M|^p} \\) is a sub-martingale. Similarly, if \\( {M} \\) is a non-negative sub-martingale then, since \\( {u\\in[0,+\\infty)\\mapsto u^p} \\) is convex and non-decreasing, it follows that \\( {M^p=|M|^p} \\) is a sub-martingale. Therefore in all cases \\( {{(|M_s|^p)}_{s\\in[0,t]}} \\) is a sub-martingale. Let us define the bounded stopping time\n<p style=\"text-align: center;\">\\[ T=t\\wedge \\inf\\{s\\geq0:|M_s|\\geq\\lambda\\}. \\]<\/p>\n<p>Note that \\( {|M_T|\\leq\\max(|M_0|,\\lambda)} \\) and thus \\( {M_T\\in\\mathrm{L}^1} \\). 
The Doob stopping theorem for the sub-martingale \\( {|M|^p} \\) and the bounded stopping times \\( {T} \\) and \\( {t} \\) that satisfy \\( {T\\leq t} \\) gives<\/p>\n<p style=\"text-align: center;\">\\[ \\mathbb{E}(|M_T|^p)\\leq\\mathbb{E}(|M_t|^p). \\]<\/p>\n<p>On the other hand the definition of \\( {T} \\) gives<\/p>\n<p style=\"text-align: center;\">\\[ |M_T|^p \\geq\\lambda^p\\mathbf{1}_{\\max_{s\\in[0,t]}|M_s|\\geq\\lambda} +|M_t|^p\\mathbf{1}_{\\max_{s\\in[0,t]}|M_s|&lt;\\lambda}\\\\ \\geq\\lambda^p\\mathbf{1}_{\\max_{s\\in[0,t]}|M_s|\\geq\\lambda}. \\]<\/p>\n<p>It remains to combine these inequalities to get the desired result.<\/li>\n<li>If we introduce for all \\( {n\\geq1} \\) the \"localization\" stopping time\n<p style=\"text-align: center;\">\\[ T_n=t\\wedge\\inf\\{s\\geq0:|M_s|\\geq n\\}, \\]<\/p>\n<p>then the desired inequality for the bounded sub-martingale \\( {{(|M_{s\\wedge T_n}|)}_{s\\in[0,t]}} \\) would give<\/p>\n<p style=\"text-align: center;\">\\[ \\mathbb{E}(\\max_{s\\in[0,t]}|M_{s\\wedge T_n}|^p) \\leq\\left(\\frac{p}{p-1}\\right)^p\\mathbb{E}(|M_t|^p), \\]<\/p>\n<p>where we also used \\( {\\mathbb{E}(|M_{t\\wedge T_n}|^p)\\leq\\mathbb{E}(|M_t|^p)} \\), by the Doob stopping theorem for the sub-martingale \\( {|M|^p} \\); the desired result for \\( {{(M_s)}_{s\\in[0,t]}} \\) would then follow by the monotone convergence theorem. This shows that we can assume without loss of generality that \\( {{(|M_s|)}_{s\\in[0,t]}} \\) is bounded, in particular that \\( {\\mathbb{E}(\\max_{s\\in[0,t]}|M_s|^p)&lt;\\infty} \\). This is a martingale localization argument! The previous proof gives<\/p>\n<p style=\"text-align: center;\">\\[ \\mathbb{P}(\\max_{s\\in[0,t]}|M_s|\\geq\\lambda) \\leq\\frac{\\mathbb{E}(|M_t|\\mathbf{1}_{\\max_{s\\in[0,t]}|M_s|\\geq\\lambda})}{\\lambda} \\]<\/p>\n<p>for all \\( {\\lambda&gt;0} \\), and thus<\/p>\n<p style=\"text-align: center;\">\\[ \\int_0^\\infty\\lambda^{p-1} \\mathbb{P}(\\max_{s\\in[0,t]}|M_s|\\geq\\lambda)\\mathrm{d}\\lambda \\leq \\int_0^\\infty\\lambda^{p-2} \\mathbb{E}(|M_t|\\mathbf{1}_{\\max_{s\\in[0,t]}|M_s|\\geq\\lambda})\\mathrm{d}\\lambda. 
\\]<\/p>\n<p>Now the Fubini-Tonelli theorem gives<\/p>\n<p style=\"text-align: center;\">\\[ \\int_0^\\infty\\lambda^{p-1}\\mathbb{P}(\\max_{s\\in[0,t]}|M_s|\\geq\\lambda)\\mathrm{d}\\lambda =\\mathbb{E}\\int_0^{\\max_{s\\in[0,t]}|M_s|}\\lambda^{p-1}\\mathrm{d}\\lambda =\\frac{1}{p}\\mathbb{E}(\\max_{s\\in[0,t]}|M_s|^p), \\]<\/p>\n<p>and similarly (here we need \\( {p&gt;1} \\))<\/p>\n<p style=\"text-align: center;\">\\[ \\int_0^\\infty\\lambda^{p-2}\\mathbb{E}(|M_t|\\mathbf{1}_{\\max_{s\\in[0,t]}|M_s|\\geq\\lambda})\\mathrm{d}\\lambda =\\frac{1}{p-1}\\mathbb{E}(|M_t|\\max_{s\\in[0,t]}|M_s|^{p-1}). \\]<\/p>\n<p>Combining all this gives<\/p>\n<p style=\"text-align: center;\">\\[ \\mathbb{E}(\\max_{s\\in[0,t]}|M_s|^p) \\leq\\frac{p}{p-1} \\mathbb{E}(|M_t|\\max_{s\\in[0,t]}|M_s|^{p-1}). \\]<\/p>\n<p>Since the H\u00f6lder inequality gives<\/p>\n<p style=\"text-align: center;\">\\[ \\mathbb{E}(|M_t|\\max_{s\\in[0,t]}|M_s|^{p-1}) \\leq\\mathbb{E}(|M_t|^p)^{1\/p}\\mathbb{E}(\\max_{s\\in[0,t]}|M_s|^p)^{\\frac{p-1}{p}}, \\]<\/p>\n<p>we obtain<\/p>\n<p style=\"text-align: center;\">\\[ \\mathbb{E}(\\max_{s\\in[0,t]}|M_s|^p) \\leq\\frac{p}{p-1}\\mathbb{E}(|M_t|^p)^{1\/p}\\mathbb{E}(\\max_{s\\in[0,t]}|M_s|^p)^{\\frac{p-1}{p}}. \\]<\/p>\n<p>Consequently, since \\( {\\mathbb{E}(\\max_{s\\in[0,t]}|M_s|^p)&lt;\\infty} \\), dividing both sides by \\( {\\mathbb{E}(\\max_{s\\in[0,t]}|M_s|^p)^{\\frac{p-1}{p}}} \\) gives the desired inequality.<\/li>\n<\/ol>\n<p style=\"text-align: justify;\"><b>Doob stopping theorem for sub-martingales.<\/b> If \\( {M} \\) is a continuous sub-martingale and \\( {S} \\) and \\( {T} \\) are bounded stopping times such that \\( {S\\leq T,\\quad M_S\\in\\mathrm{L}^1,\\quad M_T\\in\\mathrm{L}^1} \\), then<\/p>\n<p style=\"text-align: center;\">\\[ \\mathbb{E}(M_S)\\leq\\mathbb{E}(M_T). 
\\]<\/p>\n<p style=\"text-align: justify;\"><b>Proof.<\/b> We proceed as in the proof of the Doob stopping theorem for martingales, by assuming first that \\( {S} \\) and \\( {T} \\) take their values in the finite set \\( {\\{t_1,\\ldots,t_n\\}} \\) where \\( {t_1&lt;\\cdots&lt;t_n} \\). In this case \\( {M_T} \\) and \\( {M_S} \\) are in \\( {\\mathrm{L}^1} \\) automatically. The inequality \\( {S\\leq T} \\) gives \\( {\\mathbf{1}_{S\\geq t}\\leq\\mathbf{1}_{T\\geq t}} \\) for all \\( {t} \\). Using this fact and the sub-martingale property of \\( {M} \\), we get<\/p>\n<p style=\"text-align: center;\">\\[ \\begin{array}{rcl} \\mathbb{E}(M_S) &amp;=&amp;\\mathbb{E}(M_0) +\\mathbb{E}\\Big(\\sum_{k=1}^n\\overbrace{\\mathbb{E}(M_{t_k}-M_{t_{k-1}}\\mid\\mathcal{F}_{t_{k-1}})}^{\\geq0}\\mathbf{1}_{S\\geq t_k}\\Bigr)\\\\ &amp;\\leq&amp;\\mathbb{E}(M_0) +\\mathbb{E}\\Big(\\sum_{k=1}^n\\mathbb{E}(M_{t_k}-M_{t_{k-1}}\\mid\\mathcal{F}_{t_{k-1}})\\mathbf{1}_{T\\geq t_k}\\Bigr)\\\\ &amp;=&amp;\\mathbb{E}(M_T). \\end{array} \\]<\/p>\n<p style=\"text-align: justify;\">More generally, when \\( {S} \\) and \\( {T} \\) are arbitrary bounded stopping times satisfying \\( {S\\leq T} \\), we proceed by approximation as in the proof of the Doob stopping theorem for martingales, using again the sub-martingale nature of \\( {M} \\) to get uniform integrability.<\/p>\n<p style=\"text-align: justify;\"><b>Doob stopping theorem for martingales.<\/b> If \\( {M} \\) is a continuous martingale and \\( {T:\\Omega\\rightarrow[0,+\\infty]} \\) is a stopping time then \\( {{(M_{t\\wedge T})}_{t\\geq0}} \\) is a martingale: for all \\( {t\\geq0} \\) and \\( {s\\in[0,t]} \\), we have<\/p>\n<p style=\"text-align: center;\">\\[ M_{t\\wedge T}\\in\\mathrm{L}^1 \\quad\\text{and}\\quad \\mathbb{E}(M_{t\\wedge T}\\mid\\mathcal{F}_s)=M_{s\\wedge T}. 
\\]<\/p>\n<p style=\"text-align: justify;\">Moreover, if \\( {T} \\) is bounded, or if \\( {T} \\) is almost surely finite and \\( {{(M_{t\\wedge T})}_{t\\geq0}} \\) is uniformly integrable (for instance dominated by an integrable random variable), then<\/p>\n<p style=\"text-align: center;\">\\[ M_T\\in\\mathrm{L}^1 \\quad\\text{and}\\quad \\mathbb{E}(M_T)=\\mathbb{E}(M_0). \\]<\/p>\n<p style=\"text-align: justify;\"><b>Comments.<\/b> The most important point is that \\( {{(M_{t\\wedge T})}_{t\\geq0}} \\) is a martingale. We always have \\( {\\lim_{t\\rightarrow\\infty}M_{T\\wedge t}\\mathbf{1}_{T&lt;\\infty}=M_T\\mathbf{1}_{T&lt;\\infty}} \\) almost surely. When \\( {T&lt;\\infty} \\) almost surely we could use what we know on \\( {M} \\) and \\( {T} \\) to deduce by monotone or dominated convergence that this holds in \\( {\\mathrm{L}^1} \\), giving \\( {\\mathbb{E}(M_T)=\\mathbb{E}(\\lim_{t\\rightarrow\\infty}M_{t\\wedge T})=\\lim_{t\\rightarrow\\infty}\\mathbb{E}(M_{t\\wedge T})=\\mathbb{E}(M_0)} \\).<\/p>\n<p style=\"text-align: justify;\">The theorem states that this is automatically the case when \\( {T} \\) is bounded or when the stopped process \\( {M^T={(M_{t\\wedge T})}_{t\\geq0}} \\) is uniformly integrable. Furthermore, if \\( {M^T} \\) is uniformly integrable then it can be shown that \\( {M_\\infty} \\) exists, giving a sense to \\( {M_T} \\) even on \\( {\\{T=\\infty\\}} \\), and then \\( {\\mathbb{E}(M_T)=\\mathbb{E}(M_0)} \\).<\/p>\n<p style=\"text-align: justify;\"><b>Proof.<\/b> Let us assume first that \\( {T} \\) takes a finite number of values \\( {t_1&lt;\\cdots&lt;t_n} \\). Let us show that \\( {M_T\\in\\mathrm{L}^1} \\) and \\( {\\mathbb{E}(M_T)=\\mathbb{E}(M_0)} \\). 
We have \\( {M_T=\\sum_{k=1}^nM_{t_k}\\mathbf{1}_{T=t_k}\\in\\mathrm{L}^1} \\), and moreover, using<\/p>\n<p style=\"text-align: center;\">\\[ \\{T\\geq t_k\\}=(\\cup_{i=1}^{k-1}\\{T=t_i\\})^c\\in\\mathcal{F}_{t_{k-1}}, \\]<\/p>\n<p style=\"text-align: justify;\">and the martingale property \\( {\\mathbb{E}(M_{t_k}-M_{t_{k-1}}\\mid\\mathcal{F}_{t_{k-1}})=0} \\), for all \\( {k} \\), we get<\/p>\n<p style=\"text-align: center;\">\\[ \\mathbb{E}(M_T) =\\mathbb{E}(M_0) +\\mathbb{E}\\Big(\\sum_{k=1}^n\\mathbb{E}(M_{t_k}-M_{t_{k-1}}\\mid\\mathcal{F}_{t_{k-1}})\\mathbf{1}_{T\\geq t_k}\\Bigr) =\\mathbb{E}(M_0). \\]<\/p>\n<p style=\"text-align: justify;\">Suppose now that \\( {T} \\) takes an infinite number of values but is bounded by some constant \\( {C} \\). For all \\( {n\\geq0} \\), we approximate \\( {T} \\) by the piecewise constant random variable obtained by discretizing \\( {[0,C]} \\):<\/p>\n<p style=\"text-align: center;\">\\[ T_n=C\\mathbf{1}_{T=C}+\\sum_{k=1}^{n}t_{k}\\mathbf{1}_{t_{k-1}\\leq T&lt;t_{k}} \\quad\\text{where}\\quad t_k=t_{n,k}=C\\frac{k}{n}. \\]<\/p>\n<p style=\"text-align: justify;\">This is a stopping time since \\( {T_n} \\) takes its values in the finite set \\( {\\{t_1,\\ldots,t_n\\}} \\), so that for all \\( {t\\geq0} \\), \\( {\\{T_n\\leq t\\}=\\cup_{k:t_k\\leq t}\\{T_n=t_k\\}} \\), a finite union, with \\( {\\{T_n=t_n\\}=\\{T=C\\}\\cup\\{t_{n-1}\\leq T&lt;C\\}\\in\\mathcal{F}_C} \\) (recall \\( {t_n=C} \\)), while for \\( {1\\leq k&lt;n} \\),<\/p>\n<p style=\"text-align: center;\">\\[ \\{T_n=t_k\\} =\\{T&lt;t_{k-1}\\}^c\\cap\\{T&lt;t_{k}\\}\\in\\mathcal{F}_{t_{k}}, \\]<\/p>\n<p style=\"text-align: justify;\">where we used the fact that for all \\( {t\\geq0} \\),<\/p>\n<p style=\"text-align: center;\">\\[ \\{T&lt;t\\}=\\cup_{r=1}^\\infty\\{T\\leq t-1\/r\\}\\in\\mathcal{F}_t \\quad\\text{and}\\quad \\{T=t\\}=\\{T\\leq t\\}\\cap\\{T&lt;t\\}^c\\in\\mathcal{F}_t. 
\\]<\/p>\n<p style=\"text-align: justify;\">Since \\( {T_n} \\) takes a finite number of values, the previous step gives \\( {\\mathbb{E}(M_{T_n})=\\mathbb{E}(M_0)} \\). On the other hand, almost surely, \\( {T_n\\rightarrow T} \\) as \\( {n\\rightarrow\\infty} \\). Since \\( {M} \\) is continuous, it follows that almost surely \\( {M_{T_n}\\rightarrow M_T} \\) as \\( {n\\rightarrow\\infty} \\). Let us show now that \\( {{(M_{T_n})}_{n\\geq1}} \\) is uniformly integrable. Since for all \\( {n\\geq0} \\), \\( {T_n} \\) takes its values in a finite set \\( {t_1&lt;\\cdots&lt;t_{m_n}\\leq C} \\), the martingale property and the Jensen inequality give, for all \\( {R&gt;0} \\),<\/p>\n<p style=\"text-align: center;\">\\[ \\begin{array}{rcl} \\mathbb{E}(|M_C|\\mathbf{1}_{|M_{T_n}|\\geq R}) &amp;=&amp;\\sum_k\\mathbb{E}(|M_C|\\mathbf{1}_{|M_{t_k}|\\geq R,T_n=t_k})\\\\ &amp;=&amp;\\sum_k\\mathbb{E}(\\mathbb{E}(|M_C|\\mid\\mathcal{F}_{t_k})\\mathbf{1}_{|M_{t_k}|\\geq R,T_n=t_k})\\\\ &amp;\\geq&amp;\\sum_k\\mathbb{E}(|\\mathbb{E}(M_C\\mid\\mathcal{F}_{t_k})|\\mathbf{1}_{|M_{t_k}|\\geq R,T_n=t_k})\\\\ &amp;=&amp;\\sum_k\\mathbb{E}(|M_{t_k}|\\mathbf{1}_{|M_{t_k}|\\geq R,T_n=t_k})\\\\ &amp;=&amp;\\mathbb{E}(|M_{T_n}|\\mathbf{1}_{|M_{T_n}|\\geq R}). \\end{array} \\]<\/p>\n<p style=\"text-align: justify;\">Now \\( {M} \\) is continuous, hence \\( {\\sup_{s\\in[0,C]}|M_s|&lt;\\infty} \\) almost surely, and \\( {M_C\\in\\mathrm{L}^1} \\), so, by dominated convergence,<\/p>\n<p style=\"text-align: center;\">\\[ \\sup_n\\mathbb{E}(|M_{T_n}|\\mathbf{1}_{|M_{T_n}|\\geq R}) \\leq\\mathbb{E}(|M_C|\\mathbf{1}_{\\sup_{s\\in[0,C]}|M_s|\\geq R}) \\underset{R\\rightarrow\\infty}{\\longrightarrow}0. \\]<\/p>\n<p style=\"text-align: justify;\">Therefore \\( {{(M_{T_n})}_{n\\geq0}} \\) is uniformly integrable. 
As a consequence<\/p>\n<p style=\"text-align: center;\">\\[ \\overset{\\mathrm{a.s.}}{\\lim_{n\\rightarrow\\infty}}M_{T_n}=M_T\\in\\mathrm{L}^1 \\quad\\text{and}\\quad \\mathbb{E}(M_T)=\\lim_{n\\rightarrow\\infty}\\mathbb{E}(M_{T_n})=\\mathbb{E}(M_0). \\]<\/p>\n<p style=\"text-align: justify;\">Let us suppose now that \\( {T} \\) is an arbitrary stopping time. For all \\( {0\\leq s\\leq t} \\) and \\( {A\\in\\mathcal{F}_s} \\), the random variable \\( {S=s\\mathbf{1}_A+t\\mathbf{1}_{A^c}} \\) is a (bounded) stopping time, and what precedes for the bounded stopping time \\( {t\\wedge T\\wedge S} \\) gives \\( {M_{t\\wedge T\\wedge S}\\in\\mathrm{L}^1} \\) and \\( {\\mathbb{E}(M_{t\\wedge T\\wedge S})=\\mathbb{E}(M_0)} \\). Now, using the definition of \\( {S} \\), we have<\/p>\n<p style=\"text-align: center;\">\\[ \\mathbb{E}(M_0) =\\mathbb{E}(M_{t\\wedge T\\wedge S}) =\\mathbb{E}(\\mathbf{1}_AM_{s\\wedge T}) +\\mathbb{E}(\\mathbf{1}_{A^c}M_{t\\wedge T}) =\\mathbb{E}(\\mathbf{1}_A(M_{s\\wedge T}-M_{t\\wedge T}))+\\mathbb{E}(M_{t\\wedge T}). \\]<\/p>\n<p style=\"text-align: justify;\">Since \\( {\\mathbb{E}(M_{t\\wedge T})=\\mathbb{E}(M_0)} \\), we get the martingale property for \\( {{(M_{t\\wedge T})}_{t\\geq0}} \\), namely<\/p>\n<p style=\"text-align: center;\">\\[ \\mathbb{E}((M_{t\\wedge T}-M_{s\\wedge T})\\mathbf{1}_A)=0. \\]<\/p>\n<p style=\"text-align: justify;\">Since \\( {M_{s\\wedge T}} \\) is \\( {\\mathcal{F}_s} \\)-measurable and \\( {A\\in\\mathcal{F}_s} \\) is arbitrary, this gives \\( {\\mathbb{E}(M_{t\\wedge T}\\mid\\mathcal{F}_s)=M_{s\\wedge T}} \\). Finally, suppose that \\( {T&lt;\\infty} \\) almost surely and \\( {{(M_{t\\wedge T})}_{t\\geq0}} \\) is uniformly integrable. The random variable \\( {M_T} \\) is well defined and \\( {\\lim_{t\\rightarrow\\infty}M_{t\\wedge T}=M_T} \\) almost surely because \\( {M} \\) is continuous. Furthermore, since \\( {{(M_{t\\wedge T})}_{t\\geq0}} \\) is uniformly integrable, it follows that \\( {M_T\\in\\mathrm{L}^1} \\) and \\( {\\lim_{t\\rightarrow\\infty}M_{t\\wedge T}=M_T} \\) in \\( {\\mathrm{L}^1} \\). 
In particular \\( {\\mathbb{E}(M_0)\\underset{\\forall t}{=}\\mathbb{E}(M_{t\\wedge T})=\\lim_{t\\rightarrow\\infty}\\mathbb{E}(M_{t\\wedge T})=\\mathbb{E}(M_T)} \\). <b>Further reading in the same spirit.<\/b><\/p>\n<ul>\n<li><em><a href=\"\/scripts\/search.php?q=ISBN+%20978-3-319-22099-4\">Concentration Inequalities for Sums and Martingales<\/a><\/em><br \/>\nBernard Bercu, Bernard Delyon, and Emmanuel Rio, Springer (2015).<\/li>\n<\/ul>\n","protected":false},"excerpt":{"rendered":"<p>This post is devoted to a sub-Gaussian tail bound and exponential square integrability for local martingales, taken from my master course on stochastic calculus. Sub-Gaussian&#8230;<\/p>\n<div class=\"more-link-wrapper\"><a class=\"more-link\" href=\"https:\/\/djalil.chafai.net\/blog\/2020\/11\/25\/sub-gaussian-tail-bound-for-local-martingales\/\">Continue reading<span class=\"screen-reader-text\">Sub-Gaussian tail bound for local martingales<\/span><\/a><\/div>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"iawp_total_views":331},"categories":[1],"tags":[],"_links":{"self":[{"href":"https:\/\/djalil.chafai.net\/blog\/wp-json\/wp\/v2\/posts\/14367"}],"collection":[{"href":"https:\/\/djalil.chafai.net\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/djalil.chafai.net\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/djalil.chafai.net\/blog\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/djalil.chafai.net\/blog\/wp-json\/wp\/v2\/comments?post=14367"}],"version-history":[{"count":19,"href":"https:\/\/djalil.chafai.net\/blog\/wp-json\/wp\/v2\/posts\/14367\/revisions"}],"predecessor-version":[{"id":14388,"href":"https:\/\/djalil.chafai.net\/blog\/wp-json\/wp\/v2\/posts\/14367\/revisions\/14388"}],"wp:attachment":[{"href":"https:\/\/djalil.chafai.net\/blog\/wp-json\/wp\/v2\/media?parent=14367"}],"wp:term":[
{"taxonomy":"category","embeddable":true,"href":"https:\/\/djalil.chafai.net\/blog\/wp-json\/wp\/v2\/categories?post=14367"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/djalil.chafai.net\/blog\/wp-json\/wp\/v2\/tags?post=14367"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}