
This post is about the work arXiv:2012.05602 in collaboration with Charles Bordenave and David García-Zelada on the spectral radius. Clearly this is my favorite work so far!

Consider a square random matrix with independent and identically distributed entries of mean zero and unit variance. We show that as the dimension tends to infinity, the spectral radius is equivalent to the square root of the dimension in probability. This result can also be seen as the convergence of the support in the circular law theorem under optimal moment conditions. In the proof we establish the convergence in law of the reciprocal characteristic polynomial to a random analytic function outside the unit disc, related to a hyperbolic Gaussian analytic function. The proof is short and differs from the usual approaches for the spectral radius. It relies on a tightness argument and a joint central limit phenomenon for traces of fixed powers.

Model. Let $\{a_{jk}\}_{j,k\geq1}$ be independent and identically distributed complex random variables with mean zero and unit variance, namely $$\mathbb{E}[a_{11}]=0\quad\text{and}\quad\mathbb E[|a_{11}|^2] = 1.$$ For all $n\geq1$, let us consider the Girko random matrix $$A_n={(a_{jk})}_{1 \leq j,k \leq n}.$$ When $a_{11}$ is Gaussian with independent and identically distributed real and imaginary parts, then $A_n$ belongs to the complex Ginibre ensemble. We are interested in the matrix $$\frac{1}{\sqrt{n}}A_n.$$ By the law of large numbers, almost surely, the rows (and the columns) of $\frac{1}{\sqrt{n}}A_n$ have asymptotically unit Euclidean norm and are asymptotically orthogonal as $n\to\infty$. Its characteristic polynomial at the point $z\in\mathbb{C}$ is $$p_n(z)=\det\Bigl(z-\frac{A_n}{\sqrt{n}}\Bigr)$$ where $z$ stands for $z$ times the identity matrix. The $n$ roots of $p_n$ in $\mathbb{C}$ are the eigenvalues of $\frac{1}{\sqrt{n}}A_n$. They form a multiset $\Lambda_n$, the spectrum of $\frac{1}{\sqrt{n}}A_n$. The spectral radius is $$\rho_n=\max_{\lambda\in\Lambda_n}|\lambda|.$$ Following Ginibre, Girko, Bai, …, and finally Tao and Vu, the circular law (of nature) states that the empirical measure of the elements of $\Lambda_n$ tends weakly as $n\to\infty$ to the uniform distribution on the closed unit disc: almost surely, for every nice Borel set $B\subset\mathbb{C}$, $$\lim_{n\to\infty}\frac{\mathrm{card}(B\cap\Lambda_n)}{n} =\frac{\mathrm{area}(B\cap\overline{\mathbb{D}})}{\pi},$$ where “$\mathrm{area}$” stands for the Lebesgue measure on $\mathbb C$, and where $\overline{\mathbb{D}}=\{z\in\mathbb{C}:|z|\leq1\}$ is the closed unit disc. The circular law, which involves weak convergence, does not provide the convergence of the spectral radius: it gives only $$\varliminf_{n\to\infty}\rho_n\geq1.$$
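As a quick numerical sanity check of the circular law, here is a small Python sketch (independent of the Julia code below; the dimension and radii are illustrative choices): the fraction of eigenvalues of $\frac{1}{\sqrt{n}}A_n$ in a centered disc of radius $r\leq1$ should be close to the area ratio $r^2$.

```python
import numpy as np

# Sketch: circular law for a complex Ginibre matrix. The fraction of
# eigenvalues of A_n / sqrt(n) inside a disc of radius r <= 1 should be
# close to the area ratio r^2.
rng = np.random.default_rng(0)
n = 600
A = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
eig = np.linalg.eigvals(A / np.sqrt(n))

for r in (0.5, 0.8, 1.0):
    print(f"r = {r}: fraction = {np.mean(np.abs(eig) <= r):.3f}, r^2 = {r * r:.3f}")
```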

Our result on the spectral radius. We have $\rho_n\overset{\mathbb{P}}{\underset{n\to\infty}{\longrightarrow}}1$, in the sense that for all $\varepsilon>0$, $$\lim_{n\to\infty}\mathbb{P}(|\rho_n-1|\geq\varepsilon)=0.$$ The moment assumptions of zero mean and unit variance are optimal. Moreover, the $\frac{1}{\sqrt{n}}$ scaling is no longer adequate for entries of infinite variance.
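To see the universality at work, here is a Python sketch (the choice of Rademacher $\pm1$ entries, which have mean zero and unit variance, is ours for illustration):

```python
import numpy as np

# Sketch: the spectral radius of A_n / sqrt(n) approaches 1 as n grows,
# here with Rademacher entries (mean 0, variance 1, decidedly non-Gaussian).
rng = np.random.default_rng(1)
for n in (100, 400, 800):
    A = rng.choice([-1.0, 1.0], size=(n, n))
    rho = np.abs(np.linalg.eigvals(A / np.sqrt(n))).max()
    print(n, round(float(rho), 3))
```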

We have $\rho_n\leq\sigma_n$ where $\sigma_n$ is the operator norm of $\frac{1}{\sqrt{n}}A_n$ (largest singular value). Bai et al. proved that the condition $\mathbb{E}[|a_{11}|^4]<\infty$ is necessary and sufficient for the convergence of $\sigma_n$ as $n\to\infty$, and that the limit is then $2$. A striking aspect of the spectral radius, in comparison with the operator norm, is that it converges without such an extra condition.

Proof. It does not involve any Hermitization or norms of powers in the spirit of Gelfand’s spectral radius formula. The idea is to show that on $\mathbb{C}\cup\{\infty\}\setminus\overline{\mathbb{D}}$, the function $$z\mapsto z^{-n}p_n(z)$$ tends as $n\to\infty$ to a random analytic function which does not vanish. The first step, for mathematical convenience, is to convert $\mathbb{C}\cup\{\infty\}\setminus\overline{\mathbb{D}}$ into $\mathbb D=\{z\in\mathbb{C}:|z|<1\}$ by noting that $p_n(z)=z^nq_n(1/z)$ for $z\not\in\overline{\mathbb{D}}$, where for all $z\in\mathbb{D}$,
$q_n(z) = \det\left(1- z\frac{A_n}{\sqrt n}\right)$ is the reciprocal polynomial of the characteristic polynomial $p_n$. Let $\mathrm{H}(\mathbb{D})$ be the set of holomorphic (complex analytic) functions on $\mathbb{D}$, equipped with the topology of uniform convergence on compact subsets, the compact-open topology, studied by Shirai. This allows us to see $q_n$ as a random variable on $\mathrm{H}(\mathbb{D})$ and gives a meaning to the convergence in law of $q_n$ as $n\to\infty$: $q_n$ converges in law to some random element $q$ of $\mathrm{H}(\mathbb{D})$ if for every bounded real continuous function $f$ on $\mathrm{H}(\mathbb{D})$, $$\mathbb E[f(q_n)] \to \mathbb E[f(q)].$$ Specifically, we prove that $$q_n \xrightarrow[n \to \infty]{\mathrm{law}}\kappa \mathrm{e}^{-F},$$ where $F$ is the random holomorphic function on $\mathbb D$ defined by $$F(z)=\sum_{k=1}^\infty X_k \frac{z^k}{\sqrt k}$$ where $\{X_k\}_{k\geq1}$ are independent complex Gaussian random variables such that
$$\mathbb E\big[X_k\big]=0,\quad\mathbb E\left[|X_k|^2\right] = 1\quad\text{and}\quad \mathbb E\left[X_k^2 \right] = \mathbb E \left[a_{11}^2 \right]^k,$$ and where $\kappa: \mathbb D \to \mathbb C$ is the holomorphic function defined for all $z\in\mathbb{D}$ by $$\kappa(z) = \sqrt{1-z^2 \mathbb E \left[a_{11}^2 \right]}.$$ The square root defining $\kappa$ is the one such that $\kappa(0)=1$. Notice that it is a well-defined holomorphic function on the simply connected domain $\mathbb{D}$ since the function $z\mapsto1-z^2\mathbb{E}[a_{11}^2]$ does not vanish on $\mathbb{D}$, indeed $|\mathbb{E}[a_{11}^2]|\leq\mathbb{E}[|a_{11}|^2]=1$.
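The relation between $p_n$ and $q_n$ is elementary and can be checked numerically; here is a Python sketch (ours, with arbitrary small parameters):

```python
import numpy as np

# Sketch: check p_n(z) = z^n q_n(1/z) for |z| > 1, where
# p_n(z) = det(z - A_n/sqrt(n)) and q_n(w) = det(1 - w A_n/sqrt(n)).
rng = np.random.default_rng(3)
n = 8
A = rng.standard_normal((n, n))
z = 1.3 + 0.4j

p = np.linalg.det(z * np.eye(n) - A / np.sqrt(n))
q = np.linalg.det(np.eye(n) - (1 / z) * A / np.sqrt(n))
print(abs(p - z**n * q))  # numerically zero
```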

The proof of the convergence of $q_n$ is partially inspired by a work of Basak and Zeitouni and relies crucially on a joint combinatorial central limit theorem for traces of fixed powers, inspired by a work of Janson and Nowicki. Unlike previous arguments used in the literature for the analysis of Girko matrices, the approach does not rely on Girko Hermitization, the Gelfand spectral radius formula, high order traces, the resolvent method, or the Cauchy-Stieltjes transform. The first step consists in showing the tightness of ${(q_n)}_{n\geq1}$, by using a decomposition of the determinant into orthogonal elements related to determinants of submatrices, as in the work of Basak and Zeitouni: $$\det\Bigl(1-z\frac{A_n}{\sqrt{n}}\Bigr)=1+\sum_{k=1}^n(-z)^k\sum_{\substack{I\subset\{1,\ldots,n\}\\|I|=k}}\frac{\det((A_n)_{I,I})}{\sqrt{n}^k}.$$ Given this tightness, the problem reduces to showing the convergence in law of these elements. A reduction step, inspired by the work of Janson and Nowicki, consists in truncating the entries, reducing the analysis to the case of bounded entries. The next step is a central limit theorem for products of traces of powers of fixed order, via the identity $$\det\Bigl(1-z\frac{A_n}{\sqrt{n}}\Bigr)=\exp\Bigl(-\sum_{k=1}^\infty\frac{\mathrm{Trace}(A_n^k)}{\sqrt{n}^k}\frac{z^k}{k}\Bigr).$$ It is important to note that we truncate at a threshold that is fixed with respect to $n$, and that the orders of the powers in the traces are also fixed with respect to $n$. This is in sharp contrast with the usual Füredi-Komlós truncation-trace approach, related to the Gelfand spectral radius formula, used in all previous approaches to the spectral radius.
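The submatrix decomposition above is the classical expansion of a determinant in principal minors; for a small matrix it can be verified directly (Python sketch, ours):

```python
import itertools
import numpy as np

# Sketch: det(1 - z A/sqrt(n)) = 1 + sum_k (-z)^k sum_{|I|=k} det(A_{I,I}) / n^(k/2).
rng = np.random.default_rng(4)
n = 5
A = rng.standard_normal((n, n))
z = 0.7 - 0.2j

lhs = np.linalg.det(np.eye(n) - z * A / np.sqrt(n))
rhs = 1.0
for k in range(1, n + 1):
    minors = sum(np.linalg.det(A[np.ix_(I, I)])
                 for I in itertools.combinations(range(n), k))
    rhs += (-z) ** k * minors / n ** (k / 2)
print(abs(lhs - rhs))  # numerically zero
```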

Moment assumptions. The universality for the first order global asymptotics stated by the circular law depends only on the trace $\mathbb{E}[|a_{11}|^2]$ of the covariance matrix of $\Re a_{11}$ and $\Im a_{11}$. The universality stated by the convergence of $q_n$, just like for the central limit theorem, depends on the whole covariance matrix. Since
$$\mathbb{E}[a_{11}^2]=\mathbb{E}[(\Re a_{11})^2]-\mathbb{E}[(\Im a_{11})^2]+2\mathrm{i}\mathbb{E}[\Re a_{11}\Im a_{11}],$$ we can see that $\mathbb{E}[a_{11}^2]=0$ if and only if $$\mathbb{E}[(\Re a_{11})^2]=\mathbb{E}[(\Im a_{11})^2]\quad\text{and}\quad\mathbb{E}[\Re a_{11}\Im a_{11}]=0.$$  Moreover, we cannot in general get rid of $\mathbb{E}[a_{11}^2]$ by simply multiplying $A_n$ by a phase.

Hyperbolic Gaussian analytic function. When $\mathbb E\left[a_{11}^2\right]=0$ then $\kappa=1$, while the random analytic function $F$ which appears in the limit is a degenerate case of the well-known hyperbolic Gaussian analytic functions (GAFs). It can also be obtained as the antiderivative of the $L=2$ hyperbolic GAF which vanishes at $z=0$. This $L=2$ hyperbolic GAF is related to the Bergman kernel and could be called the Bergman GAF. These GAFs also appear in various places in mathematics and physics, in particular in the asymptotic analysis of Haar unitary matrices.
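When $\mathbb{E}[a_{11}^2]=0$, the covariance of $F$ is the logarithmic kernel $\mathbb{E}[F(z)\overline{F(w)}]=\sum_{k\geq1}\frac{(z\bar w)^k}{k}=-\log(1-z\bar w)$, which follows from the definition of $F$ and the covariances of the $X_k$. A quick Python check of this series identity (ours, with arbitrary points of the disc):

```python
import numpy as np

# Sketch: sum_{k>=1} (z conj(w))^k / k = -log(1 - z conj(w)) for z, w in the unit disc.
z, w = 0.6 + 0.2j, 0.3 - 0.5j
series = sum((z * np.conj(w)) ** k / k for k in range(1, 400))
closed = -np.log(1 - z * np.conj(w))
print(abs(series - closed))  # numerically zero
```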

Cauchy-Stieltjes transform. If $\mathbb{E}[a_{11}^2]=0$ then by returning to $p_n$, taking the logarithm and the derivative with respect to $z$ in the convergence of $q_n$, we obtain the convergence in law of the Cauchy-Stieltjes transform (complex conjugate of the electric field) minus $n/z$ towards $z \mapsto F'(1/z)/z^2$ which is a Gaussian analytic function on $\mathbb C \setminus \overline{\mathbb D}$ with covariance given by a Bergman kernel.

Central Limit Theorem. The convergence of $q_n$ can be seen as a central limit theorem for the log-determinant (global second order analysis). Namely, for all $z\in\mathbb{C}$ with $|z|>1$ (so that $1/z\in\mathbb{D}$), we have $$|q_n(1/z)| = \exp\left[-n \left(U_n(z)-U(z)\right)\right]$$ where $$U_n(z)=-\frac{1}{n}\log|p_n(z)|\quad\text{and}\quad U(z)=-\log|z|$$ are the logarithmic potentials at the point $z$ of the empirical spectral distribution of $\frac{1}{\sqrt n} A_n$ and of the uniform distribution on the unit disc $\mathbb{D}$.
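The identity $U(z)=-\log|z|$ for $|z|>1$ is the mean value property of the harmonic function $\lambda\mapsto\log|z-\lambda|$ averaged over the unit disc. A deterministic quadrature check in Python (ours, with an arbitrary point outside the disc):

```python
import numpy as np

# Sketch: for |z| > 1, the area average of log|z - lambda| over the unit
# disc equals log|z| (mean value property), i.e. U(z) = -log|z|.
z = 1.7 - 0.9j
m = 500
r = (np.arange(m) + 0.5) / m              # midpoint radii in (0, 1)
t = 2 * np.pi * (np.arange(m) + 0.5) / m  # midpoint angles
lam = np.outer(r, np.exp(1j * t))         # polar grid on the disc
wgt = np.outer(r, np.ones(m))             # area element r dr dtheta
avg = np.sum(np.log(np.abs(z - lam)) * wgt) / np.sum(wgt)
print(avg, np.log(abs(z)))  # both ~ 0.654
```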

Moreover, it is possible to extract from the convergence of $q_n$ a CLT for linear spectral statistics with respect to analytic functions in a neighborhood of $\overline{\mathbb{D}}$. This can be done by using the Cauchy formula for an analytic function $f$,
\begin{align*}\int f(\lambda) \mu(\mathrm d \lambda) &= \frac{1}{2\pi\mathrm{i}} \int \left(\oint \frac{f(z)}{z - \lambda} \mathrm d z\right) \mu(\mathrm d \lambda)\\ &= \frac{1}{2\pi\mathrm{i}} \oint f(z) \left(\int \frac{\mu(\mathrm d\lambda)} {z - \lambda} \right)\mathrm d z\\ &= \frac{1}{2\pi\mathrm{i}} \oint f(z)(\log \det \left(z - A \right))' \mathrm d z \end{align*}where $\mu$ is the counting measure of the eigenvalues of $A$, where the contour integral is around a centered circle of radius strictly larger than $1$, and where we have taken any branch of the logarithm. The approach is purely complex analytic. In particular, it is different from the usual approach with the logarithmic potential of $\mu$ based on the real function given by $z \mapsto \int\log|z-\lambda|\mu(\mathrm{d}\lambda) = \log|\det(z-A)|$.
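A numerical version of this contour computation (Python sketch, ours), using $(\log\det(z-A))'=\mathrm{Trace}((z-A)^{-1})$ and the trapezoidal rule on a circle enclosing the spectrum:

```python
import numpy as np

# Sketch: (1/(2 pi i)) oint f(z) trace((z - A)^{-1}) dz = sum_i f(lambda_i),
# via the trapezoidal rule on a circle of radius R enclosing the spectrum
# (spectrally accurate for periodic analytic integrands).
rng = np.random.default_rng(5)
n = 6
A = rng.standard_normal((n, n)) / np.sqrt(n)
f = lambda z: z**2 + 3 * z

R, m = 4.0, 256
zs = R * np.exp(2j * np.pi * np.arange(m) / m)
# with z = R e^{i theta}, dz = i z dtheta, so the contour integral is a mean:
integral = np.mean([f(z) * np.trace(np.linalg.inv(z * np.eye(n) - A)) * z for z in zs])
target = np.sum(f(np.linalg.eigvals(A)))
print(abs(integral - target))  # numerically zero
```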

Wigner case and elliptic interpolation. The finite second moment assumption is optimal. We could explore its relation with the finite fourth moment assumption for the convergence of the spectral edge of Wigner random matrices, which is also optimal. Heuristic arguments tell us that the interpolating condition on the matrix entries is $$\mathbb{E}[|a_{jk}|^2|a_{kj}|^2]<\infty,\quad j\neq k,$$ which is a finite second moment condition for Girko matrices and a finite fourth moment condition for Wigner matrices. This is work in progress.

Coupling and almost sure convergence. For simplicity, we define our random matrix $A_n$ for all $n\geq1$ by truncating from the upper left corner the infinite random matrix $\{a_{jk}\}_{j,k\geq1}$. This imposes a coupling between the matrices $\{A_n\}_{n\geq1}$. However, since our result on the spectral radius involves a convergence in probability, it remains valid for an arbitrary coupling, in the spirit of the triangular array assumptions used for classical CLTs. In another direction, one could ask about the upgrade to almost sure convergence. This is an open problem.

Heavy tails. An analogue of the circular law in the heavy-tailed case $\mathbb{E}[|a_{11}|^2]=\infty$ is already available but requires another scaling than $\frac{1}{\sqrt{n}}$. The spectral radius of this model tends to infinity as $n\to\infty$ but it could be possible to analyze the limiting point process at the edge as $n\to\infty$ and its universality. This is an open problem.

Julia code. Here is some Julia code illustrating the high dimensional phenomenon.

```julia
using LinearAlgebra, Plots # for eigvals(), scatter(), heatmap()

function phaseportrait(n)
    # eigenvalues of a normalized complex Ginibre matrix
    M = (randn(n, n) + im * randn(n, n)) / sqrt(2 * n)
    Spec = eigvals(M)
    display(scatter(real(Spec), imag(Spec), aspect_ratio = :equal, legend = false))
    # phase portrait of z -> prod(z * lambda - 1), the reciprocal charpoly up to sign
    r = 1000
    c = maximum(abs.(Spec)) # largest eigenvalue modulus
    c = c + c / 2           # add a margin around the spectrum
    P = zeros(r, r)
    for i in 1:r, j in 1:r
        z = (-c + 2 * c * i / r) + im * (-c + 2 * c * j / r)
        P[i, j] = angle(prod(z .* Spec .- 1)) # reciprocal charpoly phase
    end
    heatmap(P, aspect_ratio = :equal, legend = false, background_color = :transparent, c = :hsv)
end

phaseportrait(250)
```


Discrete analogue. The method can be adapted to sparse Boolean matrices, replacing the Gaussian limiting regime by a Poisson limiting regime, in relation to important aspects of the high dimensional analysis of random graphs. This was done by Simon Coste.

The Département de mathématiques et applications (DMA) of the École normale supérieure de Paris is both a research laboratory and a teaching department. This department has a rare peculiarity: no faculty position there is permanent, and the maximum term of appointment is ten years. Besides the administrative staff, PhD students, and postdocs, the members are essentially CNRS researchers and faculty members seconded by the Parisian universities. This peculiarity of the DMA is due to the mathematician Georges Poitou (1926 – 1989), who directed and transformed the institution.

In this small institution, the departments have rather small enrollments. At the DMA, most students come from the classes préparatoires aux grandes écoles, but also, to a lesser extent, from abroad and from French universities. In particular, every year a few mathematics undergraduates are recruited through the Concours normalien étudiant. This competitive exam deserves to be better known, as do those of ÉNS Paris-Saclay, ÉNS Lyon, and the École Polytechnique (which recruits many students from Dauphine!). The lines are shifting along the Franco-French divide between selective tracks and universities.

If you believe that the completion and right-continuity of filtrations are typical abstract nonsense of the general theory of stochastic processes, useless and obscure, you may be missing something interesting. In contrast with discrete time or space stochastic processes, continuous time and space stochastic processes naturally lead to measurability issues, for instance when considering natural objects such as running suprema or stopping times.

Negligible sets and completeness. In a probability space ${(\Omega,\mathcal{F},\mathbb{P})}$, we say that ${A\subset\Omega}$ is negligible when there exists ${A'\in\mathcal{F}}$ with ${A\subset A'}$ and ${\mathbb{P}(A')=0}$. We say that ${(\Omega,\mathcal{F},\mathbb{P})}$ is complete when ${\mathcal{F}}$ contains the negligible subsets of ${\Omega}$. A filtration ${{(\mathcal{F}_t)}_{t\in\mathbb{R}_+}}$ on ${(\Omega,\mathcal{F},\mathbb{P})}$ is complete when ${\mathcal{F}_0}$ contains the negligible elements of ${\mathcal{F}}$.

Completeness emerges naturally via almost sure events, which are complements of negligible sets (as for the running supremum below), as well as via projections of measurable sets of product spaces (as for hitting times of Borel sets below).

We say that a process ${{(X_t)}_{t\in\mathbb{R}_+}}$ taking values in a topological space equipped with its Borel ${\sigma}$-field is continuous when it has almost surely continuous trajectories. This is the case for instance of Brownian motion constructed from random series.

Measurability of running supremum from completeness. Let ${{(X_t)}_{t\in\mathbb{R}_+}}$ be continuous, defined on a probability space ${(\Omega,\mathcal{F},\mathbb{P})}$, and taking values in a topological space ${E}$ equipped with its Borel ${\sigma}$-field ${\mathcal{E}}$. Let ${f:E\rightarrow\mathbb{R}}$ be a continuous function.

• If ${(\Omega,\mathcal{F},\mathbb{P})}$ is complete then ${\sup_{s\in[0,t]}f(X_s)}$ is measurable for all ${t\in\mathbb{R}_+}$.
• If ${X}$ is adapted for a complete ${{(\mathcal{F}_t)}_{t\in\mathbb{R}_+}}$ then ${{(\sup_{s\in[0,t]}f(X_s))}_{t\in\mathbb{R}_+}}$ is adapted.

Proof. Let ${\Omega'\in\mathcal{F}}$ be an a.s. event on which ${X}$ is continuous. Set ${S_t=\sup_{s\in[0,t]}f(X_s)}$.

• We have ${\Omega'\in\mathcal{F}}$. Next, for all ${t\in\mathbb{R}_+}$ and ${A\in\mathcal{E}}$, we have

$\Omega'\cap\{S_t\in A\} =\Omega'\cap \bigl\{\sup_{s\in[0,t]\cap\mathbb{Q}}f(X_s)\in A\bigr\}\in\mathcal{F},$

while ${(\Omega\setminus\Omega')\cap\{S_t\in A\}\subset\Omega\setminus\Omega'}$ is negligible and thus in ${\mathcal{F}}$ by completeness.

• Same argument as before with ${\mathcal{F}_t}$ instead of ${\mathcal{F}}$.
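The reduction to a countable supremum is also what one does in practice: for a continuous trajectory, suprema over nested dyadic grids of $[0,t]$ increase to the supremum over the whole interval. A toy Python check (ours, with a deterministic "trajectory" in place of a random path):

```python
import numpy as np

# Sketch: for a continuous path, the sup over nested dyadic grids of [0, t]
# is nondecreasing in the grid refinement and converges to the sup over [0, t].
t = 1.0
X = lambda s: np.sin(7 * s) - s / 2   # a continuous "trajectory"
sups = [float(np.max(X(np.linspace(0.0, t, 2**k + 1)))) for k in range(3, 15)]
print(sups[0], sups[-1])  # increases to the true supremum ~ 0.8904
```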

Universal completeness. The notion of completeness is relative to the probability measure ${\mathbb{P}}$. There is also a notion of universal completeness, that does not depend on the probability measure, see Dellacherie and Meyer 1978. This is not that useful in probability.

Stopping times. A map ${T:\Omega\rightarrow[0,+\infty]}$ is a stopping time for ${{(\mathcal{F}_t)}_{t\in\mathbb{R}_+}}$ when

$\{T\leq t\}\in\mathcal{F}_t$

for all ${t\in\mathbb{R}_+}$. Contrary to discrete time filtrations, the notion of stopping time for continuous time filtrations leads naturally to the notions of complete filtration and right-continuous filtration. This is visible notably with hitting times, as follows.

Hitting times as archetypal examples of stopping times. Let ${X={(X_t)}_{t\in\mathbb{R}_+}}$ be a continuous and adapted process defined on ${(\Omega,\mathcal{F},\mathbb{P})}$ with respect to a complete filtration ${{(\mathcal{F}_t)}_{t\in\mathbb{R}_+}}$, and taking its values in a metric space ${G}$ equipped with its Borel ${\sigma}$-field. Then, for every closed subset ${A\subset G}$, the hitting time ${T_A:\Omega\rightarrow[0,+\infty]}$ of ${A}$, given by

$T_A=\inf\{t\in\mathbb{R}_+:X_t\in A\},$

with convention ${\inf\varnothing=+\infty}$, is a stopping time.

Proof. Let ${\Omega'}$ be an a.s. event on which ${X}$ is continuous. On ${\Omega'}$, since ${X}$ is continuous and ${A}$ is closed, we have ${\{t\in\mathbb{R}_+:X_t\in A\}=\{t\in\mathbb{R}_+:\mathrm{dist}(X_t,A)=0\}}$, the map ${t\in\mathbb{R}_+\mapsto\mathrm{dist}(X_t,A)}$ is continuous, and the ${\inf}$ in the definition of ${T_A}$ is a ${\min}$. Now, since ${X}$ is adapted, we have, for all ${t\in\mathbb{R}_+}$,

$\Omega'\cap\{T_A\leq t\} =\Omega'\cap\bigcap_{n\geq1}\bigcup_{s\in[0,t]\cap\mathbb{Q}}\bigl\{\mathrm{dist}(X_s,A)<\tfrac{1}{n}\bigr\} \in\mathcal{F}_t,$

where we have also used ${\Omega'\in\mathcal{F}_t}$ for all ${t\in\mathbb{R}_+}$ since ${{(\mathcal{F}_t)}_{t\in\mathbb{R}_+}}$ is complete. Moreover ${(\Omega\setminus\Omega')\cap\{T_A\leq t\}\subset\Omega\setminus\Omega'}$ is negligible, and thus in ${\mathcal{F}_t}$ by completeness.

Right-continuity. A filtration ${{(\mathcal{F}_t)}_{t\in\mathbb{R}_+}}$ is right-continuous if for all ${t\in\mathbb{R}_+}$ we have

$\mathcal{F}_t=\mathcal{F}_{t+} \quad\text{where}\quad \mathcal{F}_{t+} =\bigcap_{\varepsilon>0}\mathcal{F}_{t+\varepsilon} =\bigcap_{s>t}\mathcal{F}_s.$

Alternative definition of stopping times. If ${T:\Omega\rightarrow[0,+\infty]}$ is a stopping time with respect to a filtration ${{(\mathcal{F}_t)}_{t\in\mathbb{R}_+}}$ then

$\{T<t\}\in\mathcal{F}_t$

for all ${t\in\mathbb{R}_+}$. Conversely this property implies that ${T}$ is a stopping time when the filtration is right-continuous. Indeed, if ${T}$ is a stopping time then for all ${t\in\mathbb{R}_+}$ we have

$\{T<t\} =\bigcup_{n=1}^\infty\{T\leq t-{\textstyle\frac{1}{n}}\} \in\mathcal{F}_t.$

Conversely ${\{T\leq t\}\in\cap_{s>t}\mathcal{F}_s=\mathcal{F}_{t+}}$ since for all ${s>t}$,

$\{T\leq t\} =\bigcap_{n=1}^\infty\{T<(t+{\textstyle\frac{1}{n}})\wedge s\} \in\mathcal{F}_{s}.$

Note that if ${T}$ is a stopping time then ${\{T=t\}=\{T\leq t\}\cap\{T<t\}^c\in\mathcal{F}_t}$.

Progressively measurable processes. Recall that a process ${{(X_t)}_{t\in\mathbb{R}_+}}$ defined on a probability space ${(\Omega,\mathcal{F},\mathbb{P})}$ is progressively measurable for ${{(\mathcal{F}_t)}_{t\in\mathbb{R}_+}}$ when for all ${t\in\mathbb{R}_+}$ the map ${(\omega,s)\in\Omega\times[0,t]\mapsto X_s(\omega)}$ is measurable for ${\mathcal{F}_t\otimes\mathcal{B}_{[0,t]}}$. Examples of progressively measurable processes include adapted right-continuous processes.

Hitting time of Borel sets. Let ${X={(X_t)}_{t\in\mathbb{R}_+}}$ be a progressively measurable process defined on a probability space ${(\Omega,\mathcal{F},\mathbb{P})}$ equipped with a right-continuous and complete filtration ${{(\mathcal{F}_t)}_{t\in\mathbb{R}_+}}$, and taking its values in a measurable space ${G}$. Then, for every measurable subset ${A\subset G}$, the hitting time ${T_A:\Omega\rightarrow[0,+\infty]}$ defined by

$T_A=\inf\{t\in\mathbb{R}_+:X_t\in A\},$

with convention ${\inf\varnothing=+\infty}$, is a stopping time.

Proof. The debut ${D_B}$ of any ${B\in\mathcal{F}\otimes\mathcal{B}(\mathbb{R}_+)}$ is defined for all ${\omega\in\Omega}$ by

$D_B(\omega)=\inf\{t\in\mathbb{R}_+:(\omega,t)\in B\}\in[0,+\infty].$

If ${B}$ is progressive, then ${D_B}$ is a stopping time (this is known as the debut theorem). Indeed, for all ${t\in\mathbb{R}_+}$ the set ${\{D_B<t\}}$ is then the projection on ${\Omega}$ of

$C=\{(\omega,s)\in\Omega\times[0,t):(\omega,s)\in B\},$

which belongs to ${\mathcal{F}_t\otimes\mathcal{B}([0,t])}$ since ${B}$ is progressive. Since the filtration is complete, this projection belongs to ${\mathcal{F}_t}$, see for instance Dellacherie and Meyer, Theorem IV.50 page 116. Now ${\{D_B<t\}\in\mathcal{F}_t}$ for all ${t\in\mathbb{R}_+}$ implies that ${D_B}$ is a stopping time since the filtration is right-continuous. Finally it remains to note that

$T_A=D_B\quad\text{with}\quad B=\{(\omega,t):X_t\in A\},$

which is progressive as the pre-image of ${A}$ by the map ${(\omega,t)\mapsto X_t(\omega)}$ (${X}$ is progressively measurable).

A famous mistake. This is related to a famous mistake made by Henri Lebesgue (1875 – 1941) on the measurability of projections of measurable sets in product spaces, a mistake that motivated Nikolai Luzin (1883 – 1950) and his student Mikhail Suslin (1894 – 1919) to forge the concept of analytic sets and descriptive set theory.

[…] Although the term σ-field (tribu) was not used at the time, it seemed to Borel that no operation of analysis would ever lead outside the Borel σ-field. This was also Lebesgue's opinion, and he believed he had proved it in 1905. Rarely has a mistake been more fruitful. At the beginning of 1917, the Comptes rendus published two notes by the Russians Nicolas Lusin and M. Ya. Souslin. […] The projection of a Borel set is not necessarily a Borel set. Classical analysis thus forces one to leave the Borel σ-field. Between the Borel σ-field and the Lebesgue σ-field lies the Lusin σ-field, made of the sets that Lusin called analytic, which are continuous images of Borel sets. During the 1920s an extremely brilliant mathematical school developed in Moscow, of which Lusin was the founder. Thus Lebesgue and his work were much better known in Moscow than they were in France. Hungary, Poland, and Russia were the centers from which Lebesgue's thought and legacy radiated. […]

Jean-Pierre Kahane (2001)

Canonical filtration. It is customary to assume that the underlying filtration is right-continuous and complete. For a given filtration ${{(\mathcal{F}_t)}_{t\in\mathbb{R}_+}}$, it is always possible to consider its completion ${(\sigma_t)}_{t\in\mathbb{R}_+}={(\sigma(\mathcal{N}\cup\mathcal{F}_t))}_{t\in\mathbb{R}_+}$ where ${\mathcal{N}}$ is the collection of negligible elements of ${\mathcal{F}}$. It is also customary to consider the right-continuous version ${{(\sigma_{t+})}_{t\in\mathbb{R}_+}}$, called the canonical filtration. A process is always adapted with respect to the canonical filtration constructed from its completed natural filtration.

Subtleties about right-continuity of filtrations. The natural filtration of a right-continuous process is not right-continuous in general: a counterexample is given by ${X_t=tZ}$ for all ${t\in\mathbb{R}_+}$ where ${Z}$ is a non-constant random variable. Indeed, we have ${\sigma(X_0)=\{\varnothing,\Omega\}}$ while ${\mathcal{F}_{0+}=\bigcap_{\varepsilon>0}\sigma(X_s:s\leq\varepsilon)=\sigma(Z)\neq\sigma(X_0)}$. However, it can be shown that the completion of the natural filtration of a Feller Markov process, including all Lévy processes and in particular Brownian motion, is always right-continuous.