Press "Enter" to skip to content

Libres pensées d'un mathématicien ordinaire

Portmanteau

The Portmanteau theorem gives several statements equivalent to the narrow convergence, i.e. the weak convergence of probability measures with respect to continuous bounded functions. I wonder whether Portmanteau was a mathematician or whether this name is just due to the fact that the theorem is a portmanteau for several statements.
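For the record, the classical equivalences read as follows: if $latex (\mu_n)_{n\geq1}$ and $latex \mu$ are probability measures on a metric space, then $latex \mu_n\to\mu$ narrowly if and only if $latex \int f\,d\mu_n\to\int f\,d\mu$ for every bounded continuous $latex f$, if and only if $latex \limsup_n\mu_n(F)\leq\mu(F)$ for every closed set $latex F$, if and only if $latex \liminf_n\mu_n(G)\geq\mu(G)$ for every open set $latex G$, if and only if $latex \lim_n\mu_n(A)=\mu(A)$ for every Borel set $latex A$ with $latex \mu(\partial A)=0$.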


Eigenvectors universality for random matrices

Let $latex (X_{j,k})_{j,k\geq1}$ be an infinite table of complex random variables and set $latex X:=(X_{j,k})_{1\leq j,k\leq n}$. If $latex X_{11}$ is Gaussian then $latex X$ belongs to the so-called Ginibre Ensemble. Consider the random unitary matrices $latex U$ and $latex V$ such that $latex X=UDV$ where $latex D=\mathrm{diag}(s_1,\ldots,s_n)$ and where $latex s_1,\ldots,s_n$ are the singular values of $latex X$, i.e. the eigenvalues of $latex \sqrt{XX^*}$. When $latex X_{11}$ is Gaussian, the law of $latex X$ is rotationally invariant, and the matrices $latex U$ and $latex V$ are distributed according to the Haar law on the unitary group $latex \mathbb{U}_n$. The Gaussian version of the Marchenko-Pastur theorem tells us that with probability one, the counting probability distribution of the singular values, appropriately scaled, tends weakly to the quarter-circular law as $latex n\to\infty$.
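To see the quarter-circular law emerge, here is a minimal simulation sketch in Python (assuming NumPy and Matplotlib; the size $latex n$ and the seed are arbitrary choices): sample an $latex n\times n$ complex Ginibre matrix with $latex \mathbb{E}|X_{11}|^2=1$, compute the singular values of $latex n^{-1/2}X$, and compare their histogram with the density $latex x\mapsto\frac{1}{\pi}\sqrt{4-x^2}$ on $latex [0,2]$.

import numpy as np
import matplotlib.pyplot as plt

n = 1000
rng = np.random.default_rng(0)
# complex Gaussian entries with E|X_11|^2 = 1
X = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
s = np.linalg.svd(X, compute_uv=False) / np.sqrt(n)  # singular values of X / sqrt(n)

x = np.linspace(0, 2, 400)
plt.hist(s, bins=50, density=True, alpha=0.5, label="scaled singular values")
plt.plot(x, np.sqrt(4 - x**2) / np.pi, label="quarter-circular density")
plt.legend()
plt.show()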

The Marchenko-Pastur theorem is universal in the sense that it holds with the same limit beyond the Gaussian case, provided that $latex X_{11}$ has moments identical to the Gaussian moments up to order 2. One can ask if a similar statement holds for the eigenvectors, i.e. for the matrices $latex U$ and $latex V$. Are they asymptotically Haar distributed? For instance, one may ask if $latex W_2(\mathcal{L}(U),\mathrm{Haar}(\mathbb{U}_n))$ tends to zero as $latex n\to\infty$, where $latex W_2$ is the Wasserstein coupling distance. The choice of distance is important. One may consider many other distances, including for instance the Fourier distance $latex \sup_g|\Phi_\mu(g)-\Phi_\nu(g)|$ where $latex \Phi_\mu$ denotes the Fourier transform of $latex \mu$ (characteristic function). A weakened version of this statement consists in asking if linear functionals of $latex U$ and $latex V$ behave asymptotically as Brownian bridges. Indeed, it is well known that linear functionals of the Haar law on the unitary group behave asymptotically like this. Silverstein has done some work in this direction. Of course, one can ask the same question for the eigenvectors in the Girko circular law and in the Wigner theorem. One can guess that a finite fourth moment assumption on $latex X_{11}$ is needed, otherwise the top of the spectrum will blow up and the corresponding eigenvectors may localize.
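To play numerically with this weakened (Brownian bridge) version of the question, here is a rough sketch with Rademacher ±1 entries instead of Gaussian ones; the normalization constant of the limiting bridge is not tracked, and the choice of the first column of $latex U$ is arbitrary.

import numpy as np
import matplotlib.pyplot as plt

n = 2000
rng = np.random.default_rng(1)
X = rng.choice([-1.0, 1.0], size=(n, n))  # i.i.d. Rademacher entries (non-Gaussian)
U, _, _ = np.linalg.svd(X)
u = U[:, 0]  # first column of the left singular vectors matrix

# partial sum process of |U_{i1}|^2 - 1/n, rescaled by sqrt(n);
# for a Haar distributed U this path looks like a Brownian bridge (up to a constant)
bridge = np.sqrt(n) * np.cumsum(np.abs(u) ** 2 - 1.0 / n)
plt.plot(np.arange(1, n + 1) / n, bridge)
plt.xlabel("t")
plt.show()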

If you do not trust me, just do simulations or… computations! There is potentially a whole line of research here, sparsely explored for the moment. If you like free probability, you may ask if $latex U'XV'$ is close to $latex X$ when $latex U'$ and $latex V'$ are Haar distributed and independent of $latex X$.

There is some literature on the behavior of eigenvectors of deterministic matrices under perturbations of the entries of the matrix, see e.g. the book of Bhatia (ch. VII). Among many results, if $latex A$ and $latex B$ are two invertible $latex n\times n$ complex matrices with respective unitary factors $latex U_A$ and $latex U_B$ in their polar decompositions, then for any unitarily invariant norm $latex \left\Vert\cdot\right\Vert$ we have

$latex \displaystyle\left\Vert U_A-U_B\right\Vert\leq 2\frac{\left\Vert A-B\right\Vert}{\left\Vert A^{-1}\right\Vert^{-1}+\left\Vert B^{-1}\right\Vert^{-1}}.$
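Here is a quick numerical sanity check of this bound (a sketch, using the operator norm, which is unitarily invariant and for which $latex \left\Vert A^{-1}\right\Vert^{-1}$ is the smallest singular value of $latex A$); the unitary polar factor is extracted from the SVD, writing $latex A=W\Sigma V^*$ and $latex U_A=WV^*$.

import numpy as np

def polar_unitary(A):
    # unitary factor of the polar decomposition, via the SVD
    W, _, Vh = np.linalg.svd(A)
    return W @ Vh

rng = np.random.default_rng(2)
n = 50
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
B = A + 0.1 * (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))

lhs = np.linalg.norm(polar_unitary(A) - polar_unitary(B), 2)
rhs = 2 * np.linalg.norm(A - B, 2) / (
    1 / np.linalg.norm(np.linalg.inv(A), 2) + 1 / np.linalg.norm(np.linalg.inv(B), 2)
)
print(lhs, "<=", rhs, lhs <= rhs)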

The eigenvectors are more sensitive than the bulk of the spectrum to perturbations of $latex X$, and one may understand this by remembering that for a normal matrix, they are arg-suprema while the eigenvalues are suprema. Also, one can guess that the asymptotic uniformization of the eigenvectors may even be sensitive to the skewness of the law of $latex X_{11}$.
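To make the slogan concrete: for a normal matrix $latex A$ with eigenvalues $latex \lambda_1(A),\ldots,\lambda_n(A)$,

$latex \displaystyle\max_{1\leq i\leq n}|\lambda_i(A)|=\max_{\left\Vert x\right\Vert_2=1}\left\Vert Ax\right\Vert_2,$

the eigenvalue being the value of the maximum while a corresponding eigenvector is a point where it is attained, hence typically more fragile under perturbations.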

It is well known that the $latex k$-dimensional projection of the uniform law on the sphere of $latex \mathbb{R}^n$ of radius $latex \sqrt{n}$ tends to the standard Gaussian law as $latex n\to\infty$. By viewing $latex \mathbb{U}_n$ as a bunch of exchangeable spheres, one can guess that the Haar law on the unitary group, appropriately scaled, will converge in some sense to the Brownian sheet bridge as the dimension tends to infinity. Recent addition to this post: this was proved in a paper by Donati-Martin and Rouault! We conjecture that this result is universal for the eigenvectors matrix of random matrices with i.i.d. entries and moments identical to the Gaussian moments up to order $latex 4$.
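Here is an illustrative sketch of the spherical projection fact (the dimension and the sample size are arbitrary): a uniform point on the sphere of radius $latex \sqrt{n}$ is obtained by normalizing a standard Gaussian vector, and the histogram of its first coordinate is compared with the standard Gaussian density.

import numpy as np
import matplotlib.pyplot as plt

n, samples = 500, 20000
rng = np.random.default_rng(3)
G = rng.standard_normal((samples, n))
# uniform points on the sphere of radius sqrt(n) in R^n
points = np.sqrt(n) * G / np.linalg.norm(G, axis=1, keepdims=True)
first_coord = points[:, 0]  # 1-dimensional projection

x = np.linspace(-4, 4, 400)
plt.hist(first_coord, bins=60, density=True, alpha=0.5, label="first coordinate")
plt.plot(x, np.exp(-x**2 / 2) / np.sqrt(2 * np.pi), label="standard Gaussian density")
plt.legend()
plt.show()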

The uniformization of the eigenvectors of random matrices is related to their delocalization, a phenomenon recently investigated by Erdös, Schlein, Ramirez, Yau, Tao, Vu, as a byproduct of their analysis of the universality of local statistics of the spectrum. This is in sharp contrast with the well known Anderson localization phenomenon in mathematical physics for random Schrödinger operators.

The unitary group $latex \mathbb{U}_n$ is a purely $latex \ell^2$ object. Its $latex \ell^1$ analogue is the Birkhoff polytope of doubly stochastic matrices, also known as the transportation polytope, assignment polytope, or perfect matching polytope, but this is another story…

This post benefited from discussions with Charles Bordenave and Florent Benaych-Georges.


{0,1}

It is amazing to realize how complex things in Mathematics and in Computer Science can be reduced after all to 0 and 1, in other words, to the simple notion of difference… Is it beautiful or disappointing? Well, maybe both! Any resemblance to actual events is coincidental. In fact, and to be more precise, we must speak of sequences of 0 and 1, thereby making the role of ∞ more apparent. In a way, Computer Science is the reign of finite sequences of 0 and 1 while Mathematics is the reign of infinite sequences of 0 and 1, in other words, the reign of ∞. Between the two, you may take a look at the concepts of Turing machines and Kolmogorov complexity. You may also take a look at Peano axioms and Gödel's incompleteness theorems. In Probability Theory, the modelling of the Heads-or-Tails coin-tossing game involves a probability measure on {0,1}^ℕ, the set of infinite sequences of 0 and 1. It has been shown recently that, in a sense, almost all large statements are undecidable, see for instance
