# Month: November 2010

I have posted today on arXiv a paper entitled *Intertwining and commutation relations for birth-death processes*, a joint work with Aldéric Joulin.

Given a birth-death process on ${\mathbb{N}}$ with semigroup ${(P_t)_{t\geq 0}}$ and a discrete gradient ${\partial_u}$ depending on a positive weight ${u}$, we establish intertwining relations of the form ${\partial_u P_t = Q_t\partial_u }$, where ${(Q_t)_{t\geq 0}}$ is the Feynman-Kac semigroup with potential ${V_u}$ of another birth-death process. We provide applications when ${V_u}$ is positive and uniformly bounded from below, including Lipschitz contraction and Wasserstein curvature, various functional inequalities, and stochastic orderings. The proofs are remarkably simple and rely on interpolation, commutation, and convexity.

Let us give the main ingredient. We consider a birth-death process ${(X_t)_{t\geq 0}}$ on the state space ${\mathbb{N} := \{ 0,1,2, \ldots \}}$, i.e. a Markov process with transition probabilities given by

$P_t^x (y) = \mathbb{P}_x (X_t =y) = \lambda_x t\,\mathbf{1}_{y=x+1} +\nu_x t\,\mathbf{1}_{y=x-1} +(1- (\lambda_x + \nu_x) t)\,\mathbf{1}_{y=x} + o(t), \quad t\rightarrow0^+.$

The transition rates ${\lambda}$ and ${\nu}$ are respectively called the birth and death rates of the process ${(X_t)_{t\geq 0}}$. We assume that the process is irreducible, positive recurrent, and non-explosive. This holds when the rates satisfy ${\lambda>0}$ on ${\mathbb{N}}$, ${\nu>0}$ on ${\mathbb{N}^*}$, ${\nu_0 = 0}$, and

$\sum_{x=1}^\infty \frac{\lambda_0 \lambda_1 \cdots \lambda_{x-1}}{\nu_1 \nu_2 \cdots \nu_x} <\infty \quad\text{and}\quad \sum_{x=1}^\infty \left(\frac{1}{\lambda_x}+\frac{\nu_x}{\lambda_x\lambda_{x-1}} +\cdots+\frac{\nu_x\cdots\nu_1}{\lambda_x\cdots\lambda_1\lambda_0}\right) = \infty.$

The unique stationary distribution ${\mu}$ of the process is reversible and is given by

$\mu (x) = \mu (0) \prod_{y=1}^x \frac{\lambda_{y-1}}{\nu_y} ,\ x\in\mathbb{N} \quad \text{with} \quad \mu (0) := \left(1+\sum_{x=1}^\infty \frac{\lambda_0\lambda_1\cdots\lambda_{x-1}}{\nu_1\nu_2\cdots\nu_x}\right)^{-1} . \ \ \ \ \ (1)$
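As a quick numerical sanity check of formula (1), here is a short Python sketch (NumPy; the truncation level 60 and the rates are arbitrary choices) that recomputes the stationary law of the queue with ${\lambda_x=\lambda}$ and ${\nu_x=\nu x}$ (the ${M/M/\infty}$ queue discussed below) and recovers the Poisson measure of mean ${\rho=\lambda/\nu}$:

```python
import math
import numpy as np

# Stationary distribution from formula (1), truncated at an arbitrary level N.
def stationary(lam, nu, N):
    w = np.ones(N + 1)
    for x in range(1, N + 1):
        w[x] = w[x - 1] * lam(x - 1) / nu(x)  # mu(x)/mu(0) = prod lam_{y-1}/nu_y
    return w / w.sum()

# Sanity check on the M/M/infinity queue (lam_x = lam, nu_x = nu*x), whose
# stationary law is the Poisson measure of mean rho = lam/nu.
lam_, nu_ = 2.0, 1.0
rho = lam_ / nu_
mu = stationary(lambda x: lam_, lambda x: nu_ * x, 60)
poisson = np.array([math.exp(-rho) * rho ** x / math.factorial(x)
                    for x in range(61)])
```

The truncation error is negligible here since the Poisson tail beyond 60 is astronomically small.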

Let us denote by ${\mathcal{F}}$ (respectively ${\mathcal{F}_{\!\!+}}$) the space of real-valued (respectively positive) functions ${f}$ on ${\mathbb{N}}$, and let ${b\mathcal{F}}$ be the subspace of bounded functions. The associated semigroup ${(P_t )_{t\geq 0}}$ is defined for any function ${f\in b\mathcal{F} \cup \mathcal{F}_+}$ and ${x\in\mathbb{N}}$ as

$P_t f (x) = \mathbb{E}_x [f(X_t)] = \sum_{y=0}^\infty f(y) P_t^x (y).$

This family of operators is positivity preserving and contractive on ${L^p (\mu)}$, ${p\in [1,\infty]}$. The semigroup is moreover symmetric in ${L^2(\mu)}$ since ${\lambda_x\mu(x) = \nu_{x+1}\mu(x+1)}$ for every ${x\in\mathbb{N}}$ (detailed balance equation). The generator ${\mathcal{L}}$ of the process is given for any ${f\in \mathcal{F}}$ and ${x\in\mathbb{N}}$ by

$\mathcal{L} f(x) = \lambda_x \, \left( f(x+1) -f(x)\right) + \nu_x \, \left( f(x-1) -f(x)\right) = \lambda_x \, \partial f (x) + \nu_x \, \partial^* f(x),$

where ${\partial }$ and ${\partial^*}$ are respectively the forward and backward discrete gradients on ${\mathbb{N}}$:

$\partial f(x) := f(x+1)-f(x) \quad \text{and} \quad \partial^* f(x) := f(x-1)-f(x) .$
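On a truncated state space the generator becomes a tridiagonal matrix, and ${P_t=e^{t\mathcal{L}}}$ can then be computed directly. A small Python sketch (NumPy/SciPy; the truncation level, the rates, and the test function are arbitrary choices):

```python
import numpy as np
from scipy.linalg import expm

# Truncated generator on {0,...,N}: births out of state N are dropped,
# which is harmless when the stationary mass near N is negligible.
def generator(lam, nu, N):
    L = np.zeros((N + 1, N + 1))
    for x in range(N + 1):
        if x < N:
            L[x, x + 1] = lam(x)
        if x > 0:
            L[x, x - 1] = nu(x)
        L[x, x] = -L[x].sum()   # rows sum to zero: Markov generator
    return L

N = 40
lam_, nu_ = 1.0, 1.5                                    # arbitrary M/M/1 rates
L = generator(lambda x: lam_, lambda x: nu_ * (x > 0), N)
f = np.exp(-np.arange(N + 1) / 3.0)                     # an arbitrary test function
Lf = L @ f
x = 5                                                   # an interior point
check = lam_ * (f[x + 1] - f[x]) + nu_ * (f[x - 1] - f[x])
P_t = expm(0.7 * L)   # the semigroup P_t at t = 0.7: a stochastic matrix
```

At any interior point, ${\mathcal{L}f(x)}$ computed by the matrix agrees exactly with ${\lambda_x\,\partial f(x)+\nu_x\,\partial^* f(x)}$.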

Our approach is inspired by the remarkable properties of two special birth-death processes: the ${M/M/1}$ and the ${M/M/\infty}$ queues. The ${M/M/\infty}$ queue has rates ${\lambda_x=\lambda}$ and ${\nu_x=\nu x}$ for positive constants ${\lambda}$ and ${\nu}$. It is positive recurrent and its stationary distribution is the Poisson measure ${\mu_\rho}$ with mean ${\rho=\lambda/\nu}$. If ${\mathcal{B}_{x,p}}$ stands for the binomial distribution of size ${x\in\mathbb{N}}$ and parameter ${p \in [0,1]}$, the ${M/M/\infty}$ process satisfies, for every ${x\in\mathbb{N}}$ and ${t\geq0}$, the Mehler-type formula

$\mathrm{Law} (X_t |X_0 = x) = \mathcal{B}_{x, e^{-\nu t}} \ast \mu_{\rho (1-e^{-\nu t})}. \ \ \ \ \ (2)$

The ${M/M/1}$ queueing process has rates ${\lambda_x=\lambda}$ and ${\nu_x=\nu\,\mathbf{1}_{x\geq1}}$ where ${0<\lambda<\nu}$ are constants. It is a positive recurrent random walk on ${\mathbb{N}}$ reflected at ${0}$. Its stationary distribution ${\mu}$ is the geometric measure with parameter ${\rho := \lambda /\nu}$ given by ${\mu (x) = (1-\rho)\rho^x}$ for all ${x\in \mathbb{N}}$. A remarkable common property shared by the ${M/M/1}$ and ${M/M/\infty}$ processes is the intertwining relation

$\partial \mathcal{L} = \mathcal{L}^{V} \partial \ \ \ \ \ (3)$

where ${\mathcal{L}^{V}=\mathcal{L}-V}$ is the discrete Schrödinger operator with potential ${V}$ given by

• ${V(x) := \nu}$ in the case of the ${M/M/\infty}$ queue
• ${V(x) := \nu \mathbf{1}_{\{0\}}(x)}$ for the ${M/M/1}$ queue.

The operator ${\mathcal{L} ^{V}}$ is the generator of a Feynman-Kac semigroup ${(P_t^{V})_{t\geq 0}}$ given by

$P_t^{V} f(x) = \mathbb{E}_x \left[ f(X_t) \exp \left(-\int_0^t V(X_s) ds \right) \right].$
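On a truncated state space this Feynman-Kac semigroup is just ${e^{t(\mathcal{L}-V)}}$ with ${V}$ viewed as a diagonal matrix. A Python sketch with the ${M/M/1}$ potential ${V=\nu\mathbf{1}_{\{0\}}}$ (truncation level, time, and rates are arbitrary choices); when ${V\geq0}$ the resulting matrix is sub-Markovian:

```python
import numpy as np
from scipy.linalg import expm

# Feynman-Kac semigroup on a truncated state space: exp(t(L - diag(V))).
def generator(lam, nu, N):
    L = np.zeros((N + 1, N + 1))
    for x in range(N + 1):
        if x < N:
            L[x, x + 1] = lam(x)
        if x > 0:
            L[x, x - 1] = nu(x)
        L[x, x] = -L[x].sum()
    return L

N, t = 40, 0.5                                            # arbitrary choices
L = generator(lambda x: 1.0, lambda x: 1.5 * (x > 0), N)  # M/M/1, lam=1, nu=1.5
V = 1.5 * (np.arange(N + 1) == 0)                         # V = nu * 1_{x=0}
PtV = expm(t * (L - np.diag(V)))
# With V >= 0, P_t^V is sub-Markovian: nonnegative entries and row sums <= 1;
# mass is lost exactly when the trajectory visits {V > 0}.
```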

The intertwining relation (3) is the infinitesimal version at time ${t=0}$ of the semigroup intertwining

$\partial P_t f (x) = P_t^{V} \partial f (x) = \mathbb{E}_x \left[ \partial f(X_t) \, \exp \left( - \int_0^t V(X_s) \, ds \right)\right] . \ \ \ \ \ (4)$

Conversely, one may deduce (4) from (3) by using a semigroup interpolation. Namely, if we consider

$s\in[0,t]\mapsto J(s) := P_s^{V} \partial P_{t-s} f$

with ${V}$ as above, then (4) rewrites as ${J(0) = J(t)}$ and (4) follows from (3) since

$J'(s) = P_s^{V} \left( \mathcal{L}^{V} \partial P_{t-s} f - \partial \mathcal{L} P_{t-s} f \right) =0.$

Let us fix some ${u \in \mathcal{F}_{\!\!+}}$. The ${u}$-modification of the original process ${(X_t)_{t\geq 0}}$ is a birth-death process ${(X_{u, t})_{t\geq 0}}$ with semigroup ${(P_{u,t})_{t\geq 0}}$ and generator ${\mathcal{L}_u}$ given by

$\mathcal{L}_u f(x) = \lambda^u_x \, \partial f (x) + \nu^u_x \, \partial^* f(x),$

where the birth and death rates are respectively given by

$\lambda^u_x := \frac{u_{x+1}}{u_x} \, \lambda_{x+1} \quad\text{and}\quad \nu^u_x := \frac{u_{x-1}}{u_x} \, \nu_x .$

One can check that the measure ${\lambda u^2\mu}$ is reversible for ${(X_{u,t})_{t\geq0}}$. As a consequence, the process ${(X_{u,t})_{t\geq0}}$ is positive recurrent if and only if ${\lambda u^2}$ is ${\mu}$-integrable. We define the discrete gradient ${\partial_u}$ and the potential ${V_u}$ by

$\partial_u := (1/u)\partial \quad\text{and}\quad V_u (x) := \nu_{x+1} - \nu^u_x +\lambda_x - \lambda^u_x.$
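These formulas translate directly into code. A small Python sketch (all rates and the weight are illustrative choices); for the ${M/M/\infty}$ queue with ${u\equiv1}$ the potential collapses to the constant ${\nu}$, as stated above:

```python
# u-modification rates and potential V_u, directly from the formulas above.
def modified_rates(lam, nu, u, x):
    lam_u = u(x + 1) / u(x) * lam(x + 1)
    nu_u = u(x - 1) / u(x) * nu(x) if x > 0 else 0.0  # nu_0 = 0 anyway
    return lam_u, nu_u

def potential(lam, nu, u, x):
    lam_u, nu_u = modified_rates(lam, nu, u, x)
    return nu(x + 1) - nu_u + lam(x) - lam_u

# M/M/infinity with u identically one: V_u is the constant nu
lam_, nu_ = 2.0, 1.0   # arbitrary rates
V = [potential(lambda x: lam_, lambda x: nu_ * x, lambda x: 1.0, x)
     for x in range(10)]
```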

Let ${\varphi : \mathbb{R}\rightarrow\mathbb{R}_+}$ be a smooth convex function such that for some constant ${c>0}$, and for all ${r\in\mathbb{R}}$,

$\varphi'(r)\,r \geq c\,\varphi (r). \ \ \ \ \ (5)$

In particular, ${\varphi}$ vanishes at ${0}$, is non-increasing on ${(-\infty , 0)}$ and non-decreasing on ${(0,\infty)}$. Moreover, the behavior at infinity is at least polynomial of degree ${c}$. Note that one can easily find a sequence of such functions converging pointwise to the absolute value ${\left|\cdot\right|}$.

Theorem 1 (Intertwining and sub-commutation) Assume that for every ${x\in\mathbb{N}}$ and ${t\geq0}$, we have

$\mathbb{E} _x \left[ \exp \left( - \int_0^t V_u (X_{u,s}) \, ds \right)\right] <\infty .$

Then for every ${f\in b\mathcal{F}}$, ${x\in\mathbb{N}}$, ${t\geq0}$,

$\partial_u P_t f (x) \, = \, P_{u,t}^{V_u} \partial_u f (x) \, = \, \mathbb{E} _x \left[ \partial_u f(X_{u,t}) \, \exp \left( - \int_0^t V_u (X_{u,s}) \, ds \right)\right]. \ \ \ \ \ (6)$

Moreover, if ${V_u\geq0}$ then for every ${f\in b\mathcal{F}}$, ${x\in\mathbb{N}}$, ${t\geq0}$,

$\varphi \left( \partial_u P_t f \right)(x) \leq \mathbb{E}_x \left[ \varphi( \partial_u f) (X_{u,t}) \, \exp \left( - \int_0^t c V_u ( X_{u,s}) \, ds \right)\right] . \ \ \ \ \ (7)$

Proof: Let us prove (7). If we define

$s\in[0,t]\mapsto J(s) := P_{u,s}^{cV_u} \varphi (\partial_u P_{t-s} f)$

then (7) rewrites as ${J(0) \leq J(t)}$. Hence it suffices to show that ${J}$ is non-decreasing. We have the intertwining relation

$\partial_u \mathcal{L} = \mathcal{L}_u^{V_u} \partial_u, \ \ \ \ \ (8)$

where ${\mathcal{L}_u}$ is the generator of the ${u}$-modification process ${(X_{u,t})_{t\geq 0}}$ and where

$\mathcal{L}_u^{V_u}:=\mathcal{L}_u-V_u.$

Now

$J'(s) = P_{u,s} ^{cV_u} (T) \quad\text{where}\quad T = \mathcal{L}_u^{cV_u} \varphi (\partial_u P_{t-s} f) - \varphi'(\partial_u P_{t-s} f)\, \partial_u \mathcal{L} P_{t-s}f .$

Letting ${g_u = \partial_u P_{t-s} f}$, we obtain, by using (8),

$T = \mathcal{L}_u^{cV_u} \varphi (g_u) - \varphi'(g_u) \mathcal{L}_u^{V_u} g_u$

and thus

$T = \lambda^u \left( \partial \varphi (g_u) - \varphi'(g_u)\partial g_u \right) + \nu^u \left( \partial^* \varphi (g_u) - \varphi'(g_u)\partial^* g_u \right) + V_u \left( \varphi'(g_u) g_u - c\varphi (g_u)\right).$

Now (5) and ${V_u\geq0}$ give ${T\geq0}$. Since the Feynman-Kac semigroup ${(P_{u,t}^{cV_u})_{t\geq 0}}$ is positivity preserving, we get (7). The proof of (6) is similar but simpler (${T}$ is identically zero). ☐

The identity (6) implies a propagation of monotonicity: if ${f}$ is non-increasing then ${P_tf}$ is also non-increasing.
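The intertwining relation (8) can be checked numerically at the matrix level: on a truncated state space, the identity ${\partial_u \mathcal{L} = \mathcal{L}_u^{V_u}\partial_u}$ holds exactly away from the truncation boundary. A Python sketch (NumPy; the ${M/M/\infty}$ rates and the weight ${u}$ are arbitrary choices):

```python
import numpy as np

# Matrix-level check of the intertwining relation (8): away from the
# truncation boundary, d_u L = (L_u - V_u) d_u holds exactly.
def generator(lam, nu, N):
    L = np.zeros((N + 1, N + 1))
    for x in range(N + 1):
        if x < N:
            L[x, x + 1] = lam(x)
        if x > 0:
            L[x, x - 1] = nu(x)
        L[x, x] = -L[x].sum()
    return L

N = 30
lam = lambda x: 2.0                  # M/M/infinity rates (arbitrary values)
nu = lambda x: 1.0 * x
u = lambda x: 1.0 + 0.5 * x          # an arbitrary positive weight
lam_u = lambda x: u(x + 1) / u(x) * lam(x + 1)
nu_u = lambda x: u(x - 1) / u(x) * nu(x) if x > 0 else 0.0
V_u = np.array([nu(x + 1) - nu_u(x) + lam(x) - lam_u(x) for x in range(N)])

L = generator(lam, nu, N)            # generator on {0,...,N}
Lu = generator(lam_u, nu_u, N - 1)   # u-modification on {0,...,N-1}
D = np.zeros((N, N + 1))             # discrete gradient d_u
for x in range(N):
    D[x, x], D[x, x + 1] = -1.0 / u(x), 1.0 / u(x)

lhs, rhs = D @ L, (Lu - np.diag(V_u)) @ D
# rows 0,...,N-2 agree exactly; only the last row feels the truncation
```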

Actually, the intertwining relations above have their counterpart in continuous state space. Let ${\mathcal{A}}$ be the generator of a one-dimensional real-valued diffusion ${(X_{t})_{t\geq 0}}$ of the type

$\mathcal{A} f = \sigma ^2 f''+ bf',$

where ${f}$ and the two functions ${\sigma,b}$ are sufficiently smooth. Given a smooth positive function ${a}$ on ${\mathbb{R}}$, the gradient of interest is ${\nabla_a f = a\, f'}$. Denote by ${(P_t)_{t\geq 0}}$ the associated diffusion semigroup. Then it is not hard to adapt the argument of Theorem 1 to the continuous setting and show that the following intertwining relation holds:

$\nabla_a P_tf (x) = \mathbb{E}_x \left[ \nabla_a f(X_{a,t}) \, \exp \left( - \int_0^t V_a ( X_{a,s}) \, ds \right)\right] .$

Here ${(X_{a,t})_{t\geq 0}}$ is a new diffusion process with generator

$\mathcal{A} _a f = \sigma ^2 f'' + b_a f'$

and drift ${b_a}$ and potential ${V_a}$ given by

$b_a := 2\sigma \sigma' +b - 2\sigma ^2 \, \frac{a'}{a} \quad\text{and}\quad V_a := \sigma ^2 \, \frac{a''}{a} - b' + \frac{a'}{a} \, b_a.$

In particular, if the weight ${a=\sigma}$, where ${\sigma}$ is assumed to be positive, then the two processes above have the same distribution and by Jensen’s inequality, we obtain

$\vert \nabla_\sigma P_tf (x) \vert \leq \mathbb{E}_x \left[ \vert \nabla_\sigma f (X_{t}) \vert \, \exp \left( - \int_0^t \left( \sigma \sigma'' -b' + b\, \frac{\sigma'}{\sigma} \right) ( X_{s}) \, ds \right)\right] .$

Hence, if there exists a constant ${\rho}$ such that

$\inf_{\mathbb{R}} \left( \sigma \sigma'' -b' + b\, \frac{\sigma'}{\sigma} \right) \geq \rho,$

then we get ${\vert \nabla_\sigma P_tf \vert \leq e^{-\rho t} \, P_t \vert \nabla_\sigma f \vert}$. This type of sub-commutation relation is at the heart of the Bakry-Émery calculus for diffusions.
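The formulas for ${b_a}$ and ${V_a}$ are easy to check numerically. A Python sketch with crude finite differences (the Ornstein-Uhlenbeck data below are an arbitrary choice): for ${\sigma\equiv1}$, ${b(x)=-\theta x}$, and ${a\equiv1}$ one recovers ${b_a=b}$ and ${V_a=\theta}$, that is, the classical commutation ${\nabla P_t f = e^{-\theta t}P_t\nabla f}$.

```python
# Finite-difference check of the formulas for b_a and V_a (crude, but enough
# for a sanity check; the Ornstein-Uhlenbeck data below are arbitrary).
def d(f, x, h=1e-5):
    return (f(x + h) - f(x - h)) / (2 * h)

def d2(f, x, h=1e-4):
    return (f(x + h) - 2 * f(x) + f(x - h)) / h ** 2

def b_a(sigma, b, a, x):
    return 2 * sigma(x) * d(sigma, x) + b(x) - 2 * sigma(x) ** 2 * d(a, x) / a(x)

def V_a(sigma, b, a, x):
    return (sigma(x) ** 2 * d2(a, x) / a(x) - d(b, x)
            + d(a, x) / a(x) * b_a(sigma, b, a, x))

theta = 2.0
sigma = lambda x: 1.0
b = lambda x: -theta * x
a = lambda x: 1.0   # weight a = sigma: the two processes coincide
# b_a(., 0.3) == b(0.3) and V_a(., 0.3) == theta up to discretization error
```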

This post is devoted to the Complex Ginibre Ensemble, the subject of an expository talk that I gave a few months ago. Let ${(G_{i,j})_{i,j\geq1}}$ be an infinite table of i.i.d. random variables on ${\mathbb{C}\equiv\mathbb{R}^2}$ with ${G_{11}\sim\mathcal{N}(0,\frac{1}{2}I_2)}$. The Lebesgue density of the ${n\times n}$ random matrix ${\mathbf{G}=(G_{i,j})_{1\leq i,j\leq n}}$ in ${\mathcal{M}_n(\mathbb{C})\equiv\mathbb{C}^{n\times n}}$ is

$\mathbf{A}\in\mathcal{M}_n(\mathbb{C}) \mapsto \pi^{-n^2}e^{-\sum_{i,j=1}^n|\mathbf{A}_{ij}|^2} = \pi^{-n^2}e^{-\mathrm{Tr}(\mathbf{A}\mathbf{A}^*)} \ \ \ \ \ (1)$

where ${\mathbf{A}^*}$ is the conjugate-transpose of ${\mathbf{A}}$. We say that ${\mathbf{G}}$ belongs to the Complex Ginibre Ensemble. This law is unitary invariant, in the sense that if ${\mathbf{U}}$ and ${\mathbf{V}}$ are ${n\times n}$ unitary matrices then ${\mathbf{U}\mathbf{G}\mathbf{V}}$ and ${\mathbf{G}}$ have the same distribution. The entry-wise and matrix-wise real and imaginary parts of ${\mathbf{G}}$ are independent and belong to the Gaussian Unitary Ensemble (GUE) of Gaussian random Hermitian matrices (the converse is also true).

The eigenvalues ${\lambda_1(\mathbf{A}),\ldots,\lambda_n(\mathbf{A})}$ of a matrix ${\mathbf{A}\in\mathcal{M}_n(\mathbb{C})}$ are the roots in ${\mathbb{C}}$ of its characteristic polynomial ${P_\mathbf{A}(z)=\det(\mathbf{A}-z\mathbf{I})}$, labeled so that ${|\lambda_1(\mathbf{A})|\geq\cdots\geq|\lambda_n(\mathbf{A})|}$ with growing phases.

Lemma 1 (Diagonalizability) For every ${n\geq1}$, the set of elements of ${\mathcal{M}_n(\mathbb{C})}$ with multiple eigenvalues has zero Lebesgue measure in ${\mathbb{C}^{n\times n}}$. In particular, the set of nondiagonalizable elements of ${\mathcal{M}_n(\mathbb{C})}$ has zero Lebesgue measure in ${\mathbb{C}^{n\times n}}$.

Proof: If ${\mathbf{A}\in\mathcal{M}_n(\mathbb{C})}$ has characteristic polynomial

$P_\mathbf{A}(z)=z^n+a_{n-1}z^{n-1}+\cdots+a_0,$

then ${a_0,\ldots,a_{n-1}}$ are polynomial functions of the entries of ${\mathbf{A}}$. The resultant ${R(P_\mathbf{A},P'_\mathbf{A})}$ of ${P_\mathbf{A}}$ and ${P'_\mathbf{A}}$, called the discriminant of ${P_\mathbf{A}}$, is the determinant of the ${(2n-1)\times(2n-1)}$ Sylvester matrix of ${P_\mathbf{A}}$ and ${P'_\mathbf{A}}$. It is a polynomial in ${a_0,\ldots,a_{n-1}}$. We also have the Vandermonde formula

$|R(P_\mathbf{A},P'_\mathbf{A})|=\prod_{i{<}j}|\lambda_i(\mathbf{A})-\lambda_j(\mathbf{A})|^2.$

Consequently, ${\mathbf{A}}$ has all eigenvalues distinct if and only if ${\mathbf{A}}$ lies outside the polynomial hyper-surface ${\{\mathbf{A}\in\mathbb{C}^{n\times n}:R(P_\mathbf{A},P'_\mathbf{A})=0\}}$. ☐
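The Vandermonde formula for the discriminant can be tested numerically on a random matrix. A Python sketch (NumPy; the dimension and the seed are arbitrary choices) building the Sylvester matrix by hand:

```python
import numpy as np

# Check |Res(P_A, P_A')| = prod_{i<j} |lambda_i - lambda_j|^2 numerically.
def sylvester(p, q):
    """Sylvester matrix of polynomials p (degree m) and q (degree n),
    coefficients listed from the highest degree down."""
    m, n = len(p) - 1, len(q) - 1
    S = np.zeros((m + n, m + n), dtype=complex)
    for i in range(n):
        S[i, i:i + m + 1] = p
    for i in range(m):
        S[n + i, i:i + n + 1] = q
    return S

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
p = np.poly(A)                       # monic characteristic polynomial
R = np.linalg.det(sylvester(p, np.polyder(p)))
lam = np.linalg.eigvals(A)
prod = np.prod([abs(lam[i] - lam[j]) ** 2
                for i in range(4) for j in range(i + 1, 4)])
```

For a monic ${P}$, ${R(P,P')=\prod_i P'(\lambda_i)=\pm\prod_{i<j}(\lambda_i-\lambda_j)^2}$, so the absolute values agree.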

Since the law of ${\mathbf{G}}$ is absolutely continuous, from Lemma 1 we get that a.s. ${\mathbf{G}\mathbf{G}^*\neq \mathbf{G}^*\mathbf{G}}$ (nonnormality) but ${\mathbf{G}}$ is diagonalizable with distinct eigenvalues. Note that if ${\mathbf{G}=\mathbf{U}\mathbf{T}\mathbf{U}^*}$ is the Schur unitary decomposition where ${\mathbf{U}}$ is unitary and ${\mathbf{T}}$ is upper triangular, then ${\mathbf{T}=\mathbf{D}+\mathbf{N}}$ with ${\mathbf{D}=\mathrm{diag}(\lambda_1(\mathbf{G}),\ldots,\lambda_n(\mathbf{G}))}$ and

$\mathrm{Tr}(\mathbf{G}\mathbf{G}^*) =\mathrm{Tr}(\mathbf{D}\mathbf{D}^*) +\mathrm{Tr}(\mathbf{N}\mathbf{N}^*).$

Following Ginibre, one may compute the joint density of the eigenvalues by integrating (1) over the non-spectral variables ${\mathbf{U}}$ and ${\mathbf{N}}$. The result is stated in Theorem 2 below. The law of ${\mathbf{G}}$ is invariant under multiplication of the entries by a common phase, and thus the law of the spectrum of ${\mathbf{G}}$ is rotationally invariant in ${\mathbb{C}^n}$. In the sequel we set

$\Delta_n:=\{(z_1,\ldots,z_n)\in\mathbb{C}^n:|z_1|\geq\cdots\geq|z_n|\}.$

Theorem 2 (Spectrum law) ${(\lambda_1(\mathbf{G}),\ldots,\lambda_n(\mathbf{G}))}$ has density ${n!\varphi_n\mathbf{1}_{\Delta_n}}$ where

$\varphi_n(z_1,\ldots,z_n)=\frac{\pi^{-n}}{\prod_{k=1}^nk!} \exp\left(-\sum_{k=1}^n|z_k|^2\right)\prod_{1\leq i{<}j\leq n}|z_i-z_j|^2.$

In particular, for every symmetric Borel function ${F:\mathbb{C}^n\rightarrow\mathbb{R}}$,

$\mathbb{E}[F(\lambda_1(\mathbf{G}),\ldots,\lambda_n(\mathbf{G}))] =\int_{\mathbb{C}^n}\!F(z_1,\ldots,z_n)\varphi_n(z_1,\ldots,z_n)\,dz_1\cdots dz_n.$

We will use Theorem 2 with symmetric functions of the form

$F(z_1,\ldots,z_n) =\sum_{i_1,\ldots,i_k \text{ distinct}}f(z_{i_1})\cdots f(z_{i_k}).$

The Vandermonde determinant comes from the Jacobian of the diagonalization, and can be interpreted as an electrostatic repulsion. The spectrum is a Gaussian determinantal process. One may also take a look at the fourth chapter of the book by Hough, Krishnapur, Peres, and Virag for a generalization to zeros of Gaussian analytic functions. Theorem 2 is reported in chapter 15 of Mehta’s book. Ginibre considered additionally the case where ${\mathbb{C}}$ is replaced by ${\mathbb{R}}$ or by the quaternions. These two cases are less studied than the complex case, due to their peculiarities. For instance, for the real Gaussian case, and following Edelman (Corollary 7.2), the probability that the ${n\times n}$ matrix has all its eigenvalues real is ${2^{-n(n-1)/4}}$, see also Akemann and Kanzieper. The whole spectrum does not have a density in ${\mathbb{C}^n}$ in this case!

Theorem 3 (${k}$-points correlations) Let ${z\in\mathbb{C}\mapsto\gamma(z)=\pi^{-1}e^{-|z|^2}}$ be the density of the standard Gaussian law ${\mathcal{N}(0,\frac{1}{2}I_2)}$ on ${\mathbb{C}}$. Then for every ${1\leq k\leq n}$, the “${k}$-point correlation”

$\varphi_{n,k}(z_1,\ldots,z_k) := \int_{\mathbb{C}^{n-k}}\!\varphi_n(z_1,\ldots,z_n)\,dz_{k+1}\cdots dz_n$

satisfies

$\varphi_{n,k}(z_1,\ldots,z_k) = \frac{(n-k)!}{n!}\gamma(z_1)\cdots\gamma(z_k) \det\left[K(z_i,z_j)\right]_{1\leq i,j\leq k}$

where

$K(z_i,z_j) :=\sum_{\ell=0}^{n-1}\frac{(z_iz_j^*)^\ell}{\ell!} =\sum_{\ell=0}^{n-1}H_\ell(z_i)H_\ell(z_j)^* \quad\text{with}\quad H_\ell(z):=\frac{1}{\sqrt{\ell!}}z^\ell.$

In particular, by taking ${k=n}$ we get

$\varphi_{n,n}(z_1,\ldots,z_n) =\varphi_n(z_1,\ldots,z_n) =\frac{1}{n!}\gamma(z_1)\cdots\gamma(z_n)\det\left[K(z_i,z_j)\right]_{1\leq i,j\leq n}.$

Proof: This follows from calculations made by Mehta (chapter 15 page 271 equation 15.1.29) using

$\prod_{1\leq i{<}j\leq n}|z_i-z_j|^2 =\prod_{1\leq i{<}j\leq n}(z_i-z_j)\prod_{1\leq i{<}j\leq n}(z_i-z_j)^*$

and

$\det\left[z_j^{i-1}\right]_{1\leq i,j\leq k}\det\left[(z_j^*)^{i-1}\right]_{1\leq i,j\leq k} =\frac{\prod_{j=1}^kj!}{n!}\det\left[K(z_i,z_j)\right]_{1\leq i,j\leq k}.$ ☐

Recall that the empirical spectral distribution of an ${n\times n}$ matrix ${\mathbf{A}}$ is given by

$\mu_\mathbf{A}:=\frac{1}{n}\sum_{k=1}^n\delta_{\lambda_k(\mathbf{A})}.$

Theorem 4 (Mean Circular Law) For every continuous bounded function ${f:\mathbb{C}\rightarrow\mathbb{R}}$,

$\lim_{n\rightarrow\infty}\mathbb{E}\left[\int\!f\,d\mu_{\frac{1}{\sqrt{n}}\mathbf{G}}\right] =\pi^{-1}\int_{|z|\leq 1}\!f(z)\,dxdy.$

Proof: From Theorem 3, with ${k=1}$, we get that the density of ${\mathbb{E}\mu_{\mathbf{G}}}$ is

$\varphi_{n,1}: z\mapsto \gamma(z)\left(\frac{1}{n}\sum_{\ell=0}^{n-1}|H_\ell|^2(z)\right) =\frac{1}{n\pi}e^{-|z|^2}\sum_{\ell=0}^{n-1}\frac{|z|^{2\ell}}{\ell!}.$

Following Mehta (chapter 15 page 272), by elementary calculus, for every compact ${C\subset\mathbb{C}}$,

$\lim_{n\rightarrow\infty}\sup_{z\in C} \left|n\varphi_{n,1}(\sqrt{n}z)-\pi^{-1}\mathbf{1}_{[0,1]}(|z|)\right|=0.$

The factor ${n}$ in front of ${\varphi_{n,1}}$ is due to the fact that we are on the complex plane ${\mathbb{C}=\mathbb{R}^2}$ and thus ${d\sqrt{n}x\,d\sqrt{n}y=n\,dxdy}$. Here is the start of the elementary calculus: for ${r^2<n}$,

$e^{r^2}-\sum_{\ell=0}^{n-1}\frac{r^{2\ell}}{\ell!} =\sum_{\ell=n}^\infty\frac{r^{2\ell}}{\ell!} \leq \frac{r^{2n}}{n!}\sum_{\ell=0}^\infty\frac{r^{2\ell}}{(n+1)^\ell} =\frac{r^{2n}}{n!}\frac{n+1}{n+1-r^2}$

while for ${r^2>n}$,

$\sum_{\ell=0}^{n-1}\frac{r^{2\ell}}{\ell!} \leq\frac{r^{2(n-1)}}{(n-1)!}\sum_{\ell=0}^{n-1}\left(\frac{n-1}{r^2}\right)^\ell \leq \frac{r^{2(n-1)}}{(n-1)!}\frac{r^2}{r^2-n+1}.$

This leads to the result by taking ${r^2:=|\sqrt{n}z|^2}$. ☐
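Incidentally, the partial exponential sum in ${\varphi_{n,1}}$ is a regularized incomplete gamma function, ${e^{-x}\sum_{\ell<n}x^\ell/\ell! = Q(n,x)}$, so the rescaled mean density can be evaluated in closed form. A Python sketch (SciPy; the values of ${n}$ and ${z}$ are arbitrary choices):

```python
import numpy as np
from scipy.special import gammaincc

# n*phi_{n,1}(sqrt(n) z) = pi^{-1} Q(n, n|z|^2), where Q = gammaincc is the
# regularized upper incomplete gamma function.
def rescaled_density(n, z):
    return gammaincc(n, n * abs(z) ** 2) / np.pi

n = 500                             # an arbitrary size
inside = rescaled_density(n, 0.5)   # ~ 1/pi well inside the unit disc
outside = rescaled_density(n, 1.5)  # ~ 0 well outside
```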

The real Gaussian version of Theorem 4 was established by Edelman (Theorem 6.3). The sequence ${(H_k)_{k\in\mathbb{N}}}$ forms an orthonormal basis of square integrable analytic functions on ${\mathbb{C}}$ for the standard Gaussian on ${\mathbb{C}}$. The uniform law on the unit disc (known as the circular law) is the law of ${\sqrt{V}e^{2i\pi W}}$ where ${V}$ and ${W}$ are i.i.d. uniform random variables on ${[0,1]}$. Ledoux makes use of this point of view for his interpolation between complex Ginibre and GUE via the Girko elliptic laws (see also Johansson).

Theorem 5 (Strong Circular Law) Almost surely, for all continuous bounded ${f:\mathbb{C}\rightarrow\mathbb{R}}$,

$\lim_{n\rightarrow\infty}\int\!f\,d\mu_{\frac{1}{\sqrt{n}}\mathbf{G}} = \pi^{-1}\int_{|z|\leq 1}\!f(z)\,dxdy.$

Proof: We reproduce Silverstein’s argument, published by Hwang. The argument is similar to the proof of the strong law of large numbers for i.i.d. random variables with finite fourth moment. It suffices to establish the result for compactly supported continuous bounded functions. Let us pick such a function ${f}$ and set

$S_n:=\int_{\mathbb{C}}\!f\,d\mu_{\frac{1}{\sqrt{n}}\mathbf{G}} \quad\text{and}\quad S_\infty:=\pi^{-1}\int_{|z|\leq 1}\!f(z)\,dxdy.$

Suppose for now that we have

$\mathbb{E}[\left(S_n-\mathbb{E} S_n\right)^4]=O(n^{-2}). \ \ \ \ \ (2)$

By monotone convergence,

$\mathbb{E}\sum_{n=1}^\infty\left(S_n-\mathbb{E} S_n\right)^4 =\sum_{n=1}^\infty\mathbb{E}[\left(S_n-\mathbb{E} S_n\right)^4]<\infty$

and consequently ${\sum_{n=1}^\infty\left(S_n-\mathbb{E} S_n\right)^4<\infty}$ a.s., which implies ${\lim_{n\rightarrow\infty}(S_n-\mathbb{E} S_n)=0}$ a.s. Since ${\lim_{n\rightarrow\infty}\mathbb{E} S_n=S_\infty}$ by Theorem 4, we get ${\lim_{n\rightarrow\infty}S_n=S_\infty}$ a.s. Finally, one can swap the universal quantifiers on ${\omega}$ and ${f}$ thanks to the separability of ${\mathcal{C}_c(\mathbb{C},\mathbb{R})}$. To establish (2), we set

$S_n-\mathbb{E} S_n=\frac{1}{n}\sum_{i=1}^nZ_i \quad\text{with}\quad Z_i:=f\left(\lambda_i\left(\frac{1}{\sqrt{n}}\mathbf{G}\right)\right)-\mathbb{E}\left[f\left(\lambda_i\left(\frac{1}{\sqrt{n}}\mathbf{G}\right)\right)\right].$

Next, we obtain, with ${\sum_{i_1,\ldots}}$ running over distinct indices in ${1,\ldots,n}$,

$\mathbb{E}\left[\left(S_n-\mathbb{E} S_n\right)^4\right] =\frac{1}{n^4}\sum_{i_1}\mathbb{E}[Z_{i_1}^4] +\frac{4}{n^4}\sum_{i_1,i_2}\mathbb{E}[Z_{i_1}Z_{i_2}^3] +\frac{3}{n^4}\sum_{i_1,i_2}\mathbb{E}[Z_{i_1}^2Z_{i_2}^2] +\frac{6}{n^4}\sum_{i_1,i_2,i_3}\mathbb{E}[Z_{i_1}Z_{i_2}Z_{i_3}^2] +\frac{1}{n^4}\sum_{i_1,i_2,i_3,i_4}\mathbb{E}[Z_{i_1}Z_{i_2}Z_{i_3}Z_{i_4}].$

The first three terms of the right-hand side are ${O(n^{-2})}$ since ${\max_{1\leq i\leq n}|Z_i|\leq2\Vert f\Vert_\infty}$. Finally, some calculus using the expressions of ${\varphi_{n,3}}$ and ${\varphi_{n,4}}$ provided by Theorem 3 shows that the remaining two terms are also ${O(n^{-2})}$. See Hwang (page 151). ☐
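The circular law is easy to illustrate by simulation. A Python sketch (NumPy; the matrix size and the seed are arbitrary choices) checking that the spectrum of ${\frac{1}{\sqrt{n}}\mathbf{G}}$ fills the unit disc, with second radial moment close to ${\int_0^1 r^2\,2r\,dr=\frac{1}{2}}$:

```python
import numpy as np

# The spectrum of G/sqrt(n) fills the unit disc (n and the seed are arbitrary).
rng = np.random.default_rng(1)
n = 300
G = (rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))) / np.sqrt(2)
ev = np.linalg.eigvals(G / np.sqrt(n))   # G_11 ~ N(0, I_2/2): E|G_11|^2 = 1

inside = np.mean(np.abs(ev) <= 1.1)       # essentially all eigenvalues
second_moment = np.mean(np.abs(ev) ** 2)  # ~ 1/2 under the circular law
```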

Following Kostlan, the integration of the phases in the joint density of the spectrum given by Theorem 2 leads to Theorem 6 below. See also Rider and the book by Hough, Krishnapur, Peres, and Virag for a generalization to determinantal processes.

Theorem 6 (Layers) If ${Z_{1},\ldots,Z_{n}}$ are independent with ${Z_k^2\sim\Gamma(k,1)}$ for every ${k}$, then

$\left\{|\lambda_1(\mathbf{G})|,\ldots,|\lambda_n(\mathbf{G})|\right\} \overset{d}{=} \left\{Z_{1},\ldots,Z_{n}\right\}.$

Note that ${(\sqrt{2}Z_k)^2\sim\chi^2(2k)}$ which is useful for ${\sqrt{2}\mathbf{G}}$. Since ${Z_k^2\overset{d}{=}E_1+\cdots+E_k}$ where ${E_1,\ldots,E_k}$ are i.i.d. exponential random variables of unit mean, we get, for every ${r>0}$,

$\mathbb{P}\left(\rho(\mathbf{G})\leq \sqrt{n}r\right) =\prod_{1\leq k\leq n}\mathbb{P}\left(\frac{E_1+\cdots+E_k}{n}\leq r^2\right)$

where ${\rho(\mathbf{G})=\max_{1\leq i\leq n}|\lambda_i(\mathbf{G})|=|\lambda_1(\mathbf{G})|}$ is the spectral radius of ${\mathbf{G}}$.
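Since ${E_1+\cdots+E_k\sim\Gamma(k,1)}$, the product above can be evaluated with regularized incomplete gamma functions. A Python sketch (SciPy; ${n}$ and the values of ${r}$ are arbitrary choices):

```python
import numpy as np
from scipy.special import gammainc

# P(rho(G) <= sqrt(n) r) = prod_{k=1}^n P(Gamma(k,1) <= n r^2), evaluated
# with the regularized lower incomplete gamma function.
def radius_cdf(n, r):
    return float(np.prod(gammainc(np.arange(1, n + 1), n * r ** 2)))

n = 50
p09, p11, p15 = radius_cdf(n, 0.9), radius_cdf(n, 1.1), radius_cdf(n, 1.5)
# the distribution function climbs steeply through r = 1
```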

The law of large numbers suggests that ${r=1}$ is a critical value. The central limit theorem suggests that ${n^{-1/2}\rho(\mathbf{G})}$ behaves, when ${n\gg1}$, as the maximum of a standard Gaussian i.i.d. sample, for which the fluctuations follow the Gumbel law. A quantitative central limit theorem and the Borel-Cantelli lemma provide the following result. The full proof is in Rider.

Theorem 7 (Convergence and fluctuation of the spectral radius)

$\mathbb{P}\left(\lim_{n\rightarrow\infty}\frac{1}{\sqrt{n}}\rho(\mathbf{G})=1\right)=1.$

Moreover, if ${\gamma_n:=\log(n/2\pi)-2\log(\log(n))}$ then

$\sqrt{4n\gamma_n} \left(\frac{1}{\sqrt{n}}\rho(\mathbf{G})-1-\sqrt{\frac{\gamma_n}{4n}}\right) \overset{d}{\underset{n\rightarrow\infty}{\longrightarrow}} \mathcal{G},$

where ${\mathcal{G}}$ is the Gumbel law with cumulative distribution function ${x\mapsto e^{-e^{-x}}}$ on ${\mathbb{R}}$.

The convergence of the spectral radius was obtained by Mehta (chapter 15 page 271 equation 15.1.27) by integrating the joint density of the spectrum of Theorem 2 over the set ${\bigcap_{1\leq i\leq n}\{|\lambda_i|>r\}}$. The same argument is reproduced by Hwang (pages 149-150). Let us now give an alternative derivation of Theorem 4. From Theorem 7, the sequence ${(\mathbb{E}\mu_{n^{-1/2}\mathbf{G}})_{n\geq1}}$ is tight and every accumulation point ${\mu}$ is supported in the unit disc. From Theorem 2, such a ${\mu}$ is rotationally invariant, and from Theorem 6, the image of ${\mu}$ under ${z\in\mathbb{C}\mapsto|z|}$ has density ${r\mapsto 2r\mathbf{1}_{[0,1]}(r)}$ (use moments!). Theorem 4 follows immediately.

Note that the recent book of Forrester contains many computations for the Ginibre Ensemble. See also the chapter by Khoruzhenko and Sommers in the Oxford Handbook of Random Matrices.
