# Month: November 2011

Recently, I have spent some time with Krzysztof Burdzy writing a $\LaTeX$ document class for the periodicals Electronic Journal of Probability (EJP) and Electronic Communications in Probability (ECP). This document class is named ejpecp. The latest version is available on the Comprehensive TeX Archive Network (CTAN). It will enter service in January 2012, together with a new editorial board and a new website. EJP and ECP are peer-reviewed open-access electronic periodicals publishing mathematical research articles in probability theory. They are sponsored by the Institute of Mathematical Statistics (IMS) and the Bernoulli Society. They share the same editorial board, with distinct Editors in Chief. They use the Open Journal Systems (OJS) free software developed by the non-profit organization Public Knowledge Project (PKP).

This post is devoted to central limit theorems for convex bodies.

The uniform law on the cube. Let ${X=(X_1,\ldots,X_n)}$ be a random variable distributed according to the uniform law on the cube

$B_{n,\infty}(\sqrt{3}) =\left\{x\in\mathbb{R}^n:\max_{1\leq i\leq n}|x_i|\leq\sqrt{3}\right\}.$

The coordinates ${X_1,\ldots,X_n}$ are i.i.d. with uniform law on ${[-\sqrt{3},\sqrt{3}]}$. In particular,

$\mathbb{E}(X)=0 \quad\text{and}\quad \mathrm{Cov}(X)=I_n.$

The classical central limit theorem then gives that as ${n\rightarrow\infty}$,

$\frac{X_1+\cdots+X_n}{\sqrt{n}}\overset{d}{\longrightarrow}\mathcal{N}(0,1),$

where ${\mathcal{N}(0,1)}$ stands for the Gaussian law on ${\mathbb{R}}$ with mean ${0}$ and variance ${1}$. Actually, and more generally, by the classical Berry-Esseen theorem (the version for independent but not necessarily identically distributed random variables), we know that for any well spread ${\theta}$ on the unit sphere (well spread in the sense of a ratio of norms, e.g. ${\left\Vert\theta\right\Vert_3/\left\Vert\theta\right\Vert_2}$ small), as ${n\rightarrow\infty}$,

$\left<X,\theta\right>\overset{d}{\longrightarrow}\mathcal{N}(0,1).$

This does not hold for all ${\theta}$ on the unit sphere: a simple counterexample is ${\theta=e_1}$, for which ${\left<X,\theta\right>=X_1\sim\mathrm{Uniform}([-\sqrt{3},\sqrt{3}])}$ for every ${n}$, which is not Gaussian.
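The cube case is easy to check numerically. Here is a minimal NumPy sketch (sample sizes and names are illustrative) of the central limit theorem for the normalized sum of the coordinates:

```python
import numpy as np

# Illustrative check of the CLT for the uniform law on the cube
# B_{n,infty}(sqrt(3)): the coordinates are i.i.d. uniform on
# [-sqrt(3), sqrt(3)], hence centered with unit variance.
rng = np.random.default_rng(0)
n, samples = 500, 10000

X = rng.uniform(-np.sqrt(3), np.sqrt(3), size=(samples, n))
S = X.sum(axis=1) / np.sqrt(n)      # (X_1 + ... + X_n) / sqrt(n)

# The empirical mean and variance of S should be close to 0 and 1,
# and its histogram close to the standard Gaussian density.
print(S.mean(), S.var())
```

On the other hand the projection on ${\theta=e_1}$ stays uniform whatever ${n}$, in agreement with the counterexample above.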

The uniform law on the sphere. Let ${X=(X_1,\ldots,X_n)}$ be a random variable distributed according to the uniform law on the sphere

$S_{n,2}(\sqrt{n}) =\left\{x\in\mathbb{R}^n:\sqrt{x_1^2+\cdots+x_n^2}=\sqrt{n}\right\}.$

A way to simulate ${X}$ is to use the identity in law

$X \overset{d}{=} \sqrt{n}\frac{G}{\sqrt{G_1^2+\cdots+G_n^2}}$

where ${G=(G_1,\ldots,G_n)\sim\mathcal{N}(0,I_n)}$. The classical law of large numbers gives ${G_1^2+\cdots+G_n^2=n(1+o(1))}$ almost surely. This gives that ${X_1\overset{d}{=}(1+o(1))G_1}$ converges in law to ${\mathcal{N}(0,1)}$ as ${n\rightarrow\infty}$. By rotational invariance, for any ${\theta}$ in the unit sphere, as ${n\rightarrow\infty}$,

$\left<X,\theta\right>\overset{d}{\longrightarrow}\mathcal{N}(0,1).$

In particular, for ${\theta=(1/\sqrt{n},\ldots,1/\sqrt{n})}$, we get, as ${n\rightarrow\infty}$,

$\frac{X_1+\cdots+X_n}{\sqrt{n}}\overset{d}{\longrightarrow}\mathcal{N}(0,1).$
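The Gaussian-normalization identity above translates directly into a sampler; a short sketch (sizes illustrative):

```python
import numpy as np

# Simulate the uniform law on the sphere S_{n,2}(sqrt(n)) by normalizing
# a standard Gaussian vector, following the identity in law above.
rng = np.random.default_rng(1)
n, samples = 200, 10000

G = rng.standard_normal((samples, n))
X = np.sqrt(n) * G / np.linalg.norm(G, axis=1, keepdims=True)

# Each sample lies exactly on the sphere of radius sqrt(n).
assert np.allclose(np.linalg.norm(X, axis=1), np.sqrt(n))

# CLT along theta = (1/sqrt(n), ..., 1/sqrt(n)).
S = X.sum(axis=1) / np.sqrt(n)
print(S.mean(), S.var())
```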

From the sphere to the ball with the Archimedes principle. The Archimedes principle states that the projection on a diameter of the uniform law on the unit sphere of ${\mathbb{R}^3}$ is exactly the uniform law on the diameter. More generally, if ${n\geq3}$ and ${X=(X_1,\ldots,X_n)}$ is a uniform random variable on the unit sphere ${S_{n,2}(1)}$ of ${\mathbb{R}^n}$ then ${(X_1,\ldots,X_{n-2})}$ is uniform on the unit ball ${B_{n-2,2}(1)}$ of ${\mathbb{R}^{n-2}}$. Letac has a few pages on this nice ancient observation. This does not work if one replaces ${n-2}$ by ${n-1}$. Note that the projection of the uniform law on the unit circle on a diameter gives an arc-sine law, not the uniform law.

The uniform law on the ball. Let ${X=(X_1,\ldots,X_n)}$ be a random variable following the uniform law on the ball

$B_{n,2}(\sqrt{n})=\left\{x\in\mathbb{R}^n:\sqrt{x_1^2+\cdots+x_n^2}\leq\sqrt{n}\right\}.$

A way to simulate ${X}$ is to use the Archimedes principle:

$X\overset{d}{=}\frac{\sqrt{n}}{\sqrt{n+2}}(Y_1,\ldots,Y_n)$

where ${Y=(Y_1,\ldots,Y_{n+2})}$ is uniform on ${S_{n+2,2}(\sqrt{n+2})}$. Now, from our preceding analysis on the sphere, we obtain that ${X_1\overset{d}{=}(1+o(1))Y_1}$ tends in law to ${\mathcal{N}(0,1)}$ as ${n\rightarrow\infty}$. By rotational invariance, for any ${\theta}$ in the unit sphere, as ${n\rightarrow\infty}$,

$\left<X,\theta\right>\overset{d}{\longrightarrow}\mathcal{N}(0,1).$

In particular, taking ${\theta=(1/\sqrt{n},\ldots,1/\sqrt{n})}$, we get, as ${n\rightarrow\infty}$,

$\frac{X_1+\cdots+X_n}{\sqrt{n}}\overset{d}{\longrightarrow}\mathcal{N}(0,1).$

Note that here again we have

$\mathbb{E}(X)=0 \quad\text{and}\quad \mathrm{Cov}(X)\approx I_n.$

It is worthwhile to mention an alternative simulation algorithm for the uniform law on ${B_{n,2}(\sqrt{n})}$. Namely, following Barthe, Guédon, Mendelson, and Naor, if ${G=(G_1,\ldots,G_n)}$ and ${E}$ are independent with ${G\sim\mathcal{N}(0,I_n)}$ and ${E\sim\mathrm{Exponential}(1)}$ then

$X\overset{d}{=}\sqrt{n}\frac{G}{\sqrt{G_1^2+\cdots+G_n^2+2E}}.$

Indeed ${2E\sim\chi^2_2}$ is distributed as the squared norm of two extra independent standard Gaussian coordinates, so this is again the Archimedes principle in disguise.
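The two samplers for the ball are easy to compare numerically. In the hedged sketch below, the exponential variable enters the denominator with a factor ${2}$, reflecting that ${2E\sim\chi^2_2}$ plays the role of the squared norm of two extra Gaussian coordinates; this normalization is my reading and is worth checking against the original paper.

```python
import numpy as np

# Two samplers for the uniform law on the ball B_{n,2}(sqrt(n)).
rng = np.random.default_rng(2)
n, samples = 10, 50000

# Method 1: Archimedes -- take the first n coordinates of a uniform point
# on the sphere of radius sqrt(n+2) in R^{n+2}, rescaled by sqrt(n/(n+2)).
H = rng.standard_normal((samples, n + 2))
X1 = np.sqrt(n) * H[:, :n] / np.linalg.norm(H, axis=1, keepdims=True)

# Method 2: Gaussian vector with an independent exponential under the
# square root (factor 2 is an assumption: 2E is chi-square with 2 df).
G = rng.standard_normal((samples, n))
E = rng.exponential(1.0, size=samples)
X2 = np.sqrt(n) * G / np.sqrt((G ** 2).sum(axis=1) + 2 * E)[:, None]

# If X is uniform on B_{n,2}(sqrt(n)) then (||X||/sqrt(n))^n is uniform
# on [0, 1]; its mean should be close to 1/2 for both methods.
for X in (X1, X2):
    U = (np.linalg.norm(X, axis=1) / np.sqrt(n)) ** n
    print(U.mean())
```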

Exercise. Show that the central limit theorem holds for the ${\ell^1}$ ball with suitable radius, using the probabilistic representation of the uniform law on the ${\ell^1}$ unit sphere

$\frac{E}{|E_1|+\cdots+|E_n|}$

where ${E=(E_1,\ldots,E_n)}$ has i.i.d. components following a symmetric exponential distribution.

The central limit theorem for convex bodies. So, well, the central limit theorem holds for the uniform law on the cube and on the ball, two remarkable convex bodies. We note also:

• normalization used: zero mean and identity covariance. In other words, the coordinates are centered, uncorrelated, with unit variance. This does not mean that they are independent. Their possible dependence is related to the geometry of the convex body (independent for the cube but dependent for the ball). For any ${\theta}$ in the unit sphere, since ${X}$ has zero mean and identity covariance,

$\mathbb{E}(\left<X,\theta\right>)=0 \quad\text{and}\quad \mathbb{E}(\left<X,\theta\right>^2) =\mathrm{Var}(\left<X,\theta\right>) =\left<\theta,\mathrm{Cov}(X)\theta\right> =\left\Vert\theta\right\Vert_2^2=1.$

Therefore ${\left<X,\theta\right>}$ and ${\mathcal{N}(0,1)}$ have identical first two moments.

• possible weights: the ${\theta}$ version may not hold for all ${\theta}$, and the example of the cube shows that one may experience problems if most of the mass of ${\theta}$ is carried by few coordinates. It is actually well known that even for independent random variables, the central limit theorem may fail in the presence of sparsity or localization.

Let ${K}$ be a convex body, in other words a nonempty convex compact subset of ${\mathbb{R}^n}$. Let us equip ${K}$ with the uniform law, which is the Lebesgue measure normalized to give total mass ${1}$ to ${K}$. Let us assume that this law has zero mean and identity covariance (isotropy). All ${\ell^p}$ balls are of this kind. From the examples ${p=2}$ and ${p=\infty}$ considered above, it is quite natural to ask if a central limit theorem holds on a generic convex body ${K}$.

The celebrated Dvoretzky theorem is an encouraging result in this direction, dealing with sections instead of projections. Using ideas going back to Sudakov, it has been understood that a central limit theorem is implied by the thin shell property: the norm on ${K}$ is well concentrated around its mean and most of the mass of ${K}$ lies in a thin shell. This roughly reduces the problem to showing that there exists a sequence ${\varepsilon_n\rightarrow0}$ which does not depend on ${K}$ such that if ${X}$ follows the uniform law on ${K}$ then

$\mathbb{P}\left(\left|\frac{\left\Vert X\right\Vert_2}{\sqrt{n}}-1\right| \geq\varepsilon_n\right) \leq\varepsilon_n.$
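For the ball, the thin shell property can be observed directly by simulation, using the Archimedes sampler from above (sample sizes illustrative):

```python
import numpy as np

# Empirical thin-shell check for the uniform law on B_{n,2}(sqrt(n)):
# the fraction of mass outside the shell ||x||/sqrt(n) in (0.9, 1.1)
# should shrink as the dimension n grows.
rng = np.random.default_rng(3)
samples = 5000

fracs = []
for n in (10, 100, 1000):
    # First n coordinates of a uniform point on the sphere of radius
    # sqrt(n+2) in R^{n+2}, rescaled: uniform on the ball of radius sqrt(n).
    H = rng.standard_normal((samples, n + 2))
    X = np.sqrt(n) * H[:, :n] / np.linalg.norm(H, axis=1, keepdims=True)
    dev = np.abs(np.linalg.norm(X, axis=1) / np.sqrt(n) - 1.0)
    fracs.append((dev >= 0.1).mean())
    print(n, fracs[-1])
```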

Note that ${\mathbb{E}(\left\Vert X\right\Vert_2^2)^{1/2}=\sqrt{n}}$. The details are in Anttila, Ball, and Perissinaki. In this kind of convex geometry, the study of uniform laws on convex bodies is naturally subsumed by the study of ${\log}$-concave probability measures (the indicator of a convex set is ${\log}$-concave). Let ${f:\mathbb{R}^n\rightarrow\mathbb{R}_+}$ be a ${\log}$-concave probability density function with respect to the Lebesgue measure, with mean ${0}$ and covariance ${I_n}$. In other words, ${f=e^{-V}}$ with ${V}$ convex, and

$\int\!xf(x)\,dx=0\quad\text{and}\quad\int\!x_ix_jf(x)\,dx=\delta_{i,j}.$

Let ${X=(X_1,\ldots,X_n)}$ be a random vector of ${\mathbb{R}^n}$ with density ${f}$. The central limit theorem of Klartag states that if additionally ${f}$ is invariant under sign flips of the coordinates (we say that it is unconditional, and this holds in particular for all ${\ell^p}$ balls), then, as ${n\rightarrow\infty}$,

$\frac{X_1+\cdots+X_n}{\sqrt{n}} \overset{d}{\longrightarrow} \mathcal{N}(0,1).$

Moreover, there exists a uniform quantitative version à la Berry-Esseen on cumulative distribution functions. If one drops the unconditional assumption, then a weaker version of the central limit theorem remains available. Moreover, the central limit theorem remains valid for ${\left<X,\theta\right>}$, for most ${\theta}$ in the unit sphere (some directions are however possibly bad, as shown for the cube). The work of Klartag is crucially based on a proof of the thin shell property. An alternative proof was also given by Fleury, Guédon, and Paouris (slightly after Klartag).

Further reading. The main source for this post is the expository paper by Franck Barthe entitled Le théorème central limite pour les corps convexes, d’après Klartag et Fleury-Guédon-Paouris, published in Séminaire Bourbaki (2010). One may also take a look at the survey High-dimensional distributions with convexity properties, by Klartag, and at the paper Variations on the Berry-Esseen theorem by Klartag and Sodin. On this blog, you may also be interested in the posts When the central limit theorem fails. Sparsity and localization and Independence and central limit theorems. It is worthwhile to mention that there exist other kinds of central limit theorems for convex bodies, such as the central limit theorem for the volume and the number of faces of random polytopes, obtained by Bárány and Vu (it is another convex universe).

This post is devoted to a few convex and compact sets of matrices that I like.

The set ${\mathcal{C}_n}$ of correlation matrices. A real ${n\times n}$ matrix ${C}$ is a correlation matrix when ${C}$ is symmetric, positive semidefinite, with unit diagonal. This means that

$C_{ii}=1, \quad C_{ij}=C_{ji},\quad \left<x,Cx\right>\geq0$

for every ${1\leq i,j\leq n}$ and for every ${x\in\mathbb{R}^n}$. In particular, we have ${C_{ij}=C_{ji}\in[-1,1]}$ for every ${1\leq i,j\leq n}$. The set ${\mathcal{C}_n}$ of all correlation matrices is convex and compact. It contains the identity matrix ${I_n}$ but is not stable under the matrix product. However, it is stable under the Schur-Hadamard entrywise product! It is the intersection of the cone of symmetric positive semidefinite matrices ${\mathcal{S}_n^+}$ with the affine subspace defined by the equations ${S_{11}=\cdots=S_{nn}=1}$. A closer look at ${\mathcal{C}_3}$ gives, by simply computing the principal minors,

$\begin{pmatrix} 1 & a & c \\ a & 1 & b \\ c & b & 1 \end{pmatrix} \in\mathcal{C}_3 \quad\text{iff}\quad a,b,c\in[-1,1] \quad\text{and}\quad a^2+b^2+c^2\leq 1+2abc.$

This shows the ellipsoidal geometry of ${\mathcal{C}_n}$, coming from the spectral constraint, and related to the Schur complement. The set ${\mathcal{C}_n}$ is not a polytope of ${\mathbb{R}^{n\times n}}$ (i.e. is not the intersection of a finite collection of half spaces). Some authors use the name elliptope, an example of spectrahedron (set of matrices defined by linear matrix inequalities). The extremal points of ${\mathcal{C}_n}$ were studied by Ycart while the facial structure was explored by Laurent and Poljak. A useful parameterization of ${\mathcal{C}_n}$ using covariance graphs was developed by Kurowicka and Cooke.
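The minor characterization of ${\mathcal{C}_3}$ lends itself to a quick numerical sanity check; the Monte Carlo sketch below (sample sizes illustrative) also recovers the volume of ${\mathcal{C}_3}$ in the ${(a,b,c)}$ coordinates, which the closed formulas below give as ${\pi^2/2}$:

```python
import numpy as np

# Check the principal-minor characterization of C_3 against the spectral
# definition, and estimate the volume of C_3 in the (a, b, c) coordinates.
rng = np.random.default_rng(4)

def in_elliptope(a, b, c):
    # Valid when a, b, c lie in [-1, 1]: then all principal minors of the
    # matrix are nonnegative iff the determinant condition holds.
    return a ** 2 + b ** 2 + c ** 2 <= 1 + 2 * a * b * c

abc = rng.uniform(-1, 1, size=(200000, 3))
hits = in_elliptope(*abc.T)

# Cross-check a few points against positive semidefiniteness.
for a, b, c in abc[:100]:
    C = np.array([[1, a, c], [a, 1, b], [c, b, 1]])
    assert in_elliptope(a, b, c) == (np.linalg.eigvalsh(C).min() >= -1e-12)

volume = 8 * hits.mean()        # the cube [-1, 1]^3 has volume 8
print(volume, np.pi ** 2 / 2)   # the two values should be close
```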

The uniform probability measure on ${\mathcal{C}_n}$ is the normalized trace of the Lebesgue measure. Its simulation was studied by many people, and one can take a look at the works of Joe and of Lewandowski, Kurowicka, and Joe, where the volume of ${\mathcal{C}_n}$ is also computed in ${\mathbb{R}^{n(n-1)/2}}$:

$\mathrm{Volume}(\mathcal{C}_n) = \pi^{(n^2-1)/4}\frac{\prod_{k=1}^{(n-1)/2}\Gamma(2k)}{2^{(n-1)^2/4}\Gamma^{n-1}((n+1)/2)}$

if ${n}$ is odd, while if ${n}$ is even:

$\mathrm{Volume}(\mathcal{C}_n) = \pi^{n(n-2)/4}\frac{2^{(3n^2-4n)/4}\Gamma^n(n/2)\prod_{k=1}^{(n-2)/2}\Gamma(2k)}{\Gamma^{n-1}(n)}.$

If ${S}$ is a symmetric ${n\times n}$ positive semidefinite matrix then the matrix ${C}$ defined by

$C_{ij}=\frac{S_{ij}}{\sqrt{S_{ii}S_{jj}}}$

for every ${1\leq i,j\leq n}$ belongs to ${\mathcal{C}_n}$. If ${C}$ is a random element of ${\mathcal{C}_n}$ following the uniform law, then it is known that the off-diagonal entries of ${C}$, which are correlated, follow the same law, namely the Beta law of parameter ${(n/2,n/2)}$ on ${[-1,1]}$ (the density of the law of ${C}$ is also computable using a covariance graph representation, leading to the volumetric formulas above). This allows one to show that if ${S}$ is the empirical covariance matrix of a standard Gaussian sample in ${\mathbb{R}^n}$ then the random correlation matrix constructed from ${S}$ (which is the empirical correlation matrix) does not follow the uniform law on ${\mathcal{C}_n}$. One may ask whether or not the asymptotic spectral properties of a uniform random element of ${\mathcal{C}_n}$, as ${n\rightarrow\infty}$, are identical to those of Gaussian empirical covariance matrices (Marchenko-Pastur quarter circular law). I have not tested this numerically by simulation.
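The normalization ${C_{ij}=S_{ij}/\sqrt{S_{ii}S_{jj}}}$ is one line of NumPy; a sketch building the empirical correlation matrix of a Gaussian sample (sizes illustrative):

```python
import numpy as np

# Build a correlation matrix from a positive semidefinite matrix S via
# C_ij = S_ij / sqrt(S_ii S_jj), with S an empirical covariance matrix.
rng = np.random.default_rng(5)
n, samples = 5, 10000

data = rng.standard_normal((samples, n))
S = np.cov(data, rowvar=False)         # empirical covariance matrix
d = np.sqrt(np.diag(S))
C = S / np.outer(d, d)                 # empirical correlation matrix

assert np.allclose(np.diag(C), 1.0)            # unit diagonal
assert np.allclose(C, C.T)                     # symmetry
assert np.linalg.eigvalsh(C).min() >= -1e-10   # positive semidefinite
```

As recalled above, this empirical correlation matrix does not follow the uniform law on ${\mathcal{C}_n}$.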

The set ${\mathcal{M}_n}$ of stochastic matrices. A real ${n\times n}$ matrix ${M}$ is a stochastic matrix (also called a Markov matrix) when all its entries are non-negative and all rows sum up to one. This means that ${M_{ij}\geq0}$ and ${M_{i1}+\cdots+M_{in}=1}$ for every ${1\leq i,j\leq n}$. In particular, ${M_{ij}\in[0,1]}$ for all ${1\leq i,j\leq n}$, and every row belongs to the non-negative portion of the ${\ell^1_n}$ unit sphere. The name Markov comes from the fact that such matrices are exactly the transition matrices of finite Markov chains on a state space of cardinality ${n}$. The set ${\mathcal{M}_n}$ of all ${n\times n}$ Markov matrices is convex and compact. It is a polytope of ${\mathbb{R}^{n\times n}}$ (i.e. an intersection of half spaces) with ${n(n-1)}$ degrees of freedom. The set ${\mathcal{M}_n}$ contains the identity matrix ${I_n}$ and is stable under the matrix product (it is thus a semigroup for this product).

If ${E}$ is a ${n\times n}$ matrix with non-negative entries then the ${n\times n}$ matrix ${M}$, defined by

$M_{ij}=\frac{E_{ij}}{E_{i1}+\cdots+E_{in}}$

for every ${1\leq i,j\leq n}$, belongs to ${\mathcal{M}_n}$. If the entries of ${E}$ are i.i.d. exponential random variables then the Markov matrix ${M}$ constructed from ${E}$ follows the uniform law on ${\mathcal{M}_n}$, which is the normalized trace of the Lebesgue measure. This family of random matrices is called the Dirichlet Markov Ensemble since the rows of such random matrices are i.i.d. and follow the Dirichlet distribution with parameter ${(1,\ldots,1)}$. Note that the entries of this matrix are identically distributed and follow a Beta law, and are correlated within each row and independent across rows. The law of large numbers gives ${E_{i1}+\cdots+E_{in}=n(1+o(1))}$ and this suggests that the asymptotic spectral properties of such a random ${M}$ as ${n\rightarrow\infty}$ are identical to those of ${E}$ (Marchenko-Pastur quarter circular law and Girko circular law), see cb-pc-dc.
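The Dirichlet Markov Ensemble construction above is a two-liner; a sketch (dimension illustrative):

```python
import numpy as np

# Draw a uniform random Markov matrix (Dirichlet Markov Ensemble) by
# normalizing the rows of a matrix of i.i.d. exponential entries.
rng = np.random.default_rng(6)
n = 200

E = rng.exponential(1.0, size=(n, n))
M = E / E.sum(axis=1, keepdims=True)

assert np.all(M >= 0)
assert np.allclose(M.sum(axis=1), 1.0)   # each row is a probability vector

# Each row follows the Dirichlet(1, ..., 1) law, so each entry has mean
# 1/n and variance (n - 1) / (n^2 (n + 1)).
print(M.var(), (n - 1) / (n ** 2 * (n + 1)))
```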

Stochastic group. Let us consider the set of all ${n\times n}$ real matrices such that ${M}$ is invertible and ${M_{i1}+\cdots+M_{in}=1}$ for all ${1\leq i\leq n}$ (note that the entries of ${M}$ are allowed to take negative values). This set is actually a group for the matrix product, sometimes called the stochastic group, and we refer to Poole for more details. This set is neither convex nor compact and I do not know its asymptotic spectral properties if it is equipped with a probability measure (notion of uniform law via constrained maximum entropy?).

The set ${\mathcal{B}_n}$ of doubly stochastic matrices. A real ${n\times n}$ matrix ${B}$ is doubly stochastic (we also say bistochastic) if ${B}$ and ${B^\top}$ are both stochastic matrices. This means that ${B_{i1}+\cdots+B_{in}=B_{1i}+\cdots+B_{ni}=1}$ for every ${1\leq i\leq n}$. In particular, ${B_{ij}\in[0,1]}$ for every ${1\leq i,j\leq n}$. The set ${\mathcal{B}_n}$ is convex and compact, and forms a polytope of ${\mathbb{R}^{n\times n}}$ with ${(n-1)^2}$ degrees of freedom. It was shown by Canfield and McKay that in ${\mathbb{R}^{(n-1)\times(n-1)}}$,

$\mathrm{Volume}(\mathcal{B}_n) =\frac{\exp\left(\frac{1}{3}+n^2+o(1)\right)}{(2\pi)^{n-1/2}n^{(n-1)^2}n^{n-1}}.$

The Birkhoff polytope was remarkably studied by Barvinok, and a discrete version (contingency tables) by Barvinok and Hartigan.

The set ${\mathcal{B}_n}$ contains the identity matrix and is stable under the matrix product (semigroup!). The matrix with all entries equal to ${1/n}$ belongs to ${\mathcal{B}_n}$ and is sometimes called the Wedderburn matrix. The van der Waerden conjecture, solved by Falikman, states that the Wedderburn matrix achieves the minimum of the permanent on ${\mathcal{B}_n}$. The set ${\mathcal{B}_n}$ is referred to as the Birkhoff polytope (that is why we use the notation ${\mathcal{B}_n}$). A celebrated theorem of Birkhoff and von Neumann states that ${\mathcal{B}_n}$ is the convex hull of the permutation matrices. It can be shown, using the Minkowski-Carathéodory theorem, that each element of ${\mathcal{B}_n}$ is a convex combination of at most ${(n-1)^2+1}$ permutation matrices (which depend on the particular element). The set ${\mathcal{B}_n}$ is also known as the assignment polytope or the transportation polytope, since each element corresponds to a transportation map between ${n}$ unit sources and ${n}$ unit targets. One should not confuse ${\mathcal{B}_n}$ with the permutahedron. The set ${\mathcal{B}_n}$ plays a role in diverse areas and can be used for instance in the proof of the Hoffman-Wielandt inequality in matrix analysis.
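The Birkhoff-von Neumann theorem suggests an explicit peeling algorithm: repeatedly find a permutation supported on the positive entries and subtract as much of it as possible. A brute-force sketch for small ${n}$ (helper names are mine; the bound ${(n-1)^2+1}$ recalled above is about existence, and this greedy variant may use more pieces):

```python
import itertools
import numpy as np

def birkhoff_decompose(B, tol=1e-12):
    """Greedy Birkhoff-von Neumann decomposition (brute force over all
    permutations, so only suitable for small n)."""
    n = B.shape[0]
    B = B.copy()
    pieces = []                                   # list of (weight, sigma)
    while True:
        # Permutation maximizing the bottleneck entry along its support.
        best = max(itertools.permutations(range(n)),
                   key=lambda s: min(B[i, s[i]] for i in range(n)))
        theta = min(B[i, best[i]] for i in range(n))
        if theta <= tol:
            break
        for i in range(n):
            B[i, best[i]] -= theta                # zeroes at least one entry
        pieces.append((theta, best))
    return pieces

# Example: decompose a doubly stochastic matrix built as a known mixture.
rng = np.random.default_rng(7)
n = 4
weights = rng.dirichlet(np.ones(6))
perms = [rng.permutation(n) for _ in weights]
B = sum(w * np.eye(n)[p] for w, p in zip(weights, perms))

pieces = birkhoff_decompose(B)
recon = sum(w * np.eye(n)[list(s)] for w, s in pieces)
assert np.allclose(recon, B)
assert abs(sum(w for w, _ in pieces) - 1.0) < 1e-6
```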

The uniform law ${\mathcal{L}_n}$ on ${\mathcal{B}_n}$ is the normalized trace of the Lebesgue measure. The law ${\mathcal{L}_n}$ has identical marginal laws, and is invariant under permutations of rows and of columns, but is not exchangeable (i.e. invariant under any permutation of the entries). Let us consider the random walk on ${\mathcal{B}_n}$ obtained at each step by selecting randomly two rows and two columns and transferring a random amount of mass between the corresponding four entries. This is a Markov chain which admits ${\mathcal{L}_n}$ as an invariant law. Its speed of convergence was studied by Diaconis. More recently, Chatterjee, Diaconis, and Sly have shown that many asymptotic properties of random matrices drawn from ${\mathcal{L}_n}$ are identical to those of random matrices with i.i.d. exponential entries. In particular, the entries are close to the exponential distribution of mean ${1/n}$ and the Marchenko-Pastur quarter circular law holds true. It is conjectured that the Girko circular law also holds (we do not know how to adapt the proof of Tao and Vu).
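One step of this random walk is easy to implement; a sketch, where the random amount of mass is taken uniform on the admissible interval (my reading of the chain):

```python
import numpy as np

def step(B, rng):
    """One move of the random walk on the Birkhoff polytope: transfer a
    random amount of mass inside a random 2x2 pattern of entries."""
    n = B.shape[0]
    i, j = rng.choice(n, size=2, replace=False)   # two distinct rows
    k, l = rng.choice(n, size=2, replace=False)   # two distinct columns
    # Adding t to B[i,k], B[j,l] and subtracting it from B[i,l], B[j,k]
    # preserves all row and column sums; t is constrained by nonnegativity.
    t = rng.uniform(-min(B[i, k], B[j, l]), min(B[i, l], B[j, k]))
    B = B.copy()
    B[i, k] += t; B[j, l] += t
    B[i, l] -= t; B[j, k] -= t
    return B

rng = np.random.default_rng(8)
n = 5
B = np.full((n, n), 1.0 / n)     # start at the Wedderburn matrix
for _ in range(1000):
    B = step(B, rng)

assert B.min() >= -1e-12
assert np.allclose(B.sum(axis=0), 1.0) and np.allclose(B.sum(axis=1), 1.0)
```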

Unistochastic matrices. If ${U}$ is an ${n\times n}$ unitary matrix then the matrix ${(|U_{ij}|^2)_{1\leq i,j\leq n}}$ belongs to ${\mathcal{B}_n}$ (its rows and columns sum to one since the rows and columns of ${U}$ are unit vectors). Such matrices are called unistochastic by physicists. However, it turns out that many elements of ${\mathcal{B}_n}$ are not unistochastic.

Tridiagonal doubly stochastic matrices. The subset of tridiagonal elements of ${\mathcal{B}_n}$ corresponds to birth and death processes on a finite state space of cardinality ${n}$ which are symmetric with respect to the uniform law. The uniform distribution on this remarkable subset was studied by Diaconis and Wood.

Permutation matrices. Recall that ${P}$ is an ${n\times n}$ permutation matrix when there exists a permutation ${\sigma}$ of ${\{1,\ldots,n\}}$ such that ${P_{ij}=\mathbf{1}_{i=\sigma(j)}}$ for every ${1\leq i,j\leq n}$. The set ${\mathcal{P}_n}$ of ${n\times n}$ permutation matrices is the set of extremal points of the convex polytope ${\mathcal{B}_n}$. The set ${\mathcal{P}_n}$ is a finite subgroup of ${\mathrm{GL}_n}$ isomorphic to the symmetric group ${\Sigma_n}$. The spectrum of a permutation matrix ${P}$ associated to ${\sigma\in\Sigma_n}$ is actually a very nice object which is related to its cycle decomposition. Namely, if ${\sigma}$ has exactly ${1\leq r\leq n}$ cycles, with lengths ${\ell_1,\ldots,\ell_r}$, then ${P}$ has the spectrum of the block diagonal matrix

$B_1\oplus\cdots\oplus B_r$

where ${B_k}$ is ${\ell_k\times\ell_k}$ of the form (the cycle is a right shift on ${\mathbb{Z}/\ell_k\mathbb{Z}}$)

$\begin{pmatrix} 0 & 1 & 0 & 0 & \cdots & 0 \\ 0 & 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \vdots & \vdots & & \vdots \\ 0 & 0 & 0 & 0 & \cdots & 1 \\ 1 & 0 & 0 & 0 & \cdots & 0 \end{pmatrix}.$

The spectrum of ${B_k}$ is given by the solutions of the equation ${\lambda^{\ell_k}=1}$ (the ${\ell_k}$-th roots of unity). The spectrum of ${P}$ is the union of the spectra of ${B_1,\ldots,B_r}$. In particular, the spectrum of ${P}$ is included in the unit circle ${\{z\in\mathbb{C}:|z|=1\}}$ and depends only on ${c_1,\ldots,c_n}$ where ${c_k}$ is the number of cycles of length ${k}$ of ${\sigma}$. Note that ${c_1}$ is the number of fixed points of ${\sigma}$, and ${c_1+2c_2+\cdots+nc_n=n}$. Now if ${P}$ is uniformly distributed on ${\mathcal{P}_n}$ then ${\sigma}$ is uniformly distributed on ${\Sigma_n}$ and this allows one to describe the spectrum of ${P}$ via an Ewens distribution with parameter ${\theta=1}$, which gives rise as ${n\rightarrow\infty}$ to a Poisson-Dirichlet distribution. The spectrum of uniform permutation matrices was studied for instance by Wieand, by Hambly, Keevash, O’Connell, and Stark, and more recently by Ben Arous and Dang. We do not know how to exploit these results in order to study the spectrum of uniform random elements of ${\mathcal{B}_n}$.
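The cycle description of the spectrum can be checked against a direct numerical diagonalization; a sketch (helper names are mine, and the code uses the transpose convention ${P_{i\sigma(i)}=1}$, which has the same spectrum):

```python
import numpy as np

# The spectrum of a permutation matrix is the multiset of the l-th roots
# of unity over its cycle lengths l; compare with numerical eigenvalues.
def cycle_lengths(sigma):
    seen, lengths = set(), []
    for start in range(len(sigma)):
        if start not in seen:
            length, j = 0, start
            while j not in seen:
                seen.add(j)
                j = sigma[j]
                length += 1
            lengths.append(length)
    return lengths

rng = np.random.default_rng(9)
n = 8
sigma = rng.permutation(n)
P = np.eye(n)[sigma]                      # P[i, sigma[i]] = 1

spectrum = np.concatenate([np.exp(2j * np.pi * np.arange(l) / l)
                           for l in cycle_lengths(sigma)])
eigs = np.linalg.eigvals(P)

# Compare the two multisets, sorted by rounded real and imaginary parts.
key = lambda z: (np.round(z.real, 8), np.round(z.imag, 8))
assert np.allclose(sorted(spectrum, key=key), sorted(eigs, key=key))
```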

Note. I have learned the existence of the set of correlation matrices from Gérard Letac, an eclectic mind who was one of my teachers years ago.

I use, like many others, the free software GNU Octave for my numerical computations. Just like Scilab, Octave serves as a good free alternative to the commercial software Matlab. I prefer Octave over Scilab because it belongs to the GNU Project and because its syntax is more compatible with that of Matlab (an official objective of Octave). Scilab is quite popular in France for various reasons. Actually, one can also use the more recent Python packages matplotlib, SciPy, and NumPy. You may browse the online matplotlib gallery. The installation of matplotlib/SciPy/NumPy is fairly easy on a Debian GNU/Linux system:

sudo apt-get install python-matplotlib ipython

If you do not know it yet, IPython is “an interactive shell for the Python programming language that offers enhanced introspection, additional shell syntax, tab completion and rich history”.
