# Month: September 2014

Have you ever opened The Oxford Handbook of Random Matrix Theory edited by Akemann, Baik, and Di Francesco? The short foreword by Freeman Dyson is fun. I was intrigued, on page 16, by a sentence in chapter 2 (“History – An overview”, by Bohigas and Weidenmüller):

… That search and, as he says (see page 225 of Ref. [Por65]), the ‘accidental’ discovery of the Wishart ensemble of random matrices in an early version of the book by Wilks [Wil62], motivated Wigner to use random matrices.

I already knew Wilks’s book [Wil62], but not [Por65], which turned out to be a book:

Statistical Theories of Spectra: Fluctuations, A Collection of Reprints and Original Papers. With an introductory Review by Charles E. Porter, Brookhaven National Laboratory, New York. Academic Press, 1965.

I’ve bought a used copy of this book. Wigner’s paper appears on pages 223–225 of conference proceedings:

Proceedings of the International Conference on the Neutron Interactions with the Nucleus. Held at Columbia University, New York, September 9-13, 1957. Published by the United States Atomic Energy Commission, Technical Information Service Extension, Oak Ridge, Tennessee.

Talk IIB1. E. P. Wigner, Princeton University. Presented by E. P. Wigner.

Distribution of Neutron Resonance Level Spacing.

Session IIB. Interpretation of Low Energy Neutron Spectroscopy.

Chairman: W. W. Havens, Jr.

The problem of the spacing of levels is neither a terribly important one nor have I solved it. That is really the point which I want to make very definitely. As we go up in the energy scale it is evident that the detailed analyses which we have seen for low energy levels is not possible, and we can only make statistical statements about the properties of the energy levels. Statistical properties refer particularly to two quantities, to wit, the distribution of the widths of energy levels and the spacing of energy levels. As far as the widths of the energy levels is concerned, this is an important problem which has been solved experimentally principally by Hughes, Harvey and the group which was at the time at Brookhaven. The theoretical work, which resulted in a relatively minor correction, was done by Scott, Porter and Thomas. The distribution of widths is rather important, as has been shown by Mr. Dresner, since the use of different distributions can introduce a factor of 3 or more in the determination of the average cross sections. Nothing of similar importance can be said about the spacing except that it is tantalizing not to know what the probability of a certain spacing of the energy levels is.

Now, if we have an average spacing, let us say D, what is the probability of a spacing X? This question is the one which I have not solved, but perhaps I can tell you something about this which is relatively easy to see. It was pointed out at the Amsterdam conference last year that the probability is 0 at X = 0. This is rather obvious, and follows from the fact that if you have a symmetric matrix, the probability that two energy levels coincide is governed by the requirement that two parameters be 0, rather than that one parameter be 0. This was pointed out by Dr. von Neumann and myself in an article which everybody has probably forgotten, but it is a very easy thing to show. If I take, for instance, a two-dimensional matrix then the secular equation reads: $$E =\tfrac12(E_1+ E_2) \pm \tfrac12\sqrt{(E_1 - E_2)^2 + 4V_{12}^2}.$$ The two roots can coincide only if the square root is equal to zero, which requires that both $E_1 - E_2 = 0$ and $V_{12} =0$. If you have a third order equation, which is slightly more complicated, that tells you two roots coincide, it again amounts to two equations rather than one equation. As a result, the probability is 0 and, therefore, the probability curve looks something like that in Figure 1. This is of course what is also observed. The only question is how does this look for large spacings, and the answer to this question is what I don’t know.

Dr. Hughes presented a curve which I have not seen, in which he compared an experiment with a curve which I suggested some time ago, a form $Xe^{-X^2/2}$ or something like this. Hughes found reasonably good agreement, but the agreement was good in the region of small X, and not very good in the region of large X. Agreement in this large X region would not look so bad if the experimental curve were the theoretical curve and the theoretical curve, the experimental curve. Because if you could calculate that there should be four cases with a given spacing and you find only one, then that’s unfortunate, but it is a reasonable statistical deviation. However, if you calculate that there should only be one and find four, that is a much more serious deviation and much less probable statistically.

Let me say only one more word. It is very likely that the curve in Figure 1 is a universal function. In other words, it doesn’t depend on the details of the model with which you are working. There is one particular model in which the probability of the energy levels can be written down exactly. I mentioned this distribution already in Gatlinburg. It is called the Wishart distribution. Consider a set of symmetric matrices in such a way that the diagonal element $m_{11}$ has a distribution $\exp(-m_{11}^2/4)$. In other words, the probability that this diagonal element shall assume the value $m_{11}$ is proportional to $\exp(-m_{11}^2/4)$. Then as I mentioned, and this was shown a long time ago by Wishart, the probability for the characteristic roots to be $\lambda_1, \lambda_2, \lambda_3, \ldots, \lambda_n$, if this is an $n$ dimensional matrix, is given by the expression: $$\exp(-\tfrac12(\lambda_1^2+\lambda_2^2+\cdots+\lambda_n^2))\,[(\lambda_1-\lambda_2)(\lambda_1-\lambda_3)\cdots(\lambda_{n-1}-\lambda_n)].$$ You see again this characteristic factor which shows that the probability of two roots coinciding is 0. If you want to calculate from this formula the probability that two successive roots have a distance X, then you have to integrate over all of them except two. This is very easy to do for the first integration, possible to do for the second integration, but when you get to the third, fourth and fifth, etc., integrations you have the same problem as in statistical mechanics, and presumably the solution of the problem will be accomplished by one of the methods of statistical mechanics. Let me only mention that I did integrate over all of them except one, and the result is $\frac{1}{2\pi}\sqrt{4n-\lambda^2}$. This is the probability that the root shall be $\lambda$. All I have to do is to integrate over one less variable than I have integrated over, but this I have not been able to do so far.

Discussion

W. HAVENS: Where does one find out about a Wishart distribution?

E. WIGNER: A Wishart distribution is given in S. S. Wilks’ book about statistics, and I found it just by accident.

Note. Edward Teller is among other things credited as being the father of the US H-bomb. He is also one of the authors of the famous 1953 paper entitled Equation of State Calculations by Fast Computing Machines, together with Nicholas Metropolis, Arianna Rosenbluth, Marshall Rosenbluth, and Augusta Teller, which introduced the so-called Metropolis-Hastings algorithm. Eugene Wigner received the Nobel Prize in Physics in 1963 “for his contributions to the theory of the atomic nucleus and the elementary particles, particularly through the discovery and application of fundamental symmetry principles”. Wigner is often considered a hidden genius, in comparison with Einstein.

This post is about the discrete Dirichlet problem and the Gaussian free field, linked with the random walk on ${\mathbb{Z}^d}$. Discreteness allows one to reach these concepts with minimal abstraction. The Gaussian free field (GFF) is an important Gaussian object which appears, like most Gaussian objects, as a limiting object in many models of Statistical Physics.

Symmetric Nearest Neighbors random walk. The symmetric nearest neighbors random walk on ${\mathbb{Z}^d}$ is the sequence of random variables ${X={(X_n)}_{n\geq0}}$ defined by the linear recursion

$X_{n+1}=X_n+\xi_{n+1}=X_0+\xi_1+\cdots+\xi_{n+1}$

where ${{(\xi_n)}_{n\geq1}}$ is a sequence of independent and identically distributed random variables on ${\mathbb{Z}^d}$, independent of ${X_0}$, and uniformly distributed on the discrete ${\ell^1}$ sphere ${\{\pm e_1,\ldots,\pm e_d\}}$, where ${e_1,\ldots,e_d}$ is the canonical basis of ${\mathbb{R}^d}$. The term symmetric comes from the fact that in each of the ${d}$ dimensions, both directions are equally probable in the law of the increments.
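As a quick sanity check, here is a small Python sketch (the code at the end of this post is in Julia; Python is used here purely for convenience, and the function name is illustrative) simulating this walk by drawing increments uniformly from ${\{\pm e_1,\ldots,\pm e_d\}}$.

```python
import numpy as np

def walk(d, n, rng):
    """Simulate n steps of the symmetric nearest-neighbors walk on Z^d."""
    axes = rng.integers(0, d, size=n)        # which coordinate moves
    signs = rng.choice([-1, 1], size=n)      # in which direction
    steps = np.zeros((n, d), dtype=int)
    steps[np.arange(n), axes] = signs        # each step is ±e_i
    # path X_0 = 0, X_k = xi_1 + ... + xi_k
    return np.vstack([np.zeros((1, d), dtype=int), np.cumsum(steps, axis=0)])

rng = np.random.default_rng(0)
path = walk(2, 1000, rng)
# every increment lies on the discrete l^1 sphere, as required
assert (np.abs(np.diff(path, axis=0)).sum(axis=1) == 1).all()
```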

The sequence ${X}$ is a homogeneous Markov chain with state space ${\mathbb{Z}^d}$ since

$\begin{array}{rcl} \mathbb{P}(X_{n+1}=x_{n+1}\,|\,X_n=x_n,\ldots,X_0=x_0) &=&\mathbb{P}(X_{n+1}=x_{n+1}\,|\,X_n=x_n)\\ &=&\mathbb{P}(X_1=x_{n+1}\,|\,X_0=x_n)\\ &=&\mathbb{P}(\xi_1=x_{n+1}-x_n). \end{array}$

Its transition kernel ${P:\mathbb{Z}^d\times\mathbb{Z}^d\rightarrow[0,1]}$ is given for every ${x,y\in\mathbb{Z}^d}$ by

$P(x,y) = \mathbb{P}(X_{n+1}=y\,|\,X_n=x) =\frac{\mathbf{1}_{\{\left|x-y\right|_1=1\}}}{2d} \quad\mbox{where}\quad \left|x-y\right|_1=\sum_{k=1}^d\left|x_k-y_k\right|.$

It is an infinite Markov transition matrix. It acts on a bounded function ${f:\mathbb{Z}^d\rightarrow\mathbb{R}}$ as

$(Pf)(x)=\sum_{y\in\mathbb{Z}^d}P(x,y)f(y)=\mathbb{E}(f(X_1)\,|\,X_0=x).$

The sequence ${X}$ is also a martingale for the filtration ${{(\mathcal{F}_n)}_{n\geq0}}$ defined by ${\mathcal{F}_0=\sigma(X_0)}$ and ${\mathcal{F}_{n+1}=\sigma(X_0,\xi_1,\ldots,\xi_{n+1})}$. Indeed, by measurability and independence,

$\mathbb{E}(X_{n+1}\,|\,\mathcal{F}_n) =X_n+\mathbb{E}(\xi_{n+1}\,|\,\mathcal{F}_n) =X_n+\mathbb{E}(\xi_{n+1}) =X_n.$

Dirichlet problem. For any bounded function ${f:\mathbb{Z}^d\rightarrow\mathbb{R}}$ and any ${x\in\mathbb{Z}^d}$,

$\mathbb{E}(f(X_{n+1})\,|\,X_n=x)-f(x) =(Pf)(x)-f(x) =(\Delta f)(x) \quad\mbox{where}\quad \Delta=P-I.$

Here ${I}$ is the identity operator ${I(x,y)=\mathbf{1}_{\{x=y\}}}$. The operator

$\Delta =P-I$

is the generator of the symmetric nearest neighbors random walk. It is a discrete Laplace operator, which computes the difference between the value at a point and the mean on its neighbors for the ${\ell^1}$ distance:

$(\Delta f)(x) =\left\{\frac{1}{2d}\sum_{y:\left|y-x\right|_1=1}f(y)\right\}-f(x) =\frac{1}{2d}\sum_{y:\left|y-x\right|_1=1}(f(y)-f(x)).$
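This mean-minus-value formula is easy to evaluate by brute force over the ${2d}$ neighbors. A minimal Python sketch (illustrative, not from the post) checks two elementary facts: affine functions are harmonic, and ${f(x)=x_1^2}$ has ${\Delta f=1/d}$.

```python
import numpy as np

def discrete_laplacian(f, x, d):
    # (Δf)(x) = (1/(2d)) * sum over the 2d neighbors y of (f(y) - f(x))
    x = np.asarray(x, dtype=int)
    acc = 0.0
    for i in range(d):
        for s in (-1, 1):
            y = x.copy()
            y[i] += s                      # neighbor x ± e_i
            acc += f(tuple(y)) - f(tuple(x))
    return acc / (2 * d)

# affine functions are harmonic: Δf = 0
assert abs(discrete_laplacian(lambda x: 3 * x[0] - x[1], (2, 5), 2)) < 1e-12
# f(x) = x_1^2 gives Δf = 1/d (here d = 2, so 1/2)
assert abs(discrete_laplacian(lambda x: x[0] ** 2, (2, 5), 2) - 0.5) < 1e-12
```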

We say that ${f}$ is harmonic on ${A\subset\mathbb{Z}^d}$ when ${\Delta f=0}$ on ${A}$, which means that at every point ${x}$ of ${A}$, the value of ${f}$ is equal to the mean of the values of ${f}$ over the ${2d}$ neighbors of ${x}$. The operator ${\Delta}$ is local in the sense that ${(\Delta f)(x)}$ depends only on the value of ${f}$ at ${x}$ and its nearest neighbors. Consequently, the value of ${\Delta f}$ on ${A}$ depends only on the values of ${f}$ on

$\bar{A}:=A\cup\partial\!A$

where

$\partial\!A:=\{y\not\in A:\exists x\in A,\left|x-y\right|_1=1\}$

is the exterior boundary of ${A}$. In this context, the Dirichlet problem consists in finding a function which is harmonic on ${A}$ and for which the value on ${\partial\!A}$ is prescribed. This problem is actually a linear algebra problem for which the result below provides a probabilistic expression of the solution, based on the stopping time

$\tau_{\partial\!A}:=\inf\{n\geq0:X_n\in\partial\!A\}.$

In Physics, the quantity ${f(x)}$ models typically the temperature at location ${x}$, while the harmonicity of ${f}$ on ${A}$ and the boundary condition on ${\partial\!A}$ express the thermal equilibrium and a thermostatic boundary, respectively.

Theorem 1 (Dirichlet problem) Let ${\varnothing\neq A\subset\mathbb{Z}^d}$ be finite. Then for any ${x\in A}$,

$\mathbb{P}_x(\tau_{\partial\!A}<\infty)=1.$

Moreover, for any ${g:\partial\!A\rightarrow\mathbb{R}}$, the function ${f:\bar{A}\rightarrow\mathbb{R}}$ defined for any ${x\in\bar{A}}$ by

$f(x)=\mathbb{E}_x(g(X_{\tau_{\partial\!A}}))$

is the unique solution of the system

$\left\{ \begin{array}{rl} f=g &\mbox{on } \partial\!A,\\ \Delta f=0&\mbox{on } A. \end{array} \right.$

When ${d=1}$ we recover the function which appears in the gambler's ruin problem. Recall that the image of a Markov chain by a function which is harmonic for the generator of the chain is a martingale (a discrete Itô formula!), and this explains a posteriori the probabilistic formula for the solution of the Dirichlet problem, thanks to the Doob optional stopping theorem.
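On a finite ${A}$ the Dirichlet problem is just a linear system. A small Python sketch (illustrative) in dimension ${d=1}$ with ${A=\{1,\ldots,L-1\}}$ and boundary data ${g(0)=0}$, ${g(L)=1}$: the unique solution is the gambler's ruin probability ${f(x)=x/L}$.

```python
import numpy as np

L = 10                     # A = {1,...,L-1}, boundary {0, L}
n = L - 1
# generator Δ = P - I restricted to A (tridiagonal since d = 1)
Delta = -np.eye(n) + 0.5 * (np.eye(n, k=1) + np.eye(n, k=-1))
# the boundary data g(0) = 0, g(L) = 1 moves to the right-hand side
b = np.zeros(n)
b[-1] = -0.5               # -P(L-1, L) * g(L)
f = np.linalg.solve(Delta, b)
# the unique solution is f(x) = P_x(hit L before 0) = x/L
assert np.allclose(f, np.arange(1, L) / L)
```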

The quantity ${f(x)=\mathbb{E}_x(g(X_{\tau_{\partial\!A}}))=\sum_{y\in\partial\!A}g(y)\mathbb{P}_x(X_{\tau_{\partial\!A}}=y)}$ is the mean of ${g}$ for the law ${\mu_x}$ on ${\partial\!A}$ called the harmonic measure, defined for any ${y\in\partial\!A}$ by

$\mu_x(y):=\mathbb{P}_x(X_{\tau_{\partial\!A}}=y).$

Proof: The property ${\mathbb{P}_x(\tau_{\partial\!A}<\infty)=1}$ follows from the Central Limit Theorem, or from a conditioning argument which gives a geometric upper bound for the tail of ${\tau_{\partial\!A}}$.

Let us check that the proposed function ${f}$ is a solution. For any ${x\in\partial\!A}$, we have ${\tau_{\partial\!A}=0}$ on ${\{X_0=x\}}$ and thus ${f=g}$ on ${\partial\!A}$. Let us show now that ${\Delta f=0}$ on ${A}$. We first reduce the problem by linearity to the case ${g=\mathbf{1}_{\{z\}}}$ with ${z\in\partial\!A}$. Next, we write, for any ${y\in\bar{A}}$,

$\begin{array}{rcl} f(y) &=&\mathbb{P}_y(X_{\tau_{\partial\!A}}=z)\\ &=&\sum_{n=0}^\infty\mathbb{P}_y(X_n=z,\tau_{\partial\!A}=n)\\ &=&\mathbf{1}_{y=z}+\mathbf{1}_{y\in A}\sum_{n=1}^\infty\sum_{x_1,\ldots,x_{n-1}\in A}P(y,x_1)P(x_1,x_2)\cdots P(x_{n-1},z). \end{array}$

On the other hand, since ${f=0}$ on ${\partial\!A\setminus\{z\}}$ and since ${\Delta}$ is local, we have, for any ${x\in A}$,

$(P f)(x) =\sum_{y\in\mathbb{Z}^d}P(x,y)f(y) =P(x,z)f(z)+\sum_{y\in A}P(x,y)f(y).$

As a consequence, for any ${x\in A}$,

$\begin{array}{rcl} (P f)(x) &=&P(x,z)f(z)+\sum_{n=1}^\infty\sum_{y,x_1,\ldots,x_{n-1}\in A} P(x,y)P(y,x_1)\cdots P(x_{n-1},z)\\ &=&P(x,z)f(z)+(f(x)-(\mathbf{1}_{x=z}+P(x,z))), \end{array}$

which is equal to ${f(x)}$ since ${\mathbf{1}_{x=z}=0}$ and ${f(z)=1}$. Hence ${Pf=f}$ on ${A}$, in other words ${\Delta f=0}$ on ${A}$.

To establish the uniqueness of the solution, we first reduce the problem by linearity to showing that ${f=0}$ is the unique solution when ${g=0}$. Next, if ${f:\bar{A}\rightarrow\mathbb{R}}$ is harmonic on ${A}$, the interpretation of ${\Delta f(x)}$ as a difference with the mean over the nearest neighbors shows that both the minimum and the maximum of ${f}$ on ${\bar{A}}$ are necessarily achieved (at least) on the boundary ${\partial\!A}$. But since ${f}$ vanishes on ${\partial\!A}$, it follows that ${f}$ vanishes on ${\bar{A}}$. ☐

Dirichlet problem and Green function. Here is a generalization of Theorem 1. When ${h=0}$ we recover Theorem 1.

Theorem 2 (Dirichlet problem and Green function) If ${\varnothing\neq A\subset\mathbb{Z}^d}$ is finite then for any ${g:\partial\!A\rightarrow\mathbb{R}}$ and ${h:A\rightarrow\mathbb{R}}$, the function ${f:\bar{A}\rightarrow\mathbb{R}}$ defined for any ${x\in\bar{A}}$ by

$f(x)=\mathbb{E}_x\left(g(X_{\tau_{\partial\!A}})+\sum_{n=0}^{\tau_{\partial\!A}-1}h(X_n)\right)$

is the unique solution of

$\left\{ \begin{array}{rl} f=g&\mbox{on } \partial\!A,\\ \Delta f=-h&\mbox{on } A. \end{array} \right.$

When ${d=1}$ and ${h=1}$, we recover the function which appears in the analysis of the gambler's ruin problem, namely the mean exit time ${f(x)=\mathbb{E}_x(\tau_{\partial\!A})}$.
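This case is easy to check numerically: in dimension ${d=1}$ with ${g=0}$ and ${h=1}$ on ${A=\{1,\ldots,L-1\}}$, solving ${\Delta f=-1}$ recovers the classical closed form ${\mathbb{E}_x(\tau_{\partial\!A})=x(L-x)}$. A Python sketch (illustrative):

```python
import numpy as np

L = 12
n = L - 1
# Δ restricted to A = {1,...,L-1}, with zero boundary condition built in
Delta = -np.eye(n) + 0.5 * (np.eye(n, k=1) + np.eye(n, k=-1))
# solve Δf = -h on A with f = 0 on {0, L} and h = 1
f = np.linalg.solve(Delta, -np.ones(n))
# f(x) = E_x(tau) has the classical closed form x(L - x)
x = np.arange(1, L)
assert np.allclose(f, x * (L - x))
```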

For any ${x\in A}$ we have

$f(x) =\sum_{y\in\partial\!A}g(y)\mathbb{P}_x(X_{\tau_{\partial\!A}}=y)+\sum_{y\in A}h(y)G_A(x,y)$

where ${G_A(x,y)}$ is the average number of passages at ${y}$ starting from ${x}$ and before escaping from ${A}$:

$G_A(x,y):=\mathbb{E}_x\left\{\sum_{n=0}^{\tau_{\partial\!A}-1}\mathbf{1}_{X_n=y}\right\} =\sum_{n=0}^\infty\mathbb{P}_x(X_n=y,n<\tau_{\partial\!A}).$

We say that ${G_A}$ is the Green function of the symmetric nearest neighbors random walk on ${A}$ killed at the boundary ${\partial\!A}$. It is the inverse of the restriction ${-\Delta_A}$ of ${-\Delta}$ to functions on ${\bar{A}}$ vanishing on ${\partial\!A}$:

$G_A=-\Delta_A^{-1}.$

Indeed, when ${g=0}$ and ${h=\mathbf{1}_{\{y\}}}$ we get ${f(x)=G_A(x,y)}$ and thus ${\Delta_AG_A=-I_A}$.
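In dimension ${d=1}$ the identity ${G_A=-\Delta_A^{-1}}$ can be checked against the classical closed form ${G_A(x,y)=2\min(x,y)(L-\max(x,y))/L}$ on ${A=\{1,\ldots,L-1\}}$ (a standard formula, not derived in this post). A Python sketch (illustrative):

```python
import numpy as np

L = 8
n = L - 1
Delta = -np.eye(n) + 0.5 * (np.eye(n, k=1) + np.eye(n, k=-1))
G = np.linalg.inv(-Delta)      # G_A = -Δ_A^{-1}
# classical closed form: G_A(x,y) = 2 min(x,y) (L - max(x,y)) / L
x, y = np.meshgrid(np.arange(1, L), np.arange(1, L), indexing="ij")
closed = 2 * np.minimum(x, y) * (L - np.maximum(x, y)) / L
assert np.allclose(G, closed)
```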

Proof: Thanks to Theorem 1 it suffices by linearity to check that ${f(x)=\mathbf{1}_{x\in A}G_A(x,z)}$ is a solution when ${g=0}$ and ${h=\mathbf{1}_{\{z\}}}$ with ${z\in A}$. Now for any ${x\in A}$,

$\begin{array}{rcl} f(x) &=&\mathbf{1}_{\{x=z\}}+\sum_{n=1}^\infty\mathbb{P}_x(X_n=z,n<\tau_{\partial\!A})\\ &=&\mathbf{1}_{\{x=z\}}+\sum_{y:\left|x-y\right|_1=1}\sum_{n=1}^\infty\mathbb{P}(X_n=z,n<\tau_{\partial\!A}\,|\,X_1=y)P(x,y)\\ &=&\mathbf{1}_{\{x=z\}}+\sum_{y:\left|x-y\right|_1=1}f(y)P(x,y) \end{array}$

thanks to the Markov property. We have ${f=h+Pf}$ on ${A}$, in other words ${\Delta f=-h}$ on ${A}$. ☐

Beyond the symmetric nearest neighbors random walk. The proofs of Theorem 1 and Theorem 2 remain valid for asymmetric nearest neighbors random walks on ${\mathbb{Z}^d}$, provided that we replace the generator ${\Delta}$ of the symmetric nearest neighbors random walk by the generator ${L:=P-I}$, which is still a local operator: ${\left|x-y\right|>1\Rightarrow L(x,y)=P(x,y)=0}$. One may even go beyond this framework by adapting the notion of boundary: ${\{y\not\in A:\exists x\in A, L(x,y)>0\}}$.

Gaussian free field. The Gaussian free field is a model of random Gaussian interface whose covariance matrix is the Green function of the discrete Laplace operator. More precisely, let ${\varnothing\neq A\subset\mathbb{Z}^d}$ be a finite set. An interface is a height function ${f:\bar{A}\rightarrow\mathbb{R}}$ which associates to each site ${x\in\bar{A}}$ a height ${f(x)}$, also called spin in the context of Statistical Physics. For simplicity, we impose a zero boundary condition: ${f=0}$ on the boundary ${\partial\!A}$ of ${A}$.

Let ${\mathcal{F}_A}$ be the set of interfaces ${f}$ on ${\bar{A}}$ vanishing at the boundary ${\partial\!A}$, which can be identified with ${\mathbb{R}^A}$. The energy ${H_A(f)}$ of the interface ${f\in\mathcal{F}_A}$ is defined by

$H_A(f)=\frac{1}{4d}\sum_{\substack{\{x,y\}\subset\bar{A}\\\left|x-y\right|_1=1}}(f(x)-f(y))^2,$

where ${f=0}$ on ${\partial\!A}$. The flatter the interface, the smaller the energy ${H_A(f)}$. Denoting ${\left<u,v\right>_A:=\sum_{x\in A}u(x)v(x)}$, we get, for any ${f\in\mathcal{F}_A}$,

$H_A(f) =\frac{1}{4d}\sum_{x\in A}\sum_{\substack{y\in\bar{A}\\\left|x-y\right|_1=1}}(f(x)-f(y))f(x) =\frac{1}{2}\left<-\Delta f,f\right>_A.$

Since ${H_A(f)=0}$ implies ${f=0}$, the quadratic form ${H_A}$ is positive definite and we can define the Gaussian law ${Q_A}$ on ${\mathcal{F}_A}$ which favors low energies:

$Q_A(df)=\frac{1}{Z_A}e^{-H_A(f)}\,df \quad\mbox{where}\quad Z_A:=\int_{\mathcal{F}_A}\!e^{-H_A(f)}\,df.$

This Gaussian law, called the Gaussian free field, is characterized by its mean ${m_A:A\rightarrow\mathbb{R}}$ and its covariance matrix ${C_A:A\times A\rightarrow\mathbb{R}}$, given for any ${x,y\in A}$ by

$m_A(x):=\int\!f_x\,Q_A(df)=0$

and

$C_A(x,y):=\int\!f_xf_y\,Q_A(df)-m_A(x)m_A(y)=-\Delta_A^{-1}(x,y)=G_A(x,y),$

where ${f_x}$ denotes the coordinate map ${f_x:f\in\mathcal{F}_A\mapsto f(x)\in\mathbb{R}}$.
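Concretely, sampling the GFF only requires a square root of the covariance ${G_A}$; a Cholesky factor works as well as the spectral square root used below. A Python sketch in dimension 1 (illustrative; the post's own simulation code, below, is in Julia):

```python
import numpy as np

L = 6
n = L - 1
Delta = -np.eye(n) + 0.5 * (np.eye(n, k=1) + np.eye(n, k=-1))
G = np.linalg.inv(-Delta)            # covariance of the GFF on A = {1,...,L-1}
S = np.linalg.cholesky(G)            # any square root with S S^T = G works
assert np.allclose(S @ S.T, G)
rng = np.random.default_rng(42)
field = S @ rng.standard_normal(n)   # one realization of the 1D GFF
assert field.shape == (n,)
```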

Simulation of the Gaussian free field. The simulation of the Gaussian free field can be done provided that we know how to compute a square root of ${G_A}$ in the sense of quadratic forms, using for instance a Cholesky factorization or diagonalization. Let us consider the square case:

$A=(0,L)^d\cap\mathbb{Z}^d=\{1,\ldots,L-1\}^d.$

This gives ${\bar{A}=\{0,1,\ldots,L\}^d}$. In this case, one can compute the eigenvectors and eigenvalues of ${\Delta_A}$ and deduce the ones of ${G_A=-\Delta_A^{-1}}$. In fact, the continuous Laplace operator ${\Delta}$ on ${[0,1]^d\subset\mathbb{R}^d}$, with Dirichlet boundary conditions, when defined on the Sobolev space ${\mathrm{H}_0^2([0,1]^d)}$, is a symmetric operator with eigenvectors ${\{e_n:n\in\{1,2,\ldots\}^d\}}$ and eigenvalues ${\{\lambda_n:n\in\{1,2,\ldots\}^d\}}$ given by

$e_n(t):=2^{d/2}\prod_{i=1}^d\sin(\pi n_i t_i), \quad\mbox{and}\quad \lambda_n:=-\pi^2\left|n\right|_2^2 =-\pi^2(n_1^2+\cdots+n_d^2).$

Similarly, the discrete Laplace operator ${\Delta_A}$ with Dirichlet boundary conditions on this ${A}$ has eigenvectors ${\{e^L_n:n\in A\}}$ and eigenvalues ${\{\lambda_n^L:n\in A\}}$ given for any ${k\in A}$ by

$e^L_n(k):=e_n\left(\frac{k}{L}\right) \quad\mbox{and}\quad \lambda_n^L:=-2\,\sum_{i=1}^d \sin^2\left(\frac{\pi n_i}{2L}\right).$
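These discrete eigenvectors and eigenvalues are easy to verify numerically. The Python sketch below (illustrative) checks ${\Delta_A e^L_n=\lambda_n^L e^L_n}$ in dimension 1 on ${A=\{1,\ldots,L-1\}}$:

```python
import numpy as np

L = 10
n = L - 1
# Δ_A in dimension 1, with the Dirichlet (killed) boundary built in
Delta = -np.eye(n) + 0.5 * (np.eye(n, k=1) + np.eye(n, k=-1))
k = np.arange(1, L)
for m in range(1, L):
    e = np.sqrt(2) * np.sin(np.pi * m * k / L)    # e_m^L on A
    lam = -2 * np.sin(np.pi * m / (2 * L)) ** 2   # lambda_m^L
    assert np.allclose(Delta @ e, lam * e)        # eigen relation holds
```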

The vectors ${e_n^L}$ vanish on the boundary ${\partial\!A}$ and satisfy ${\left<e^L_m,e^L_n\right>_A=L^d\,\mathbf{1}_{m=n}}$, so that ${L^{-d/2}e^L_n}$ is an orthonormal basis. It follows that for any ${x,y\in A}$,

$\Delta_A(x,y)=\frac{1}{L^d}\sum_{n\in A}\lambda_n^L\, e^L_n(x)e^L_n(y)$

and

$G_A(x,y)=\frac{1}{L^d}\sum_{n\in A}\left(-\lambda_n^L\right)^{-1} e^L_n(x)e^L_n(y).$

A possible matrix square root ${\sqrt{G_A}}$ of ${G_A}$ is given for any ${x,y\in A}$ by

$\sqrt{G_A}(x,y)=\frac{1}{L^d}\sum_{n\in A}\left(-\lambda_n^L\right)^{-1/2} e^L_n(x)e^L_n(y).$

Now if ${Z={(Z_y)}_{y\in A}}$ are independent and identically distributed Gaussian random variables with zero mean and unit variance, seen as a random vector of ${\mathbb{R}^A}$, then the random vector

$\sqrt{G_A}Z = \left(\sum_{y\in A}\sqrt{G_A}(x,y)Z_y\right)_{x\in A} =\left(\frac{1}{L^d}\sum_{y\in A}\sum_{n\in A}\left(-\lambda_n^L\right)^{-1/2}e_n^L(x)e_n^L(y)Z_y\right)_{x\in A}$

is distributed according to the Gaussian free field ${Q_A}$.
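The spectral square root can be cross-checked against ${G_A=-\Delta_A^{-1}}$: with the eigenvectors normalized to unit ${\ell^2}$ norm (dividing ${e^L_n}$ by ${\sqrt{L}}$ in dimension 1), the matrix ${S=\sum_n(-\lambda_n^L)^{-1/2}v_nv_n^\top}$ satisfies ${S^2=G_A}$. A Python sketch (illustrative):

```python
import numpy as np

L = 9
n = L - 1
Delta = -np.eye(n) + 0.5 * (np.eye(n, k=1) + np.eye(n, k=-1))
G = np.linalg.inv(-Delta)
k = np.arange(1, L)
# rows of V are the orthonormalized eigenvectors e_m^L / sqrt(L)
V = np.sqrt(2.0 / L) * np.sin(np.pi * np.outer(np.arange(1, L), k) / L)
mu = 2 * np.sin(np.pi * np.arange(1, L) / (2 * L)) ** 2   # -lambda_m^L > 0
S = V.T @ np.diag(mu ** -0.5) @ V     # spectral square root of G_A
assert np.allclose(S @ S, G)
rng = np.random.default_rng(7)
field = S @ rng.standard_normal(n)    # a GFF sample, as in the Julia code below
```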

Continuous analogue. The scaling limit ${\varepsilon\mathbb{Z}^d\rightarrow\mathbb{R}^d}$ leads to natural continuous objects (in time and space): the simple random walk becomes the standard Brownian Motion via the Central Limit Theorem, the discrete Laplace operator becomes the Laplace second order differential operator via a Taylor formula, while the discrete Gaussian free field becomes the continuous Gaussian free field (its covariance is the Green function of the Laplace operator). The continuous objects are less elementary since they require much more analytic and probabilistic machinery, but in return they provide differential calculus, a tool which is problematic in discrete settings due in particular to the lack of a chain rule.

Note. The probabilistic approach to the Dirichlet problem goes back at least to Shizuo Kakutani (1911–2004), who studied around 1944 the continuous version with Brownian Motion. This post is the English translation of an excerpt from a forthcoming book on stochastic models, written in French together with my old friend Florent Malrieu.

#
# load within julia using include("dirichlet.jl")
#
using Compose, Gadfly, Cairo # for graphics (spy, PNG)
#
nx, ny = 25, 25
f = zeros(nx,ny)
f[1,:], f[nx,:], f[:,1], f[:,ny] = 2, 2, -2, -2 # boundary data g
C = zeros(nx,ny)
N = 5000 # number of Monte Carlo runs per site
#
for x = 1:nx
  for y = 1:ny
    for k = 1:N
      # run the walk from (x,y) until it hits the boundary of the grid
      xx = x
      yy = y
      while ((xx > 1) && (xx < nx) && (yy > 1) && (yy < ny))
        b = rand(2) # b[1] picks the axis, b[2] the direction
        if (b[1] < .5)
          if (b[2] < .5)
            xx -= 1
          else
            xx += 1
          end
        elseif (b[2] < .5)
          yy -= 1
        else
          yy += 1
        end
      end # while
      C[x,y] += f[xx,yy] # accumulate the boundary value at the exit point
    end # for k
  end # for y
end # for x
# C/N approximates the harmonic extension E_x(g(X_tau)) of the boundary data
draw(PNG("dirichlet.png", 10cm, 10cm), spy(C/N))

#
# load within julia using include("gff.jl")
#
using Compose, Gadfly, Cairo # for 2D graphics
#using PyPlot # for 3D graphics (Gadfly does not provide 3D graphics)
#
# dimension 1
#
L = 500
Z = randn(L-1) # standard Gaussian vector
lambda = 2*sin(pi*[1:L-1]/(2*L)).^2             # lambda[n] = -lambda_n^L > 0
evec = sqrt(2)*sin(pi*kron([1:L-1],[1:L-1]')/L) # evec[n,k] = e_n^L(k)
GFF1 = zeros(L-1)
for x = 1:L-1
  for y = 1:L-1
    for n = 1:L-1
      # spectral square root of G_A applied to Z; the 1/L factor
      # normalizes the eigenvectors, whose squared l^2 norm is L
      GFF1[x] += (lambda[n])^(-1/2)*evec[n,x]*evec[n,y]*Z[y]/L
    end
  end
end
p = plot(x = [1:L-1], y = GFF1,
         Geom.line,
         Guide.xlabel(""),
         Guide.ylabel(""),
         #Guide.title("GFF"),
         Theme(default_color=color("black")))
draw(PNG("gff1.png", 10cm, 10cm), p)
#
# dimension 2
#
L = 20
Z = randn(L-1,L-1)
AUX = kron(ones(L-1)',[1:L-1]) # AUX[k,n] = k
lambda = 2*sin(pi*AUX/(2*L)).^2 + 2*sin(pi*AUX'/(2*L)).^2 # -lambda_n^L, n = (n1,n2)
evec = zeros(L-1,L-1,L-1,L-1) # indices n1,n2,k1,k2 : e_n^L(k)
for n1 = 1:L-1
  for n2 = 1:L-1
    AUX = kron(ones(L-1)',[1:L-1])
    evec[n1,n2,:,:] = reshape(2*sin(pi*n1*AUX/L).*sin(pi*n2*AUX'/L),(1,1,L-1,L-1))
  end
end
GFF2 = zeros(L-1,L-1)
for x1 = 1:L-1
  for x2 = 1:L-1
    for y1 = 1:L-1
      for y2 = 1:L-1
        for n1 = 1:L-1
          for n2 = 1:L-1
            # as in dimension 1, with normalization 1/L^2 in dimension 2
            aux = (lambda[n1,n2])^(-1/2)/L^2
            aux *= evec[n1,n2,x1,x2]
            aux *= evec[n1,n2,y1,y2]
            aux *= Z[y1,y2]
            GFF2[x1,x2] += aux
          end
        end
      end
    end
  end
end
# graphics with PyPlot
#surf(GFF2)
#savefig("gff2.svg")
draw(PNG("gff2.png", 10cm, 10cm), spy(GFF2))


Vélotaf, from vélo (bicycle) and taf (slang for work): using the bicycle as a means of transport to get to one's workplace.

Since this summer I have undertaken to ride to work by bicycle every day, so as to live with the times. A nice ride, from the center of Vincennes to porte Dauphine. One gets used to it quickly! Cycling is clean, silent, pleasant, and good for one's health, quite the opposite of most other modes of transport. Although much progress could still be made in urban infrastructure, it has never been so easy and so safe to cycle in Paris.

Vincennes–Dauphine express route (about 13.5 km). The fastest Vincennes–Dauphine route is fairly straight and follows métro line 1, except between Nation and Bastille, and after Étoile: avenue de Paris, porte and cours de Vincennes, place de la Nation, rue du Faubourg Saint-Antoine, place de la Bastille, rue Saint-Antoine, rue de Rivoli, place de la Concorde, avenue Gabriel (palais de l'Élysée), rue de Ponthieu, rue de Berri, avenue des Champs-Élysées, place Charles-de-Gaulle – Étoile, avenue Foch. Rather fast and protected, this route is pleasant and picturesque outside rush hours. The least easy portions are the end of rue du Faubourg Saint-Antoine, sometimes congested, and the end of the climb up the Champs-Élysées, steep and cobbled. The route includes two notable descents: the beginning of rue du Faubourg Saint-Antoine, and avenue Foch (for the arrival!). It is also possible to avoid the Champs-Élysées and place de l'Étoile by turning off towards the Seine, but beware of the Chaillot hill! Alternatively, one can start by crossing the bois de Vincennes instead, reaching Bastille via Daumesnil (a small climb), which gives a route of about 15 km.

Alternative Vincennes–Dauphine route (about 19 km). A bicycle path links the center of Vincennes to the bois de Vincennes, then, at Porte Dorée, joins the bicycle path along the boulevards des Maréchaux (like the tramway). The latter goes all the way around Paris, and in particular passes by Dauphine. The route is on the whole rather pleasant and fast. Less urban and picturesque than the previous one, it includes two good climbs and one very nice descent. The route is reversible, but the descent then becomes a climb.

Dauphine–Vincennes express route (about 13.5 km). The route goes through rue de Longchamp, place de Mexico (nice view of the Eiffel tower from the top of the hill), then downhill to place d'Iéna, avenue du Président Wilson (descent!), place de l'Alma, cours Albert premier – cours de la Reine (a nice bicycle path along the Seine, a delight in the evening), quai des Tuileries, quai François Mitterrand, quai de la Mégisserie, quai de l'Hôtel de Ville – quai des Célestins (bicycle path on the other side), boulevard Henri IV, place de la Bastille, rue de Lyon, avenue Daumesnil then boulevard Diderot (much faster in this direction than via the Faubourg Saint-Antoine), place de la Nation, cours de Vincennes, porte de Vincennes, avenue de Paris, avenue de la République, avenue Aubert. This beautiful route is an opportunity to admire the evening light on the Seine, the Invalides, the musée d'Orsay, etc.

Alternative Dauphine–Vincennes route (about 15 km). Same route as the previous one, but it follows avenue Daumesnil up to place Daumesnil (it climbs at the end!), then to porte Dorée (nice descent!), then the nice bicycle path through the bois de Vincennes.

Topography. Paris has a few hills around its Seine: on the right bank one finds Montmartre (131 m), Belleville (128.5 m), Ménilmontant (108 m), Buttes-Chaumont (103 m), Passy (71 m), Charonne (69 m), Chaillot (67 m), while on the left bank one finds Montsouris (78 m), Montparnasse (66 m), Butte-aux-Cailles (63 m), Montagne Sainte-Geneviève (61 m). Near Dauphine, Chaillot and Passy are the ones to avoid. It is worth recalling that the steepest slopes may well be on the flanks of the lowest hills (maths…)!

Mount. Choose a bicycle suited to the city: an urban hybrid to handle curbs, or a road bike for speed (I recommend Origine bikes), but perhaps not a mountain bike (avoid motorcycles without a motor). Mudguards are indispensable for the rain. A small luggage rack lets you carry a bag that you will no longer wear on your back. Lights, to be seen when evening comes. An internal gear hub allows shifting while stopped, and avoids derailments, maintenance, and grime! Why not disc brakes, for efficient braking, or even hydraulic disc brakes for the big spenders. Suspension is superfluous for urban use. Some people appreciate cycling shoes, which improve pedaling efficiency, and which are even available in “city” versions.

Sweating. Sweat comes mostly in the twenty minutes following the effort, after leaving the bicycle. Some can take a shower on arrival and do not deprive themselves of it. It is useful to carry a change of clothes in one's bag. Here are a few small tips:

• Anticipate as much as possible to minimize useless effort
• Do not try to catch up with whoever overtakes you!
• Do not wear the backpack on your back, put it on the luggage rack!
• Pedal on the descents to save time!
• Never force; play with the gears instead (a delight with the Shimano Alfine 8 or 11)
• Wear suitable clothing, or even “technical” clothing (especially in winter)
• Possibly opt for an electrically assisted bicycle (e-bike)!

Safety.

• Wear a helmet (it can save your life in an encounter with a curb)
• Avoid narrow streets, prefer protected main roads
• Make yourself visible (vest) and predictable (no hesitations)
• Beware of opening car doors, and of forgotten turn signals
• Keep both hands on the handlebars, ready to brake; check behind before turning
• Never stay in the blind spot of vehicles, especially buses and trucks
• Anticipate: traffic lights, buses, scooters, cyclists, pedestrians, slopes, tourists, pigeons, chickens, …

Étoile. Follow the blue line firmly and do not be afraid; there is no lack of room or visibility.

Cobblestones. Every rider of Paris–Roubaix knows it: do not linger on the cobblestones!

Aches. Cycling can relieve back pain. However, a badly adjusted bicycle can hurt. The saddle in particular must be neither too high nor too low. Alternating which foot you put down when stopped avoids always straining the same opposite knee when starting off.

Speed. In the city, one quickly reaches an average speed of 15 km/h while respecting the highway code (“zen” mode), which already makes the bicycle attractive. With a little less respect, one can exceed a 20 km/h average (“sport” mode) and taste the art of slipping through lights when the situation allows. The number of traffic lights matters. Example: Monday, September 15, 2014, Vincennes–Dauphine express route in moderate sport mode, departure at 7:30 in the morning, good traffic conditions, 40 minutes, i.e. an average of about 20 km/h.

Danger. Road professionals such as bus and taxi drivers control risk well and know how to assert themselves. They are not very dangerous. One must learn to do as they do! On the road, the danger comes rather from motorized two-wheelers, but also from occasional users, for instance rental vehicles, including bicycles! Finally, there is the famous uncontrolled door opening, which can also involve taxi passengers.

Theft. Theft is no joke. Even when locking one's bicycle solidly, twice, by both wheels and the frame, one can lose the pedals, or even the saddle or the handlebars, oh yes.

Metrology. A speedometer is superfluous. The Android application Google My Tracks is very practical. The Géovélo site is an alternative to Google Maps, and has an Android application. One can also record one's routes on Bikemap rather than on Google (pointed out by Amic Frouvelle; it only works on the iOS application for now). There is even an experimental route-planning program that minimizes the cyclist's energy (also pointed out by Amic).

Equipment. In the backpack placed on the luggage rack: a bungee cord to fix the backpack, a flat 15 mm wrench for the pedals, Allen keys (hex socket screws) for adjustments, small autonomous LED lights rechargeable by USB, a bicycle clip for the trousers, a puncture repair kit including a spare inner tube, tire levers, sandpaper, self-adhesive patches, and a small CO2 cartridge, a pump all the same, and raccoons.

Winter. To come!

Final word. Cycling gives you exercise; it makes no noise, does not pollute, and does not stink (though the cyclist…). In the city, it even becomes a fascinating video game!
