Random walk, Dirichlet problem, and Gaussian free field

September 16th, 2014
Solution of a Dirichlet problem obtained by Monte Carlo simulation of its probabilistic formulation, using the Julia programming language.

This post is about the discrete Dirichlet problem and the Gaussian free field, both linked to the random walk on \( {\mathbb{Z}^d} \). Discreteness allows one to reach the concepts with minimal abstraction.

Symmetric Nearest Neighbors random walk. The symmetric nearest neighbors random walk on \( {\mathbb{Z}^d} \) is the sequence of random variables \( {X={(X_n)}_{n\geq0}} \) defined by the linear recursion

\[ X_{n+1}=X_n+\xi_{n+1}=X_0+\xi_1+\cdots+\xi_{n+1} \]

where \( {{(\xi_n)}_{n\geq1}} \) is a sequence of independent and identically distributed random variables on \( {\mathbb{Z}^d} \), independent of \( {X_0} \), and uniformly distributed on the discrete \( {\ell^1} \) sphere \( {\{\pm e_1,\ldots,\pm e_d\}} \), where \( {e_1,\ldots,e_d} \) is the canonical basis of \( {\mathbb{R}^d} \). The term symmetric comes from the fact that in each of the \( {d} \) dimensions, both directions are equally probable in the law of the increments.
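To fix ideas, here is a minimal Julia sketch of this walk (Julia is the language used for the simulations illustrating this post, but the function and variable names below are ours):

# Minimal sketch: n steps of the symmetric nearest neighbors random walk on Z^d,
# whose increments are uniform on {±e_1,...,±e_d}.
function walk(d::Int, n::Int, x0::Vector{Int} = zeros(Int, d))
    X = copy(x0)
    path = [copy(X)]
    for _ in 1:n
        i = rand(1:d)                  # pick one of the d coordinates uniformly
        X[i] += rand(Bool) ? 1 : -1    # move by +1 or -1 along that coordinate
        push!(path, copy(X))
    end
    return path
end

path = walk(2, 10)   # ten steps of the planar walk started at the origin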

The sequence \( {X} \) is a homogeneous Markov chain with state space \( {\mathbb{Z}^d} \) since

\[ \begin{array}{rcl} \mathbb{P}(X_{n+1}=x_{n+1}\,|\,X_n=x_n,\ldots,X_0=x_0) &=&\mathbb{P}(X_{n+1}=x_{n+1}\,|\,X_n=x_n)\\ &=&\mathbb{P}(X_1=x_{n+1}\,|\,X_0=x_n)\\ &=&\mathbb{P}(\xi_1=x_{n+1}-x_n). \end{array} \]

Its transition kernel \( {P:\mathbb{Z}^d\times\mathbb{Z}^d\rightarrow[0,1]} \) is given for every \( {x,y\in\mathbb{Z}^d} \) by

\[ P(x,y) = \mathbb{P}(X_{n+1}=y\,|\,X_n=x) =\frac{\mathbf{1}_{\{\left|x-y\right|_1=1\}}}{2d} \quad\mbox{where}\quad \left|x-y\right|_1=\sum_{k=1}^d\left|x_k-y_k\right|. \]

It is an infinite Markov transition matrix. It acts on a bounded function \( {f:\mathbb{Z}^d\rightarrow\mathbb{R}} \) as

\[ (Pf)(x)=\sum_{y\in\mathbb{Z}^d}P(x,y)f(y)=\mathbb{E}(f(X_1)\,|\,X_0=x). \]

The sequence \( {X} \) is also a martingale for the filtration \( {{(\mathcal{F}_n)}_{n\geq0}} \) defined by \( {\mathcal{F}_0=\sigma(X_0)} \) and \( {\mathcal{F}_{n+1}=\sigma(X_0,\xi_1,\ldots,\xi_{n+1})} \). Indeed, by measurability and independence,

\[ \mathbb{E}(X_{n+1}\,|\,\mathcal{F}_n) =X_n+\mathbb{E}(\xi_{n+1}\,|\,\mathcal{F}_n) =X_n+\mathbb{E}(\xi_{n+1}) =X_n. \]

Dirichlet problem. For any bounded function \( {f:\mathbb{Z}^d\rightarrow\mathbb{R}} \) and any \( {x\in\mathbb{Z}^d} \),

\[ \mathbb{E}(f(X_{n+1})\,|\,X_n=x)-f(x) =(Pf)(x)-f(x) =(\Delta f)(x) \quad\mbox{where}\quad \Delta=P-I. \]

Here \( {I} \) is the identity operator \( {I(x,y)=\mathbf{1}_{\{x=y\}}} \). The operator

\[ \Delta =P-I \]

is the generator of the symmetric nearest neighbors random walk. It is a discrete Laplace operator, which computes the difference between the mean of the values over the nearest neighbors of a point (for the \( {\ell^1} \) distance) and the value at that point:

\[ (\Delta f)(x) =\left\{\frac{1}{2d}\sum_{y:\left|y-x\right|_1=1}f(y)\right\}-f(x) =\frac{1}{2d}\sum_{y:\left|y-x\right|_1=1}(f(y)-f(x)). \]

We say that \( {f} \) is harmonic on \( {A\subset\mathbb{Z}^d} \) when \( {\Delta f=0} \) on \( {A} \), which means that at every point \( {x} \) of \( {A} \), the value of \( {f} \) is equal to the mean of the values of \( {f} \) over the \( {2d} \) nearest neighbors of \( {x} \). The operator \( {\Delta} \) is local in the sense that \( {(\Delta f)(x)} \) depends only on the value of \( {f} \) at \( {x} \) and at its nearest neighbors. Consequently, the value of \( {\Delta f} \) on \( {A} \) depends only on the values of \( {f} \) on
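For instance, in dimension \( {d=1} \), taking \( {f(x)=x^2} \) gives

\[ (\Delta f)(x)=\frac{(x-1)^2+(x+1)^2}{2}-x^2=1, \]

so that \( {x\mapsto x^2} \) is nowhere harmonic, while every affine function \( {x\mapsto ax+b} \) is harmonic on the whole of \( {\mathbb{Z}} \).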

\[ \bar{A}:=A\cup\partial\!A \]

where

\[ \partial\!A:=\{y\not\in A:\exists x\in A,\left|x-y\right|_1=1\} \]

is the exterior boundary of \( {A} \). In this context, the Dirichlet problem consists in finding a function which is harmonic on \( {A} \) and whose values on \( {\partial\!A} \) are prescribed. This problem is actually a linear algebra problem, for which the result below provides a probabilistic expression of the solution, based on the stopping time

\[ \tau_{\partial\!A}:=\inf\{n\geq0:X_n\in\partial\!A\}. \]

In Physics, the quantity \( {f(x)} \) typically models the temperature at location \( {x} \), while the harmonicity of \( {f} \) on \( {A} \) and the boundary condition on \( {\partial\!A} \) express the thermal equilibrium and a thermostatic boundary, respectively.

Theorem 1 (Dirichlet problem) Let \( {\varnothing\neq A\subset\mathbb{Z}^d} \) be finite. Then for any \( {x\in A} \),

\[ \mathbb{P}_x(\tau_{\partial\!A}<\infty)=1. \]

Moreover, for any \( {g:\partial\!A\rightarrow\mathbb{R}} \), the function \( {f:\bar{A}\rightarrow\mathbb{R}} \) defined for any \( {x\in\bar{A}} \) by

\[ f(x)=\mathbb{E}_x(g(X_{\tau_{\partial\!A}})) \]

is the unique solution of the system

\[ \left\{ \begin{array}{rl} f=g &\mbox{on } \partial\!A,\\ \Delta f=0&\mbox{on } A. \end{array} \right. \]

When \( {d=1} \) we recover the function which appears in the gambler's ruin problem. Recall that the image of a Markov chain by a function which is harmonic for the generator of the chain is a martingale (discrete Itô formula!), and this explains a posteriori the probabilistic formula for the solution of the Dirichlet problem, thanks to the Doob optional stopping theorem.

The quantity \( {f(x)=\mathbb{E}_x(g(X_{\tau_{\partial\!A}}))=\sum_{y\in\partial\!A}g(y)\mathbb{P}_x(X_{\tau_{\partial\!A}}=y)} \) is the mean of \( {g} \) for the law \( {\mu_x} \) on \( {\partial\!A} \), called the harmonic measure, defined for any \( {y\in\partial\!A} \) by

\[ \mu_x(y):=\mathbb{P}_x(X_{\tau_{\partial\!A}}=y). \]
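The probabilistic formula of Theorem 1 lends itself directly to Monte Carlo approximation, which is the principle behind the picture at the top of this post. Here is a minimal Julia sketch in dimension \( {d=2} \) (the names dirichlet_mc and nsim and the example boundary data are ours): run many independent walks started at \( {x} \) until they exit \( {A} \), and average \( {g} \) at the exit points.

# Minimal sketch: Monte Carlo estimate of f(x) = E_x[g(X_τ)] for the Dirichlet
# problem on the square A = {1,...,L-1}^2 with boundary data g on ∂A.
function dirichlet_mc(g::Function, x::Vector{Int}, L::Int; nsim::Int = 10_000)
    acc = 0.0
    for _ in 1:nsim
        X = copy(x)
        while all(xi -> 0 < xi < L, X)   # walk until the chain exits A
            i = rand(1:2)
            X[i] += rand(Bool) ? 1 : -1
        end
        acc += g(X)                      # X now lies on the exterior boundary ∂A
    end
    return acc / nsim                    # empirical mean ≈ E_x[g(X_τ)]
end

g(y) = y[2] == 0 ? 1.0 : 0.0             # example: boundary data 1 on the bottom side, 0 elsewhere
f_est = dirichlet_mc(g, [5, 5], 10)      # estimate of f at the center of a 10 × 10 box

By the law of large numbers, the empirical mean converges to \( {f(x)} \) as the number of simulated walks grows.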

Proof: The property \( {\mathbb{P}_x(\tau_{\partial\!A}<\infty)=1} \) follows from the Central Limit Theorem, or from a conditioning argument which gives a geometric upper bound on the tail of \( {\tau_{\partial\!A}} \).

Let us check that the proposed function \( {f} \) is a solution. For any \( {x\in\partial\!A} \), we have \( {\tau_{\partial\!A}=0} \) on \( {\{X_0=x\}} \) and thus \( {f=g} \) on \( {\partial\!A} \). Let us show now that \( {\Delta f=0} \) on \( {A} \). We first reduce the problem by linearity to the case \( {g=\mathbf{1}_{\{z\}}} \) with \( {z\in\partial\!A} \). Next, we write, for any \( {y\in\bar{A}} \),

\[ \begin{array}{rcl} f(y) &=&\mathbb{P}_y(X_{\tau_{\partial\!A}}=z)\\ &=&\sum_{n=0}^\infty\mathbb{P}_y(X_n=z,\tau_{\partial\!A}=n)\\ &=&\mathbf{1}_{y=z}+\mathbf{1}_{y\in A}\sum_{n=1}^\infty\sum_{x_1,\ldots,x_{n-1}\in A}P(y,x_1)P(x_1,x_2)\cdots P(x_{n-1},z). \end{array} \]

On the other hand, since \( {f=0} \) on \( {\partial\!A\setminus\{z\}} \) and since \( {\Delta} \) is local, we have, for any \( {x\in A} \),

\[ (P f)(x) =\sum_{y\in\mathbb{Z}^d}P(x,y)f(y) =P(x,z)f(z)+\sum_{y\in A}P(x,y)f(y). \]

As a consequence, for any \( {x\in A} \),

\[ \begin{array}{rcl} (P f)(x) &=&P(x,z)f(z)+\sum_{n=1}^\infty\sum_{y,x_1,\ldots,x_{n-1}\in A} P(x,y)P(y,x_1)\cdots P(x_{n-1},z)\\ &=&P(x,z)f(z)+(f(x)-(\mathbf{1}_{x=z}+P(x,z))), \end{array} \]

which is equal to \( {f(x)} \) since \( {\mathbf{1}_{x=z}=0} \) and \( {f(z)=1} \). Hence \( {Pf=f} \) on \( {A} \), in other words \( {\Delta f=0} \) on \( {A} \).

To establish the uniqueness of the solution, we first reduce the problem by linearity to showing that \( {f=0} \) is the unique solution when \( {g=0} \). Next, if \( {f:\bar{A}\rightarrow\mathbb{R}} \) is harmonic on \( {A} \), the interpretation of \( {\Delta f(x)} \) as a difference with the mean over the nearest neighbors shows that both the minimum and the maximum of \( {f} \) on \( {\bar{A}} \) are necessarily achieved (at least) on the boundary \( {\partial\!A} \). But since \( {f} \) vanishes on \( {\partial\!A} \), it follows that \( {f} \) vanishes on \( {\bar{A}} \). ☐

Dirichlet problem and Green function. Here is a generalization of Theorem 1, which is recovered when \( {h=0} \).

Theorem 2 (Dirichlet problem and Green function) If \( {\varnothing\neq A\subset\mathbb{Z}^d} \) is finite then for any \( {g:\partial\!A\rightarrow\mathbb{R}} \) and \( {h:A\rightarrow\mathbb{R}} \), the function \( {f:\bar{A}\rightarrow\mathbb{R}} \) defined for any \( {x\in\bar{A}} \) by

\[ f(x)=\mathbb{E}_x\left(g(X_{\tau_{\partial\!A}})+\sum_{n=0}^{\tau_{\partial\!A}-1}h(X_n)\right) \]

is the unique solution of

\[ \left\{ \begin{array}{rl} f=g&\mbox{on } \partial\!A,\\ \Delta f=-h&\mbox{on } A. \end{array} \right. \]

When \( {d=1} \) and \( {h=1} \), we recover a function which appears in the analysis of the gambler's ruin problem.

For any \( {x\in A} \) we have

\[ f(x) =\sum_{y\in\partial\!A}g(y)\mathbb{P}_x(X_{\tau_{\partial\!A}}=y)+\sum_{y\in A}h(y)G_A(x,y) \]

where \( {G_A(x,y)} \) is the mean number of visits to \( {y} \), starting from \( {x} \), before escaping from \( {A} \):

\[ G_A(x,y):=\mathbb{E}_x\left\{\sum_{n=0}^{\tau_{\partial\!A}-1}\mathbf{1}_{X_n=y}\right\} =\sum_{n=0}^\infty\mathbb{P}_x(X_n=y,n<\tau_{\partial\!A}). \]

We say that \( {G_A} \) is the Green function of the symmetric nearest neighbors random walk on \( {A} \) killed at the boundary \( {\partial\!A} \). It is the inverse of the restriction \( {-\Delta_A} \) of \( {-\Delta} \) to functions on \( {\bar{A}} \) vanishing on \( {\partial\!A} \):

\[ G_A=-\Delta_A^{-1}. \]

Indeed, when \( {g=0} \) and \( {h=\mathbf{1}_{\{y\}}} \) we get \( {f(x)=G_A(x,y)} \) and thus \( {\Delta_AG_A=-I_A} \).
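In small cases the identity \( {G_A=-\Delta_A^{-1}} \) can be checked numerically. Here is a minimal Julia sketch in dimension \( {d=1} \) (again with our own naming): we build the matrix of \( {\Delta_A} \) on \( {A=\{1,\ldots,L-1\}} \), invert it, and compare the row sums of \( {G_A} \) with \( {x\mapsto x(L-x)} \), the solution of \( {\Delta f=-1} \) on \( {A} \) vanishing on \( {\partial\!A} \), which is the expected exit time appearing in the gambler's ruin problem.

using LinearAlgebra

# Minimal sketch (d = 1): matrix of Δ_A on A = {1,...,L-1} with Dirichlet
# boundary conditions, and its Green function G_A = -Δ_A^{-1}.
L = 10
A = 1:L-1
Δ = [abs(x - y) == 1 ? 0.5 : (x == y ? -1.0 : 0.0) for x in A, y in A]
G = -inv(Δ)                              # Green function of the walk killed on ∂A

# Check Theorem 2 with g = 0 and h = 1: f(x) = Σ_y G_A(x,y) should equal x(L-x),
# the expected exit time of the gambler's ruin problem.
f = vec(sum(G, dims = 2))
println(f ≈ [x * (L - x) for x in A])    # prints true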

Proof: Thanks to Theorem 1 it suffices by linearity to check that \( {f(x)=\mathbf{1}_{x\in A}G_A(x,z)} \) is a solution when \( {g=0} \) and \( {h=\mathbf{1}_{\{z\}}} \) with \( {z\in A} \). Now for any \( {x\in A} \),

\[ \begin{array}{rcl} f(x) &=&\mathbf{1}_{\{x=z\}}+\sum_{n=1}^\infty\mathbb{P}_x(X_n=z,n<\tau_{\partial\!A})\\ &=&\mathbf{1}_{\{x=z\}}+\sum_{y:\left|x-y\right|_1=1}\sum_{n=1}^\infty\mathbb{P}(X_n=z,n<\tau_{\partial\!A}\,|\,X_1=y)P(x,y)\\ &=&\mathbf{1}_{\{x=z\}}+\sum_{y:\left|x-y\right|_1=1}f(y)P(x,y) \end{array} \]

thanks to the Markov property. We have \( {f=h+Pf} \) on \( {A} \), in other words \( {\Delta f=-h} \) on \( {A} \). ☐

Beyond the symmetric nearest neighbors random walk. The proofs of Theorem 1 and Theorem 2 remain valid for asymmetric nearest neighbors random walks on \( {\mathbb{Z}^d} \), provided that we replace the generator \( {\Delta} \) of the symmetric nearest neighbors random walk by the generator \( {L:=P-I} \), which is still a local operator: \( {\left|x-y\right|_1>1\Rightarrow L(x,y)=P(x,y)=0} \). One may even go beyond this framework by adapting the notion of boundary: \( {\{y\not\in A:\exists x\in A, L(x,y)>0\}} \).

Gaussian free field. The Gaussian free field is a model of Gaussian random interface whose covariance matrix is the Green function of the discrete Laplace operator. More precisely, let \( {\varnothing\neq A\subset\mathbb{Z}^d} \) be a finite set. An interface is a height function \( {f:\bar{A}\rightarrow\mathbb{R}} \) which associates to each site \( {x\in\bar{A}} \) a height \( {f(x)} \), also called a spin in the context of Statistical Physics. For simplicity, we impose a zero boundary condition: \( {f=0} \) on the boundary \( {\partial\!A} \) of \( {A} \).

Let \( {\mathcal{F}_A} \) be the set of interfaces \( {f} \) on \( {\bar{A}} \) vanishing at the boundary \( {\partial\!A} \), which can be identified with \( {\mathbb{R}^A} \). The energy \( {H_A(f)} \) of the interface \( {f\in\mathcal{F}_A} \) is defined by

\[ H_A(f)=\frac{1}{4d}\sum_{\substack{\{x,y\}\subset\bar{A}\\\left|x-y\right|_1=1}}(f(x)-f(y))^2, \]

where \( {f=0} \) on \( {\partial\!A} \). The flatter the interface, the smaller the energy \( {H_A(f)} \). Denoting \( {\left<u,v\right>_A:=\sum_{x\in A}u(x)v(x)} \), we get, for any \( {f\in\mathcal{F}_A} \),

\[ H_A(f) =\frac{1}{4d}\sum_{x\in A}\sum_{\substack{y\in\bar{A}\\\left|x-y\right|_1=1}}(f(x)-f(y))f(x) =\frac{1}{2}\left<-\Delta f,f\right>_A. \]

Since \( {H_A(f)\geq0} \), with \( {H_A(f)=0} \) only if \( {f=0} \), the quadratic form \( {H_A} \) is positive definite and we can define the Gaussian law \( {Q_A} \) on \( {\mathcal{F}_A} \) which favors low energies:

\[ Q_A(df)=\frac{1}{Z_A}e^{-H_A(f)}\,df \quad\mbox{where}\quad Z_A:=\int_{\mathcal{F}_A}\!e^{-H_A(f)}\,df. \]

This Gaussian law, called the Gaussian free field, is characterized by its mean \( {m_A:A\rightarrow\mathbb{R}} \) and its covariance matrix \( {C_A:A\times A\rightarrow\mathbb{R}} \), given for any \( {x,y\in A} \) by

\[ m_A(x):=\int\!f_x\,Q_A(df)=0 \]

and

\[ C_A(x,y):=\int\!f_xf_y\,Q_A(df)-m_A(x)m_A(y)=-\Delta_A^{-1}(x,y)=G_A(x,y), \]

where \( {f_x} \) denotes the coordinate map \( {f_x:f\in\mathcal{F}_A\mapsto f(x)\in\mathbb{R}} \).

Simulation of the Gaussian free field. The simulation of the Gaussian free field can be done provided that we know how to compute a square root of \( {G_A} \) in the sense of quadratic forms, using for instance a Cholesky factorization or diagonalization. Let us consider the square case:

\[ A=L(0,1)^d\cap\mathbb{Z}^d=\{1,\ldots,L-1\}^d. \]

This gives \( {\bar{A}=\{0,1,\ldots,L\}^d} \). In this case, one can compute the eigenvectors and eigenvalues of \( {\Delta_A} \) and deduce those of \( {G_A=-\Delta_A^{-1}} \). In fact, the continuous Laplace operator \( {\Delta} \) on \( {[0,1]^d\subset\mathbb{R}^d} \), with Dirichlet boundary conditions, when defined on the Sobolev space \( {\mathrm{H}_0^2([0,1]^d)} \), is a symmetric operator with eigenvectors \( {\{e_n:n\in\{1,2,\ldots\}^d\}} \) and eigenvalues \( {\{\lambda_n:n\in\{1,2,\ldots\}^d\}} \) given by

\[ e_n(t):=2^{d/2}\prod_{i=1}^d\sin(\pi n_i t_i), \quad\mbox{and}\quad \lambda_n:=-\pi^2\left|n\right|_2^2 =-\pi^2(n_1^2+\cdots+n_d^2). \]

Similarly, the discrete Laplace operator \( {\Delta_A} \) with Dirichlet boundary conditions on this \( {A} \) has eigenvectors \( {\{e^L_n:n\in A\}} \) and eigenvalues \( {\{\lambda_n^L:n\in A\}} \) given for any \( {k\in A} \) by

\[ e^L_n(k):=\frac{1}{L^{d/2}}\,e_n\left(\frac{k}{L}\right) \quad\mbox{and}\quad \lambda_n^L:=-\frac{2}{d}\sum_{i=1}^d \sin^2\left(\frac{\pi n_i}{2L}\right). \]

The vectors \( {e_n^L} \) vanish on the boundary \( {\partial\!A} \) and form an orthonormal basis of \( {\mathbb{R}^A} \). It follows that for any \( {x,y\in A} \),

\[ \Delta_A(x,y)=\sum_{n\in A}\lambda_n^L\left<e^L_n,f_x\right>\left<e^L_n,f_y\right> =\sum_{n\in A}\lambda_n^L e^L_n(x)e^L_n(y) \]

and

\[ G_A(x,y)=-\sum_{n\in A}\left(\lambda_n^L\right)^{-1} e^L_n(x)e^L_n(y). \]

A possible matrix square root \( {\sqrt{G_A}} \) of \( {G_A} \) is given for any \( {x,y\in A} \) by

\[ \sqrt{G_A}(x,y)=\sum_{n\in A}\left(-\lambda_n^L\right)^{-1/2} e^L_n(x)e^L_n(y). \]

Now if \( {Z={(Z_y)}_{y\in A}} \) are independent and identically distributed Gaussian random variables with zero mean and unit variance, seen as a random vector of \( {\mathbb{R}^A} \), then the random vector

\[ \sqrt{G_A}Z = \left(\sum_{y\in A}\sqrt{G_A}(x,y)Z_y\right)_{x\in A} =\left(\sum_{y\in A}\sum_{n\in A}\left(-\lambda_n^L\right)^{-1/2}e_n^L(x)e_n^L(y)Z_y\right)_{x\in A} \]

is distributed according to the Gaussian free field \( {Q_A} \).
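As a concrete illustration, here is a minimal Julia sketch sampling the two-dimensional Gaussian free field along these lines (the name gff2d is ours). By orthonormality of the \( {e_n^L} \), the variables \( {\sum_{y\in A}e_n^L(y)Z_y} \) are again independent standard Gaussians, so one may equivalently draw one standard Gaussian per mode \( {n} \) and form \( {\sum_{n\in A}(-\lambda_n^L)^{-1/2}Z_n\,e_n^L(x)} \), which is what the sketch does.

# Minimal sketch: sampling the Gaussian free field Q_A on A = {1,...,L-1}^2
# from the spectral decomposition above (d = 2, orthonormal eigenvectors e_n^L).
function gff2d(L::Int)
    d = 2
    ks = 1:L-1
    e(n, k) = (2 / L) * sin(π * n[1] * k[1] / L) * sin(π * n[2] * k[2] / L)   # e_n^L(k)
    λ(n) = -(2 / d) * (sin(π * n[1] / (2L))^2 + sin(π * n[2] / (2L))^2)       # λ_n^L
    Z = randn(L - 1, L - 1)                    # i.i.d. standard Gaussians, one per mode n
    F = zeros(L - 1, L - 1)
    for n1 in ks, n2 in ks
        c = Z[n1, n2] / sqrt(-λ((n1, n2)))     # Gaussian coefficient on the mode e_n^L
        for k1 in ks, k2 in ks
            F[k1, k2] += c * e((n1, n2), (k1, k2))
        end
    end
    return F        # heights on A; the zero boundary condition on ∂A is implicit
end

field = gff2d(32)   # a sample of the 2D GFF on a 31 × 31 grid

For larger grids, one would rather use a fast sine transform or a sparse Cholesky factorization of \( {-\Delta_A} \), as mentioned above.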

Continuous analogue. The scaling limit \( {\varepsilon\mathbb{Z}^d\rightarrow\mathbb{R}^d} \) leads to natural continuous objects (in time and space): the simple random walk becomes the standard Brownian Motion via the Central Limit Theorem, the discrete Laplace operator becomes the Laplace second order differential operator via a Taylor formula, while the discrete Gaussian free field becomes the continuous Gaussian free field (its covariance is the Green function of the Laplace operator). The continuous objects are less elementary since they require much more analytic and probabilistic machinery, but in return they provide differential calculus, a tool which is problematic in discrete settings, due in particular to the lack of a chain rule. The Gaussian free field (GFF) is an important Gaussian object which appears, like most Gaussian objects, as a limiting object in many models of Statistical Physics.


Note. The probabilistic approach to the Dirichlet problem goes back at least to Shizuo Kakutani (1911 – 2004), who studied the continuous version, with Brownian Motion, around 1944. This post is the English translation of an excerpt from a forthcoming book on stochastic models, written in French together with my old friend Florent Malrieu.

1D GFF simulated with Julia

2D GFF simulated with Julia

Johann Peter Gustav Lejeune Dirichlet (1805 – 1859), who probably never thought about random walks and Gaussian fields


À bicyclette…

September 13th, 2014
Source: Wikipédia

The French barely lazier than the Greeks? Though vague, this chart, taken from Wikipédia, has the merit of highlighting cultural and topographic disparities.

Vélotaf, from vélo (bicycle) and taf (informal for work): using the bicycle as a means of transport to get to one's workplace.

Since this summer I have undertaken to ride my bicycle to work every day, so as to keep up with the times. A lovely ride, from the center of Vincennes to porte Dauphine. One gets used to it quickly! Cycling is clean, quiet, pleasant, and good for your health, quite the opposite of most other modes of transport. Although much progress could still be made in urban infrastructure, it has never been so easy and so safe to ride a bicycle in Paris.

Vincennes-Dauphine express route (about 13.5 km). The fastest Vincennes-Dauphine route is fairly straight and follows métro line 1, except between Nation and Bastille, and after Étoile: avenue de Paris, porte and cours de Vincennes, place de la Nation, rue du Faubourg Saint-Antoine, place de la Bastille, rue Saint-Antoine, rue de Rivoli, place de la Concorde, avenue Gabriel (palais de l'Élysée), rue de Ponthieu, rue de Berri, avenue des Champs-Élysées, place Charles-de-Gaulle – Étoile, avenue Foch. Rather fast-rolling and protected, this route is pleasant and picturesque outside rush hour. The least easy portions are the end of rue du Faubourg Saint-Antoine, sometimes congested, and the end of the climb up the Champs-Élysées, steep and cobbled. The route features two notable descents: the beginning of rue du Faubourg Saint-Antoine, and avenue Foch (for the arrival!). It is also possible to avoid the Champs-Élysées and place de l'Étoile by turning off towards the Seine, but beware of the Chaillot hill! Alternatively, one can start by crossing the bois de Vincennes instead and reach Bastille via Daumesnil (a small climb), which gives a route of about 15 km.

Vincennes – Dauphine via the bois de Vincennes, then the east-west axis

Alternative Vincennes-Dauphine route (about 19 km). A cycle path links the center of Vincennes to the bois de Vincennes, then, at Porte Dorée, to the cycle path of the boulevards des Maréchaux (like the tramway). The latter goes all the way around Paris and passes in particular through Dauphine. The route is on the whole rather pleasant and fast-rolling. Less urban and picturesque than the previous one, it features two good climbs and a very nice descent. The route is reversible, but the descent then becomes a climb.

Dauphine-Vincennes

Dauphine-Vincennes express route (about 13.5 km). The route goes along rue de Longchamp, place de Mexico (nice view of the Eiffel Tower from the top of the hill), then down towards place d'Iéna, avenue du Président Wilson (descent!), place de l'Alma, cours Albert premier – cours de la Reine (nice cycle path along the Seine, a delight in the evening), quai des Tuileries, quai François Mitterrand, quai de la Mégisserie, quai de l'Hôtel de ville – quai des Célestins (cycle path on the other side), boulevard Henri IV, place de la Bastille, rue de Lyon, avenue Daumesnil, boulevard Diderot, place de la Nation, cours de Vincennes, porte de Vincennes, avenue de Paris, avenue de la République, avenue Aubert. This beautiful route is an opportunity to admire the evening light on the Seine, the Invalides, the musée d'Orsay, etc.

Alternative Dauphine-Vincennes route (about 15 km). Same route as the previous one, but it follows avenue Daumesnil up to place Daumesnil (it climbs at the end!), then to porte Dorée (nice descent!), then the nice cycle path of the bois de Vincennes.

Topography. Paris has a few hills around its Seine: on the right bank Montmartre (131 m), Belleville (128.5 m), Ménilmontant (108 m), Buttes-Chaumont (103 m), Passy (71 m), Charonne (69 m), Chaillot (67 m), while on the left bank Montsouris (78 m), Montparnasse (66 m), Butte-aux-Cailles (63 m), Montagne Sainte-Geneviève (61 m). Chaillot and Passy are the ones to avoid near Dauphine. It is worth remembering that the steepest slopes may lie on the flanks of the lowest hills (maths…)!

Topography of a part of Paris near Dauphine.

The bike. Choose a bicycle suited to the city: an urban hybrid (VTC) to handle curbs, a road bike for speed, but perhaps not a mountain bike (avoid motorbikes without an engine). Mudguards are indispensable for the rain. A small luggage rack lets you carry a bag that no longer sits on your back. Lights, to be seen when evening comes. An internally geared hub lets you shift at a standstill and avoids derailments, maintenance, and grime! Why not disc brakes for efficient braking, or even hydraulic disc brakes for the bigger spenders. Suspension is superfluous for urban use.

Sweat. Sweat mostly comes in the twenty minutes following the effort, after leaving the bike. Some can take a shower on arrival and do not deprive themselves of it. It is useful to carry a change of clothes in the bag. Here are a few small tips:

  • Anticipate as much as possible to minimize useless efforts
  • Do not try to catch up with whoever overtakes you!
  • Do not carry the backpack on your back, put it on the luggage rack!
  • Pedal on the descents to save time!
  • Never force it, play with the gears (a delight with the Shimano Alfine 8 or 11)
  • Wear suitable clothing, or even "technical" clothing (especially in winter)
  • Possibly opt for an electric-assist bicycle (VAE)!

Safety.

  • Wear a helmet (it can save your life in case of an encounter with a curb)
  • Avoid narrow streets, prefer protected main roads
  • Make yourself visible (vest) and predictable (no hesitation)
  • Beware of opening car doors and forgotten turn signals
  • Keep both hands on the handlebars, ready to brake, and check behind before turning
  • Never stay in the blind spot of vehicles, especially buses and trucks
  • Anticipate: traffic lights, buses, scooters, cyclists, pedestrians, slopes, tourists, pigeons, chickens, …

Étoile. Follow the blue line firmly and do not be afraid: there is no lack of room or visibility.

Cobblestones. Every rider of Paris-Roubaix knows it: do not linger on the cobblestones!

Aches. Cycling can relieve back pain. However, a badly adjusted bicycle can hurt. The saddle, in particular, must be neither too high nor too low. Alternating the foot you put down when stopped avoids straining the same opposite knee when starting off.

Speed. In the city, one quickly reaches an average speed of 15 km/h while respecting the highway code ("zen" mode), which already makes the bicycle attractive. With a little less respect, one can exceed an average of 20 km/h ("sport" mode) and taste the art of flowing through the lights when the situation allows. The number of traffic lights matters. Example: Monday, September 15th, 2014, Vincennes-Dauphine express route in moderate sport mode, departure at 7:30 am, good traffic conditions, 40 minutes, that is an average of about 20 km/h.

Theft. Theft is no joking matter. Even when locking the bicycle solidly twice, by both wheels and the frame, one can lose the pedals, or even the saddle or the handlebars, oh yes.

Metrology. A speedometer is superfluous. The Google My Tracks (Mes Parcours) Android application is very practical. The Géovélo website is an alternative to Google Maps and has an Android application. One can also record rides on Bikemap rather than with Google (pointed out by Amic; it only works with the iOS application for now).

Equipment. In the backpack placed on the luggage rack: a bungee cord to fasten the backpack, a flat 15 mm wrench for the pedals, Allen keys (hex socket screws) for adjustments, small self-contained LED lights rechargeable via USB, a trouser clip, a puncture kit including a spare inner tube, tire levers, sandpaper, self-adhesive patches, and a small CO2 cartridge (rather than a pump), and raccoons.

Winter. Coming soon!

Categories: Uncategorized

Probability and arXiv ubiquity in 2014 Fields medals

August 13th, 2014
Fields Medal head and tail (fair coin)

Nowadays, most young mathematicians use arXiv to communicate their work. Moreover, in contrast with older generations, most of them are familiar with Probability Theory. Here is, for instance, a short proof of the ubiquity of arXiv and of Probability in the 2014 Fields medals:

Note. Of course, arXiv was already at the center of attention in the affair of Grigori Perelman's declined Fields medal in 2006, while Probability was already more or less present in the works of many Fields medalists, including for instance Grigory Margulis (1978), Jean Bourgain (1994), Pierre-Louis Lions (1994), Timothy Gowers (1998), Andrei Okounkov (2006), Wendelin Werner (2006), Terence Tao (2006), Stanislav Smirnov (2010), and Cédric Villani (2010). Probability is naturally connected to analysis and to combinatorics, and can be useful on many structures. Even Laurent Schwartz (1915 – 2002) — one of the first Fields medalists (1950) and, by the way, the first French Fields medalist — worked on Probability at the end of his scientific life.


Mathematical citation quotient of statistics journals

August 13th, 2014
Allegory of the vanity of earthly things

As suggested by Jian-Feng Yao, here is the Mathematical Citation Quotient (MCQ) of statistics journals, as we already did for probability journals in a previous post. We refer to that post for a presentation of the MCQ. Recall that the MCQ is computed just like the impact factor, except that

  • the citing window has a width of 5 years instead of 1 year (well adapted to mathematics);
  • the citing population is formed only by journals indexed in the MR database (not well adapted to applied statistics).

MCQ of Statistics journals

Categories: Society, Statistics