
Unexpected phenomena for equilibrium measures

Marcel Riesz (1886 – 1969)

This post is about Riesz energy problems, a subject that I like to explore with Edward B. Saff (Vanderbilt University, USA) and Robert S. Womersley (UNSW Sydney, Australia).

Riesz kernel. For $-2<s<d$, the Riesz $s$-kernel in $\mathbb{R}^d$  is $$
K_s:=\begin{cases}
\displaystyle\frac{1}{s\left|\cdot\right|^{s}} & \text{if } s\neq0\\[1em]
\displaystyle-\log\left|\cdot\right| & \text{if } s=0
\end{cases}.
$$ We recover the Coulomb or Newton kernel when $s=d-2$. This definition of the $s$-kernel allows one to pass from $K_s$ to $K_0$ by removing the $1/s$ singularity at $s=0$, namely, for $x\neq0$, $$-\log|x|=\lim_{\underset{s\neq0}{s\to0}}\frac{|x|^{-s}-1}{s-0}=\lim_{\underset{s\neq0}{s\to0}}\Bigl(\frac{1}{s|x|^s}-\frac{1}{s}\Bigr).$$
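
As a quick sanity check, here is a small numerical illustration of this limit (my own sketch, not part of the original post); the value of $x$ is an arbitrary choice.

```python
# Numerical check that (|x|^{-s} - 1)/s -> -log|x| as s -> 0, which motivates
# the definition of the kernel at s = 0.
import numpy as np

x = 2.5  # arbitrary point x != 0
for s in [1e-1, 1e-2, 1e-3, 1e-4]:
    difference_quotient = (abs(x) ** (-s) - 1.0) / s
    print(f"s = {s:g}: (|x|^-s - 1)/s = {difference_quotient:.6f}, -log|x| = {-np.log(abs(x)):.6f}")
```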

Riesz energy. For $-2<s<d$, the Riesz energy of a probability measure $\mu$ on $\mathbb{R}^d$ is $$
\mathrm{I}_s(\mu):=\iint K_s(x-y)\mathrm{d}\mu(x)\mathrm{d}\mu(y)
=\int(K_s*\mu)\mathrm{d}\mu.
$$ The Riesz energy is strictly convex and lower semi-continuous for the weak convergence of probability measures with respect to continuous and bounded test functions.

Equilibrium measure. The equilibrium measure on a ball $B_R:=\{x\in\mathbb{R}^d:|x|\leq R\}$ satisfies
$$
\mathrm{I}_s(\mu_{\mathrm{eq}})
=\min_{\substack{\mu\\\mathrm{supp}(\mu)\subset B_R}}\mathrm{I}_s(\mu).
$$

Marcel Riesz's original problem (1938). Equilibrium measure on $B_R$ when $d\geq2$:
$$
\mu_{\mathrm{eq}}
=
\begin{cases}
\sigma_R & \text{if $-2<s\leq d-2$}\\[1em]
\displaystyle\frac{\Gamma(1+\frac{s}{2})}{R^s\pi^{\frac{d}{2}}\Gamma(1+\frac{s-d}{2})}
\frac{\mathbf{1}_{B_R}}{(R^2-|x|^2)^{\frac{d-s}{2}}}\mathrm{d}x &
\text{if $0\leq d-2<s<d$}
\end{cases}
$$ where $\sigma_R$ is the uniform distribution on the sphere $\{x\in\mathbb{R}^d:|x|=R\}$ of radius $R$.
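
As a sanity check of the absolutely continuous case, the following short Python sketch (mine, not from the original post) verifies numerically that the stated density integrates to $1$ over $B_R$; the values $d=3$, $s=2.5$, $R=2$ are arbitrary choices with $d-2<s<d$.

```python
# Check that the density of mu_eq in the case d-2 < s < d has total mass 1 on B_R.
import numpy as np
from math import gamma, pi
from scipy.integrate import quad

d, s, R = 3, 2.5, 2.0
const = gamma(1 + s / 2) / (R ** s * pi ** (d / 2) * gamma(1 + (s - d) / 2))
surface = 2 * pi ** (d / 2) / gamma(d / 2)          # |S^{d-1}|

# Radial integral in polar coordinates; the substitution r = R*sin(u) tames the
# integrable singularity of (R^2 - r^2)^{-(d-s)/2} at r = R.
def radial_integrand(u):
    r = R * np.sin(u)
    return const * (R ** 2 - r ** 2) ** (-(d - s) / 2) * r ** (d - 1) * R * np.cos(u)

mass, _ = quad(radial_integrand, 0.0, pi / 2)
print("total mass of mu_eq on B_R:", surface * mass)   # should be close to 1
```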

The proof relies on the following integral formula for the variational characterization: $$
\int_{|y|\leq R}
\frac{|x-y|^{-s}}{(R^2-|y|^2)^{\frac{d-s}{2}}}\mathrm{d} y
=\frac{\pi^{\frac{d}{2}+1}}{\Gamma(\frac{d}{2})\sin(\frac{\pi}{2}(d-s))},
\quad x\in B_R.
$$ The proof of this integral formula involves in turn a Kelvin transform and a reduction to the planar case. It can be found in detail in the Appendix of the book by Landkof (1972), and with even more details in our 2022 JMAA article. To our knowledge, a simple proof is still lacking!

The Riesz integral formula above reveals a threshold phenomenon: the support condenses on a sphere when $d-s$ passes the critical value $2$. Our main finding is that this Riesz problem admits a full-space extension in which we replace the ball support constraint with an external field. We show that a new threshold phenomenon occurs, related to the strength of the external field.

External field equilibrium problem. The energy with external field $V$ on $\mathbb{R}^d$ is defined by $$\mathrm{I}(\mu)
:=\iint\left[K_s(x-y)+V(x)+V(y)\right]\mathrm{d}\mu(x)\mathrm{d}\mu(y)$$
and the associated equilibrium measure satisfies $$\mathrm{I}(\mu_{\mathrm{eq}})=\min_{\mu}\mathrm{I}(\mu)$$ The Frostman or Euler-Lagrange variational characterization of $\mu_{\mathrm{eq}}$ reads $$K_s*\mu+V
\begin{cases}
=c& \text{quasi-everywhere on }\mathrm{supp}(\mu)\\
\geq c&\text{quasi-everywhere outside }\mathrm{supp}(\mu)
\end{cases}$$ Quasi-everywhere means except on a set that cannot carry a probability measure of finite energy. By taking $V=\infty\mathbf{1}_{B_R^c}$ we recover the Riesz problem on the ball mentioned previously.

Coulomb case : $s=d-2$. The kernel $K_{d-2}$ is a Laplace fundamental solution :
$$
\Delta K_{d-2}\overset{\mathcal{D}’}{=}-c_d\delta_0,\quad\text{with}\quad c_d=|\mathbb{S}^{d-1}|.
$$Also, restricted to the interior of $\mathrm{supp}(\mu_{\mathrm{eq}})$,
$$
\mu_{\mathrm{eq}}\overset{\mathcal{D}’}{=}\frac{\Delta V}{c_d}
$$In particular, if $V=\left|\cdot\right|^\alpha$, $\alpha>0$, then
$$
\mu_{\mathrm{eq}}
=\frac{\alpha(\alpha+d-2)}{c_d}\left|\cdot\right|^{\alpha-2}\mathbf{1}_{B_R}\mathrm{d}x
\quad\text{with}\quad R=\Bigl(\frac{1}{\alpha}\Bigr)^{\frac{1}{d-2+\alpha}}.$$ The proof relies crucially on the local nature of the Laplacian.
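
Here is a minimal numerical check (mine, not from the post) that this density, normalized by $c_d=|\mathbb{S}^{d-1}|$, is indeed a probability measure for the stated value of $R$; the values $d=4$ and $\alpha=3$ are arbitrary choices.

```python
# Check that the Coulomb equilibrium density alpha*(alpha+d-2)/c_d * |x|^{alpha-2} on B_R
# has total mass 1 when R = (1/alpha)^{1/(d-2+alpha)}.
from math import gamma, pi
from scipy.integrate import quad

d, alpha = 4, 3.0                         # arbitrary choices with d >= 3 and alpha > 0
c_d = 2 * pi ** (d / 2) / gamma(d / 2)    # surface area of the unit sphere S^{d-1}
R = (1.0 / alpha) ** (1.0 / (d - 2 + alpha))

# Integration over B_R in polar coordinates: density times the sphere area c_d * r^{d-1}.
mass, _ = quad(lambda r: alpha * (alpha + d - 2) / c_d * r ** (alpha - 2) * c_d * r ** (d - 1), 0.0, R)
print("total mass:", mass)                # should be 1 up to quadrature error
```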

At this point we observe that the formula $$\Delta K_u=-c_{d,u}K_{u+2},\quad c_{d,u}:=(d-2-u)(u+2),$$ suggests applying $\Delta$ iteratively to reach the case $s=d-2n$ for an arbitrary positive integer $n$.
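
This recursion can be checked symbolically with the radial form of the Laplacian; the sketch below is mine, not from the post.

```python
# Symbolic check of Delta K_u = -(d-2-u)(u+2) K_{u+2} away from the origin, using the
# radial Laplacian in R^d: f(r) -> f''(r) + (d-1)/r * f'(r), with K_u(r) = r^{-u}/u.
import sympy as sp

r, u, d = sp.symbols("r u d", positive=True)
K = lambda t: r ** (-t) / t
radial_laplacian = lambda f: sp.diff(f, r, 2) + (d - 1) / r * sp.diff(f, r)

difference = radial_laplacian(K(u)) + (d - 2 - u) * (u + 2) * K(u + 2)
print(sp.simplify(difference))   # prints 0
```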

Findings for the iterated Coulomb case $s=d-2n$, $n=1,2,3,\ldots$ Then, restricted to the interior of the support of $\mu_{\mathrm{eq}}$, in the sense of distributions,
$$
\mu_{\mathrm{eq}}
\overset{\mathcal{D}’}{=}
\frac{\Delta^{n}V}{c_dC_{d,n}},
\quad\text{where}\quad
C_{d,n}:=(-1)^{n-1}\prod_{k=0}^{n-2}c_{d,s+2k}.
$$ In particular: if $s=d-4$ and $V=\left|\cdot\right|^\alpha$, $\alpha\geq2$, then $C_{d,2}<0$ while $\Delta V=\alpha(\alpha+d-2)\left|\cdot\right|^{\alpha-2}\geq0$, and thus $\mu_{\mathrm{eq}}$ is necessarily singular! Actually the case $s=d-4$ can be analyzed completely, and this analysis reveals the singularity when $\alpha\geq2$ as well as a threshold condensation onto this singular support when $\alpha$ reaches the critical value $2$.

Findings when $s=d-4$. Suppose that $V=\gamma\left|\cdot\right|^\alpha$, $\gamma>0,  \alpha>0$.

  • Let $d\geq4$ and $s=d-4\geq0$.
    • If $\alpha\geq2$ then $\mu_{\mathrm{eq}}=\sigma_R$ (indeed it is singular!) where $$
      R=\Bigr(\frac{2}{(s+4)\alpha\gamma}\Bigr)^{\frac{1}{\alpha+s}}$$
    • If $0<\alpha<2$ then (mixture!) $$\mu_{\mathrm{eq}}=\beta fm_d+(1-\beta)\sigma_R$$ where $m_d$ is the Lebesgue measure on $\mathbb{R}^d$ and
      $$\beta=\frac{2-\alpha}{s+2},\
      f=\frac{\alpha+s}{R^{\alpha+s}|\mathbb{S}^{d-1}|}\left|\cdot\right|^{\alpha+s-d}\mathbf{1}_{B_R},\
      R=\Bigr(\frac{2}{(\alpha+s+2)\alpha\gamma}\Bigr)^{\frac{1}{\alpha+s}}$$
  • Let $d=3$ and $s=d-4=-1$ (non-singular kernel!).
    • If $0<\alpha<1$, then $\mu_{\mathrm{eq}}$ does not exist (blowup)
    • If $\alpha=1$ and $\gamma\geq1$, then $\mu_{\mathrm{eq}}=\delta_0$ (collapse).
    • If $\alpha>1$, then $\mu_{\mathrm{eq}}$ is as above (mixture).

In contrast, there is no threshold condensation phenomenon when $s=d-3$.

Findings when $s=d-3$. Suppose that $V=\gamma\left|\cdot\right|^\alpha$, $\gamma>0, \alpha>0$.

  • If $s=d-3$ and $\alpha=2$ then $$\mu_{\mathrm{eq}}
    =\frac{\Gamma(\frac{s+4}{2})}{\pi^{\frac{s+4}{2}}R^{s+2}}
    \frac{\mathbf{1}_{B_R}}{\sqrt{R^2-\left|\cdot\right|^2}}
    \mathrm{d}x$$ where $$R=\Bigr(\frac{\sqrt{\pi}}{4\gamma}\frac{\Gamma(\frac{s+4}{2})}{\Gamma(\frac{s+5}{2})}\Bigr)^{\frac{1}{s+2}}$$ (a numerical check of this normalization is given just after this list)
  • This is also $\mu_{\mathrm{eq}}$ for $s=d-1$ on $B_R$ with this $R$.
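
Here is the announced numerical check (mine, not from the post) that the displayed density is a probability measure on $B_R$; the values $d=4$ (so $s=1$) and $R=1.5$ are arbitrary choices.

```python
# Check that the density const / sqrt(R^2 - |x|^2) on B_R, with the constant given above,
# has total mass 1 in dimension d = s + 3.
import numpy as np
from math import gamma, pi
from scipy.integrate import quad

d, R = 4, 1.5
s = d - 3
const = gamma((s + 4) / 2) / (pi ** ((s + 4) / 2) * R ** (s + 2))
surface = 2 * pi ** (d / 2) / gamma(d / 2)   # |S^{d-1}|

# Polar coordinates; the substitution r = R*sin(u) removes the endpoint singularity at r = R.
def radial_integrand(u):
    r = R * np.sin(u)
    return const / np.sqrt(R ** 2 - r ** 2) * r ** (d - 1) * R * np.cos(u)

mass, _ = quad(radial_integrand, 0.0, pi / 2)
print("total mass:", surface * mass)         # should be close to 1
```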

Methods of proof.

  • Frostman or Euler-Lagrange variational characterization
  • Applying Laplacian on support of $\mu_{\mathrm{eq}}$
  • Rotational invariance and maximum principle
  • Dimensional reduction with Funk-Hecke formula
  • Orthogonal polynomial expansions
  • Integral formulas and special functions

Challenges.

  • Super-harmonic kernel and sub-harmonic external field
  • Non-locality of fractional Laplacian

Selected Open Problems.

  • Find a simple proof of the Riesz integral formula!
  • When $s=d-3$ with $\alpha\neq2$, we conjecture that the support of the equilibrium measure is a ball if $0<\alpha<2$ and a full-dimensional shell (annulus) if $\alpha>2$
  • When $s=d-6$, it could be that the support of the equilibrium measure is disconnected
  • Other norms in kernel and external field

Marcel Riesz (1886 – 1969) is the younger brother of Frigyes Riesz (1880 – 1956). I do not know whether Naoum Samoilovitch Landkof (1915 – 2004) ever met Marcel Riesz in person. Landkof was a student of Mikhaïl Alekseïevitch Lavrentiev (1900 – 1980), who gave his name to the Lavrentiev phenomenon in the calculus of variations. Landkof was an expert in potential theory. He advised Vladimir Alexandrovich Marchenko (1922 – ), famous notably for his findings on random operators and matrices with his student Leonid Pastur (1937 – ).

Further reading.

Naoum Samoilovitch Landkof (1915 – 2004)

Foundations of Modern Potential Theory


Boltzmann-Gibbs entropic variational principle

Nicolas Léonard Sadi Carnot (1796 – 1832), an Évariste Galois of Physics.

The aim of this short post is to explain why the maximum entropy principle could be better seen as a minimum relative entropy principle, in other words an entropic projection.

Relative entropy. Let $\lambda$ be a reference measure on some measurable space $E$. The relative entropy with respect to $\lambda$ is defined for every measure $\mu$ on $E$ with density $\mathrm{d}\mu/\mathrm{d}\lambda$ by $$\mathrm{H}(\mu\mid\lambda):=\int\frac{\mathrm{d}\mu}{\mathrm{d}\lambda}\log\frac{\mathrm{d}\mu}{\mathrm{d}\lambda}\mathrm{d}\lambda.$$ If the integral is not well defined, we could simply set $\mathrm{H}(\mu\mid\lambda):=+\infty$.

  • An important case is when $\lambda$ is a probability measure. In this case $\mathrm{H}$ becomes the Kullback-Leibler divergence, and the Jensen inequality for the strictly convex function $u\mapsto u\log(u)$ then shows that $\mathrm{H}(\mu\mid\lambda)\geq0$ with equality if and only if $\mu=\lambda$.
  • Another important case is when $\lambda$ is the Lebesgue measure on $\mathbb{R}^n$ or the counting measure on a discrete set; then $$-\mathrm{H}(\mu\mid\lambda)$$ is the Boltzmann-Shannon entropy of $\mu$. Beware that when $E=\mathbb{R}^n$, this entropy takes its values in the whole of $(-\infty,+\infty)$ since, for every scale factor $\sigma>0$, denoting by $\mu_\sigma$ the pushforward of $\mu$ by the dilation $x\mapsto\sigma x$, we have $$\mathrm{H}(\mu_\sigma\mid\lambda)=\mathrm{H}(\mu\mid\lambda)-n\log \sigma.$$

Boltzmann-Gibbs probability measures. Such a probability measure $\mu_{V,\beta}$ takes the form $$\mathrm{d}\mu_{V,\beta}:=\frac{\mathrm{e}^{-\beta V}}{Z_{V,\beta}}\mathrm{d}\lambda$$ where $V:E\to(-\infty,+\infty]$, $\beta\in[0,+\infty)$, and $$Z_{V,\beta}:=\int\mathrm{e}^{-\beta V}\mathrm{d}\lambda<\infty$$ is the normalizing factor. The larger $\beta$ is, the more $\mu_{V,\beta}$ puts its probability mass on the regions where $V$ is low. The corresponding asymptotic analysis, known as the Laplace method, states that as $\beta\to\infty$ the probability measure $\mu_{V,\beta}$ concentrates on the minimizers of $V$.

The mean of $V$, or $V$-moment, of $\mu_{V,\beta}$ reads
$$
\int V\mathrm{d}\mu_{V,\beta}
=-\frac{1}{\beta}\mathrm{H}(\mu_{V,\beta}\mid\lambda)-\frac{1}{\beta}\log Z_{V,\beta}.
$$
In thermodynamics $-\frac{1}{\beta}\log Z_{V,\beta}$ appears as a Helmholtz free energy since it is equal to $\int V\mathrm{d}\mu_{V,\beta}$ (the mean energy) minus $\frac{1}{\beta}$ (the temperature) times $-\mathrm{H}(\mu_{V,\beta}\mid\lambda)$ (the entropy).
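
Here is a minimal discrete illustration of this identity (my own sketch, not from the post), with $\lambda$ the counting measure on a finite set and a randomly generated potential $V$; the set size and the value of $\beta$ are arbitrary choices.

```python
# Gibbs measure on a finite set: check that int V dmu = -(1/beta) * (H(mu|lambda) + log Z)
# where lambda is the counting measure.
import numpy as np

rng = np.random.default_rng(1)
V = rng.uniform(0.0, 5.0, size=50)     # a potential on a 50-point set
beta = 2.0

weights = np.exp(-beta * V)
Z = weights.sum()                      # normalizing factor w.r.t. the counting measure
mu = weights / Z
H = np.sum(mu * np.log(mu))            # relative entropy with respect to the counting measure

print("V-moment of mu    :", np.sum(mu * V))
print("-(H + log Z)/beta :", -(H + np.log(Z)) / beta)   # the two printed values coincide
```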

When $\beta$ ranges from $-\infty$ to $\infty$, the $V$-moment of $\mu_{V,\beta}$ ranges from $\sup V$ down to $\inf V$, and $$\partial_\beta\int V\mathrm{d}\mu_{V,\beta}=\Bigr(\int V\mathrm{d}\mu_{V,\beta}\Bigr)^2-\int V^2\mathrm{d}\mu_{V,\beta}\leq0.$$ If $\lambda(E)<\infty$ then $\mu_{V,0}=\frac{1}{\lambda(E)}\lambda$ and its $V$-moment is $\frac{1}{\lambda(E)}\int V\mathrm{d}\lambda$.

Variational principle. Let $\beta\geq0$ be such that $Z_{V,\beta}<\infty$ and $c:=\int V\mathrm{d}\mu_{V,\beta}<\infty$. Then, among all the probability measures $\mu$ on $E$ with the same $V$-moment as $\mu_{V,\beta}$, the relative entropy $\mathrm{H}(\mu\mid\lambda)$ is minimized by the Boltzmann-Gibbs measure $\mu_{V,\beta}$. In other words, $$\min_{\int V\mathrm{d}\mu=c}\mathrm{H}(\mu\mid\lambda)=\mathrm{H}(\mu_{V,\beta}\mid\lambda).$$

Indeed we have $$\begin{align*}\mathrm{H}(\mu\mid\lambda)-\mathrm{H}(\mu_{V,\beta}\mid\lambda)&=\int\log\frac{\mathrm{d}\mu}{\mathrm{d}\lambda}\mathrm{d}\mu-\int\log\frac{\mathrm{d}\mu_{V,\beta}}{\mathrm{d}\lambda}\mathrm{d}\mu_{V,\beta}\\&=\int\log\frac{\mathrm{d}\mu}{\mathrm{d}\lambda}\mathrm{d}\mu+\int(\log(Z_{V,\beta})+\beta V)\mathrm{d}\mu_{V,\beta}\\&=\int\log\frac{\mathrm{d}\mu}{\mathrm{d}\lambda}\mathrm{d}\mu+\int(\log(Z_{V,\beta})+\beta V)\mathrm{d}\mu\\&=\int\log\frac{\mathrm{d}\mu}{\mathrm{d}\lambda}\mathrm{d}\mu-\int\log\frac{\mathrm{d}\mu_{V,\beta}}{\mathrm{d}\lambda}\mathrm{d}\mu\\&=\mathrm{H}(\mu\mid\mu_{V,\beta})\geq0\end{align*}$$ with equality if and only if $\mu=\mu_{V,\beta}$. The crucial point is that $\mu$ and $\mu_{V,\beta}$ give the same integral to every test function of the form $a+bV$, where $a,b$ are arbitrary real constants, by the moment assumption.
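
The variational principle can also be illustrated numerically on a finite set: the sketch below (mine, not from the post) perturbs the Boltzmann-Gibbs measure inside the constraint set $\{\mu:\int V\mathrm{d}\mu=c\}$ and checks that the relative entropy can only increase; the set size, $\beta$, and the perturbations are arbitrary choices.

```python
# Among probability vectors with the same V-moment, the Gibbs measure minimizes the
# relative entropy with respect to the counting measure.
import numpy as np

rng = np.random.default_rng(2)
V = rng.uniform(0.0, 5.0, size=50)
beta = 1.5

gibbs = np.exp(-beta * V)
gibbs /= gibbs.sum()
H = lambda p: np.sum(p * np.log(p))            # relative entropy w.r.t. the counting measure

for _ in range(5):
    # Random direction h with sum(h) = 0 and sum(h * V) = 0, so that gibbs + t*h keeps
    # both the total mass and the V-moment.
    h = rng.standard_normal(V.size)
    h -= h.mean()
    h -= (h @ V) / (V @ V - V.size * V.mean() ** 2) * (V - V.mean())
    t = 0.5 * (gibbs / np.abs(h)).min()        # small enough to keep the perturbation positive
    competitor = gibbs + t * h
    print(f"H(competitor) - H(gibbs) = {H(competitor) - H(gibbs):.3e}   (always >= 0)")
```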

  • When $\lambda$ is the Lebesgue measure on $\mathbb{R}^n$ or the counting measure on a discrete set, we recover the usual maximum Boltzmann-Shannon entropy principle $$\max_{\int V\mathrm{d}\mu=c}-\mathrm{H}(\mu\mid\lambda)=-\mathrm{H}(\mu_{V,\beta}\mid\lambda).$$ In particular, Gaussians maximize the Boltzmann-Shannon entropy under a variance constraint (take for $V$ a quadratic form), while uniform measures maximize the Boltzmann-Shannon entropy under a support constraint (take $V$ constant on a set of finite measure for $\lambda$, and infinite elsewhere). Maximum entropy is minimum relative entropy with respect to the Lebesgue or counting measure, a way to find, among the probability measures with a moment constraint, the closest to the Lebesgue or counting measure.
  • When $\lambda$ is a probability measure, then we recover the fact that the Boltzmann-Gibbs measures realize the projection or least Kullback-Leibler divergence of $\lambda$ on the set of probability measures with a given $V$-moment. This is the Csiszár $\mathrm{I}$-projection.
  • There are other interesting applications, for instance when $\lambda$ is a Poisson point process.

Note. The concept of maximum entropy was studied notably by

and by Edwin Thompson Jaynes (1922 – 1998) in relation with thermodynamics, statistical physics, statistical mechanics, information theory, and Bayesian statistics. The concept of I-projection or minimum relative entropy was studied notably by Imre Csiszár (1938 – ).

Related.


How to publish virtuously?

This short informational and decision-support post, intended for mathematicians, was prepared by and for the French national network of mathematics libraries (RNBM).

  1. Why try to publish virtuously since there is Sci-Hub? On the one hand Sci-Hub is illegal, and on the other hand Sci-Hub relies by construction on the subscriptions of academic institutions around the world. Sci-Hub frees the science of yesterday and today through a pirate form of pooling, which may have a good systemic effect in the long run. In the meantime, a good way to free one's own scientific production immediately, durably, and legally is to deposit it in ad hoc repositories such as arXiv, whose French mirror is integrated into HAL.
  2. Is it enough to systematically deposit on arXiv? Depositing on arXiv is always welcome for the open dissemination of science. But since nothing guarantees that the final version, the one that benefited from the editorial process of the journal, is on arXiv, readers will often prefer the version published by the journal when it is accessible. From this point of view, the ideal situation is that of open access journals that deposit on arXiv themselves, or that are built on top of arXiv, such as the overlay journals of www.episciences.org for instance. On the other hand, a number of journals offer open access to both authors and readers ("diamond" open access) without going through arXiv.
  3. Why not do everything on ResearchGate? ResearchGate is a semi-closed platform of the same nature as Facebook, which is not run by academic institutions, and which is bound to eventually monetize its access, its services, and its database. It does not really help open science, quite the contrary.
  4. Which journals are the most virtuous? Journals that are open access for both authors and readers, supported by an academic institution, and backed by arXiv are in general among the most virtuous, even though some of them rely on the volunteer work of researchers for editorial management and formatting. At the other end, subscription journals should not all be lumped together: some charge reasonable prices, whether they are for profit or not. In general, running the editorial process of a journal has a cost, and the differences lie in the model and policy of funding, access, and dissemination. Concretely, a junior researcher who wishes to publish an article can draw up a list of suitable journals based on scientific criteria, and then sort this list by taking into account the position of each journal with respect to open science. A senior researcher can afford to aim directly for the journals that are most virtuous in terms of open science, at the expense of their scientific prestige, since this has less impact on their career. And all of them can deposit their final version on arXiv if the journal does not do it.
  5. Why is it not virtuous to pay the publisher to free the article upon publication? This is the APC system, for article processing charges. Since everything has a cost, the idea of having the author pay the publisher to freely distribute the article may seem appealing. But this author payment is only affordable for rich authors or members of rich institutions, and the articles published by the less rich will remain less widely distributed and, above all, accessible only by subscription, which in the end makes academic institutions pay twice. The subscribe-to-open (S2O) model, which is currently developing, is more virtuous from this point of view, since it makes institutions pay only once for the freeing of all articles.

Note. To answer a frequently asked question: the RNBM, as an entity of the CNRS institute for mathematical sciences, cannot openly advertise an illegal service such as Sci-Hub or libgen. However, given the massive use of Sci-Hub|libgen throughout the world and in particular in France, it is natural for the RNBM to acknowledge it and to explain its mechanisms and stakes. Every mathematician may wish to resort to a service like Sci-Hub|libgen, because it is efficient, because knowledge must be disseminated, and because this kind of anarchist subversion could eventually force the multinationals of commercial publishing to change their system.

Related reading.


Few bits of optimal transportation

Statue of Gaspard Monge (1746 – 1818), place Monge, Beaune, Côte d’Or, France.

This post is about some aspects of transportation of measure. It is mostly inspired by the lecture notes of an advanced master course prepared a few years ago in collaboration with my colleague Joseph Lehec at Université Paris-Dauphine – PSL. The objective is to reach the Caffarelli contraction theorem, one of my favourite theorems.

Pushforward or image measure. Let \( {T :\mathbb{R}^n \rightarrow \mathbb{R}^n} \) and \( {\mu} \) be a probability measure on \( {\mathbb{R}^n} \). The pushforward of \( {\mu} \) by \( {T} \) is the measure \( {\nu} \) given, for every Borel set \( {A\subset\mathbb{R}^n} \), by

\[ \nu ( A ) = \mu ( T^{-1} ( A ) ). \]

In other words \( {T(X)\sim\nu} \) when \( {X\sim\mu} \), and thus for every test function \( {h} \),

\[ \int_{\mathbb{R}^n} h \mathrm{d}\nu = \int_{\mathbb{R}^n} h \circ T \mathrm{d}\mu. \]

The Brenier theorem. It states that if \( {\mu} \) and \( {\nu} \) are two probability measures on \( {\mathbb{R}^n} \) with \( {\mu} \) absolutely continuous with respect to the Lebesgue measure, then there exists a unique map \( {T:\mathbb{R}^n\rightarrow\mathbb{R}^n} \) of the form \( {T=\nabla\phi} \) with \( {\phi} \) convex pushing forward \( {\mu} \) to \( {\nu} \).

The uniqueness of the map \( {T} \) must be understood almost everywhere.

The convex function \( {\phi} \) is obviously not unique but its gradient is unique.

When \( {n=1} \) then \( {T=G^{-1}\circ F} \) where \( {F=\mu((-\infty,\bullet])} \) and \( {G=\nu((-\infty,\bullet])} \) are the cumulative distribution functions of \( {\mu} \) and \( {\nu} \), and \( {G^{-1}} \) is the generalized inverse (quantile function) of \( {G} \). The Brenier theorem states that in arbitrary dimension, it is still possible to push forward using a multivariate analogue of the notion of non-decreasing function: the gradient of a convex function.
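
Here is a one-dimensional illustration in Python (a sketch of mine, not from the lecture notes), with the arbitrary choices \( {\mu=\mathrm{Exp}(1)} \) and \( {\nu=\mathcal{N}(0,1)} \): the map \( {T=G^{-1}\circ F} \) is built from the distribution functions and then checked on samples.

```python
# The increasing map T = G^{-1} o F pushes mu forward to nu in dimension one.
import numpy as np
from scipy import stats

mu, nu = stats.expon(), stats.norm()       # arbitrary source and target
T = lambda x: nu.ppf(mu.cdf(x))            # G^{-1} o F

rng = np.random.default_rng(3)
x = mu.rvs(size=10**5, random_state=rng)
y = T(x)                                   # should be approximately distributed as nu

# Compare a few quantiles of the transported sample with those of nu.
for q in [0.1, 0.25, 0.5, 0.75, 0.9]:
    print(f"q = {q}: empirical {np.quantile(y, q):+.3f}   exact {nu.ppf(q):+.3f}")
```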

Relation to Wasserstein-Kantorovich coupling distance. If \( {\mu} \) and \( {\nu} \) have finite second moment and if \( {T=\nabla\phi} \) is the Brenier map pushing forward \( {\mu} \) to \( {\nu} \) then

\[ W_2(\mu,\nu)^2 =\min_{\pi\in\Pi(\mu,\nu)}\int\frac{|x-y|^2}{2}\pi(\mathrm{d}x,\mathrm{d}y) =\int\frac{|x-T(x)|^2}{2}\mathrm{d}\mu(x). \]

In other words the optimal coupling is deterministic: \( {\pi(\mathrm{d}x,\mathrm{d}y)=\mu(\mathrm{d}x)\delta_{T(x)}(\mathrm{d}y)} \).
The transport map \( {T=\nabla\phi} \) realizes an optimal transport of \( {\mu} \) to \( {\nu} \).
A key here is the Kantorovich-Rubinstein dual formulation of \( {W_2} \):

\[ W_2(\mu,\nu)^2 =\sup_{f,g}\int f\mathrm{d}\mu-\int g\mathrm{d}\nu \]

where the supremum runs over the set of bounded and Lipschitz \( {f,g:\mathbb{R}^n\rightarrow\mathbb{R}} \) such that \( {f(x)\leq g(y)+\frac{|x-y|^2}{2}} \) for all \( {x,y} \). We can also take the inf-convolution \( {f(x)=\inf_{y\in\mathbb{R}^n}(g(y)+\frac{|x-y|^2}{2})} \).
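
In dimension one the optimality of the deterministic coupling can be checked by hand: the monotone map \( {T=G^{-1}\circ F} \) gives a coupling whose quadratic cost coincides with the quantile formula \( {\int_0^1\frac{|F^{-1}(u)-G^{-1}(u)|^2}{2}\mathrm{d}u} \). The sketch below is mine, not from the lecture notes, with arbitrary choices of \( {\mu} \) and \( {\nu} \).

```python
# Cost of the coupling (X, T(X)) versus the quantile formula for W_2^2 (with the 1/2 convention).
from scipy import stats
from scipy.integrate import quad

mu, nu = stats.expon(), stats.norm()
T = lambda x: nu.ppf(mu.cdf(x))

coupling_cost, _ = quad(lambda x: 0.5 * (x - T(x)) ** 2 * mu.pdf(x), 0.0, 40.0)
quantile_cost, _ = quad(lambda u: 0.5 * (mu.ppf(u) - nu.ppf(u)) ** 2, 1e-6, 1 - 1e-6)
print("cost of the coupling (x, T(x)) :", coupling_cost)
print("quantile formula for W_2^2     :", quantile_cost)   # the two values agree
```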

Reverse Brenier map, Legendre transform, convex duality. If \( {\nu} \) is absolutely continuous with respect to the Lebesgue measure then \( {\nabla \phi} \) is invertible and \( {(\nabla \phi)^{-1}=\nabla \phi^*} \) is the Brenier map between \( {\nu} \) and \( {\mu} \), where

\[ \phi^* (y) = \sup_x \left\{ \langle x , y \rangle - \phi (x) \right\} \]

is the Legendre transform of \( {\phi} \) (so that the inverse map is itself the gradient of a convex function).

Regularity of Brenier map. The Brenier map is not always continuous. For example if \( {\mu} \) is uniform on \( {[0,1]} \) and \( {\nu} \) is uniform on \( {[0,1/2] \cup [3/2 , 2]} \) then the Brenier map must be the identity on \( {[0,1/2[} \) and identity plus \( {1} \) on \( {]1/2,1]} \).

A correct hypothesis for the regularity of the Brenier map is convexity of the support of the target measure. Indeed, Luis Caffarelli has proved that if \( {\mu} \) and \( {\nu} \) are absolutely continuous, and if their supports \( {K} \) and \( {L} \) are convex, and if their densities \( {f,g} \) are bounded away from \( {0} \) and \( {+\infty} \) on \( {K} \) and \( {L} \) respectively, then the Brenier map \( {\nabla \phi} \) is a homeomorphism between the interior of \( {K} \) and that of \( {L} \). Moreover if \( {f} \) and \( {g} \) are continuous then \( {\nabla \phi} \) is a \( {\mathcal C^1} \) diffeomorphism.

The regularity theory of transportation of measure is a delicate subject that was explored in recent years by many mathematicians, including Alessio Figalli.

Monge-Ampère equation. When \( {\nabla \phi} \) is a \( {\mathcal C^1} \) diffeomorphism, the change of variable formula \( {y=\nabla\phi(x)} \) gives, for every test function \( {h} \), since \( {\mathrm{Jac}\,\nabla\phi=\nabla^2\phi} \) (Hessian),

\[ \int_L h(y) g(y) \mathrm{d} y = \int_{K} h \left( \nabla \phi (x) \right) g \left( \nabla \phi (x) \right) \mathrm{det}( \nabla^2\phi (x) ) \mathrm{d} x . \]

On the other hand, by definition of the Brenier map

\[ \int_L h(y) g(y)\mathrm{d} y = \int_{\mathbb{R}^n} h \mathrm{d}\nu = \int_{\mathbb{R}^n} h \circ \nabla \phi \mathrm{d}\mu = \int_K h \left( \nabla \phi (x) \right) f(x)\mathrm{d} x . \]

Since this is valid for every test function \( {h} \) we obtain the following equality

\[ g \left( \nabla \phi (x) \right) \, \mathrm{det}( \nabla^2\phi (x) ) = f(x) , \ \ \ \ \ (1) \]

for every \( {x} \) in the interior of \( {K} \). This is called the Monge-Ampère equation, a basic and important nonlinear equation in mathematics and physics.
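
In dimension one the Monge-Ampère equation reduces to \( {g(T(x))T'(x)=f(x)} \) with \( {T=G^{-1}\circ F} \), which can be checked numerically; the sketch below is mine, not from the lecture notes, and the choices \( {\mu=\mathcal{N}(0,1)} \), \( {\nu=\mathrm{Laplace}(0,1)} \) are arbitrary.

```python
# One-dimensional Monge-Ampere check: g(T(x)) * T'(x) = f(x), with T' approximated by
# a centered finite difference.
from scipy import stats

mu, nu = stats.norm(), stats.laplace()
f, g = mu.pdf, nu.pdf
T = lambda x: nu.ppf(mu.cdf(x))

eps = 1e-5
for x in [-2.0, -0.5, 0.3, 1.7]:
    T_prime = (T(x + eps) - T(x - eps)) / (2 * eps)
    print(f"x = {x:+.1f}:  g(T(x)) T'(x) = {g(T(x)) * T_prime:.6f},   f(x) = {f(x):.6f}")
```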

From Monge-Ampère to Poisson-Langevin. When \( {\phi(x)=\frac{1}{2}|x|^2} \), the Monge-Ampère equation simply reads \( {g(x)=f(x)} \). Let us consider a perturbation or linearization around this case by taking \( {\phi(x)=\frac{1}{2}|x|^2+\varepsilon\psi(x)+O(\varepsilon^2)} \) and \( {g(x)=(1+\varepsilon h(x)+O(\varepsilon^2))f(x)} \); then, as \( {\varepsilon\rightarrow0} \), we find the Poisson equation for the Langevin operator:

\[ \left(-\Delta-\frac{\nabla f}{f}\cdot\nabla\right)\psi=h. \]

In other words, this reads \( {-(\Delta-\nabla V\cdot\nabla)\psi=h} \) if we write \( {f=\mathrm{e}^{-V}} \). In the same spirit, the Wasserstein-Kantorovich distance can be interpreted as an inverse Sobolev norm.

The Caffarelli contraction theorem. If \( {\mu=\mathrm{e}^{-V}\mathrm{d}x} \) and \( {\nu=\mathrm{e}^{-W}\mathrm{d}x} \) are two probability measures on \( {\mathbb{R}^n} \) such that \( {\frac{\alpha}{2}\left|\cdot\right|^2-V} \) and \( {W-\frac{\beta}{2}\left|\cdot\right|^2} \) are convex for some constants \( {\alpha,\beta>0} \), then the Brenier map \( {T=\nabla \phi} \) pushing forward \( {\mu} \) to \( {\nu} \) satisfies \( {\left\Vert T\right\Vert_{\mathrm{Lip}}\leq\sqrt{\alpha / \beta}} \).

By taking \( {V=\frac{\alpha}{2}\left|\cdot\right|^2} \) we obtain that a probability measure which is log-concave with respect to a non trivial Gaussian is a Lipschitz deformation of this Gaussian!

Idea of proof. We begin with \( {n=1} \). Taking the logarithm in the Monge-Ampère equation gives \( {\frac{1}{2}\log(\varphi''^2)=\log|\varphi''|=-V+W(\varphi')} \), and taking the derivative twice gives

\[ \frac{\varphi''''\varphi''-\varphi'''^2}{\varphi''^2}=-V''+W''(\varphi')\varphi''^2+W'(\varphi')\varphi'''. \]

Now if \( {\varphi''} \) has a maximum at \( {x=x_*} \) then \( {\varphi'''(x_*)=0} \) and \( {\varphi''''(x_*)\leq0} \), and thus

\[ 0\geq-V''(x_*)+W''(\varphi'(x_*))\varphi''^2(x_*) \quad\text{hence}\quad \varphi''^2(x_*)\leq\alpha/\beta. \]

This maximum principle argument is attractive but a maximum at the boundary may produce difficulties. Let us follow now the same idea in the case \( {n\geq1} \). Observe first that the Lipschitz constant of \( {\nabla \phi} \) is the supremum of the operator norm of \( {\nabla^2\phi} \). So it is enough to prove \( {\Vert \nabla^2\phi (x) \Vert_{\mathrm{op}} \leq \sqrt{ \alpha / \beta }} \) for every \( {x} \). Besides since \( {\phi} \) is convex \( {\nabla^2\phi} \) is a positive matrix so this amounts to proving that \( {\langle \nabla^2\phi (x) u, u \rangle \leq\sqrt{\alpha/\beta}} \) for every unit vector \( {u} \) and every \( {x\in \mathbb{R}^n} \). Now we fix a direction \( {u} \) and we assume that the map

\[ \ell \colon x\mapsto \langle \nabla^2\phi (x) u , u \rangle \]

attains its maximum for \( {x=x_*} \). The logarithm of the Monge-Ampère equation gives

\[ \log \mathrm{det} \left( \nabla^2\phi (x) \right) = – V (x) + W \left( \nabla \phi (x) \right). \]

Now we differentiate this equation twice in the direction \( {u} \). To differentiate the left hand side, observe that if \( {A} \) is an invertible matrix

\[ \begin{array}{rcl} \log \mathrm{det} ( A + H ) & =& \log \mathrm{det} ( A ) + \mathrm{tr} ( A^{-1} H ) + o (H)\\ (A+H)^{-1} & =& A^{-1} - A^{-1} H A^{-1} + o ( H ). \end{array} \]

We obtain (omitting variables) \begin{multline*} -\mathrm{tr} \left( (\nabla^2\phi)^{-1} (\partial_u \nabla^2\phi) (\nabla^2\phi)^{-1} (\partial_u \nabla^2\phi) \right) + \mathrm{tr} \left( (\nabla^2\phi)^{-1} \partial_{uu} \nabla^2\phi \right)
= – \partial_{uu} V + \sum_i \partial_i W \partial_{iuu} \phi + \sum_{ij} \partial_{ij} W (\partial_{iu} \phi ) ( \partial_{ju} \phi ) . \end{multline*} We shall use this equation at \( {x_*} \). We claim that

\[ \mathrm{tr} \left( (\nabla^2\phi)^{-1} (\partial_u \nabla^2\phi) (\nabla^2\phi)^{-1} (\partial_u \nabla^2\phi) \right) \geq 0 . \]

Indeed, \( {\nabla^2\phi\geq0} \) so \( {(\nabla^2\phi)^{-1}\geq0} \) and since \( {\partial_u \nabla^2\phi} \) is symmetric, we get

\[ (\partial_u \nabla^2\phi) (\nabla^2\phi)^{-1} (\partial_u \nabla^2\phi) \geq 0 . \]

Now it remains to recall that the product of two positive matrices has positive trace, namely if \( {A} \) and \( {B} \) are \( {n\times n} \) real symmetric positive semidefinite then

\[ \mathrm{Tr}(AB)=\mathrm{Tr}(\sqrt{A}\sqrt{B}(\sqrt{A}\sqrt{B})^\top)\geq0. \]

Since the function \( {\ell} \) attains its maximum at \( {x_*} \) we have \( {\nabla^2\ell (x_*)\leq 0} \). Therefore

\[ \mathrm{tr} \left( (\nabla^2\phi)^{-1} \partial_{uu} \nabla^2\phi \right) = \mathrm{tr} \left( (\nabla^2\phi)^{-1} \nabla^2\ell \right) \leq 0 . \]

In the same way

\[ \sum_i \partial_i W \partial_{iuu} \phi = \langle \nabla W , \nabla \ell \rangle = 0 . \]

So at point \( {x_*} \) the main identity above gives

\[ \sum_{ij} \partial_{ij} W (\partial_{iu} \phi ) ( \partial_{ju} \phi ) \leq \partial_{uu} V . \]

Now the hypothesis made on \( {V} \) and \( {W} \) give \( {\partial_{uu} V \leq \alpha} \) and

\[ \sum_{ij} \partial_{ij} W (\partial_{iu} \phi ) ( \partial_{ju} \phi ) \geq \beta \sum_{i} (\partial_{iu} \phi )^2 = \beta \vert \nabla^2\phi (u) \vert^2 . \]

Since \( {u} \) has norm \( {1} \), we get

\[ \ell ( x_* ) = \langle \nabla^2\phi (x_*) u , u \rangle \leq \vert \nabla^2\phi (x_*) (u) \vert \leq \sqrt{ \frac \alpha \beta } . \]

Therefore \( {\ell ( x ) \leq \sqrt{ \alpha / \beta }} \) for every \( {x} \) which is the desired inequality.
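
Here is a one-dimensional numerical illustration of the contraction (a sketch of mine, not from the lecture notes): the potentials \( {V=\frac{\alpha}{2}x^2} \) and \( {W=\frac{\beta}{2}x^2+x^4} \) satisfy the hypotheses, the Brenier map is computed on a grid from the distribution functions, and its derivative indeed stays below \( {\sqrt{\alpha/\beta}} \); the values of \( {\alpha,\beta} \) and the grid are arbitrary choices.

```python
# One-dimensional Caffarelli contraction check: max T' <= sqrt(alpha/beta).
import numpy as np
from scipy import stats
from scipy.integrate import cumulative_trapezoid

alpha, beta = 1.0, 2.0
grid = np.linspace(-6.0, 6.0, 200001)

# Source mu = N(0, 1/alpha), i.e. V = (alpha/2) x^2 up to an additive constant.
F = stats.norm(scale=1.0 / np.sqrt(alpha)).cdf(grid)

# Target nu with density proportional to exp(-W), W = (beta/2) x^2 + x^4.
w = np.exp(-(beta / 2) * grid ** 2 - grid ** 4)
G = cumulative_trapezoid(w, grid, initial=0.0)
G /= G[-1]

# Brenier map T = G^{-1} o F on the grid, and its derivative by finite differences.
T = np.interp(F, G, grid)
T_prime = np.gradient(T, grid)

bulk = np.abs(grid) <= 3.0     # avoid numerical noise in the extreme tails
print("max of T' on [-3, 3]   :", T_prime[bulk].max())
print("bound sqrt(alpha/beta) :", np.sqrt(alpha / beta))
```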

Application to functional inequalities. The Poincaré inequality for the standard Gaussian measure \( {\gamma_n=\mathcal{N}(0,I_n)=(2\pi)^{-\frac{n}{2}}\mathrm{e}^{-\frac{|x|^2}{2}}\mathrm{d}x} \) on \( {\mathbb{R}^n} \) states that for an arbitrary say \( {\mathcal{C}^1} \) and compactly supported test function \( {f:\mathbb{R}^n\rightarrow\mathbb{R}} \),

\[ \int f^2\mathrm{d}\gamma_n-\left(\int f\mathrm{d}\gamma_n\right)^2 \leq\int|\nabla f|^2\mathrm{d}\gamma_n. \]

Let \( {\mu} \) be a probability measure on \( {\mathbb{R}^n} \), image of \( {\gamma_n} \) by a \( {\mathcal{C}^1} \) map \( {T:\mathbb{R}^n\rightarrow\mathbb{R}^n} \). The Poincaré inequality above with \( {f=g\circ T} \) for an arbitrary \( {g:\mathbb{R}^n\rightarrow\mathbb{R}} \) gives

\[ \int g^2\mathrm{d}\mu-\left(\int g\mathrm{d}\mu\right)^2 \leq\left\Vert T\right\Vert_{\mathrm{Lip}}^2\int|\nabla g|^2\mathrm{d}\mu. \]

This is a Poincaré inequality for \( {\mu} \), provided that \( {T} \) is Lipschitz.

The Caffarelli contraction theorem states that if \( {\mu=\mathrm{e}^{-V}\mathrm{d}x} \) with \( {V-\frac{\rho}{2}\left|\cdot\right|^2} \) convex for some constant \( {\rho>0} \) then the map \( {T} \) pushing forward \( {\gamma_n} \) to \( {\mu} \) satisfies \( {\left\Vert T\right\Vert_{\mathrm{Lip}}^2\leq1/\rho} \), which implies by the argument above that \( {\mu} \) satisfies a Poincaré inequality of constant \( {1/\rho} \). The same argument works for other Sobolev type functional inequalities satisfied by the Gaussian measure, such as the logarithmic Sobolev inequality and the Bobkov isoperimetric functional inequalities. This transportation argument is a striking alternative to the Bakry-Émery curvature criterion in order to establish functional inequalities, but it does not prove the Gaussian case and does not have the extensibility of the latter to manifolds and abstract Markovian settings.

From Monge-Ampère to Gaussian log-Sobolev. Let us give a proof of the optimal logarithmic Sobolev inequality for the standard Gaussian measure \( {\gamma_n} \) by using directly the Monge-Ampère equation. Let \( {f:\mathbb{R}^n\rightarrow\mathbb{R}_+} \) be such that \( {\int f\mathrm{d}\gamma_n=1} \). Let \( {T=\nabla\phi} \) be the Brenier map pushing forward \( {f\mathrm{d}\gamma_n} \) to \( {\gamma_n} \). We set \( {\theta(x):=\phi(x)-\frac{1}{2}|x|^2} \) so that \( {\nabla\phi(x)=x+\nabla\theta(x)} \). We have \( {\mathrm{Hess}(\theta)(x)+I_n\geq0} \), and Monge-Ampère gives

\[ f(x)\mathrm{e}^{-\frac{|x|^2}{2}} =\det(I_n+\mathrm{Hess}(\theta)(x))\mathrm{e}^{-\frac{|x+\nabla\theta(x)|^2}{2}}. \]

Taking the logarithm gives

\[ \begin{array}{rcl} \log f(x) &=&-\frac{|x+\nabla\theta(x)|^2}{2}+\frac{|x|^2}{2}+\log\det(I_n+\mathrm{Hess}(\theta)(x))\\ &=&-x\cdot\nabla\theta(x)-\frac{|\nabla\theta(x)|^2}{2}+\log\det(I_n+\mathrm{Hess}(\theta)(x))\\ &\leq&-x\cdot\nabla\theta(x)-\frac{|\nabla\theta(x)|^2}{2}+\Delta\theta(x), \end{array} \]

where we have used \( {\log(1+t)\leq t} \) for \( {1+t>0} \), applied to the eigenvalues of the positive symmetric matrix \( {I_n+\mathrm{Hess}(\theta)(x)} \). Now integration with respect to \( {f\mathrm{d}\gamma_n} \) gives

\[ \int f\log f\mathrm{d}\gamma_n \leq \int f(\Delta\theta-x\cdot\nabla\theta)\mathrm{d}\gamma_n -\int\frac{|\nabla\theta|^2}{2}f\mathrm{d}\gamma_n. \]

Finally, using integration by parts (\( {\Delta\theta-x\cdot\nabla\theta} \) is the Ornstein-Uhlenbeck generator applied to \( {\theta} \), and this generator is symmetric with respect to \( {\gamma_n} \)), we get

\[ \begin{array}{rcl} \int f\log f\mathrm{d}\gamma_n &\leq&-\int\frac{1}{2}\Bigr|\sqrt{f}\nabla\theta+\frac{\nabla f}{\sqrt{f}}\Bigr|^2\mathrm{d}\gamma_n +\frac{1}{2}\int\frac{|\nabla f|^2}{f}\mathrm{d}\gamma_n\\ &\leq&\frac{1}{2}\int\frac{|\nabla f|^2}{f}\mathrm{d}\gamma_n. \end{array} \]
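
The inequality just obtained, \( {\int f\log f\mathrm{d}\gamma_n\leq\frac{1}{2}\int\frac{|\nabla f|^2}{f}\mathrm{d}\gamma_n} \), can be sanity-checked numerically in dimension one; the test density \( {f(x)=1+\frac{1}{2}\sin(x)} \) below is an arbitrary choice with \( {\int f\mathrm{d}\gamma_1=1} \) (this sketch is mine, not from the lecture notes).

```python
# Numerical check of the Gaussian logarithmic Sobolev inequality in dimension one.
import numpy as np
from scipy.integrate import quad

gauss = lambda x: np.exp(-x ** 2 / 2) / np.sqrt(2 * np.pi)   # standard Gaussian density
f = lambda x: 1.0 + 0.5 * np.sin(x)                          # satisfies int f dgamma = 1
f_prime = lambda x: 0.5 * np.cos(x)

entropy, _ = quad(lambda x: f(x) * np.log(f(x)) * gauss(x), -12, 12)
fisher, _ = quad(lambda x: f_prime(x) ** 2 / f(x) * gauss(x), -12, 12)
print("int f log f dgamma      :", entropy)
print("(1/2) int f'^2/f dgamma :", 0.5 * fisher)   # the left-hand side is the smaller one
```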

Recall that \( {T=\nabla\phi=x+\nabla\theta} \) pushes forward \( {\nu} \) to \( {\gamma_n} \), where \( {\mathrm{d}\nu=f\mathrm{d}\gamma_n} \). Therefore

\[ \int\frac{|\nabla\theta|^2}{2}f\mathrm{d}\gamma_n =\int\frac{|x-T(x)|^2}{2}\mathrm{d}\nu =W_2^2(\nu,\gamma_n). \]

Beyond the log-Sobolev inequality for the Gaussian measure, it is possible to obtain in this way, from the Monge-Ampère equation, HWI (H, W, I for entropy, Wasserstein distance, and Fisher information) functional inequalities for strongly log-concave measures. From this point of view, optimal transportation provides a partial alternative to the Bakry-Émery criterion on \( {\mathbb{R}^n} \).

Further reading
