Press "Enter" to skip to content

A few bits of optimal transportation

Statue of Gaspard Monge (1746 – 1818), place Monge, Beaune, Côte d’Or, France.

This post is about some aspects of transportation of measure. It is mostly inspired by the lecture notes of an advanced master course prepared a few years ago in collaboration with my colleague Joseph Lehec at Université Paris-Dauphine – PSL. The objective is to reach the Caffarelli contraction theorem, one of my favourite theorems.

Pushforward or image measure. Let \( {T :\mathbb{R}^n \rightarrow \mathbb{R}^n} \) be a measurable map and let \( {\mu} \) be a probability measure on \( {\mathbb{R}^n} \). The pushforward of \( {\mu} \) by \( {T} \) is the probability measure \( {\nu} \) given, for every Borel set \( {A\subset\mathbb{R}^n} \), by

\[ \nu ( A ) = \mu ( T^{-1} ( A ) ). \]

In other words \( {T(X)\sim\nu} \) when \( {X\sim\mu} \), and thus, for every test function \( {h} \),

\[ \int_{\mathbb{R}^n} h \mathrm{d}\nu = \int_{\mathbb{R}^n} h \circ T \mathrm{d}\mu. \]
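As a quick illustration (a minimal numerical sketch, not part of the argument, assuming NumPy is available): the map \( {T(x)=-\log x} \) pushes the uniform law on \( {[0,1]} \) forward to the exponential law of parameter \( {1} \), and the identity above can be checked by Monte Carlo.

```python
import numpy as np

rng = np.random.default_rng(0)

# mu = Uniform(0,1); the map T(x) = -log(x) pushes mu forward to nu = Exp(1)
x = rng.uniform(size=100_000)   # samples from mu
y = -np.log(x)                  # T(X), distributed according to nu

# check  int h dnu = int h o T dmu  for the test function h(t) = cos(t)
lhs = np.cos(y).mean()          # Monte Carlo estimate of int h dnu
rhs = 0.5                       # exact value of int_0^infty cos(t) exp(-t) dt
print(lhs, rhs)                 # the two values should be close
```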

The Brenier theorem. It states that if \( {\mu} \) and \( {\nu} \) are two probability measures on \( {\mathbb{R}^n} \), with \( {\mu} \) absolutely continuous with respect to the Lebesgue measure, then there exists a map \( {T:\mathbb{R}^n\rightarrow\mathbb{R}^n} \) pushing forward \( {\mu} \) to \( {\nu} \) of the form \( {T=\nabla\phi} \) with \( {\phi} \) convex, and this map is unique.

The uniqueness of the map \( {T} \) must be understood almost everywhere.

The convex function \( {\phi} \) is of course not unique (adding a constant does not change its gradient) but its gradient is unique.

When \( {n=1} \) then \( {T=G^{-1}\circ F} \) where \( {F=\mu((-\infty,\bullet])} \) and \( {G=\nu((-\infty,\bullet])} \) are the cumulative distribution functions of \( {\mu} \) and \( {\nu} \), and \( {G^{-1}} \) is the generalized inverse (quantile function) of \( {G} \). The Brenier theorem states that in arbitrary dimension it is still possible to push forward using a multivariate analogue of the notion of non-decreasing function: the gradient of a convex function.
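Here is a minimal numerical sketch of this one-dimensional map, assuming SciPy is available; the choice \( {\mu=\mathrm{Exp}(1)} \) and \( {\nu=\mathcal{N}(0,1)} \) is just for illustration.

```python
import numpy as np
from scipy import stats

# One-dimensional transport map T = G^{-1} o F from mu = Exp(1) to nu = N(0,1)
F = stats.expon.cdf                 # cumulative distribution function of mu
G_inv = stats.norm.ppf              # quantile function (inverse cdf) of nu
T = lambda x: G_inv(F(x))           # non-decreasing: the 1D "gradient of a convex function"

rng = np.random.default_rng(1)
x = rng.exponential(size=100_000)   # samples from mu
y = T(x)                            # pushed-forward samples, approximately N(0,1)
print(y.mean(), y.var())            # expect values close to 0 and 1
```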

Relation to Wasserstein-Kantorovich coupling distance. If \( {\mu} \) and \( {\nu} \) have finite second moment and if \( {T=\nabla\phi} \) is the Brenier map pushing forward \( {\mu} \) to \( {\nu} \) then

\[ W_2(\mu,\nu)^2 =\min_{\pi\in\Pi(\mu,\nu)}\int\frac{|x-y|^2}{2}\pi(\mathrm{d}x,\mathrm{d}y) =\int\frac{|x-T(x)|^2}{2}\mu(\mathrm{d}x). \]

In other words the optimal coupling is deterministic: \( {\pi(\mathrm{d}x,\mathrm{d}y)=\mu(\mathrm{d}x)\delta_{T(x)}(\mathrm{d}y)} \).
The transport map \( {T=\nabla\phi} \) realizes an optimal transport of \( {\mu} \) to \( {\nu} \).
A key here is the Kantorovich-Rubinstein dual formulation of \( {W_2} \):

\[ W_2(\mu,\nu)^2 =\sup_{f,g}\int f\mathrm{d}\mu-\int g\mathrm{d}\nu \]

where the supremum runs over the set of bounded and Lipschitz \( {f,g:\mathbb{R}^n\rightarrow\mathbb{R}} \) such that \( {f(x)\leq g(y)+\frac{|x-y|^2}{2}} \) for all \( {x,y} \). We can also take \( {f} \) to be the inf-convolution \( {f(x)=\inf_{y\in\mathbb{R}^n}\bigl(g(y)+\frac{|x-y|^2}{2}\bigr)} \).
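For finitely supported measures, the primal minimization over couplings above is a finite linear program. Here is a minimal sketch, assuming SciPy is available, on a toy pair of discrete measures (the atoms and weights are arbitrary choices for illustration).

```python
import numpy as np
from scipy.optimize import linprog

# Discrete Kantorovich problem: min over couplings pi of sum_ij pi_ij |x_i - y_j|^2 / 2
x = np.array([0.0, 1.0, 2.0]); a = np.array([0.5, 0.3, 0.2])   # atoms and weights of mu
y = np.array([0.5, 1.5]);      b = np.array([0.6, 0.4])        # atoms and weights of nu

C = 0.5 * (x[:, None] - y[None, :]) ** 2          # cost matrix c_ij = |x_i - y_j|^2 / 2
m, n = C.shape

# marginal constraints: row sums equal a, column sums equal b
A_eq = np.zeros((m + n, m * n))
for i in range(m):
    A_eq[i, i * n:(i + 1) * n] = 1.0              # sum_j pi_ij = a_i
for j in range(n):
    A_eq[m + j, j::n] = 1.0                       # sum_i pi_ij = b_j
b_eq = np.concatenate([a, b])

res = linprog(C.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
print("W2^2 (with the 1/2 factor):", res.fun)
print("optimal coupling:\n", res.x.reshape(m, n))
```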

Reverse Brenier map, Legendre transform, convex duality. If \( {\nu} \) is absolutely continuous with respect to the Lebesgue measure then \( {\nabla \phi} \) is invertible and \( {(\nabla \phi)^{-1}=\nabla \phi^*} \) is the Brenier map between \( {\nu} \) and \( {\mu} \), where

\[ \phi^* (y) = \sup_x \left\{ \langle x , y \rangle - \phi (x) \right\} \]

is the Legendre transform of \( {\phi} \) (so that the reverse map is again the gradient of a convex function).
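For instance, if \( {\phi(x)=\frac{\alpha}{2}|x|^2} \) with \( {\alpha>0} \), then

\[ \phi^*(y)=\sup_x\Bigl\{\langle x,y\rangle-\frac{\alpha}{2}|x|^2\Bigr\}=\frac{|y|^2}{2\alpha}, \qquad \nabla\phi(x)=\alpha x, \qquad \nabla\phi^*(y)=\frac{y}{\alpha}=(\nabla\phi)^{-1}(y). \]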

Regularity of Brenier map. The Brenier map is not always continuous. For example if \( {\mu} \) is uniform on \( {[0,1]} \) and \( {\nu} \) is uniform on \( {[0,1/2] \cup [3/2 , 2]} \) then the Brenier map must be the identity on \( {[0,1/2)} \) and the identity plus \( {1} \) on \( {(1/2,1]} \).

A correct hypothesis for the regularity of the Brenier map is convexity of the support of the target measure. Indeed, Luis Caffarelli proved that if \( {\mu} \) and \( {\nu} \) are absolutely continuous, if their supports \( {K} \) and \( {L} \) are convex, and if their densities \( {f,g} \) are bounded away from \( {0} \) and \( {+\infty} \) on \( {K} \) and \( {L} \) respectively, then the Brenier map \( {\nabla \phi} \) is a homeomorphism between the interior of \( {K} \) and that of \( {L} \). Moreover if \( {f} \) and \( {g} \) are continuous then \( {\nabla \phi} \) is a \( {\mathcal C^1} \) diffeomorphism.

The regularity theory of optimal transportation is a delicate subject, explored in recent years by several mathematicians including Alessio Figalli.

Monge-Ampère equation. When \( {\nabla \phi} \) is a \( {\mathcal C^1} \) diffeomorphism, the change of variable \( {y=\nabla\phi(x)} \) gives, for every test function \( {h} \), since \( {\mathrm{Jac}\nabla\phi=\nabla^2\phi} \) (the Hessian),

\[ \int_L h(y) g(y) \mathrm{d} y = \int_{K} h \left( \nabla \phi (x) \right) g \left( \nabla \phi (x) \right) \mathrm{det}( \nabla^2\phi (x) ) \mathrm{d} x . \]

On the other hand, by definition of the Brenier map

\[ \int_L h(y) g(y)\mathrm{d} y = \int_{\mathbb{R}^n} h \mathrm{d}\nu = \int_{\mathbb{R}^n} h \circ \nabla \phi \mathrm{d}\mu = \int_K h \left( \nabla \phi (x) \right) f(x)\mathrm{d} x . \]

Since this is valid for every test function \( {h} \) we obtain the following equality

\[ g \left( \nabla \phi (x) \right) \, \mathrm{det}( \nabla^2\phi (x) ) = f(x) , \ \ \ \ \ (1) \]

for every \( {x} \) in the interior of \( {K} \). This is called the Monge-Ampère equation. It is an important nonlinear equation in mathematics and physics.
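As a one-dimensional sanity check, let \( {f} \) and \( {g} \) be the densities of \( {\mathcal{N}(0,1)} \) and \( {\mathcal{N}(0,\sigma^2)} \) respectively. The Brenier map is \( {x\mapsto\sigma x} \), coming from \( {\phi(x)=\frac{\sigma}{2}x^2} \), and indeed

\[ g(\phi'(x))\,\phi''(x) =\frac{1}{\sigma\sqrt{2\pi}}\mathrm{e}^{-\frac{(\sigma x)^2}{2\sigma^2}}\,\sigma =\frac{1}{\sqrt{2\pi}}\mathrm{e}^{-\frac{x^2}{2}} =f(x), \]

which is (1), since in dimension one \( {\det(\nabla^2\phi)=\phi''} \).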

From Monge-Ampère to Poisson-Langevin. When \( {\phi(x)=\frac{1}{2}|x|^2} \) the Monge-Ampère equation simply reads \( {g(x)=f(x)} \). Let us consider a perturbation or linearization around this case by taking \( {\phi(x)=\frac{1}{2}|x|^2+\varepsilon\psi(x)+O(\varepsilon^2)} \) and \( {g(x)=(1+\varepsilon h(x)+O(\varepsilon^2))f(x)} \); then, as \( {\varepsilon\rightarrow0} \), we find the Poisson equation for the Langevin operator:

\[ \left(-\Delta-\frac{\nabla f}{f}\cdot\nabla\right)\psi=h. \]

In other words, this reads \( {-(\Delta-\nabla V\cdot\nabla)\psi=h} \) if we write \( {f=\mathrm{e}^{-V}} \). In the same spirit, the Wasserstein-Kantorovich distance can be interpreted as an inverse Sobolev norm.
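Here is a sketch of the computation behind this linearization, keeping only the first order in \( {\varepsilon} \). Since \( {\nabla\phi=x+\varepsilon\nabla\psi+O(\varepsilon^2)} \) and \( {\nabla^2\phi=I_n+\varepsilon\nabla^2\psi+O(\varepsilon^2)} \),

\[ \det(I_n+\varepsilon\nabla^2\psi)=1+\varepsilon\Delta\psi+O(\varepsilon^2) \quad\text{and}\quad g(\nabla\phi(x))=(1+\varepsilon h(x))\bigl(f(x)+\varepsilon\nabla f(x)\cdot\nabla\psi(x)\bigr)+O(\varepsilon^2), \]

so that the Monge-Ampère equation \( {g(\nabla\phi)\det(\nabla^2\phi)=f} \) reduces, at first order in \( {\varepsilon} \), to

\[ f\Delta\psi+\nabla f\cdot\nabla\psi+hf=0, \quad\text{that is}\quad \left(-\Delta-\frac{\nabla f}{f}\cdot\nabla\right)\psi=h. \]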

The Caffarelli contraction theorem. If \( {\mu=\mathrm{e}^{-V}\mathrm{d}x} \) and \( {\nu=\mathrm{e}^{-W}\mathrm{d}x} \) are two probability measures on \( {\mathbb{R}^n} \) such that \( {\frac{\alpha}{2}\left|\cdot\right|^2-V} \) and \( {W-\frac{\beta}{2}\left|\cdot\right|^2} \) are convex for some constants \( {\alpha,\beta>0} \), then the Brenier map \( {T=\nabla \phi} \) pushing forward \( {\mu} \) to \( {\nu} \) satisfies \( {\left\Vert T\right\Vert_{\mathrm{Lip}}\leq\sqrt{\alpha / \beta}} \).

By taking \( {V=\frac{\alpha}{2}\left|\cdot\right|^2} \) we obtain that a probability measure which is log-concave with respect to a nondegenerate Gaussian is a Lipschitz deformation of this Gaussian!
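The bound is sharp in the Gaussian case: in dimension one, taking \( {V(x)=\frac{\alpha}{2}x^2} \) and \( {W(x)=\frac{\beta}{2}x^2} \) (up to normalizing constants), that is \( {\mu=\mathcal{N}(0,1/\alpha)} \) and \( {\nu=\mathcal{N}(0,1/\beta)} \), the Brenier map is the linear map

\[ T(x)=\sqrt{\frac{\alpha}{\beta}}\,x, \qquad \left\Vert T\right\Vert_{\mathrm{Lip}}=\sqrt{\frac{\alpha}{\beta}}. \]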

Idea of proof. We begin with \( {n=1} \). Taking the logarithm in the Monge-Ampère equation gives \( {\frac{1}{2}\log(\varphi''^2)=\log|\varphi''|=-V+W(\varphi')} \), and taking the derivative twice gives

\[ \frac{\varphi''''\varphi''-\varphi'''^2}{\varphi''^2}=-V''+W''(\varphi')\varphi''^2+W'(\varphi')\varphi'''. \]

Now if \( {\varphi''} \) has a maximum at \( {x=x_*} \) then \( {\varphi'''(x_*)=0} \) and \( {\varphi''''(x_*)\leq0} \), and thus

\[ 0\geq-V''(x_*)+W''(\varphi'(x_*))\varphi''^2(x_*) \quad\text{hence}\quad \varphi''^2(x_*)\leq\alpha/\beta. \]

This maximum principle argument is attractive, but a maximum attained at the boundary (or a supremum that is not attained at all) may produce difficulties. Let us now follow the same idea in the case \( {n\geq1} \). Observe first that the Lipschitz constant of \( {\nabla \phi} \) is the supremum over \( {x} \) of the operator norm of \( {\nabla^2\phi(x)} \). So it is enough to prove \( {\Vert \nabla^2\phi (x) \Vert_{\mathrm{op}} \leq \sqrt{ \alpha / \beta }} \) for every \( {x} \). Besides, since \( {\phi} \) is convex, \( {\nabla^2\phi} \) is a symmetric positive semidefinite matrix, so this amounts to proving that \( {\langle \nabla^2\phi (x) u, u \rangle \leq\sqrt{\alpha/\beta}} \) for every unit vector \( {u} \) and every \( {x\in \mathbb{R}^n} \). Now we fix a direction \( {u} \) and we assume that the map

\[ \ell \colon x\mapsto \langle \nabla^2\phi (x) u , u \rangle \]

attains its maximum for \( {x=x_*} \). The logarithm of the Monge-Ampère equation gives

\[ \log \mathrm{det} \left( \nabla^2\phi (x) \right) = – V (x) + W \left( \nabla \phi (x) \right). \]

Now we differentiate this equation twice in the direction \( {u} \). To differentiate the left hand side, observe that if \( {A} \) is an invertible matrix

\[ \begin{array}{rcl} \log \mathrm{det} ( A + H ) & =& \log \mathrm{det} ( A ) + \mathrm{tr} ( A^{-1} H ) + o (H)\\ (A+H)^{-1} & =& A^{-1} - A^{-1} H A^{-1} + o ( H ). \end{array} \]

We obtain, omitting variables,

\begin{multline*} -\mathrm{tr} \left( (\nabla^2\phi)^{-1} (\partial_u \nabla^2\phi) (\nabla^2\phi)^{-1} (\partial_u \nabla^2\phi) \right) + \mathrm{tr} \left( (\nabla^2\phi)^{-1} \partial_{uu} \nabla^2\phi \right) \\ = - \partial_{uu} V + \sum_i \partial_i W \, \partial_{iuu} \phi + \sum_{ij} \partial_{ij} W (\partial_{iu} \phi ) ( \partial_{ju} \phi ) . \end{multline*}

We shall use this equation at \( {x_*} \). We claim that

\[ \mathrm{tr} \left( (\nabla^2\phi)^{-1} (\partial_u \nabla^2\phi) (\nabla^2\phi)^{-1} (\partial_u \nabla^2\phi) \right) \geq 0 . \]

Indeed, \( {\nabla^2\phi\geq0} \) so \( {(\nabla^2\phi)^{-1}\geq0} \) and since \( {\partial_u \nabla^2\phi} \) is symmetric, we get

\[ (\partial_u \nabla^2\phi) (\nabla^2\phi)^{-1} (\partial_u \nabla^2\phi) \geq 0 . \]

Now it remains to recall that the product of two symmetric positive semidefinite matrices has nonnegative trace, namely if \( {A} \) and \( {B} \) are \( {n\times n} \) real symmetric positive semidefinite then

\[ \mathrm{Tr}(AB)=\mathrm{Tr}(\sqrt{A}\sqrt{B}(\sqrt{A}\sqrt{B})^\top)\geq0. \]

Since the function \( {\ell} \) attains its maximum at \( {x_*} \), we have \( {\nabla^2\ell (x_*)\leq 0} \). Therefore

\[ \mathrm{tr} \left( (\nabla^2\phi)^{-1} \partial_{uu} \nabla^2\phi \right) = \mathrm{tr} \left( (\nabla^2\phi)^{-1} \nabla^2\ell \right) \leq 0 . \]

Similarly, since \( {\nabla \ell (x_*) = 0} \),

\[ \sum_i \partial_i W \partial_{iuu} \phi = \langle \nabla W , \nabla \ell \rangle = 0 . \]

So at point \( {x_*} \) the main identity above gives

\[ \sum_{ij} \partial_{ij} W (\partial_{iu} \phi ) ( \partial_{ju} \phi ) \leq \partial_{uu} V . \]

Now the hypotheses made on \( {V} \) and \( {W} \) give \( {\partial_{uu} V \leq \alpha} \) and

\[ \sum_{ij} \partial_{ij} W (\partial_{iu} \phi ) ( \partial_{ju} \phi ) \geq \beta \sum_{i} (\partial_{iu} \phi )^2 = \beta \vert \nabla^2\phi (u) \vert^2 . \]

Since \( {u} \) has norm \( {1} \), we get

\[ \ell ( x_* ) = \langle \nabla^2\phi (x_*) u , u \rangle \leq \vert \nabla^2\phi (x_*) (u) \vert \leq \sqrt{ \frac \alpha \beta } . \]

Therefore \( {\ell ( x ) \leq \sqrt{ \alpha / \beta }} \) for every \( {x} \), which is the desired inequality.
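Here is a minimal numerical illustration of the theorem in dimension one, assuming NumPy is available. The source is the standard Gaussian (\( {\alpha=1} \)) and the target density, proportional to \( {\mathrm{e}^{-x^2/2-x^4/4}} \), is an arbitrary choice with \( {\beta=1} \), so the Brenier map should be \( {1} \)-Lipschitz.

```python
import numpy as np

# mu = N(0,1), i.e. V = x^2/2 (alpha = 1); nu has density proportional to
# exp(-x^2/2 - x^4/4), i.e. W - x^2/2 is convex (beta = 1).
# The Caffarelli contraction theorem predicts that T = G^{-1} o F is 1-Lipschitz.
t = np.linspace(-6.0, 6.0, 20001)
f = np.exp(-t**2 / 2)                    # unnormalized density of mu
g = np.exp(-t**2 / 2 - t**4 / 4)         # unnormalized density of nu

F = np.cumsum(f); F /= F[-1]             # numerical cdf of mu
G = np.cumsum(g); G /= G[-1]             # numerical cdf of nu

T = np.interp(F, G, t)                   # T(t_i) = G^{-1}(F(t_i)) by interpolation
slopes = np.diff(T) / np.diff(t)
print("max slope of T:", slopes.max())   # expected <= 1 up to discretization error
```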

Application to functional inequalities. The Poincaré inequality for the standard Gaussian measure \( {\gamma_n=\mathcal{N}(0,I_n)=(2\pi)^{-\frac{n}{2}}\mathrm{e}^{-\frac{|x|^2}{2}}\mathrm{d}x} \) on \( {\mathbb{R}^n} \) states that for every, say, \( {\mathcal{C}^1} \) and compactly supported test function \( {f:\mathbb{R}^n\rightarrow\mathbb{R}} \),

\[ \int f^2\mathrm{d}\gamma_n-\left(\int f\mathrm{d}\gamma_n\right)^2 \leq\int|\nabla f|^2\mathrm{d}\gamma_n. \]

Let \( {\mu} \) be a probability measure on \( {\mathbb{R}^n} \), image of \( {\gamma_n} \) by a \( {\mathcal{C}^1} \) map \( {T:\mathbb{R}^n\rightarrow\mathbb{R}^n} \). The Poincaré inequality above with \( {f=g\circ T} \) for an arbitrary \( {g:\mathbb{R}^n\rightarrow\mathbb{R}} \) gives

\[ \int g^2\mathrm{d}\mu-\left(\int g\mathrm{d}\mu\right)^2 \leq\left\Vert T\right\Vert_{\mathrm{Lip}}^2\int|\nabla g|^2\mathrm{d}\mu. \]
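Indeed, by the chain rule, \( {|\nabla(g\circ T)(x)|=|(\mathrm{D}T(x))^\top\nabla g(T(x))|\leq\left\Vert T\right\Vert_{\mathrm{Lip}}\,|\nabla g(T(x))|} \), and the pushforward identity turns every Gaussian integral of a function of \( {T} \) into an integral against \( {\mu} \):

\[ \int (g\circ T)^2\mathrm{d}\gamma_n=\int g^2\mathrm{d}\mu, \qquad \int |\nabla g|^2\circ T\,\mathrm{d}\gamma_n=\int |\nabla g|^2\mathrm{d}\mu. \]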

This is a Poincaré inequality for \( {\mu} \), provided that \( {T} \) is Lipschitz.

The Caffarelli contraction theorem states that if \( {\mu=\mathrm{e}^{-V}\mathrm{d}x} \) with \( {V-\frac{\rho}{2}\left|\cdot\right|^2} \) convex for some constant \( {\rho>0} \), then the map \( {T} \) pushing forward \( {\gamma_n} \) to \( {\mu} \) satisfies \( {\left\Vert T\right\Vert_{\mathrm{Lip}}^2\leq1/\rho} \), which implies, by the argument above, that \( {\mu} \) satisfies a Poincaré inequality with constant \( {1/\rho} \). The same argument works for other Sobolev-type functional inequalities satisfied by the Gaussian measure, such as the logarithmic Sobolev inequality and the Bobkov isoperimetric functional inequality. This transportation argument is a striking alternative to the Bakry-Émery curvature criterion for establishing functional inequalities, but it does not prove the Gaussian case itself and does not share the extensibility of the latter to manifolds and abstract Markovian settings.

From Monge-Ampère to Gaussian log-Sobolev. Let us give a proof of the optimal logarithmic Sobolev inequality for the standard Gaussian measure \( {\gamma_n} \) by using the Monge-Ampère equation directly. Let \( {f:\mathbb{R}^n\rightarrow\mathbb{R}_+} \) be such that \( {\int f\mathrm{d}\gamma_n=1} \). Let \( {T=\nabla\phi} \) be the Brenier map pushing forward \( {f\mathrm{d}\gamma_n} \) to \( {\gamma_n} \). We set \( {\theta(x):=\phi(x)-\frac{1}{2}|x|^2} \) so that \( {\nabla\phi(x)=x+\nabla\theta(x)} \). We have \( {\mathrm{Hess}(\theta)(x)+I_n\geq0} \), and the Monge-Ampère equation gives

\[ f(x)\mathrm{e}^{-\frac{|x|^2}{2}} =\det(I_n+\mathrm{Hess}(\theta)(x))\mathrm{e}^{-\frac{|x+\nabla\theta(x)|^2}{2}}. \]

Taking the logarithm gives

\[ \begin{array}{rcl} \log f(x) &=&-\frac{|x+\nabla\theta(x)|^2}{2}+\frac{|x|^2}{2}+\log\det(I_n+\mathrm{Hess}(\theta)(x))\\ &=&-x\cdot\nabla\theta(x)-\frac{|\nabla\theta(x)|^2}{2}+\log\det(I_n+\mathrm{Hess}(\theta)(x))\\ &\leq&-x\cdot\nabla\theta(x)-\frac{|\nabla\theta(x)|^2}{2}+\Delta\theta(x), \end{array} \]

where we have used \( {\log(1+t)\leq t} \) for \( {1+t>0} \), applied to the eigenvalues of the symmetric positive semidefinite matrix \( {I_n+\mathrm{Hess}(\theta)(x)} \), which gives \( {\log\det(I_n+\mathrm{Hess}(\theta)(x))\leq\Delta\theta(x)} \). Now integration with respect to \( {f\mathrm{d}\gamma_n} \) gives

\[ \int f\log f\mathrm{d}\gamma_n \leq \int f(\Delta\theta-x\cdot\nabla\theta)\mathrm{d}\gamma_n -\int\frac{|\nabla\theta|^2}{2}f\mathrm{d}\gamma_n. \]

Finally, using integration by parts for the Ornstein-Uhlenbeck operator \( {\Delta-x\cdot\nabla} \), namely \( {\int(\Delta\theta-x\cdot\nabla\theta)f\mathrm{d}\gamma_n=-\int\nabla\theta\cdot\nabla f\mathrm{d}\gamma_n} \), we get

\[ \begin{array}{rcl} \int f\log f\mathrm{d}\gamma_n &\leq&-\int\frac{1}{2}\Bigr|\sqrt{f}\nabla\theta+\frac{\nabla f}{\sqrt{f}}\Bigr|^2\mathrm{d}\gamma_n +\frac{1}{2}\int\frac{|\nabla f|^2}{f}\mathrm{d}\gamma_n\\ &\leq&\frac{1}{2}\int\frac{|\nabla f|^2}{f}\mathrm{d}\gamma_n. \end{array} \]

Recall that \( {T=\nabla\phi=x+\nabla\theta} \) pushes forward \( {\nu} \) to \( {\gamma_n} \), where \( {\mathrm{d}\nu=f\mathrm{d}\gamma_n} \). Therefore

\[ \int\frac{|\nabla\theta|^2}{2}f\mathrm{d}\gamma_n =\int\frac{|x-T(x)|^2}{2}\mathrm{d}\nu =W_2^2(\nu,\gamma_n). \]

Beyond the log-Sobolev inequality for the Gaussian measure, it is possible to obtain in this way, from the Monge-Ampère equation, HWI (H, W, I for entropy, Wasserstein, and Fisher information) functional inequalities for strongly log-concave measures. From this point of view, optimal transportation provides a partial alternative to the Bakry-Émery criterion on \( {\mathbb{R}^n} \).
