# Month: October 2011

Let ${\mathcal{A}}$ be a ${\star}$-algebra over ${\mathbb{C}}$ with zero ${0}$, unit ${1}$, and involution ${a\mapsto a^*}$ such that ${(ab)^*=b^*a^*}$ for every ${a,b\in\mathcal{A}}$. Let ${\tau:\mathcal{A}\rightarrow\mathbb{C}}$ be a linear form such that ${\tau(aa^*)\geq0}$ for every ${a\in\mathcal{A}}$, and ${\tau(1)=1}$. We then say that ${(\mathcal{A},\tau)}$ is an algebraic probability space. We do not assume that ${\tau(ab)=\tau(ba)}$ for every ${a,b\in\mathcal{A}}$, nor that ${\tau(aa^*)=0}$ iff ${a=0}$, even though both hold in the following two examples. The simplest example, which is commutative, is given by

$\mathcal{A}=\bigcap_{1\leq p<\infty}\mathrm{L}^p(\Omega,\mathcal{F},\mathbb{P},\mathbb{C}) \quad\text{and}\quad \tau=\mathbb{E} \quad\text{and}\quad a^*=\bar{a}$

where ${(\Omega,\mathcal{F},\mathbb{P})}$ is a classical probability space. A non-commutative example is given by

$\mathcal{A}=\mathcal{M}_n(\mathbb{C}) \quad\text{and}\quad \tau=\frac{1}{n}\mathrm{Trace} \quad\text{and}\quad a^*=\bar{a}^\top.$

One can mix the two by considering integrable random matrices equipped with ${\tau=\frac{1}{n}\mathbb{E}\mathrm{Trace}(\cdot)}$. Here we focus on the purely algebraic notion of ${\star}$-algebras, and we should not confuse this notion with the algebraic-analytic notions of ${C^*}$-algebras or von Neumann ${W^*}$-algebras.
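As a quick sanity check (a sketch, not from the text; the random matrices and their size are arbitrary), the non-commutative example ${(\mathcal{M}_n(\mathbb{C}),\frac{1}{n}\mathrm{Trace})}$ can be probed numerically:

```python
import numpy as np

# Sketch of the non-commutative example (M_n(C), tau = Trace/n, a* = conjugate
# transpose). The matrices a, b below are arbitrary illustrations.
rng = np.random.default_rng(0)
n = 3
a = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
b = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

def tau(m):
    """The normalized trace, a state on M_n(C)."""
    return np.trace(m) / n

def star(m):
    """The involution a -> conjugate transpose."""
    return m.conj().T

assert np.isclose(tau(np.eye(n)), 1.0)          # tau(1) = 1
assert tau(a @ star(a)).real >= 0               # positivity: tau(aa*) >= 0
assert np.isclose(tau(a @ star(a)).imag, 0.0)   # tau(aa*) is actually real
assert np.isclose(tau(a @ b), tau(b @ a))       # here tau is moreover tracial
```

Note that the last property (traciality) holds in this example but is not part of the definition above.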

Algebraic random variables. An element ${a\in\mathcal{A}}$ is called an algebraic random variable, and its ${\star}$-distribution is the collection of ${\star}$-moments

$\tau(a^{\varepsilon_1}\cdots a^{\varepsilon_n})$

for every ${n\geq1}$ and every ${\varepsilon_1,\ldots,\varepsilon_n}$ in ${\{1,\star\}}$. When ${a=a^*}$ (we say that ${a}$ is real), the ${\star}$-distribution of ${a}$ is characterized by the sequence of moments ${\tau(a^n)}$, ${n\in\mathbb{N}}$. In this case, thanks to the Hamburger moment theorem, this sequence of moments of ${a}$ is the sequence of moments of some probability measure ${\mu}$ on ${\mathbb{R}}$. This probability measure ${\mu}$ is not unique in general; the Carleman condition ensures uniqueness when

$\sum_n(\tau(a^{2n}))^{-1/(2n)}=\infty.$

Now we define four algebraic notions of independence, which actually correspond to simplification rules for the computation of mixed moments. The first notion matches the classical notion of commutative probability theory, while the second is the one of free probability theory.

Commutative independence. A family ${(\mathcal{A}_i)_{i\in I}}$ of sub-${\star}$-algebras of ${\mathcal{A}}$ is commutative independent when for every ${i_1,\ldots,i_n\in I}$, and every ${a_1\in\mathcal{A}_{i_1},\ldots,a_n\in\mathcal{A}_{i_n}}$,

$\tau(a_1\cdots a_n) = \left\{ \begin{array}{ll} \tau(a_1)\tau(a_2\cdots a_n) & \mbox{if } i_1\not\in\{i_2,\ldots,i_n\} \\ \tau(a_2\cdots a_{r-1}(a_1a_r)a_{r+1}\cdots a_n) & \mbox{if } r=\min\{j>1:i_1=i_j\} \end{array} \right.$

These rules allow one first to group the ${a_i}$ belonging to the same sub-${\star}$-algebra and then to factorize ${\tau}$. The ${\star}$-distribution of ${a}$ is in this case uniquely determined by the law of ${a}$ as a classical random variable (the converse is not true in general, since the moment problem may not have a unique solution).

Free independence. A family ${(\mathcal{A}_i)_{i\in I}}$ of sub-${\star}$-algebras of ${\mathcal{A}}$ is free independent when for every ${i_1,\ldots,i_n\in I}$ with ${i_1\neq\cdots\neq i_n}$ (any two consecutive indices are different), and every ${a_1\in\mathcal{A}_{i_1},\ldots,a_n\in\mathcal{A}_{i_n}}$,

$\tau((a_1-\tau(a_1))\cdots(a_n-\tau(a_n))) =0.$

Note that if ${a,b}$ are free independent (i.e. the sub-${\star}$-algebras they generate are free independent), and if ${\tau(a)=\tau(b)=0}$, then ${\tau(abab)=0}$, while for commutative independence, ${\tau(abab)=\tau(a^2b^2)=\tau(a^2)\tau(b^2)}$, which is not zero in general.

Boolean independence. A family ${(\mathcal{A}_i)_{i\in I}}$ of subsets of ${\mathcal{A}}$ closed under the algebraic operations and the involution ${\star}$, but which may not contain the unit ${1}$, is Boolean independent when for every ${i_1,\ldots,i_n\in I}$ with ${i_1\neq\cdots\neq i_n}$ (any two consecutive indices are different), and every ${a_1\in\mathcal{A}_{i_1},\ldots,a_n\in\mathcal{A}_{i_n}}$,

$\tau(a_1\cdots a_n)=\tau(a_1)\tau(a_2\cdots a_n).$

For instance, if ${a}$ and ${b}$ are Boolean independent (i.e. the subsets they generate are Boolean independent) then ${\tau(abab)=\tau(a)^2\tau(b)^2}$ and ${\tau(aba^2b)=\tau(a)\tau(b)^2\tau(a^2)}$.
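The Boolean factorization rule lends itself to a tiny mechanical implementation; the following sketch (not from the text, moment values arbitrary) represents each letter by its subset index and its individual moment:

```python
from functools import reduce

def boolean_moment(word):
    """Mixed moment of a word a_1 ... a_n with alternating indices under
    Boolean independence: tau(a_1 ... a_n) = tau(a_1) tau(a_2 ... a_n),
    applied repeatedly. Each letter is a pair (subset_index, tau_of_letter)."""
    assert all(i != j for (i, _), (j, _) in zip(word, word[1:])), \
        "any two consecutive indices must differ"
    return reduce(lambda acc, letter: acc * letter[1], word, 1.0)

# illustrative moment values: tau(a) = 2, tau(b) = 3, tau(a^2) = 5
ta, tb, ta2 = 2.0, 3.0, 5.0

# tau(abab) = tau(a)^2 tau(b)^2
assert boolean_moment([(0, ta), (1, tb), (0, ta), (1, tb)]) == ta**2 * tb**2
# tau(a b a^2 b) = tau(a) tau(b)^2 tau(a^2): a^2 is a single letter of the same subset
assert boolean_moment([(0, ta), (1, tb), (0, ta2), (1, tb)]) == ta * tb**2 * ta2
```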

Monotone independence. Recall that if ${i_1\neq \cdots \neq i_n}$ is a sequence of integers in which any two consecutive terms are different, then ${i_k}$ is a peak when either ${k=1}$ and ${i_1>i_2}$, or ${k=n}$ and ${i_{n-1}<i_n}$, or ${1<k<n}$ and ${i_{k-1}<i_k>i_{k+1}}$. A family ${(\mathcal{A}_i)_{i\in\mathbb{N}}}$ of subsets of ${\mathcal{A}}$ closed under the algebraic operations and the involution ${\star}$, but which may not contain the unit ${1}$, is monotone independent when for every ${i_1,\ldots,i_n\in\mathbb{N}}$ with ${i_1\neq\cdots\neq i_n}$ (any two consecutive indices are different), and every ${a_1\in\mathcal{A}_{i_1},\ldots,a_n\in\mathcal{A}_{i_n}}$,

$\tau(a_1\cdots a_n)=\tau(a_k)\tau(a_1\cdots \check{a}_k \cdots a_n)$

when ${k}$ is a peak of the sequence ${i_1,\ldots,i_n}$, where ${\check{a}_k}$ means that ${a_k}$ is removed from the product.

Note. Boolean and monotone independence are trivial when ${1\in\mathcal{A}_i}$, i.e. when ${\mathcal{A}_i}$ is a sub-${\star}$-algebra. The notion of free independence was introduced by Voiculescu and is at the heart of free probability theory. The notion of Boolean independence was developed by Bożejko and his followers. The notion of monotone independence is due to Lu and Muraki. Various other notions of independence, not considered here, are studied in the literature.

Convolutions. If ${a}$ and ${b}$ are independent for one of the four notions of independence, then the ${\star}$-distribution of ${a+b}$ depends only on the ${\star}$-distribution of ${a}$ and of ${b}$, and is called the convolution of these distributions. We recover the classical notion of convolution for the commutative independence, and the Voiculescu notion of free convolution for the free independence.

Singleton property. The four notions of independence satisfy the singleton property: if ${a_1\in\mathcal{A}_{i_1},\ldots,a_n\in\mathcal{A}_{i_n}}$ where ${\mathcal{A}_{i_1},\ldots,\mathcal{A}_{i_n}}$ are independent (for any of these four notions), if ${a_i=a_i^*}$ for every ${1\leq i\leq n}$, if ${\tau(a_1)=\cdots=\tau(a_n)=0}$, and if there exists ${1\leq k\leq n}$ such that ${\{1\leq j\leq n:i_j=i_k\}=\{k\}}$ (i.e. ${a_k}$ is the unique element of ${\mathcal{A}_{i_k}}$ in the sequence ${a_1,\ldots,a_n}$), then ${\tau(a_1\cdots a_n)=0}$.

Central limit theorems. Let ${a_1,a_2,\ldots\in\mathcal{A}}$. We have, for every ${n,m\geq1}$,

$\tau((a_1+\cdots+a_n)^m)=\sum_{i_1,\ldots,i_m=1}^n\tau(a_{i_1}\cdots a_{i_m}).$

The mixed moment ${\tau(a_{i_1}\cdots a_{i_m})}$ can be computed using the notions of independence. Let us make the following assumptions on the variables:

• the variables are real: ${a_i=a_i^*}$ for any ${i\geq1}$
• the variables are centered and normalized: ${\tau(a_i)=0}$ and ${\tau(a_i^2)=1}$ for all ${i\geq1}$
• the variables have bounded mixed moments: for all ${n\geq1}$,

$\sup_{i_1\geq1,\ldots,i_n\geq1}|\tau(a_{i_1}\cdots a_{i_n})|<\infty$

• the variables are independent (for one of the four notions of independence).

Then it can be shown that for every ${m\geq1}$,

$\lim_{n\rightarrow\infty}\tau\left(\left(\frac{a_1+\cdots+a_n}{\sqrt{n}}\right)^m\right) = \int\!x^m\,d\mu$

where the limiting distribution ${\mu}$ is${\ldots}$

• for commutative independence, the standard Gaussian distribution

$\frac{e^{-\frac{x^2}{2}}}{\sqrt{2\pi}}\,dx$

Its moments are given by ${m_{2k+1}=0}$ and ${m_{2k}=\frac{(2k)!}{2^k\,k!}=(2k-1)!!}$

• for free independence, the Wigner semi-circle distribution on ${[-2,2]}$

$\sqrt{4-x^2}\frac{\mathbf{1}_{[-2,2]}(x)}{2\pi}\,dx$

Its moments are given by ${m_{2k+1}=0}$ and ${m_{2k}=\frac{1}{k+1}\binom{2k}{k}=\frac{(2k)!}{k!(k+1)!}}$ (Catalan numbers)

• for Boolean independence, the symmetric Bernoulli distribution on ${\{-1,1\}}$

$\frac{1}{2}(\delta_{-1}+\delta_1)$

Its moments are given by ${m_{2k+1}=0}$ and ${m_{2k}=1}$

• for monotone independence, the arc-sine distribution on ${[-\sqrt{2},\sqrt{2}]}$

$\frac{\mathbf{1}_{[-\sqrt{2},\sqrt{2}]}(x)}{\pi\sqrt{2-x^2}}\,dx$

Its moments are given by ${m_{2k+1}=0}$ and ${m_{2k}=2^{-k}\binom{2k}{k}=\frac{(2k)!}{2^k(k!)^2}}$
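The four even-moment formulas above can be cross-checked against their equivalent closed forms (a small sanity script, not from the text):

```python
from math import comb, factorial

def dfact(n):
    """Double factorial n!! for odd n, with the convention (-1)!! = 1."""
    return 1 if n <= 0 else n * dfact(n - 2)

for k in range(8):
    # Gaussian: m_{2k} = (2k)!/(2^k k!) = (2k-1)!!
    assert factorial(2 * k) // (2**k * factorial(k)) == dfact(2 * k - 1)
    # semi-circle: m_{2k} = Catalan number C_k = binom(2k,k)/(k+1)
    assert comb(2 * k, k) // (k + 1) == factorial(2 * k) // (factorial(k) * factorial(k + 1))
    # arc-sine on [-sqrt(2), sqrt(2)]: m_{2k} = 2^{-k} binom(2k,k)
    assert comb(2 * k, k) / 2**k == factorial(2 * k) / (factorial(k)**2 * 2**k)
    # Boolean (symmetric Bernoulli): m_{2k} = 1, nothing to check
```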

Stability. The Gaussian distribution is stable under the commutative convolution, the Wigner semi-circle distribution is stable under the free convolution, the Bernoulli distribution is stable under the Boolean convolution, while the arc-sine distribution is stable under the monotone convolution.

Open problem. Note that in the four cases, the second moment is constantly equal to ${1}$ along the central limit theorem (conservation law). In the case of commutative independence, it was conjectured by Shannon and proved a few years ago that the Boltzmann-Shannon entropy is monotonic along the central limit theorem (additionally, its maximum under a second moment constraint is achieved by the standard Gaussian law). Similarly, in the case of free independence, it was proved by Shlyakhtenko a few years ago that the Voiculescu entropy is monotonic along the central limit theorem (additionally, its maximum under a second moment constraint is achieved by the Wigner semi-circle distribution). Both entropies are additive for tensor products of random variables. The existence of such entropies for Boolean and monotone independence constitutes a natural problem (still open at the time of writing – any ideas?).

Recently, a French friend of mine, Mr C, was visiting Italy. He wanted to watch a streaming video on a French media website, say www.media.fr. The problem was that this website blocks access from outside France for capitalistic reasons. So Mr C asked whether he could use a sort of proxy based in France and accessible from Italy in order to watch his video. We thus have two constraints: throughput and location. This post is devoted to a quick description of two possible solutions using some knowledge of IPv4 TCP/IP networking.

IP level solution: Virtual Private Network. This is the best solution in principle, since it solves the problem at the IP level, for all services, not only for the web. For this solution, Mr C needs a machine in France, say machine.domain.fr, connected to the Internet, with high capacity upstream and downstream (this excludes machines connected with commercial ADSL, due to the limited upstream). On machine.domain.fr, Mr C may install, as root, the free software OpenVPN (this is quite easy on Debian GNU/Linux, for instance). Mr C can then connect his laptop in Italy to this Virtual Private Network (VPN). The main problem with this solution is finding such a machine. Most machines in academic networks are protected by a firewall blocking arbitrary connections from outside the academic network. Of course, one can bypass the firewall using SSH, but this complicates things and produces an ugly solution. Mr C is lucky if his university provides a VPN service. Some universities do. Mine does not. The Mathrice VPN (CNRS) allows connections to MathSciNet for instance, but it seems that it does not allow connections to video streaming sites! (test by yourself).

Application level solution: SOCKS server over SSH. This is the simplest solution. Suppose that Mr C has access to an OpenSSH server located in his French university, say ssh.uni.fr. From his laptop in Italy, Mr C can connect to this server, say using the command ssh -D 6666 ssh.uni.fr. It remains for him to configure his favorite web browser(1) to use a SOCKS proxy with IP 127.0.0.1 and port 6666. One may replace 6666 by any number in [1024,65535]. This solution at the application level works very well for all applications able to use the SOCKS v5 protocol. For other applications, one can use a socksifying wrapper. Of course, this solution will not work if the server ssh.uni.fr blocks the SOCKS feature of the SSH server (test by yourself).

(1) for Firefox: Edit/Preferences/Advanced/Network/Parameters/SOCKS_Host (not HTTP_Proxy).
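For repeated use, the same dynamic forwarding can be stored in the OpenSSH client configuration; a minimal sketch, reusing the hypothetical ssh.uni.fr and port 6666 from above:

```
# ~/.ssh/config -- equivalent to "ssh -D 6666 ssh.uni.fr"
Host frproxy
    HostName ssh.uni.fr
    DynamicForward 6666
```

Mr C can then simply type ssh frproxy, and the SOCKS proxy on 127.0.0.1:6666 is opened automatically.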

Note. It is also possible to use other proxies available on the Internet, either generic purpose proxies or specific proxies dedicated to video streams. Some of them are free. Personally, I prefer the solutions above since they do not involve untrusted third parties.

IPv6. Both solutions can be adapted to IPv6 (exercise!).

Thanks to my former teacher and now colleague Philippe Carmona – nephew of René Carmona – I recently had the opportunity to learn the basics of the job of Managing Editor of an electronic mathematical journal. The articles published in this journal are written in $\LaTeX$. Even if I do not pretend that my own $\LaTeX$ files are perfect – I am constantly learning! – I was disappointed by the poor quality of the $\LaTeX$ code produced by many young and old mathematicians. Take a tour on arXiv to be convinced if needed: the source code is always available on this site, a museum of horrors. Here are some basic guidelines for sane habits:

• never use \def for defining macros, use instead \newcommand
• never use double dollars (\$\$) for displayed equations, use instead the brackets $\backslash[\backslash]$
• use \textbf{}, \textit{}, and \emph{} instead of {\bf }, {\it }, and {\em }
• never use one letter names for macros or for environments
• never use strange names for environments and macros
• use the environment proof provided by amsmath
• use \newenvironment to define new environments
• use \binom{n}{k} instead of n \choose k
• use \frac{a}{b} instead of a \over b
• never use an exotic package if you do not *really* need it
• indent your code and avoid overly long lines
• use prefixed labels such as eq: for equations and th: for theorems
• to produce graphics, avoid using psfrag or xfig and use instead ipe
• learn how to use the error messages produced during compilation
• always use your bright sense of esthetics, not a dark laziness
• read the wiki-books on LaTeX and LaTeX Maths without moderation
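A small fragment illustrating several of these guidelines at once (the macro names and the label are arbitrary; it assumes the amsmath and amssymb packages):

```latex
\usepackage{amsmath,amssymb}

% \newcommand with descriptive names (never \def, never one-letter names)
\newcommand{\reals}{\mathbb{R}}
\newcommand{\expect}[1]{\mathbb{E}\left[#1\right]}

% a displayed equation with the brackets, not double dollars,
% using \frac and \binom rather than \over and \choose
\[
  \expect{X^{2k}} = \frac{(2k)!}{2^k\,k!}
  \qquad\text{and}\qquad
  \binom{2k}{k} = \frac{(2k)!}{(k!)^2}
\]

% a numbered equation with a prefixed label
\begin{equation}\label{eq:carleman}
  \sum_{n} m_{2n}^{-1/(2n)} = \infty .
\end{equation}
```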

All mathematicians believe that the mathematical result is the most important. Many of them believe that the esthetic of the proof is also important. Some of them believe that even the writing style is important. Few of them believe that the $\LaTeX$ code should also be nice. It is a matter of taste after all. To me, a mathematical proof and a $\LaTeX$ file are both programs, and I like nice programs. Good $\LaTeX$ code is easier to maintain, easier to convert, and easier to read. Last but not least, good $\LaTeX$ code helps your co-authors and helps speed up the publication process. $\TeX$ is a good program, but an author using $\LaTeX$ should write genuine $\LaTeX$ code, not an ugly mixture of $\TeX$ and $\LaTeX$. Nothing is perfect, but this is not a reason to leave things ugly 😉 If you think that you have learned $\LaTeX$ once and for all in your youth, you are wrong. Every non-trivial language needs constant learning and practice.

The Heisenberg group is a remarkably simple mathematical object, with interesting algebraic, geometric, and probabilistic aspects. It is available in two flavors: discrete and continuous. The (continuous) Heisenberg group ${\mathbb{H}}$ is formed by the real ${3\times 3}$ matrices of the form

$\begin{pmatrix} 1 & x & z \\ 0 & 1 & y \\ 0 & 0 & 1 \\ \end{pmatrix}, \quad x,y,z\in\mathbb{R}.$

The Heisenberg group is a non-commutative sub-group of ${\mathrm{GL}_3(\mathbb{R})}$:

$\begin{pmatrix} 1 & x & z \\ 0 & 1 & y \\ 0 & 0 & 1 \\ \end{pmatrix} \begin{pmatrix} 1 & x' & z' \\ 0 & 1 & y' \\ 0 & 0 & 1 \\ \end{pmatrix} = \begin{pmatrix} 1 & x+x' & z+z'+xy' \\ 0 & 1 & y+y' \\ 0 & 0 & 1 \\ \end{pmatrix}$

The inverse is given by

$\begin{pmatrix} 1 & x & z \\ 0 & 1 & y \\ 0 & 0 & 1 \\ \end{pmatrix}^{-1} = \begin{pmatrix} 1 & -x & -z+xy \\ 0 & 1 & -y \\ 0 & 0 & 1 \\ \end{pmatrix}.$
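These two formulas are easy to check numerically; a sketch (the coordinates below are arbitrary illustrations):

```python
import numpy as np

def heis(x, y, z):
    """Upper triangular matrix representation of an element of H."""
    return np.array([[1.0, x, z], [0.0, 1.0, y], [0.0, 0.0, 1.0]])

x, y, z = 1.0, 2.0, 3.0
xp, yp, zp = 4.0, 5.0, 6.0

# group law: the (1,3) entry of the product is z + z' + x*y'
prod = heis(x, y, z) @ heis(xp, yp, zp)
assert np.allclose(prod, heis(x + xp, y + yp, z + zp + x * yp))

# inverse: coordinates (-x, -y, -z + x*y)
assert np.allclose(np.linalg.inv(heis(x, y, z)), heis(-x, -y, -z + x * y))
```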

(the discrete Heisenberg group is the discrete sub-group of ${\mathbb{H}}$ formed by the elements of ${\mathbb{H}}$ with integer coordinates). The Heisenberg group ${\mathbb{H}}$ is a Lie group. Its Lie algebra ${\mathfrak{H}}$ is the sub-algebra of ${\mathcal{M}_3(\mathbb{R})}$ given by the ${3\times 3}$ real matrices of the form

$\begin{pmatrix} 0 & a & c \\ 0 & 0 & b \\ 0 & 0 & 0 \\ \end{pmatrix}, \quad a,b,c\in\mathbb{R}$

The exponential map ${\exp:A\in\mathfrak{H}\mapsto\exp(A)\in\mathbb{H}}$ is a diffeomorphism. This allows one to identify the group ${\mathbb{H}}$ with the algebra ${\mathfrak{H}}$. Let us define

$X= \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \\ \end{pmatrix}, \quad Y= \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \\ \end{pmatrix}, \quad\text{and}\quad Z= \begin{pmatrix} 0 & 0 & 1 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \\ \end{pmatrix}.$

We have then

$[X,Y]=XY-YX=Z\quad\text{and}\quad [X,Z]=[Y,Z]=0.$

The Lie algebra ${\mathfrak{H}}$ is nilpotent of order ${2}$:

$\mathfrak{H}=\mathrm{span}(X,Y)\oplus\mathrm{span}(Z).$

This makes the Baker-Campbell-Hausdorff formula on ${\mathfrak{H}}$ particularly simple:

$\exp(A)\exp(B)=\exp\left(A+B+\frac{1}{2}[A,B]\right).$
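Both the commutation relations and this truncated Baker-Campbell-Hausdorff formula can be verified directly: every element of ${\mathfrak{H}}$ is nilpotent, so its exponential series is a finite sum. A sketch with arbitrary illustrative coefficients:

```python
import numpy as np

X = np.array([[0, 1, 0], [0, 0, 0], [0, 0, 0]], dtype=float)
Y = np.array([[0, 0, 0], [0, 0, 1], [0, 0, 0]], dtype=float)
Z = np.array([[0, 0, 1], [0, 0, 0], [0, 0, 0]], dtype=float)

def bracket(a, b):
    """Lie bracket [a, b] = ab - ba."""
    return a @ b - b @ a

assert np.array_equal(bracket(X, Y), Z)
assert np.array_equal(bracket(X, Z), np.zeros((3, 3)))
assert np.array_equal(bracket(Y, Z), np.zeros((3, 3)))

def expm(n):
    """Exact exponential of a strictly upper triangular 3x3 matrix:
    n^3 = 0, so the series stops at the quadratic term."""
    return np.eye(3) + n + n @ n / 2

# Baker-Campbell-Hausdorff, exact here since the algebra is nilpotent of order 2
A = 1.0 * X + 2.0 * Y + 3.0 * Z   # arbitrary coefficients
B = 4.0 * X + 5.0 * Y + 6.0 * Z
assert np.allclose(expm(A) @ expm(B), expm(A + B + bracket(A, B) / 2))
```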

The names Heisenberg group and Heisenberg algebra come from the fact that in quantum physics, and following Werner Heisenberg and Hermann Weyl, the algebra generated by the position operator and the momentum operator is exactly ${\mathfrak{H}}$. The identification of ${\mathbb{H}}$ with ${\mathfrak{H}}$

$\begin{pmatrix} 1 & x & z+\frac{xy}{2} \\ 0 & 1 & y \\ 0 & 0 & 1 \\ \end{pmatrix} = \exp \begin{pmatrix} 0 & x & z \\ 0 & 0 & y \\ 0 & 0 & 0 \\ \end{pmatrix} =\exp(xX+yY+zZ)$

allows one to identify ${\mathbb{H}}$ with ${\mathbb{R}^3}$ equipped with the group structure

$(x,y,z)(x',y',z')=(x+x',y+y',z+z'+\frac{1}{2}(xy'-yx'))$

and

$(x,y,z)^{-1}=(-x,-y,-z).$

These are the exponential coordinates on ${\mathbb{H}}$. Geometrically, the quantity ${\frac{1}{2}(xy'-yx')}$ is the algebraic area in ${\mathbb{R}^2}$ between the piecewise linear path

$[(0,0),(x,y)]\cup[(x,y),(x+x’,y+y’)]$

and its chord

$[(0,0),(x+x’,y+y’)].$

This area is zero if ${(x,y)}$ and ${(x',y')}$ are collinear. The group product encodes the sum of increments in ${\mathbb{R}^2}$ and automatically computes the generated area.

The Heisenberg group ${\mathbb{H}}$ is homeomorphic to ${\mathbb{R}^3}$ and the Lebesgue measure on ${\mathbb{R}^3}$ is a Haar measure on ${\mathbb{H}}$. However, as a manifold, its geometry is sub-Riemannian: the space of admissible (horizontal) directions at the origin, and thus, by translation, everywhere, is of dimension ${2}$ instead of ${3}$, putting a constraint on the geodesics (since no vertical speed vector is available, some of them are helices instead of straight lines). The Heisenberg group ${\mathbb{H}}$ is also a metric space for the so-called Carnot-Carathéodory sub-Riemannian distance. The Heisenberg group is a Carnot group. Its Hausdorff dimension with respect to the Carnot-Carathéodory metric is ${4}$, in contrast with its dimension as a topological manifold, which is ${3}$.

The dilation semigroup of automorphisms ${(\mathrm{dil}_t)_{t\geq0}}$ on ${\mathbb{H}}$ is defined by

$\mathrm{dil}_t \exp \begin{pmatrix} 0 & x & z \\ 0 & 0 & y \\ 0 & 0 & 0 \\ \end{pmatrix} = \exp \begin{pmatrix} 0 & tx & t^2z \\ 0 & 0 & ty \\ 0 & 0 & 0 \\ \end{pmatrix} .$

Let ${(x_n,y_n)_{n\geq1}}$ be the i.i.d. increments of the simple random walk on ${\mathbb{Z}^2}$ starting from the origin. If one embeds ${\mathbb{Z}^2}$ into ${\mathbb{H}}$ by ${(x,y)\mapsto xX+yY}$ then one can consider the position at time ${n}$ in the group by taking the product of the increments in the group:

$\begin{array}{rcl} S_n=(x_1,y_1)\cdots(x_n,y_n) &=&(s_{n,1},s_{n,2},s_{n,3}) \\ &=&(x_1+\cdots+x_n,y_1+\cdots+y_n,s_{n,3}). \end{array}$

These increments commute in the first two coordinates (called the horizontal coordinates) and do not commute in the third coordinate. The first two coordinates of ${S_n}$ form the position in ${\mathbb{Z}^2}$ of the random walk, while the third coordinate is exactly the algebraic area between the random walk path and its chord on the time interval ${[0,n]}$. We are now able to state the Central Limit Theorem on the Heisenberg group:

$\left(\mathrm{dil}_{1/\sqrt{n}}(S_{\lfloor nt\rfloor})\right)_{t\geq0} \quad \underset{n\rightarrow\infty}{\overset{\text{law}}{\longrightarrow}} \quad \left(B_t,A_t\right)_{t\geq0}$

where ${(B_t)_{t\geq0}}$ is a standard Brownian motion on ${\mathbb{R}^2}$ and where ${(A_t)_{t\geq0}}$ is its Lévy area (the algebraic area between the Brownian path and its chord, seen as a stochastic integral). The stochastic process ${(\mathbb{B}_t)_{t\geq0}=((B_t,A_t))_{t\geq0}}$ is the sub-Riemannian Brownian motion on ${\mathbb{H}}$.

$\begin{array}{rcl} \mathbb{B}_t &=& (B_t,A_t) \\ &=&(B_{t,1},B_{t,2},A_t) \\ &=& \exp \begin{pmatrix} 0 & B_{t,1} & \frac{1}{2}\left(\int_0^t\!B_{s,1}dB_{s,2}-\int_0^t\!B_{s,2}dB_{s,1}\right) \\ 0 & 0 & B_{t,2} \\ 0 & 0 & 0 \end{pmatrix} \\ &=& \begin{pmatrix} 1 & B_{t,1} & \int_0^t\!B_{s,1}dB_{s,2} \\ 0 &1 & B_{t,2} \\ 0 &0 &1 \end{pmatrix}. \end{array}$
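The discrete identity underlying this central limit theorem, namely that the third coordinate of ${S_n}$ is the algebraic area between the walk and its chord, can be checked numerically; a sketch in exponential coordinates (step count and seed are arbitrary):

```python
import numpy as np

def heis_mul(g, h):
    """Group law in exponential coordinates:
    (x,y,z)(x',y',z') = (x+x', y+y', z+z' + (x y' - y x')/2)."""
    x, y, z = g
    xp, yp, zp = h
    return (x + xp, y + yp, z + zp + (x * yp - y * xp) / 2)

rng = np.random.default_rng(42)
steps = [[(1, 0), (-1, 0), (0, 1), (0, -1)][i]
         for i in rng.integers(0, 4, size=1000)]

# product of the increments in the group, starting from the identity
S = (0.0, 0.0, 0.0)
for u, v in steps:
    S = heis_mul(S, (u, v, 0.0))

# the third coordinate is the algebraic area between the walk and its chord,
# recomputed here by the shoelace formula on the closed polygon (path + chord)
pts = np.cumsum(np.array([(0, 0)] + steps), axis=0)
shoelace = np.sum(pts[:-1, 0] * pts[1:, 1] - pts[:-1, 1] * pts[1:, 0]) / 2
assert np.isclose(S[2], shoelace)
assert S[0] == pts[-1, 0] and S[1] == pts[-1, 1]   # horizontal coordinates
```

For instance, the four steps of a counterclockwise unit square give a final group element ${(0,0,1)}$: the walk returns to the origin having swept a unit area.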

The process ${(\mathbb{B}_t)_{t\geq0}}$ has independent and stationary (non-commutative) increments and belongs to the class of Lévy processes associated with (non-commutative) convolution semigroups on ${\mathbb{H}}$. The law of ${\mathbb{B}_t}$ is infinitely divisible and may be seen as a sort of Gaussian measure on ${\mathbb{H}}$. The process ${(\mathbb{B}_t)_{t\geq0}}$ is also a Markov diffusion process on ${\mathbb{R}^3}$ admitting the Lebesgue measure as an invariant reversible measure, and with infinitesimal generator

$L=\frac{1}{2}(V_1^2+V_2^2)=\frac{1}{2}\left((\partial_x-\frac{1}{2}y\partial_z)^2+(\partial_y+\frac{1}{2}x\partial_z)^2\right).$

We have ${V_3:=[V_1,V_2]=\partial_z}$ and ${[V_1,V_3]=[V_2,V_3]=0}$. The linear differential operator ${L}$ on ${\mathbb{R}^3}$ is hypoelliptic but is not elliptic. It is called the sub-Laplacian on ${\mathbb{H}}$. A formula for its heat kernel was computed by Lévy using Fourier analysis (it is an oscillatory integral).
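The key commutation relation ${[V_1,V_2]=\partial_z}$, which recovers the missing vertical direction and lies behind hypoellipticity, can be verified symbolically; a sketch assuming the sympy library:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
f = sp.Function('f')(x, y, z)

# the two horizontal vector fields of the sub-Laplacian L = (V1^2 + V2^2)/2
V1 = lambda g: sp.diff(g, x) - y / 2 * sp.diff(g, z)
V2 = lambda g: sp.diff(g, y) + x / 2 * sp.diff(g, z)

# [V1, V2] f = d f / dz : the bracket generates the vertical direction
commutator = sp.simplify(V1(V2(f)) - V2(V1(f)))
assert sp.simplify(commutator - sp.diff(f, z)) == 0
```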

Note that ${L}$ acts like the two dimensional Laplacian on functions depending only on ${x,y}$. Note also that the Riemannian Laplacian on ${\mathbb{H}}$ is given by

$L+\frac{1}{2}V_3^2=\frac{1}{2}\left(V_1^2+V_2^2+V_3^2\right) =\frac{1}{2}\left((\partial_x-\frac{1}{2}y\partial_z)^2+(\partial_y+\frac{1}{2}x\partial_z)^2+(\partial_z)^2\right).$

Open question. Use the CLT to obtain a sub-Riemannian Poincaré inequality or even a logarithmic Sobolev inequality on ${\mathbb{H}}$ for the heat kernel. It is tempting to try to adapt to the sub-Riemannian case the strategy used by L. Gross (for Riemannian Lie groups). This question is naturally connected to my previous work on gradient bounds for the heat kernel on the Heisenberg group, in collaboration with D. Bakry, F. Baudoin, and M. Bonnefont.

