This post is about the recent work arXiv:2405.00120, jointly with Ryan Matzke, Edward Saff, Minh Quan Vu, and Robert Womersley. It belongs to classical analysis and potential theory. More precisely, we study Riesz energy problems with radial external fields on the full Euclidean space. In particular, when the external field is a power of the Euclidean norm, we completely characterize the values of the power for which dimension reduction occurs, in the sense that the support of the equilibrium measure becomes a sphere.

**Riesz kernel.** The Riesz kernel of parameter $s\in(-2,\infty)$ in dimension $d\geq1$ is defined by

\[

K(x):=

\begin{cases}

\displaystyle\frac{1}{s\|x\|^s} & \text{if $s\neq0$}\\[1.5em]

\displaystyle\log\frac{1}{\|x\|} & \text{if $s=0$}

\end{cases},

\] for $x\in\mathbb{R}^d$, $x\neq0$. The special case $s=d-2$ is the Coulomb or Newton kernel.
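As a toy illustration (not from the paper), here is this kernel in code, together with a numerical check that the $1/s$ normalization makes the logarithmic case a limit of the others, up to the additive constant $1/s$:

```python
import math

def riesz_kernel(r, s):
    """Riesz kernel K(x) as a function of r = ||x|| > 0, for parameter s > -2."""
    if s == 0:
        return math.log(1/r)
    return 1/(s*r**s)

# The 1/s normalization makes the log case a limit up to an additive constant:
# (r^{-s} - 1)/s -> log(1/r) as s -> 0, i.e. K_s(x) - 1/s -> K_0(x).
r, s = 1.7, 1e-6
assert abs(riesz_kernel(r, s) - 1/s - math.log(1/r)) < 1e-5
```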

**Riesz energy with external field.** For a probability measure $\mu$ on $\mathbb{R}^d$, we define

\[

\mathrm{I}(\mu):=\iint(K(x-y)+V(x)+V(y))\mathrm{d}\mu(x)\mathrm{d}\mu(y).

\] We focus on the situation where the external field $V$ is a power:

\[

V(x):=\frac{\gamma}{\alpha}\|x\|^\alpha,\quad\gamma>0,\ \alpha>\max(-s,0).

\]

**Equilibrium measure.** The functional $\mathrm{I}$ is strictly convex, and there exists a unique probability measure $\mu$, called the equilibrium measure, denoted $\mu_{\mathrm{eq}}$, such that

\[

\mathrm{I}(\mu_{\mathrm{eq}})=\min_{\mu}\mathrm{I}(\mu)<\infty.
\] It is radially symmetric and has compact support.

**Threshold phenomenon.** Let $\sigma_R$ be the uniform probability measure on the sphere of radius $R$. Our main discovery is that if $s\in(-2,d-3)$ then there exists a critical value $\alpha_*$ such that

\[

\mu_{\mathrm{eq}}=\sigma_R\text{ for some }R\quad\text{ if and only if }\quad\alpha\geq\alpha_*.

\] Moreover $\alpha_*$ and $R$ are explicit functions of $(s,d)$ and $(s,d,\alpha)$ respectively:

\begin{align*}

\alpha_*

&= \max\left\{ \dfrac{sc}{2-2c}, \ 2- \dfrac{(s+2)(d-s-4)}{2(d-s-3)}\right\}\\

R

&= \left( \frac{c}{2 \gamma} \right)^{\frac{1}{\alpha + s}}

= \left( \frac{\Gamma(\frac{d}{2}) \Gamma(d-s-1)}{ 2 \gamma \Gamma( \frac{d-s}{2}) \Gamma(d - \frac{s}{2}-1)}

\right)^{\frac{1}{\alpha + s}}

\end{align*} where $c := {}_2\mathrm{F}_1\Bigl(\frac{s}{2}, \frac{2+s-d}{2}; \frac{d}{2}; 1\Bigr)

=\frac{\Gamma(\frac{d}{2})\Gamma(d -s-1)}

{\Gamma(\frac{d-s}{2})\Gamma(d-\frac{s}{2}-1)}$. The proof involves the Frostman characterization, the Funk-Hecke formula, the calculus of hypergeometric functions, and some black magic.

The active bound in $\alpha_*$ changes at $s=d-4$, where $\alpha_*=2$. The properties of $c$ ensure that $\alpha_*\geq\max(-s,0)$ and that $\alpha_*$ is continuous at $s=0$. More can be found in arXiv:2405.00120.
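As a quick numerical sanity check of these closed forms (a script of mine, not part of the paper), one can recompute $c$ by summing the Gauss hypergeometric series directly and verify that $\alpha_*=2$ at $s=d-4$:

```python
from math import gamma

def hyp2f1_at_1(a, b, c, terms=20000):
    """Truncated Gauss series for 2F1(a, b; c; 1); converges when c - a - b > 0."""
    total, term = 0.0, 1.0
    for k in range(terms):
        total += term
        term *= (a + k)*(b + k)/((c + k)*(k + 1))
    return total

def c_const(s, d):
    """The constant c, in closed form via the Gauss summation theorem."""
    return gamma(d/2)*gamma(d - s - 1)/(gamma((d - s)/2)*gamma(d - s/2 - 1))

def alpha_star(s, d):
    """Critical exponent alpha_* for s in (-2, d-3)."""
    c = c_const(s, d)
    return max(s*c/(2 - 2*c), 2 - (s + 2)*(d - s - 4)/(2*(d - s - 3)))

def radius(s, d, alpha, gam):
    """Radius R of the supporting sphere when alpha >= alpha_*."""
    return (c_const(s, d)/(2*gam))**(1/(alpha + s))

s, d = 0.5, 5
assert abs(c_const(s, d) - hyp2f1_at_1(s/2, (2 + s - d)/2, d/2)) < 1e-8
assert abs(alpha_star(1.0, 5) - 2.0) < 1e-12   # s = d - 4 gives alpha_* = 2
```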

**Riesz brothers.** Frigyes Riesz (1880 — 1956) and Marcel Riesz (1886 — 1969) were brothers. Frigyes is one of the founders of functional analysis, and Alfréd Rényi was one of his doctoral students. Marcel is one of the developers of potential theory and harmonic analysis, among other fields; Harald Cramér, Otto Frostman, Lars Hörmander, and Olof Thorin were among his doctoral students. The Riesz brothers have a unique joint work, on analytic measures.

Marcel Riesz was born in Györ, Hungary, November 16, 1886 and died in Lund, Sweden, on September 4, 1969. He studied in Budapest, Göttingen and Paris. In 1911 Mittag-Leffler invited him to come to Sweden where he taught at Stockholms Högskola. In 1926 he was appointed professor of mathematics at the university of Lund. After retiring from this position in 1952 he went to the United States where he was visiting research professor at the universities of Maryland and Chicago and other places. He returned to Lund in 1962.

…

Marcel Riesz was the youngest member of a generation of brilliant Hungarian mathematicians that included among others Leopold Fejér, Riesz’s elder brother Frederick Riesz and Alfred Haar. His first paper (1906) is an exposition in Hungarian of a subject of current interest at the time, namely various summation methods for Taylor series of analytic functions. One of these methods, due to Mittag-Leffler, sums the series in a starshaped region bounded by singular points, the Mittag-Leffler star. Common interest in these matters was the beginning of the association with Mittag-Leffler. They seem to have met for the first time in Stockholm in 1908.

…

Riesz’s work after he moved to Lund marks a break with the past. He acquired new interests, starting work in potential theory and wave propagation including Dirac’s equation of the electron and relativity theory. He also took a continuing interest in elementary number theory. His most important contributions are in potential theory and wave propagation. In both cases he invented new multi-dimensional analogues of the Riemann-Liouville integral.

…

Riesz wrote clearly and well and paid much attention to form. His favourite language was French and his style, steeped in the classical tradition, sometimes borders on the precious. Mathematical research always involves competition for fame and a place in the hierarchy, but he made it seem a gentleman’s game. He was of course no stranger to ambition and had to assert himself both in Sweden and in the cosmopolitan world he came from. He admired his illustrious brother Frederick and they had cordial relations. They wrote one paper together, but otherwise there is a clear distance in content between their work, perhaps a result of a conscious effort on Marcel’s part. Seen together they had much physical resemblance but very different temperaments, Frederick calm with great poise and Marcel quick and restless in comparison. Marcel Riesz knew an astonishing number of mathematicians and over the years made and kept many friends among them.

…

Mittag-Leffler had made Stockholms Högskola a center of mathematical research. It had a peak of activity before and around the turn of the century and its other mathematicians, Bendixson, von Koch and Fredholm were famous names. Some ten years later, however, their scientific activity was on the wane for various reasons. Riesz filled a mathematical vacuum. He learnt Swedish quickly and he was very active in the local mathematical society where he soon became the dominating figure. He was lively, accessible, an enthusiastic teacher and a good lecturer with a thorough knowledge of his field. His charming expository lecture from 1913, written in Swedish, has a distinct personal touch reflecting these qualities. In 1923, Riesz lost a competition for a chair in Lund to Carleman. Shortly afterwards von Koch died and Bendixson, Fredholm and Phragmén made a move to appoint Riesz as von Koch’s successor. The move failed and the call went to Carleman. Shortly afterwards, Riesz got a position in Lund. At least in the beginning, he must have felt his stay in Lund as an exile. He had been very successful in Stockholm where, among others, Harald Cramér and Einar Hille had been his personal students. Lund did not have much of a mathematical tradition but Riesz’s arrival meant a change of atmosphere. He was now an international star, active with his own research and he also had the time and incentive to broaden his interests. Frostman’s thesis was a success and there were others after him. Lars Hörmander was one of Riesz’s last personal students in Sweden. Riesz’s work on fractional potentials was the origin of the contributions from Lund to the theory of partial differential operators.

…

Excerpt from Marcel Riesz in memoriam by Lars Gårding, in Acta Math. 124 (1970)

…Lars Gårding invited me to spend three days in Lund, one of the best mathematics departments in Sweden. He initiated me to the very important work of the Soviet mathematician Petrowski; “Petrowski is my God, and I am his prophet,” he said laughing. I returned to Lund several times later on, and in 1981 I was awarded a doctorate honoris causa there. I had many conversations with Lars Gårding, a devotee of distributions, and with Marcel Riesz. Marcel and Frederic Riesz were two remarkable Hungarian mathematicians: Frederic was tall and thin, Marcel small and fat (Laurel and Hardy). Frederic lived in Hungary, but Marcel was in Lund. He discovered results on partial differential equations, in particular the solution of hyperbolic equations of non-integral order, via symbolic calculation on Hadamard’s finite parts. This problem was adapted to the use of distributions, so he was interested in my talks. Marcel Riesz was a solid drinker, occasionally pushing the limits. One night, we stayed in his house until two in the morning, with a glass of liquor in front of each of us. His glass kept refilling itself. I didn’t touch mine, so every time he looked at it, he saw it was full and thought I had served myself. Later he told Gårding that I was an “excellent drinker”. I remained very close to them. When Riesz came to dinner at our house, we served a chocolate cake, decorated with the wave equation written in white cream – he was enchanted! The young Lars Hörmander was at that time still in high school. Some years later, this future Fields Medalist, trained by Gårding, stood out at the university and became one of the world specialists in partial differential equations. He profited indirectly from my stay, because he made extensive use of distributions. We also became close friends.

…

Excerpt from A Mathematician Grappling with His Century, Birkhäuser 2001, by Laurent Schwartz. This last excerpt was suggested to me by my old friend Arnaud Guyader.

**Further reading on this blog.**

- Mellin transform and Riesz potentials (2022-08-26)
- Unexpected phenomena for equilibrium measures (2022-06-27)
- The Funk-Hecke formula (2021-05-22)
- Equilibrium measures and obstacle problems (2022-12-11)

**Fourier transform.** Recall the formula

\[

\int_{\mathbb{R}}\mathrm{e}^{\mathrm{i}tx}\tfrac{1}{\sqrt{2\pi\sigma^2}}\mathrm{e}^{-\frac{x^2}{2\sigma^2}}\mathrm{d}x

=\mathrm{e}^{-\frac{\sigma^2 t^2}{2}}.

\] A good way to remember that the dispersion parameter $\sigma^2$ is in the numerator on the right-hand side is to have in mind the uncertainty principle: if the signal is localized then its Fourier transform is not, and vice versa. In case of doubt regarding the sign in the exponent on the right-hand side, remember that the standard Gaussian is an eigenfunction of the Fourier transform.
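This identity is easy to check numerically; the following sketch approximates the integral by a Riemann sum on a truncated grid (the truncation level and grid size are arbitrary choices of mine):

```python
import numpy as np

# Numerical check of the Gaussian Fourier identity above: the characteristic
# function of N(0, sigma^2) at frequency t equals exp(-sigma^2 t^2 / 2).
sigma, t = 1.7, 0.9
x = np.linspace(-12*sigma, 12*sigma, 200001)
dx = x[1] - x[0]
density = np.exp(-x**2/(2*sigma**2))/np.sqrt(2*np.pi*sigma**2)
lhs = (np.exp(1j*t*x)*density).sum()*dx      # Riemann sum of the integral
rhs = np.exp(-sigma**2*t**2/2)
assert abs(lhs - rhs) < 1e-9
```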

**Moments.** The simplest way to compute the even moments of the Gaussian is neither integration by parts nor the Fourier transform: just take the $n$-th derivative with respect to $\beta$ of

\[

\int_{\mathbb{R}}\mathrm{e}^{-\beta x^2}\mathrm{d}x=\sqrt{\frac{\pi}{\beta}}

\] which gives

\[

(-1)^n\int_{\mathbb{R}}x^{2n}\mathrm{e}^{-\beta x^2}\mathrm{d}x

=(-1)^n\sqrt{\frac{\pi}{\beta^{2n+1}}}\prod_{k=1}^{n}\Bigr(\frac{1}{2}+k-1\Bigr),

\] in other words, with $\beta=\frac{1}{2\sigma^2}$,

\[

\int_{\mathbb{R}} x^{2n}\tfrac{1}{\sqrt{2\pi\sigma^2}}\mathrm{e}^{-\frac{x^2}{2\sigma^2}}\mathrm{d}x

=\sigma^{2n}\prod_{k=1}^n(2k-1)=\sigma^{2n}(2n-1)!!.

\] This trick also works for the Gamma distribution and is an instance of the energy trick in statistical physics: in the same spirit, if $\lambda$ is some reference measure on some space and

\[

Z:=\int\mathrm{e}^{-\beta V(x)}\mathrm{d}\lambda(x),

\] then, denoting by $X$ a random variable with density $\tfrac{1}{Z} \mathrm{e}^{-\beta V}$ with respect to $\lambda$,

\[

\partial_\beta\log Z=-\mathbb{E}(V(X))

\quad\text{while}\quad

\partial^2_\beta\log Z=\mathrm{Var}(V(X)).

\]
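Both facts can be checked numerically, the moments by quadrature and the energy trick by finite differences on an arbitrary four-state toy energy (a quick sketch, with arbitrary numerical choices):

```python
import numpy as np

# Check of the even Gaussian moments E[X^{2n}] = sigma^{2n} (2n-1)!!,
# and of the energy trick d/dbeta log Z = -E[V(X)] on a finite state space.
sigma = 0.8
x = np.linspace(-14*sigma, 14*sigma, 400001)
dx = x[1] - x[0]
density = np.exp(-x**2/(2*sigma**2))/np.sqrt(2*np.pi*sigma**2)
moments_ok = all(
    abs((x**(2*n)*density).sum()*dx - sigma**(2*n)*np.prod(np.arange(1, 2*n, 2))) < 1e-6
    for n in range(1, 5)   # np.arange(1, 2n, 2) enumerates 1, 3, ..., 2n-1
)
assert moments_ok

V = np.array([0.3, 1.2, 2.0, 0.7])           # an arbitrary energy on 4 states
beta, h = 1.5, 1e-5
logZ = lambda b: np.log(np.exp(-b*V).sum())
p = np.exp(-beta*V)
p /= p.sum()
# first derivative of log Z is minus the mean energy...
assert abs((logZ(beta + h) - logZ(beta - h))/(2*h) + (p*V).sum()) < 1e-8
# ...and the second derivative is the variance of the energy
var_V = (p*V**2).sum() - ((p*V).sum())**2
assert abs((logZ(beta + h) - 2*logZ(beta) + logZ(beta - h))/h**2 - var_V) < 1e-4
```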

Why and how does entropy emerge in basic mathematics? This tiny post aims to provide some answers. We already tried something in this spirit in a previous post, almost ten years ago.

**Combinatorics.** Asymptotic analysis of the multinomial coefficient $\binom{n}{n_1,\ldots,n_r}:=\frac{n!}{n_1!\cdots n_r!}$:

\[

\frac{1}{n}\log\binom{n}{n_1,\ldots,n_r}

\xrightarrow[n=n_1+\cdots+n_r\to\infty]{\nu_i=\frac{n_i}{n}\to p_i}

\mathrm{S}(p):=-\sum_{i=1}^rp_i\log(p_i).

\] Recall that if $A=\{a_1,\ldots,a_r\}$ is a finite set of cardinality $r$ and $n=n_1+\cdots+n_r$ then

\[

\mathrm{Card}\Bigr\{(x_1,\ldots,x_n)\in A^n:\forall 1\leq i\leq r,\sum_{k=1}^n\mathbf{1}_{x_k=a_i}=n_i\Bigr\}=\binom{n}{n_1,\ldots,n_r}.

\] The multinomial coefficient can be interpreted as the number of microstates $(x_1,\ldots,x_n)$ compatible with the macrostate $(n_1,\ldots,n_r)$, while the quantity $\mathrm{S}(p)$ appears as a normalized asymptotic measure of additive degrees of freedom or disorder. This already appears in the work of Ludwig Eduard Boltzmann (1844 — 1906) on kinetic gas theory, at the origins of statistical physics. The quantity $\mathrm{S}(p)$ is also the one used by Claude Elwood Shannon (1916 — 2001) in information and communication theory as the average length of optimal lossless coding. It is characterized by the following three natural axioms or properties, writing $\mathrm{S}^{(n)}$ to keep track of $n$:

- for all $n\geq1$, $p\mapsto\mathrm{S}^{(n)}(p)$ is continuous
- for all $n\geq1$, $\mathrm{S}^{(n)}(\frac{1}{n},\ldots,\frac{1}{n})<\mathrm{S}^{(n+1)}(\frac{1}{n+1},\ldots,\frac{1}{n+1})$
- for all $n=n_1+\cdots+n_r\geq1$, $\mathrm{S}^{(n)}(\frac{1}{n},\ldots,\frac{1}{n})=\mathrm{S}^{(r)}(\frac{n_1}{n},\ldots,\frac{n_r}{n})+\sum_{i=1}^r\frac{n_i}{n}\mathrm{S}^{(n_i)}(\frac{1}{n_i},\ldots,\frac{1}{n_i}).$
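The convergence of the normalized log-multinomial to $\mathrm{S}(p)$ can be watched numerically via the log-gamma function (a quick sketch with an arbitrary choice of $p$):

```python
import math

# Check that (1/n) log of the multinomial coefficient approaches S(p)
# when n_i / n -> p_i (a consequence of the Stirling formula).
def log_multinomial(counts):
    n = sum(counts)
    return math.lgamma(n + 1) - sum(math.lgamma(k + 1) for k in counts)

def S(p):
    return -sum(q*math.log(q) for q in p if q > 0)

p = [0.5, 0.3, 0.2]
gaps = []
for n in [100, 10000, 1000000]:
    counts = [round(n*q) for q in p]      # n_i with n_i/n = p_i exactly here
    gaps.append(abs(log_multinomial(counts)/n - S(p)))
assert gaps[0] > gaps[1] > gaps[2]        # the gap shrinks as n grows
assert gaps[-1] < 1e-4
```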

**Probability.** If $X_1,\ldots,X_n$ are independent and identically distributed random variables of law $\mu$ on a finite set or alphabet $A=\{a_1,\ldots,a_r\}$, then for all $x_1,\ldots,x_n\in A$,

\begin{align*}

\mathbb{P}((X_1,\ldots,X_n)=(x_1,\ldots,x_n))

&=\prod_{i=1}^r\mu_i^{\sum_{k=1}^n\mathbf{1}_{x_k=a_i}}

=\prod_{i=1}^r\mu_i^{n\nu_i}

=\mathrm{e}^{n\sum_{i=1}^r\nu_i\log\mu_i}\\

&=\mathrm{e}^{-n(\mathrm{S}(\nu)+\mathrm{H}(\nu\mid\mu))},

\end{align*} a remarkable identity where $\nu_i:=\frac{1}{n}\sum_{k=1}^n\mathbf{1}_{x_k=a_i}$ is the empirical frequency of $a_i$, $\mathrm{S}(\nu)$ is the Boltzmann-Shannon entropy considered before, and $\mathrm{H}(\nu\mid\mu)$ is a new quantity known as the Kullback-Leibler divergence or relative entropy:

\[

\mathrm{S}(\nu):=-\sum_{i=1}^r\nu_i\log\nu_i=-\int f(x)\log f(x)\mathrm{d}x

\] where $f$ is the density of $\nu$ with respect to the counting measure $\mathrm{d}x$, and

\[

\mathrm{H}(\nu\mid\mu):=\sum_{i=1}^r\nu_i\log\frac{\nu_i}{\mu_i}

=\sum_{i=1}^r\frac{\nu_i}{\mu_i}\log\frac{\nu_i}{\mu_i}\mu_i

=\int\frac{\mathrm{d}\nu}{\mathrm{d}\mu}\log\frac{\mathrm{d}\nu}{\mathrm{d}\mu}\mathrm{d}\mu.

\] This comes from information theory after Solomon Kullback (1907 – 1994) and Richard Leibler (1914 — 2003). Here $\mathrm{S}(\nu)$ measures the combinatorics on $x_1,\ldots,x_n$ at prescribed frequencies $\nu$, while $\mathrm{H}(\nu\mid\mu)$ measures the cost or energy of deviation from the actual distribution $\mu$. This is a Boltzmann–Gibbsfication of the probability $\mathbb{P}((X_1,\ldots,X_n)=(x_1,\ldots,x_n))$, see below, leading via the Laplace method to the large deviations principle of Ivan Nikolaevich Sanov (1919 — 1968). The Jensen inequality for the strictly convex function $u\mapsto u\log(u)$ gives

\[

\mathrm{H}(\nu\mid\mu)\geq0\quad\text{with equality iff}\quad\nu=\mu.

\]
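The identity above is exact for every word, not just asymptotically; here is a toy check on a three-letter alphabet (the measure $\mu$ and the sample word are arbitrary):

```python
import math
from collections import Counter

# Check of P((X_1,...,X_n) = (x_1,...,x_n)) = exp(-n(S(nu) + H(nu|mu))),
# where nu is the empirical frequency of the word x_1...x_n.
mu = {"a": 0.5, "b": 0.3, "c": 0.2}
word = list("aabbbcabba")                 # an arbitrary sample word
n = len(word)
counts = Counter(word)
nu = {a: counts[a]/n for a in mu}

log_prob = sum(math.log(mu[x]) for x in word)
S = -sum(q*math.log(q) for q in nu.values() if q > 0)
H = sum(q*math.log(q/mu[a]) for a, q in nu.items() if q > 0)
assert abs(log_prob + n*(S + H)) < 1e-12
```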

**Statistics.** If $Y_1,\ldots,Y_n$ are independent and identically distributed random variables of law $\mu^{(\theta)}$ in a parametric family indexed by $\theta$, on a finite set $A$, then, following Ronald Aylmer Fisher (1890 – 1962), the likelihood of the data $(x_1,\ldots,x_n)\in A^n$ is

\[

\ell_{x_1,\ldots,x_n}(\theta):=\mathbb{P}(Y_1=x_1,\ldots,Y_n=x_n)

=\prod_{i=1}^n\mu^{(\theta)}_{x_i}.

\] It can also be seen as the likelihood of $\theta$ with respect to $x_1,\ldots,x_n$. This dual point of view leads to the following: if $X_1,\ldots,X_n$ is an observed sample of $\mu^{(\theta_*)}$ with $\theta_*$ unknown then the maximum likelihood estimator of $\theta_*$ is

\[

\widehat{\theta}_n:=\arg\max_{\theta\in\Theta}\ell_{X_1,\ldots,X_n}(\theta)

=\arg\max_{\theta\in\Theta}\Bigr(\frac{1}{n}\log\ell_{X_1,\ldots,X_n}(\theta)\Bigr).

\] The asymptotic analysis via the law of large numbers reveals entropy as asymptotic contrast

\begin{align*}

\frac{1}{n}\log\ell_{X_1,\ldots,X_n}(\theta)

&=\frac{1}{n}\sum_{i=1}^n\log\mu^{(\theta)}_{X_i}\\&\xrightarrow[n\to\infty]{\mathrm{a.s.}}

\sum_{k=1}^r\mu^{(\theta_*)}_k\log\mu^{(\theta)}_k

=\underbrace{-\mathrm{S}(\mu^{(\theta_*)})}_{\text{const}}-\mathrm{H}(\mu^{(\theta_*)}\mid\mu^{(\theta)}).

\end{align*}
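A Monte Carlo illustration on a hypothetical two-point family $\mu^{(\theta)}=(\theta,1-\theta)$ (the values of $\theta_*$, $\theta$, and the sample size are arbitrary choices of mine):

```python
import math
import random

# Monte Carlo check that (1/n) log-likelihood tends to -S(mu*) - H(mu*|mu_theta),
# on the two-point family mu^(theta) = (theta, 1 - theta), true theta* = 0.3.
random.seed(1)
theta_star, theta, n = 0.3, 0.45, 200000
k = sum(random.random() < theta_star for _ in range(n))   # number of ones
loglik = (k*math.log(theta) + (n - k)*math.log(1 - theta))/n

p, q = [theta_star, 1 - theta_star], [theta, 1 - theta]
S = -sum(pi*math.log(pi) for pi in p)
H = sum(pi*math.log(pi/qi) for pi, qi in zip(p, q))
assert abs(loglik - (-S - H)) < 0.01
assert abs(k/n - theta_star) < 0.01       # the MLE k/n concentrates at theta*
```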

**Analysis.** The entropy appears naturally as a derivative of the $L^p$ norm of $f\geq0$ as follows:

\[

\partial_p\|f\|_p^p

=\partial_p\int f^p\mathrm{d}\mu

=\partial_p\int \mathrm{e}^{p\log(f)}\mathrm{d}\mu

=\int f^p\log(f)\mathrm{d}\mu

=\frac{1}{p}\int f^p\log(f^p)\mathrm{d}\mu.

\] This is at the heart of the Leonard Gross (1931 — ) theorem relating the hypercontractivity of Markov semigroups to the logarithmic Sobolev inequality for the invariant measure. This can also be used to extract from the William Henry Young (1863 – 1942) convolution inequalities certain entropic uncertainty principles.
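A finite-difference sanity check of this derivative formula, on a toy five-point probability space (the function $f$ and the exponent $p$ are arbitrary):

```python
import numpy as np

# Finite-difference check of d/dp ||f||_p^p = int f^p log(f) dmu,
# for a positive f on a 5-point space with uniform probability mu.
f = np.array([0.5, 1.3, 2.0, 0.7, 3.1])
mu = np.full(5, 1/5)
p, h = 2.5, 1e-6
norm_pp = lambda p: (f**p*mu).sum()
lhs = (norm_pp(p + h) - norm_pp(p - h))/(2*h)   # central difference in p
rhs = (f**p*np.log(f)*mu).sum()
assert abs(lhs - rhs) < 1e-7
```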

**Boltzmann-Gibbs measures, variational characterizations, and Helmholtz free energy.** We take $V:A\to\mathbb{R}$, interpreted as an energy. Maximizing $\mu\mapsto\mathrm{S}(\mu)$ under the constraint of average energy $\int V\mathrm{d}\mu=v$ gives the maximizer \[

\mu_\beta

:=\frac{1}{Z_\beta}\mathrm{e}^{-\beta V}\mathrm{d}x

\quad\text{where}\quad

Z_\beta:=\int\mathrm{e}^{-\beta V}\mathrm{d}x.

\] We use integrals instead of sums to lighten notation. The notation $\mathrm{d}x$ stands for the counting measure on $A$. The parameter $\beta>0$, interpreted as inverse temperature, is dictated by $v$. Such a probability distribution $\mu_\beta$ is known as a Boltzmann-Gibbs distribution, after Ludwig Eduard Boltzmann (1844 – 1906) and Josiah Willard Gibbs (1839 – 1903). We have a variational characterization as a maximum entropy at fixed average energy:

\[

\int V\mathrm{d}\mu=\int V\mathrm{d}\mu_\beta

\quad\Rightarrow\quad

\mathrm{S}(\mu_\beta)-\mathrm{S}(\mu)

=\mathrm{H}(\mu\mid\mu_\beta).

\] There is a dual point of view in which instead of fixing the average energy, we fix the inverse temperature $\beta$ and we introduce the Hermann von Helmholtz (1821 – 1894) free energy

\[

\mathrm{F}(\mu):=\int V\mathrm{d}\mu-\frac{1}{\beta}\mathrm{S}(\mu).

\] This can be seen as a Joseph-Louis Lagrange (1736 – 1813) point of view in which the constraint is added to the functional. We have

\[

\mathrm{F}(\mu_\beta)=-\frac{1}{\beta}\log(Z_\beta)

\quad\text{since}\quad

\mathrm{S}(\mu_\beta)=\beta\int V\mathrm{d}\mu_\beta+\log Z_\beta.

\] We have then a new variational characterization as a minimum free energy at fixed temperature:

\[

\mathrm{F}(\mu)-\mathrm{F}(\mu_\beta)=\frac{1}{\beta}\mathrm{H}(\mu\mid\mu_\beta).

\] This explains why $\mathrm{H}$ is often called free energy.
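All three identities of this section are exact on finite state spaces, as a short script confirms (the six-state energy and the competitor measure are arbitrary):

```python
import numpy as np

# Checks, on a finite state space, of the identities above:
# S(mu_beta) = beta*E[V] + log Z_beta, F(mu_beta) = -(1/beta) log Z_beta,
# and F(mu) - F(mu_beta) = (1/beta) H(mu | mu_beta) for any competitor mu.
rng = np.random.default_rng(3)
V = rng.uniform(0.0, 2.0, size=6)         # an arbitrary energy on 6 states
beta = 1.3
w = np.exp(-beta*V)
Z = w.sum()
mu_beta = w/Z

S = lambda p: -(p*np.log(p)).sum()
H = lambda p, q: (p*np.log(p/q)).sum()
F = lambda p: (p*V).sum() - S(p)/beta

assert abs(S(mu_beta) - (beta*(mu_beta*V).sum() + np.log(Z))) < 1e-12
assert abs(F(mu_beta) + np.log(Z)/beta) < 1e-12
mu = rng.dirichlet(np.ones(6))            # an arbitrary probability vector
assert abs(F(mu) - F(mu_beta) - H(mu, mu_beta)/beta) < 1e-12
```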

**Legendre transform.** The relative entropy $\nu\mapsto\mathrm{H}(\nu\mid\mu)$ is the Legendre transform of the log-Laplace transform, in the sense that

\[

\sup_g\Bigr\{\int g\mathrm{d}\nu-\log\int\mathrm{e}^g\mathrm{d}\mu\Bigr\}=\mathrm{H}(\nu\mid\mu).

\] Indeed, for all $h$ such that $\int\mathrm{e}^h\mathrm{d}\mu=1$, by the Jensen inequality, with $f:=\frac{\mathrm{d}\nu}{\mathrm{d}\mu}$,

\begin{align*}

\int h\mathrm{d}\nu

&=\int f\log(f)\mathrm{d}\mu+\int\log\frac{\mathrm{e}^h}{f}f\mathrm{d}\mu\\

&\leq\int f\log(f)\mathrm{d}\mu+\log\int\mathrm{e}^h\mathrm{d}\mu

=\int f\log(f)\mathrm{d}\mu=\mathrm{H}(\nu\mid\mu),

\end{align*} and equality is achieved for $h=\log f$. It remains to reparametrize with $h=g-\log\int\mathrm{e}^g\mathrm{d}\mu$. Conversely, the Legendre transform of the relative entropy is the log-Laplace transform:

\[

\sup_{\nu}

\Bigr\{\int g\mathrm{d}\nu-\mathrm{H}(\nu\mid\mu)\Bigr\}=\log\int\mathrm{e}^g\mathrm{d}\mu.

\] This is an instance of the convex duality for the convex functional $\nu\mapsto\mathrm{H}(\nu\mid\mu)$.

Same story for $-\mathrm{S}$ which is convex as a function of the Lebesgue density of its argument.
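A toy check of this duality on a three-point space, including the attainment of the supremum at $g=\log\frac{\mathrm{d}\nu}{\mathrm{d}\mu}$ (the measures are arbitrary):

```python
import numpy as np

# Duality check: sup_g { int g dnu - log int e^g dmu } = H(nu | mu),
# with the sup attained at g = log(dnu/dmu), on a 3-point space.
mu = np.array([0.2, 0.5, 0.3])
nu = np.array([0.4, 0.4, 0.2])
H = (nu*np.log(nu/mu)).sum()

J = lambda g: (g*nu).sum() - np.log((np.exp(g)*mu).sum())
assert abs(J(np.log(nu/mu)) - H) < 1e-12   # the optimal g attains H

rng = np.random.default_rng(0)
suboptimal = all(J(rng.normal(size=3)) <= H + 1e-12 for _ in range(100))
assert suboptimal                          # random competitors do worse
```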

**Heat equation.** The heat equation $\partial_tf_t=\Delta f_t$ is the gradient flow of the entropy: $$\partial_t\int f_t\log(f_t)\mathrm{d}x=-\int\frac{\|\nabla f_t\|^2}{f_t}\mathrm{d}x,$$ where we used an integration by parts; the right-hand side is the Fisher information. In other words, the entropy is a Lyapunov function for the heat equation seen as an infinite-dimensional ODE.
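A crude discretization illustrates this dissipation identity: one explicit Euler step of the heat equation on a periodic grid (grid size, time step, and initial datum are arbitrary choices), comparing the entropy decay rate with the discrete Fisher information:

```python
import numpy as np

# Discrete check of the entropy dissipation identity along the heat flow,
# on a periodic grid so the integration by parts has no boundary terms.
N, L = 512, 2*np.pi
x = np.arange(N)*L/N
dx = L/N
f = 1.5 + np.cos(x)                                    # positive initial datum
lap = (np.roll(f, -1) - 2*f + np.roll(f, 1))/dx**2     # periodic Laplacian
grad = (np.roll(f, -1) - np.roll(f, 1))/(2*dx)         # centered gradient

dt = 1e-6
f_next = f + dt*lap                                    # one explicit Euler step
entropy_rate = ((f_next*np.log(f_next) - f*np.log(f)).sum()*dx)/dt
fisher = (grad**2/f).sum()*dx
assert abs(entropy_rate + fisher) < 1e-2               # rate = -Fisher info
```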

**Further reading.**

- Boltzmann-Gibbs entropic variational principle (on this blog, 2022)
- Entropy ubiquity (on this blog, 2015)
- Bosons and fermions (on this blog, 2012)

In France, as in many other developed countries, the temples of elitism are above all temples of the self-reproduction of the socio-culturally dominant. This perhaps explains the relative fading of the social question in some of these institutions, in favor of the societal questions that preoccupy the dominant and their offspring. It must be said that these bourgeois, petty or grand, bohemian or not, far-left or not, have children, but not workers' children, and are practically the only ones in a position to optimize the school career.

The establishment, worked on by its own convictions and by a certain militancy among those it governs, can even go so far as to try to impose on everyone a dogmatic, right-thinking point of view on social or current issues. Everyone must be made to think correctly, deviants must be enlightened, dissidents intimidated. Operetta totalitarianism? McCarthyism, inquisition, witch hunts that do not speak their name? Are these terms excessive? Many shrug their shoulders, bow their heads, prefer to keep quiet and wait for better days. After all, the Stalinists and other Maoists were only passing through.

History suggests that truth is more of a permanent quest than a definitive achievement. It goes without saying that all certainties are revisited or revisitable, but they are not all to be placed on the same level; some are better supported than others. And feeding on deconstruction, or on confidence, is not enough to be right. Subversion and novelty, no more than conformism and tradition, guarantee neither soundness nor relevance, even if they exert a strong power of seduction over minds. Curiously, the socio-culturally dominant, heirs and practitioners of freedom of thought and expression, are sometimes the first to want to control it, in the name of a moral orthodoxy of a religious nature, convinced that they hold the truth and have the duty to silence those who think differently. History teaches us that a totalitarianism can be built on an absence of doubt and critical thinking among the powerful, an intellectual mediocrity experienced as just, visionary, and avant-garde.
