# Libres pensées d'un mathématicien ordinaire Posts

This tiny post is an invitation to play with hypergeometric functions. These remarkable special functions can be useful to all mathematicians, yet, bizarrely, many do not know them.

Pochhammer symbol for rising factorial. Named after Leo August Pochhammer (1841 – 1920):
$$(z)_k:=z(z+1)\cdots(z+k-1)$$ with the convention $(z)_0:=1$ if $z\neq0$. Note that $(1)_k=k!$. We have $$\Gamma(z+k)=(z)_k\Gamma(z)\quad\text{where}\quad\Gamma(z):=\int_0^\infty t^{z-1}\mathrm{e}^{-t}\mathrm{d}t.$$ When $z=n$ is a positive integer, this boils down to $(n)_k=(n+k-1)!/(n-1)!$.
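As a quick numerical illustration, here is a minimal Python sketch of the rising factorial and of the identity $\Gamma(z+k)=(z)_k\Gamma(z)$ (the function name `rising_factorial` is mine):

```python
import math

def rising_factorial(z, k):
    """Pochhammer symbol (z)_k = z (z+1) ... (z+k-1), with (z)_0 = 1."""
    result = 1.0
    for i in range(k):
        result *= z + i
    return result

# (1)_k = k!
assert rising_factorial(1, 5) == math.factorial(5)

# Gamma(z + k) = (z)_k * Gamma(z)
z, k = 2.5, 4
assert math.isclose(math.gamma(z + k), rising_factorial(z, k) * math.gamma(z))
```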

Hypergeometric function. If $a\in\mathbb{R}^p$, $b\in\mathbb{R}^q$, and $z\in\mathbb{C}$, $|z|<1$, then, when it makes sense,
$${}_pF_q\begin{pmatrix}a_1,\ldots,a_p\\b_1,\ldots,b_q\\z\end{pmatrix}:=\sum_{k=0}^\infty\frac{(a_1)_k\cdots(a_p)_k}{(b_1)_k\cdots(b_q)_k}\frac{z^k}{k!}.$$
The formula for ${}_pF_q$ remains valid for more values of $z$ by analytic continuation. Hypergeometric functions were studied by many, including notably Leonhard Euler (1707 – 1783) and Carl Friedrich Gauss (1777 – 1855). This class of special functions contains many others, for instance

• ${}_2F_1(1,1;2;-z)=\frac{\log(1+z)}{z}$
• ${}_2F_1(a,b;b;z)=\frac{1}{(1-z)^a}$
• ${}_2F_1(\frac{1}{2},\frac{1}{2};\frac{3}{2};z^2)=\frac{\arcsin(z)}{z}$
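A naive truncation of the series is enough to check the three identities above numerically, for $|z|<1$. Here is a minimal Python sketch (the helper name `hyp2f1` is mine); each term is obtained from the previous one via $(x)_{k+1}=(x)_k(x+k)$, which avoids computing large factorials:

```python
import math

def hyp2f1(a, b, c, z, terms=60):
    """Truncated 2F1 series; each term is built recursively, so
    no large factorials or Pochhammer products are formed."""
    total, term = 0.0, 1.0
    for k in range(terms):
        total += term
        term *= (a + k) * (b + k) / ((c + k) * (k + 1)) * z
    return total

z = 0.3
assert math.isclose(hyp2f1(1, 1, 2, -z), math.log(1 + z) / z)
assert math.isclose(hyp2f1(0.5, 0.7, 0.7, z), (1 - z) ** -0.5)
assert math.isclose(hyp2f1(0.5, 0.5, 1.5, z ** 2), math.asin(z) / z)
```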

It is also possible to express the Jacobi orthogonal polynomials, and thus the several families of orthogonal polynomials they contain as special cases, in terms of hypergeometric functions, more precisely $${}_{2}F_{1}(-n,a+1+b+n;a+1;x)={\frac {n!}{(a+1)_{n}}}P_{n}^{(a,b)}(1-2x).$$ Note that $(z)_k=0$ for $k$ large enough when $z$ is a negative integer, hence ${}_pF_q(a;b;z)$ is a polynomial when one of the $a_i$ is a negative integer.
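For $n=1$ the Jacobi polynomial is explicitly $P_1^{(a,b)}(y)=(a+1)+(a+b+2)(y-1)/2$, which allows a small numerical check of the identity above. A minimal Python sketch (helper names and parameter values are mine):

```python
import math

def hyp2f1_terminating(n, a, b, x):
    """2F1(-n, a+1+b+n; a+1; x): the series stops after n+1 terms
    because (-n)_k = 0 for every k > n."""
    total, term = 0.0, 1.0
    for k in range(n + 1):
        total += term
        term *= (-n + k) * (a + 1 + b + n + k) / ((a + 1 + k) * (k + 1)) * x
    return total

# check against the explicit P_1^{(a,b)}(y) = (a+1) + (a+b+2)(y-1)/2
a, b, x, n = 0.5, 1.5, 0.3, 1
p1 = (a + 1) + (a + b + 2) * ((1 - 2 * x) - 1) / 2
assert math.isclose(hyp2f1_terminating(n, a, b, x),
                    math.factorial(n) / (a + 1) * p1)  # (a+1)_1 = a + 1
```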

Hypergeometric functions admit integral representations, and conversely, certain integrals can be computed using hypergeometric functions. Here is the most basic example.

Euler integral representation formula for ${}_2F_1$, published in 1769. If $a>0$, $b>0$, $|z|\leq 1$, then
$$\int_0^1u^{a-1}(1-u)^{b-1}(1-zu)^{-c}\mathrm{d}u ={}_2F_1\begin{pmatrix}a,c\\a+b\\z\end{pmatrix} \frac{\Gamma(a)\Gamma(b)}{\Gamma(a+b)}.$$ In other words, for all $a,b,c$ with $c>a>0$ and all $|z|\leq 1$,
$${}_2F_1\begin{pmatrix}a,b\\c\\z\end{pmatrix}=\frac{\Gamma(c)}{\Gamma(a)\Gamma(c-a)}\int_0^1u^{a-1}(1-u)^{c-a-1}(1-zu)^{-b}\mathrm{d}u.$$

A proof. The binomial series expansion $(1-zu)^{-c}=\sum_{k=0}^\infty\frac{(c)_k}{k!}(zu)^k$ gives
$$\int_0^1u^{a-1}(1-u)^{b-1}(1-zu)^{-c}\mathrm{d}u =\sum_{k=0}^\infty\frac{(c)_k}{k!}z^k \int_0^1u^{a+k-1}(1-u)^{b-1}\mathrm{d}u.$$
Now the beta-gamma formula $\displaystyle \int_0^1u^{a+k-1}(1-u)^{b-1}\mathrm{d}u=\frac{\Gamma(a+k)\Gamma(b)}{\Gamma(a+b+k)}$ gives
$$\int_0^1u^{a-1}(1-u)^{b-1}(1-zu)^{-c}\mathrm{d}u =\Gamma(b)\sum_{k=0}^\infty\frac{(c)_k\Gamma(a+k)}{\Gamma(a+b+k)} \frac{z^k}{k!}.$$
Finally the formula $\Gamma(z+k)=(z)_k\Gamma(z)$ gives
$$\int_0^1u^{a-1}(1-u)^{b-1}(1-zu)^{-c}\mathrm{d}u =\frac{\Gamma(a)\Gamma(b)}{\Gamma(a+b)}\sum_{k=0}^\infty\frac{(c)_k(a)_k}{(a+b)_k} \frac{z^k}{k!}\\ =\frac{\Gamma(a)\Gamma(b)}{\Gamma(a+b)}\ {}_2F_1\begin{pmatrix}a,c\\a+b\\z\end{pmatrix}.$$
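The identity can be checked numerically by comparing a crude midpoint-rule evaluation of the integral with the truncated series. A minimal Python sketch (helper names and parameter values are mine):

```python
import math

def hyp2f1(a, b, c, z, terms=60):
    """Truncated 2F1 series, valid for |z| < 1."""
    total, term = 0.0, 1.0
    for k in range(terms):
        total += term
        term *= (a + k) * (b + k) / ((c + k) * (k + 1)) * z
    return total

def euler_integral(a, b, c, z, n=100000):
    """Midpoint rule for the integral of u^(a-1) (1-u)^(b-1) (1-z*u)^(-c) over [0, 1]."""
    h = 1.0 / n
    s = 0.0
    for i in range(n):
        u = (i + 0.5) * h
        s += u ** (a - 1) * (1 - u) ** (b - 1) * (1 - z * u) ** (-c)
    return s * h

a, b, c, z = 1.5, 2.0, 0.8, 0.4
lhs = euler_integral(a, b, c, z)
rhs = hyp2f1(a, c, a + b, z) * math.gamma(a) * math.gamma(b) / math.gamma(a + b)
assert math.isclose(lhs, rhs, rel_tol=1e-6)
```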

Immediate corollary. Letting $z$ tend to $1$, taking $b>c$, and using the beta-gamma formula $$\int_0^1u^{a-1}(1-u)^{b-c-1}\mathrm{d}u =\frac{\Gamma(a)\Gamma(b-c)}{\Gamma(a+b-c)},$$ we obtain the following identity, discovered by Gauss (1812), for all $a,b,c$ with $c>a+b$: $$\sum_{k=0}^\infty\frac{(a)_k(b)_k}{(c)_k}\frac{1}{k!}={}_2F_1\begin{pmatrix}a,b\\c\\1\end{pmatrix}=\frac{\Gamma(c-a-b)\Gamma(c)}{\Gamma(c-a)\Gamma(c-b)}.$$
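A minimal Python sketch of this Gauss summation (helper names and parameter values are mine); since the terms at $z=1$ only decay like $k^{a+b-c-1}$, the partial sum needs many terms to settle:

```python
import math

def gauss_sum(a, b, c, terms=2000):
    """Partial sum of sum_k (a)_k (b)_k / ((c)_k k!), i.e. 2F1(a, b; c; 1).
    Converges (slowly, like k^(a+b-c-1)) when c > a + b."""
    total, term = 0.0, 1.0
    for k in range(terms):
        total += term
        term *= (a + k) * (b + k) / ((c + k) * (k + 1))
    return total

a, b, c = 0.5, 0.5, 3.0  # requires c > a + b
lhs = gauss_sum(a, b, c)
rhs = math.gamma(c - a - b) * math.gamma(c) / (math.gamma(c - a) * math.gamma(c - b))
assert math.isclose(lhs, rhs, rel_tol=1e-6)
```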

Maple, Mathematica, and Maxima. All implement hypergeometric functions. Here is an example with the Euler integral formula in Mathematica:

In[1]:= Integrate[u^(a - 1)*(1 - u)^(b - 1)*(1 - z*u)^(-c), {u, 0, 1}]

Out[1]= ConditionalExpression[
 Gamma[a] Gamma[b] Hypergeometric2F1Regularized[a, c, a + b, z],
 Re[a] > 0 && Re[b] > 0 && (Re[z] <= 1 || z ∉ ℝ)]

The regularized ${}_2F_1$ hypergeometric function used by Mathematica is ${}_2F_1(a,b;c;z)/\Gamma(c)$.

It is a great time for scientific conferences over the Internet, a discovery for many colleagues and communities. Ideally, such events should be organized using software platforms implementing virtual reality, just like certain video games, with virtual buildings, virtual rooms, virtual characters, virtual discussions, virtual restaurants, etc. Unfortunately, such platforms do not seem to be available yet with the expected level of quality and flexibility, even if I have heard that some colleagues from PSL University managed to organize a virtual poster session using software initially designed for virtual museums!

Online scientific conferences are very easy to organize, cost almost nothing, and have an excellent carbon footprint in comparison with traditional face-to-face on-site conferences. The carbon footprint of the Internet is not zero, but the fraction used for an online scientific event is nothing compared with that of an on-site conference with plenty of participants coming from far away by airplane. The main drawbacks of online scientific events are of course the limited interactions between participants, and the fact that they do not extract the participants from their daily life and duties. Still, online conferences are an excellent way to maintain the social link inside scientific communities. The term webinar/webconference is sometimes used, but it emphasizes the web, which is not the heart of the concept.

It would be unfortunate to move all scientific conferences online. The best would be to reduce the number of traditional conferences and to try to increase their quality, for instance by asking participants to stay longer. Also, I would not be surprised to see the development of blended or hybrid conferences mixing on-site participants and remote online participants.

I recently had the opportunity to co-organize with a few colleagues an online scientific conference on Random Matrices and Their Applications, replacing a conventional on-site face-to-face conference in New York canceled due to COVID-19. The initial conference was supposed to last a whole week, with about thirty talks and a poster session. For the online replacement, we decided to keep the same week. About twelve of the initial speakers agreed to give an online talk. For simplicity, we then decided to schedule three 45-minute talks per day on Monday, Tuesday, Thursday, and Friday, and to give up the poster session. We used the same schedule for each day, with the first talk at 10 am New York local time. We had between 80 and 150 participants per talk, from all over the world. In short:

• Schedule. Few talks per day, compatible with as many local times as possible
• Website. Speakers, titles, abstracts, slides, registration
• Talks. To improve the speakers' experience, turn on a few cameras during the talks, typically those of the chairperson, organizers, or coauthors. The integrated text chat can be used for questions
• Workspace. An online collaborative workspace in parallel is useful. A private channel can host discussions between organizers, replacing emails; a general channel can host interaction with the speakers and the participants, and between them; etc.

On the technical side, we decided to use current social standards instead of best-quality solutions, namely DokuWiki for the website, Zoom for the talks, and Slack for the workspace.

The COVID-19 pandemic is an interesting subject of study from many perspectives. Looking back at recent and less recent history, this pandemic by itself appears for now as rather ordinary, while the political responses are truly exceptional. In particular, and among several aspects, we can observe, beyond risk aversion and international mimicry, a certain role played by risk analysis for decision making based on mathematical modelling for epidemiology.

Mathematical modelling is remarkably successful at predicting, with a high degree of accuracy, the behavior of many natural phenomena, such as, very concretely, the trajectory of satellites or the propagation of sound and light. The numerous successes of mathematical modelling have an enormous positive concrete impact on our world and our daily life. Around these topics, there is for instance a famous article by Eugene Wigner entitled The Unreasonable Effectiveness of Mathematics in the Natural Sciences (1960), and another one by Richard Hamming entitled The unreasonable effectiveness of mathematics (1980).

On the other hand, the mechanisms of many natural phenomena are not well understood, and even when they are well understood at a certain scale, their mathematical modelling is often an approximation of their complexity and subtleties, which is not always accurate. Approximation may also come from the mathematical and numerical analysis of the model itself, as well as from the lack of data to fit the model. All these aspects are well known to every mathematician, and it is customary to say that all models are wrong, but some are useful. On this topic, see also the article entitled The Reasonable Ineffectiveness of Mathematics (2013) by Derek Abbott, pointing out some of the limitations of mathematization.

The case of meteorology is particularly interesting. The mechanisms of the natural phenomenon are relatively well understood and are modeled mathematically by the equations of fluid mechanics, related to one of the greatest questions of mathematical physics. Unfortunately, the sensitivity of these equations to perturbations makes prediction relatively limited, despite the striking progress made in numerical analysis and computational power, and the enormous amount of data collected by satellite remote sensing. Weather forecasting remains difficult.

The situation is even worse for the social sciences such as economics or sociology, for which we do not have the analogue of the equations of fluid mechanics. Historically, the quantitative analysis of social phenomena was first approached using statistics, notably by Adolphe Quetelet, who produced among other studies his famous Sur l’homme et le développement de ses facultés, ou Essai de physique sociale (1835). Quetelet discovered some of the mechanical sides of disordered phenomena, paving the way to the mathematical modelling of disordered systems and their predictability. He was not the only scientist to explore the mechanical view of nature at that time; among the famous others are certainly Charles Darwin and Ludwig Boltzmann. The mechanization of disordered systems led to the great successes of probability and statistics that we all know, which are also at the heart of statistical physics, quantum mechanics, and information theory. But the social sciences remain too complex in many aspects. This is well explained, for economics for instance, in Le Capital au XXIe siècle (2013) by Thomas Piketty.

The tremendous development of digitization, computers, and networks has led to the widespread use of mathematical modelling and numerical experiments. It has also stimulated the development of various types of machine learning, producing striking concrete successes. This type of algorithmic data processing is still considered as modelling but may differ from usual modelling in that it can produce empirical prediction without understanding.

How about epidemiology? It turns out that the mechanisms of viral epidemics are not yet well understood by scientists. The available mathematical or computational modelling incorporates what is known. It remains limited for prediction, and the problem does not reduce to data collection. In particular, it produces questionable risk analysis for decision making. We could alternatively use the historical statistics of epidemics to produce predictions, at least at the qualitative or phenomenological level, but this is also relatively limited. We are thus condemned to live for now with important uncertainties. This is somewhat difficult for our present societies to accept.

About the author. Mainly a mathematician, professor at Université Paris-Dauphine – PSL since 2013, presently active in probability theory, mathematical analysis, and statistical physics. Also strongly interested in computer science. Served in the past as a research engineer on data assimilation for the Météo-France research center (one year), researcher in mathematics and signal processing for University of Oxford (one year), researcher in biostatistics and modelling for INRAE (six years), professor of mathematics at Université Paris-Est Marne-la-Vallée (four years), and part-time professor at École Polytechnique (six years). Serves presently as a vice-president in charge of digital strategy for Université Paris-Dauphine for the period 2017-2020.

Further reading on this blog.

France. Concerning COVID-19 in France, here is an interesting graph from INSEE, allowing a comparison of mortality with some elements of the past, notably the heat wave of summer 2003 and the Hong Kong flu of fifty years ago. The lockdown is an important difference. However, we do not know what COVID-19 would have produced without lockdown: perhaps a higher peak, less spread out in time, perhaps not. It may well be that the lockdown, as it was organized, served no purpose, or even worsened the situation in certain families and communities. The effect on road accidents is quite real but does not change things much. Another difference with the Hong Kong flu is the size of the population, much smaller at the time, as well as the age pyramid, much younger at the time, since most COVID-19 deaths concern the elderly, a good part of them in nursing homes (Ehpad). Note also that the Spanish flu at the end of the First World War, absent from the graph, mostly killed young adults, by bacterial superinfection, before the era of antibiotics. All this underlines the difficulty of comparing across time. These extreme and recurrent phenomena are even more complex and heterogeneous than the floods of rivers whose basins evolve. This is a major problem of data analysis across time and space. The main difficulty faced by Thomas Piketty in his work on capital is precisely the spatio-temporal heterogeneity of statistical data about the economy.

Ten years ago to the day, this modest blog was born with a very short post on Louis Antoine. Several hundred posts have followed since, sometimes much longer. Not everything is successful, far from it. One has to feel one's way, settle on a writing mode, a format. The time spent writing here remains an inexhaustible source of intellectual excitement, of the joy of synthesizing and sharing. Knowledge deserves to be disseminated, ideas stirred, points of view expressed. This suits the thought workers that academics are. Concentrate on the writing, and let the search engines do their indexing. This is how some posts of this unpretentious blog have entered the bibliography of a few research articles. A surprise, which underlines the current importance of this mode of dissemination of information, and which also raises the question of the durability of electronic self-publishing. This type of self-publication escapes legal deposit, and devices like the Wayback Machine are limited.

A thought for Louis Antoine, who became blind, then a mathematician, a geometer.

Louis Antoine
