
Libres pensées d'un mathématicien ordinaire Posts

An algorithm

The algorithm (several instances may run in parallel):

  1. be curious and open-minded; retain concepts, beyond techniques
  2. when you catch a new idea (for instance in a seminar), try it on all your problems
  3. if your imagination fails to produce a promising problem, return to the first step
  4. if the problem is already sparsely solved by others, write down an expository synthesis
  5. if not, never hesitate to learn new techniques to solve your problem
  6. work hard, generalize, specialize, follow your intuition, use simulations
  7. if the problem is still too difficult, seek complementary friends
  8. if the problem is still unsolved, keep it in mind for future use
  9. it is time to take vacations… maybe
  10. return to the first step (and never forget to eat and to sleep 😉)

For a poor human like me, this algorithm is more a paradigm than a reality… 🙂

Paul Halmos: Don’t just read it; fight it! Ask your own questions, look for your own examples, discover your own proofs. Is the hypothesis necessary? Is the converse true? What happens in the classical special case? What about the degenerate cases? Where does the proof use the hypothesis?


How to speed up MCMC algorithms

Markov chain Monte Carlo (MCMC for short) methods are widely used in practice for the approximate computation of integrals on various types of spaces. More precisely, let \(\mu\) be a probability measure on \(E\), known only up to a multiplicative constant. Let \(K\) be an irreducible Markov kernel on \(E\). Then, by using a classical Metropolis-Hastings type construction, one can cook up a computable ergodic Markov kernel \(M\) on \(E\) which admits \(\mu\) as its unique reversible invariant law. One can then simulate a Markov chain \((X_n)_{n\geq1}\) with state space \(E\) and kernel \(M\). When \(n\) is large, the law of the random variable \(X_n\) is approximately \(\mu\). The construction of \(K\) relies in general on the knowledge of the local structure of \(E\). The quantitative control of the speed of convergence is a delicate problem, and many research articles are devoted to specific models. You may take a look at the expository articles of Laurent Michel and of Persi Diaconis.
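The construction above can be sketched concretely. A minimal illustration, assuming a finite state space \(\{0,\ldots,m-1\}\) with the proposal kernel \(K\) taken to be the symmetric nearest-neighbour walk on the cycle (the weights, function name, and parameters are illustrative, not from the original text):

```python
import random

def metropolis_hastings(weights, n_steps, seed=0):
    """Sample approximately from mu(i) proportional to weights[i]
    on {0, ..., m-1}. The proposal kernel K is the symmetric
    nearest-neighbour walk on the cycle, so the Metropolis
    acceptance ratio reduces to weights[y] / weights[x]: only
    ratios are used, and the normalizing constant of mu is never
    needed. Returns the empirical occupation frequencies."""
    rng = random.Random(seed)
    m = len(weights)
    x = 0
    counts = [0] * m
    for _ in range(n_steps):
        # propose a uniformly chosen neighbour on the cycle
        y = (x + rng.choice([-1, 1])) % m
        # accept with probability min(1, mu(y)/mu(x))
        if rng.random() < min(1.0, weights[y] / weights[x]):
            x = y
        counts[x] += 1
    return [c / n_steps for c in counts]
```

For instance, with `weights = [1, 2, 3, 4]` the empirical frequencies after many steps approach \((0.1, 0.2, 0.3, 0.4)\), the normalized target; the quality of the approximation for a given \(n\) is exactly the delicate convergence question discussed above.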

It is well known that nonreversible ergodic Markov chains can avoid diffusive effects and may converge faster than reversible ergodic Markov chains. It is thus tempting to think about nonreversible MCMC algorithms. This program is currently under development. You may take a look at the recent work of Diaconis, Miclo, and Zuñiga on the spectral analysis of second order Markov chains, i.e. \(\mathcal{L}(X_{n+1}|X_n,X_{n-1},\ldots)=\mathcal{L}(X_{n+1}|X_n,X_{n-1})\).
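A standard toy example of this phenomenon is the persistent random walk on the cycle, which is a second order chain in the position (Markov in the pair position/direction). The sketch below is only illustrative; the function name and parameters are mine, not from the papers cited above:

```python
import random

def persistent_walk(n, alpha, n_steps, seed=0):
    """Persistent (second-order) random walk on the cycle Z/nZ.
    The next position depends on the current position AND the
    previous direction: with probability 1 - alpha the walker keeps
    its direction, with probability alpha it reverses. alpha = 1/2
    recovers the simple reversible, diffusive walk, while small
    alpha gives ballistic, nonreversible behaviour that explores
    the cycle faster. The uniform law stays invariant in all cases.
    Returns the empirical occupation frequencies."""
    rng = random.Random(seed)
    x, d = 0, 1
    counts = [0] * n
    for _ in range(n_steps):
        if rng.random() < alpha:
            d = -d  # reverse direction
        x = (x + d) % n
        counts[x] += 1
    return [c / n_steps for c in counts]
```

For small `alpha` the walk covers a cycle of length \(n\) in time of order \(n\) instead of the diffusive order \(n^2\), which is the kind of speed-up one hopes to exploit in nonreversible MCMC.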


Books on combinatorial optimization, information, and algorithms


Free software

If you are looking for a free software clone of Matlab (numerical matrix computation), give a try to GNU Octave/QtOctave or to FreeMat or to Scilab. If you are looking for a free software clone of Maple (computer algebra system), give a try to Giac/Xcas or to Maxima/wxMaxima. You may also take a look at SAGE. If you are looking for a free software clone of S-Plus (statistical data analysis and graphical representation), give a try to GNU R. If you want to install GNU/Linux on your desktop or laptop computer, try Ubuntu, which is based on Debian GNU/Linux. If your laptop is a MacBook, you may also take a look at the MacBook Ubuntu Community Documentation. If you like Unix but feel that GNU/Linux is too gregarious, give a try to FreeBSD (parts of which power Mac OS X).
