Press "Enter" to skip to content

Month: May 2010

How to speed up MCMC algorithms

Markov chain Monte Carlo (MCMC for short) methods are widely used in practice for the approximate computation of integrals on various types of spaces. More precisely, let \(\mu\) be a probability measure on \(E\), known only up to a multiplicative constant. Let \(K\) be an irreducible Markov kernel on \(E\). Then, by using a classical Metropolis-Hastings type construction, one can cook up a computable ergodic Markov kernel \(M\) on \(E\) which admits \(\mu\) as its unique reversible invariant law. One can then simulate a Markov chain \((X_n)_{n\geq1}\) with state space \(E\) and kernel \(M\). When \(n\) is large, the law of the random variable \(X_n\) is approximately \(\mu\). The construction of \(K\) relies in general on the knowledge of the local structure of \(E\). The quantitative control of the speed of convergence is a delicate problem, and many research articles are devoted to specific models. You may take a look at the expository article of Laurent Michel and of Persi Diaconis.
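
For concreteness, here is a minimal Python sketch of such a Metropolis-Hastings construction for a target on \(\mathbb{R}\) known only up to a multiplicative constant; the function names and the Gaussian random-walk proposal are illustrative choices, not prescribed by the discussion above.

```python
import numpy as np

def metropolis_hastings(log_target, proposal_sampler, proposal_logpdf, x0, n_steps, rng=None):
    """Metropolis-Hastings chain targeting a density proportional to exp(log_target).

    log_target:      log of the unnormalized target density (the constant is irrelevant).
    proposal_sampler: x, rng -> y, a draw from the proposal kernel K(x, .).
    proposal_logpdf:  x, y  -> log density of K(x, y).
    """
    rng = np.random.default_rng() if rng is None else rng
    x = x0
    chain = [x0]
    for _ in range(n_steps):
        y = proposal_sampler(x, rng)
        # log of the Metropolis-Hastings ratio: [pi(y) K(y,x)] / [pi(x) K(x,y)]
        log_alpha = (log_target(y) - log_target(x)
                     + proposal_logpdf(y, x) - proposal_logpdf(x, y))
        if np.log(rng.uniform()) < log_alpha:   # accept with probability min(1, alpha)
            x = y
        chain.append(x)
    return np.array(chain)

# Example: target density proportional to exp(-x**2/2), symmetric random-walk proposal,
# so the proposal terms cancel in the acceptance ratio.
samples = metropolis_hastings(
    log_target=lambda x: -0.5 * x**2,
    proposal_sampler=lambda x, rng: x + rng.normal(scale=1.0),
    proposal_logpdf=lambda x, y: 0.0,
    x0=0.0, n_steps=10_000)
```

The chain constructed this way is reversible with respect to the target by design, which is exactly the property questioned in the next paragraph.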

It is well known that nonreversible ergodic Markov chains can avoid diffusive effects and may converge faster than reversible ergodic Markov chains. It is thus tempting to think about nonreversible MCMC algorithms. This program is currently under development. You may take a look at the recent work of Diaconis, Miclo, and Zuñiga on the spectral analysis of second-order Markov chains, i.e. chains such that \(\mathcal{L}(X_{n+1}|X_n,X_{n-1},\ldots)=\mathcal{L}(X_{n+1}|X_n,X_{n-1})\).
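
As a toy illustration of the non-diffusive behavior that nonreversibility can buy (not taken from the works cited above), one may compare the reversible simple random walk on the discrete circle \(\mathbb{Z}/N\mathbb{Z}\) with a persistent walk that remembers its last direction: the latter is a second-order chain in the sense above, leaves the uniform law invariant, and is not reversible.

```python
import numpy as np

def persistent_walk_cycle(N, p_keep, n_steps, rng=None):
    """Second-order (persistent) walk on Z/NZ.

    The next step keeps the previous direction with probability p_keep and
    reverses it otherwise, so L(X_{n+1} | X_n, X_{n-1}, ...) depends only on
    (X_n, X_{n-1}). The uniform law on Z/NZ is invariant, but the chain is
    not reversible unless p_keep = 1/2.
    """
    rng = np.random.default_rng() if rng is None else rng
    x, direction = 0, 1
    path = [x]
    for _ in range(n_steps):
        if rng.uniform() > p_keep:
            direction = -direction        # reverse with probability 1 - p_keep
        x = (x + direction) % N
        path.append(x)
    return np.array(path)

# With p_keep close to 1 (say 1 - 1/N) the walk sweeps the circle almost
# ballistically and mixes in order N steps, whereas the reversible case
# p_keep = 1/2 is the simple random walk, which is diffusive and needs
# order N^2 steps.
```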
