Monday, January 01, 2007

Gibbs sampling (I)

Gibbs sampling is a special case of the Metropolis-Hastings algorithm.

It is used to generate a sequence of samples from the joint probability distribution of two or more random variables. The purpose of such a sequence is to approximate the joint distribution or to compute an integral, such as an expected value. The algorithm is named after the physicist J. W. Gibbs, in reference to an analogy between the sampling algorithm and statistical physics. It was devised by Geman and Geman some eight decades after Gibbs's death, and is also called the Gibbs sampler.

Gibbs sampling is applicable when the joint distribution is not known explicitly but the conditional distribution of each variable is known. The algorithm draws a value for each variable in turn from its distribution conditional on the current values of the other variables. It can be shown that the resulting sequence of samples forms a Markov chain whose stationary distribution is exactly the sought-after joint distribution.
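
To make the scheme concrete, here is a minimal Python sketch (not from the original post) for a bivariate normal with correlation rho, where both conditionals are known in closed form; the value of rho, the sample counts, and the burn-in length are illustrative choices.

import numpy as np

def gibbs_bivariate_normal(rho=0.8, n_samples=5000, burn_in=500, seed=0):
    rng = np.random.default_rng(seed)
    x, y = 0.0, 0.0                      # arbitrary starting point
    cond_sd = np.sqrt(1.0 - rho ** 2)    # standard deviation of each conditional
    samples = []
    for i in range(n_samples + burn_in):
        # x | y ~ N(rho * y, 1 - rho^2)
        x = rng.normal(rho * y, cond_sd)
        # y | x ~ N(rho * x, 1 - rho^2)
        y = rng.normal(rho * x, cond_sd)
        if i >= burn_in:                 # discard early, pre-stationary draws
            samples.append((x, y))
    return np.array(samples)

if __name__ == "__main__":
    draws = gibbs_bivariate_normal()
    print("sample correlation:", np.corrcoef(draws.T)[0, 1])  # should be close to rho

Each iteration updates one coordinate at a time using only its conditional distribution, never the joint density itself, which is exactly the situation described above; the sample correlation of the retained draws should approach rho as the chain converges to its stationary distribution.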

Gibbs sampling is particularly well-adapted to sampling the posterior distribution of a Bayesian network, since Bayesian networks are typically specified as a collection of conditional distributions. BUGS is a program for carrying out Gibbs sampling on Bayesian networks.
