Archive for May, 2012
Two Hamiltonian Monte Carlo papers
Two papers involving Hamiltonian Monte Carlo (HMC) have recently appeared on arxiv.org — Jascha Sohl-Dickstein’s Hamiltonian Monte Carlo with reduced momentum flips, and Jascha Sohl-Dickstein and Benjamin Culpepper’s Hamiltonian annealed importance sampling for partition function estimation.
These papers both relate to the variant of HMC in which momentum is only partially refreshed after each trajectory, which allows random-walk behaviour to be suppressed even when trajectories are short (even just one leapfrog step). This variant is described in Section 5.3 of my HMC review. It seems that the method described in the first paper, by Sohl-Dickstein, could be applied in the context of the second paper, by Sohl-Dickstein and Culpepper, but if so, they apparently haven’t tried it yet (or haven’t yet written it up).
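As a rough illustration of this variant (my own sketch, not code from either paper), the update below partially refreshes the momentum, mixing in just enough fresh Gaussian noise to leave its distribution invariant, and then makes a single leapfrog step followed by a Metropolis accept/reject decision. The standard normal target, step size, and refresh coefficient alpha are arbitrary illustrative choices. The momentum is negated when a trajectory is rejected; these rejections are the momentum flips that the first paper aims to reduce.

    import numpy as np

    def U(x):      return 0.5 * np.sum(x**2)   # potential energy: standard normal target
    def grad_U(x): return x

    def hmc_partial_refresh(n_iter, dim=2, eps=0.2, alpha=0.9, seed=0):
        rng = np.random.default_rng(seed)
        x = np.zeros(dim)
        p = rng.standard_normal(dim)
        samples = np.empty((n_iter, dim))
        for t in range(n_iter):
            # Partial refresh: keep a fraction alpha of the old momentum and mix in
            # sqrt(1 - alpha^2) times fresh N(0,1) noise, leaving N(0,I) invariant.
            p = alpha * p + np.sqrt(1 - alpha**2) * rng.standard_normal(dim)
            # A single leapfrog step.
            p_half = p - 0.5 * eps * grad_U(x)
            x_new  = x + eps * p_half
            p_new  = p_half - 0.5 * eps * grad_U(x_new)
            # Metropolis accept/reject on the change in joint energy;
            # negate the momentum on rejection (a "momentum flip").
            dH = U(x_new) + 0.5 * np.sum(p_new**2) - U(x) - 0.5 * np.sum(p**2)
            if np.log(rng.uniform()) < -dH:
                x, p = x_new, p_new
            else:
                p = -p
            samples[t] = x
        return samples

With alpha close to 1, the momentum changes little from one trajectory to the next, so accepted single-step moves tend to keep going in the same direction, which is what suppresses the random walk; a rejection reverses the momentum and works against this.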
Non-random MCMC
In my post on MCMC simulation as a random permutation (paper available at arxiv.org here), I mentioned that this view of MCMC also has implications for the role of randomness in MCMC. This has also been discussed in a recent paper by Iain Murray and Lloyd Elliott on Driving Markov chain Monte Carlo with a dependent random stream.
For the simple case of Gibbs sampling for a continuous distribution, Murray and Elliott’s procedure is the same as mine, except that they do not have the updates of extra variables needed to produce a volume-preserving map. These extra variables are relevant for my importance sampling application, but not for what I’ll discuss here. The method is a simple modification of the usual Gibbs sampling procedure, assuming that sampling from conditional distributions is done by inverting their CDFs (a common method for many standard distributions). It turns out that after this modification, one can often eliminate the random aspect of the simulation and still get good results!
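As a concrete toy example (my own sketch of one plausible version of such a modification, not code from either paper), the Gibbs update below maps the current coordinate to its conditional CDF value, adds an offset modulo 1, and then inverts the conditional CDF. When the offsets are independent U(0,1) draws, the result is distributed just as in ordinary Gibbs sampling; replacing the stream of offsets with a single fixed value eliminates the randomness entirely.

    import numpy as np
    from scipy.stats import norm

    rho = 0.9                        # correlation of the bivariate normal target
    cond_sd = np.sqrt(1 - rho**2)    # standard deviation of each conditional

    def gibbs(n_iter, offsets):
        """Gibbs sampling driven by the given sequence of offsets in [0,1)."""
        x = np.zeros(2)
        samples = np.empty((n_iter, 2))
        k = 0
        for t in range(n_iter):
            for i in (0, 1):
                m = rho * x[1 - i]                         # conditional mean
                u = norm.cdf(x[i], m, cond_sd)             # current conditional CDF value
                u = (u + offsets[k % len(offsets)]) % 1.0  # shift modulo 1
                x[i] = norm.ppf(u, m, cond_sd)             # invert the conditional CDF
                k += 1
            samples[t] = x
        return samples

    rng = np.random.default_rng(1)
    usual   = gibbs(20000, rng.uniform(size=40000))    # fresh random offsets
    nonrand = gibbs(20000, np.array([0.6180339887]))   # one fixed offset, no randomness

    for name, s in (("random offsets", usual), ("fixed offset  ", nonrand)):
        print(name, "mean:", s.mean(axis=0).round(3),
              "corr:", np.corrcoef(s.T)[0, 1].round(3))

Whether the fully deterministic run reproduces the target’s moments accurately is something to check by running it; when and why such non-random schemes can work is exactly what these papers examine.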
MCMC simulation as a random permutation
I’ve just finished a new paper. Continuing my recent use of unwieldy titles, I call it “How to view an MCMC simulation as a permutation, with applications to parallel simulation and improved importance sampling”.
The paper may look a bit technical in places, but the basic idea is fairly simple. I show that, after extending the state space a bit, it’s possible to view an MCMC simulation (done for some number of iterations) as a randomly selected map from the initial state to the final state, a map that is a permutation if the extended state space is finite, or, more generally, a one-to-one map that preserves volume.
Why is this interesting? I think it’s a useful mathematical fact — sort of the opposite of how one can “couple” MCMC simulations in a way that promotes coalescence of states. It may turn out to be applicable in many contexts. I present two of these in the paper.