Posts filed under ‘Monte Carlo Methods’

Two Hamiltonian Monte Carlo papers

Two papers involving Hamiltonian Monte Carlo (HMC) have recently appeared on arxiv.org — Jascha Sohl-Dickstein’s Hamiltonian Monte Carlo with reduced momentum flips, and Jascha Sohl-Dickstein and Benjamin Culpepper’s Hamiltonian annealed importance sampling for partition function estimation.

These papers both relate to the variant of HMC in which momentum is only partially refreshed after each trajectory, which allows random-walk behaviour to be suppressed even when trajectories are short (even just one leapfrog step). This variant is described in Section 5.3 of my HMC review. It seems that the method described in the first paper by Sohl-Dickstein could be applied in the context of the second paper by Sohl-Dickstein and Culpepper, but if so it seems they haven’t tried it yet (or haven’t yet written it up).
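To make this concrete, here is a minimal sketch (mine, not code from either paper) of one such iteration in R, for a toy standard-normal target; the step size, the number of leapfrog steps, and the refresh parameter alpha below are arbitrary illustrative choices:

    # One iteration of HMC with partial momentum refreshment, along the
    # lines of Section 5.3 of the review.  Toy target: standard normal,
    # so U(x) = x^2/2 and the gradient of U is just x.

    U      <- function (x) sum(x^2) / 2
    grad_U <- function (x) x

    one_iteration <- function (x, p, eps = 0.2, L = 1, alpha = 0.9)
    {
      # Partial refresh: keep most of the old momentum.
      p <- alpha * p + sqrt(1 - alpha^2) * rnorm(length(p))

      # Leapfrog trajectory of L steps (just one by default).
      xn <- x
      pn <- p - (eps/2) * grad_U(x)
      for (l in 1:L)
      { xn <- xn + eps * pn
        if (l < L) pn <- pn - eps * grad_U(xn)
      }
      pn <- pn - (eps/2) * grad_U(xn)
      pn <- -pn    # negate momentum at the end of the trajectory

      # Metropolis accept/reject on the joint (position, momentum) state.
      if (runif(1) < exp(U(x) + sum(p^2)/2 - U(xn) - sum(pn^2)/2))
      { x <- xn; p <- pn }

      # Final negation: an accepted move keeps travelling in the same
      # direction, while a rejection leaves the momentum reversed (the
      # "momentum flips" that the first paper aims to reduce).
      list (x = x, p = -p)
    }

The point of the final negation is that only rejections reverse the direction of motion, and it is exactly these reversals that the first paper tries to make less frequent.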
(more…)

2012-05-21 at 4:06 pm 3 comments

Non-random MCMC

In my post on MCMC simulation as a random permutation (paper available at arxiv.org here), I mentioned that this view of MCMC also has implications for the role of randomness in MCMC. This has also been discussed in a recent paper by Iain Murray and Lloyd Elliott on Driving Markov chain Monte Carlo with a dependent random stream.

For the simple case of Gibbs sampling for a continuous distribution, Murray and Elliott’s procedure is the same as mine, except that they do not have the updates of extra variables needed to produce a volume-preserving map. These extra variables are relevant for my importance sampling application, but not for what I’ll discuss here. The method is a simple modification of the usual Gibbs sampling procedure, assuming that sampling from conditional distributions is done by inverting their CDFs (a common method for many standard distributions). It turns out that after this modification, one can often eliminate the random aspect of the simulation and still get good results! (more…)
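As an illustration only, and not the exact procedure of either paper (their modification of the update is what makes a dependent or non-random stream safe to use), here is a crude R version of the general idea: Gibbs sampling for a bivariate normal, with each conditional sampled by inverting its CDF via qnorm, and with the “uniforms” supplied by a deterministic Weyl sequence rather than by a random number generator:

    # Gibbs sampling for a bivariate normal with correlation rho, with
    # conditionals sampled by CDF inversion.  The driving values come
    # from a deterministic Weyl sequence instead of runif; whether such
    # a crude substitution gives correct answers is the sort of question
    # these papers address, so compare the output with the true values.

    rho     <- 0.9
    n_iter  <- 10000
    samples <- matrix (NA, n_iter, 2)
    x <- c(0, 0)
    u <- 0.5                                        # state of the deterministic stream
    next_u <- function (u) (u + sqrt(2) - 1) %% 1   # Weyl sequence step

    for (t in 1:n_iter)
    { u <- next_u(u)
      x[1] <- qnorm (u, mean = rho*x[2], sd = sqrt(1-rho^2))
      u <- next_u(u)
      x[2] <- qnorm (u, mean = rho*x[1], sd = sqrt(1-rho^2))
      samples[t,] <- x
    }

    colMeans (samples)    # compare with the true means of 0
    cor (samples)[1,2]    # compare with the true correlation of 0.9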

2012-05-03 at 11:37 pm 7 comments

MCMC simulation as a random permutation

I’ve just finished a new paper. Continuing my recent use of unwieldy titles, I call it “How to view an MCMC simulation as a permutation, with applications to parallel simulation and improved importance sampling”.

The paper may look a bit technical in places, but the basic idea is fairly simple. I show that, after extending the state space a bit, it’s possible to view an MCMC simulation (done for some number of iterations) as a randomly selected map from an initial state to a final state that is either a permutation, if the extended state space is finite, or more generally a one-to-one map that preserves volume.

Why is this interesting? I think it’s a useful mathematical fact — sort of the opposite of how one can “couple” MCMC simulations in a way that promotes coalescence of states. It may turn out to be applicable in many contexts. I present two of these in the paper. (more…)

2012-05-02 at 6:36 am 1 comment

Evaluation of NUTS — more comments on the paper by Hoffman and Gelman

Here is my second post on the paper by Matthew Hoffman and Andrew Gelman on “The No-U-Turn Sampler: Adaptively Setting Path Lengths in Hamiltonian Monte Carlo”, available from arxiv.org. In my first post, I discussed how well the two main innovations in this “NUTS” method — ending a trajectory when a “U-Turn” is encountered, and adaptively setting the stepsize — can be expected to work. In this post, I will discuss the empirical evaluations in the NUTS paper, and report on an evaluation of my own, made possible by the authors having kindly made the NUTS software available. I conclude that the paper’s claims for NUTS are somewhat overstated. The issues I discuss are also of more general interest for other evaluations of HMC. (more…)

2012-01-27 at 4:31 pm 14 comments

No U-Turns for Hamiltonian Monte Carlo – comments on a paper by Hoffman and Gelman

Matthew Hoffman and Andrew Gelman recently posted a paper called “The No-U-Turn Sampler: Adaptively Setting Path Lengths in Hamiltonian Monte Carlo” on arxiv.org. It has been discussed on Andrew’s blog.

It’s a good paper, which addresses two big barriers to wider use of Hamiltonian Monte Carlo — the difficulties of tuning the trajectory length and tuning the stepsize to use when simulating a trajectory. The name “No-U-Turn Sampler” (NUTS) comes from their way of addressing the problem of tuning the trajectory length — repeatedly double the length of the current trajectory, until (simplifying a bit) there is a part of the trajectory that makes a “U-Turn”, heading back towards its starting point. This doubling method is clever, and (as I discuss below) one aspect of it seems useful even apart from any attempt to adaptively set the trajectory length. They also introduce a method of adapting the stepsize during the burn-in period, so as to achieve some desired acceptance probability.
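For concreteness, here is a minimal R sketch of the kind of U-turn check involved (simplified; the doubling machinery and the selection of a state from the trajectory are omitted), where q_minus and q_plus are the positions at the two ends of the current trajectory and p_minus and p_plus are the corresponding momenta:

    # Simplified U-turn criterion: stop extending the trajectory once
    # either end is moving back towards the other end.
    u_turn <- function (q_minus, q_plus, p_minus, p_plus)
    { d <- q_plus - q_minus
      sum(d * p_minus) < 0 || sum(d * p_plus) < 0
    }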

However, I don’t think these are completely satisfactory ways of setting trajectory lengths and stepsizes. As I’ll discuss below, these problems are more complicated than they might at first appear. (more…)

2012-01-21 at 12:38 am 8 comments

GRIMS — General R Interface for Markov Sampling

I have released a (very) preliminary version of my new MCMC software in R, which I’m calling GRIMS, for General R Interface for Markov Sampling. You can get it here.

This software differs from other more-or-less general MCMC packages in several respects, all but one of which make it, I think, a much better tool for serious MCMC applications. Here are some highlights: (more…)

2011-06-26 at 12:58 am 5 comments

Ensemble MCMC

I’m glad to have managed, before teaching starts again, to finish a Technical Report (available here or at arxiv.org) with what may be my most unwieldy title ever:

MCMC Using Ensembles of States for Problems with Fast and Slow Variables such as Gaussian Process Regression

I wanted the title to mention all three of the nested ideas in the paper. Actually, I wasn’t able to fit in a fourth, most general, idea: MCMC methods based on caching and mapping (see here). Here is the abstract:

I introduce a Markov chain Monte Carlo (MCMC) scheme in which sampling from a distribution with density π(x) is done using updates operating on an “ensemble” of states. The current state x is first stochastically mapped to an ensemble, x(1),…,x(K). This ensemble is then updated using MCMC updates that leave invariant a suitable ensemble density, ρ(x(1),…,x(K)), defined in terms of π(x(i)) for i=1,…,K. Finally a single state is stochastically selected from the ensemble after these updates. Such ensemble MCMC updates can be useful when characteristics of π and the ensemble permit π(x(i)) for all i in {1,…,K} to be computed in less than K times the amount of computation time needed to compute π(x) for a single x. One common situation of this type is when changes to some “fast” variables allow for quick re-computation of the density, whereas changes to other “slow” variables do not. Gaussian process regression models are an example of this sort of problem, with an overall scaling factor for covariances and the noise variance being fast variables. I show that ensemble MCMC for Gaussian process regression models can indeed substantially improve sampling performance. Finally, I discuss other possible applications of ensemble MCMC, and its relationship to the “multiple-try Metropolis” method of Liu, Liang, and Wong and the “multiset sampler” of Leman, Chen, and Lavine.
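To give a sense, with a toy example of my own rather than the paper’s programs, of why the noise variance is a “fast” variable in Gaussian process regression: the covariance matrix has the form Kf + sigma2*I, where Kf is the noise-free covariance, so a single eigendecomposition of Kf (done once, at order n^3 cost) lets the log likelihood be recomputed for many values of sigma2 at order n cost each, which is what makes an ensemble over such values cheap:

    # One eigendecomposition of the noise-free covariance Kf (expensive,
    # done once) allows the GP regression log likelihood to be recomputed
    # cheaply for many values of the noise variance sigma2.

    gp_loglik_factory <- function (Kf, y)
    {
      e <- eigen (Kf, symmetric = TRUE)      # order n^3, done once
      z <- drop (crossprod (e$vectors, y))   # V' y
      n <- length(y)
      function (sigma2)                      # order n per value of sigma2
      { d <- e$values + sigma2
        -0.5*sum(z^2/d) - 0.5*sum(log(d)) - 0.5*n*log(2*pi)
      }
    }

    # Example with a squared-exponential covariance on 50 inputs.
    xg <- seq (0, 1, length.out = 50)
    Kf <- exp (- outer(xg, xg, "-")^2 / 0.1)
    y  <- rnorm (50)
    ll <- gp_loglik_factory (Kf, y)
    sapply (c(0.1, 0.5, 1, 2), ll)   # many noise variances, little extra cost

An overall scale factor for the covariances is fast for the same reason, since multiplying Kf by a constant just multiplies its eigenvalues by that constant.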

I’ve also posted the programs used to produce the results. These haven’t been tested much beyond their use for the paper, but I hope to incorporate them into a general MCMC package in R (also including programs accompanying my review of Hamiltonian Monte Carlo). That’s my next project to do in whatever time I have available after teaching, administration, and a three-year-old daughter, along with more efforts to speed up R, so that this MCMC package won’t be too slow.

2011-01-01 at 11:26 pm 10 comments
