## GRIMS — General R Interface for Markov Sampling

*2011-06-26 at 12:58 am · 5 comments*

I have released a (very) preliminary version of my new MCMC software in R, which I’m calling GRIMS, for General R Interface for Markov Sampling. You can get it here.

This software differs from other more-or-less general MCMC packages in several respects, all but one of which make it, I think, a much better tool for serious MCMC applications. Here are some highlights:

- The distribution to be sampled is defined by an R function. This allows for full flexibility in defining models. For simple models, it is a bit clumsier than a BUGS specification, but one could address this by “compiling” a BUGS-like specification into such an R function (though I have no plans to write such a compiler myself).

- The Markov chain updates are also defined by R functions, which can easily be written by the user. Of course, some updates, such as random-walk Metropolis, are pre-defined.

- The interface between the R functions defining distributions and the R functions defining Markov chain updates converts between their different preferred representations of the state. A general-purpose update function (eg, Metropolis) is most naturally written to operate on a state that is a simple vector. But a function defining the posterior distribution for a Bayesian model is most naturally written to work with a state that is a list of vectors, with named elements for various types of parameters, hyperparameters, etc.

- The top-level function for running an MCMC simulation allows one to combine several different types of updates, which may each operate on only part of the state. This is often necessary for efficient MCMC sampling for a complex model.

- GRIMS supports incremental computation, in which the log probability density for a state is quickly recomputed after only a subset of the state variables change. This is a crucial capability for some sampling methods, such as Metropolis updates with proposals that change a single variable in the state. This support depends, of course, on the function that computes the log density being written to cache suitable intermediate quantities, and make use of them when possible.

- Updates that use the gradient of the log density are supported, provided the function defining the distribution is able to compute the gradient.

- The state can be supplemented with auxiliary “momentum” variables, allowing full use of sampling methods based on Hamiltonian dynamics.

- GRIMS has flexible facilities for specifying tuning parameters of update methods (eg, proposal standard deviations), and for collecting information (eg, acceptance rates) about how the update methods are operating.
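To make the first point above concrete, here is what a distribution defined by an R function might look like — a hypothetical sketch of a log posterior density, not the actual argument conventions that GRIMS uses:

```r
# Log posterior density for a simple normal-mean model: y_i ~ N(mu, 1),
# with prior mu ~ N(0, 10^2). Hypothetical sketch; GRIMS's actual
# calling conventions for log-density functions may differ.
log_post <- function (mu, y)
{
  sum (dnorm (y, mean=mu, sd=1, log=TRUE)) +
    dnorm (mu, mean=0, sd=10, log=TRUE)
}

y <- c(1.2, 0.7, 1.9)
log_post (0, y)   # log posterior density at mu = 0
```

Working in terms of the log density from the start avoids the underflow that a product of many small density values would produce.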
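An update function in the spirit of the second point might be written as follows — again an illustrative sketch, not the pre-defined Metropolis update that GRIMS actually supplies:

```r
# One random-walk Metropolis update on a vector state. 'lpr' is a
# function computing the log probability density of the target
# distribution; 'scale' is the proposal standard deviation.
# Illustrative sketch only, not GRIMS's update interface.
rwm_update <- function (state, lpr, scale=1)
{
  proposal <- state + rnorm (length(state), 0, scale)
  if (log (runif(1)) < lpr(proposal) - lpr(state))
    proposal   # accept
  else
    state      # reject, keeping the old state
}

# Sample a standard normal target for a few thousand iterations.
set.seed (1)
x <- 0
draws <- numeric (10000)
for (i in 1:10000)
{ x <- rwm_update (x, function (q) -q^2/2, scale=2)
  draws[i] <- x
}
mean (draws)   # should be near 0
```

Because the update is just a function of (state, log density), the same code works for any target distribution the user cares to define.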
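The conversion described in the third point — between a flat vector and a named list of parameter groups — can be sketched with base R's `unlist` and `relist` (how GRIMS does this internally is not shown here):

```r
# A model state stored as a named list of parameter groups.
state <- list (mu = c(0.5, 1.0), log_sigma = 0.2)

# Flatten to a plain numeric vector for a generic update function ...
v <- unlist (state)

# ... then restore the list structure for the model's log-density code.
restored <- relist (v, skeleton = state)
```

The round trip preserves both the values and the named-list structure, so generic updates and model-specific density functions can each see the representation they prefer.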
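Incremental computation, as in the fifth point, amounts to caching per-term contributions to the log density so that a single-variable change is cheap. A minimal sketch (not GRIMS's actual caching mechanism), for a product of independent N(0,1) terms:

```r
# Cache per-coordinate log-density terms so that changing one
# coordinate of the state costs O(1) rather than O(n).
# Illustrative sketch; GRIMS's caching scheme may differ.
make_lpr <- function (x)
{
  terms <- dnorm (x, log=TRUE)   # cached per-coordinate contributions
  total <- sum (terms)
  list (
    value = function () total,
    update_coord = function (i, new_xi)   # x[i] has changed to new_xi
    { new_term <- dnorm (new_xi, log=TRUE)
      total <<- total - terms[i] + new_term
      terms[i] <<- new_term
      total
    }
  )
}

lpr <- make_lpr (c(1, 2, 3))
lpr$update_coord (2, 0)   # recompute after changing x[2] from 2 to 0
```

Here the cached quantities live in the closure's environment, which is one natural way for an R log-density function to carry intermediate results between calls.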
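As for the momentum variables in the seventh point, the core of any Hamiltonian-dynamics method is a leapfrog integrator for the momentum-augmented state. A bare-bones sketch (not the HMC code in GRIMS):

```r
# Leapfrog integration of Hamiltonian dynamics: 'grad' returns the
# gradient of the log density, 'eps' is the step size, 'L' the number
# of steps. Illustrative sketch; not GRIMS's actual HMC implementation.
leapfrog <- function (q, p, grad, eps, L)
{
  p <- p + (eps/2) * grad (q)        # initial half step for momentum
  for (l in 1:L)
  { q <- q + eps * p                 # full step for position
    if (l < L) p <- p + eps * grad (q)
  }
  p <- p + (eps/2) * grad (q)        # final half step for momentum
  list (q=q, p=p)
}

# For a standard normal target, the gradient of the log density is -q,
# and the "energy" q^2/2 + p^2/2 should be nearly conserved.
res <- leapfrog (1, 0, function (q) -q, eps=0.1, L=20)
```

The near-conservation of energy is what lets HMC make distant proposals that are still accepted with high probability.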

There are other aspects of the design (or extensions to it that I envision) that are intended to support both complex applications and complex sampling methods, including methods like tempered transitions and ensemble MCMC, in which a whole series of Markov chain updates is nested within a complex outer update (though nothing that elaborate is implemented yet). It’s not completely general, however. For example, with its present design, GRIMS can’t easily handle states of varying dimensionality.

So what’s the one undesirable respect in which GRIMS differs from other MCMC packages? Speed. Though I haven’t quantified this yet, GRIMS is likely to be rather slow, due to the overhead of implementing the nice things listed above in R. This is one of my motivations for speeding up R, though ironically, GRIMS has ended up making heavy use of some R operations, such as subscripting lists with strings using “[[…]]”, that I haven’t (yet) looked at speeding up.

We’ll have to see how fast or slow it ends up being. In any case, speed isn’t essential to all uses of GRIMS. Since much of my research is on new MCMC methods, I want a better environment for quickly trying them out on a variety of applications. I also want these new MCMC methods, as well as new Bayesian models that use MCMC, to have implementations that are easily accessible to statisticians, for whom an R function is a lot better than a C program.

If any readers want to try out this rather unpolished preliminary version of GRIMS (or just read the documentation), I’d be interested in your comments.

UPDATE: I’ve put up a new version that fixes some bugs, adds a Gibbs sampling / overrelaxation update for normals, and adds some tests. There may still be plenty of bugs remaining.

Entry filed under: Computing, Monte Carlo Methods, R Programming, Statistics, Statistics - Computing.

1. John Salvatier | 2011-06-26 at 1:59 am

You may be interested in an alternative package for easily experimenting with MCMC methods: my Python package MCEx (https://github.com/jsalvatier/mcex). It’s very short and elegant (200 lines) and has all of the features you mentioned except incremental computation, for which I would have to figure out how to get Theano (the package on which MCEx relies for efficient computation) to do it. The package is also reasonably fast because the posterior and gradient functions are compiled into C by Theano. The package already contains an HMC sampler.

The package does contain a flaw (see the end of this thread http://groups.google.com/group/pymc/browse_thread/thread/bfd73d233d667e7a), but it is really only relevant for widespread usage, not developmental usage.

If you’re interested, I can explain its workings in detail.

2. Seref Arikan | 2011-10-27 at 4:21 pm

Greetings,

Very interesting work. I’m working really hard on implementing various MCMC methods, especially ones where log concavity is not a requirement. I have not seen any references to the method used here. The code confused me; could you explain the specifics of the approach a little bit? What particular configuration of MCMC is this package implementing? The log density is especially interesting to me, since I’ve been having underflow issues trying to implement Tanner’s griddy Gibbs sampler in R. Your feedback would be much appreciated. Thanks for opening this up to the rest of the world!

3. Radford Neal | 2011-10-27 at 5:32 pm

I’m not sure I understand your question here. GRIMS is a general framework for implementing and using MCMC methods, not a particular method (though some particular methods are implemented within it).

In any MCMC software, it is essential to implement methods in terms of the log of the density, not the density itself, in order to avoid overflow/underflow problems.
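To see why, note that the joint density of even a moderate number of observations underflows in double precision, while its log remains perfectly representable:

```r
# The product of 1000 N(0,1) density values is far below the smallest
# representable double, so it underflows to exactly 0 - but the sum of
# the log densities is an ordinary finite number.
set.seed (1)
y <- rnorm (1000)

prod (dnorm (y))            # underflows to 0
sum (dnorm (y, log=TRUE))   # a finite value around -1400, safe to use
```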

4. Seref Arikan | 2011-11-21 at 10:45 am

Ah, I was expecting to receive a notification of your response; I’ve only just seen this. I’ve managed to build a log-density-based griddy Gibbs sampler. I’ll be looking at your code with more coffee on the table :) I hope I’ll understand the way it works, so that I can come back with more questions (almost guaranteed to happen).

Kind regards

Seref

5. Longhai Li | 2012-11-08 at 12:18 pm

I have played with the software a little, and tried to compare it with alternatives such as Stan and JAGS, as a beginner to all of them. I think I have come up with a few more arguments to support your effort on GRIMS. First, R users don’t need to learn a new “language” to use GRIMS. Stan and the various BUGS packages all use some sort of new model-definition language. Although these languages are all comprehensible to a human reader, in actual use I always worry about whether the specified model is what I mean, and I don’t find an easy way to check interactively. Defining the model in R is less worrisome, and in particular one can interactively check the model by sampling data from it. Second, GRIMS allows customized specification of MCMC samplers. An important aspect of GRIMS is that one can use different samplers for different blocks of parameters, such as hyperparameters and parameters, which is often better than sampling all of them jointly and blindly. This may remedy the speed loss from using slow R to do all the computation.