The University of Toronto Department of Computer Science is recruiting for three positions at the Assistant Professor level — in Machine Learning, Computational Biology, and Systems.
The Machine Learning group in CS currently consists of Geoffrey Hinton, Richard Zemel, and myself (or at least, 25% of myself, the rest being in Statistics), along with a number of other associated faculty in CS and other departments, such as Ruslan Salakhutdinov and Brendan Frey.
The position in Computational Biology is joint with the Donnelly Centre for Cellular and Biomolecular Research. Many other research groups at the University of Toronto also work on computational biology, and there are significant interests within the Machine Learning group.
The Systems position is at the suburban campus in Mississauga, though with substantial research and graduate teaching activity at the downtown campus.
Two papers involving Hamiltonian Monte Carlo (HMC) have recently appeared on arxiv.org — Jascha Sohl-Dickstein’s Hamiltonian Monte Carlo with reduced momentum flips, and Jascha Sohl-Dickstein and Benjamin Culpepper’s Hamiltonian annealed importance sampling for partition function estimation.
These papers both relate to the variant of HMC in which the momentum is only partially refreshed after each trajectory, which allows random-walk behaviour to be suppressed even when trajectories are short (even just one leapfrog step). This variant is described in Section 5.3 of my HMC review. It seems that the method described in the first paper, by Sohl-Dickstein, could be applied in the context of the second paper, by Sohl-Dickstein and Culpepper, but apparently they haven't tried this yet (or haven't yet written it up).
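To make this concrete, here is a minimal sketch in Python/NumPy of one such update, using a single leapfrog step and a unit mass matrix. The function name and the treatment of the persistence parameter alpha are my own illustrative choices (alpha = 0 gives standard HMC, while alpha near 1 retains most of the old momentum); see Section 5.3 of the review for details of the actual scheme.

    import numpy as np

    def hmc_step_partial_refresh(q, p, U, grad_U, eps, alpha, rng):
        # Partial momentum refresh: mixes in just enough fresh Gaussian
        # noise that the N(0,I) distribution of p is left invariant.
        p = alpha * p + np.sqrt(1 - alpha**2) * rng.standard_normal(p.shape)
        # A single leapfrog step (the short-trajectory extreme).
        p_half = p - 0.5 * eps * grad_U(q)
        q_new = q + eps * p_half
        p_new = p_half - 0.5 * eps * grad_U(q_new)
        # Metropolis accept/reject on the joint (q, p) state.
        log_acc = (U(q) + 0.5 * np.sum(p**2)) - (U(q_new) + 0.5 * np.sum(p_new**2))
        if np.log(rng.uniform()) < log_acc:
            return q_new, p_new
        # On rejection the momentum is negated, which is the "flip"
        # whose frequency the first paper's method aims to reduce.
        return q, -p

With alpha close to one, successive trajectories tend to continue in the same direction, suppressing random walks, but each rejection reverses the direction of motion, which is why reducing momentum flips matters in this setting.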
In my post on MCMC simulation as a random permutation (paper available at arxiv.org here), I mentioned that this view also has implications for the role of randomness in MCMC. This has also been discussed in a recent paper by Iain Murray and Lloyd Elliott on Driving Markov chain Monte Carlo with a dependent random stream.
For the simple case of Gibbs sampling for a continuous distribution, Murray and Elliott’s procedure is the same as mine, except that they do not have the updates of extra variables needed to produce a volume-preserving map. These extra variables are relevant for my importance sampling application, but not for what I’ll discuss here. The method is a simple modification of the usual Gibbs sampling procedure, assuming that sampling from conditional distributions is done by inverting their CDFs (a common method for many standard distributions). It turns out that after this modification, one can often eliminate the random aspect of the simulation and still get good results!
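As a toy illustration (my own example, not code from either paper), here is such a Gibbs sampler for a bivariate normal with correlation rho, with each conditional sampled by inverting its CDF. The values in (0,1) that play the role of uniform random numbers come from a supplied stream, which need not actually be random.

    import numpy as np
    from scipy.stats import norm

    def gibbs_inverse_cdf(n_iter, rho, u_stream):
        # Each coordinate update inverts the conditional CDF, consuming
        # one value from u_stream in place of a uniform random number.
        x, y = 0.0, 0.0
        s = np.sqrt(1 - rho**2)   # conditional standard deviation
        samples = []
        for _ in range(n_iter):
            x = norm.ppf(next(u_stream), loc=rho * y, scale=s)
            y = norm.ppf(next(u_stream), loc=rho * x, scale=s)
            samples.append((x, y))
        return np.array(samples)

    # The usual random stream...
    rng = np.random.default_rng(1)
    random_run = gibbs_inverse_cdf(10000, 0.9, iter(rng.uniform(size=20000)))
    # ...versus a deterministic stream (an arbitrary illustrative choice):
    golden = iter([(k * 0.6180339887) % 1.0 for k in range(1, 20001)])
    deterministic_run = gibbs_inverse_cdf(10000, 0.9, golden)

Whether a particular non-random stream actually gives good results is the question the papers address; the point here is just that the modified procedure is well defined for any stream.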
I’ve just finished a new paper. Continuing my recent use of unwieldy titles, I call it “How to view an MCMC simulation as a permutation, with applications to parallel simulation and improved importance sampling”.
The paper may look a bit technical in places, but the basic idea is fairly simple. I show that, after extending the state space a bit, it’s possible to view an MCMC simulation (done for some number of iterations) as a randomly selected map from an initial state to a final state that is either a permutation, if the extended state space is finite, or more generally a one-to-one map that preserves volume.
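A toy case may help convey the flavour (my own illustration, not the construction in the paper, which handles general MCMC updates by extending the state space): for a random walk on the integers mod n, each update with its random number fixed is already a permutation of the state space, so a simulation with all its random numbers fixed is a composition of permutations, and hence itself a permutation.

    import numpy as np

    n, n_steps = 10, 5
    rng = np.random.default_rng(0)

    # Fix the randomness: each step is then the deterministic map
    # x -> (x + s) mod n, a permutation of {0, ..., n-1}.
    shifts = rng.choice([-1, 1], size=n_steps)

    # Apply the composed map to every possible starting state.
    final = []
    for x0 in range(n):
        x = x0
        for s in shifts:
            x = (x + s) % n
        final.append(x)

    # The map from initial to final state is one-to-one: a permutation.
    assert sorted(final) == list(range(n))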
Why is this interesting? I think it’s a useful mathematical fact, sort of the opposite of the way one can “couple” MCMC simulations so as to promote coalescence of states. It may turn out to be applicable in many contexts; I present two such applications in the paper.
I’ve released a new version of my software for Low Density Parity Check (LDPC) codes. These error-correcting codes were invented by Robert Gallager in the early 1960s, and re-invented and shown to have very good performance by David MacKay and myself in the mid-1990s. The decoding algorithm for LDPC codes is related to that used for Turbo codes, and to probabilistic inference methods used in other fields. Variations on LDPC and Turbo codes are currently the best practical codes known, in terms of their ability to transmit data at rates approaching channel capacity with very low error probability.
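For readers unfamiliar with how such codes are decoded, here is a bare-bones sketch in Python/NumPy of the sum-product (“belief propagation”) message-passing algorithm on which LDPC decoding is based. This is my own minimal version for illustration, not the optimized decoder in the released software.

    import numpy as np

    def sum_product_decode(H, llr, max_iter=50):
        # H: 0/1 parity-check matrix (m checks by n bits).
        # llr: channel log-likelihood ratios (positive favours bit 0).
        v2c = H * llr                       # variable-to-check messages
        for _ in range(max_iter):
            # Check-to-variable messages by the tanh rule, excluding
            # the recipient's own incoming message.
            t = np.tanh(np.clip(v2c, -30, 30) / 2)
            t = np.where(H == 1, t, 1.0)
            t = np.where(t == 0, 1e-12, t)  # guard against division by zero
            prod = t.prod(axis=1, keepdims=True)
            c2v = H * 2 * np.arctanh(np.clip(prod / t, -1 + 1e-12, 1 - 1e-12))
            # Variable-to-check: channel LLR plus the other checks' messages.
            total = llr + c2v.sum(axis=0)
            v2c = H * (total - c2v)
            bits = (total < 0).astype(int)
            if not ((H @ bits) % 2).any():  # all parity checks satisfied
                break
        return bits

The same local message-passing idea underlies Turbo decoding and inference in graphical models.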
This new version has only a few bug fixes and minor enhancements. The big change is that I’ve put up the source code as a GitHub repository, using the git source code control system. This should make it easier for other people to create their own extensions of the software. The software is available here, and the GitHub repository is here.
For me, this is also an exercise in learning about git and Github. From my initial experience, git does seem better than the source code control systems I’ve used previously (which are Subversion, CVS, and the “modify” utility of the Kronos operating system for the CDC 6000 series of computers). The help pages are a bit cryptic in places, though, at least for the novice user…