Posts filed under ‘Statistics’
The latest version of pqR that I just released uses a new way of implementing subset replacement operations — such as a[i]<-1 or L$M[1:100,i]<-v. The new approach is much faster, and eliminates some strange behaviour of the previous approach.
This change affects only interpreted code. The bytecode compiler (available since R-2.13.0) introduced a different mechanism, which is also faster than the previous approach used by the interpreter (though it retains some of the strange behaviour). This faster mechanism was one of the main reasons byte-compiled code was faster than interpreted code (although it would have been possible to use it in the interpreter as well). With pqR’s new implementation of subset replacement, this advantage of byte-compiled over interpreted code is much reduced.
In addition to being faster, pqR’s new approach is also more coherent than the previous approach (still current in the interpreter for R Core releases to at least R-3.1.1), which despite its gross inefficiency and confused semantics has remained essentially unchanged for 18 years. Unfortunately, the new approach in pqR is not as coherent as it might be, because past confusion has resulted in some packages doing “wrong” things, which have to be accommodated, at least in the short term.
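For readers unfamiliar with what a subset replacement involves: an assignment like a[2] <- 99 is not a primitive store. As described in the R Language Definition, the interpreter conceptually rewrites it in terms of the replacement function `[<-` and a hidden `*tmp*` variable — a minimal base-R sketch:

```r
a <- c(10, 20, 30)
a[2] <- 99            # ordinary subset replacement

# The interpreter treats this roughly as the following sequence,
# using the replacement function `[<-` and a hidden `*tmp*` variable:
b <- c(10, 20, 30)
`*tmp*` <- b
b <- `[<-`(`*tmp*`, 2, value = 99)
rm(`*tmp*`)

identical(a, b)       # TRUE: both are c(10, 99, 30)
```

The cost of this rewriting — extra variables, function dispatch, and possible duplication of the object being modified — is what the faster mechanisms discussed above try to avoid.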
I’ve released a new version, pqR-2014-09-30, of my speedier, “pretty quick”, implementation of R, with some major performance improvements, and some features from recent R Core versions. It also has fixes for bugs (some also in R-3.1.1) and installation glitches.
I have released a new version, pqR-2014-06-19, of my speedier, “pretty quick”, implementation of R. This and the previous release (pqR-2014-02-23) are maintenance releases, with bug fixes, improved documentation, and better test procedures.
The result is that pqR now works with a large collection of 3438 packages.
The microbenchmark package is a popular way of comparing the time it takes to evaluate different R expressions — perhaps more popular than the alternative of just using system.time to see how long it takes to execute a loop that evaluates an expression many times. Unfortunately, when used in the usual way, microbenchmark can give inaccurate results.
The inaccuracy of microbenchmark has two main sources: first, it does not correctly allocate the time for garbage collection to the expression that is responsible for it, and second, it summarizes the results by the median time over many repetitions, when the mean is what is needed. The median and mean can differ drastically, because only a few of the repetitions will include time for a garbage collection. These flaws can result in comparisons being reversed, with the expression that is actually faster looking slower in the output of microbenchmark. (more…)
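The median-versus-mean point can be illustrated with base R alone (this is my sketch, not the post’s own benchmark): time each repetition separately, then compare the two summaries. The gap between them grows when the expression allocates enough memory to occasionally trigger garbage collection.

```r
# Time 200 repetitions of an allocating expression individually.
times <- replicate(200, system.time(x <- runif(1e5))["elapsed"])

# The median largely ignores the few repetitions that paid for a
# garbage collection; the mean charges that cost to the expression
# that caused it, which is what matters for total run time.
c(median = median(times), mean = mean(times))
</imports>
```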
I’ve now released pqR-2013-12-29, a new version of my speedier implementation of R. There’s a new website, pqR-project.org, as well, and a new logo, seen here.
The big improvement in this version is that vector operations are sped up using task merging.
With task merging, several arithmetic operations on a vector may be merged into a single operation, reducing the time spent on memory stores and fetches of intermediate results. I was inspired to add task merging to pqR by Renjin and Riposte (see my post here and the subsequent discussion). (more…)
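As an illustration (my example, not one from the pqR release notes) of the kind of expression that task merging targets:

```r
a <- runif(1e6)
b <- runif(1e6)

# Evaluated naively, this allocates and fills two intermediate vectors
# (2*a and b/3) before the final addition.  With task merging, pqR may
# compute the result in a single pass over a and b, with no stores and
# fetches of intermediate results.
v <- 2*a + b/3
```

The saving comes from memory traffic, not arithmetic: the number of floating-point operations is the same either way.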
Several Assistant Professor positions are open at the University of Toronto, in Statistics and areas of Computer Science related to Statistics.
The suburban Scarborough campus of the University of Toronto has a position for an Assistant Professor in any area of Statistics. Faculty at Scarborough teach undergraduate courses at the suburban campus, but Statistics faculty there also spend much time at the Department of Statistical Sciences on the downtown campus, teaching graduate courses, supervising graduate students, attending research seminars, etc.
There is a position in Computational Biology at the downtown campus, joint between the Department of Computer Science and the Donnelly Centre for Cellular and Biomolecular Research. There are many research groups at the University of Toronto working on computational biology, including significant interests within Statistics, Biostatistics, and the Machine Learning group in Computer Science.
There is also a position in Computer Science on “Big Data”, broadly interpreted. You’ll note at the link that there are also two other Computer Science Assistant Professor positions open (at the two suburban campuses). And there’s also a position for a lecturer (full-time teaching faculty, with a permanent appointment, subject to performance review).
U of T has recently recruited two new faculty in Statistics and Machine Learning — Ruslan Salakhutdinov and Raquel Urtasun. They join the existing faculty interested in Machine Learning, who include Geoffrey Hinton, Richard Zemel, Brendan Frey, and myself.
The deadline for applying to the Assistant Professor position in Statistics is December 10. For the Computer Science Assistant Professor positions, the deadline is January 10, and for the lecturer position, the deadline is January 15.
The previously sleepy world of R implementation is waking up. Shortly after I announced pqR, my “pretty quick” implementation of R, the Renjin implementation was announced at UseR! 2013. Work also proceeds on Riposte, with release planned for a year from now. These three implementations differ greatly in some respects, but interestingly they all try to use multiple processor cores, and they all use some form of deferred evaluation.
Deferred evaluation isn’t the same as “lazy evaluation” (which is how R handles function arguments). Deferred evaluation is purely an implementation technique, invisible to the user, apart from its effect on performance. The idea is to sometimes not do an operation immediately, but instead wait, hoping that later events will allow the operation to be done faster — perhaps because a processor core becomes available for doing it in another thread, or perhaps because it turns out that it can be combined with a later operation, with both done at once.
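To make the contrast concrete, here is lazy evaluation of function arguments — the user-visible R semantics that deferred evaluation should not be confused with:

```r
# An argument is only evaluated when (and if) it is used, so passing
# an expression that would raise an error is harmless as long as the
# function never touches it.
f <- function(x, y) if (x > 0) "used only x" else y
f(1, stop("never evaluated"))   # returns "used only x", no error
```

Lazy evaluation changes what programs mean; deferred evaluation, by contrast, must always produce exactly the results immediate evaluation would.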
Below, I’ll sketch how deferred evaluation is implemented and used in these three new R implementations, and also comment a bit on their other characteristics. I’ll then consider whether these implementations might be able to borrow ideas from each other to further expand the usefulness of deferred evaluation. (more…)