Posts filed under ‘R Programming’

Parallel computation with helper threads in pqR

One innovative feature of pqR (my new, faster version of R) is that it can perform some numeric computations in “helper” threads, in parallel with other such numeric computations, and with interpretive operations performed in the “master” thread. This can potentially speed up your computations by a factor as large as the number of processor cores your system has, with no change to your R programs. Of course, this is a best-case scenario — you may see little or no speed improvement if your R program operates only on small objects, or is structured in a way that inhibits pqR from scheduling computations in parallel. Below, I’ll explain a bit about helper threads, and illustrate when they do and do not produce good speed-ups. (more…)
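To give a concrete sense of what helper threads can overlap, here is a minimal sketch in ordinary R. Nothing in it is pqR-specific, the vector sizes are arbitrary, and how helper threads are enabled is covered in the full post:

    # Two independent numeric computations on large vectors.  With helper
    # threads enabled in pqR, exp(x) may be evaluated in a helper thread
    # while the master thread goes on to start sqrt(y) -- no change to
    # the R code is needed.
    n <- 1e7              # arbitrary size, large enough to be worth scheduling
    x <- runif(n)
    y <- runif(n)

    a <- exp(x)
    b <- sqrt(y)
    z <- a + b            # uses both results, so must wait for them

    # Small objects see little or no benefit: the cost of scheduling a
    # task in a helper thread outweighs the work itself.
    s <- sqrt(1:10)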

2013-06-23 at 5:00 pm 3 comments

Announcing pqR: A faster version of R

pqR — a “pretty quick” version of R — is now available to be downloaded, built, and installed on Linux/Unix systems. This version of R is based on R-2.15.0, but with many performance improvements, as well as some bug fixes and new features. Notable improvements in pqR include:

  • Multiple processor cores can automatically be used to perform some numerical computations in parallel with other numerical computations, and with the thread performing interpretive operations. No changes to R code are required to take advantage of such computation in “helper threads”.
  • pqR makes a better attempt at avoiding unnecessary copying of objects, by maintaining a real count of “name” references that can decrease when the object bound to a name changes. Further improvements in this scheme are expected in future versions of pqR.
  • Some operations are avoided completely — for example, in pqR the statement for (i in 1:10000000) ... does not actually create a vector of 10000000 integers, but simply sets i to each of these integers in turn. (A small illustration of these last two points follows this list.)
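To make those last two points concrete, here is a minimal sketch in ordinary R. The sizes are arbitrary, and tracemem is only available when R is built with memory profiling enabled:

    # The explicit sequence occupies about 40 megabytes of memory:
    n <- 10000000
    print(object.size(1:n), units = "Mb")

    # In pqR, this loop is meant to run without ever allocating that
    # vector; i simply takes each integer value in turn.
    system.time(for (i in 1:n) NULL)

    # Copying behaviour (relevant to the reference-counting change) can
    # be observed with tracemem(), where available:
    v <- numeric(1e6)
    tracemem(v)
    w <- v          # no copy yet; both names refer to the same object
    w[1] <- 0       # a copy is made here, when one of them is modified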

There are also many detailed improvements in pqR that decrease general interpretive overhead or speed up particular operations.

I will be posting more soon about many of these improvements, and about the gain in performance obtained using pqR. For the moment, a quick idea of how much improvement pqR gives on simple operations can be obtained from the graph below (click to enlarge):

[Figure: relative run times of the nine test programs under pqR and under the R Core releases 2.11.1 through 3.0.1]

This shows the relative run times (on an Intel X5680 processor) of nine simple test programs (from the 2013-06-18 version of my R speed tests), using pqR, and using all releases of R by the R Core Team from 2.11.1 to 3.0.1. These programs mostly operate on small objects, doing simple operations, so this is a test of general interpretive overhead. A single thread was used for pqR (there is not much scope in these programs for parallelizing numeric computations).

As one can see, there has been little change in the speed of interpreted programs since R-2.12.0, when some modifications that I proposed were incorporated into the R Core versions (the R Core Team declined to incorporate many other modifications I suggested). The speed of compiled programs has improved a bit since the compiler was introduced in R-2.13.0. The gain for interpreted programs from using pqR is almost as large as the gain from compilation. pqR also improves the speed of compiled programs, though by less than for interpreted programs, so the advantage of compilation is smaller in pqR. As I’ll discuss in future posts, for some operations pqR is substantially faster when the compiler is not used. In particular, parallel computation in helper threads does not occur for operations started from compiled R code.

For some operations, the speed-up from using pqR is much larger than seen in the graph above. For example, vector-matrix multiplies are over ten times faster in pqR than in R-2.15.0 or R-3.0.1 (see here for the main reason why, though pqR solves the problem differently than suggested there).
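To give a sense of the operation in question, here is a minimal timing sketch in ordinary R (the size and repetition count are arbitrary):

    # A vector-matrix multiply, where the result has one small dimension.
    n <- 2000
    v <- runif(n)
    M <- matrix(runif(n * n), n, n)

    system.time(for (k in 1:100) r <- v %*% M)           # 1 x n result
    system.time(for (k in 1:100) r2 <- crossprod(M, v))  # n x 1 result, equal to t(M) %*% v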

The speed improvement from using pqR will therefore vary considerably from one R program to another. I encourage readers who are comfortable installing R from source on a Unix/Linux system to try it out, and let me know what performance improvements (and of course bugs) you find for your programs.  You can leave a comment on this post, or mail me at radfordneal@gmail.com.

You can get pqR here, where you can also find links to the source repository, a place to report bugs and other issues, and a wiki that lists systems where pqR has been tested, plus a few packages known to have problems with pqR. As of now, pqR has not been tested on Windows and Mac systems, and compiled versions for those systems are not available, but I hope they will be available fairly soon.

UPDATE: You can read more about pqR in my posts on parallel computation with helper threads in pqR, comparing the speed of pqR with R-2.15.0 and R-3.0.1, how pqR makes programs faster by not doing things, and fixing R’s NAMED problems in pqR.

2013-06-22 at 2:32 pm 35 comments

GRIMS — General R Interface for Markov Sampling

I have released a (very) preliminary version of my new MCMC software in R, which I’m calling GRIMS, for General R Interface for Markov Sampling. You can get it here.

This software differs from other more-or-less general MCMC packages in several respects, all but one of which make it, I think, a much better tool for serious MCMC applications. Here are some highlights: (more…)

2011-06-26 at 12:58 am 5 comments

Two textbooks on probability using R

This fall, I’ll be teaching a second-year course on Probability with Computer Applications, which is required for Computer Science majors.  I’ve taught this before, but that was five years ago, so I’ve been looking to see what new textbooks would be suitable.  The course aims not just to use computer science applications as examples, but also to reinforce concepts of probability with programs, and to show how simulation can be used to solve problems that aren’t easily solved analytically. I’ve used R for the programming part, and plan to again, so I was naturally interested in two recent textbooks that seemed to have similar aims:

Introduction to Probability with R, Kenneth Baclawski, Chapman & Hall / CRC.

Probability with R: An Introduction with Computer Science Applications, Jane M. Horgan, Wiley.

I’ve now had a look at both of these textbooks.  Unfortunately, they are both seriously flawed.  Even more unfortunately, although some of the flaws in these books are particularly striking, I’ve seen similar, if usually less serious, problems in many other textbooks. (more…)

2011-06-18 at 11:34 pm 24 comments

New patches to speed up R 2.13.0

I have now released a new collection of 30 patches to speed up R version 2.13.0. You can get them here.

Assessing how much these patches speed up R is difficult. First of all, the speedup varies tremendously with the type of program. It also varies quite a bit with the machine and compiler used to run R. Finally, it varies in apparently random ways — changing code in one part of the R interpreter can change the speed of operations that never use the modified code by plus or minus 5% or more.  This is presumably due to the change altering the exact addresses of other code segments, with consequent effects on alignment of memory fetches or on cache behaviour.

Nevertheless, here is a comparison of R 2.13.0 without modification and with all my patches applied, with and without compilation of R functions. The tests were done with an Intel X5680 processor running at 3.33GHz in 64-bit mode using gcc 4.4.4 under Red Hat Linux with default R configuration parameters. The tests use my suite of speed tests for R.
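For readers who want to see this variability on their own machines, here is a minimal sketch of repeating a single timing measurement (the test expression below is arbitrary, not one from the test suite):

    # Repeat one timing several times; individual runs can differ
    # noticeably even when nothing else has changed.
    times <- replicate(5, system.time(for (i in 1:200000) x <- 1 + 1)["elapsed"])
    round(times, 3)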

Here are some highlights: (more…)

2011-06-09 at 3:03 pm 5 comments

Slowing down matrix multiplication in R

After I realized that some aspects of R’s implementation are rather inefficient, one of the first things I looked at was matrix multiplication.  There I found a huge performance penalty for many matrix multiplies, a penalty which remains in the current version, 2.13.0.  As discussed below, eliminating this penalty speeds up long vector dot products by a factor of 9.5 (on my new machine), and other operations where the result matrix has at least one small dimension are sped up by factors that are somewhat smaller, but still substantial. There’s a long story behind why R’s matrix multiplies are so slow… (more…)
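As a minimal sketch of the long-vector dot product case mentioned above (sizes and repetition counts arbitrary; the element-wise version is only a rough point of comparison, not the fix discussed in the post):

    n <- 1e7
    x <- runif(n)
    y <- runif(n)

    system.time(for (k in 1:10) d1 <- x %*% y)     # 1 x 1 result via the matrix multiply routine
    system.time(for (k in 1:10) d2 <- sum(x * y))  # the same value via elementwise operations
    all.equal(as.numeric(d1), d2)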

2011-05-21 at 10:52 pm 12 comments

Speed tests for R — and a look at the compiler

I’ve gotten back to work on speeding up R, starting with improving my suite of speed tests.  Among other new features, this suite allows one to easily try out the “byte-code” compiler that is now a standard part of the latest release of R, version 2.13.0. You can get the suite here.

I’ve been running these tests on my new workstation, which has a six-core Intel X5680 processor running at 3.33GHz. Unfortunately, it’s clear that things run somewhat slower when all the cores are used at once, so for consistency one needs to do the speed tests using just one core. (Or one needs some more elaborate, and unclear, protocol for testing the speed of R in a multicore environment.) I haven’t figured out how to get Red Hat Linux to compile 32-bit applications yet, so all the tests are in a 64-bit environment.

I’ve started with comparing the speed of R-2.13.0 with and without functions being compiled, and with comparing R-2.13.0 (without the compiler) to R-2.11.1, which was the last release before some of my speed improvements were incorporated.  A plot of the results is here. (more…)
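For anyone who wants to try the byte-code compiler themselves, here is a minimal sketch using the standard compiler package (the test function below is arbitrary, not one of the suite’s tests):

    library(compiler)

    f <- function(n) { s <- 0; for (i in seq_len(n)) s <- s + i; s }
    fc <- cmpfun(f)        # byte-compiled version of the same function

    system.time(f(1e6))
    system.time(fc(1e6))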

2011-05-13 at 9:16 am 9 comments
