FBM implements several general-purpose Markov chain sampling methods, such as Metropolis updates, Hamiltonian (Hybrid) Monte Carlo, and slice sampling. These methods can be applied to distributions defined by simple formulas, including posterior distributions for simple Bayesian models. Several more specialized modules have been written that implement posterior distributions for more complex models, including Bayesian neural networks, Gaussian process models, and mixture models (including Dirichlet process mixture models).

Compared to the version from 2004, additions and improvements include:

- A more user-friendly syntax for specifying neural network architectures.
- A somewhat rudimentary module for simulating molecular systems with the Lennard-Jones potential, in NVT and NPT ensembles. This is meant to support research into how well MCMC methods work in this application area.
- A module implementing a specialized Bayesian model for inferring the location of sources of atmospheric contamination.
- An ‘slevel’ Markov chain operation, which supports updating the Uniform(0,1) value used to make accept/reject decisions as part of the Markov chain state.
- Numerous detailed improvements and bug fixes.

One thing to note is that the computation times in the examples are for a long-obsolete computer. Compute times on a modern desktop computer will likely be smaller by a factor of ten or more.

This software comes with extensive documentation, including tutorial examples, which may be accessed here.

Canal du Midi, Toulouse, July 2019. Nikon FG, Nikon Series E 50mm 1:1.8 lens, Kodak Color Plus 200 film, Nikon Coolscan V.

Canal du Midi, Toulouse, July 2019. Nikon FG, Nikon Series E 50mm 1:1.8 lens, Kodak Color Plus 200 film, Nikon Coolscan V.

Note that this version very likely has various bugs — mostly showing up only if you use automatic differentiation, I hope.

You can read about the automatic differentiation facilities here, or with help(Gradient) after installing the test version. Below are a few examples to show a bit of what you can do.

You can get derivatives (gradients) using the new `with gradient` construct. Here’s a simple example:

    > with gradient (abc=7) abc^2+5
    [1] 54
    attr(,"gradient")
    [1] 14

The `with gradient` construct creates a new local environment with a variable `abc`, which is initialized to 7. The expression `abc^2+5` is evaluated in this environment, and the result is returned along with its derivative with respect to `abc`, attached as a `gradient` attribute.

The expression whose gradient is found needn’t be so simple. Here’s another example:

    > f <- function (x)
    + {
    +     s <- 0
    +     for (i in 1:length(x)) s <- s + i*x[i]
    +     s
    + }
    > vec <- c(3,1,9)
    > with gradient (vec) f(vec)
    [1] 32
    attr(,"gradient")
         [,1] [,2] [,3]
    [1,]    1    2    3
    > with gradient (vec) f(vec^2)
    [1] 254
    attr(,"gradient")
         [,1] [,2] [,3]
    [1,]    6    4   54

Here, `with gradient (vec)` initializes the `vec` variable to the value of the existing variable of that name (same as if `(vec=vec)` had been written).

The output of `with gradient` matches what the existing `numericDeriv` function gives:

    > numericDeriv (quote (f(vec^2)), "vec")
    [1] 254
    attr(,"gradient")
         [,1] [,2] [,3]
    [1,]    6    4   54

However, `with gradient` computes the derivatives exactly (apart from roundoff error), whereas `numericDeriv` computes an approximation using finite differences.

The output of both `with gradient` and `numericDeriv` in this example is a “Jacobian” matrix, with one row in this case, since the value of `f(vec^2)` is a scalar, and three columns, giving the derivatives with respect to the three elements of `vec`.

When finding the gradient with respect to many variables (or vector elements), `numericDeriv` may be very slow, since getting numerical derivatives with respect to N variables requires at least N+1 evaluations of the expression. In contrast, `with gradient` evaluates the expression once, either computing gradients as it goes (called “forward mode” differentiation) or recording how the value was computed and later finding the gradient by “backpropagation” (also called “reverse mode” differentiation). The pqR implementation tries to choose which of these modes is best, though at present forward mode is always used for some operations for which reverse mode hasn’t been implemented yet.
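The difference between the two modes can be sketched with a toy implementation (in Python for brevity; this is purely illustrative, not how pqR actually implements automatic differentiation). Forward mode carries a derivative along with each value, and needs one pass per input variable; reverse mode records a “tape” during a single forward evaluation and then propagates derivatives backwards:

```python
# Toy automatic differentiation in both modes (illustrative sketch only).

class Dual:
    """Forward mode: carry (value, derivative) pairs through the computation."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, o):
        return Dual(self.val + o.val, self.dot + o.dot)
    def __mul__(self, o):
        return Dual(self.val * o.val, self.dot * o.val + self.val * o.dot)

def forward_grad(f, x):
    """Gradient of scalar f at vector x: one forward pass per input variable."""
    g = []
    for i in range(len(x)):
        args = [Dual(x[j], 1.0 if j == i else 0.0) for j in range(len(x))]
        g.append(f(args).dot)
    return g

def reverse_grad(f, x):
    """Reverse mode: record a tape in one forward pass, then backpropagate."""
    tape = []  # entries: (result, [(parent, partial derivative), ...])
    class Var:
        def __init__(self, val):
            self.val, self.adj = val, 0.0
        def __add__(self, o):
            r = Var(self.val + o.val)
            tape.append((r, [(self, 1.0), (o, 1.0)])); return r
        def __mul__(self, o):
            r = Var(self.val * o.val)
            tape.append((r, [(self, o.val), (o, self.val)])); return r
    args = [Var(v) for v in x]
    out = f(args)
    out.adj = 1.0                       # d(out)/d(out) = 1
    for r, parents in reversed(tape):   # backpropagation
        for p, d in parents:
            p.adj += r.adj * d
    return [a.adj for a in args]

f = lambda v: v[0] * v[1] + v[1] * v[2]   # example scalar function
print(forward_grad(f, [3.0, 1.0, 9.0]))   # [1.0, 12.0, 1.0]
print(reverse_grad(f, [3.0, 1.0, 9.0]))   # same gradient, one evaluation
```

For a scalar-valued expression of many variables, reverse mode does one evaluation where forward mode does one per variable, which is why the choice of mode matters so much below.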

Neural network training with multiple hidden layers is one application where reverse mode differentiation is crucial. Here’s a function to compute the output of a neural network with two hidden layers, given an input vector `x` and network parameters in `L`:

    nnet <- function (x, L)
    {
        h1 <- tanh (L$b1 + x %*% L$W1)
        h2 <- tanh (L$b2 + h1 %*% L$W2)
        as.vector (L$b3 + h2 %*% L$W3)
    }

If the two hidden layers both have 10 units, so there are 100 weights in L$W2, naive forward mode differentiation would record a 10-by-100 element Jacobian matrix associated with `h2`. But reverse mode differentiation starting from the final scalar output value only needs to compute Jacobian matrices with a single row (and up to 100 columns), as it works backwards.

The current automatic differentiation implementation in pqR manages to achieve this automatically (though it doesn’t yet when some other operations are done).

Here’s how this network function could be used for gradient descent training to minimize squared error when predicting responses in a training set, using a network with `n1` and `n2` units in the hidden layers:

    train <- function (X, y, n1, n2, iters, step, sd=0.1)
    {
        # Initialize parameters randomly.

        L <- list()
        n0 <- ncol(X)
        L$b1 <- rnorm (n1,sd=sd)
        L$W1 <- matrix (rnorm(n0*n1,sd=sd), n0, n1)
        L$b2 <- rnorm (n2,sd=sd)
        L$W2 <- matrix (rnorm(n1*n2,sd=sd), n1, n2)
        L$b3 <- rnorm (1,sd=sd)
        L$W3 <- rnorm (n2,sd=sd)

        # Train for 'iters' iterations to minimize squared
        # error predicting y.

        for (i in 1:iters) {

            # Find gradient of squared error (summed over all
            # training examples) with respect to the parameters.

            r <- with gradient (L) {
                e <- 0
                for (i in 1:nrow(X)) {
                    o <- nnet (X[i,], L)
                    e <- e + (y[i]-o)^2
                }
                e
            }
            g <- attr(r,"gradient")

            if (i %% 100 == 0)
                cat ("Iteration", i, ": Error", round(r,4), "\n")

            # Update parameters to reduce squared error.

            L$b1 <- L$b1 - step * as.vector (g$b1)
            L$W1 <- L$W1 - step * as.vector (g$W1)
            L$b2 <- L$b2 - step * as.vector (g$b2)
            L$W2 <- L$W2 - step * as.vector (g$W2)
            L$b3 <- L$b3 - step * as.vector (g$b3)
            L$W3 <- L$W3 - step * as.vector (g$W3)
        }

        L
    }

Note that when the gradient is with respect to a list (`L` here), the gradient is a list of the corresponding Jacobian matrices (which here have just one row).

Here’s code to use this `train` function to learn an example function of two inputs, based on 100 slightly noisy examples:

    set.seed(1)

    truef <- function (X) cos (2*sqrt(X[,1]*X[,2])) - 2*(0.4-X[,1])^2

    N <- 100
    X <- cbind (runif(N), runif(N))
    y <- truef (X) + rnorm(N,sd=0.01)

    print (system.time (L <- train (X, y, 10, 10, 30000, 0.001)))

The 30000 training iterations (each looking at all 100 training cases) take 30 seconds on my computer.

The result is pretty good, as seen from the contour plots below:

You can get the source for this and other examples from a repository of pqR automatic differentiation examples.

I’ll be talking about automatic differentiation for pqR at the RIOT workshop held in Toulouse on July 11, in conjunction with UseR! 2019.

Here, I’ll give an overview of how the new scheme works, and present some performance comparisons with R-3.5.1. Some more details are presented in this talk.

The garbage collector is implemented as a separate module, which could also be of use in projects unrelated to R.

Objects in the new scheme are stored in “segments” — many objects per segment for small objects, just one per segment for big objects. This allows objects to be identified by a segment identifier and an offset within a segment (measured in “chunks”, currently 16 bytes in size), which together fit in 32 bits regardless of the size of a machine address.
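As a rough sketch of the idea (with made-up field widths; pqR's actual layout may split the 32 bits differently), a segment identifier and a chunk offset can be packed into a single 32-bit word like this:

```python
# Hypothetical 32-bit "compressed pointer": 22 bits of segment index
# and 10 bits of chunk offset, with 16-byte chunks. Illustrative only;
# pqR's real field sizes are not specified here.
SEG_BITS, OFF_BITS = 22, 10
CHUNK = 16  # bytes per chunk

def compress(seg, offset_bytes):
    assert offset_bytes % CHUNK == 0      # objects are chunk-aligned
    chunk = offset_bytes // CHUNK
    assert seg < (1 << SEG_BITS) and chunk < (1 << OFF_BITS)
    return (seg << OFF_BITS) | chunk

def decompress(cp):
    return cp >> OFF_BITS, (cp & ((1 << OFF_BITS) - 1)) * CHUNK

cp = compress(1234, 48)      # object at byte offset 48 within segment 1234
assert cp < 2**32            # fits in 32 bits regardless of address size
assert decompress(cp) == (1234, 48)
```

The point is only that (segment, offset-in-chunks) needs far fewer bits than a 64-bit machine address, since chunk alignment discards the low bits and segments bound the high ones.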

It’s possible to configure pqR to use such 32-bit “compressed pointers” for all references, which reduces memory usage considerably on machines with 64-bit addresses, though at a cost of up to a 40% slowdown for scalar code dominated by interpretive overhead. (There are also compatibility issues with Rcpp and Rstudio when compressed pointers are used). The default is still to use machine addresses for references in R objects, but compressed pointers are always used internally by the garbage collector.

The new scheme reduces the space occupied by an R object even if references in the object do not use compressed pointers. The garbage collector needs to keep track of several sets of objects — for example, newly-allocated objects versus objects that were retained after the previous garbage collection. For this purpose, the old R Core garbage collector requires that every object contain two pointers used to implement such sets as doubly-linked lists. The new pqR garbage collector instead represents these sets much more compactly as bit vectors. If pqR is configured so object references are not done with compressed pointers, each object needs to store a compressed pointer to itself to allow the garbage collector to access these bit vectors, but that takes only 4 bytes, much less than the 16 bytes needed for two 64-bit pointers.
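The space trade-off is easy to see in a minimal sketch (Python for illustration; the real collector does this in C, with bit-vector structures associated with segments). Membership of an object in a set becomes one bit indexed by the object's compressed pointer, instead of two list pointers stored inside every object:

```python
# Set-of-objects representation for a GC, sketched with small numbers.
# Doubly-linked-list style: every object carries two 8-byte pointers
# (16 bytes each on a 64-bit machine) just to belong to one GC set.
# Bit-vector style: one bit per object, stored outside the objects.

class BitSet:
    def __init__(self, nobjects):
        self.bits = bytearray((nobjects + 7) // 8)
    def add(self, i):
        self.bits[i >> 3] |= 1 << (i & 7)
    def discard(self, i):
        self.bits[i >> 3] &= ~(1 << (i & 7))
    def __contains__(self, i):
        return bool(self.bits[i >> 3] & (1 << (i & 7)))

marked = BitSet(1000)          # e.g. the set of objects marked for retention
marked.add(42); marked.add(999)
assert 42 in marked and 999 in marked and 7 not in marked
marked.discard(42)
assert 42 not in marked

assert len(marked.bits) == 125   # 1000 objects: 125 bytes in total,
                                 # vs. 16000 bytes of list pointers
```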

A full garbage collection requires that all accessible objects be scanned, and marked for retention. This can potentially result in accesses scattered over large areas of memory, many of which will not be to data in cache. On modern computers, an access to a memory location not in a cache can be hundreds of times slower than an in-cache reference.

This problem is reduced by the more compact layout of objects in pqR (considerably more compact if compressed pointers are used, and still somewhat more compact if not), since if the total memory occupied is smaller, a larger fraction of it will fit in cache. Locality of reference is also important, since an access to a location near one accessed recently will likely be satisfied from cache.

The use by pqR of bit vectors to represent membership in sets, including the set of objects that have been marked for retention, helps with locality. These bit vectors are stored in 64-byte structures associated with segments, allocated in blocks, which should often result in good locality of access. In contrast, with the old R Core garbage collector these operations involve accessing and writing to a “mark” bit in the object header and accessing and modifying the pointers in an object used for the doubly-linked lists. These accesses will be scattered over the whole area of memory used to hold objects.

It’s difficult to conduct meaningful speed comparisons of the garbage collector alone between pqR and R Core implementations, since they differ not just in their garbage collectors, but also in how many objects they allocate, and how many objects exist that may need to be scanned during garbage collection.

Regarding the last point, the R Core implementation is at a disadvantage because in its recommended configuration all the base R functions will be byte-compiled, increasing the number of objects that need to be scanned in a full garbage collection, whereas byte-compilation is not recommended for pqR. This is not a difference in the garbage collectors themselves, however.

But one can get some insight by looking at the performance of R code for which garbage collection speed might be expected to be an issue. In two tests I show below, garbage collection is more significant in the second than in the first, because in the second test many objects are allocated, retained for some time, but finally recovered.

Here are the two tests run (separately) with pqR-2018-11-18:

    > a<-c(3,4,1); r <- rep(list(0),100)
    > system.time(for (i in 1:100000) for (j in 1:100)
    +    r[[j]] <- list(x1=a+1,x2=a-1,x3=a+2,x4=a-2))
       user  system elapsed
      5.993   0.000   5.993

    > a<-c(3,4,1); r <- rep(list(0),100000)
    > system.time(for (i in 1:100) for (j in 1:100000)
    +    r[[j]] <- list(x1=a+1,x2=a-1,x3=a+2,x4=a-2))
       user  system elapsed
      8.217   0.041   8.257

And here are the same tests run with R-3.5.1 (with the JIT enabled):

    > a<-c(3,4,1); r <- rep(list(0),100)
    > system.time(for (i in 1:100000) for (j in 1:100)
    +    r[[j]] <- list(x1=a+1,x2=a-1,x3=a+2,x4=a-2))
       user  system elapsed
      5.238   0.008   5.246

    > a<-c(3,4,1); r <- rep(list(0),100000)
    > system.time(for (i in 1:100) for (j in 1:100000)
    +    r[[j]] <- list(x1=a+1,x2=a-1,x3=a+2,x4=a-2))
       user  system elapsed
     14.098   0.031  14.129

One can see that R-3.5.1 is a bit faster than pqR-2018-11-18 for the first test, but much slower for the second test. The difference seems to be due to pqR’s faster garbage collector. For the first test, the Linux “perf record” command reveals that both implementations spend less than 5% of their time in the garbage collector (much less than 5% for pqR). For the second test, about 57% of the compute time for R-3.5.1 is spent in the garbage collector, whereas pqR-2018-11-18 spends about 12% of its time in the garbage collector during this test. The faster garbage collection seen here for pqR is presumably due to factors such as locality discussed above.

The R Core garbage collector is also slower for a more specific reason, involving handling of character strings. Both pqR and R Core garbage collectors are of the “generational” sort, in which most garbage collections only attempt to recover unused objects that were recently allocated, and consequently do not have to scan old objects that were allocated long ago (they are recovered if no longer used only in the infrequent full collections). But in the R Core implementation, even these partial garbage collections scan *all* character strings. Consequently, as more strings are kept around, all operations that allocate memory (and hence may trigger garbage collection) become slower.

Here’s an illustration. First, some times with pqR-2018-11-18:

    > a<-seq(0,1,length=100); n <- 1000000
    > system.time(for (i in 1:n) r <- list(x=a+1,y=a-1))
       user  system elapsed
      0.477   0.004   0.480
    > x<-paste("a",1:1000000,"a")
    > system.time(for (i in 1:n) r <- list(x=a+1,y=a-1))
       user  system elapsed
      0.869   0.004   0.873
    > y<-paste("b",1:1000000,"b")
    > system.time(for (i in 1:n) r <- list(x=a+1,y=a-1))
       user  system elapsed
      0.899   0.000   0.898
    > z<-paste("c",1:1000000,"c")
    > system.time(for (i in 1:n) r <- list(x=a+1,y=a-1))
       user  system elapsed
      0.940   0.003   0.943

And here are the times with R-3.5.1:

    > a<-seq(0,1,length=100); n <- 1000000
    > system.time(for (i in 1:n) r <- list(x=a+1,y=a-1))
       user  system elapsed
      0.504   0.008   0.512
    > x<-paste("a",1:1000000,"a")
    > system.time(for (i in 1:n) r <- list(x=a+1,y=a-1))
       user  system elapsed
      1.975   0.000   1.975
    > y<-paste("b",1:1000000,"b")
    > system.time(for (i in 1:n) r <- list(x=a+1,y=a-1))
       user  system elapsed
      2.857   0.005   2.861
    > z<-paste("c",1:1000000,"c")
    > system.time(for (i in 1:n) r <- list(x=a+1,y=a-1))
       user  system elapsed
      4.073   0.009   4.082

As more strings are created (with references kept to them), the list creation operations slow down only a bit in pqR, but they slow down enormously in R-3.5.1, as every garbage collection (even partial ones) has to scan an increasing number of character strings.

Probably most new R programmers have encountered the following problem. They put some statements like the following in a file, say “sltp.r”:

    sltp <- iris$Sepal.Width < iris$Petal.Width
    if (any(sltp)) print(iris[sltp,])
    else cat("Sepal width is never less than petal width\n")

They then try to execute this script with “source”, and get this result:

    > source("sltp.r")
    Error in source("sltp.r") : sltp.r:3:1: unexpected 'else'
    2: if (any(sltp)) print(iris[sltp,])
    3: else
       ^

Trying to run the script with Rscript has the same problem:

    $ Rscript sltp.r
    Error: unexpected 'else' in "else"
    Execution halted
    $

This is puzzling, since the same code works fine inside a function whose body is in curly brackets:

    > f <- function () {
    +     sltp <- iris$Sepal.Width < iris$Petal.Width
    +     if (any(sltp)) print(iris[sltp,])
    +     else cat("Sepal width is never less than petal width\n")
    + }
    > f()
    Sepal width is never less than petal width

Now, there’s a reason for this. When entering expressions interactively, R is supposed to print the value of the expression as soon as it’s been entered. But how can it know whether an “if” statement at the end of a line with no else clause is complete (there is no “else” part), or whether instead the user intends to enter an “else” clause on the next line?

One can imagine kludges to get around this in many cases, but the following example shows that there’s not going to be a general solution:

    > a <- 1:9; for (i in 1:9) if (a[i]>5) print(a[i])
    [1] 6
    [1] 7
    [1] 8
    [1] 9

After entering the first line of this transcript, the user expects to immediately see the output shown. They don’t expect to be asked to enter more text in case that text might be an “else” clause.

Situations like the one above are fairly rare, however. One *could* decide that the user has to enter a blank line after such an “if” statement to confirm that there is no “else” clause — that’s what Python does. But let’s suppose we don’t want to do that. Then we have to evaluate the expression containing the “if” immediately, and consider a following “else” to be an error.

But for *non-interactive* input, there’s no need to disallow a top-level “else” at the start of a line. Yet it has always been an error in R Core implementations. It no longer is an error in pqR — in input from a file read by Rscript, by “source”, or by “parse”, as well as when “parse” is applied to a vector of character strings, it is now OK for an “else” to appear at the start of a line. This avoids annoyances when writing scripts, and also problems when pasting code into a script that was copied from inside a function with curly brackets (where “else” at the start of a line has always been legal).

I don’t know whether this problem has persisted so long in R Core implementations just due to inertia, or because it’s hard to modify the R Core parser to do this. It was fairly easy to modify the pqR parser, though there were some picky details involved.

The new pqR parser has also facilitated the introduction of new operators in pqR. The `..` operator for reliable sequence generation was introduced previously. The latest release introduces `!` and `!!` as operator forms of “paste0” and “paste”. It might be hard to modify the R Core parser to include these operators, because their correct parsing requires use of context — “`..`” is allowed as part of an identifier (but only at the start or end in pqR), and “`!!`” can be two successive unary-not operators, but not in a context where the new `!!` operator can legally appear.

This leads into the biggest difference between the new pqR parser and the old R Core parser.

The R Core parser is a bottom-up parser automatically generated from a grammar using the Bison/Yacc parser generator. Automatically generated? Sounds wonderful! But one of the inside secrets of computer science is that, despite decades of development, automatic parser generators are not actually very useful. As an illustration of this, gcc previously used such a parser, but abandoned it.

The generally-preferred method is to manually write a top-down “recursive-descent” parser. In theory, this shouldn’t be. Top-down parsers with k-symbol lookahead can handle grammars in the class LL(k); bottom-up parsers with k-symbol lookahead can handle grammars in the class LR(k), which is larger than LL(k). But in practice, most programming languages are in the LL(k) class if one assumes low-level tokenization has been handled, and dealing with funny contextual issues like the pqR `..` and `!!` operators is easier in a top-down parser. Furthermore, the advantage of just writing a grammar and having code for the parser generated automatically is largely illusory, since the code for recursive-descent parsers reads almost like a grammar, with one recursive function for each non-terminal symbol. The code gets more cluttered when one puts in the semantics, but this aspect is also easier in a recursive-descent parser than when using a parser generator.
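To make the “reads almost like a grammar” point concrete, here is a minimal recursive-descent parser for arithmetic expressions (a toy in Python, unrelated to pqR’s actual C parser), with one recursive function per non-terminal, each mirroring its grammar rule:

```python
# Grammar:  expr   -> term { "+" term }
#           term   -> factor { "*" factor }
#           factor -> NUMBER | "(" expr ")"
# One recursive function per non-terminal; the code mirrors the grammar.

import re

def tokenize(s):
    return re.findall(r'\d+|[+*()]', s)

def parse(s):
    toks, pos = tokenize(s), [0]
    def peek():
        return toks[pos[0]] if pos[0] < len(toks) else None
    def advance():
        pos[0] += 1
        return toks[pos[0] - 1]
    def expr():                      # expr -> term { "+" term }
        v = term()
        while peek() == '+':
            advance(); v += term()
        return v
    def term():                      # term -> factor { "*" factor }
        v = factor()
        while peek() == '*':
            advance(); v *= factor()
        return v
    def factor():                    # factor -> NUMBER | "(" expr ")"
        if peek() == '(':
            advance()
            v = expr()
            assert advance() == ')', "expected ')'"
            return v
        return int(advance())
    v = expr()
    assert pos[0] == len(toks), "trailing tokens"
    return v

print(parse("2+3*(4+1)"))   # 17
```

This toy evaluates as it parses; a real parser would build a syntax tree instead, but the structure of the functions would be the same.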

The new pqR parser is faster than the R Core parser, but typically only moderately so. One exception, however, is when parsing is done for Rscript, and an expression (e.g., a function definition) extends over many lines — R Core implementations take time growing as the *square* of the number of lines, whereas pqR takes linear time (as one would expect).

This gross inefficiency is due not just to the R Core parser itself but also to how it interfaces to R’s “read-eval-print loop” (the “REPL”). The R Core implementation used in Rscript first tries parsing the first line of input, and checks whether the parser says it has a complete expression. If not, it tries parsing the first two lines of input. If that doesn’t give a complete expression, it tries parsing the first three lines of input. And so forth.
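The cost of that strategy is easy to tally in a sketch (hypothetical pseudologic in Python, not the actual C code): if parsing a k-line prefix costs time proportional to k, then restarting from scratch after every new line makes reading an n-line expression cost about n²/2 line-parses in total, while a parser that keeps its state between lines parses each line once:

```python
# Cost model for the two REPL reading strategies, counting line-parses.

def lines_parsed_quadratic(n):
    # R Core Rscript style: after reading line k, re-parse all k lines.
    total = 0
    for k in range(1, n + 1):
        total += k
    return total            # n*(n+1)/2 line-parses

def lines_parsed_incremental(n):
    # A parser that retains its state: each line is parsed once.
    return n

# Doubling the expression length quadruples the work in the first
# strategy, but only doubles it in the second.
assert lines_parsed_quadratic(600) / lines_parsed_quadratic(300) > 3.9
assert lines_parsed_incremental(600) / lines_parsed_incremental(300) == 2
```

This simple model matches the shape of the timings shown below: the R Core times grow roughly fourfold from 300 to 600 lines, while pqR’s stay flat.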

Here’s an illustration of the resulting inefficiency. File x2.r contains the following:

    f <- function (a)
    {
        a <- a+1; a <- a+1; a <- a+1; a <- a+1
        a <- a+1; a <- a+1; a <- a+1; a <- a+1
    }
    print(f(0))

Files x300.r and x600.r contain the same thing except that there are 300 and 600 repetitions of the line in the function definition rather than two repetitions.

The time for running these scripts with R-3.5.1’s version of Rscript can be seen here:

    $ time R-3.5.1-gcc8/bin/Rscript x2.r
    [1] 8

    real    0m0.109s
    user    0m0.096s
    sys     0m0.014s
    $ time R-3.5.1-gcc8/bin/Rscript x300.r
    [1] 1200

    real    0m0.866s
    user    0m0.839s
    sys     0m0.029s
    $ time R-3.5.1-gcc8/bin/Rscript x600.r
    [1] 2400

    real    0m2.554s
    user    0m2.523s
    sys     0m0.032s

And for comparison, here are the times with pqR-2018-11-18’s version of Rscript:

    $ time pqR-2018-11-18-gcc8/bin/Rscript x2.r
    [1] 8

    real    0m0.070s
    user    0m0.052s
    sys     0m0.019s
    $ time pqR-2018-11-18-gcc8/bin/Rscript x300.r
    [1] 1200

    real    0m0.072s
    user    0m0.062s
    sys     0m0.012s
    $ time pqR-2018-11-18-gcc8/bin/Rscript x600.r
    [1] 2400

    real    0m0.069s
    user    0m0.058s
    sys     0m0.012s

One can see that pqR is faster even for x2.r, but more importantly, with pqR the time for x300.r and x600.r is negligibly different, as one would expect since even 2400 additions should take negligible time. The huge increase in time with R-3.5.1 is due to the quadratic growth in the time to parse the function definition.

This version has some major speed improvements, as well as some new features. I’ll detail some of these improvements in future posts. Here, I’ll just mention a few things to show the flavour of the improvements in this release, and why you might be interested in pqR as an alternative to the R Core implementation.

One landmark reached in this release is that it is no longer advisable to use the byte-code compiler in pqR. The speed of direct interpretation of R code has now been improved to the point where it is about as fast at executing simple scalar code as the byte-code interpreter. Eliminating the byte-code compiler simplifies the overall implementation, and avoids possible semantic differences between interpreted and byte-compiled code. It is also important for pqR because some pqR optimizations and some new pqR features are not implemented in byte-code. For example, only the interpreter does optimizations such as deferring vector operations so that they may automatically be merged with other operations or be done in parallel when multiple cores are available.

Some vector operations have been substantially sped up compared to the previous release, pqR-2017-06-09. The improvement compared to R-3.5.1 can be even greater. Here is an example of replacing a subset of vector elements, benchmarked on an Intel “Skylake” processor, with both pqR-2018-11-18 and R-3.5.1 compiled from source with gcc 8.2.0 at optimization level -O3:

Here’s R-3.5.1:

    > a <- numeric(20000)
    > system.time(for (i in 1:100000) a[2:19999] <- 3.1)
       user  system elapsed
      4.211   0.148   4.360

And here’s pqR-2018-11-18:

    > a <- numeric(20000)
    > system.time(for (i in 1:100000) a[2:19999] <- 3.1)
       user  system elapsed
      0.256   0.000   0.257

So the current R Core implementation is 17 times slower than pqR for this replacement operation.

The advantage of pqR isn’t always this large, but many vector operations are sped up by smaller but still significant factors. An example:

With R-3.5.1:

    > a <- seq(0,1,length=2000); b <- seq(1,0,length=2000)
    > system.time (for (i in 1:100000) {
    +     d <- abs(a-b); r <- sum (d>0.4 & d<0.7) })
       user  system elapsed
      1.215   0.015   1.231

With pqR-2018-11-18:

    > a <- seq(0,1,length=2000); b <- seq(1,0,length=2000)
    > system.time (for (i in 1:100000) {
    +     d <- abs(a-b); r <- sum (d>0.4 & d<0.7) })
       user  system elapsed
      0.654   0.008   0.662

So for this example, pqR is almost twice as fast.

For some operations, pqR’s implementation has lower asymptotic time complexity, and so can be enormously faster. An example is the following convenient coding pattern that R programmers are currently warned to avoid:

With R-3.5.1:

    > n <- 200000; a <- numeric(0);
    > system.time (for (i in 1:n) a <- c(a,(i+1)^2))
       user  system elapsed
     30.387   0.223  30.612

With pqR-2018-11-18:

    > n <- 200000; a <- numeric(0);
    > system.time (for (i in 1:n) a <- c(a,(i+1)^2))
       user  system elapsed
      0.040   0.004   0.045

In R-3.5.1, extending a vector one element at a time with “c” takes time growing as n^2, as a new vector is allocated when each element is appended. With the latest version of pqR, the time grows only as n log n. In this example, that leads to pqR being 680 times faster, but the ratio could be made arbitrarily large by increasing n.
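Why leaving slack when reallocating changes the asymptotics can be seen in a small sketch (Python, purely illustrative; pqR’s actual growth rule is internal to its allocator, and yields n log n rather than the strictly linear copying of the simple geometric scheme below):

```python
# Appending n elements one at a time, counting elements copied.

def copies_exact_fit(n):
    # Allocate a new vector of exactly the right size for every append,
    # as R-3.5.1 effectively does with c(a, x): 0+1+2+...+(n-1) copies.
    return sum(k for k in range(n))

def copies_with_slack(n, factor=1.5):
    # Grow capacity by a constant factor when full: each growth copies
    # the current contents, but growths become exponentially rarer.
    copied = cap = length = 0
    for _ in range(n):
        if length == cap:
            copied += length
            cap = max(1, int(cap * factor) + 1)
        length += 1
    return copied

n = 200000
assert copies_exact_fit(n) > 1000 * copies_with_slack(n)
```

The exact-fit strategy copies about n²/2 elements in total, while any strategy that over-allocates by a growing margin copies vastly fewer, which is the essence of the 680-fold difference above.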

It’s still faster in pqR to preallocate a vector of length n, but only by about a factor of three, which would often be tolerable when writing one-off code if using “c” is more convenient.

The latest version of pqR has some new features. As for earlier pqR versions, some new features are aimed at addressing design flaws in R that lead to unreliable code, and others are aimed at making R more convenient for programming and scripting.

One new convenience feature is that the paste and paste0 operations can now be written with new `!!` and `!` operators. For example,

    > city <- "Toronto"; province <- "Ontario"
    > city ! "," !! province
    [1] "Toronto, Ontario"

The `!!` operator pastes strings together with space separation; the `!` operator pastes with no separation. Of course, `!` retains its meaning of “not” when used as a unary operator; there is no ambiguity.

I’ll be writing some more blog posts regarding improvements in pqR-2018-11-18, and regarding some improvements in earlier pqR versions that I haven’t already blogged about. Of course, you can read about these now in the pqR NEWS file.

The main disadvantage of pqR is that it is not fully compatible with the current R Core version. It is a fork of R-2.15.0, with many, but not all, later changes incorporated. This affects what packages will work with pqR.

Addressing this compatibility issue is one thing that needs to be done going forward. I’ll discuss this and other plans — notably implementing automatic differentiation — in another future blog post.

I’m open to other people getting involved in this project. Of course, you can contribute now by trying out pqR and reporting any problems in the comments here or at the pqR issues page. Performance comparisons, especially on real-world applications, are also welcome.

Finally, for the paranoid, here are the shasum values for the compressed and uncompressed tar files that you can download from pqR-project.org:

    89216dc76be23b3928c26561acc155b6e5ad32f3  pqR-2018-11-18.tar.gz
    f0ee8a37198b7e078fa1aec7dd5cda762f1a7799  pqR-2018-11-18.tar

Brickworks, Toronto, June 2018. Nikon F3, Nikkor AIS 135mm 1:2.8 lens, Kodak Portra 400 film, Nikon Coolscan V.

One major change with this version is that pqR, which was based on R-2.15.0, is now compatible with R-2.15.1. This allows for an increased number of packages in the pqR repository.

This release also has some significant speed improvements, a new form of the “for” statement, for conveniently iterating across columns or down rows of a matrix, and a new, less error-prone way for C functions to “protect” objects from garbage collection. There are also a few bug fixes (including fixes for some bugs that are also in the current R core release).

You can read more in the NEWS file, and get it from pqR-project.org.

Currently, pqR is distributed in source form only, and so you need to be comfortable compiling it yourself. It has been tested on Linux/Unix systems (with Intel/AMD, ARM, PowerPC, and SPARC processors), on Mac OS X (including macOS Sierra), and on Microsoft Windows (XP, 7, 8, 10) systems.

I plan to soon put up posts with more details on some of the features of this and the previous pqR release, as well as a post describing some of my future plans for pqR.

Nikon FG, Nikkor AF-D 35mm 1:2 lens, Kodak Portra 160 film, Nikon Coolscan V.
