You claim that consistency of this MLE is “a mathematical fact”, demonstrated by some theorem, which you don’t quote. There are lots of theorems about consistency of the MLE. They all have premises. The conclusion only holds if the premises hold. They don’t in this case.

The ML estimator for the considered likelihood function is perfectly consistent and asymptotically normal (provided t > 0). This is a mathematical fact. Consistency and asymptotic normality are theorems, not simulations. In this case, the premises hold. Naturally, the second step is to obtain a bound on the error committed by approximating the distribution of the estimator with a normal variable, and that is provided by the Berry-Esseen theorem, from which you will learn that a large n is required; alternatively, if your sample is small (30 or 100), you should at least use an Edgeworth expansion approximation instead of a normal. So the theory works well, but you need to be more careful.
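For reference, the classical Berry-Esseen bound for i.i.d. summands with finite third moment quantifies the normal-approximation error mentioned above (C is an absolute constant; this is the generic statement, not one specific to the model under discussion):

$$\sup_{x \in \mathbb{R}} \left| P\!\left( \frac{\sqrt{n}\,(\bar{X}_n - \mu)}{\sigma} \le x \right) - \Phi(x) \right| \;\le\; \frac{C\,\rho}{\sigma^{3}\sqrt{n}}, \qquad \rho = E\,|X_1 - \mu|^{3}.$$

The 1/√n rate on the right-hand side is exactly why a "large n" can be much larger than 30 or 100 when the third moment is large relative to σ³.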

The post shows clearly how misleading it is to use simulations as a poor substitute for mathematics; it does not prove inconsistency, just that sample sizes must be larger. But to show this you need a full simulation. Run a Monte Carlo with 5,000 sample draws, take n = 10,000, and then look at the results. There is nothing in the model that affects consistency or asymptotic normality.
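A minimal sketch of such a check, using an exponential-rate likelihood as a stand-in since the original model isn't specified in this thread (substitute your own log-likelihood and MLE): 5,000 Monte Carlo replications at n = 10,000, comparing the standardized MLE to a standard normal.

```r
# Monte Carlo check of asymptotic normality for the exponential-rate MLE
# (a stand-in model, not the one from the post). The MLE of the rate is
# 1/mean(x); a plug-in estimate of its asymptotic standard error is mle/sqrt(n).
set.seed(1)
rate <- 2; n <- 1e4; reps <- 5000
z <- replicate(reps, {
  x <- rexp(n, rate)
  mle <- 1 / mean(x)
  (mle - rate) / (mle / sqrt(n))   # standardized estimate
})
mean(z); sd(z)        # near 0 and 1 if the normal approximation is good
qqnorm(z); qqline(z)  # visual check against N(0,1)
```

If the Q-Q plot bends in the tails at this n, that is evidence for needing a larger n (or an Edgeworth correction), not evidence of inconsistency.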

Also, someone who is not an expert in the area might not be able to tell whether there is an error in the book. And there might be a lot of experts out there who know there is an error but are not willing to say so.

Yes, I am looking for a good introduction to probability book too.

I have only taken an introductory analysis course (very, very basic epsilon-delta proofs) and a first year of linear algebra; any recommendation would be appreciated.

A version of pqR on a Mac that supports helper threads does take more work, since you have to get a recent version of gcc (4.7 or later) from macports.org or brew.sh, which can be tedious and/or problematic.

I do hope to distribute pqR in more packaged form sometime. But further improvements and testing it out on Windows are higher priority at the moment.

http://econ.ucsb.edu/~doug/researchpapers/Testing%20for%20Regime%20Switching%20A%20Comment.pdf

For a vector of 100 million elements, replacing one element with no copying required will indeed be much, much faster than when a copy is required. However, it’s still quite possible for a sequence of single-element updates, each done with no copy, to take a long, long time. You’ll see that a loop doing updates one at a time to all 100 million elements of x will be slower than a copy (by a lot). Whether such updates of single elements (or small parts) are an important portion of the total computation time for your program will, of course, depend on your program…
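A quick way to see this for yourself (a sketch; exact timings will vary by machine and by R vs. pqR version, so no numbers are shown):

```r
x <- rnorm(1e8)

# 100 million single-element updates: no copy is made on any iteration,
# but the interpreted loop itself dominates the time.
system.time(for (i in seq_along(x)) x[i] <- 0)

y <- x                   # no copy yet, due to copy-on-change semantics
system.time(y[1] <- 3)   # this one update forces a single full copy
```

The single forced copy is one memcpy-like pass over the data; the loop pays interpreter overhead per element, which is far more expensive in total.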

Second, the amount of time taken, compared to when I force an actual copy of an entire long vector, is minuscule. So in terms of time taken, it’s either not making a copy, or it is somehow implemented so effectively that the time is essentially none…

Here’s the address not changing:

x = rnorm(5)

.Internal(inspect(x))

# @28ae698 14 REALSXP g0c4 [NAM(1)] (len=5, tl=0) 0.394722,0.212128,1.58906,0.0491079,-0.817408

x[2:3] = rnorm(2)

.Internal(inspect(x))

# @28ae698 14 REALSXP g0c4 [NAM(1)] (len=5, tl=0) 0.394722,0.794627,0.791023,0.0491079,-0.817408

and here’s the stark example of how little time the replacement takes, compared to when I force a copy of the entire vector to be made, where the actual copy is delayed because of R’s copy-on-change semantics:

x = rnorm(1e8)

system.time({x[1] = 3})

# user system elapsed

# 0 0 0

system.time({y = x})

# user system elapsed

# 0 0 0

system.time({y[1] = 3})

# user system elapsed

# 0.160 0.172 0.332

Guessing at what you might have meant for (b), at least: the code I show involving *tmp* is an R-level picture of what the R Core interpreter more or less does, but the interpreter does it faster than it would be done at the R level. In particular, at the R level, *tmp* would be treated as an ordinary variable that has to appear distinct from x, whereas in the interpreter it isn’t, so there’s not as much copying done.
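For concreteness, the R Language Definition describes a subset assignment such as x[2] <- v as equivalent to the following R-level steps, using the replacement function `[<-` (the interpreter achieves the same effect internally without materializing the extra variable):

```r
`*tmp*` <- x
x <- `[<-`(`*tmp*`, 2, value = v)
rm(`*tmp*`)
```

It is this apparent second reference through *tmp* that would force a copy if the steps were really executed at the R level, and that the interpreter avoids.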
