> On 17 Jun 2017, at 08:47, Moshe Olshansky via R-devel <r-devel at
> r-project.org> wrote:
>
> I am visualising high-dimensional genomic data and for this purpose I
> need to compute pairwise distances between many points in a
> high-dimensional space (say I have a matrix of 5,000 rows and 20,000
> columns, so the result is a 5,000 x 5,000 matrix, or its upper
> triangle). Computing such a thing in R takes many hours (I am doing
> this on a Linux server with more than 100 GB of RAM, so that is not
> the problem). When I write the matrix to disk, read it and compute the
> distances in C, write them to disk and read them back into R, it takes
> 10-15 minutes (and I did not spend much time optimising my C code).
> The question is why the R function is so slow. I understand that it
> calls C (or C++) to compute the distances. My suspicion is that the
> transposed matrix is passed to C, so each distance is computed between
> two columns of the matrix, and since C stores matrices by row this is
> very inefficient and causes many cache misses (my first C
> implementation was like this and I had to stop the run after an hour
> when it failed to complete).
There are two main reasons for the relatively low speed of the built-in dist()
function: (i) it operates on row vectors, which leads to many cache misses
because matrices are stored by column in R (as you guessed); (ii) the function
takes care to handle missing values correctly, which adds a relatively expensive
test and conditional branch to each iteration of the inner loop.
A faster implementation, which omits the NA test and can compute distances
between column vectors, is available as dist.matrix() in the
"wordspace" package. However, it cannot be used with matrices that
might contain NAs (and doesn't warn about such arguments).
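As a rough sketch of how this might look (toy dimensions, and the argument
names method and byrow are taken from the wordspace documentation as I recall
it, so please check ?dist.matrix in your installed version):

library(wordspace)

## small stand-in for the 5,000 x 20,000 data matrix
M <- matrix(rnorm(500 * 2000), nrow = 500)

## dist.matrix() skips the NA checks, so verify there are none beforehand
stopifnot(!anyNA(M))

## Euclidean distances between the 500 rows
D <- dist.matrix(M, method = "euclidean")

## distances between columns instead of rows
D.cols <- dist.matrix(M, method = "euclidean", byrow = FALSE)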
If you want the best possible speed, use cosine similarity (or equivalently,
angular distance). The underlying cross product is very efficient with a
suitable BLAS implementation.
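To illustrate with base R only (the function and object names below are just
for this sketch): cosine similarities between the rows of a matrix come out of
a single BLAS-backed cross product, which can then be turned into angular
distances:

cosine.sim <- function(M) {
  S <- tcrossprod(M)            # M %*% t(M), computed by BLAS
  norms <- sqrt(diag(S))        # Euclidean norms of the rows
  S / outer(norms, norms)       # normalise dot products to cosines
}

M <- matrix(rnorm(500 * 2000), nrow = 500)   # small stand-in for the real data
sim <- cosine.sim(M)
ang <- acos(pmin(pmax(sim, -1), 1))          # angular distance in radians

With an optimised multi-threaded BLAS (e.g. OpenBLAS or MKL) the tcrossprod()
call does essentially all of the work. If I remember the wordspace API
correctly, dist.matrix() with method = "cosine" performs this normalisation
for you.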
Best,
Stefan