>>>>> Andrew Robinson via R-help
>>>>> on Thu, 14 Nov 2024 12:45:44 +0000 writes:
> Not a direct answer but you may find lm.fit worth
> experimenting with.
Yes, lm.fit() is already faster, and
.lm.fit() {which I added to base R when a similar question
was asked years ago ...}
is even an order of magnitude faster in some cases.
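For a concrete feel of the difference, here is a minimal
sketch (mine, not from the help page; the data are made up)
of the three interfaces on one small regression:

  ## lm() parses a formula and builds the model matrix on
  ## every call; lm.fit() and .lm.fit() take the model
  ## matrix directly and skip that overhead.
  set.seed(1)
  n <- 60
  x <- rnorm(n)
  y <- 1 + 2*x + rnorm(n)
  X <- cbind(1, x)            # model matrix with intercept
  coef(lm(y ~ x))             # formula interface: slowest
  lm.fit (X, y)$coefficients  # no formula parsing
  .lm.fit(X, y)$coefficients  # minimal checks: fastest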
See ?lm.fit
and notably
  example(lm.fit)
which uses the microbenchmark package for timing. After the
example has run,
  png("lmfit-ex.png")
  boxplot(mb, notch=TRUE)
  dev.off()
produces the attached image.
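Note that .lm.fit() does no NA handling itself, so for the
many-short-regressions-with-missing-data case you would drop
incomplete cases per regression first. A hedged sketch (my
own, assuming a hypothetical list 'ys' of response vectors
sharing one model matrix 'X'; adapt to your data layout):

  ## drop rows with NAs, then use the fast fitter
  fit1 <- function(y, mm) {
      ok <- complete.cases(y, mm)
      .lm.fit(mm[ok, , drop = FALSE], y[ok])$coefficients
  }
  ## run the regressions on several cores:
  cfs <- parallel::mclapply(ys, fit1, mm = X, mc.cores = 8)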
> Also try the high-performance computing task view on CRAN
> Cheers,
> Andrew
> --
> Andrew Robinson
> Chief Executive Officer, CEBRA, and Professor of Biosecurity,
> School/s of BioSciences and Mathematics & Statistics,
> University of Melbourne, VIC 3010 Australia
> Tel: (+61) 0403 138 955
> Email: apro at unimelb.edu.au
> Website: https://researchers.ms.unimelb.edu.au/~apro at unimelb/
> I acknowledge the Traditional Owners of the land I
> inhabit, and pay my respects to their Elders.
>
> On 14 Nov 2024 at 1:13 PM +0100, Ivo Welch
> <ivo.welch at gmail.com> wrote:
> I have found more general questions, but I have a specific
> one. I have a few million (independent) short regressions
> that I would like to run (each regression has about 60
> observations, though they can have missing observations
> [yikes]). So, I would like to be running as many `lm` and
> `coef(lm)` calls in parallel as possible. My hardware is a
> Mac with nice GPUs and integrated memory, which has so far
> been completely useless to me. `mclapply` is obviously very
> useful, but I want more, more, more cores.
> Is there a recommended plug-in library to speed up just
> `lm` by also using the GPU cores?
[Attachment: lmfit-ex.png (image/png, 7605 bytes), the benchmark
boxplot referenced above:
<https://stat.ethz.ch/pipermail/r-help/attachments/20241115/9ee1c007/attachment.png>]