Displaying 20 results from an estimated 299 matches for "microbenchmarked".
2013 Jul 02
2
cache most-recent dispatch
Hi,
S4 method dispatch can be very slow. Would it be reasonable to cache the most
recent dispatch, anticipating the next invocation will be on the same type?
This would be very helpful in loops.
fun0 <- function(x)
    sapply(x, paste, collapse="+")
fun1 <- function(x) {
    paste <- selectMethod(paste, class(x[[1]]))
    sapply(x, paste,
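The snippet above is cut off by the listing; a self-contained sketch of the same caching idea, using a made-up generic and class rather than the post's paste() example (microbenchmark assumed installed):

# A sketch of the caching idea: look the method up once with selectMethod()
# and reuse it inside the loop (generic and class here are illustrative)
setGeneric("describe", function(x) standardGeneric("describe"))
setClass("Point", slots = c(x = "numeric", y = "numeric"))
setMethod("describe", "Point", function(x) sprintf("(%g, %g)", x@x, x@y))

pts <- replicate(1e3, new("Point", x = runif(1), y = runif(1)), simplify = FALSE)

slow <- function(xs) vapply(xs, describe, character(1))   # S4 dispatch on every element
fast <- function(xs) {
  m <- selectMethod("describe", class(xs[[1]]))           # dispatch once, reuse the method
  vapply(xs, m, character(1))
}
microbenchmark::microbenchmark(slow(pts), fast(pts), times = 50)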
2017 Aug 22
4
How to benchmark speed of load/readRDS correctly
Dear all
I was thinking about efficient reading of data into R and tried several ways to test whether load(file.Rdata) or readRDS(file.rds) is faster. The files file.Rdata and file.rds contain the same data, the first created with save(d, file='file.Rdata', compress=F) and the second with saveRDS(d, file='file.rds', compress=F).
First I used the function microbenchmark() and was astonished
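A sketch of the comparison being described, assuming d is a largish data frame (file names follow the post):

# Build a test object, write it both ways uncompressed, then time the two readers
d <- data.frame(a = rnorm(1e6), b = sample(letters, 1e6, replace = TRUE))
save(d, file = "file.Rdata", compress = FALSE)
saveRDS(d, file = "file.rds", compress = FALSE)

microbenchmark::microbenchmark(
  load    = load("file.Rdata"),
  readRDS = readRDS("file.rds"),
  times = 20
)
unlink(c("file.Rdata", "file.rds"))   # clean up the test files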
2012 Mar 17
2
Coalesce function in BBmisc, emoa, and microbenchmark packages
Hello All,
Need to coalesce some columns using R. Looked online to see how this is done. One approach appears to be to use ifelse. Also uncovered a coalesce function in the BBmisc, emoa, and microbenchmark packages.
Trouble is I can't seem to get it to work in any of these packages. Or perhaps I misunderstand what it's intended to do. The documentation is generally pretty scant.
Working
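For reference, a minimal base-R sketch of coalescing columns with ifelse(), the approach the post mentions (column names are purely illustrative):

df <- data.frame(a = c(1, NA, NA), b = c(NA, 2, NA), c = c(9, 9, 3))
# Take the first non-NA value across a, b, c in each row
df$coalesced <- ifelse(!is.na(df$a), df$a, ifelse(!is.na(df$b), df$b, df$c))
df$coalesced   # 1 2 3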
2017 Nov 20
2
Small performance bug in [.Date
Hi all,
I think there's an unnecessary line in [.Date which has a considerable
impact on performance when subsetting large dates:
x <- Sys.Date() + 1:1e6
microbenchmark::microbenchmark(x[1])
#> Unit: microseconds
#> expr min lq mean median uq max neval
#> x[1] 920.651 1039.346 3624.833 2294.404 3786.881 41176.38 100
`[.Date` <- function(x, ...,
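The definition is cut off above; for context, `[.Date` at the time looked roughly like the following (reconstructed from memory, so treat it as approximate). The line that strips the class forces a copy of the whole vector, which is presumably the overhead being reported:

`[.Date` <- function(x, ..., drop = TRUE) {
    cl <- oldClass(x)
    class(x) <- NULL          # copies the whole (possibly huge) vector
    val <- NextMethod("[")
    class(val) <- cl
    val
}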
2018 Mar 13
2
Possible Improvement to sapply
FYI, in R devel (to become 3.5.0), there's isFALSE() which will cut
some corners compared to identical():
> microbenchmark::microbenchmark(identical(FALSE, FALSE), isFALSE(FALSE))
Unit: nanoseconds
expr min lq mean median uq max neval
identical(FALSE, FALSE) 984 1138 1694.13 1218.0 1337.5 13584 100
isFALSE(FALSE) 713 761 1133.53 809.5 871.5
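For reference, isFALSE() can cut those corners because it only needs a scalar logical test; its definition in R devel is essentially:

isFALSE <- function(x) is.logical(x) && length(x) == 1L && !is.na(x) && !x

identical(), by contrast, has to handle attributes, environments and recursive structures, which is where the extra nanoseconds go.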
2011 Aug 23
2
Increase transparency: suggestion on how to avoid namespaces and/or unnecessary overwrites of existing functions
Dear list,
I'm aware of the fact that I posted on something related a while ago,
but I just can't sweat this off and would like to ask your for an opinion:
The problem:
Namespaces are great, but they don't resolve certain conflicts regarding
name clashes. There are more and more people out there trying to come up
with their own R packages, which is great also! Yet, it becomes
2017 Aug 22
0
How to benchmark speed of load/readRDS correctly
The large value for maximum time may be due to garbage collection, which
happens periodically. E.g., try the following, where the
unlist(as.list()) creates a lot of garbage. I get a very large time every
102 or 51 iterations and a moderately large time more often
mb <- microbenchmark::microbenchmark(
    { x <- as.list(sin(1:5e5)); x <- unlist(x) / cos(1:5e5); sum(x) },
    times = 1000)
2017 Aug 22
1
How to benchmark speed of load/readRDS correctly
Note that if you force a garbage collection each iteration the times are
more stable. However, on the average it is faster to let the garbage
collector decide when to leap into action.
mb_gc <- microbenchmark::microbenchmark(
    gc(),
    { x <- as.list(sin(1:5e5)); x <- unlist(x) / cos(1:5e5); sum(x) },
    times = 1000, control = list(order = "inorder"))
with(mb_gc,
2015 Nov 17
2
[RFC] A new intrinsic, `llvm.blackbox`, to explicitly prevent constprop, die, etc optimizations
On Mon, Nov 16, 2015 at 6:59 PM, Dmitri Gribenko via llvm-dev <
llvm-dev at lists.llvm.org> wrote:
> On Mon, Nov 16, 2015 at 10:03 AM, James Molloy via llvm-dev
> <llvm-dev at lists.llvm.org> wrote:
> > You don't appear to have addressed my suggestion to not require a perfect
> > external world, instead to measure the overhead of an imperfect world (by
> >
2018 Feb 11
4
Parallel assignments and goto
Hi guys,
I am working on some code for automatically translating recursive functions into looping functions to implement tail-recursion optimisation. See https://github.com/mailund/tailr
As a toy-example, consider the factorial function
factorial <- function(n, acc = 1) {
    if (n <= 1) acc
    else factorial(n - 1, acc * n)
}
I can automatically translate this into the loop-version
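The translated code is cut off above; one plausible shape of such a loop version, written by hand rather than taken from tailr's actual output, is:

factorial_loop <- function(n, acc = 1) {
  repeat {
    if (n <= 1) return(acc)
    # parallel assignment via temporaries so the two updates do not interfere
    .n <- n - 1; .acc <- acc * n
    n <- .n; acc <- .acc
  }
}
factorial_loop(5)   # 120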
2014 Sep 07
2
normalizePath is sometimes very slow for nonexistent UNC paths
I'm having an issue with occasionally slow-running calls to
normalizePath. If the path is a non-existent UNC path, then
normalizePath sometimes takes 6 or 7 seconds to run, rather than its
usual few microseconds. My big problem is that I can't reliably
reproduce this across machines.
The example below generates one or two slow runs out of 10000 on my
Windows machine. I haven't been
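A sketch of the kind of timing loop being described (the UNC path here is hypothetical and intentionally nonexistent; the effect is Windows-specific):

times <- vapply(seq_len(10000), function(i) {
  system.time(normalizePath("\\\\no-such-host\\share\\file.txt", mustWork = FALSE))[["elapsed"]]
}, numeric(1))
summary(times)
head(sort(times, decreasing = TRUE))   # the handful of slow runs, if any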
2024 Feb 29
2
[External] converting MATLAB -> R | element-wise operation
I decided to do a direct comparison of transpose and sweep.
library(microbenchmark)
NN <- matrix(c(1, 2, 3, 4, 5, 6), nrow = 2, byrow = TRUE) # Example matrix
lambda <- c(2, 3, 4) # Example vector
colNN <- t(NN)
microbenchmark(
  sweep     = sweep(NN, 2, lambda, "/"),
  transpose = t(t(NN) / lambda),
  colNN     = colNN / lambda
)
Unit: nanoseconds
expr min lq
2015 Nov 17
2
[RFC] A new intrinsic, `llvm.blackbox`, to explicitly prevent constprop, die, etc optimizations
On Mon, Nov 16, 2015 at 9:07 PM, Dmitri Gribenko <gribozavr at gmail.com>
wrote:
> On Mon, Nov 16, 2015 at 8:55 PM, Sean Silva <chisophugis at gmail.com> wrote:
> >
> >
> > On Mon, Nov 16, 2015 at 6:59 PM, Dmitri Gribenko via llvm-dev
> > <llvm-dev at lists.llvm.org> wrote:
> >>
> >> On Mon, Nov 16, 2015 at 10:03 AM, James Molloy via
2015 Mar 02
3
R-devel does not update the C++ returned variables
On 03/02/2015 04:37 PM, Martin Maechler wrote:
>
>> On 2 March 2015 at 09:09, Duncan Murdoch wrote:
>> | I generally recommend that people use Rcpp, which hides a lot of the
>> | details. It will generate your .Call calls for you, and generate the
>> | C++ code that receives them; you just need to think about the real
>> | problem, not the interface. It has its
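A tiny sketch of the Rcpp route being recommended; cppFunction() compiles the C++, generates the .Call() glue, and hands back an ordinary R function (the example function is illustrative):

Rcpp::cppFunction('
NumericVector scale_vec(NumericVector x, double k) {
  return x * k;   // Rcpp sugar: element-wise multiply
}')
scale_vec(1:5, 2)   # 2 4 6 8 10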
2015 Jan 26
2
speedbump in library
>>>>> Winston Chang <winstonchang1 at gmail.com>
>>>>> on Fri, 23 Jan 2015 10:15:53 -0600 writes:
> I think you can simplify a little by replacing this:
> pkg %in% loadedNamespaces()
> with this:
> .getNamespace(pkg)
almost: It would be
!is.null(.getNamespace(pkg))
> Whereas getNamespace(pkg) will load the
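A sketch of the distinction being made: .getNamespace() returns NULL without side effects, whereas getNamespace() would load the package:

is_ns_loaded <- function(pkg) !is.null(.getNamespace(pkg))
is_ns_loaded("stats")        # TRUE in a normal session
is_ns_loaded("NoSuchPkg")    # FALSE, and nothing gets loaded as a side effect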
2017 Aug 04
2
Why is as.function() slower than eval(call("function"())?
(Apologies if this is better suited for R-help.)
On my system (macOS Sierra, late 2014 MacBook Pro; R 3.4.1, Homebrew build), I found that it is faster to construct a function using eval(call("function", ...)) than using as.function(list(...)). Example:
make_fn_1 <- function(a, b) eval(call("function", a, b), env = parent.frame())
make_fn_2 <- function(a, b)
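The second constructor is cut off above; a self-contained sketch of the comparison, with make_fn_2's body assumed, might be:

make_fn_1 <- function(a, b) eval(call("function", a, b), parent.frame())
make_fn_2 <- function(a, b) as.function(c(a, list(b)), envir = parent.frame())

fn_args <- as.pairlist(alist(x = ))   # formals: a single argument x with no default
fn_body <- quote(x + 1)

make_fn_1(fn_args, fn_body)(10)   # 11
make_fn_2(fn_args, fn_body)(10)   # 11

microbenchmark::microbenchmark(
  make_fn_1(fn_args, fn_body),
  make_fn_2(fn_args, fn_body)
)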
2018 Mar 14
0
Possible Improvement to sapply
>>>>> Henrik Bengtsson <henrik.bengtsson at gmail.com>
>>>>> on Tue, 13 Mar 2018 10:12:55 -0700 writes:
> FYI, in R devel (to become 3.5.0), there's isFALSE() which will cut
> some corners compared to identical():
> > microbenchmark::microbenchmark(identical(FALSE, FALSE), isFALSE(FALSE))
> Unit: nanoseconds
> expr
2018 Feb 27
2
Parallel assignments and goto
Interestingly, the <<- operator is also a lot faster than using a namespace explicitly, and only slightly slower than using <- with local variables, see below. But, surely, both must at some point insert values in a given environment, either the local one, for <-, or an enclosing one, for <<-, so I guess I am asking if there is a more low-level assignment operation I can get my
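A sketch of the kind of comparison being described, with an explicit environment standing in for the namespace case (all names illustrative):

env <- new.env()
env$n <- 0

f_local <- function(k) { n <- 0; for (i in seq_len(k)) n <- n + 1; n }
f_super <- local({ n <- 0; function(k) { for (i in seq_len(k)) n <<- n + 1; n } })
f_env   <- function(k) { for (i in seq_len(k)) env$n <- env$n + 1; env$n }

microbenchmark::microbenchmark(f_local(1e4), f_super(1e4), f_env(1e4), times = 20)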
2018 Feb 26
0
Parallel assignments and goto
Following up on this attempt at implementing the tail-recursion optimisation, now that I've finally had the chance to look at it again, I find that non-local return implemented with callCC doesn't actually incur much overhead once I do it more sensibly. I haven't found a good way to handle parallel assignments that isn't vastly slower than simply introducing extra variables, so I am going with
2015 Jan 02
3
Benchmark code, but avoid printing
Dear all,
I am trying to benchmark code that occasionally prints on the screen
and I want to
suppress the printing. Is there an idiom for this?
If I do
sink(tempfile())
microbenchmark(...)
sink()
then I'll also be measuring the cost of writing to that temporary file. I could
also sink to /dev/null, which is probably fast, but that is not
portable.
Is there a better solution? Is writing to a
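For what it's worth, later versions of R (>= 3.6.0, so after this thread) added nullfile(), which gives a portable path to the null device; a sketch of using it as the sink target:

con <- file(nullfile(), open = "wt")   # "/dev/null" on Unix, "nul:" on Windows
sink(con)
res <- microbenchmark::microbenchmark(print(summary(rnorm(1000))), times = 100)
sink()
close(con)
res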