Displaying 9 results from an estimated 9 matches for "batchsize".
2006 Jan 30
5
Multiple ajax calls
This is slightly OT, for which I apologise in advance, but I was
wondering if anyone here has had any problems when making multiple
ajax calls at the same time. I'm working on a large Intranet
application that makes heavy use of ajax calls, and the testers are
reporting bugs: if they repeatedly click on a link that makes an
ajax call, then Internet Explorer can fall
2019 Aug 23
3
Vectorization fails when dealing with a lot of for loops.
Hello, could you please have a look at this code posted on godbolt.org:
https://godbolt.org/z/O-O-Q7
The problem is that inside the compute function, only the first loop vectorizes, while the remaining copies of it don't. But if I remove any one of the for loops, then the rest vectorize successfully. Could you please confirm that this is a bug, or otherwise give me more insight into why the vectorization
2006 May 10
12
What to do with HUGE instance variables in Rails?
I'm learning Rails and I can successfully use the following things in the
controller:
@var1 = Var.find :all
@var2 = Var2.find :all
The problem is that the DB has about 260,000 rows, which considerably
slows everything down if I load everything into @var1.
Isn't there a way to load those items progressively? I treat them
separately (e.g. no interactions between them) in the program so
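Rails later added batched finders (`find_each` / `find_in_batches`, with a `batch_size` option) for exactly this situation. The underlying idea can be sketched in plain Python; `fetch_page(offset, limit)` here is a hypothetical accessor standing in for a LIMIT/OFFSET query:

```python
def in_batches(fetch_page, batch_size=1000):
    """Yield records in fixed-size batches instead of loading them all.

    fetch_page(offset, limit) is a hypothetical accessor returning up to
    `limit` records starting at `offset` (e.g. a LIMIT/OFFSET query).
    """
    offset = 0
    while True:
        batch = fetch_page(offset, batch_size)
        if not batch:
            break
        yield batch
        offset += len(batch)

# In-memory stand-in for the 260,000-row table:
table = list(range(10))
pages = list(in_batches(lambda off, lim: table[off:off + lim],
                        batch_size=4))
# pages == [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]
```

Each batch is processed and discarded before the next is fetched, so peak memory is bounded by `batch_size` rather than by the table size.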
2009 Apr 03
1
bigglm "update" with ff
...{
  if (first) {
    first <- FALSE
    fit <- bigglm(eqn, as.data.frame(bigdata[i1:i2, , drop = FALSE]),
                  chunksize = 10000, family = binomial())
  } else {
    fit <- update(fit, as.data.frame(bigdata[i1:i2, , drop = FALSE]),
                  chunksize = 10000, family = binomial())
  }
}, X = bigdata, VERBOSE = TRUE, BATCHSIZE = nmax)
Many thanks.
Yuesheng
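The chunked-update pattern in the R excerpt above (fit on the first chunk, then `update` on each subsequent one) works because the model's sufficient statistics can be accumulated chunk by chunk. A minimal Python sketch of that idea with ordinary least squares follows; it is an illustration of the pattern, not biglm's actual bounded-memory QR algorithm, and the chunk size of 3 is arbitrary:

```python
def update_stats(stats, xs, ys):
    """Fold one chunk of (x, y) observations into running sums."""
    n, sx, sy, sxx, sxy = stats
    for x, y in zip(xs, ys):
        n += 1
        sx += x
        sy += y
        sxx += x * x
        sxy += x * y
    return (n, sx, sy, sxx, sxy)

def solve(stats):
    """Recover intercept a and slope b of y = a + b*x from the sums."""
    n, sx, sy, sxx, sxy = stats
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    a = (sy - b * sx) / n
    return a, b

# Data following y = 2*x + 1, processed 3 rows at a time:
xs = list(range(9))
ys = [2 * x + 1 for x in xs]
stats = (0, 0.0, 0.0, 0.0, 0.0)
for i in range(0, len(xs), 3):
    stats = update_stats(stats, xs[i:i + 3], ys[i:i + 3])
a, b = solve(stats)  # a == 1.0, b == 2.0
```

Only the five running sums live in memory between chunks, which is why the same fit can be computed over data far larger than RAM.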
2017 Sep 28
1
[PATCH net-next RFC 5/5] vhost_net: basic tx virtqueue batched processing
...opy = nvq->ubufs;
>
> + /* Disable zerocopy batched fetching for simplicity */
This special case can perhaps be avoided if we no longer block
on vhost_exceeds_maxpend, but revert to copying.
> + if (zcopy) {
> + heads = &used;
Can this special case of batchsize 1 not use vq->heads?
> + batched = 1;
> + }
> +
> for (;;) {
> /* Release DMAs done buffers first */
> if (zcopy)
> @@ -486,95 +492,114 @@ static void handle_tx(struct vhost_net *net)
> if (unlik...
2013 Jan 15
0
paper - download - pubmed
...ch for a paper in pubmed,
which is possible by using the GetPubMed function in the "NCBI2R" package:
GetPubMed(searchterm, file = "", download = TRUE, showurl = FALSE,
          xldiv = ";", hyper = "HYPERLINK",
          MaxRet = 30000, sme = FALSE, smt = FALSE, quiet = TRUE,
          batchsize = 500, descHead = FALSE)
With this function I cannot download the PDFs for all hits, although if
I go to PubMed, I can download them.
So the problem is not that; for each paper I have to download the PDFs
(which are available if I go to PubMed and search directly there)
and the correspondin...
2017 Sep 22
17
[PATCH net-next RFC 0/5] batched tx processing in vhost_net
Hi:
This series tries to implement basic tx batched processing. This is
done by prefetching descriptor indices and updating the used ring in
a batch. This is intended to speed up used-ring updating and improve
cache utilization. Tests show about a ~22% improvement in tx pps.
Please review.
Jason Wang (5):
vhost: split out ring head fetching logic
vhost: introduce helper to prefetch desc index
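The batching idea in this series — complete several descriptors, then publish the used ring once per batch instead of once per descriptor — can be sketched generically in Python. The ring layout below is invented for illustration and is not vhost's actual data structure:

```python
class UsedRing:
    """Toy used ring: batching defers the index write ('publish'), so a
    consumer sees one visible update per batch, not one per descriptor."""

    def __init__(self):
        self.entries = []
        self.published_idx = 0  # stands in for the ring's used index
        self.publishes = 0      # how many index updates we performed

    def add(self, head):
        self.entries.append(head)

    def publish(self):
        self.published_idx = len(self.entries)
        self.publishes += 1

def process(heads, batch_size):
    """Complete descriptors batch_size at a time, publishing per batch."""
    ring = UsedRing()
    for i in range(0, len(heads), batch_size):
        for head in heads[i:i + batch_size]:
            ring.add(head)   # record each completed descriptor
        ring.publish()       # one index update for the whole batch
    return ring

one_by_one = process(list(range(64)), batch_size=1)
batched = process(list(range(64)), batch_size=16)
# one_by_one.publishes == 64, batched.publishes == 4
```

In the real code each publish implies cache-line traffic (and possibly a notification) between producer and consumer, which is where the reported throughput gain would come from.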