Search for: ncore

Displaying 13 results from an estimated 13 matches for "ncore".

2019 Feb 01 · 2 · Set the number of threads using openmp with .Fortran?
...compile instructions accordingly but I had no luck. *This is my Makevars:* PKG_FCFLAGS="-fno-stack-protector" F90FLAGS = "-fopenmp" LDFLAGS = "-fopenmp" *This is my Fortran module:* module hello_openmp use omp_lib implicit none contains subroutine hello(ncores) bind(C, name="hello_") use, intrinsic :: iso_c_binding, only : c_double, c_int integer(c_int), intent(in) :: ncores integer :: iam ! Specify number of threads to us...
2019 Feb 02 · 1 · Set the number of threads using openmp with .Fortran?
..._CFLAGS) > > ##### Phony target for R's build system to invoke ##### > all: $(SHLIB) > > ##### Clean target ##### > clean: > rm -f *.o *.mod > > And when I run my hello world function all the threads are used > regardless of what I specify: > > > hello(ncores = 2) Hello from 1 > Hello from 3 > Hello from 0 > Hello from 9 > Hello from 8 > Hello from 2 > Hello from 6 > Hello from 10 > Hello from 11 > Hello from 5 > Hel...
2019 Feb 02 · 0 · Set the number of threads using openmp with .Fortran?
...FLAGS = $(SHLIB_OPENMP_FFLAGS) PKG_LIBS = $(SHLIB_OPENMP_CFLAGS) ##### Phony target for R's build system to invoke ##### all: $(SHLIB) ##### Clean target ##### clean: rm -f *.o *.mod And when I run my hello world function all the threads are used regardless of what I specify: > hello(ncores = 2) Hello from 1 Hello from 3 Hello from 0 Hello from 9 Hello from 8 Hello from 2 Hello from 6 Hello from 10 Hello from 11 Hello from 5 Hello from 7 Hello from 4 $ncore...
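
A common way to make such an ncores argument effective is to call omp_set_num_threads(ncores) in the Fortran code before the parallel region. A rough R-side sketch of the same idea, assuming the OpenMP runtime has not yet been initialised in the session and using a hypothetical shared-object name, is to export OMP_NUM_THREADS before the first call:

    ## Sketch only: cap OpenMP threads from R by exporting OMP_NUM_THREADS
    ## before the compiled code's first parallel region runs.
    ## "hello_openmp.so" is a hypothetical name for the compiled module above.
    Sys.setenv(OMP_NUM_THREADS = "2")
    dyn.load("hello_openmp.so")
    invisible(.Fortran("hello", ncores = as.integer(2)))
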
2013 Apr 18 · 1 · snow: cluster initialization
Dear all, I found a strange thing with the snow package. This will work: y = matrix(1:4, 2) cl = makeCluster(rep('localhost', 8), type='SOCK') parMM(cl, y, y) This will not: y = matrix(1:4, 2) ncore = system('nproc') parMM(cl, y, y) Error in cut.default(i, breaks) : invalid number of intervals I also tried: cl = makeCluster(rep('localhost', ncore), type='SOCK') cl = makeCluster(rep('localhost', as.integer(ncore)), type='SOCK') no luck. Could anyone p...
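
The likely culprit is ncore = system('nproc'): without intern = TRUE, system() returns the command's exit status rather than its printed output, so ncore never holds the core count. A minimal sketch of the working pattern (assuming snow on a Linux host):

    ## Sketch only: capture nproc's output, or use detectCores(), before
    ## sizing the socket cluster.
    library(snow)
    ncore <- as.integer(system("nproc", intern = TRUE))   # Linux-only
    # ncore <- parallel::detectCores()                    # portable alternative
    cl <- makeCluster(rep("localhost", ncore), type = "SOCK")
    y <- matrix(1:4, 2)
    parMM(cl, y, y)
    stopCluster(cl)
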
2014 Apr 18 · 2 · [LLVMdev] multithreaded performance disaster with -fprofile-instr-generate (contention on profile counters)
...I have 10 engineers in my company, > I probably want to give them 10 working desks as well. But let's not go > insane. If I have 1000 engineers, 100 desks must be enough for them. This > must reduce costs. > The baseline memory consumption for systems (and amount of RAM!) is > O(NCORES), not O(1). In some read-mostly cases it's possible to achieve > O(1) memory consumption, and that's great. But if it's not the case here, > let it be so. > > > > > shard_count = std::min(MAX, std::max(NUMBER_OF_THREADS, NUMBER_OF_CORES)) > > Threads do not p...
2014 Apr 18 · 4 · [LLVMdev] multithreaded performance disaster with -fprofile-instr-generate (contention on profile counters)
On Apr 17, 2014, at 2:04 PM, Chandler Carruth <chandlerc at google.com> wrote: > On Thu, Apr 17, 2014 at 1:27 PM, Justin Bogner <mail at justinbogner.com> wrote: > Chandler Carruth <chandlerc at google.com> writes: > > if (thread-ID != main's thread-ID && shard_count < std::min(MAX, NUMBER_OF_CORES)) { > > shard_count = std::min(MAX,
2015 May 11 · 1 · Foreach %dopar% operator incorrectly load balancing
...o load balance between all of them. The issue with this is that it doesn't actually seem to perform non-trivial tasks at all anymore. This is an example of testing code I've been using for testing the %dopar% loop. library(iterators) library(foreach) library(doParallel) library(Parallel) nCores <- 4 cl <- makeCluster(nCores) registerDoParallel(cl) trials = 100000 x <- iris[which(iris[,5] != "setosa"),c(1,5)] t2 <- system.time({ r2 <- foreach(icount(trials), .combine=cbind) %dopar% { ind <- sample(100,100,replace= TRUE) results1 <- glm(x[in...
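
For comparison, a minimal foreach/doParallel setup along these lines (the cluster functions come from the parallel package, lower case; the loop body below is only a placeholder for the glm() fits in the post):

    ## Sketch only: register a doParallel backend and run a trivial %dopar% loop.
    library(foreach)
    library(iterators)
    library(doParallel)
    nCores <- 4
    cl <- parallel::makeCluster(nCores)
    registerDoParallel(cl)
    res <- foreach(icount(100), .combine = c) %dopar% {
      mean(rnorm(1e4))   # placeholder work, not the glm() fits
    }
    parallel::stopCluster(cl)
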
2014 Apr 18 · 2 · [LLVMdev] multithreaded performance disaster with -fprofile-instr-generate (contention on profile counters)
...company, I probably want to give them 10 working desks as well. But let's >>> not go insane. If I have 1000 engineers, 100 desks must be enough for them. >>> This must reduce costs. >>> The baseline memory consumption for systems (and amount of RAM!) is >>> O(NCORES), not O(1). In some read-mostly cases it's possible to achieve >>> O(1) memory consumption, and that's great. But if it's not the case here, >>> let it be so. >>> >>> >>> >>> > shard_count = std::min(MAX, std::max(NUMBER_OF_THREA...
2014 Aug 07 · 2 · How to (appropriately) use require in a package?
Dear All, What is the preferred way for Package A to initialize a cluster, and load Package B on all nodes? I am writing a package that parallelizes some functions through the use of a cluster if useRs are on a Windows machine (using parLapply and family). I also make use of another package in some of my code, so it is necessary to load the required packages on each slave once the cluster is
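
One common pattern, sketched here rather than taken from the thread, is for Package A to create the cluster and then load Package B on every worker with clusterEvalQ() (or clusterCall() plus requireNamespace()):

    ## Sketch only: load a dependency on every node right after the cluster starts.
    ## MASS is just a stand-in for "Package B".
    library(parallel)
    cl <- makeCluster(2)
    clusterEvalQ(cl, library(MASS))        # attach the dependency on each worker
    res <- parLapply(cl, 1:4, function(i) i^2)
    stopCluster(cl)
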
2012 Sep 03 · 1 · [GIT-PULL] XFS filesystem driver
...xfs_dir2_sf_t *sf = (xfs_dir2_sf_t *)&core->di_literal_area[0]; + xfs_dir2_sf_entry_t *sf_entry; + uint8_t count = sf->hdr.i8count ? sf->hdr.i8count : sf->hdr.count; + struct fs_info *fs = parent->fs; + struct inode *inode; + xfs_intino_t ino; + xfs_dinode_t *ncore = NULL; + + xfs_debug("count %hhu i8count %hhu", sf->hdr.count, sf->hdr.i8count); + + sf_entry = (xfs_dir2_sf_entry_t *)((uint8_t *)&sf->list[0] - + (!sf->hdr.i8count ? 4 : 0)); + while (count--) { + uint8_t *start_name = &sf_entry->name[0]; + uin...
2015 Jul 18 · 1 · [PATCH 1/2] xfs: rename xfs_is_valid_magicnum to xfs_is_valid_sb
xfs_is_valid_magicnum is not actually a generic function that checks for magic numbers; instead it checks only for the superblock's one. Signed-off-by: Paulo Alcantara <pcacjr at zytor.com> --- core/fs/xfs/xfs.c | 13 +++++-------- core/fs/xfs/xfs.h | 19 ++++++++++--------- 2 files changed, 15 insertions(+), 17 deletions(-) diff --git a/core/fs/xfs/xfs.c b/core/fs/xfs/xfs.c index
2013 Jan 29 · 0 · Package parallel left orphan processes, how to clean-up?
Hi, I've been using the DEXSeq package, which implements functions with an nCores argument for speed-up. The functions work fine, but I found out that the child processes were not terminated; they still hold memory, and a new command will start up new child processes. So if I don't manually kill those orphan processes, they will cause problems. I used qlogin to log in to an SGE cluster node to...
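
How DEXSeq manages its workers is package-specific, but for clusters created directly with the parallel package the usual clean-up discipline is to stop the cluster when the calling function exits, for example:

    ## Sketch only, not DEXSeq code: on.exit() stops the workers even if the
    ## computation throws an error, so no orphan processes are left behind.
    library(parallel)
    run_parallel <- function(n_jobs, nCores = 2) {
      cl <- makeCluster(nCores)
      on.exit(stopCluster(cl), add = TRUE)
      parLapply(cl, seq_len(n_jobs), function(i) i * i)
    }
    run_parallel(8)
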
2015 Dec 15 · 8 · [PATCH] xfs: Add support for v3 directories
...2_sf_entry_t *sf_entry; + uint8_t ftypelen = core->di_version == 3 ? 1 : 0; uint8_t count = sf->hdr.i8count ? sf->hdr.i8count : sf->hdr.count; struct fs_info *fs = parent->fs; struct inode *inode; + xfs_dir2_inou_t *inou; xfs_intino_t ino; xfs_dinode_t *ncore = NULL; xfs_debug("dname %s parent %p core %p", dname, parent, core); xfs_debug("count %hhu i8count %hhu", sf->hdr.count, sf->hdr.i8count); - sf_entry = (xfs_dir2_sf_entry_t *)((uint8_t *)&sf->list[0] - + sf_entry = (xfs_dir2_sf_entry_t *)((uint8...