Search for: staticclusterapply

Displaying 6 results from an estimated 6 matches for "staticclusterapply".

2024 Mar 25 | 1 | Wish: a way to track progress of parallel operations
...lit into. Could it be feasible to add an optional .progress argument after the ellipsis to parLapply() and its friends? We can require it to be a function accepting (done_chunk, total_chunks, ...). If not a new argument, what other interfaces could be used to get accurate progress information from staticClusterApply and dynamicClusterApply? I understand that the default parLapply() behaviour is not very amenable to progress tracking, but when running clusterMap(.scheduling = 'dynamic') spanning multiple hours if not whole days, having progress information sets the mind at ease. I would be happy to pr...
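
A workaround sketch for the request above, under the assumption that no .progress argument exists in parLapply() today: dispatch the work one chunk at a time and call a user-supplied callback after each chunk. The helper name par_lapply_progress is hypothetical, and parallel:::splitIndices() is an internal helper of the parallel package, not part of its exported API.

    library(parallel)

    ## Hypothetical helper (not part of parallel): report progress per chunk
    ## by submitting the chunks ourselves and calling .progress after each one.
    par_lapply_progress <- function(cl, X, FUN, ..., chunks = 4L * length(cl),
                                    .progress = NULL) {
      idx <- parallel:::splitIndices(length(X), chunks)  # internal splitting helper
      out <- vector("list", length(X))
      for (i in seq_along(idx)) {
        out[idx[[i]]] <- parLapply(cl, X[idx[[i]]], FUN, ...)
        if (is.function(.progress)) .progress(i, length(idx))
      }
      out
    }

    cl <- makeCluster(2)
    res <- par_lapply_progress(cl, 1:100, sqrt,
                               .progress = function(done, total)
                                 message("chunk ", done, "/", total, " done"))
    stopCluster(cl)

Each loop iteration is a synchronization point across the workers, so this trades some throughput for progress information; reporting from inside staticClusterApply or dynamicClusterApply themselves, as proposed in the thread, would avoid that cost.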
2009 Aug 13 | 0 | Efficiently Extracting Meta Data from TM Corpora
...running is: urllist <- lapply(workingcorpus, meta, tag = "FeedUrl"). Unfortunately, I receive the following error message when I try to use the command "parLapply": "Error in checkCluster(cl) : not a valid cluster Calls: parLapply ... is.vector -> clusterApply -> staticClusterApply -> checkCluster". 2) Alternatively, I wonder if there might be a way of extracting all of the meta data into a data.frame that would be faster for processing? Thanks for any suggestions or ideas! Shad ...
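
The error quoted here is what parLapply() signals when its cl argument is not a cluster object, for example when no cluster has been created or another object is passed by mistake. A minimal sketch of the expected call pattern with the parallel package, leaving out the tm-specific corpus and meta() details of the original question:

    library(parallel)

    cl <- makeCluster(2)                 # cl must be a cluster object
    x  <- as.list(1:10)                  # stand-in for the corpus elements
    urllist <- parLapply(cl, x, function(el) el * 2)
    stopCluster(cl)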
2024 Mar 25 | 3 | Wish: a way to track progress of parallel operations
...be feasible to add an optional .progress argument after the ellipsis to parLapply() and its friends? We can require it to be a function accepting (done_chunk, total_chunks, ...). If not a new argument, what other interfaces could be used to get accurate progress information from staticClusterApply and dynamicClusterApply? I understand that the default parLapply() behaviour is not very amenable to progress tracking, but when running clusterMap(.scheduling = 'dynamic') spanning multiple hours if not whole days, having progress information sets the mind at ease....
2018 Feb 26 | 2 | [parallel] fixes load balancing of parLapplyLB
...thank you for spotting the problem and for your suggestions for a fix. We'll probably add a chunk.size argument to parLapplyLB and parLapply to follow OpenMP terminology, which has already been an inspiration for the present code (parLapply already implements static scheduling via the internal function staticClusterApply, but with a fixed chunk size; parLapplyLB already implements dynamic scheduling via the internal function dynamicClusterApply, but with a fixed chunk size set to an unlucky value, so that it behaves like static scheduling). The default chunk size for parLapplyLB will be set so that there is som...
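
For reference, the chunk.size argument discussed in this thread is available on parLapply() and parLapplyLB() in current versions of R. A small sketch of how chunk size trades dispatch overhead against load balancing; the slow() function is just an illustrative workload with variable task times:

    library(parallel)

    cl <- makeCluster(4)
    slow <- function(x) { Sys.sleep(runif(1, 0, 0.05)); x^2 }

    r1 <- parLapply(cl, 1:97, slow)                     # static scheduling, prescheduled chunks
    r2 <- parLapplyLB(cl, 1:97, slow, chunk.size = 1)   # dynamic scheduling, finest granularity
    r3 <- parLapplyLB(cl, 1:97, slow, chunk.size = 10)  # dynamic scheduling, coarser chunks
    stopCluster(cl)

Smaller chunks let free workers pick up new work sooner when task times vary; larger chunks mean fewer round trips to the master process.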
2018 Mar 01 | 0 | [parallel] fixes load balancing of parLapplyLB
...thank you for spotting the problem and for your suggestions for a fix. We'll probably add a chunk.size argument to parLapplyLB and parLapply to follow OpenMP terminology, which has already been an inspiration for the present code (parLapply already implements static scheduling via the internal function staticClusterApply, but with a fixed chunk size; parLapplyLB already implements dynamic scheduling via the internal function dynamicClusterApply, but with a fixed chunk size set to an unlucky value, so that it behaves like static scheduling). The default chunk size for parLapplyLB will be set so that there is some dy...
2018 Feb 19 | 2 | [parallel] fixes load balancing of parLapplyLB
Hi, I'm trying to understand the rationale for your proposed amount of splitting, and more precisely why that one is THE one. If I put labels on your example numbers from one of your previous posts: nbrOfElements <- 97; nbrOfWorkers <- 5. With these, there are two extremes in how you can split up the processing into chunks such that all workers are utilized: (A) Each worker, called...
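
A small worked illustration of the two extremes described here, using parallel's internal splitIndices() helper (the same splitting that the staticClusterApply-based parLapply() relies on); note that splitIndices() is not exported and only aims for roughly equal contiguous pieces:

    nbrOfElements <- 97
    nbrOfWorkers  <- 5

    ## (A) one chunk per worker: 5 contiguous chunks of 19 or 20 elements each
    lengths(parallel:::splitIndices(nbrOfElements, nbrOfWorkers))

    ## (B) one element per chunk: 97 singleton chunks, handed out one at a time
    ## as workers become free (the dynamic-scheduling extreme)
    lengths(parallel:::splitIndices(nbrOfElements, nbrOfElements))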