Displaying 6 results from an estimated 6 matches for "staticclusterappli".
2024 Mar 25
1
Wish: a way to track progress of parallel operations
Hello R-devel,
A function to be run inside lapply() or one of its friends is trivial
to augment with side effects to show a progress bar. When the code is
intended to be run on a 'parallel' cluster, it generally cannot rely on
its own side effects to report progress.
I've found three approaches to progress bars for parallel processes on
CRAN:
- Importing 'snow' (not
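The excerpt is cut off above, but the problem it describes can be shown with a
small sketch (mine, not from the post): with plain lapply() the function runs
in the master process, so it can update a progress bar as a side effect; run
under parLapply() the same function executes in worker processes and those
side effects never reach the master's console.

xs <- 1:10
pb <- txtProgressBar(min = 0, max = length(xs), style = 3)
res <- lapply(seq_along(xs), function(i) {
  Sys.sleep(0.1)             ## stand-in for real work
  setTxtProgressBar(pb, i)   ## side effect; visible because we run in the master
  xs[i]^2
})
close(pb)
## parallel::parLapply(cl, seq_along(xs), ...) with the same body would run in
## worker processes, so the bar would not advance in the master's terminal.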
2009 Aug 13
0
Efficiently Extracting Meta Data from TM Corpora
I'm using text miner (the "tm" package) to process large numbers of blog and message board postings (about 245,000). Does anyone have any advice for how to efficiently extract the meta data from a corpus of this size?
TM does a great job of using MPI for many functions (e.g. tmMap) which greatly speed up the processing. However, the "meta" function that I need does not
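A rough sketch of the kind of extraction being asked about; the corpus object
and the "author" tag are my assumptions, and tm's meta() accessor is used with
a document and a tag name:

library(tm)
## 'corpus' is assumed to be the existing corpus of ~245,000 postings
authors <- sapply(corpus, function(doc) meta(doc, tag = "author"))
## the same loop could, in principle, be distributed over a snow/MPI cluster
## 'cl' (also assumed):
## authors <- snow::parSapply(cl, corpus, function(doc) tm::meta(doc, tag = "author"))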
2024 Mar 25
3
Wish: a way to track progress of parallel operations
Hello,
thanks for bringing this topic up; it would be excellent if we could
come up with a generic solution for this in base R. It is one of the
most frequently asked questions and most requested features in
parallel processing, but also in sequential processing. We have also
seen lots of variants on how to attack the problem of reporting
progress when running in parallel.
As the author
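One of the simpler variants, as a sketch of a workaround rather than the
generic base-R mechanism being discussed: makeCluster() has an outfile
argument, and outfile = "" sends the workers' stdout/stderr to the master's
terminal, so the worker function can at least print its own progress.

library(parallel)
cl <- makeCluster(4, outfile = "")   ## "" = show worker output in this terminal
res <- parLapply(cl, 1:20, function(i) {
  cat(sprintf("pid %d finished task %d\n", Sys.getpid(), i))
  i^2
})
stopCluster(cl)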
2018 Feb 26
2
[parallel] fixes load balancing of parLapplyLB
Dear Christian and Henrik,
thank you for spotting the problem and suggestions for a fix. We'll
probably add a chunk.size argument to parLapplyLB and parLapply to
follow OpenMP terminology, which has already been an inspiration for the
present code (parLapply already implements static scheduling via
internal function staticClusterApply, yet with a fixed chunk size;
parLapplyLB already
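For reference, the static split used by staticClusterApply-style scheduling
can be inspected with splitIndices() (exported by 'parallel', if memory
serves): the input is cut up front into one contiguous chunk per node, with no
rebalancing once work starts.

library(parallel)
chunks <- splitIndices(97, 5)   ## 97 tasks, 5 workers: 5 chunks decided up front
lengths(chunks)                 ## each chunk holds 19 or 20 indices
## The proposed chunk.size argument would control how finely the input is cut;
## for the load-balancing variant, smaller chunks are handed out as workers
## become free.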
2018 Mar 01
0
[parallel] fixes load balancing of parLapplyLB
Dear Tomas,
Thanks for your commitment to fixing this issue and also to adding the chunk size as an argument. If you want our input, let us know ;)
Best Regards
On 02/26/2018 04:01 PM, Tomas Kalibera wrote:
> Dear Christian and Henrik,
>
> thank you for spotting the problem and suggestions for a fix. We'll probably add a chunk.size argument to parLapplyLB and parLapply to follow OpenMP
2018 Feb 19
2
[parallel] fixes load balancing of parLapplyLB
Hi, I'm trying to understand the rationale for your proposed amount of
splitting and, more precisely, why that one is THE one.
If I put labels on your example numbers in one of your previous posts:
nbrOfElements <- 97
nbrOfWorkers <- 5
With these, there are two extremes in how you can split up the
processing into chunks such that all workers are utilized:
(A) Each worker, called
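The excerpt is cut off here, but the two extremes it sets up can be sketched
as follows (chunkSizesA and chunkSizesB are my own illustrative names):

nbrOfElements <- 97
nbrOfWorkers  <- 5

## (A) One chunk per worker: 5 large chunks, as evenly sized as possible,
##     assigned up front -- no load balancing during the run.
chunkSizesA <- rep(nbrOfElements %/% nbrOfWorkers, nbrOfWorkers) +
  (seq_len(nbrOfWorkers) <= nbrOfElements %% nbrOfWorkers)
chunkSizesA   ## 20 20 19 19 19

## (B) One element per chunk: 97 chunks of size 1, dispatched one at a time as
##     workers become free -- maximal balancing, maximal dispatch overhead.
chunkSizesB <- rep(1, nbrOfElements)
length(chunkSizesB)   ## 97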