Displaying 10 results from an estimated 10 matches for "dynamicclusterapply".
2014 Dec 06
1
does parLapplyLB do load-balancing?
Looking at parLapplyLB, one sees that it takes in X and then passes
splitList(X, length(cl)) to clusterApplyLB, which then calls
dynamicClusterApply. Thus while dynamicClusterApply does handle tasks
in a load-balancing fashion, sending out individual tasks as previous
tasks complete, parLapplyLB preempts that by splitting up the tasks in
advance into as many groups of tasks as there are cluster processes.
This seems to defeat the purpose of lo...
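The pre-split described above can be sketched with a toy chunker (`split_even` is a hypothetical stand-in for illustration, not the actual `parallel:::splitList`):

```r
## Hypothetical stand-in for the pre-split step (not parallel:::splitList):
## divide x into k roughly equal, contiguous chunks.
split_even <- function(x, k) {
  split(x, cut(seq_along(x), k, labels = FALSE))
}

## 100 tasks and 5 workers yield exactly one 20-task chunk per worker,
## so dynamicClusterApply has only 5 "tasks" to hand out -- one each --
## and no load balancing can actually occur.
unname(lengths(split_even(1:100, 5)))
#> [1] 20 20 20 20 20
```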
2018 Feb 12
2
[parallel] fixes load balancing of parLapplyLB
...d=16792
## The Call Chain
First, we traced the relevant R function calls through the code, beginning with `parLapplyLB`:
1. **parLapplyLB:** clusterApply.R:177, calls **splitList**, then **clusterApplyLB**
2. **splitList:** clusterApply.R:157
3. **clusterApplyLB:** clusterApply.R:87, calls **dynamicClusterApply**
4. **dynamicClusterApply:** clusterApply.R:39
## splitList
We used both our whiteboard and an R session to manually *run* a few examples. We were using lists of 100 elements and 5
workers. First, let's take a look at **splitList**:
```r
> sapply(parallel:::splitList(1:100, 5), length)
[1]...
```
2018 Feb 26
2
[parallel] fixes load balancing of parLapplyLB
...and parLapply to
follow OpenMP terminology, which has already been an inspiration for the
present code (parLapply already implements static scheduling via
internal function staticClusterApply, yet with a fixed chunk size;
parLapplyLB already implements dynamic scheduling via internal function
dynamicClusterApply, but with a fixed chunk size set to an unlucky value
so that it behaves like static scheduling). The default chunk size for
parLapplyLB will be set so that there is some dynamism in the
schedule even by default. I am now testing a patch with these changes.
Best
Tomas
On 02/20/2018 11:45...
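The degenerate case Tomas describes, a chunk size so large that dynamic scheduling collapses into static scheduling, can be illustrated with a small makespan simulation (`simulate_dynamic` is a hypothetical sketch, not the `parallel` internals):

```r
## Hypothetical greedy scheduler: each chunk goes to whichever worker is
## free first (a sketch of dynamic scheduling, not the parallel internals).
simulate_dynamic <- function(task_times, workers, chunk_size) {
  chunks <- split(task_times, ceiling(seq_along(task_times) / chunk_size))
  free_at <- numeric(workers)          # time at which each worker is free
  for (ch in chunks) {
    w <- which.min(free_at)            # next free worker takes the chunk
    free_at[w] <- free_at[w] + sum(ch)
  }
  max(free_at)                         # makespan
}

times <- c(rep(1, 96), rep(25, 4))     # a few expensive tasks at the end
simulate_dynamic(times, workers = 4, chunk_size = 25)  # one chunk/worker: 121
simulate_dynamic(times, workers = 4, chunk_size = 1)   # fully dynamic: 49
```

With the chunk size fixed at n/p, the whole expensive tail lands on one worker; with smaller chunks the costly tasks spread across workers, which is the dynamism the default chunk size is meant to preserve.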
2018 Feb 19
2
[parallel] fixes load balancing of parLapplyLB
...we traced the relevant R function calls through the code, beginning with `parLapplyLB`:
>>
>> 1. **parLapplyLB:** clusterApply.R:177, calls **splitList**, then **clusterApplyLB**
>> 2. **splitList:** clusterApply.R:157
>> 3. **clusterApplyLB:** clusterApply.R:87, calls **dynamicClusterApply**
>> 4. **dynamicClusterApply:** clusterApply.R:39
>>
>>
>> ## splitList
>>
>> We used both our whiteboard and an R session to manually *run* a few examples. We were using lists of 100 elements and 5
>> workers. First, let's take a look at **splitList**:
2018 Feb 19
0
[parallel] fixes load balancing of parLapplyLB
> First, we traced the relevant R function calls through the code, beginning with `parLapplyLB`:
>
> 1. **parLapplyLB:** clusterApply.R:177, calls **splitList**, then **clusterApplyLB**
> 2. **splitList:** clusterApply.R:157
> 3. **clusterApplyLB:** clusterApply.R:87, calls **dynamicClusterApply**
> 4. **dynamicClusterApply:** clusterApply.R:39
>
>
> ## splitList
>
> We used both our whiteboard and an R session to manually *run* a few examples. We were using lists of 100 elements and 5
> workers. First, let's take a look at **splitList**:
>
> ```r
>> sa...
2018 Mar 01
0
[parallel] fixes load balancing of parLapplyLB
...plyLB and parLapply to follow OpenMP terminology, which has already been an inspiration for the present code (parLapply already implements static scheduling via internal function staticClusterApply, yet with a fixed chunk size; parLapplyLB already implements dynamic scheduling via internal function dynamicClusterApply, but with a fixed chunk size set to an unlucky value so that it behaves like static scheduling). The default chunk size for parLapplyLB will be set so that there is some dynamism in the schedule even by default. I am now testing a patch with these changes.
>
> Best
> Tomas
>
>...
2018 Feb 20
0
[parallel] fixes load balancing of parLapplyLB
...nction calls through the code, beginning with `parLapplyLB`:
>>>
>>> 1. **parLapplyLB:** clusterApply.R:177, calls **splitList**, then **clusterApplyLB**
>>> 2. **splitList:** clusterApply.R:157
>>> 3. **clusterApplyLB:** clusterApply.R:87, calls **dynamicClusterApply**
>>> 4. **dynamicClusterApply:** clusterApply.R:39
>>>
>>>
>>> ## splitList
>>>
>>> We used both our whiteboard and an R session to manually *run* a few examples. We were using lists of 100 elements and 5
>>> workers. First, let's...
2013 Feb 07
1
R intermittently crashes across cluster
...imes it will run for 50
iterations of this loop then crash. Sometimes 15 iterations,
sometimes 2. When the crash happens, I receive the following error
message every time:
Error in checkForRemoteErrors(val) :
one node produced an error: cannot open the connection
Calls: clusterApplyLB -> dynamicClusterApply -> checkForRemoteErrors
Execution halted
Any ideas as to what might be going on? I have run this code
successfully many times when I do not use the loop. I have a lot of
data to process and recreating the cluster every time that I want to
run my function is a waste of time.
Thanx,
Ken
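As a stopgap while the connection failure itself is undiagnosed, remote errors can be caught per task so one bad iteration does not halt the whole run (a sketch; `safe_fun` is a hypothetical example function):

```r
library(parallel)

## Catch errors on the worker so checkForRemoteErrors() never sees them;
## failures come back as character messages instead of halting execution.
safe_fun <- function(x) {
  tryCatch(if (x == 3) stop("boom") else x^2,
           error = function(e) conditionMessage(e))
}

cl <- makeCluster(2)
res <- clusterApplyLB(cl, 1:4, safe_fun)   # list(1, 4, "boom", 16)
stopCluster(cl)
```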
2024 Mar 25
1
Wish: a way to track progress of parallel operations
...feasible to add an optional .progress argument after the
ellipsis to parLapply() and its friends? We can require it to be a
function accepting (done_chunk, total_chunks, ...). If not a new
argument, what other interfaces could be used to get accurate progress
information from staticClusterApply and dynamicClusterApply?
I understand that the default parLapply() behaviour is not very
amenable to progress tracking, but when running clusterMap(.scheduling
= 'dynamic') spanning multiple hours if not whole days, having progress
information sets the mind at ease.
I would be happy to prepare code and documenta...
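A user-space approximation of the proposed callback, chunking manually and reporting after each chunk (`lapply_with_progress` and the `.progress` signature follow the proposal's names but are hypothetical, not an existing `parallel` API):

```r
library(parallel)

## Hypothetical wrapper: run fun over x in `chunks` batches, calling
## .progress(done_chunk, total_chunks) after each batch completes.
lapply_with_progress <- function(cl, x, fun, .progress, chunks = 10) {
  idx <- split(seq_along(x), cut(seq_along(x), chunks, labels = FALSE))
  out <- vector("list", length(x))
  for (i in seq_along(idx)) {
    out[idx[[i]]] <- parLapply(cl, x[idx[[i]]], fun)
    .progress(i, length(idx))
  }
  out
}

cl <- makeCluster(2)
res <- lapply_with_progress(cl, 1:20, sqrt,
                            .progress = function(done, total)
                              message("chunk ", done, " of ", total))
stopCluster(cl)
```

Static chunking like this gives coarse but accurate progress; getting the same signal out of staticClusterApply and dynamicClusterApply themselves would need the hook the post asks for.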
2024 Mar 25
3
Wish: a way to track progress of parallel operations
...ptional .progress argument after the
> ellipsis to parLapply() and its friends? We can require it to be a
> function accepting (done_chunk, total_chunks, ...). If not a new
> argument, what other interfaces could be used to get accurate progress
> information from staticClusterApply and dynamicClusterApply?
>
> I understand that the default parLapply() behaviour is not very
> amenable to progress tracking, but when running clusterMap(.scheduling
> = 'dynamic') spanning multiple hours if not whole days, having progress
> information sets the mind at ease.
>
> I would be ha...