Displaying 3 results from an estimated 3 matches for "r3vol".
2018 Apr 11
2
Unreasonably poor performance of replicated volumes
...band network. Iperf3 shows around 23
Gbit/s network bandwidth between each pair of them.
Each server has 3 HDDs put into a 3-way striped thin pool (LVM2) with a logical
volume created on top of it, formatted with xfs. Gluster top reports the
following throughput:
root@fsnode2 ~ $ gluster volume top r3vol write-perf bs 4096 count 524288
> list-cnt 0
> Brick: fsnode2.ibnet:/data/glusterfs/r3vol/brick1/brick
> Throughput 631.82 MBps time 3.3989 secs
> Brick: fsnode6.ibnet:/data/glusterfs/r3vol/brick1/brick
> Throughput 566.96 MBps time 3.7877 secs
> Brick: fsnode4.ibnet:/data/glu...
2018 Apr 12
0
Unreasonably poor performance of replicated volumes
...23
> Gbit/s network bandwidth between each pair of them.
>
> Each server has 3 HDDs put into a 3-way striped thin pool (LVM2) with a
> logical volume created on top of it, formatted with xfs. Gluster top reports
> the following throughput:
>
> root@fsnode2 ~ $ gluster volume top r3vol write-perf bs 4096 count 524288
>> list-cnt 0
>> Brick: fsnode2.ibnet:/data/glusterfs/r3vol/brick1/brick
>> Throughput 631.82 MBps time 3.3989 secs
>> Brick: fsnode6.ibnet:/data/glusterfs/r3vol/brick1/brick
>> Throughput 566.96 MBps time 3.7877 secs
>> Brick:...
2018 Apr 13
1
Unreasonably poor performance of replicated volumes
...bandwidth between each pair of them.
>>
>> Each server has 3 HDDs put into a 3-way striped thin pool (LVM2) with a
>> logical volume created on top of it, formatted with xfs. Gluster top
>> reports the following throughput:
>>
>> root@fsnode2 ~ $ gluster volume top r3vol write-perf bs 4096 count
>>> 524288 list-cnt 0
>>> Brick: fsnode2.ibnet:/data/glusterfs/r3vol/brick1/brick
>>> Throughput 631.82 MBps time 3.3989 secs
>>> Brick: fsnode6.ibnet:/data/glusterfs/r3vol/brick1/brick
>>> Throughput 566.96 MBps time 3.7877...
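The write-perf figures quoted in the thread correspond to writing bs × count = 4096 B × 524288 = 2147483648 bytes (2 GiB) per brick; 2147.48 MB / 3.3989 s works out to the reported 631.8 MBps. A rough local cross-check of raw brick throughput can be sketched with dd (the brick path is taken from the excerpt above; the test filename and use of O_DIRECT are assumptions, and Gluster's internal write-perf test may not behave identically):

```shell
# Reproduce the same 2 GiB write that
# "gluster volume top r3vol write-perf bs 4096 count 524288" performs,
# directly on one brick's filesystem, bypassing Gluster.
BRICK=/data/glusterfs/r3vol/brick1/brick   # path from the listing above

dd if=/dev/zero of="$BRICK/ddtest.bin" bs=4096 count=524288 oflag=direct
rm -f "$BRICK/ddtest.bin"

# Sanity check of the volume of data each write-perf run moves:
echo $((4096 * 524288))   # 2147483648 bytes = 2 GiB
```

Comparing the dd result on the brick against the Gluster-reported MBps helps separate raw-disk limits from replication and network overhead.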