Displaying 4 results from an estimated 4 matches for "vmssd".

2017 Jun 20 · 2 · [ovirt-users] Very poor GlusterFS performance
2017 Jun 20 · 0 · [ovirt-users] Very poor GlusterFS performance
2017 Jun 20 · 5 · [ovirt-users] Very poor GlusterFS performance
2017 Jun 20 · 0 · [ovirt-users] Very poor GlusterFS performance

...gives bandwidth measurements of >= 9.20 Gbits/sec between the three servers).

To put this into perspective: I was getting better behaviour from NFS4 on a gigabit connection than I am with GlusterFS on 10G: that doesn't feel right at all.
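
The measurement tool behind that figure isn't named in the excerpt; a raw TCP throughput number like 9.20 Gbits/sec is typically obtained with something like iperf3, for example (hostnames are this thread's, the invocation is a generic sketch):

    # on one host (e.g. ovirt2), run the server side:
    iperf3 -s

    # from another host, run a 30-second TCP test against it:
    iperf3 -c ovirt2 -t 30

Numbers around 9.2 Gbit/s on a 10G link indicate the network itself is healthy, which is what points the suspicion at the Gluster layer rather than the wire.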
My volume configuration looks like this:

Volume Name: vmssd
Type: Distributed-Replicate
Volume ID: d5a5ddd1-a140-4e0d-b514-701cfe464853
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x (2 + 1) = 6
Transport-type: tcp
Bricks:
Brick1: ovirt3:/gluster/ssd0_vmssd/brick
Brick2: ovirt1:/gluster/ssd0_vmssd/brick
Brick3: ovirt2:/gluster/ssd0_vmssd/brick (arbiter)
...
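
The listing above is standard 'gluster volume info' output for a distributed-replicate volume with arbiters. For reference, a 2 x (2 + 1) layout of this shape is built with Gluster's arbiter create syntax, roughly as sketched below; the first three bricks are the ones visible in the excerpt, while the second replica set's bricks are truncated there, so the last three host:/path arguments are placeholders, not values from the thread:

    gluster volume create vmssd replica 3 arbiter 1 \
        ovirt3:/gluster/ssd0_vmssd/brick \
        ovirt1:/gluster/ssd0_vmssd/brick \
        ovirt2:/gluster/ssd0_vmssd/brick \
        HOST4:/BRICK HOST5:/BRICK HOST6:/BRICK   # placeholders

Every third brick in the argument list becomes the arbiter of its replica set (it holds metadata only, no file data), which is why the info output reports the brick count as 2 x (2 + 1) = 6.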