
Displaying 7 results from an estimated 7 matches for "4e0d".

2017 Jun 20 (2) [ovirt-users] Very poor GlusterFS performance
...perspective: I was getting better behaviour from NFS4 > on a gigabit connection than I am with GlusterFS on 10G: that doesn't > feel right at all. > > My volume configuration looks like this: > > Volume Name: vmssd > Type: Distributed-Replicate > Volume ID: d5a5ddd1-a140-4e0d-b514-701cfe464853 > Status: Started > Snapshot Count: 0 > Number of Bricks: 2 x (2 + 1) = 6 > Transport-type: tcp > Bricks: > Brick1: ovirt3:/gluster/ssd0_vmssd/brick > Brick2: ovirt1:/gluster/ssd0_vmssd/brick > Brick3: ovirt2:/gluster/ssd0_vmssd/brick (arbiter) > Brick4:...
2017 Jun 20 (0) [ovirt-users] Very poor GlusterFS performance
...r behaviour from NFS4 >> on a gigabit connection than I am with GlusterFS on 10G: that doesn't >> feel right at all. >> >> My volume configuration looks like this: >> >> Volume Name: vmssd >> Type: Distributed-Replicate >> Volume ID: d5a5ddd1-a140-4e0d-b514-701cfe464853 >> Status: Started >> Snapshot Count: 0 >> Number of Bricks: 2 x (2 + 1) = 6 >> Transport-type: tcp >> Bricks: >> Brick1: ovirt3:/gluster/ssd0_vmssd/brick >> Brick2: ovirt1:/gluster/ssd0_vmssd/brick >> Brick3: ovirt2:/gluster/ssd0_vm...
2017 Jun 20 (5) [ovirt-users] Very poor GlusterFS performance
...t; on a gigabit connection than I am with GlusterFS on 10G: that doesn't >>> feel right at all. >>> >>> My volume configuration looks like this: >>> >>> Volume Name: vmssd >>> Type: Distributed-Replicate >>> Volume ID: d5a5ddd1-a140-4e0d-b514-701cfe464853 >>> Status: Started >>> Snapshot Count: 0 >>> Number of Bricks: 2 x (2 + 1) = 6 >>> Transport-type: tcp >>> Bricks: >>> Brick1: ovirt3:/gluster/ssd0_vmssd/brick >>> Brick2: ovirt1:/gluster/ssd0_vmssd/brick >>>...
2017 Jun 20 (0) [ovirt-users] Very poor GlusterFS performance
...the three servers). To put this into perspective: I was getting better behaviour from NFS4 on a gigabit connection than I am with GlusterFS on 10G: that doesn't feel right at all. My volume configuration looks like this: Volume Name: vmssd Type: Distributed-Replicate Volume ID: d5a5ddd1-a140-4e0d-b514-701cfe464853 Status: Started Snapshot Count: 0 Number of Bricks: 2 x (2 + 1) = 6 Transport-type: tcp Bricks: Brick1: ovirt3:/gluster/ssd0_vmssd/brick Brick2: ovirt1:/gluster/ssd0_vmssd/brick Brick3: ovirt2:/gluster/ssd0_vmssd/brick (arbiter) Brick4: ovirt3:/gluster/ssd1_vmssd/brick Brick5: ovi...
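For context, a 2 x (2 + 1) distributed-replicate layout like the one quoted above (two replica sets, each with two data bricks and one arbiter) would typically be created along the lines of the sketch below. This is only an illustration reconstructed from the brick list in the excerpt, not the poster's actual command; the last two bricks are truncated in the excerpt, so placeholders stand in for them.

  # Sketch: create a 2 x (2 + 1) distributed-replicate volume with one arbiter
  # per replica set. Brick order matters: every third brick becomes the arbiter.
  gluster volume create vmssd replica 3 arbiter 1 transport tcp \
      ovirt3:/gluster/ssd0_vmssd/brick \
      ovirt1:/gluster/ssd0_vmssd/brick \
      ovirt2:/gluster/ssd0_vmssd/brick \
      ovirt3:/gluster/ssd1_vmssd/brick \
      <brick5> <brick6>          # remaining bricks are truncated in the excerpt
  gluster volume start vmssd
  gluster volume info vmssd      # should report "Number of Bricks: 2 x (2 + 1) = 6"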
2017 Oct 26 (0) not healing one file
Hey Richard, Could you share the following information please? 1. gluster volume info <volname> 2. getfattr output of that file from all the bricks: getfattr -d -e hex -m . <brickpath/filepath> 3. glustershd & glfsheal logs Regards, Karthik On Thu, Oct 26, 2017 at 10:21 AM, Amar Tumballi <atumball at redhat.com> wrote: > On a side note, try recently released health
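For anyone following along, the diagnostics requested above can be gathered roughly as follows. The volume name and paths are placeholders (the heal logs later in this thread suggest the volume is called "home", but treat that as an assumption):

  # Sketch of the requested diagnostics; substitute your own volume name and paths.
  gluster volume info <volname>
  gluster volume heal <volname> info         # list entries still pending heal
  # Run on every brick that hosts a copy of the affected file:
  getfattr -d -e hex -m . <brickpath>/<filepath>
  # The self-heal daemon and heal logs normally live under /var/log/glusterfs/,
  # e.g. glustershd.log and glfsheal-<volname>.log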
2017 Oct 26 (3) not healing one file
On a side note, try the recently released health report tool and see if it diagnoses any issues in the setup. Currently you may have to run it on all three machines. On 26-Oct-2017 6:50 AM, "Amar Tumballi" <atumball at redhat.com> wrote: > Thanks for this report. This week many of the developers are at Gluster > Summit in Prague, will be checking this and respond next
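The tool being referred to appears to be the gluster-health-report utility announced around the same time. The invocation below is an assumption based on that project's documentation, not something stated in the thread; it would be run separately on each of the three nodes.

  # Assumed usage of the gluster-health-report tool; verify against its README.
  pip install gluster-health-report    # installation method is an assumption
  gluster-health-report                # run on each Gluster node in turn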
2017 Oct 26 (2) not healing one file
...BDEC75FBBFCCC66ADDDF3B145 (635c0cdf-259d-496f-9ae7-0691eb4c24ed) on home-client-2 [2017-10-25 10:13:40.626475] W [MSGID: 108015] [afr-self-heal-entry.c:56:afr_selfheal_entry_delete] 0-home-replicate-0: expunging file a3f5a769-8859-48e3-96ca-60a988eb9358/E551EAAA2590DA0FA6946FB51A1FF8A9D705F3E2 (95574e0d-e6f4-4e41-abcd-0b7d6a1919cb) on home-client-2 [2017-10-25 10:13:42.266167] W [MSGID: 108015] [afr-self-heal-entry.c:56:afr_selfheal_entry_delete] 0-home-replicate-0: expunging file a3f5a769-8859-48e3-96ca-60a988eb9358/50661EA881415B62AA2DD42ADA4098761462E5B2 (a8382269-7b79-47aa-a161-7811165b1ddf) o...