
Displaying 8 results from an estimated 8 matches for "becb".

2017 Jul 05
1
op-version for reset-brick (Was: Re: [ovirt-users] Upgrading HC from 4.0 to 4.1)
...ocaldomain.local hostname2=10.10.2.102 [root at ovirt03 ~]#
But not the gluster info on the second and third node that have lost the ovirt01/gl01 host brick information... Eg on ovirt02
[root at ovirt02 peers]# gluster volume info export
Volume Name: export
Type: Replicate
Volume ID: b00e5839-becb-47e7-844f-6ce6ce1b7153
Status: Started
Snapshot Count: 0
Number of Bricks: 0 x (2 + 1) = 2
Transport-type: tcp
Bricks:
Brick1: ovirt02.localdomain.local:/gluster/brick3/export
Brick2: ovirt03.localdomain.local:/gluster/brick3/export
Options Reconfigured:
transport.address-family: inet
performance.r...
2017 Jul 05
1
op-version for reset-brick (Was: Re: [ovirt-users] Upgrading HC from 4.0 to 4.1)
...ocal.
> Please check log file for details.
> Commit failed on ovirt03.localdomain.local. Please check log file for details.
> [root at ovirt01 ~]#
>
> [root at ovirt01 bricks]# gluster volume info export
>
> Volume Name: export
> Type: Replicate
> Volume ID: b00e5839-becb-47e7-844f-6ce6ce1b7153
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x (2 + 1) = 3
> Transport-type: tcp
> Bricks:
> Brick1: gl01.localdomain.local:/gluster/brick3/export
> Brick2: ovirt02.localdomain.local:/gluster/brick3/export
> Brick3: ovirt03.localdomain.l...
2017 Jul 06
1
op-version for reset-brick (Was: Re: [ovirt-users] Upgrading HC from 4.0 to 4.1)
On Thu, Jul 6, 2017 at 3:47 AM, Gianluca Cecchi <gianluca.cecchi at gmail.com> wrote:
> On Wed, Jul 5, 2017 at 6:39 PM, Atin Mukherjee <amukherj at redhat.com> wrote:
>
>> OK, so the log just hints to the following:
>>
>> [2017-07-05 15:04:07.178204] E [MSGID: 106123] [glusterd-mgmt.c:1532:glusterd_mgmt_v3_commit] 0-management: Commit
>>
2017 Jul 05
1
op-version for reset-brick (Was: Re: [ovirt-users] Upgrading HC from 4.0 to 4.1)
...: failed: Commit failed on ovirt02.localdomain.local. Please check log file for details.
Commit failed on ovirt03.localdomain.local. Please check log file for details.
[root at ovirt01 ~]#
[root at ovirt01 bricks]# gluster volume info export
Volume Name: export
Type: Replicate
Volume ID: b00e5839-becb-47e7-844f-6ce6ce1b7153
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: gl01.localdomain.local:/gluster/brick3/export
Brick2: ovirt02.localdomain.local:/gluster/brick3/export
Brick3: ovirt03.localdomain.local:/gluster/brick3/export (arbiter)
Op...
2017 Jul 06
2
op-version for reset-brick (Was: Re: [ovirt-users] Upgrading HC from 4.0 to 4.1)
...ed "iso", that I can use, but I would like to use it as clean after understanding the problem on "export" volume. Currently on "export" volume in fact I have this
[root at ovirt01 ~]# gluster volume info export
Volume Name: export
Type: Replicate
Volume ID: b00e5839-becb-47e7-844f-6ce6ce1b7153
Status: Started
Snapshot Count: 0
Number of Bricks: 0 x (2 + 1) = 1
Transport-type: tcp
Bricks:
Brick1: gl01.localdomain.local:/gluster/brick3/export
Options Reconfigured: ...
While on the other two nodes
[root at ovirt02 ~]# gluster volume info export
Volume Name: export...
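The inconsistency the thread keeps hitting is visible in the "Number of Bricks" line: gluster prints it as "N x (R + A) = T" (N distribute subvolumes of R data bricks plus A arbiter bricks each, T bricks total), so a healthy replica-3 arbiter volume reads "1 x (2 + 1) = 3", while "0 x (2 + 1) = 2" or "= 1" is internally inconsistent and disagrees with the brick list. A minimal sketch of that sanity check (a hypothetical helper, not a gluster tool; the sample strings are condensed from the snippets above):

```python
import re

def brick_count_consistent(volinfo: str) -> bool:
    """Check that the 'Number of Bricks' line 'N x (R + A) = T' is
    self-consistent and agrees with the Brick1..BrickN lines listed."""
    m = re.search(r"Number of Bricks: (\d+) x \((\d+) \+ (\d+)\) = (\d+)", volinfo)
    n, r, a, t = map(int, m.groups())
    listed = len(re.findall(r"^Brick\d+:", volinfo, re.MULTILINE))
    return n * (r + a) == t and t == listed

# Output as seen on a healthy node (hostnames shortened)
healthy = """Number of Bricks: 1 x (2 + 1) = 3
Brick1: gl01:/gluster/brick3/export
Brick2: ovirt02:/gluster/brick3/export
Brick3: ovirt03:/gluster/brick3/export"""

# Output as seen on ovirt02/ovirt03 after the failed reset-brick commit
broken = """Number of Bricks: 0 x (2 + 1) = 2
Brick1: ovirt02:/gluster/brick3/export
Brick2: ovirt03:/gluster/brick3/export"""

print(brick_count_consistent(healthy))  # True
print(brick_count_consistent(broken))   # False: 0 x 3 != 2
```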
2017 Jul 05
1
op-version for reset-brick (Was: Re: [ovirt-users] Upgrading HC from 4.0 to 4.1)
...
> But not the gluster info on the second and third node that have lost the ovirt01/gl01 host brick information...
>
> Eg on ovirt02
>
> [root at ovirt02 peers]# gluster volume info export
>
> Volume Name: export
> Type: Replicate
> Volume ID: b00e5839-becb-47e7-844f-6ce6ce1b7153
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 0 x (2 + 1) = 2
> Transport-type: tcp
> Bricks:
> Brick1: ovirt02.localdomain.local:/gluster/brick3/export
> Brick2: ovirt03.localdomain.local:/gluster/brick3/export
> Options Reconfigured:
>...
2017 Jul 05
2
op-version for reset-brick (Was: Re: [ovirt-users] Upgrading HC from 4.0 to 4.1)
On Wed, Jul 5, 2017 at 8:16 PM, Gianluca Cecchi <gianluca.cecchi at gmail.com> wrote:
>
> On Wed, Jul 5, 2017 at 7:42 AM, Sahina Bose <sabose at redhat.com> wrote:
>
>> ...
>>
>> then the commands I need to run would be:
>>
>> gluster volume reset-brick export
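The command above is truncated in the snippet; the general reset-brick workflow in GlusterFS is a two-step start / commit pair. A minimal dry-run sketch of that sequence (commands are echoed rather than executed, since it needs a live cluster; the volume and brick path are taken from the snippets above, and "commit force" is required when re-adding the same brick path):

```shell
# Volume and brick as they appear in the thread's gluster output
VOL=export
BRICK=gl01.localdomain.local:/gluster/brick3/export

# Step 1: take the brick offline for maintenance
echo gluster volume reset-brick "$VOL" "$BRICK" start

# Step 2: bring the same brick back (source and target brick identical,
# hence 'commit force')
echo gluster volume reset-brick "$VOL" "$BRICK" "$BRICK" commit force
```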
2009 Jul 23
1
[PATCH server] changes required for fedora rawhide inclusion.