Displaying 7 results from an estimated 7 matches for "844f".
2017 Jul 05 · 1 · op-version for reset-brick (Was: Re: [ovirt-users] Upgrading HC from 4.0 to 4.1)
....local
hostname2=10.10.2.102
[root at ovirt03 ~]#
But not the gluster info on the second and third node that have lost the
ovirt01/gl01 host brick information...
Eg on ovirt02
[root at ovirt02 peers]# gluster volume info export
Volume Name: export
Type: Replicate
Volume ID: b00e5839-becb-47e7-844f-6ce6ce1b7153
Status: Started
Snapshot Count: 0
Number of Bricks: 0 x (2 + 1) = 2
Transport-type: tcp
Bricks:
Brick1: ovirt02.localdomain.local:/gluster/brick3/export
Brick2: ovirt03.localdomain.local:/gluster/brick3/export
Options Reconfigured:
transport.address-family: inet
performance.readdir-ahe...
2017 Jul 05 · 1 · op-version for reset-brick (Was: Re: [ovirt-users] Upgrading HC from 4.0 to 4.1)
...Please check log file for details.
> Commit failed on ovirt03.localdomain.local. Please check log file for
> details.
> [root at ovirt01 ~]#
>
> [root at ovirt01 bricks]# gluster volume info export
>
> Volume Name: export
> Type: Replicate
> Volume ID: b00e5839-becb-47e7-844f-6ce6ce1b7153
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x (2 + 1) = 3
> Transport-type: tcp
> Bricks:
> Brick1: gl01.localdomain.local:/gluster/brick3/export
> Brick2: ovirt02.localdomain.local:/gluster/brick3/export
> Brick3: ovirt03.localdomain.local:/glus...
2017 Jul 06 · 1 · op-version for reset-brick (Was: Re: [ovirt-users] Upgrading HC from 4.0 to 4.1)
On Thu, Jul 6, 2017 at 3:47 AM, Gianluca Cecchi <gianluca.cecchi at gmail.com>
wrote:
> On Wed, Jul 5, 2017 at 6:39 PM, Atin Mukherjee <amukherj at redhat.com>
> wrote:
>
>> OK, so the log just hints to the following:
>>
>> [2017-07-05 15:04:07.178204] E [MSGID: 106123]
>> [glusterd-mgmt.c:1532:glusterd_mgmt_v3_commit] 0-management: Commit
>>
2017 Jul 05 · 1 · op-version for reset-brick (Was: Re: [ovirt-users] Upgrading HC from 4.0 to 4.1)
...Commit failed on ovirt02.localdomain.local.
Please check log file for details.
Commit failed on ovirt03.localdomain.local. Please check log file for
details.
[root at ovirt01 ~]#
[root at ovirt01 bricks]# gluster volume info export
Volume Name: export
Type: Replicate
Volume ID: b00e5839-becb-47e7-844f-6ce6ce1b7153
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: gl01.localdomain.local:/gluster/brick3/export
Brick2: ovirt02.localdomain.local:/gluster/brick3/export
Brick3: ovirt03.localdomain.local:/gluster/brick3/export (arbiter)
Options Reco...
2017 Jul 06 · 2 · op-version for reset-brick (Was: Re: [ovirt-users] Upgrading HC from 4.0 to 4.1)
...so", that I can use, but I would like to use
it as clean after understanding the problem on "export" volume.
Currently on "export" volume in fact I have this
[root at ovirt01 ~]# gluster volume info export
Volume Name: export
Type: Replicate
Volume ID: b00e5839-becb-47e7-844f-6ce6ce1b7153
Status: Started
Snapshot Count: 0
Number of Bricks: 0 x (2 + 1) = 1
Transport-type: tcp
Bricks:
Brick1: gl01.localdomain.local:/gluster/brick3/export
Options Reconfigured:
...
While on the other two nodes
[root at ovirt02 ~]# gluster volume info export
Volume Name: export
Type: Rep...
2017 Jul 05 · 1 · op-version for reset-brick (Was: Re: [ovirt-users] Upgrading HC from 4.0 to 4.1)
...> But not the gluster info on the second and third node that have lost the
> ovirt01/gl01 host brick information...
>
> Eg on ovirt02
>
>
> [root at ovirt02 peers]# gluster volume info export
>
> Volume Name: export
> Type: Replicate
> Volume ID: b00e5839-becb-47e7-844f-6ce6ce1b7153
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 0 x (2 + 1) = 2
> Transport-type: tcp
> Bricks:
> Brick1: ovirt02.localdomain.local:/gluster/brick3/export
> Brick2: ovirt03.localdomain.local:/gluster/brick3/export
> Options Reconfigured:
> transport...
2017 Jul 05 · 2 · op-version for reset-brick (Was: Re: [ovirt-users] Upgrading HC from 4.0 to 4.1)
On Wed, Jul 5, 2017 at 8:16 PM, Gianluca Cecchi <gianluca.cecchi at gmail.com>
wrote:
>
>
> On Wed, Jul 5, 2017 at 7:42 AM, Sahina Bose <sabose at redhat.com> wrote:
>
>>
>>
>>> ...
>>>
>>> then the commands I need to run would be:
>>>
>>> gluster volume reset-brick export
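The excerpt above cuts off mid-command. For context, the reset-brick operation the thread discusses is normally a two-step sequence (take the brick offline, then commit it back under its new host identity). The sketch below is an assumption based on the hostnames and brick paths visible in the `gluster volume info export` output quoted above, not the poster's verbatim commands; reset-brick also requires the cluster op-version to be at the level of the GlusterFS release that introduced it (3.9), which is the op-version question the thread subject refers to.

```shell
# Hedged sketch of the reset-brick sequence for the "export" volume.
# Hostnames/paths are taken from the volume info output quoted above;
# the target hostname ovirt01.localdomain.local is an assumption.

# 1. Take the brick offline on the node whose hostname is changing:
gluster volume reset-brick export \
    gl01.localdomain.local:/gluster/brick3/export start

# 2. Re-add the same brick path under the new hostname; self-heal then
#    resynchronises the brick from the other replicas:
gluster volume reset-brick export \
    gl01.localdomain.local:/gluster/brick3/export \
    ovirt01.localdomain.local:/gluster/brick3/export commit force
```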