Displaying 11 results from an estimated 11 matches for "47e7".
2017 Jul 05 · 1 reply · op-version for reset-brick (Was: Re: [ovirt-users] Upgrading HC from 4.0 to 4.1)
...omain.local
hostname2=10.10.2.102
[root@ovirt03 ~]#
But not the gluster info on the second and third node that have lost the
ovirt01/gl01 host brick information...
Eg on ovirt02
[root@ovirt02 peers]# gluster volume info export
Volume Name: export
Type: Replicate
Volume ID: b00e5839-becb-47e7-844f-6ce6ce1b7153
Status: Started
Snapshot Count: 0
Number of Bricks: 0 x (2 + 1) = 2
Transport-type: tcp
Bricks:
Brick1: ovirt02.localdomain.local:/gluster/brick3/export
Brick2: ovirt03.localdomain.local:/gluster/brick3/export
Options Reconfigured:
transport.address-family: inet
performance.readdi...
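A quick way to spot this kind of divergence is to compare the brick layout that each peer reports for the volume. A minimal sketch (hostnames are the ones used in this thread; assumes passwordless ssh from the node you run it on):

```shell
# Compare how each peer sees the "export" volume's brick layout.
# A mismatch in "Number of Bricks" or in the brick list between
# peers indicates stale volume configuration on some nodes.
for h in ovirt01 ovirt02 ovirt03; do
    echo "== $h =="
    ssh "$h" "gluster volume info export | grep -E 'Number of Bricks|^Brick[0-9]'"
done
```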
2017 Jul 05 · 1 reply · op-version for reset-brick (Was: Re: [ovirt-users] Upgrading HC from 4.0 to 4.1)
...> Please check log file for details.
> Commit failed on ovirt03.localdomain.local. Please check log file for
> details.
> [root@ovirt01 ~]#
>
> [root@ovirt01 bricks]# gluster volume info export
>
> Volume Name: export
> Type: Replicate
> Volume ID: b00e5839-becb-47e7-844f-6ce6ce1b7153
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x (2 + 1) = 3
> Transport-type: tcp
> Bricks:
> Brick1: gl01.localdomain.local:/gluster/brick3/export
> Brick2: ovirt02.localdomain.local:/gluster/brick3/export
> Brick3: ovirt03.localdomain.local:...
2017 Jul 06 · 1 reply · op-version for reset-brick (Was: Re: [ovirt-users] Upgrading HC from 4.0 to 4.1)
On Thu, Jul 6, 2017 at 3:47 AM, Gianluca Cecchi <gianluca.cecchi at gmail.com>
wrote:
> On Wed, Jul 5, 2017 at 6:39 PM, Atin Mukherjee <amukherj at redhat.com>
> wrote:
>
>> OK, so the log just hints to the following:
>>
>> [2017-07-05 15:04:07.178204] E [MSGID: 106123]
>> [glusterd-mgmt.c:1532:glusterd_mgmt_v3_commit] 0-management: Commit
>>
2017 Jul 05 · 1 reply · op-version for reset-brick (Was: Re: [ovirt-users] Upgrading HC from 4.0 to 4.1)
...led: Commit failed on ovirt02.localdomain.local.
Please check log file for details.
Commit failed on ovirt03.localdomain.local. Please check log file for
details.
[root@ovirt01 ~]#
[root@ovirt01 bricks]# gluster volume info export
Volume Name: export
Type: Replicate
Volume ID: b00e5839-becb-47e7-844f-6ce6ce1b7153
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: gl01.localdomain.local:/gluster/brick3/export
Brick2: ovirt02.localdomain.local:/gluster/brick3/export
Brick3: ovirt03.localdomain.local:/gluster/brick3/export (arbiter)
Options...
2017 Jul 06 · 2 replies · op-version for reset-brick (Was: Re: [ovirt-users] Upgrading HC from 4.0 to 4.1)
..."iso", that I can use, but I would like to use
it as clean after understanding the problem on "export" volume.
Currently on "export" volume in fact I have this
[root@ovirt01 ~]# gluster volume info export
Volume Name: export
Type: Replicate
Volume ID: b00e5839-becb-47e7-844f-6ce6ce1b7153
Status: Started
Snapshot Count: 0
Number of Bricks: 0 x (2 + 1) = 1
Transport-type: tcp
Bricks:
Brick1: gl01.localdomain.local:/gluster/brick3/export
Options Reconfigured:
...
While on the other two nodes
[root@ovirt02 ~]# gluster volume info export
Volume Name: export
Type...
2017 Jul 05 · 1 reply · op-version for reset-brick (Was: Re: [ovirt-users] Upgrading HC from 4.0 to 4.1)
...>
> But not the gluster info on the second and third node that have lost the
> ovirt01/gl01 host brick information...
>
> Eg on ovirt02
>
>
> [root@ovirt02 peers]# gluster volume info export
>
> Volume Name: export
> Type: Replicate
> Volume ID: b00e5839-becb-47e7-844f-6ce6ce1b7153
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 0 x (2 + 1) = 2
> Transport-type: tcp
> Bricks:
> Brick1: ovirt02.localdomain.local:/gluster/brick3/export
> Brick2: ovirt03.localdomain.local:/gluster/brick3/export
> Options Reconfigured:
> tran...
2017 Jul 05 · 2 replies · op-version for reset-brick (Was: Re: [ovirt-users] Upgrading HC from 4.0 to 4.1)
On Wed, Jul 5, 2017 at 8:16 PM, Gianluca Cecchi <gianluca.cecchi at gmail.com>
wrote:
>
>
> On Wed, Jul 5, 2017 at 7:42 AM, Sahina Bose <sabose at redhat.com> wrote:
>
>>
>>
>>> ...
>>>
>>> then the commands I need to run would be:
>>>
>>> gluster volume reset-brick export
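The truncated command above is the start of the two-step reset-brick flow. A hedged sketch of the full sequence, using the brick path that appears in the volume info earlier in this thread (reset-brick requires a cluster op-version of at least 30900, which is the subject of this thread):

```shell
# Step 1: take the brick offline so its configuration can be changed.
gluster volume reset-brick export \
    gl01.localdomain.local:/gluster/brick3/export start

# ... reconfigure the brick here (e.g. change the hostname/IP it is
# exported under, remount its filesystem, etc.) ...

# Step 2: bring the same (or replacement) brick back into the volume
# and let self-heal resynchronise it.
gluster volume reset-brick export \
    gl01.localdomain.local:/gluster/brick3/export \
    gl01.localdomain.local:/gluster/brick3/export commit force
```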
2017 Dec 11 · 3 replies · Libguestfs Hangs on CentOS 7.4
...../../sdb
/dev/disk/by-path:
total 0
lrwxrwxrwx 1 root root 9 Dec 11 14:05 virtio-pci-0000:00:03.0-scsi-0:0:0:0
-> ../../sda
lrwxrwxrwx 1 root root 9 Dec 11 14:05 virtio-pci-0000:00:03.0-scsi-0:0:1:0
-> ../../sdb
/dev/disk/by-uuid:
total 0
lrwxrwxrwx 1 root root 9 Dec 11 14:05 6c72a8f2-97da-47e7-94d5-939e7caa79bf
-> ../../sdb
/dev/input:
total 0
drwxr-xr-x 2 root root 120 Dec 11 14:05 by-path
crw------- 1 root root 13, 64 Dec 11 14:05 event0
crw------- 1 root root 13, 65 Dec 11 14:05 event1
crw------- 1 root root 13, 66 Dec 11 14:05 event2
crw------- 1 root root 13, 63 Dec 11 14:05...
2017 Oct 26 · 0 replies · not healing one file
Hey Richard,
Could you share the following informations please?
1. gluster volume info <volname>
2. getfattr output of that file from all the bricks
getfattr -d -e hex -m . <brickpath/filepath>
3. glustershd & glfsheal logs
Regards,
Karthik
On Thu, Oct 26, 2017 at 10:21 AM, Amar Tumballi <atumball at redhat.com> wrote:
> On a side note, try recently released health
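The three pieces of information Karthik asks for can be gathered in one pass. A sketch keeping the thread's own placeholders (`<volname>`, `<brickpath/filepath>`); the log locations assume a standard install under /var/log/glusterfs:

```shell
# 1. Volume layout as this node sees it
gluster volume info <volname>

# 2. On each node, the AFR xattrs of the unhealed file on that
#    node's brick (run this on every brick host)
getfattr -d -e hex -m . <brickpath/filepath>

# 3. Self-heal daemon and heal logs (assumed default locations)
less /var/log/glusterfs/glustershd.log
less /var/log/glusterfs/glfsheal-<volname>.log
```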
2017 Oct 26 · 3 replies · not healing one file
On a side note, try recently released health report tool, and see if it
does diagnose any issues in setup. Currently you may have to run it in all
the three machines.
On 26-Oct-2017 6:50 AM, "Amar Tumballi" <atumball at redhat.com> wrote:
> Thanks for this report. This week many of the developers are at Gluster
> Summit in Prague, will be checking this and respond next
2017 Oct 26 · 2 replies · not healing one file
...common.c:1327:afr_log_selfheal] 0-home-replicate-0: Completed metadata selfheal on 3cf62f21-9f35-4b14-9fbe-55165732978b. sources=0 [2] sinks=1
[2017-10-25 10:40:35.181608] I [MSGID: 108026] [afr-self-heal-common.c:1327:afr_log_selfheal] 0-home-replicate-0: Completed data selfheal on 2bd9d081-9d0a-47e7-897f-45d864856d50. sources=0 [2] sinks=1
[2017-10-25 10:40:35.185550] I [MSGID: 108026] [afr-self-heal-metadata.c:52:__afr_selfheal_metadata_do] 0-home-replicate-0: performing metadata selfheal on 3c1b6a6b-1584-460b-93a6-29078de908a2
[2017-10-25 10:40:35.188351] I [MSGID: 108026] [afr-self-heal-c...