Displaying 6 results from an estimated 6 matches for "kworker02".
2018 May 10 · 2 replies · broken gluster config
...-r--. 2 root root 0 May 8 14:32 file1
-rw-r--r--. 2 root root 0 May 9 14:41 file1-1
-rw-------. 2 root root 85912715264 May 10 12:18 kubernetes-template.qcow2
-rw-------. 2 root root 0 May 10 12:08 kworker01.qcow2
-rw-------. 2 root root 0 May 10 12:08 kworker02.qcow2
[root@glusterp1 gv0]#
while on glusterp2:
[root@glusterp2 gv0]# ls -l glusterp1/images/
total 11209084
-rw-------. 2 root root 107390828544 May 9 14:45 centos-server-001.qcow2
-rw-r--r--. 2 root root 0 May 8 14:32 file1
-rw-r--r--. 2 root root 0 May 9 14:41 file1-1
-rw---...
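When one brick shows a zero-byte copy and another shows the full file, as
above, comparing the AFR extended attributes on each brick copy is the usual
way to confirm which replica is stale. A minimal diagnostic sketch, assuming
the brick paths shown in these listings:

[root@glusterp2 ~]# getfattr -d -m . -e hex \
    /bricks/brick1/gv0/glusterp1/images/kworker01.qcow2
# non-zero trusted.afr.gv0-client-* counters record pending heals against
# the other bricks; the copy that every peer accuses is the stale one

Run the same getfattr on each server and compare the trusted.afr values.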
2018 May 10 · 0 replies · broken gluster config
...in split-brain
Status: Connected
Number of entries: 1
Brick glusterp2:/bricks/brick1/gv0
<gfid:eafb8799-4e7a-4264-9213-26997c5a4693> - Is in split-brain
/glusterp1/images/centos-server-001.qcow2
/glusterp1/images/kubernetes-template.qcow2
/glusterp1/images/kworker01.qcow2
/glusterp1/images/kworker02.qcow2
Status: Connected
Number of entries: 5
Brick glusterp3:/bricks/brick1/gv0
<gfid:eafb8799-4e7a-4264-9213-26997c5a4693> - Is in split-brain
/glusterp1/images/centos-server-001.qcow2
/glusterp1/images/kubernetes-template.qcow2
/glusterp1/images/kworker01.qcow2
/glusterp1/images/kworker02...
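The listing above is heal-info output. Once the good copies are identified,
per-file split-brain can be resolved by nominating a source brick; a minimal
sketch, assuming volume gv0 and that glusterp2 holds the good copies:

[root@glusterp1 ~]# gluster volume heal gv0 info split-brain
[root@glusterp1 ~]# gluster volume heal gv0 split-brain source-brick \
    glusterp2:/bricks/brick1/gv0 /glusterp1/images/kworker01.qcow2
# repeat the source-brick command for each file reported in split-brain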
2018 May 10 · 2 replies · broken gluster config
...er of entries: 1
>
> Brick glusterp2:/bricks/brick1/gv0
> <gfid:eafb8799-4e7a-4264-9213-26997c5a4693> - Is in split-brain
>
> /glusterp1/images/centos-server-001.qcow2
> /glusterp1/images/kubernetes-template.qcow2
> /glusterp1/images/kworker01.qcow2
> /glusterp1/images/kworker02.qcow2
> Status: Connected
> Number of entries: 5
>
> Brick glusterp3:/bricks/brick1/gv0
> <gfid:eafb8799-4e7a-4264-9213-26997c5a4693> - Is in split-brain
>
> /glusterp1/images/centos-server-001.qcow2
> /glusterp1/images/kubernetes-template.qcow2
> /glusterp1/images/...
2018 May 10 · 0 replies · broken gluster config
...t; Brick glusterp2:/bricks/brick1/gv0
>> <gfid:eafb8799-4e7a-4264-9213-26997c5a4693> - Is in split-brain
>>
>> /glusterp1/images/centos-server-001.qcow2
>> /glusterp1/images/kubernetes-template.qcow2
>> /glusterp1/images/kworker01.qcow2
>> /glusterp1/images/kworker02.qcow2
>> Status: Connected
>> Number of entries: 5
>>
>> Brick glusterp3:/bricks/brick1/gv0
>> <gfid:eafb8799-4e7a-4264-9213-26997c5a4693> - Is in split-brain
>>
>> /glusterp1/images/centos-server-001.qcow2
>> /glusterp1/images/kubernetes-templa...
2018 May 10 · 0 replies · broken gluster config
Show us output from: gluster v status
It should be easy to fix. Stop the gluster daemon on that node, mount the
brick, then start the gluster daemon again (a command sketch follows this message).
Check: gluster v status
Does it show the brick up?
HTH,
Diego
On Wed, May 9, 2018, 20:01 Thing <thing.thing at gmail.com> wrote:
> Hi,
>
> I have 3 CentOS 7.4 machines set up as a 3-way raid 1.
>
> Due to an oopsie on my part for
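A minimal sketch of the sequence Diego describes, assuming glusterp1 is the
affected node and /bricks/brick1/gv0 has an fstab entry (verify the device and
mount point before running any of this):

[root@glusterp1 ~]# systemctl stop glusterd
[root@glusterp1 ~]# mount /bricks/brick1/gv0    # assumes the brick filesystem is in /etc/fstab
[root@glusterp1 ~]# systemctl start glusterd
[root@glusterp1 ~]# gluster v status            # the glusterp1 brick should now report Online: Y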
2018 May 10 · 2 replies · broken gluster config
Hi,
I have 3 CentOS 7.4 machines set up as a 3-way raid 1.
Due to an oopsie on my part, glusterp1's /bricks/brick1/gv0 didn't mount on
boot, and as a result it's empty.
Meanwhile I have data on glusterp2 /bricks/brick1/gv0 and glusterp3
/bricks/brick1/gv0 as expected.
Is there a way to get glusterp1's gv0 to sync off the other two? There must
be, but
I have looked at the gluster docs and I
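For the question itself, the usual answer is to remount the missing brick,
restart glusterd, and let self-heal repopulate it; a minimal sketch, assuming
the volume is named gv0 as in the listings above:

[root@glusterp1 ~]# gluster volume heal gv0 full
[root@glusterp1 ~]# gluster volume heal gv0 info    # entry counts should drop as files resync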