[root@glusterp1 gv0]# !737
gluster v status
Status of volume: gv0
Gluster process                              TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick glusterp1:/bricks/brick1/gv0           49152     0          Y       5229
Brick glusterp2:/bricks/brick1/gv0           49152     0          Y       2054
Brick glusterp3:/bricks/brick1/gv0           49152     0          Y       2110
Self-heal Daemon on localhost                N/A       N/A        Y       5219
Self-heal Daemon on glusterp2                N/A       N/A        Y       1943
Self-heal Daemon on glusterp3                N/A       N/A        Y       2067
Task Status of Volume gv0
------------------------------------------------------------------------------
There are no active volume tasks
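
All three bricks and all three self-heal daemons show Online = Y above. For per-brick detail beyond this summary, the status command takes extra arguments; a minimal sketch, assuming the volume name gv0 from this thread:

gluster volume status gv0 detail    # per-brick disk space and inode usage
gluster volume status gv0 clients   # clients connected to each brick
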
[root@glusterp1 gv0]# ls -l glusterp1/images/
total 2877064
-rw-------. 2 root root 107390828544 May 10 12:18 centos-server-001.qcow2
-rw-r--r--. 2 root root 0 May 8 14:32 file1
-rw-r--r--. 2 root root 0 May 9 14:41 file1-1
-rw-------. 2 root root 85912715264 May 10 12:18 kubernetes-template.qcow2
-rw-------. 2 root root 0 May 10 12:08 kworker01.qcow2
-rw-------. 2 root root 0 May 10 12:08 kworker02.qcow2
[root@glusterp1 gv0]#
while on glusterp2:
[root@glusterp2 gv0]# ls -l glusterp1/images/
total 11209084
-rw-------. 2 root root 107390828544 May 9 14:45 centos-server-001.qcow2
-rw-r--r--. 2 root root 0 May 8 14:32 file1
-rw-r--r--. 2 root root 0 May 9 14:41 file1-1
-rw-------. 2 root root 85912715264 May 9 15:59 kubernetes-template.qcow2
-rw-------. 2 root root 3792371712 May 9 16:15 kworker01.qcow2
-rw-------. 2 root root 3792371712 May 10 11:20 kworker02.qcow2
[root@glusterp2 gv0]#
So some files have re-synced, but not the kworker machine images, and
network activity has stopped.
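
One way to check whether the self-heal daemon is still working through a
backlog (a sketch using the standard heal subcommands, not output from this
setup):

gluster volume heal gv0 info                   # entries still pending heal, per brick
gluster volume heal gv0 statistics heal-count  # just the pending counts, handy for watching progress

Note also that plain ls -l shows apparent sizes; ls -ls or du -h shows
allocated blocks, which is likely why the total lines differ between the two
listings above even where the apparent sizes match.
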
On 10 May 2018 at 12:05, Diego Remolina <dijuremo@gmail.com> wrote:
> Show us output from: gluster v status
>
> It should be easy to fix: stop the gluster daemon on that node, mount the
> brick, then start the gluster daemon again (sketched below).
>
> Check: gluster v status
>
> Does it show the brick up?
>
> HTH,
>
> Diego
>
>
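
A sketch of the procedure Diego describes, assuming glusterd is managed by
systemd and the brick filesystem has an fstab entry (both assumptions, not
stated in the thread):

systemctl stop glusterd      # on the affected node, stop the management daemon
mount /bricks/brick1         # remount the brick filesystem (assumes an fstab entry)
systemctl start glusterd     # the brick process should be started again
gluster volume status gv0    # confirm the brick now shows Online = Y
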
> On Wed, May 9, 2018, 20:01 Thing <thing.thing@gmail.com> wrote:
>
>> Hi,
>>
>> I have 3 CentOS 7.4 machines set up as a 3-way replica (RAID-1 style).
>>
>> Due to an oopsie on my part, glusterp1's /bricks/brick1/gv0 didn't mount
>> on boot, and as a result it's empty.
>>
>> Meanwhile I have data on glusterp2 /bricks/brick1/gv0 and glusterp3
>> /bricks/brick1/gv0 as expected.
>>
>> Is there a way to get glusterp1's gv0 to sync off the other 2? There must
>> be, but:
>>
>> I have looked at the Gluster docs and I can't find anything about
>> repairing or resyncing.
>>
>> Where am I meant to look for such info?
>>
>> thanks
>>
>>
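
For the question above: replica repair is handled by the self-heal daemon,
covered under the heal subcommands of gluster volume in the Gluster docs. A
minimal sketch of forcing a resync, assuming volume gv0:

gluster volume heal gv0 full   # crawl the bricks and queue everything that needs healing

Progress can then be watched with the heal info / heal-count commands shown
earlier in the thread.
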