I have a setup with 3 nodes running GlusterFS. The volume was created with:

    gluster volume create myBrick replica 3 node01:/mnt/data/myBrick node02:/mnt/data/myBrick node03:/mnt/data/myBrick

Unfortunately node1 seems to have stopped syncing with the other nodes, and this went undetected for weeks! When I noticed it, I did a "service glusterd restart" on node1, hoping the three nodes would sync again. But this did not happen. Only the CPU load went up on all three nodes, and access times increased as well.

When I look at the physical storage of the bricks, node1 is very different:

    node01:/mnt/data/myBrick : 9GB data
    node02:/mnt/data/myBrick : 12GB data
    node03:/mnt/data/myBrick : 12GB data

How do I sync the data from the healthy nodes node2/node3 back to node1?
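(For reference, a minimal sketch of the diagnostic commands usually run before attempting a repair, assuming the volume name myBrick from the create command above; exact output varies by GlusterFS version:

    # Confirm all peers see each other as connected
    gluster peer status

    # Confirm every brick process and the self-heal daemon are online
    gluster volume status myBrick

    # List entries the self-heal daemon still needs to sync to node01
    gluster volume heal myBrick info

If the self-heal daemon or the node01 brick shows as offline here, healing will not make progress until that is fixed.)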
Hi,

From the information you provided, I am guessing that you have a replica 3 volume configured. In that case you can run "gluster volume heal <volname>", which should do the trick for you.

Regards,
Karthik
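A minimal sketch of the heal workflow Karthik suggests, using the volume name myBrick from the original post; the "full" variant is an assumption for the case where the regular index-based heal does not pick up everything missing on node01:

    # Trigger a heal of all files recorded in the heal index
    gluster volume heal myBrick

    # If data is still missing on node01, a full heal crawls the entire volume
    gluster volume heal myBrick full

    # Monitor progress; the entry counts should drop to zero on all bricks
    gluster volume heal myBrick info

    # Check for split-brain entries that would need manual resolution
    gluster volume heal myBrick info split-brain

A full heal can be I/O intensive on a 12GB data set spread across three replicas, so it is typically run outside peak hours.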