Tom Fite
2019-Apr-01 18:27 UTC
[Gluster-users] Rsync in place of heal after brick failure

Hi all,

I have a very large (65 TB) brick in a replica 2 volume that needs to be
re-copied from scratch. A heal will take a very long time and degrade
performance on the volume, so I investigated using rsync to do the brunt
of the work.

The command:

rsync -av -H -X --numeric-ids --progress server1:/data/brick1/gv0 /data/brick1/

Running with -H ensures that the hard links in .glusterfs are preserved,
and -X preserves all of gluster's extended attributes.

I've tested this in my test environment as follows:

1. Stop glusterd and kill the remaining gluster processes
2. Move the brick directory to a backup dir
3. Run rsync
4. Start glusterd
5. Observe gluster status

All appears to be working correctly. Gluster status reports all bricks
online, all data is accessible in the volume, and I don't see any errors
in the logs.

Anybody else have experience trying this?

Thanks
-Tom
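The test sequence above, spelled out as shell commands. Treat this as a
rough sketch: the volume name (gv0), the hostname, and the paths are taken
from the rsync command above, the backup directory name is made up, and
systemd service management is assumed.

# On the node whose brick is being rebuilt, as root
systemctl stop glusterd            # assumes glusterd runs under systemd
pkill glusterfs                    # stop any remaining client/self-heal processes
pkill glusterfsd                   # stop any remaining brick processes

# Move the old brick contents out of the way (gv0.bak is an example name)
mv /data/brick1/gv0 /data/brick1/gv0.bak

# Copy from the healthy replica, preserving hard links (-H) and xattrs (-X)
rsync -av -H -X --numeric-ids --progress server1:/data/brick1/gv0 /data/brick1/

systemctl start glusterd

# Confirm the brick is back online and check for pending heals
gluster volume status gv0
gluster volume heal gv0 info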
Jim Kinney
2019-Apr-01 20:23 UTC
[Gluster-users] Rsync in place of heal after brick failure
Nice! I didn't use -H -X and the system had to do some clean up. I'll add
this to my next migration process as I move 120 TB to new hard drives.

On Mon, 2019-04-01 at 14:27 -0400, Tom Fite wrote:
> Hi all,
>
> I have a very large (65 TB) brick in a replica 2 volume that needs to be
> re-copied from scratch. A heal will take a very long time and degrade
> performance on the volume, so I investigated using rsync to do the brunt
> of the work.
>
> The command:
>
> rsync -av -H -X --numeric-ids --progress server1:/data/brick1/gv0
> /data/brick1/
>
> Running with -H ensures that the hard links in .glusterfs are preserved,
> and -X preserves all of gluster's extended attributes.
>
> I've tested this in my test environment as follows:
>
> 1. Stop glusterd and kill the remaining gluster processes
> 2. Move the brick directory to a backup dir
> 3. Run rsync
> 4. Start glusterd
> 5. Observe gluster status
>
> All appears to be working correctly. Gluster status reports all bricks
> online, all data is accessible in the volume, and I don't see any errors
> in the logs.
>
> Anybody else have experience trying this?
>
> Thanks
> -Tom

-- 
James P. Kinney III

Every time you stop a school, you will have to build a jail. What you
gain at one end you lose at the other. It's like feeding a dog on his
own tail. It won't fatten the dog.
- Speech 11/23/1900 Mark Twain

http://heretothereideas.blogspot.com/
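A quick way to see what -H and -X preserve is to inspect a file directly on
the brick. The path below is only an illustration; run as root against the
brick path, not through the volume mount:

getfattr -d -m . -e hex /data/brick1/gv0/path/to/file   # gluster xattrs, e.g. trusted.gfid
stat -c '%h %n' /data/brick1/gv0/path/to/file           # link count; regular files should be >= 2
                                                        # because of the hard link under .glusterfs

If the xattrs or the extra hard link are missing after a copy, gluster has
to recreate them, which is likely the sort of clean up described above.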
Poornima Gurusiddaiah
2019-Apr-02 01:55 UTC
[Gluster-users] Rsync in place of heal after brick failure
You could also try xfsdump and xfsrestore if your brick filesystem is xfs
and the destination disk can be attached locally? This will be much faster.

Regards,
Poornima

On Tue, Apr 2, 2019, 12:05 AM Tom Fite <tomfite at gmail.com> wrote:
> Hi all,
>
> I have a very large (65 TB) brick in a replica 2 volume that needs to be
> re-copied from scratch. A heal will take a very long time and degrade
> performance on the volume, so I investigated using rsync to do the brunt
> of the work.
>
> The command:
>
> rsync -av -H -X --numeric-ids --progress server1:/data/brick1/gv0
> /data/brick1/
>
> Running with -H ensures that the hard links in .glusterfs are preserved,
> and -X preserves all of gluster's extended attributes.
>
> I've tested this in my test environment as follows:
>
> 1. Stop glusterd and kill the remaining gluster processes
> 2. Move the brick directory to a backup dir
> 3. Run rsync
> 4. Start glusterd
> 5. Observe gluster status
>
> All appears to be working correctly. Gluster status reports all bricks
> online, all data is accessible in the volume, and I don't see any errors
> in the logs.
>
> Anybody else have experience trying this?
>
> Thanks
> -Tom
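For reference, a local xfsdump/xfsrestore copy of a brick looks roughly
like this. It is only a sketch: the mount points are examples, it assumes
the new disk is already formatted as xfs and mounted on the server holding
the healthy brick, and gluster should be stopped (or the volume otherwise
quiesced) while the copy runs.

# Level-0 dump of the healthy brick filesystem, piped straight into a
# restore onto the new disk; -J skips the dump inventory bookkeeping.
xfsdump -J - /data/brick1 | xfsrestore -J - /mnt/newbrick1

# Afterwards the new disk can be moved to the rebuilt server and mounted
# at the original brick path, e.g. /data/brick1.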