Alex Wakefield
2020-Sep-16 05:33 UTC
[Gluster-users] Correct way to migrate brick to new server (Gluster 6.10)
Hi all,

We have a distribute-replicate Gluster volume running Gluster 6.10 on Ubuntu 18.04 machines. It's a 2 x 2 brick setup (2 bricks, 2 replicas).

We need to migrate the existing bricks to new hardware without downtime and are lost as to what's the proper way to do it. I've found this post [1] which suggests that we can do a replace-brick command and move it to the new server without downtime, but this mailing list thread [2] suggests this isn't the correct way to do it anymore?

The Gluster docs [3] have information for replacing _faulty_ bricks, but our bricks aren't faulty; we just need to move them to new hardware. We've tried using the method mentioned in the docs in the past but have found that the volume gets into weird states where files go into read-only mode or have their permissions set to root:root. It basically plays havoc with the fs mount that the clients use.

Any help would be greatly appreciated. Apologies if I've left any information out.

[1]: https://joejulian.name/post/replacing-a-glusterfs-server-best-practice/
[2]: https://lists.gluster.org/pipermail/gluster-users/2012-October/011502.html
[3]: https://docs.gluster.org/en/latest/Administrator%20Guide/Managing%20Volumes/#replace-faulty-brick

Cheers,
Alex
Ronny Adsetts
2020-Sep-16 10:50 UTC
[Gluster-users] Correct way to migrate brick to new server (Gluster 6.10)
Alex Wakefield wrote on 16/09/2020 06:33:
> We have a distribute replicate gluster volume running Gluster 6.10 on
> Ubuntu 18.04 machines. Its a 2 x 2 brick setup (2 bricks, 2 replicas).
> We need to migrate the existing bricks to new hardware without downtime
> and are lost at whats the proper way to do it. [...]
> We've tried using this method mentioned in the docs in the past but
> have found that the volume gets into weird states where files go into
> read-only mode or have their permissions set to root:root.

Hi,

We had this same quandary in March [1]. We first tested using add-brick/remove-brick, which resulted in permissions/ownership mayhem. After some head scratching, I took the replace-brick approach, which worked fine. Something like so for each brick, waiting for all heals to complete between brick replacements:

$ sudo gluster volume replace-brick volname stor-old-1:/data/glusterfs/volname/brick1/brick stor-new-1:/data/glusterfs/volname/brick1/brick commit force

I did this on a live volume, starting with the data I cared about least. Nerves were properly on edge for the first volume, I can tell you! :-)

I would, if feasible, recommend doing a test migration on a small volume and checksumming the data before and after.

Thanks.
Ronny

[1] https://lists.gluster.org/pipermail/gluster-users/2020-March/037786.html

--
Ronny Adsetts
Technical Director
Amazing Internet Ltd, London
t: +44 20 8977 8943
w: www.amazinginternet.com

Registered office: 85 Waldegrave Park, Twickenham, TW1 4TJ
Registered in England. Company No. 4042957
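For reference, Ronny's per-brick workflow can be sketched as a dry-run script that only prints the commands to run, one replica pair at a time. The volume name, hostnames and brick paths are placeholders taken from his example; adjust them before executing anything for real.

```shell
#!/bin/sh
# Dry-run sketch of the replace-brick migration: print each command rather
# than execute it. VOL, stor-old-N and stor-new-N are placeholder names.
VOL=volname

plan_migration() {
  for N in 1 2; do
    # Move brick N from the old server to the new one, keeping the same path.
    printf 'gluster volume replace-brick %s ' "$VOL"
    printf 'stor-old-%s:/data/glusterfs/%s/brick1/brick ' "$N" "$VOL"
    printf 'stor-new-%s:/data/glusterfs/%s/brick1/brick commit force\n' "$N" "$VOL"
    # Between replacements, poll heal status until no entries remain pending.
    printf 'gluster volume heal %s info\n' "$VOL"
  done
}

plan_migration
```

The key point from Ronny's message is the pause between replacements: only move the next brick once `heal info` reports zero pending entries for the previous one.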
Strahil Nikolov
2020-Sep-16 19:43 UTC
[Gluster-users] Correct way to migrate brick to new server (Gluster 6.10)
Actually, I used 'replace-brick' several times and I had no issues.

Alternatively, I guess you can 'remove-brick replica <old count - 1> <old brick>' and later 'add-brick replica <reduced count + 1> <new brick>' ...

Best Regards,
Strahil Nikolov

On Wednesday, 16 September 2020 at 08:41:29 GMT+3, Alex Wakefield <alexwakefield at fastmail.com.au> wrote:

> We have a distribute replicate gluster volume running Gluster 6.10 on
> Ubuntu 18.04 machines. Its a 2 x 2 brick setup (2 bricks, 2 replicas).
> We need to migrate the existing bricks to new hardware without downtime
> and are lost at whats the proper way to do it. [...]
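Strahil's alternative (reduce the replica count to drop the old brick, then raise it again with the new brick) can likewise be sketched as a dry-run for one replica pair. Names and paths are placeholders, and note the caveat: between the two steps the pair runs without redundancy, which is exactly the window Ronny's replace-brick approach avoids.

```shell
#!/bin/sh
# Dry-run sketch of the remove-brick / add-brick alternative for a replica-2
# pair. VOL, OLD and NEW are placeholder names; nothing is executed here.
VOL=volname
OLD=stor-old-1:/data/glusterfs/$VOL/brick1/brick
NEW=stor-new-1:/data/glusterfs/$VOL/brick1/brick

steps() {
  # Drop the old brick by reducing the replica count from 2 to 1.
  echo "gluster volume remove-brick $VOL replica 1 $OLD force"
  # Re-add at replica 2 on the new host.
  echo "gluster volume add-brick $VOL replica 2 $NEW"
  # Trigger a full heal so data is copied onto the new brick.
  echo "gluster volume heal $VOL full"
}

steps
```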