I'm using 3.7.11, this command works for me:

!remove-brick
[root at node2 ~]# gluster volume remove-brick v1 replica 2 192.168.3.73:/gfs/b1/v1 force
Removing brick(s) can result in data loss. Do you want to Continue? (y/n) y
volume remove-brick commit force: success

Don't know about the commit thingy...

On Thursday, July 28, 2016 3:47 PM, Richard Klein (RSI) <rklein at rsitex.com> wrote:

We are using Gluster 3.7.6 in a replica 2 distributed-replicate configuration. I am wondering, when we do a remove-brick with just one brick pair, will the data be moved off the bricks once the status shows complete and then you do the commit? Also, if you start a remove-brick process, can you stop it? Is there an abort or stop command, or do you just not do the commit?

Any help would be appreciated.

Richard Klein
RSI
I will summarize the procedure for removing a brick, with a description of each step.

1) Start a remove-brick operation using the gluster volume remove-brick ... start command. This marks the mentioned brick as decommissioned and kicks off a process that starts migrating data from the decommissioned brick to the other bricks.

2) Once the migration is finished, you can safely do a remove-brick commit.

3) Or, if you wish to stop the process and reset the decommissioned brick, you can do remove-brick stop. This will not migrate the data back to the decommissioned brick; it will stay on the other bricks and still be accessible. If you want proper load balancing after this, you can start a rebalance process.

4) If you wish to do an instant remove-brick, you can use the force option, which will not migrate data, hence the whole data on the removed brick will be lost from the mount point.
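To put the steps together, here is a minimal example sequence. The volume name (myvol) and the brick pair (server1:/bricks/b1, server2:/bricks/b1) are hypothetical placeholders; replace them with your actual volume and brick paths.

Step 1: start the remove-brick; this marks the brick pair as decommissioned and begins data migration.
  # gluster volume remove-brick myvol server1:/bricks/b1 server2:/bricks/b1 start

Monitor the migration and wait for the status to show completed:
  # gluster volume remove-brick myvol server1:/bricks/b1 server2:/bricks/b1 status

Step 2: once migration is complete, commit to remove the brick pair for good.
  # gluster volume remove-brick myvol server1:/bricks/b1 server2:/bricks/b1 commit

Step 3 (alternative): abort the operation instead of committing, then optionally rebalance.
  # gluster volume remove-brick myvol server1:/bricks/b1 server2:/bricks/b1 stop
  # gluster volume rebalance myvol start

Step 4 (alternative): remove instantly without migrating; the data on the removed bricks is lost from the mount point.
  # gluster volume remove-brick myvol server1:/bricks/b1 server2:/bricks/b1 force

Note that until the commit, the bricks remain part of the volume, so nothing is lost if you change your mind and run stop instead of commit.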