Pranith Kumar Karampuri
2015-Oct-08 18:46 UTC
[Gluster-users] How to replace a dead brick? (3.6.5)
On 3.7.4, all you need to do is execute "gluster volume replace-brick <volname> commit force" and the rest will be taken care of by AFR. We are in the process of coming up with new commands like "gluster volume reset-brick <volname> start/commit" for wiping/re-formatting the disk. So wait just a little longer :-).

Pranith

On 10/08/2015 11:26 AM, Lindsay Mathieson wrote:
> On 8 October 2015 at 07:19, Joe Julian <joe at julianfamily.org> wrote:
>
>> I documented this on my blog at
>> https://joejulian.name/blog/replacing-a-brick-on-glusterfs-340/
>> which is still accurate for the latest version.
>>
>> The bug report I filed for this was closed without resolution. I
>> assume there are no plans for ever making this easy for administrators.
>> https://bugzilla.redhat.com/show_bug.cgi?id=991084
>
> Yes, it's the sort of workaround one can never remember in an
> emergency; you'd have to google it up ...
>
> In the case I was working with, it was probably easier and quicker to do a
> remove-brick/add-brick.
>
> thanks,
>
> --
> Lindsay
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
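[Editor's note: the one-liner above elides the brick arguments; per the CLI usage string "volume replace-brick <VOLNAME> <SOURCE-BRICK> <NEW-BRICK> {commit force}", a full invocation looks roughly like the sketch below. Volume name, hostnames, and paths are illustrative, not from the thread.]

```shell
# Replace a dead brick with a fresh one (illustrative names; the new
# brick path must not overlap any existing brick path).
gluster volume replace-brick myvol \
    server3:/export/sdb1/myvol \
    server3:/export/sdb1/myvol-new \
    commit force

# AFR then heals the new brick from the surviving replicas; progress
# can be watched with:
gluster volume heal myvol info
```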
So... this kinda applies to me too, and I want to get some clarification. I have the following setup:

# gluster volume info

Volume Name: gv0
Type: Replicate
Volume ID: fc50d049-cebe-4a3f-82a6-748847226099
Status: Started
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: eapps-gluster01:/export/sdb1/gv0
Brick2: eapps-gluster02:/export/sdb1/gv0
Brick3: eapps-gluster03:/export/sdb1/gv0
Options Reconfigured:
diagnostics.count-fop-hits: on
diagnostics.latency-measurement: on
nfs.drc: off

eapps-gluster03 had a hard drive failure, so I replaced the drive, formatted it, and now need gluster to be happy again. Gluster put a .glusterfs folder in /export/sdb1/gv0, but nothing else has shown up and the brick is offline. I read the docs on replacing a brick but seem to be missing something and would appreciate some help. Thanks!

--
*Gene Liverman*
Systems Integration Architect
Information Technology Services
University of West Georgia
gliverma at westga.edu

ITS: Making Technology Work for You!

On Thu, Oct 8, 2015 at 2:46 PM, Pranith Kumar Karampuri <pkarampu at redhat.com> wrote:
> On 3.7.4, all you need to do is execute "gluster volume replace-brick
> <volname> commit force" and the rest will be taken care of by AFR. We are
> in the process of coming up with new commands like "gluster volume
> reset-brick <volname> start/commit" for wiping/re-formatting the disk.
> So wait just a little longer :-).
>
> Pranith
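[Editor's note: Gene's scenario (same brick path, disk reformatted) is the case the replace-brick CLI rejects, and it is what the workaround in Joe Julian's blog post linked earlier addresses: re-tag the new empty directory with the volume's ID and let self-heal repopulate it. A hedged sketch using the paths from Gene's volume; verify the exact steps for your version against the blog post.]

```shell
# On a node with a surviving brick, read the volume-id xattr:
getfattr -n trusted.glusterfs.volume-id -e hex /export/sdb1/gv0

# On the rebuilt node (eapps-gluster03), stamp the same volume-id onto
# the freshly formatted brick directory (hex value from the step above):
setfattr -n trusted.glusterfs.volume-id -v 0x<volume-id-hex> /export/sdb1/gv0

# Restart the brick process and trigger a full self-heal:
gluster volume start gv0 force
gluster volume heal gv0 full
```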
On 10/08/2015 11:46 AM, Pranith Kumar Karampuri wrote:
> On 3.7.4, all you need to do is execute "gluster volume replace-brick
> <volname> commit force" and the rest will be taken care of by AFR. We are
> in the process of coming up with new commands like "gluster volume
> reset-brick <volname> start/commit" for wiping/re-formatting the disk.
> So wait just a little longer :-).
>
> Pranith

Nope.

Volume Name: test
Type: Replicate
Volume ID: 426a1719-7cc2-4dac-97b4-67491679e00e
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: questor:/tmp/foo1.1
Brick2: questor:/tmp/foo1.2

Status of volume: test
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick questor:/tmp/foo1.1                   49162     0          Y       20825
Brick questor:/tmp/foo1.2                   49163     0          Y       20859
NFS Server on localhost                     N/A       N/A        N       N/A
Self-heal Daemon on localhost               N/A       N/A        Y       20887

[root at questor]# kill 20825
[root at questor]# rm -rf /tmp/foo1.1
[root at questor]# mkdir /tmp/foo1.1
[root at questor]# gluster volume replace-brick test commit force

Usage:
volume replace-brick <VOLNAME> <SOURCE-BRICK> <NEW-BRICK> {commit force}

[root at questor]# gluster volume replace-brick test questor:/tmp/foo1.1 questor:/tmp/foo1.1 commit force
volume replace-brick: failed: Brick: questor:/tmp/foo1.1 not available. Brick may be containing or be contained by an existing brick
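[Editor's note: the failure in the transcript above is the same-path restriction: replace-brick refuses a new brick that "may be containing or be contained by an existing brick". A hedged continuation of the same test, assuming a distinct path is used for the replacement; the new path name is hypothetical.]

```shell
# Re-run the replacement with a brick path that does not collide with
# the old one (hypothetical path):
mkdir /tmp/foo1.1-new
gluster volume replace-brick test questor:/tmp/foo1.1 questor:/tmp/foo1.1-new commit force
```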
Lindsay Mathieson
2015-Oct-08 21:24 UTC
[Gluster-users] How to replace a dead brick? (3.6.5)
Very nice! Any chance of a wheezy repo? ... ?

Sent from Mail for Windows 10

From: Pranith Kumar Karampuri
Sent: Friday, 9 October 2015 4:46 AM
To: Lindsay Mathieson; Joe Julian
Cc: gluster-users
Subject: Re: [Gluster-users] How to replace a dead brick? (3.6.5)

> On 3.7.4, all you need to do is execute "gluster volume replace-brick
> <volname> commit force" and the rest will be taken care of by AFR. We are
> in the process of coming up with new commands like "gluster volume
> reset-brick <volname> start/commit" for wiping/re-formatting the disk.
> So wait just a little longer :-).
>
> Pranith