On 10/7/2014 1:56 PM, Ryan Nix wrote:
> Hello,
>
> I seem to have hosed my installation while trying to replace a failed
> brick. The instructions for replacing the brick with a different host
> name/IP on the Gluster site are no longer available, so I used the
> instructions from the Red Hat Storage class that I attended last week,
> which assumed the replacement had the same host name.
>
>
> http://community.gluster.org/q/a-replica-node-has-failed-completely-and-must-be-replaced-with-new-empty-hardware-how-do-i-add-the-new-hardware-and-bricks-back-into-the-replica-pair-and-begin-the-healing-process/
>
> It seems the working server (I had two servers with simple replication
> only) will not release the DNS entry of the failed brick.
>
> Is there any way to simply reset Gluster completely?
The simple way to "reset gluster completely" would be to delete the
volume and start over. Sometimes this is the quickest way, especially
if you only have one or two volumes.

If nothing has changed, deleting the volume will not affect the data on
the brick.
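
For what it's worth, the delete itself is just the standard CLI
sequence (the volume name "gv0" is a placeholder):

    # stop the volume first; gluster will not delete a running volume
    gluster volume stop gv0

    # remove the volume definition; data on the bricks is left in place
    gluster volume delete gv0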
You can either:

Find and follow the instructions to delete the "markers" that glusterfs
puts on the brick, in which case the create process should be the same
as any new volume creation.
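
The markers are extended attributes plus a hidden directory on the
brick root. From memory, the cleanup looks roughly like this (the brick
path is only an example; run it on every brick of the old volume):

    # remove the volume-id extended attribute glusterfs set on the brick root
    setfattr -x trusted.glusterfs.volume-id /export/brick1

    # remove the gfid extended attribute as well
    setfattr -x trusted.gfid /export/brick1

    # delete the hidden metadata directory glusterfs created
    rm -rf /export/brick1/.glusterfs

That is essentially what Joe Julian's blog post (linked below) walks
through.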
Otherwise, when you do the "volume create..." step, it will give you an
error, something like 'brick already in use'. You used to be able to
override that by adding --force to the command line. (I have not needed
it lately, so I don't know if it still works.)
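
If memory serves, more recent gluster releases spell the override as a
trailing "force" keyword rather than a --force flag, so the create
would look roughly like this (volume name, hostnames, and paths are
placeholders):

    # recreate the 2-way replicated volume, overriding the
    # 'brick already in use' check
    gluster volume create gv0 replica 2 \
        server1:/export/brick1 server2:/export/brick1 force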
Hope this helps.

Ted Miller
Elkhart, IN

>
> Just to confirm, if I delete the volume so I can start over, deleting the
> volume will not delete the data. Is this correct? Finally, once the
> volume is deleted, do I have to do what Joe Julian recommended here?
>
> http://joejulian.name/blog/glusterfs-path-or-a-prefix-of-it-is-already-part-of-a-volume/
>
> Thanks for any insights.
>
> - Ryan