Alejandro Planas
2014-Mar-13 03:48 UTC
[Gluster-users] Gluster Volume Replication using 2 AWS instances on Autoscaling
Hello,

We have 2 AWS instances, 1 brick on each instance, and one replicated volume spanning both instances. When one of the instances fails completely and autoscaling replaces it with a new one, we are having issues recreating the replicated volume.

Can anyone shed some light on the gluster commands required to include this new replacement instance (with one brick) as a member of the replicated volume?

Best Regards,

Alejandro Planas
Managing Partner
Escala24x7
Office: +1 (754) 816 2390
Mobile: +1 (754) 244 7894
alejandro.planas at escala24x7.com
www.escala24x7.com
Vijay Bellur
2014-Mar-13 16:48 UTC
[Gluster-users] Gluster Volume Replication using 2 AWS instances on Autoscaling
On 03/13/2014 09:18 AM, Alejandro Planas wrote:
> Hello,
>
> We have 2 AWS instances, 1 brick on each instance, and one replicated
> volume spanning both instances. When one of the instances fails
> completely and autoscaling replaces it with a new one, we are having
> issues recreating the replicated volume.
>
> Can anyone shed some light on the gluster commands required to
> include this new replacement instance (with one brick) as a member of
> the replicated volume?

You can probably use:

    volume replace-brick <volname> <old-brick> <new-brick> commit force

This will remove the old brick from the volume and bring the new brick into the volume. Self-heal can then synchronize data to the new brick.

Regards,
Vijay
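[Editor's note] For readers following along, a minimal sketch of the full recovery sequence, assuming a volume named gv0, bricks under /export/brick1, and a replacement instance reachable as server2-new (all of these names are placeholders, not from the thread; adapt to your environment). The replacement instance must be probed into the trusted pool before its brick can be used:

    # From a surviving node, add the replacement instance to the trusted pool
    gluster peer probe server2-new
    gluster peer status

    # Swap the brick on the failed instance for the brick on the replacement instance
    gluster volume replace-brick gv0 server2-old:/export/brick1 \
        server2-new:/export/brick1 commit force

    # Trigger self-heal and monitor progress as data is copied to the new brick
    gluster volume heal gv0 full
    gluster volume heal gv0 info

Note that glusterd must already be installed and running on the replacement instance, and the brick directory must exist there before the replace-brick command is issued.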