韦远科
2013-Nov-28 08:39 UTC
[Gluster-users] how to recover an accidentally deleted brick directory?
hi all,

I accidentally removed the brick directory of a volume on one node; the replica count for this volume is 2.

Now the situation is: there is no corresponding glusterfsd process on that node, and 'gluster volume status' shows that the brick is offline, like this:

Brick 192.168.64.11:/opt/gluster_data/eccp_glance    N/A    Y    2513
Brick 192.168.64.12:/opt/gluster_data/eccp_glance    49161  Y    2542
Brick 192.168.64.17:/opt/gluster_data/eccp_glance    49164  Y    2537
Brick 192.168.64.18:/opt/gluster_data/eccp_glance    49154  Y    4978
Brick 192.168.64.29:/opt/gluster_data/eccp_glance    N/A    N    N/A
Brick 192.168.64.30:/opt/gluster_data/eccp_glance    49154  Y    4072
Brick 192.168.64.25:/opt/gluster_data/eccp_glance    49155  Y    11975
Brick 192.168.64.26:/opt/gluster_data/eccp_glance    49155  Y    17947
Brick 192.168.64.13:/opt/gluster_data/eccp_glance    49154  Y    26045
Brick 192.168.64.14:/opt/gluster_data/eccp_glance    49154  Y    22143

So, are there ways to bring this brick back to normal?

thanks!

-----------------------------------------------------------------
韦远科
shwetha
2013-Nov-28 08:50 UTC
[Gluster-users] how to recover an accidentally deleted brick directory?
1) Create the brick directory "/opt/gluster_data/eccp_glance" on the nodes where you deleted it.
2) From any of the storage nodes, execute:
   1. gluster volume start <volume_name> force  -- to restart the brick process
   2. gluster volume status <volume_name>  -- to check that all brick processes are started
   3. gluster volume heal <volume_name> full  -- to trigger self-heal onto the recreated bricks

-Shwetha
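The steps above can be sketched as a shell sequence. This is only an illustration, not a definitive procedure: the volume name "eccp_glance" is an assumption (the thread only shows the brick path), and the trusted.glusterfs.volume-id xattr step is a commonly needed extra fix on some GlusterFS versions, where a freshly recreated brick directory is refused until the xattr matches the volume's UUID.

```shell
#!/bin/sh
# Sketch of recovering an accidentally deleted brick directory.
# ASSUMPTION: the volume is named "eccp_glance" (inferred from the
# brick path in the thread; substitute your actual volume name).
VOLUME=eccp_glance
BRICK=/opt/gluster_data/eccp_glance

# 1) On the node that lost the brick, recreate the directory.
mkdir -p "$BRICK"

# Some GlusterFS versions refuse to start a brick whose
# trusted.glusterfs.volume-id xattr is missing. If "start force"
# fails, copy the Volume ID from 'gluster volume info' (strip the
# dashes and prefix 0x), e.g.:
#   setfattr -n trusted.glusterfs.volume-id -v 0x<uuid-without-dashes> "$BRICK"

# 2) From any storage node: restart the brick process,
#    verify it is online, then trigger a full self-heal
#    so the surviving replica repopulates the empty brick.
gluster volume start "$VOLUME" force
gluster volume status "$VOLUME"
gluster volume heal "$VOLUME" full

# Optionally watch healing progress:
gluster volume heal "$VOLUME" info
```

Since the volume has replica 2, the full self-heal copies every file back from the surviving replica; on a large brick this can take a while and generate significant traffic.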