Merlin Morgenstern
2015-Nov-19 20:00 UTC
[Gluster-users] restoring brick on new server fails on glusterfs
I am trying to attach a brick from another server to a local gluster development server. Therefore I did a dd from a snapshot on production and a dd onto the LVM volume on development, and then deleted the .glusterfs folder in the root of the brick.

Unfortunately, creating a new volume with that brick nevertheless failed with the message that the brick is already part of a volume. (How does gluster know that?!)

I then issued the following:

sudo setfattr -x trusted.gfid /bricks/staging/brick1/
sudo setfattr -x trusted.glusterfs.volume-id /bricks/staging/brick1/
sudo /etc/init.d/glusterfs-server restart

Magically, gluster still seems to know that this brick is from another server, as it knows the peered gluster nodes, which are apparently different on the dev server:

sudo gluster volume create staging node1:/bricks/staging/brick1

volume create: staging: failed: Staging failed on gs3. Error: Host node1 is not in 'Peer in Cluster' state
Staging failed on gs2. Error: Host node1 is not in 'Peer in Cluster' state

Is there a way to achieve restoring that brick on a new server? Thank you for any help on this.
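For reference, the gluster xattrs on the brick can be inspected before removing them, roughly like this (getfattr is from the attr package, the brick path is the one from my setup above):

# list all trusted.* xattrs on the brick root
sudo getfattr -d -m . -e hex /bricks/staging/brick1/
# trusted.gfid and trusted.glusterfs.volume-id show up here if the
# brick was previously used by a volume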
Atin Mukherjee
2015-Nov-20 03:56 UTC
[Gluster-users] restoring brick on new server fails on glusterfs
On 11/20/2015 01:30 AM, Merlin Morgenstern wrote:
> I am trying to attach a brick from another server to a local gluster
> development server. Therefore I did a dd from a snapshot on production
> and a dd onto the LVM volume on development, and then deleted the
> .glusterfs folder in the root of the brick.
>
> Unfortunately, creating a new volume with that brick nevertheless failed
> with the message that the brick is already part of a volume. (How does
> gluster know that?!)
This is because the brick has the extended attribute 'volume-id' set, which indicates that the brick has already been used by another volume. You can either remove this xattr or use the 'force' option, which bypasses this validation.
>
> I then issued the following:
>
> sudo setfattr -x trusted.gfid /bricks/staging/brick1/
> sudo setfattr -x trusted.glusterfs.volume-id /bricks/staging/brick1/
> sudo /etc/init.d/glusterfs-server restart
>
> Magically, gluster still seems to know that this brick is from another
> server, as it knows the peered gluster nodes, which are apparently
> different on the dev server:
>
> sudo gluster volume create staging node1:/bricks/staging/brick1
>
> volume create: staging: failed: Staging failed on gs3. Error: Host node1
> is not in 'Peer in Cluster' state
>
> Staging failed on gs2. Error: Host node1 is not in 'Peer in Cluster' state
>
> Is there a way to achieve restoring that brick on a new server?
> Thank you for any help on this.
The problem here is that the host of the brick you want to restore is not part of the cluster. You would first need to probe that server so it is added to the trusted storage pool.
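Roughly something along these lines should work (hostnames are just the ones from your output, run the probe from a node that is already in the pool and adjust to your setup):

# on a node that is already in the trusted storage pool (e.g. gs2)
sudo gluster peer probe node1
sudo gluster peer status        # node1 should now show 'Peer in Cluster'

# then create and start the volume; 'force' bypasses the
# "already part of a volume" check on the brick
sudo gluster volume create staging node1:/bricks/staging/brick1 force
sudo gluster volume start staging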