Thanks for your quick reply.

This is the output of my remaining healthy peer:

getfattr -d -m. -e hex /brick/raidvolb/data/
getfattr: Removing leading '/' from absolute path names
# file: brick/raidvolb/data/
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.dht=0x000000010000000000000000ffffffff
trusted.glusterfs.volume-id=0x8786357b9d114c01a34baee949c116e9

On Mon, Mar 30, 2015 at 12:38 PM, Pranith Kumar Karampuri
<pkarampu at redhat.com> wrote:
>
> On 03/30/2015 03:59 PM, Ml Ml wrote:
>>
>> Anyone?
>>
>> Is this a dumb question or just a hard one?
>> I already tried:
>>
>> http://www.gluster.org/community/documentation/index.php/Gluster_3.4:_Brick_Restoration_-_Replace_Crashed_Server
>>
>> but I got stuck with the setfattr command.
>>
>> So I was wondering if this is the way to go?
>
> could you paste output of getfattr -d -m. -e hex
> <any-of-the-other-bricks-in-replication>.
>
> Pranith
>>
>> On Thu, Mar 26, 2015 at 10:31 PM, Ml Ml <mliebherr99 at googlemail.com> wrote:
>>>
>>> Hello List,
>>>
>>> I have a 3-peer replica Gluster. On one of my peers the hard drive of
>>> a brick failed.
>>> I replaced it and formatted the brick device with ext4.
>>>
>>> How do I get it back into my gluster? Is there an official way to
>>> re-integrate it?
>>>
>>> Thanks,
>>> Mario
>>
>> _______________________________________________
>> Gluster-users mailing list
>> Gluster-users at gluster.org
>> http://www.gluster.org/mailman/listinfo/gluster-users
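[For readers following this thread: the xattr values above can be decoded with a short script. A minimal Python sketch — treating trusted.glusterfs.volume-id as the volume's UUID is standard; the exact field layout of trusted.glusterfs.dht (last two 32-bit words as the hash-range start/end) is an assumption here:]

```python
import uuid

# xattr values from the healthy brick in this thread
vol_id = "8786357b9d114c01a34baee949c116e9"
dht = "000000010000000000000000ffffffff"

# trusted.glusterfs.volume-id is the volume's UUID
print(uuid.UUID(vol_id))  # 8786357b-9d11-4c01-a34b-aee949c116e9

# Assumed layout: the last 8 bytes of trusted.glusterfs.dht are the
# brick's hash range; here 0x00000000-0xffffffff, i.e. the whole range,
# as expected for a pure-replica volume with a single distribute subvolume.
start, end = int(dht[16:24], 16), int(dht[24:32], 16)
print(f"{start:#010x}-{end:#010x}")  # 0x00000000-0xffffffff
```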
Pranith Kumar Karampuri
2015-Mar-30 11:56 UTC
[Gluster-users] replace brick with gluster 3.6
On 03/30/2015 05:05 PM, Ml Ml wrote:
> Thanks for your quick reply.
>
> This is the output of my remaining healthy peer:
>
> getfattr -d -m. -e hex /brick/raidvolb/data/
> getfattr: Removing leading '/' from absolute path names
> # file: brick/raidvolb/data/
> trusted.gfid=0x00000000000000000000000000000001
> trusted.glusterfs.dht=0x000000010000000000000000ffffffff
> trusted.glusterfs.volume-id=0x8786357b9d114c01a34baee949c116e9

On the new brick execute:
setfattr -n trusted.glusterfs.volume-id -v 0x8786357b9d114c01a34baee949c116e9 <new-brick-path>

Before bringing glusterd and the new brick up, execute the following from the mount point:
'mkdir <non-existent-dir-name>; rmdir <same-dir-you-created-before>; setfattr -n trusted.abc -v def <gluster-mount-path>; setfattr -x trusted.abc <gluster-mount-path>'

Could you please post the output at this point for all the bricks, i.e. both the healthy bricks and the old bricks? I would like to check that there are no mistakes in the operations you performed above. Once I confirm things are fine, start glusterd on the machine with the new brick, and self-heal should start automatically.

These steps are from the latest document: http://review.gluster.com/8503
Let me know if you have any doubts. I will try to get this in as soon as possible.

Pranith

> On Mon, Mar 30, 2015 at 12:38 PM, Pranith Kumar Karampuri
> <pkarampu at redhat.com> wrote:
>> On 03/30/2015 03:59 PM, Ml Ml wrote:
>>> Anyone?
>>>
>>> Is this a dumb question or just a hard one?
>>> I already tried:
>>>
>>> http://www.gluster.org/community/documentation/index.php/Gluster_3.4:_Brick_Restoration_-_Replace_Crashed_Server
>>>
>>> but I got stuck with the setfattr command.
>>>
>>> So I was wondering if this is the way to go?
>> could you paste output of getfattr -d -m. -e hex
>> <any-of-the-other-bricks-in-replication>.
>>
>> Pranith
>>> On Thu, Mar 26, 2015 at 10:31 PM, Ml Ml <mliebherr99 at googlemail.com> wrote:
>>>> Hello List,
>>>>
>>>> I have a 3-peer replica Gluster. On one of my peers the hard drive of
>>>> a brick failed.
>>>> I replaced it and formatted the brick device with ext4.
>>>>
>>>> How do I get it back into my gluster? Is there an official way to
>>>> re-integrate it?
>>>>
>>>> Thanks,
>>>> Mario
>>> _______________________________________________
>>> Gluster-users mailing list
>>> Gluster-users at gluster.org
>>> http://www.gluster.org/mailman/listinfo/gluster-users
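[For readers following this thread: the steps in the reply above can be collected into one script. This is a hedged sketch, not an official procedure: the brick and mount paths and the DRY_RUN/run helper are illustrative assumptions; it defaults to printing the commands so you can review them before running anything on a real node.]

```shell
# Sketch of the replace-brick steps from the reply above (Gluster 3.6 era).
# DRY_RUN=1 (the default) only prints each command; set DRY_RUN=0 to run.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = 1 ]; then echo "$@"; else "$@"; fi; }

VOLUME_ID=0x8786357b9d114c01a34baee949c116e9   # from the healthy peer's getfattr output
NEW_BRICK=/brick/raidvolb/data                  # path of the replaced brick (assumed)
MOUNT=/mnt/gluster                              # hypothetical client mount point

# 1. Stamp the new brick with the volume id before glusterd sees it.
run setfattr -n trusted.glusterfs.volume-id -v "$VOLUME_ID" "$NEW_BRICK"

# 2. From a client mount, perform a dummy entry operation and a dummy
#    metadata operation so the healthy bricks record pending heals.
run mkdir "$MOUNT/heal-trigger"
run rmdir "$MOUNT/heal-trigger"
run setfattr -n trusted.abc -v def "$MOUNT"
run setfattr -x trusted.abc "$MOUNT"

# 3. Start glusterd on the repaired node; self-heal should then kick in.
run service glusterd start
```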