On 04/26/2017 02:06 PM, J.R. W wrote:
> Hi everyone,
>
> Let's see. One of the servers in my trusted storage pool had a drive go
> down. It held all the volume configurations; /var/lib/glusterd/vols/ was
> wiped.
>
>
> Now this drive is completely separate from the glusterfs brick. The drive
> that was corrupted just held the root filesystem. The glusterfs brick is
> actually on direct-attached storage on this server. What is the easiest way
> to copy all the configurations from another member in the trusted pool?
>
> I thought I could just remove it from the pool and re-probe it, and the
> configuration would be regenerated automatically. But since it's part of the
> volume, no such luck.
>
> Jordan
>
Hi Jordan,
The following steps should restore the configuration on the bad node (a
command-level sketch follows the steps):
1. On the bad node (I'm calling this 'N1' for the rest of the steps)
remove everything inside /var/lib/glusterd/* (in case there are any
contents) and then restart glusterd.
2. From one of the good nodes (calling this 'N2'), execute the peer
status command and copy the uuid mentioned for N1.
3. On N1, replace the uuid in /var/lib/glusterd/glusterd.info with the
uuid retrieved from step 2.
4. Next, copy /var/lib/glusterd/peers/* from N2 to N1.
5. Now on N1 (note that the peer files copied from N2 include an entry for
N1 itself but none for N2; the following turns that entry into one for N2):
   - Find the file whose name is the same as the uuid in
     /var/lib/glusterd/glusterd.info
   - Rename that file to the uuid of N2 (you can find the uuid of N2
     in the /var/lib/glusterd/glusterd.info file on N2)
   - Open this renamed file and replace the uuid and hostname
     fields with the uuid and hostname of N2, respectively
   - Restart glusterd on N1
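For reference, here is a rough command-level sketch of the steps above. It
assumes a systemd-based distro (adjust the service commands otherwise), uses
N1/N2 as placeholder hostnames, and <N1-UUID>/<N2-UUID> as placeholders for
the uuids gathered in steps 2 and 5:

  # Step 1 - on N1: clear the stale config and restart glusterd
  systemctl stop glusterd
  rm -rf /var/lib/glusterd/*
  systemctl start glusterd    # regenerates glusterd.info with a fresh uuid

  # Step 2 - on N2: note the uuid listed against N1
  gluster peer status

  # Step 3 - on N1: restore N1's original uuid in glusterd.info
  sed -i 's/^UUID=.*/UUID=<N1-UUID>/' /var/lib/glusterd/glusterd.info

  # Step 4 - on N1: pull the peer files over from N2
  scp root@N2:/var/lib/glusterd/peers/* /var/lib/glusterd/peers/

  # Step 5 - on N1: the copied directory contains a file named <N1-UUID>
  # (N2's view of N1); rename it to <N2-UUID>, edit it so the uuid= and
  # hostname1= lines refer to N2, then restart glusterd
  mv /var/lib/glusterd/peers/<N1-UUID> /var/lib/glusterd/peers/<N2-UUID>
  systemctl restart glusterd

The peer file is a plain key=value file; after editing, the renamed file
should look something like:

  uuid=<N2-UUID>
  state=3
  hostname1=N2

(keep whatever state value was already present in the file).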
Hope this helps.
~ Samikshan