Niklaus Hofer
2017-Jun-01 06:58 UTC
[Gluster-users] Restore a node in a replicating Gluster setup after data loss
Hi

We have a Replica 2 + Arbiter Gluster setup with 3 nodes Server1, Server2 and Server3, where Server3 is the arbiter node. There are several Gluster volumes on top of that setup. They all look a bit like this:

gluster volume info gv-tier1-vm-01

[...]
Number of Bricks: 1 x (2 + 1) = 3
[...]
Bricks:
Brick1: Server1:/var/data/lv-vm-01
Brick2: Server2:/var/data/lv-vm-01
Brick3: Server3:/var/data/lv-vm-01/brick (arbiter)
[...]
cluster.data-self-heal-algorithm: full
[...]

We took down Server2 because we needed to do maintenance on its storage. During the maintenance work, we ended up having to completely rebuild the storage on Server2. This means that "/var/data/lv-vm-01" on Server2 is now empty. However, all the Gluster metadata in "/var/lib/glusterd/" is still intact. Gluster has not been started on Server2.

Here is what our sample Gluster volume currently looks like on the still-active nodes:

gluster volume status gv-tier1-vm-01

Status of volume: gv-tier1-vm-01
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick Server1:/var/data/lv-vm-01            49204     0          Y       22775
Brick Server3:/var/data/lv-vm-01/brick      49161     0          Y       15334
Self-heal Daemon on localhost               N/A       N/A        Y       19233
Self-heal Daemon on Server3                 N/A       N/A        Y       20839

Now we would like to rebuild the data on Server2 from the still-intact data on Server1. That is to say, we hope to start Gluster on Server2 in such a way that it syncs the data back from Server1. If at all possible, the Gluster cluster should stay up during this process, and access to the Gluster volumes should not be interrupted.

What is the correct / recommended way of doing this?

Greetings
Niklaus Hofer
--
stepping stone GmbH
Neufeldstrasse 9
CH-3012 Bern

Telefon: +41 31 332 53 63
www.stepping-stone.ch
niklaus.hofer at stepping-stone.ch
Karthik Subrahmanya
2017-Jun-05 05:33 UTC
[Gluster-users] Restore a node in a replicating Gluster setup after data loss
Hey Niklaus,

Sorry for the delay. The *reset-brick* command should do the trick for you. You can have a look at [1] for more details.

[1] https://gluster.readthedocs.io/en/latest/release-notes/3.9.0/

HTH,
Karthik

On Thu, Jun 1, 2017 at 12:28 PM, Niklaus Hofer <niklaus.hofer at stepping-stone.ch> wrote:
> [...]
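For reference, based on the reset-brick syntax described in the 3.9.0 release notes, the workflow for this scenario would look roughly like the sketch below. The volume name and brick path are taken from the original post; treat it as an untested outline rather than an exact procedure. The script only echoes the commands (a dry run) -- drop the echo to run them for real on a node of the trusted pool.

```shell
#!/bin/sh
# Sketch of the reset-brick workflow for re-adding an emptied brick.
# Values come from the thread; adjust for your own volume.
VOLUME=gv-tier1-vm-01
BRICK=Server2:/var/data/lv-vm-01

# 1. Take the dead brick out of service; clients keep working against
#    the surviving replica (Server1) and the arbiter (Server3):
START_CMD="gluster volume reset-brick $VOLUME $BRICK start"
echo "$START_CMD"

# 2. With the rebuilt (empty) storage mounted at the same path, bring
#    the brick back. 'commit force' is needed because the source and
#    destination brick paths are identical but the data is gone:
COMMIT_CMD="gluster volume reset-brick $VOLUME $BRICK $BRICK commit force"
echo "$COMMIT_CMD"

# 3. Self-heal should then copy the data back from the good replica;
#    monitor progress with:
HEAL_CMD="gluster volume heal $VOLUME info"
echo "$HEAL_CMD"
```

Because the brick is re-added in place, the volume stays online throughout and the self-heal daemon repopulates Server2 in the background, which matches the "no interruption" requirement. With cluster.data-self-heal-algorithm set to full, expect the entire brick contents to be re-transferred rather than a diff-based sync.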