William Kwan
2014-Jan-19 06:43 UTC
[Gluster-users] repair a brick which had filesystem wiped
Hi,

Gluster 3.4.2 on CentOS 6.5

I have a volume with 2 replicas on two systems. The filesystem where a brick was created on system 2 was corrupted. The hardware issue was resolved, and the filesystem was recreated and mounted at the same mount point. But I can't get the volume replicated. Am I missing some steps?

# gluster volume status kvm1
Status of volume: kvm1
Gluster process                                     Port   Online  Pid
------------------------------------------------------------------------------
Brick uathost1:/data/glusterfs/kvm1/brick1/brick    49152  Y       22548
Brick uathost2:/data/glusterfs/kvm1/brick1/brick    N/A    N       N/A    <--------
NFS Server on localhost                             2049   Y       4430
Self-heal Daemon on localhost                       N/A    Y       4438
NFS Server on 10.100.253.28                         2049   Y       22869
Self-heal Daemon on 10.100.253.28                   N/A    Y       22876

There are no active volume tasks

uathost1# gluster peer status
Number of Peers: 1

Hostname: uathost2
Uuid: c94520d2-99f1-471a-a3e6-04e64560f48d
State: Peer in Cluster (Connected)

uathost1# gluster volume info

Volume Name: kvm1
Type: Replicate
Volume ID: 6a169bda-cf03-4c09-a8d9-50704f08dc1c
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: uathost1:/data/glusterfs/kvm1/brick1/brick
Brick2: uathost2:/data/glusterfs/kvm1/brick1/brick
Options Reconfigured:
storage.owner-uid: 36
storage.owner-gid: 36

On uathost2, I did:

# gluster volume sync uathost1 kvm1
Sync volume may make data inaccessible while the sync is in progress. Do you want to continue? (y/n) y
volume sync: success

When I check the directory uathost2:/data/glusterfs/kvm1/brick1/brick, it is still blank.
Vijay Bellur
2014-Jan-20 09:17 UTC
[Gluster-users] repair a brick which had filesystem wiped
On 01/19/2014 12:13 PM, William Kwan wrote:
> Hi,
>
> Gluster 3.4.2 on CentOS 6.5
>
> I have a volume with 2 replicas on two systems. The filesystem where a
> brick was created on system 2 was corrupted. The hardware issue was
> resolved, and the filesystem was recreated and mounted at the same
> mount point. But I can't get the volume replicated. Am I missing some
> steps?
>
> # gluster volume status kvm1
> Status of volume: kvm1
> Gluster process                                     Port   Online  Pid
> ------------------------------------------------------------------------------
> Brick uathost1:/data/glusterfs/kvm1/brick1/brick    49152  Y       22548
> Brick uathost2:/data/glusterfs/kvm1/brick1/brick    N/A    N       N/A    <--------

This means that the glusterfsd process for this brick is offline. Can you please check the log files of this brick to determine why it failed?

It could be related to a missing extended attribute, "trusted.glusterfs.volume-id", on the brick directory. If that is the case, you can copy the same extended attribute from the other brick directory. Once both bricks are online, self-heal will kick in to rebuild data on the replaced brick.

-Vijay
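A minimal sketch of that repair, assuming the brick paths and Volume ID shown earlier in this thread (the hex value below is simply the Volume ID 6a169bda-cf03-4c09-a8d9-50704f08dc1c with the dashes removed; read the real value from the healthy brick with getfattr rather than trusting this transcription):

On uathost1 (the healthy brick), read the attribute:

# getfattr -n trusted.glusterfs.volume-id -e hex /data/glusterfs/kvm1/brick1/brick
# file: data/glusterfs/kvm1/brick1/brick
trusted.glusterfs.volume-id=0x6a169bdacf034c09a8d950704f08dc1c

On uathost2 (the recreated brick), set the same value on the empty brick directory, then force the brick process back online and trigger a full self-heal:

# setfattr -n trusted.glusterfs.volume-id -v 0x6a169bdacf034c09a8d950704f08dc1c /data/glusterfs/kvm1/brick1/brick
# gluster volume start kvm1 force
# gluster volume heal kvm1 full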