----- Original Message -----
> From: "songxin" <songxin_1980 at 126.com>
> To: "gluster-user" <gluster-users at gluster.org>
> Sent: Tuesday, March 1, 2016 7:19:23 PM
> Subject: [Gluster-users] about tail command
>
> Hi,
>
> Precondition:
> A node: 128.224.95.140
> B node: 128.224.162.255
>
> Brick on A node: /data/brick/gv0
> Brick on B node: /data/brick/gv0
>
> Reproduce steps:
> 1. gluster peer probe 128.224.162.255 (on A node)
> 2. gluster volume create gv0 replica 2 128.224.95.140:/data/brick/gv0 128.224.162.255:/data/brick/gv0 force (on A node)
> 3. gluster volume start gv0 (on A node)
> 4. mount -t glusterfs 128.224.95.140:/gv0 gluster (on A node)
> 5. Create some files (a, b, c) in dir gluster (on A node)
> 6. Shut down the B node
> 7. Change the files (a, b, c) in dir gluster (on A node)
> 8. Reboot B node
> 9. Start glusterd on B node, but glusterfsd is offline (on B node)
> 10. gluster volume remove-brick gv0 replica 1 128.224.162.255:/data/brick/gv0 force (on A node)
> 11. gluster volume add-brick gv0 replica 2 128.224.162.255:/data/brick/gv0 force (on A node)
>
> Now the files are not the same between the two bricks.
>
> 12. "gluster volume heal gv0 info" shows an entry count of 0 (on A node)
>
> What should I do now if I want to sync files (a, b, c) on the two bricks?

Currently, once you add a brick to a cluster, files won't sync automatically.
A patch has been sent to handle this requirement; auto-heal will be available soon.

You could kill the newly added brick and perform the following operations from the mount
for the sync to start:
1) create a directory <dirname>
2) setfattr -n "user.dirname" -v "value" <dirname>
3) delete the directory <dirname>

Once these steps are done, start the killed brick. The self-heal daemon will heal the files.

But for the case you have mentioned, why are you removing the brick and using add-brick again?
Is it because you don't want to change the brick path?

You could use the "replace-brick" command:
gluster v replace-brick <volname> <hostname:old-brick-path> <hostname:new-brick-path>
Note that the source and destination must be different for this command to work.

> I know that "heal full" can work, but I think that command takes too long.
>
> So I ran "tail -n 1 file" on every file on the A node, but some files were
> synced and some were not.
>
> My questions are below:
> 1. Why can't tail sync all the files?

Did you run the tail command on the mount point or from the backend (bricks)?
If you run it from the bricks, the sync won't happen. Was client-side healing on?
To check whether it was on or off, run `gluster v get volname all | grep self-heal`;
cluster.metadata-self-heal, cluster.data-self-heal and cluster.entry-self-heal should all be on.

> 2. Can the command "tail -n 1 filename" trigger self-heal, just like
> "ls -l filename" does?
>
> Thanks,
> Xin
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users

--
Thanks,
Anuradha.
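The steps above can be sketched as a shell sequence. This is a hedged sketch, not a verified procedure: the mount path /mnt/gluster, the dummy directory name, and the xattr name are illustrative, and the exact replace-brick syntax (the trailing "commit force") may vary by GlusterFS version.

```shell
# Assumed layout: volume "gv0", mounted at /mnt/gluster, newly added brick
# 128.224.162.255:/data/brick/gv0. All names here are illustrative.

# 1. Find the PID of the newly added brick's glusterfsd in the status output,
#    then kill only that brick process on its node.
gluster volume status gv0

# 2. From the mount point, create a dummy directory, set a user xattr on it,
#    and delete it, so a pending entry change is recorded against the
#    down brick and the self-heal daemon picks the files up.
cd /mnt/gluster
mkdir healtrigger
setfattr -n user.healtrigger -v trigger healtrigger
rmdir healtrigger

# 3. Restart the killed brick; self-heal-daemon then syncs the files.
gluster volume start gv0 force

# Alternative suggested in the reply: replace the brick in one step
# (source and destination paths must differ; "commit force" is the
# 3.7-era CLI form).
gluster volume replace-brick gv0 \
    128.224.162.255:/data/brick/gv0 128.224.162.255:/data/brick/gv0-new \
    commit force

# To confirm client-side healing is enabled, as the reply recommends:
gluster volume get gv0 all | grep self-heal
```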
Thank you for your reply. I have two more questions:

1. Is the command "gluster v replace-brick" asynchronous or synchronous? Is the replace complete when the command returns?
2. I ran "tail -n 0" on the mount point. Does that trigger the heal?

Thanks,
Xin

At 2016-03-02 15:22:35, "Anuradha Talur" <atalur at redhat.com> wrote:
> [snip]
songxin
2016-Mar-03 03:28 UTC
[Gluster-users] question about command "getfattr" in replicate volume
Hi,

Precondition:
glusterfs version is 3.7.6
A node: 128.224.95.140
B node: 128.224.162.255

Brick on A node: /data/brick/gv0
Brick on B node: /data/brick/gv0

Reproduce steps:
1. gluster peer probe 128.224.162.255 (on A node)
2. gluster volume create gv0 replica 2 128.224.95.140:/data/brick/gv0 128.224.162.255:/data/brick/gv0 force (on A node)
3. gluster volume start gv0 (on A node)
4. mount -t glusterfs 128.224.95.140:/gv0 gluster (on A node)
5. Create file 11 in dir gluster (on A node)
6. getfattr -m. -d -e hex /data/brick/gv0/11 (on A node)
# file: data/brick/gv0/11
trusted.afr.dirty=0x000000000000000000000000
trusted.bit-rot.version=0x00000000000000025696d78700029573
trusted.gfid=0xe696148665c343f7ace19184f0b5e7fa
7. getfattr -m. -d -e hex /data/brick/gv0/11 (on B node)
# file: data/brick/gv0/11
trusted.afr.dirty=0x000000000000000000000000
trusted.bit-rot.version=0x000000000000000256653d270006d953
trusted.gfid=0xe696148665c343f7ace19184f0b5e7fa

My question is the following.
Why does file 11 have only the trusted.afr.dirty extended attribute as its change log in a replicate volume?
I expected the "getfattr" output to look like the example below.

Example:
[root at store3 ~]# getfattr -d -e hex -m. brick-a/file.txt
# file: brick-a/file.txt
security.selinux=0x726f6f743a6f626a6563745f723a66696c655f743a733000
trusted.afr.vol-client-2=0x000000000000000000000000
trusted.afr.vol-client-3=0x000000000200000000000000
trusted.gfid=0x307a5c9efddd4e7c96e94fd4bcdcbd1b

Each file in a brick keeps a changelog for itself and for the file on the other brick of the replica pair; e.g. on brick-a:
trusted.afr.vol-client-0=0x000000000000000000000000  --> changelog for itself (brick-a)
trusted.afr.vol-client-1=0x000000000000000000000000  --> changelog for brick-b as seen by brick-a
Likewise, all files in brick-b will have:
trusted.afr.vol-client-0=0x000000000000000000000000  --> changelog for brick-a as seen by brick-b
trusted.afr.vol-client-1=0x000000000000000000000000  --> changelog for itself (brick-b)

The above info is taken from https://github.com/gluster/glusterfs/blob/master/doc/debugging/split-brain.md.
Thanks,
Xin
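As an aside on reading the values above: according to the split-brain.md document linked in the message, a trusted.afr.* value is a 12-byte changelog made of three big-endian 32-bit counters (pending data, metadata, and entry operations). A minimal sketch for decoding such a hex value; the function name and the sample inputs are illustrative, not part of any Gluster tool:

```python
# Decode a trusted.afr.* extended-attribute value, as printed by
# `getfattr -e hex`, into its three pending-operation counters.
# Layout (per glusterfs doc/debugging/split-brain.md): 4 bytes of data
# changelog, 4 bytes of metadata changelog, 4 bytes of entry changelog.

def decode_afr(xattr_hex: str):
    """Return (data, metadata, entry) pending counts from a hex xattr value."""
    h = xattr_hex[2:] if xattr_hex.startswith("0x") else xattr_hex
    if len(h) != 24:
        raise ValueError("expected a 12-byte (24 hex digit) changelog value")
    # Split into three 8-hex-digit fields and parse each as an integer.
    return tuple(int(h[i:i + 8], 16) for i in (0, 8, 16))

if __name__ == "__main__":
    # All-zero value, like trusted.afr.dirty above: nothing pending.
    print(decode_afr("0x000000000000000000000000"))  # (0, 0, 0)
    # Illustrative non-zero value: 2 pending data ops, 1 pending metadata op.
    print(decode_afr("0x000000020000000100000000"))  # (2, 1, 0)
```

A non-zero counter in the attribute a brick holds for its peer means that peer still needs healing for the corresponding category.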