----- Original Message -----> From: "songxin" <songxin_1980 at 126.com> > To: "Anuradha Talur" <atalur at redhat.com> > Cc: "gluster-user" <gluster-users at gluster.org> > Sent: Wednesday, March 2, 2016 4:09:01 PM > Subject: Re:Re: [Gluster-users] about tail command > > > > Thank you for your reply.I have two more questions as below > > > 1. the command "gluster v replace-brick " is async or sync? The replace is > complete when the command quit ?It is a sync command, replacing the brick finishes as the command returns. In one of the earlier mails I gave incomplete command for replace brick, sorry about that. The only replace-brick operation allowed from glusterfs 3.7.9 onwards is 'gluster v replace-brick <volname> <hostname:src_brick> <hostname:dst_brick> commit force'.> 2.I run "tail -n 0" on mount point.Does it trigger the heal? > > > Thanks, > Xin > > > > > > > > At 2016-03-02 15:22:35, "Anuradha Talur" <atalur at redhat.com> wrote: > > > > > >----- Original Message ----- > >> From: "songxin" <songxin_1980 at 126.com> > >> To: "gluster-user" <gluster-users at gluster.org> > >> Sent: Tuesday, March 1, 2016 7:19:23 PM > >> Subject: [Gluster-users] about tail command > >> > >> Hi, > >> > >> recondition: > >> A node:128.224.95.140 > >> B node:128.224.162.255 > >> > >> brick on A node:/data/brick/gv0 > >> brick on B node:/data/brick/gv0 > >> > >> > >> reproduce steps: > >> 1.gluster peer probe 128.224.162.255 (on A node) > >> 2. gluster volume create gv0 replica 2 128.224.95.140:/data/brick/gv0 > >> 128.224.162.255:/data/brick/gv0 force (on A node) > >> 3.gluster volume start gv0 (on A node) > >> 4. mount -t glusterfs 128.224.95.140:/gv0 gluster (on A node) > >> 5.create some files (a,b,c) in dir gluster (on A node) > >> 6.shutdown the B node > >> 7.change the files (a,b,c) in dir gluster (on A node) > >> 8.reboot B node > >> 9.start glusterd on B node but glusterfsd is offline (on B node) > >> 10. gluster volume remove-brick gv0 replica 1 > >> 128.224.162.255:/data/brick/gv0 > >> force (on A node) > >> 11. gluster volume add-brick gv0 replica 2 128.224.162.255:/data/brick/gv0 > >> force (on A node) > >> > >> Now the files are not same between two brick > >> > >> 12." gluster volume heal gv0 info " show entry num is 0 (on A node) > >> > >> Now What I should do if I want to sync file(a,b,c) on two brick? > >> > >Currently, once you add a brick to a cluster, files won't sync > >automatically. > >Patch has been sent to handle this requirement. Auto-heal will be available > >soon. > > > >You could kill the newly added brick and perform the following operations > >from mount > >for the sync to start : > >1) create a directory <dirname> > >2) setfattr -n "user.dirname" -v "value" <dirname> > >3) delete the directory <dirname> > > > >Once these steps are done, start the killed brick. self-heal-daemon will > >heal the files. > > > >But, for the case you have mentioned, why are you removing brick and using > >add-brick again? > >Is it because you don't want to change the brick-path? > > > >You could use "replace-brick" command. > >gluster v replace-brick <volname> <hostname:old-brick-path> > ><hostname:new-brick-path> > >Note that source and destination should be different for this command to > >work. > > > >> I know the "heal full" can work , but I think the command take too long > >> time. > >> > >> So I run "tail -n 1 file" to all file on A node, but some files are sync > >> but > >> some files are not. > >> > >> My question is below: > >> 1.Why the tail can't sync all files? 
> >Did you run the tail command on mount point or from the backend (bricks)? > >If you run from bricks, sync won't happen. Was client-side healing on? > >To check if they were on or off, run `gluster v get volname all | grep > >self-heal`, cluster.metadata-self-heal, cluster.data-self-heal, > >cluster.entry-self-heal should be on. > > > >> 2.Can the command "tail -n 1 filename" trigger selfheal, just like "ls -l > >> filename"? > >> > >> Thanks, > >> Xin > >> > >> > >> > >> > >> > >> > >> > >> > >> > >> > >> > >> _______________________________________________ > >> Gluster-users mailing list > >> Gluster-users at gluster.org > >> http://www.gluster.org/mailman/listinfo/gluster-users > > > >-- > >Thanks, > >Anuradha. >-- Thanks, Anuradha.
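For reference, the workaround Anuradha describes above can be spelled out as a
command sequence. This is only a sketch under a few assumptions: the volume is
named gv0, the newly added brick is 128.224.162.255:/data/brick/gv0, the volume
is mounted at /mnt/gluster on node A (an invented mount path), and the
directory name "healtrigger" and the xattr value are arbitrary placeholders.

    # Check whether the client-side self-heal options are on (all should be "on")
    gluster volume get gv0 all | grep self-heal

    # Find and kill the newly added brick process on node B
    gluster volume status gv0        # note the PID of 128.224.162.255:/data/brick/gv0
    # on node B: kill <brick-pid>

    # From the mount point, create, mark and remove a scratch directory
    cd /mnt/gluster
    mkdir healtrigger
    setfattr -n "user.healtrigger" -v "value" healtrigger
    rmdir healtrigger

    # Restart the killed brick and let self-heal-daemon do the rest
    gluster volume start gv0 force
    gluster volume heal gv0 info     # watch the pending entries drain to 0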
----- Original Message -----> From: "Anuradha Talur" <atalur at redhat.com> > To: "songxin" <songxin_1980 at 126.com> > Cc: "gluster-user" <gluster-users at gluster.org> > Sent: Thursday, March 3, 2016 12:31:41 PM > Subject: Re: [Gluster-users] about tail command > > > > ----- Original Message ----- > > From: "songxin" <songxin_1980 at 126.com> > > To: "Anuradha Talur" <atalur at redhat.com> > > Cc: "gluster-user" <gluster-users at gluster.org> > > Sent: Wednesday, March 2, 2016 4:09:01 PM > > Subject: Re:Re: [Gluster-users] about tail command > > > > > > > > Thank you for your reply.I have two more questions as below > > > > > > 1. the command "gluster v replace-brick " is async or sync? The replace is > > complete when the command quit ? > It is a sync command, replacing the brick finishes as the command returns. > > In one of the earlier mails I gave incomplete command for replace brick, > sorry about that. > The only replace-brick operation allowed from glusterfs 3.7.9 onwards is > 'gluster v replace-brick <volname> <hostname:src_brick> <hostname:dst_brick> > commit force'.Sorry for spamming, but there is a typo here, I meant glusterfs 3.7.0 onwards, not 3.7.9.> > 2.I run "tail -n 0" on mount point.Does it trigger the heal? > > > > > > Thanks, > > Xin > > > > > > > > > > > > > > > > At 2016-03-02 15:22:35, "Anuradha Talur" <atalur at redhat.com> wrote: > > > > > > > > >----- Original Message ----- > > >> From: "songxin" <songxin_1980 at 126.com> > > >> To: "gluster-user" <gluster-users at gluster.org> > > >> Sent: Tuesday, March 1, 2016 7:19:23 PM > > >> Subject: [Gluster-users] about tail command > > >> > > >> Hi, > > >> > > >> recondition: > > >> A node:128.224.95.140 > > >> B node:128.224.162.255 > > >> > > >> brick on A node:/data/brick/gv0 > > >> brick on B node:/data/brick/gv0 > > >> > > >> > > >> reproduce steps: > > >> 1.gluster peer probe 128.224.162.255 (on A node) > > >> 2. gluster volume create gv0 replica 2 128.224.95.140:/data/brick/gv0 > > >> 128.224.162.255:/data/brick/gv0 force (on A node) > > >> 3.gluster volume start gv0 (on A node) > > >> 4. mount -t glusterfs 128.224.95.140:/gv0 gluster (on A node) > > >> 5.create some files (a,b,c) in dir gluster (on A node) > > >> 6.shutdown the B node > > >> 7.change the files (a,b,c) in dir gluster (on A node) > > >> 8.reboot B node > > >> 9.start glusterd on B node but glusterfsd is offline (on B node) > > >> 10. gluster volume remove-brick gv0 replica 1 > > >> 128.224.162.255:/data/brick/gv0 > > >> force (on A node) > > >> 11. gluster volume add-brick gv0 replica 2 > > >> 128.224.162.255:/data/brick/gv0 > > >> force (on A node) > > >> > > >> Now the files are not same between two brick > > >> > > >> 12." gluster volume heal gv0 info " show entry num is 0 (on A node) > > >> > > >> Now What I should do if I want to sync file(a,b,c) on two brick? > > >> > > >Currently, once you add a brick to a cluster, files won't sync > > >automatically. > > >Patch has been sent to handle this requirement. Auto-heal will be > > >available > > >soon. > > > > > >You could kill the newly added brick and perform the following operations > > >from mount > > >for the sync to start : > > >1) create a directory <dirname> > > >2) setfattr -n "user.dirname" -v "value" <dirname> > > >3) delete the directory <dirname> > > > > > >Once these steps are done, start the killed brick. self-heal-daemon will > > >heal the files. > > > > > >But, for the case you have mentioned, why are you removing brick and using > > >add-brick again? 
> > >Is it because you don't want to change the brick-path? > > > > > >You could use "replace-brick" command. > > >gluster v replace-brick <volname> <hostname:old-brick-path> > > ><hostname:new-brick-path> > > >Note that source and destination should be different for this command to > > >work. > > > > > >> I know the "heal full" can work , but I think the command take too long > > >> time. > > >> > > >> So I run "tail -n 1 file" to all file on A node, but some files are sync > > >> but > > >> some files are not. > > >> > > >> My question is below: > > >> 1.Why the tail can't sync all files? > > >Did you run the tail command on mount point or from the backend (bricks)? > > >If you run from bricks, sync won't happen. Was client-side healing on? > > >To check if they were on or off, run `gluster v get volname all | grep > > >self-heal`, cluster.metadata-self-heal, cluster.data-self-heal, > > >cluster.entry-self-heal should be on. > > > > > >> 2.Can the command "tail -n 1 filename" trigger selfheal, just like "ls > > >> -l > > >> filename"? > > >> > > >> Thanks, > > >> Xin > > >> > > >> > > >> > > >> > > >> > > >> > > >> > > >> > > >> > > >> > > >> > > >> _______________________________________________ > > >> Gluster-users mailing list > > >> Gluster-users at gluster.org > > >> http://www.gluster.org/mailman/listinfo/gluster-users > > > > > >-- > > >Thanks, > > >Anuradha. > > > > -- > Thanks, > Anuradha. > _______________________________________________ > Gluster-users mailing list > Gluster-users at gluster.org > http://www.gluster.org/mailman/listinfo/gluster-users >-- Thanks, Anuradha.
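Putting the corrected syntax together with the bricks used earlier in this
thread, an invocation would look roughly like the following. The destination
path /data/brick/gv0_new is an invented example here, since the source and
destination bricks must differ for replace-brick to work.

    # Replace the old brick on node B with a new brick at a different path,
    # then check heal info to see the new brick being populated
    gluster volume replace-brick gv0 \
        128.224.162.255:/data/brick/gv0 \
        128.224.162.255:/data/brick/gv0_new \
        commit force

    gluster volume heal gv0 info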
>> 1. the command "gluster v replace-brick " is async or sync? The > replace is >> complete when the command quit ? > It is a sync command, replacing the brick finishes as the command returns.Hmm, that has not been my experience with 3.7.6 and 3.7.8. Perhaps there is a question of semantics here or definition of what the command precisely does. What I found is that the command returns fairly quickly, in a few seconds. In that amount of time, the old brick is removed from the volume configuration and the new brick is added to the volume configuration. As soon as the replace-brick command is done, the "volume info" will show the new configuration with the old brick gone and the new brick included. So in the sense of volume configuration, it is complete. But the data is not moved or healed at this point; that is only starting. The heal process will then proceed separately after the "replace-brick" command. I certainly think of the overall brick replacement process as including the full replication of data to the new brick, even if the "replace-brick" command does not do that. I imagine other people might think the same way also. A new empty brick isn't protecting your replicated data, so it is an incomplete replacement. Older documentation certainly refers to "replace brick start". I couldn't find any 3.7 documentation that explained why that was gone, and why "commit force" was the only option available now. I just got errors at the command line trying to do "start". I think it would help if the new documentation was a little clearer about this, and how to look at heal info to find out when your brick replacement is fully finished. That's my opinion :-) - Alan
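One way to act on Alan's point is to poll heal info after the replace-brick
command returns and only treat the replacement as finished once no entries are
pending. The loop below is a rough sketch: it assumes the volume is gv0, picks
an arbitrary 60-second interval, and parses the "Number of entries:" lines,
whose exact format can vary between glusterfs releases.

    # After 'replace-brick ... commit force' returns, the new brick is in the
    # volume config but may still be empty; poll heal info until it drains.
    while true; do
        pending=$(gluster volume heal gv0 info | \
                  awk '/^Number of entries:/ {sum += $4} END {print sum + 0}')
        echo "$(date): ${pending} entries still pending heal"
        [ "${pending}" -eq 0 ] && break
        sleep 60
    done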
Hi all,

I have a problem with how to recover a replicated volume.

Precondition:
glusterfs version: 3.7.6
brick of A board: 128.224.95.140:/data/brick/gv0
brick of B board: 128.224.162.255:/data/brick/gv0

Reproduce:
1. gluster peer probe 128.224.162.255 (on A board)
2. gluster volume create gv0 replica 2 128.224.95.140:/data/brick/gv0 128.224.162.255:/data/brick/gv0 force (on A board)
3. gluster volume start gv0 (on A board)
4. reboot the B board

After the B board reboots, I sometimes have the problems below.
1. The peer status is sometimes "rejected" when I run "gluster peer status". (on A or B board)
2. The brick on the B board is sometimes offline when I run "gluster volume status". (on A or B board)

I want to know what I should do to recover my replicated volume.

PS. At the moment I do the following operations to recover the replicated
volume, but sometimes I can't sync all the files in the volume even if I run
"heal full".

1. gluster volume remove-brick gv0 replica 1 128.224.162.255:/data/brick/gv0 force (on A board)
2. gluster peer detach 128.224.162.255 (on A board)
3. gluster peer probe 128.224.162.255 (on A board)
4. gluster volume add-brick gv0 replica 2 128.224.162.255:/data/brick/gv0 force (on A board)

Please help me.

Thanks,
Xin
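For anyone trying to reproduce or debug the state Xin describes, a minimal
checking sequence after the B board comes back might look like the commands
below. The volume name gv0 is taken from the mail; whether this is enough to
clear a "Peer Rejected" state depends on why the peer was rejected in the
first place, so this is only a starting point, not a fix.

    # On A (or B) after B reboots: check peer and brick state
    gluster peer status                  # look for "State: Peer Rejected"
    gluster volume status gv0            # the B brick should show Online: Y

    # If the B brick is offline but glusterd is running, restart the volume's
    # brick processes without interrupting the ones that are already online
    gluster volume start gv0 force

    # Then trigger a full heal and watch its progress
    gluster volume heal gv0 full
    gluster volume heal gv0 info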