Hi songxin, please find comments inline.

----- Original Message -----
From: "songxin" <songxin_1980 at 126.com>
To: gluster-users at gluster.org
Cc: gluster-users at gluster.org
Sent: Wednesday, February 24, 2016 7:16:02 AM
Subject: [Gluster-users] question about replicate volume

Hi all,
I have a question about a replicate volume, as below.

Preconditions:
1. A node IP: 128.224.162.163
2. B node IP: 128.224.162.255
3. A node brick: /data/brick/gv0
4. B node brick: /data/brick/gv0

Steps to reproduce:
1. gluster peer probe 128.224.162.255                                            // run on A node
2. gluster volume create gv0 128.224.162.163:/data/brick/gv0 force               // run on A node
3. gluster volume start gv0                                                      // run on A node
4. mount -t glusterfs 128.224.162.163:/gv0 gluster                               // run on A node
5. create some files (a, b, c) in directory gluster                              // run on A node
6. gluster volume add-brick gv0 replica 2 128.224.162.255:/data/brick/gv0 force  // run on A node
7. create some files (d, e, f) in directory gluster                              // run on A node
8. mount -t glusterfs 128.224.162.163:/gv0 gluster                               // run on B node
9. ls gluster                                                                    // run on B node

My question is as below.

After step 6, the volume type changes from distribute to replicate.
The files (a, b, c) were created while the volume type was distribute.
The files (d, e, f) were created while the volume type was replicate.

>> After step 6, will the volume replicate the files (a, b, c) to both bricks, or only the files (d, e, f)? If I run "gluster volume heal gv0 full", will the volume replicate the files (a, b, c) to both bricks?

After step 6 the volume has been converted to a replicate volume, so any file you create from the mount point will be replicated to every replica set. In your case, after step 6 only the files (d, e, f) are replicated, because before step 6 the volume was distribute. To replicate the files created before step 6 you need to run "gluster volume heal <volname> full". After this command completes, the files on both bricks of the replica set should be the same.
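For reference, the conversion-and-heal sequence described above can be collected in one place. A minimal sketch using the volume and brick names from this thread; the commands are printed rather than executed here, since they require a live gluster cluster:

```shell
# Sketch of the distribute-to-replica conversion discussed above.
# VOL and NEW_BRICK are taken from this thread; the gluster commands
# are printed (not run) because they need a running cluster.
VOL=gv0
NEW_BRICK=128.224.162.255:/data/brick/gv0

CMDS="gluster volume add-brick $VOL replica 2 $NEW_BRICK force
gluster volume heal $VOL full
gluster volume heal $VOL info"

printf '%s\n' "$CMDS"
```

The "heal ... info" command at the end is a quick way to watch the full heal work through the files that existed before the conversion.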
Thanks,
~Gaurav

Thanks,
Xin

_______________________________________________
Gluster-users mailing list
Gluster-users at gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users
On 02/23/2016 07:58 PM, Gaurav Garg wrote:
> For replicating all the files (created before step 6) you need to run
> #gluster volume heal <volname> full. After executing this command the
> files in both replica sets should be the same.

Did that change? It used to trigger the heal crawl automatically when you changed the replica count.
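As a rough server-side check that the heal (whether triggered manually or automatically) has finished, the top-level contents of the two bricks can be compared directly. A minimal sketch; `bricks_match` is a hypothetical helper name, it only compares top-level file names (not contents or subdirectories), and it skips gluster's internal `.glusterfs` directory:

```shell
# Hypothetical helper: succeed if two brick directories contain the
# same top-level entries, ignoring the internal .glusterfs directory.
# This is a rough sanity check, not a substitute for "heal info".
bricks_match() {
    diff <(ls -A "$1" | grep -v '^\.glusterfs$' | sort) \
         <(ls -A "$2" | grep -v '^\.glusterfs$' | sort) > /dev/null
}

# example (run somewhere both brick paths are visible):
# bricks_match /path/to/brickA /path/to/brickB && echo "bricks in sync"
```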
Hi,
I have a problem, as below, when I start gluster after rebooting a board.

Preconditions:
I use two boards for this test. The glusterfs version is 3.7.6.
A board IP: 128.224.162.255
B board IP: 128.224.95.140

Steps to reproduce:
1. systemctl start glusterd    (A board)
2. systemctl start glusterd    (B board)
3. gluster peer probe 128.224.95.140    (A board)
4. gluster volume create gv0 replica 2 128.224.95.140:/tmp/brick1/gv0 128.224.162.255:/data/brick/gv0 force    (local board)
5. gluster volume start gv0    (A board)
6. press the reset button on the A board. It is a development board, so it has a reset button similar to the reset button on a PC.    (A board)
7. run "systemctl start glusterd" after the A board reboots. The command fails because of the file /var/lib/glusterd/snaps/.nfsxxxxxxxxx (local board). The log is as below:

[2015-12-07 07:55:38.260084] E [MSGID: 101032] [store.c:434:gf_store_handle_retrieve] 0-: Path corresponding to /var/lib/glusterd/snaps/.nfs0000000001722f4000000002
[2015-12-07 07:55:38.260120] D [MSGID: 0] [store.c:439:gf_store_handle_retrieve] 0-: Returning -1
[2015-12-07 07:55:38.260152] E [MSGID: 106200] [glusterd-store.c:3332:glusterd_store_update_snap] 0-management: snap handle is NULL
[2015-12-07 07:55:38.260180] E [MSGID: 106196] [glusterd-store.c:3427:glusterd_store_retrieve_snap] 0-management: Failed to update snapshot for .nfs0000000001722f40
[2015-12-07 07:55:38.260208] E [MSGID: 106043] [glusterd-store.c:3589:glusterd_store_retrieve_snaps] 0-management: Unable to restore snapshot: .nfs0000000001722f400
[2015-12-07 07:55:38.260241] D [MSGID: 0] [glusterd-store.c:3607:glusterd_store_retrieve_snaps] 0-management: Returning with -1
[2015-12-07 07:55:38.260268] D [MSGID: 0] [glusterd-store.c:4339:glusterd_restore] 0-management: Returning -1
[2015-12-07 07:55:38.260325] E [MSGID: 101019] [xlator.c:428:xlator_init] 0-management: Initialization of volume 'management' failed, review your volfile again
[2015-12-07 07:55:38.260355] E [graph.c:322:glusterfs_graph_init] 0-management: initializing translator failed
[2015-12-07 07:55:38.260374] E [graph.c:661:glusterfs_graph_activate] 0-graph: init failed

8. rm /var/lib/glusterd/snaps/.nfsxxxxxxxxx    (A board)
9. run "systemctl start glusterd" again, and it succeeds.
10. at this point the peer status is Peer in Cluster (Connected) and all processes are online.

If a node is reset abnormally, must I remove /var/lib/glusterd/snaps/.nfsxxxxxx before starting glusterd? I want to know whether this is normal.

Thanks,
Xin
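The cleanup in step 8 can be scripted slightly more safely by matching only top-level `.nfs*` silly-rename files instead of a hard-coded name. A sketch; `clean_stale_nfs` is a hypothetical helper, and it assumes nothing legitimate in the snaps directory starts with `.nfs`:

```shell
# Hypothetical helper for step 8: remove stale NFS silly-rename files
# (.nfs*) from a directory before restarting glusterd. Only top-level
# regular files are touched; real snapshot subdirectories are left alone.
clean_stale_nfs() {
    find "$1" -maxdepth 1 -type f -name '.nfs*' -print -delete
}

# example:
# clean_stale_nfs /var/lib/glusterd/snaps && systemctl start glusterd
```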