Hi Xin,

I haven't heard about this issue in GlusterFS 3.7.6/3.7.8 or any other version. After checking all the logs I can say whether it is a bug or something else.

Thanks,
Gaurav

----- Original Message -----
From: "songxin" <songxin_1980 at 126.com>
To: "Gaurav Garg" <ggarg at redhat.com>
Cc: gluster-users at gluster.org
Sent: Saturday, February 20, 2016 4:56:12 AM
Subject: Re: [Gluster-users] two same ip addr in peer list

Hi Gaurav,

Thank you for your reply. I will run the tests you suggested. I hit this issue on glusterd version 3.7.6. Do you know whether it has been fixed in the latest version, 3.7.8?

Thanks,
Xin

Sent from my iPhone

> On Feb 20, 2016, at 02:17, Gaurav Garg <ggarg at redhat.com> wrote:
>
> Hi Xin,
>
> Thanks for bringing up your Gluster issue.
>
> Abhishek (another Gluster community member) also faced the same issue. I asked for the information below to analyse it further. Could you provide the following?
>
> Did you perform any manual operation on the GlusterFS configuration files that reside in the /var/lib/glusterd/* folder?
>
> Can you provide the output of "ls /var/lib/glusterd/peers" from both of your nodes?
>
> Can you provide the output of the #gluster volume info command?
>
> Could you provide the output of the #gluster peer status command when the 2nd node is down?
>
> Shut down glusterd on both nodes, bring it back up one node at a time, and provide the output of the #gluster peer status command.
>
> Can you provide the full cmd_history.log and etc-glusterfs-glusterd.vol.log from both nodes?
>
> These details will be very useful for analysing this issue.
>
> You can restart your glusterd as a workaround for now, but we need to analyse this issue further.
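The checklist above can be gathered in one pass per node; a minimal sketch, assuming the default /var/lib/glusterd layout and the usual log locations under /var/log/glusterfs (the output filename is illustrative). Errors are kept in the capture on purpose, so a missing peers directory or a down daemon still shows up in the report:

```shell
#!/bin/sh
# Gather the diagnostics requested above on this node; run it on both
# nodes and compare the two reports.
out="gluster-diag-$(hostname).txt"
{
    echo "== peer files =="
    ls /var/lib/glusterd/peers 2>&1 || true

    echo "== volume info =="
    gluster volume info 2>&1 || true

    echo "== peer status =="
    gluster peer status 2>&1 || true
} > "$out"
echo "wrote $out"

# Attach these daemon logs from both nodes alongside the report
# (paths as named in the mail; typically under /var/log/glusterfs):
#   /var/log/glusterfs/cmd_history.log
#   /var/log/glusterfs/etc-glusterfs-glusterd.vol.log
```

Running this before and after the one-by-one glusterd restarts makes it easy to compare the peer list across restarts.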
>
> Thanks,
> ~Gaurav
>
> ----- Original Message -----
> From: "songxin" <songxin_1980 at 126.com>
> To: gluster-users at gluster.org
> Sent: Friday, February 19, 2016 7:07:48 PM
> Subject: [Gluster-users] two same ip addr in peer list
>
> Hi,
> I created a replicated volume with 2 bricks, and I frequently rebooted my two nodes and frequently ran "peer detach", "add-brick" and "remove-brick".
> A board ip: 10.32.0.48
> B board ip: 10.32.1.144
>
> After that, I ran "gluster peer status" on the A board and it showed the following:
>
> Number of Peers: 2
>
> Hostname: 10.32.1.144
> Uuid: bbe2a458-ad3d-406d-b233-b6027c12174e
> State: Peer in Cluster (Connected)
>
> Hostname: 10.32.1.144
> Uuid: bbe2a458-ad3d-406d-b233-b6027c12174e
> State: Peer in Cluster (Connected)
>
> I don't understand why 10.32.0.48 has two peers that are both 10.32.1.144.
> Does glusterd not check for duplicate ip addresses?
> Can anyone help me answer my question?
>
> Thanks,
> Xin
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
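Each peer that glusterd knows about is stored as a file under /var/lib/glusterd/peers, so a duplicate entry like the one quoted above can be spotted directly in that directory. A minimal sketch, assuming the usual peer-file layout with uuid=, state= and hostname1= lines; the helper name is ours:

```shell
#!/bin/sh
# find_dup_peers: print any hostname1= value that occurs in more than one
# peer file, i.e. the same address recorded as two different peers.
find_dup_peers() {
    peer_dir=$1
    # Collect every hostname1= line across all peer files; uniq -d prints
    # only the values that appear more than once.
    grep -h '^hostname1=' "$peer_dir"/* 2>/dev/null | sort | uniq -d
}

# On a live node:
#   find_dup_peers /var/lib/glusterd/peers
# Any output means glusterd's on-disk peer store holds duplicate entries.
```

This only detects the duplication; removing a stale peer file by hand is the kind of manual edit under /var/lib/glusterd that Gaurav asks about above, so it should be done only with glusterd stopped and the logs saved first.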
Hi all,

I have a question about replicated volumes, as below.

Preconditions:
1. A node ip: 128.224.162.163
2. B node ip: 128.224.162.255
3. A node brick: /data/brick/gv0
4. B node brick: /data/brick/gv0

Steps to reproduce:
1. gluster peer probe 128.224.162.255 //run on A node
2. gluster volume create gv0 128.224.162.163:/data/brick/gv0 force //run on A node
3. gluster volume start gv0 //run on A node
4. mount -t glusterfs 128.224.162.163:/gv0 gluster //run on A node
5. create some files (a, b, c) in the directory gluster //run on A node
6. gluster volume add-brick gv0 replica 2 128.224.162.255:/data/brick/gv0 force //run on A node
7. create some files (d, e, f) in the directory gluster //run on A node
8. mount -t glusterfs 128.224.162.163:/gv0 gluster //run on B node
9. ls gluster //run on B node

My question is as below.
After step 6, the volume type changes from distribute to replicate. The files (a, b, c) were created while the volume type was distribute; the files (d, e, f) were created while it was replicate. After step 6, will the volume replicate the files (a, b, c) onto both bricks, or only the files (d, e, f)? If I run "gluster volume heal gv0 full", will the volume then replicate the files (a, b, c) onto both bricks?

Thanks,
Xin
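Whether (a, b, c) actually land on the new brick can be checked directly on disk after the heal; a hedged sketch of that check, reusing the volume name and brick paths from the steps above:

```shell
#!/bin/sh
# After step 6 has converted gv0 to replica 2, trigger a full heal and
# watch it drain. Run on A node.
gluster volume heal gv0 full    # ask self-heal to walk the whole volume
gluster volume heal gv0 info    # entries still pending heal, per brick

# Once "heal info" shows no pending entries, the backing brick on B
# should hold all six files:
#   ls /data/brick/gv0          # run on B node; expect a b c d e f
```

This sketch only shows how to observe the result; it does not by itself answer whether the heal is needed for the pre-existing files, which is exactly Xin's question to the list.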