Hi,

I created a replicated volume with 2 bricks, and I frequently reboot my two nodes and frequently run "peer detach", "peer detach", "add-brick", "remove-brick".

A board ip: 10.32.0.48
B board ip: 10.32.1.144

After that, I ran "gluster peer status" on the A board and it shows the following.

Number of Peers: 2

Hostname: 10.32.1.144
Uuid: bbe2a458-ad3d-406d-b233-b6027c12174e
State: Peer in Cluster (Connected)

Hostname: 10.32.1.144
Uuid: bbe2a458-ad3d-406d-b233-b6027c12174e
State: Peer in Cluster (Connected)

I don't understand why 10.32.0.48 has two peers which are both 10.32.1.144.

Does glusterd not check for duplicate ip addresses?

Can anyone help me answer my question?

Thanks,
Xin
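For reference, a minimal sketch of the kind of command sequence described above (the volume name and brick paths are assumptions, not taken from the report, and the real ordering of the steps may differ):

# Assumed volume name "repvol" and brick paths; adjust to the actual setup.
gluster peer probe 10.32.1.144
gluster volume create repvol replica 2 10.32.0.48:/data/brick 10.32.1.144:/data/brick
gluster volume start repvol

# Operations the report says were run repeatedly, interleaved with node reboots:
gluster volume remove-brick repvol replica 1 10.32.1.144:/data/brick force
gluster peer detach 10.32.1.144
gluster peer probe 10.32.1.144
gluster volume add-brick repvol replica 2 10.32.1.144:/data/brick force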
Abhilash has already raised a concern and Gaurav is looking into it.

-Atin
Sent from one plus one

On 19-Feb-2016 7:07 pm, "songxin" <songxin_1980 at 126.com> wrote:
> Hi,
>
> I created a replicated volume with 2 bricks, and I frequently reboot my two nodes and frequently run "peer detach", "peer detach", "add-brick", "remove-brick".
[snip]
Hi Xin,

Thanks for bringing up your Gluster issue. Abhishek (another Gluster community member) also faced the same issue. To analyse it further, could you provide the following information?

1. Did you perform any manual operation on the GlusterFS configuration files that reside under /var/lib/glusterd/*?
2. The output of "ls /var/lib/glusterd/peers" from both of your nodes.
3. The output of "gluster volume info".
4. The output of "gluster peer status" while the 2nd node is down.
5. Shut down glusterd on both nodes, then bring glusterd back up one node at a time, and provide the output of "gluster peer status".
6. The full cmd_history.log and etc-glusterfs-glusterd.vol.log from both nodes.

These details will be very useful for analysing this issue. As a workaround you can restart glusterd for now, but we still need to analyse the issue further.

Thanks,
~Gaurav

----- Original Message -----
From: "songxin" <songxin_1980 at 126.com>
To: gluster-users at gluster.org
Sent: Friday, February 19, 2016 7:07:48 PM
Subject: [Gluster-users] two same ip addr in peer list

Hi,

I created a replicated volume with 2 bricks, and I frequently reboot my two nodes and frequently run "peer detach", "peer detach", "add-brick", "remove-brick".
[snip]
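A minimal sketch of how the information requested above could be gathered on each node (the systemd unit name and the default /var/log/glusterfs log directory are assumptions for a typical install; adjust for your distribution):

# Peer store: typically one file per known peer.
ls /var/lib/glusterd/peers

# Volume and peer views.
gluster volume info
gluster peer status

# Logs requested for analysis.
cat /var/log/glusterfs/cmd_history.log
cat /var/log/glusterfs/etc-glusterfs-glusterd.vol.log

# Workaround mentioned above: restart the management daemon.
systemctl restart glusterd    # or: service glusterd restart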
On 02/19/2016 05:37 AM, songxin wrote:
> Hi,
>
> I created a replicated volume with 2 bricks, and I frequently reboot my two nodes and frequently
> run "peer detach", "peer detach", "add-brick", "remove-brick".
>
[snip]

Why? You don't need to disassemble and reassemble your cluster every time you reboot a server. Why not just reboot? Do make sure self-heal has completed first, though.
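If it helps, one way to confirm self-heal has finished before taking a node down (the volume name here is a placeholder):

# Lists pending heal entries per brick; empty lists under both bricks
# mean there is nothing left to heal.
gluster volume heal <VOLNAME> info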