Hi all,

I'm currently trying to deploy ucarp with GlusterFS, specifically for NFS access. Ucarp works well when I ping the VIP and shut down the Master (and also when the Master comes back up), but I'm facing a problem with the NFS connections. I have a client mounted on the VIP; when the Master goes down, the client switches automatically to the Slave with almost no delay, and it works like a charm. But when the Master comes back up, the mount point on the client freezes. I've monitored the traffic with tcpdump: when the Master comes back up, the client sends packets to the Master, but the Master never seems to establish the TCP connection.

My volume config:

Volume Name: hermes
Type: Replicate
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: ylal2950:/users/export
Brick2: ylal2960:/users/export
Options Reconfigured:
performance.cache-size: 1GB
performance.cache-refresh-timeout: 60
network.ping-timeout: 25
nfs.port: 2049

As Craig wrote previously, I probed the hosts and created the volume with their real IPs; only the client uses the VIP.

Does anyone have experience with UCARP and GlusterFS?

Anthony
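P.S. For reference, the client is mounted against the VIP only, with something like the following (the mount point and exact options here are just an illustration, and <VIP> is a placeholder for the virtual address; Gluster's built-in NFS server speaks NFSv3 over TCP on the port set above):

# on the client, mounting the replicated volume through the virtual IP
mount -t nfs -o vers=3,proto=tcp,port=2049 <VIP>:/hermes /mnt/hermes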
Yes, working for me on CentOS 5.6 with samba/glusterfs/ext3. It seems to me your ucarp is not configured to work master/slave:

Master:

ID=1
BIND_INTERFACE=eth1
# Real IP
SOURCE_ADDRESS=xxx.xxx.xxx.xxx
# Slave config: OPTIONS="--shutdown --preempt -b 1 -k 50"
OPTIONS="--shutdown --preempt -b 1"
# Virtual IP used by ucarp
VIP_ADDRESS="zzz.zzz.zzz.zzz"
# Ucarp password
PASSWORD=yyyyyyyy

On your slave you need:

OPTIONS="--shutdown --preempt -b 1 -k 50"

Good luck
Daniel

EDV Daniel Müller
Leitung EDV Tropenklinik Paul-Lechler-Krankenhaus
Paul-Lechler-Str. 24
72076 Tübingen
Tel.: 07071/206-463, Fax: 07071/206-499
eMail: mueller at tropenklinik.de
Internet: www.tropenklinik.de
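P.S. For completeness, the matching slave-side file would look roughly like this (just a sketch in the same CentOS-style sysconfig format, with a placeholder for the slave's real IP; only the source address and the -k 50 advskew differ from the master):

Slave:

ID=1
BIND_INTERFACE=eth1
# Real IP of the slave
SOURCE_ADDRESS=xxx.xxx.xxx.yyy
# Higher advskew (-k 50) so this node normally loses the election to the master
OPTIONS="--shutdown --preempt -b 1 -k 50"
# Same virtual IP and password as on the master
VIP_ADDRESS="zzz.zzz.zzz.zzz"
PASSWORD=yyyyyyyy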
On Thu, Sep 08, 2011 at 01:02:41PM +0000, anthony garnier wrote:

> I got a client mounted on the VIP, when the Master fall, the client switch
> automaticaly on the Slave with almost no delay, it works like a charm. But when
> the Master come back up, the mount point on the client freeze.
> I've done a monitoring with tcpdump, when the master came up, the client send
> paquets on the master but the master seems to not establish the TCP connection.

Anthony,

Your UCARP command line choices and scripts would be worth looking at here. There are different UCARP behavior options for when the master comes back up. If the initial failover works fine, you may have better results if you don't have a preferred master. That is, you can either have UCARP set so that the slave relinquishes the IP back to the master when the master comes back up, or you can have UCARP set so that the slave becomes the new master until it goes down itself, at which point the former master takes over again.

If you're doing it the first way, there may be a brief overlap where both systems claim the VIP. That may be where your mount is failing. Doing it the second way, where the VIP is held by whichever system has it until that system actually goes down, there's no overlap.

There shouldn't be a reason, in the Gluster context, to care which system is master, is there?

Whit
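P.S. Concretely, the second approach amounts to dropping --preempt from the OPTIONS on both nodes, so a node that comes back up stays backup until the current VIP holder actually dies. A sketch, assuming the same sysconfig-style config Daniel showed (only the advskew differs between the two boxes):

# master side: no --preempt, so it won't grab the VIP back on return
OPTIONS="--shutdown -b 1"

# slave side: no --preempt, higher advskew
OPTIONS="--shutdown -b 1 -k 50"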