Displaying 4 results from an estimated 4 matches for "noipfailback".
2017 Nov 08  1  ctdb vacuum timeouts and record locks
...ARPs and tickle ACKs on the takeover node. It doesn't
> actually assign the public IP addresses to nodes.
Hm, okay, I was clear that with 10.external it is a human's
responsibility to deal with assigning IPs to physical interfaces. In
re-reading the docs, I see DeterministicIPs and NoIPFailback are
required for moveip, which I am not sure are set. I will check at the
next opportunity; if they aren't, that might explain the behaviour.
However, the IPs were correctly assigned using the ip command.
The reason I am using 10.external is that when I initially set up my
cluster test environme...
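A minimal sketch of what that moveip workflow looks like with 10.external (the /24 prefix and the eth1 interface name are assumptions, not taken from the thread; the address is one of the cluster's public IPs shown in the later post):

# check the tunables the moveip documentation refers to
ctdb getvar DeterministicIPs
ctdb getvar NoIPFailback

# moveip wants deterministic assignment off and failback disabled
ctdb setvar DeterministicIPs 0
ctdb setvar NoIPFailback 1

# tell ctdb that node 1 should now host this public address ...
ctdb moveip 192.168.120.92 1

# ... then, because 10.external only sends the gratuitous ARPs and
# tickle ACKs, plumb the address onto the interface on node 1 by hand
ip addr add 192.168.120.92/24 dev eth1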
2016 Nov 10  1  CTDB IP takeover/failover tunables - do you use them?
...n this tunable is enabled, ctdb will no longer attempt to recover
the cluster by failing IP addresses over to other nodes. This leads to
a service outage until the administrator has manually performed IP
failover to replacement nodes using the 'ctdb moveip' command.
NoIPFailback
Default: 0
When set to 1, ctdb will not perform failback of IP addresses when a
node becomes healthy. When a node becomes UNHEALTHY, ctdb WILL perform
failover of public IP addresses, but when the node becomes HEALTHY
again, ctdb will not fail the addresses back....
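For reference, a quick way to inspect and change these two tunables across a running cluster (a sketch assuming a standard install where onnode can reach every node over ssh):

# show the current takeover/failback settings on every node
onnode all ctdb getvar NoIPTakeover
onnode all ctdb getvar NoIPFailback

# disable failback cluster-wide: public IPs still fail over when a node
# goes UNHEALTHY, but they stay put when the node becomes HEALTHY again
onnode all ctdb setvar NoIPFailback 1

Note that setvar changes are runtime-only and do not survive a ctdbd restart, so a setting you want to keep also has to go into the ctdb configuration.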
2014 Jul 03  0  ctdb split brain nodes doesn't see each other
...9.053602 [33243]: Freeze priority 3
2014/07/03 16:07:59.229670 [33243]: Freeze priority 1
2014/07/03 16:07:59.229780 [33243]: Freeze priority 2
2014/07/03 16:07:59.229863 [33243]: Freeze priority 3
2014/07/03 16:07:59.247015 [33243]: Set DeterministicIPs to 0
2014/07/03 16:07:59.253600 [33243]: Set NoIpFailback to 1
2014/07/03 16:08:03.235484 [33287]: Taking out recovery lock from recovery daemon
2014/07/03 16:08:03.235584 [33287]: Take the recovery lock
2014/07/03 16:08:03.236070 [33287]: Recovery lock taken successfully
2014/07/03 16:08:03.236198 [33287]: Recovery lock taken successfully by recovery dae...
2017 Nov 02  2  ctdb vacuum timeouts and record locks
Hm, I stand corrected on the "problem solved" statement below. IP addresses
are simply not cooperating on the 2nd node.
root@vault1:~# ctdb ip
Public IPs on node 0
192.168.120.90 0
192.168.120.91 0
192.168.120.92 0
192.168.120.93 0
root@vault2:/service/ctdb/log/main# ctdb ip
Public IPs on node 1
192.168.120.90 0
192.168.120.91 0
192.168.120.92 0
192.168.120.93 0
root@