search for: group_tool

Displaying 4 results from an estimated 4 matches for "group_tool".

2009 Feb 13
5
GFS + Restarting iptables
...start. I have a custom fence script for our ipoman power switch, which is all tested and working fine. When I restart iptables, the following happens:
- After approx. 10 seconds the gfs_controld process goes to 100% CPU usage (on all nodes!)
- I can still access my GFS mount
- group_tool dump gfs tells me:
----------------------
1234541723 config_no_withdraw 0
1234541723 config_no_plock 0
1234541723 config_plock_rate_limit 100
1234541723 config_plock_ownership 0
1234541723 config_drop_resources_time 10000
1234541723 config_drop_resources_count 10
1234541723 config_drop_resources_ag...
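The symptom in this thread is typical of a firewall restart flushing the cluster ports: once iptables reloads with a default-deny ruleset, inter-node openais traffic is dropped and gfs_controld spins. As a hedged sketch (the port numbers are the stock Red Hat Cluster Suite defaults, not values taken from the thread), the persisted ruleset would need entries along these lines so that a restart keeps the cluster ports open:

```
# /etc/sysconfig/iptables fragment -- assumed RHCS default ports, not from the thread
-A INPUT -p udp -m udp --dport 5404:5405 -j ACCEPT    # cman/openais totem
-A INPUT -p tcp -m tcp --dport 21064 -j ACCEPT        # dlm
-A INPUT -p tcp -m tcp --dport 50006:50009 -j ACCEPT  # ccsd (tcp)
-A INPUT -p udp -m udp --dport 50007 -j ACCEPT        # ccsd (udp)
```

With rules like these saved, a `service iptables restart` reloads a ruleset that still admits cluster traffic; the exact port list should be checked against the cluster's own configuration before relying on it.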
2012 Nov 04
3
Problem with CLVM (really openais)
...is[7179]: [SYNC ] This node is within the primary component and will provide service.
Nov 3 17:44:34 calgb-blade1 openais[7179]: [TOTEM] entering OPERATIONAL state.
...
When this happens, the node that lost connections kills CMAN. The other 3 nodes get into this state:
[root at calgb-blade2 ~]# group_tool
type  level name      id       state
fence 0     default   00010001 FAIL_START_WAIT [1 2 3]
dlm   1     clvmd     00010003 FAIL_ALL_STOPPED [1 2 3 4]
dlm   1     rgmanager 00020003 FAIL_ALL_STOPPED [1 2 3 4]
One of the nodes will be barking ab...
2008 Nov 14
5
[RFC] Splitting cluster.git into separate projects/trees
Hi everybody, as discussed and agreed at the Cluster Summit, we need to split our tree to make life easier in the long run (etc. etc.). We need to decide how we want to do it, and there are different approaches. I was able to think of three; there might be more, and I might not have taken everything into consideration, so comments and ideas are welcome. At this point we haven't really...
2010 Mar 24
3
mounting gfs partition hangs
...00.35 Node addresses: 147.83.41.130
[root at node2 ~]# cman_tool nodes
Node  Sts   Inc  Joined               Name
   0  M       0  2010-03-24 14:46:22  /dev/web/web
   1  M    4156  2010-03-24 17:08:36  node1.fib.upc.es
   2  M    4132  2010-03-24 14:46:09  node2.fib.upc.es
[root at node2 ~]# group_tool
hangs...
[root at node1 ~]# mount -t gfs /dev/home2/home2 /home2
hangs...
If I cancel the command I can return to the terminal, and I don't see anything in the log files. The resource /dev/home2/home2 is accessible from both nodes, and if I try to mount /home2 with lock_nolock there is no problem....
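The lock_nolock observation in this last thread is the standard way to separate a storage problem from a cluster-locking problem: if the filesystem mounts with locking disabled but hangs with the default lock_dlm, the fault is in the cluster stack, not in the device. A sketch of that isolation sequence (the device and mount point are taken from the thread; these commands require a GFS-enabled cluster node and are shown for illustration only):

```shell
# On one node only: lock_nolock bypasses the DLM entirely, so this is
# safe only while no other node has the filesystem mounted.
mount -t gfs -o lockproto=lock_nolock /dev/home2/home2 /home2

# If that succeeds, unmount and retest with cluster locking,
# checking the cluster state first:
umount /home2
cman_tool nodes   # all nodes should show Sts M (member)
group_tool        # groups should show state "none", not FAIL_* or *_WAIT
mount -t gfs /dev/home2/home2 /home2
```

In this thread, group_tool itself hangs, which already points at the groupd/fencing layer rather than GFS; a healthy group_tool listing is a reasonable precondition before attempting the lock_dlm mount.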