search for: gfs_controld

Displaying 11 results from an estimated 11 matches for "gfs_controld".

2011 Feb 27
1
Recover botched drbd gfs2 setup.
...]# drbd-overview 1:r0 WFConnection Primary/Unknown UpToDate/DUnknown C r---- Cman: [root at mcvpsam01 init.d]# /etc/init.d/cman status groupd is stopped gfs2 mount [root at mcvpsam01 init.d]# ./gfsmount.sh start Mounting gfs2 partition /sbin/mount.gfs2: can't connect to gfs_controld: Connection refused /sbin/mount.gfs2: can't connect to gfs_controld: Connection refused /sbin/mount.gfs2: can't connect to gfs_controld: Connection refused /sbin/mount.gfs2: can't connect to gfs_controld: Connection refused /sbin/mount.gfs2: can't connect to gfs_controld: Connec...
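
The "Connection refused" errors in this thread are the classic symptom of running mount.gfs2 while gfs_controld is not running; on CentOS 5 gfs_controld is started by the cman init script (note the "groupd is stopped" output above), so the cluster stack has to be up before the mount is attempted. A minimal check sequence might look like the sketch below; the device and mount point are illustrative, not taken from the thread.

    # Bring the cluster stack up first; the cman init script starts
    # groupd, fenced, dlm_controld and gfs_controld.
    service cman start

    # Confirm the node has joined and the control daemons are running.
    cman_tool status
    group_tool            # should list the fence, dlm and gfs groups
    ps -C gfs_controld    # the daemon mount.gfs2 needs to talk to

    # Only then mount the clustered filesystem (example device/mount point).
    mount -t gfs2 /dev/drbd0 /mnt/gfs2
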
2010 Sep 15
0
problem with gfs_controld
Hi, We have two nodes with centos 5.5 x64 and cluster+gfs offering samba and NFS services. Recently one node displayed the following messages in log files: Sep 13 08:19:07 NODE1 gfs_controld[3101]: cpg_mcast_joined error 2 handle 2846d7ad00000000 MSG_PLOCK Sep 13 08:19:07 NODE1 gfs_controld[3101]: send plock message error -1 Sep 13 08:19:11 NODE1 gfs_controld[3101]: cpg_mcast_joined error 2 handle 2846d7ad00000000 MSG_PLOCK Sep 13 08:19:11 NODE1 gfs_controld[3101]: send plock message e...
2010 Mar 27
1
DRBD, GFS2 and GNBD without all the clustered cman stuff
.../drbd Server 2: LogVol09, Secondary for /dev/drbd, cat /proc/drbd is ok. What I thought to do was to export /dev/drbd0 via GNBD and import it on a couple of nodes, using GFS2 as the FS for concurrent usage. I managed to format the GFS2 FS on the Server 1 drbd0 device, but was unable to mount it because gfs_controld wasn't accepting connections; I discovered that cman manages gfs_controld. In any case I started it manually and discovered that it tries to use the ccsd daemon, which I have not installed. OK, I do not need to mount it locally, so I started configuring GNBD, started the server manually and started the ex...
2010 Jul 19
1
GFS performance issue
...head = 262144 quota_quantum = 60 quota_warn_period = 10 jindex_refresh_secs = 60 log_flush_secs = 60 incore_log_blocks = 1024 [root at www3 www]# cat /etc/cluster/cluster.conf |egrep '(dlm)|(gfs)' <dlm plock_ownership="1" plock_rate_limit="0"/> <gfs_controld plock_rate_limit="0"/> Fred Wittekind
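
For context, the two elements grepped out of cluster.conf above are direct children of the top-level <cluster> element. A trimmed, assumed-typical fragment is sketched below (cluster name, config version and node section are placeholders); as far as I recall these plock settings are read when the cluster daemons start, so they usually only take effect after the stack is restarted.

    # /etc/cluster/cluster.conf (relevant fragment only)
    <?xml version="1.0"?>
    <cluster name="example" config_version="30">
      <!-- disable plock rate limiting, enable plock ownership caching -->
      <dlm plock_ownership="1" plock_rate_limit="0"/>
      <gfs_controld plock_rate_limit="0"/>
      <clusternodes>
        <!-- node and fencing definitions omitted -->
      </clusternodes>
    </cluster>
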
2010 Mar 24
3
mounting gfs partition hangs
Hi, I have configured two machines for testing gfs filesystems. They are attached to an iscsi device and the centos versions are: CentOS release 5.4 (Final) Linux node1.fib.upc.es 2.6.18-164.el5 #1 SMP Thu Sep 3 03:33:56 EDT 2009 i686 i686 i386 GNU/Linux The problem is that if I try to mount a gfs partition, it hangs. [root at node2 ~]# cman_tool status Version: 6.2.0 Config Version: 29 Cluster Name:
2011 Apr 22
0
GFS2 performance
...2 implementation is quite slow. When running the ping_pong test we get no more than 1000 locks/sec on the disk. ./ping_pong /mnt/backup/test.dat 4 879 locks/sec The cluster config has been updated with: <dlm plock_ownership="1" plock_rate_limit="0"/> <gfs_controld plock_rate_limit="0"/> gfs2_tool getargs /mnt/backup statfs_percent 0 data 2 suiddir 0 quota 0 posix_acl 0 upgrade 0 debug 0 localflocks 0 localcaching 0 ignore_local_fs 0 spectator 0 hostdata jid=1:id=2752514:first=0 locktable lockproto gfs2_tool df -H /mnt/backup: SB lock proto =...
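
For readers unfamiliar with it, ping_pong is the small lock-contention benchmark shipped with the ctdb/Samba sources; the second argument is the number of concurrent lockers, conventionally the number of nodes plus one. A hedged sketch of how figures like the one above are usually produced (file name and node count are illustrative):

    # On one node only: uncontended baseline for the plock rate.
    ./ping_pong /mnt/backup/test.dat 4

    # Then run the same command simultaneously on every cluster node
    # against the same file; the locks/sec reported under contention is
    # what the <dlm>/<gfs_controld> plock tuning is meant to improve.
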
2012 Nov 27
6
CTDB / Samba / GFS2 - Performance - with Picture Link
Hello, maybe there is someone who can help and answer a question about why I get this network graph on my ctdb clusters. I have two ctdb clusters, one physical and one in a VMware environment. When I transfer (copy) any files on a samba share, I get network curves with performance breaks like these. I don't see that the transfer stops, but why is that so? Can I change anything, or does anybody know
2011 May 05
5
Dovecot imaptest on RHEL4/GFS1, RHEL6/GFS2, NFS and local storage results
...112 2649 2879 5676 6- 2843 1460 1444 2839 2722 3948 422 2164 2713 2957 5778 rhel6 x86_64/GFS2 two nodes, shared FC lun on a SAN (used RDM in VMware vSphere for the GFS2 lun) Tuned cluster suite cluster.conf + <dlm plock_ownership="1" plock_rate_limit="0"/> <gfs_controld plock_rate_limit="0"/> Totals: Logi List Stat Sele Fetc Fet2 Stor Dele Expu Appe Logo 100% 50% 50% 100% 100% 100% 50% 100% 100% 100% 100% 30% 5% 1- 2730 1340 1356 2704 2644 3748 522 2125 2643 2662 5422 2- 3309 1618 1659 3294 3223...
2009 Feb 13
5
GFS + Restarting iptables
...I reproduced the issue now several times, so I am quite sure it has to do with the iptables restart. I have a custom fence script for our ipoman powerswitch, which is all tested and working fine. When I do an iptables restart, the following happens: - After approx 10 seconds the gfs_controld process goes to 100% cpu usage (on all nodes!) - I can still access my gfs mount - group_tool dump gfs tells me: ---------------------- 1234541723 config_no_withdraw 0 1234541723 config_no_plock 0 1234541723 config_plock_rate_limit 100 1234541723 config_plock_ownership 0 1234541723 config_drop_res...
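
A common explanation in threads like this is that the iptables restart loads a ruleset which silently drops the cluster's openais/dlm traffic, leaving gfs_controld spinning on a dead interconnect. A hedged sketch of rules often opened for the RHEL 5 cluster stack is shown below; the port numbers are quoted from memory of the Red Hat cluster documentation and should be verified for your release before relying on them.

    # openais/cman totem traffic between cluster nodes
    iptables -A INPUT -p udp --dport 5404:5405 -j ACCEPT
    # dlm inter-node lock traffic
    iptables -A INPUT -p tcp --dport 21064 -j ACCEPT
    # ccsd configuration daemon
    iptables -A INPUT -p tcp -m multiport --dports 50006,50008,50009 -j ACCEPT
    iptables -A INPUT -p udp --dport 50007 -j ACCEPT
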
2012 Mar 07
1
[HELP!] GFS2 in Xen 4.1.2 does not work!
[This email is either empty or too large to be displayed at this time]
2008 Nov 14
5
[RFC] Splitting cluster.git into separate projects/trees
Hi everybody, as discussed and agreed at the Cluster Summit we need to split our tree to make life easier in the long run (etc. etc.). We need to decide how we want to do it and there are different approaches to that. I was able to think of 3. There might be more and I might not have taken everything into consideration so comments and ideas are welcome. At this point we haven't really