
Displaying 20 results from an estimated 1000 matches similar to: "Recover botched drdb gfs2 setup ."

2010 Mar 27
1
DRBD,GFS2 and GNBD without all clustered cman stuff
Hi all, what I want to achieve: 1) two storage servers replicating a partition with DRBD 2) exporting the DRBD device via GNBD from the primary server, with GFS2 on it 3) importing the GNBD on some nodes and mounting it with GFS2. Assuming no logical error in the points above, this is the situation: Server 1: LogVol09, DRBD configured as /dev/drbd0, replicated to Server 2. DRBD seems to work
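For reference, a minimal sketch of the export/import path described above, assuming a DRBD resource already synced as /dev/drbd0, a cluster named "mycluster", and the RHEL5-era gnbd tools (the export name "share0" and the journal count are placeholders, not taken from the post):

    # on the primary storage server: create the GFS2 filesystem and export it
    mkfs.gfs2 -p lock_dlm -t mycluster:share0 -j 4 /dev/drbd0
    gnbd_serv                            # start the GNBD server daemon
    gnbd_export -e share0 -d /dev/drbd0

    # on each importing node
    modprobe gnbd
    gnbd_import -i server1               # imported devices appear under /dev/gnbd/
    mount -t gfs2 /dev/gnbd/share0 /mnt/share0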
2009 Aug 03
1
CTDB+GFS2+CMAN. clean_start="0" or clean_start="1"?
Hi everybody, I have tested CTDB+GFS2+CMAN under Debian. It works well but I do not understand some points. It is possible to run CTDB by defining it under the services section in cluster.conf, but starting it on the second node shuts down the process on the first one. My CTDB configuration implies 2 active-active nodes. Does CTDB care whether the node starts with clean_start="0" or
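As a point of comparison, CTDB is often started from its own init script on every node instead of being defined as a cluster.conf service; a minimal /etc/sysconfig/ctdb sketch for a two-node active-active setup (paths and options are illustrative, not taken from the poster's configuration):

    CTDB_RECOVERY_LOCK=/gfs2/ctdb/.ctdb.lock      # must live on the shared GFS2 filesystem
    CTDB_NODES=/etc/ctdb/nodes                    # private cluster IPs, one per line
    CTDB_PUBLIC_ADDRESSES=/etc/ctdb/public_addresses
    CTDB_MANAGES_SAMBA=yes
    CTDB_MANAGES_WINBIND=yes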
2010 Mar 24
3
mounting gfs partition hangs
Hi, I have configured two machines for testing GFS filesystems. They are attached to an iSCSI device and the CentOS versions are: CentOS release 5.4 (Final) Linux node1.fib.upc.es 2.6.18-164.el5 #1 SMP Thu Sep 3 03:33:56 EDT 2009 i686 i686 i386 GNU/Linux The problem is that if I try to mount a GFS partition it hangs. [root@node2 ~]# cman_tool status Version: 6.2.0 Config Version: 29 Cluster Name:
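When a GFS mount hangs like this, the usual first step is to confirm that cluster membership and the fence/dlm/gfs groups are healthy on both nodes before suspecting the filesystem itself; a hedged sketch of the standard checks on CentOS 5:

    cman_tool status      # quorum, votes, node count
    cman_tool nodes       # both nodes should be listed with status M (member)
    group_tool ls         # fence, dlm and gfs groups should not be stuck joining
    # a mount that hangs while a group is stuck usually means fencing of a
    # failed or unclean node never completed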
2010 Sep 15
0
problem with gfs_controld
Hi, We have two nodes with CentOS 5.5 x64 and cluster+GFS offering Samba and NFS services. Recently one node displayed the following messages in its log files: Sep 13 08:19:07 NODE1 gfs_controld[3101]: cpg_mcast_joined error 2 handle 2846d7ad00000000 MSG_PLOCK Sep 13 08:19:07 NODE1 gfs_controld[3101]: send plock message error -1 Sep 13 08:19:11 NODE1 gfs_controld[3101]: cpg_mcast_joined error 2
2012 Mar 07
1
[HELP!]GFS2 in the xen 4.1.2 does not work!
[This email is either empty or too large to be displayed at this time]
2010 Aug 09
2
HOWTO samba4 centos5.5 named dnsupdate drbd simple failover
CentOS 5.5 / samba4 / named: here is a short guide to setting it up so that it works. First of all, do not install the bind package that comes with CentOS 5.5! Prerequisites for Samba: yum install libacl* gnutls* readline* python* gdb* autoconf* Named installation: here is a description of what to do: http://jason.roysdon.net/2009/10/16/building-bind-9-6-on-rhel5-centos5-for-dnssec-nsec3-support/ The steps, yum
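To illustrate the "named dnsupdate" part of such a setup, a minimal named.conf fragment that allows TSIG-signed dynamic updates to the AD zone might look like the following (the key name, secret and zone are placeholders, not values from the original howto):

    key "samba-update" {
        algorithm hmac-md5;
        secret "BASE64SECRET==";            # e.g. generated with dnssec-keygen
    };

    zone "example.lan" IN {
        type master;
        file "/var/named/dynamic/example.lan.zone";
        allow-update { key "samba-update"; };
    };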
2010 Aug 16
1
WG: HOWTO samba4 centos5.5 named dnsupdate drbd simple failover
CentOS 5.5 / samba4 / named: here is a short guide to setting it up so that it works. First of all, do not install the bind package that comes with CentOS 5.5! Prerequisites for Samba: yum install libacl* gnutls* readline* python* gdb* autoconf* Named installation: here is a description of what to do: http://jason.roysdon.net/2009/10/16/building-bind-9-6-on-rhel5-centos5-for-dnssec-nsec3-support/ The steps, yum
2009 Feb 13
5
GFS + Restarting iptables
Dear list, I have one last little problem with setting up a cluster. My GFS mount hangs as soon as I restart iptables on one of the nodes. First, let me describe my setup: - 4 nodes, all running an updated CentOS 5.2 installation - 1 Dell MD3000i iSCSI SAN - All nodes are connected via Dell's supplied RDAC driver Everything runs stably once the cluster is started (tested
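A restart of iptables typically reloads a ruleset that silently drops the cluster traffic, which is enough to freeze GFS; a hedged sketch of the rules commonly opened between RHEL 5 cluster nodes (port numbers should be verified against the Cluster Suite documentation for your release):

    # openais/cman totem traffic
    iptables -I INPUT -p udp --dport 5404:5405 -j ACCEPT
    # dlm
    iptables -I INPUT -p tcp --dport 21064 -j ACCEPT
    # persist the rules so the next restart loads the same set
    service iptables save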
2011 May 05
5
Dovecot imaptest on RHEL4/GFS1, RHEL6/GFS2, NFS and local storage results
We have done some benchmarking tests using Dovecot 2.0.12 to find the best shared filesystem for hosting many users; here I share the results with you. Notice the poor performance of all the shared filesystems compared with local storage. Is there any specific optimization/tuning in Dovecot for using GFS2 on RHEL6? We have configured the director to keep each user's mailbox persistent on a node; we will
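For the GFS2 question, the generic Dovecot advice for cluster filesystems boils down to a few dovecot.conf settings; a hedged sketch with the commonly recommended values (these are not taken from the poster's configuration):

    # avoid mmap and rely on fcntl locks, which GFS2 coordinates cluster-wide
    mmap_disable = yes
    lock_method = fcntl
    mail_fsync = always
    mail_nfs_index = no          # NFS-specific workarounds are not needed on GFS2
    mail_nfs_storage = no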
2012 Nov 27
6
CTDB / Samba / GFS2 - Performance - with Picture Link
Hello, maybe there is someone who can help and answer the question of why I get this network pattern on my CTDB clusters. I have two CTDB clusters, one physical and one in a VMware environment. When I transfer (copy) any files on a Samba share, I get network curves like these, with performance dips. I don't see the transfer stop, but why is that so? Can I change anything, or does anybody know
2011 Nov 25
0
Failed to start a "virtual machine " service on RHCS in CentOS 6
Hi all: I have two physical machines as KVM hosts (clusterA.RHCS and clusterB.RHCS), and an iSCSI target set up with GFS. All I want is an HA cluster that can migrate all the virtual machines on a node to the other one when the first node fails into some error status. So I created a cluster "cluster" using RHCS, added the two hosts into the cluster, created a fence device, and for every virtual
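For context, rgmanager has a dedicated <vm> resource for this, which supports live migration between hosts; a minimal cluster.conf sketch (the guest name and attribute values are illustrative only):

    <rm>
      <vm name="guest1" use_virsh="1" migrate="live"
          recovery="relocate" max_restarts="2" restart_expire_time="600"/>
    </rm>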
2014 Oct 29
2
CentOS 6.5 RHCS fence loops
Hi guys, I'm using CentOS 6.5 as a guest on RHEV, with RHCS, for a clustered web environment. The environment: web1.example.com web2.example.com When the cluster gains quorum, web1 is rebooted by web2; when web2 comes back up, web2 is rebooted by web1. Does anybody know how to solve this "fence loop"? master_wins="1" is not working properly, and neither is qdisk. Below is the cluster.conf, I
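A fence loop in a two-node cluster is usually broken by giving fenced some startup slack and by delaying one node's fence device so that a fence race always has a single winner; a hedged cluster.conf sketch (attribute values are illustrative, and the delay option must be supported by the fence agent in use):

    <cman two_node="1" expected_votes="1"/>
    <fence_daemon post_join_delay="60" post_fail_delay="0"/>
    <!-- on ONE node's fence method only: -->
    <device name="rhevm_fence" port="web1" delay="15"/>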
2010 Sep 30
10
using DRBD VBDs with Xen
Hi, Not totally new to Xen but still very green and running into some problems. Feel free to kick me over to the DRBD people if this is not relevant here. I'll provide more info upon request, but for now I'll be brief. Debian/Squeeze running 2.6.32-5-xen-amd64 (2.6.32-21), Xen hypervisor 4.0.1~rc6-1 and drbd-8.3.8. One domU configured, with disk and swap image: root =
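DRBD ships a Xen block script (block-drbd) that lets a domU reference the resource by name, so the primary/secondary switch is handled automatically when the guest starts; a hedged sketch of the relevant domU config lines (resource and device names are placeholders):

    # /etc/xen/mydomu.cfg
    disk = [ 'drbd:r0,xvda1,w',          # root filesystem on DRBD resource r0
             'drbd:r0-swap,xvda2,w' ]
    root = '/dev/xvda1 ro'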
2007 Jun 25
1
I/O errors in domU with LVM on DRBD
Hi, Sorry for the long-winded email; I'm looking for some answers to the following. I am setting up a Xen PV domU on top of an LVM-partitioned DRBD device. Everything was going just fine until I tried to test the filesystems in the domU. Here is my setup: Dom0 OS: CentOS release 5 (Final) Kernel: 2.6.18-8.1.4.el5.centos.plusxen Xen: xen-3.0.3-25.0.3.el5 DRBD:
2012 Nov 04
3
Problem with CLVM (really openais)
I'm desperately looking for more ideas on how to debug what's going on with our CLVM cluster. Background: 4-node "cluster" -- the machines are Dell blades with Dell M6220/M6348 switches. The sole purpose of the Cluster Suite tools is to use CLVM against an iSCSI storage array. The machines are running CentOS 5.8 with the Xen kernels. These blades host various VMs for a project. The iSCSI
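When CLVM misbehaves, the usual first checks are the LVM locking configuration and the health of the cman/openais layer underneath clvmd; a hedged sketch of those checks on CentOS 5:

    grep locking_type /etc/lvm/lvm.conf   # must be 3 (clustered) on every node
    service cman status; service clvmd status
    cman_tool status                      # quorum and member count
    group_tool ls                         # the clvmd lockspace should be listed
    vgs -o vg_name,vg_attr                # clustered VGs carry the 'c' attribute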
2010 Oct 05
0
WG: HOWTO samba4 centos5.5 named dnsupdate drbd simple failover
CentOS 5.5 / samba4 / named: here is a short guide to setting it up so that it works. I added TSIG for the bind master and bind slave. An update to samba4 alpha13 has been added (installing git on CentOS 5.5). If you follow this howto right now you will start with samba4 alpha13, so you do not need the update section. But you do need git for your installation, because the rsync approach is broken! First of all, do not install the bind
2011 Jun 02
3
Problems with descriptions.
Hi guys! I can't find an answer on Google, so my last hope is this mailing list. The story: I have two servers with the same arrays, connected by DRBD. I used OCFS2 as the filesystem, and I also used NFSv4 to access the OCFS2 volume. I have no idea why, but the allocated descriptors in /proc/sys/fs/file-nr keep increasing whenever the volume is accessed. So after some time the allocated descriptors go over
2011 Apr 22
0
GFS2 performance
Hi, I'm trying to get more performance out of my DRBD cluster with GFS2. It seems that our GFS2 setup is quite slow. When running the ping_pong test we get no more than 1000 locks/sec on the disk. ./ping_pong /mnt/backup/test.dat 4 879 locks/sec The cluster config has been updated with: <dlm plock_ownership="1" plock_rate_limit="0"/>
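Beyond the dlm settings shown, the numbers reported by ping_pong depend heavily on mount options and on how many nodes run the test at once; a hedged sketch (the options are the commonly recommended ones, not taken from the poster's setup):

    # noatime avoids an inode update for every access during the lock test
    mount -t gfs2 -o noatime,nodiratime /dev/drbd0 /mnt/backup

    # run it concurrently on both nodes with (number of nodes + 1) locks
    # to measure the real cluster-wide plock rate
    node1$ ./ping_pong /mnt/backup/test.dat 3
    node2$ ./ping_pong /mnt/backup/test.dat 3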
2014 Jul 05
1
samba4 + drbd + ctdb + failover
Hi, We've got DRBD going between 2 nodes :) At the moment there is unpartitioned space on each node, but (we think) they are syncing OK. It looks as though it has synced the whole partition (2 GB) from the primary node 1 to the other node: node 1 smb1:/home/steve # cat /proc/drbd version: 8.4.4 (api:1/proto:86-101) GIT-hash: 3c1f46cb19993f98b22fdf7e18958c21ad75176d build by SuSE Build Service 1:
2010 Jul 19
1
GFS performance issue
Two web servers, both virtualized with CentOS Xen servers as hosts (residing on two different physical servers). GFS is used to store home directories containing the web document roots. The shared block device used by GFS is an iSCSI target, with the iSCSI initiator residing on the Dom-0 and presented to the Dom-U web servers as drives. A second shared block device is provided for the quorum disk. If I hit the
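For the quorum-disk part, the second shared block device is normally initialised once with mkqdisk and then referenced from cluster.conf; a hedged sketch (device path, label, and heuristic are placeholders):

    mkqdisk -c /dev/xvdb -l webqdisk      # run once, from one node

    <!-- cluster.conf fragment -->
    <quorumd interval="2" tko="10" votes="1" label="webqdisk">
      <heuristic program="ping -c1 -w1 10.0.0.1" score="1" interval="2"/>
    </quorumd>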