similar to: Hard system restart when DRBD connection fails while in use


2008 Feb 08
1
Searching patch for 2.6.18-xen with drbd 8.0.10
Hi, I tried many times to get ocfs2 from 2.6.18-xen (Xen 3.1) running with drbd 8.0.10, but no luck. One developer from drbd told me that there is a bug in those kernels from SLES SP1 (fixed in SP2) and Debian Etch/Xen 3.1. So I need a patch, if there is one ... where can I get it? cu denny -- Stop the surveillance mania - stop the Schäuble catalogue: http://www.nopsis.de
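For context, OCFS2 on DRBD needs the resource in dual-primary mode; a minimal DRBD 8.0.x resource sketch under that assumption (host names, disks, and addresses are placeholders, not from the thread):

```conf
# /etc/drbd.conf fragment -- hypothetical names and addresses
resource r0 {
  protocol C;                           # synchronous replication, required for dual-primary
  net {
    allow-two-primaries;                # lets OCFS2 mount on both nodes at once
    after-sb-0pri discard-zero-changes;
    after-sb-1pri discard-secondary;    # crude split-brain policy; tune to taste
  }
  on node1 { device /dev/drbd0; disk /dev/sda5; address 10.0.0.1:7788; meta-disk internal; }
  on node2 { device /dev/drbd0; disk /dev/sda5; address 10.0.0.2:7788; meta-disk internal; }
}
```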
2011 Apr 01
1
Node Recovery locks I/O in two-node OCFS2 cluster (DRBD 8.3.8 / Ubuntu 10.10)
I am running a two-node web cluster on OCFS2 via DRBD Primary/Primary (v8.3.8) and Pacemaker. Everything seems to be working great, except during testing of hard-boot scenarios. Whenever I hard-boot one of the nodes, the other node is successfully fenced and marked "Outdated": * <resource minor="0" cs="WFConnection" ro1="Primary" ro2="Unknown"
2009 Sep 30
2
MySQL fails with tablespace on OCFS2
I have set up a cluster with ocfs2 on top of drbd in primary/primary mode. As long as the datadir is in /var/lib/mysql, everything works fine. But as soon as I put the datadir on the ocfs2 filesystem /mnt/data/mysql, mysql fails with: [Ubuntu] root at fs2:/etc# mysqld --safe-mode 090930 18:08:14 [Warning] Can't create test file /mnt/data/mysql/fs2.lower-test 090930 18:08:14 [Warning]
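On Ubuntu, the usual suspects for a datadir moved out of /var/lib/mysql are ownership and the AppArmor profile rather than OCFS2 itself; a hedged checklist (paths assume the /mnt/data/mysql datadir from the post):

```shell
# 1. The datadir must be owned by the mysql user
chown -R mysql:mysql /mnt/data/mysql

# 2. Ubuntu confines mysqld to /var/lib/mysql via AppArmor; whitelist the
#    new datadir in /etc/apparmor.d/usr.sbin.mysqld:
#      /mnt/data/mysql/ r,
#      /mnt/data/mysql/** rwk,
#    then reload the profiles:
/etc/init.d/apparmor reload
```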
2012 Dec 11
4
Configuring Xen + DRBD + Corosync + Pacemaker
Hi everyone, I need some help setting up my failover configuration. My goal is to have a redundant system using Xen + DRBD + Corosync + Pacemaker. On Xen I will have one virtual machine. When this computer's network goes down, I will do a live migration to the second computer. The first thing I will need is a crossover cable, won't I? Is it really necessary? Ok, I did it. eth0
2009 Jun 05
2
Dovecot + DRBD/GFS mailstore
Hi guys, I'm looking at the possibility of running a pair of servers with Dovecot LDA/imap/pop3 using internal drives with DRBD and GFS (or other clustered FS) for the mail storage and ext3 for the root drive. I'm currently using maildrop for delivery and Dovecot imap/pop3 with the stores over NFS. I'm looking for better performance but still keeping the HA element I have now with
2009 Jan 26
1
ocfs2 + drbd primary/primary "No space left on device"
Hello. I'm having issues using ocfs2 and drbd in dual-primary mode. After running some filesystem tests that create a lot of small files, I run really fast into "No space left on device". The non-failing node is able to write/read from the filesystem, and the failing node is still able to delete/read from the filesystem. Ubuntu custom kernel 2.6.27.2, o2cb_ctl version 1.3.9, drbd
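With many small files, the mkfs-time cluster size matters: each tiny file still consumes at least one whole cluster, so a large default cluster size burns space fast. A hypothetical re-format sketch (label and device are examples, and this destroys existing data):

```shell
# Smaller cluster size wastes less space on many small files.
# WARNING: reformats the device.
mkfs.ocfs2 -b 4K -C 4K -N 2 -L smallfiles /dev/drbd0
# -b block size, -C cluster size, -N max concurrent mounts (node slots)
```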
2009 Jun 24
3
Unexplained reboots in DRBD82 + OCFS2 setup
We're trying to setup a dual-primary DRBD environment, with a shared disk with either OCFS2 or GFS. The environment is a Centos 5.3 with DRBD82 (but also tried with DRBD83 from testing) . Setting up a single primary disk and running bonnie++ on it works. Setting up a dual-primary disk, only mounting it on one node (ext3) and running bonnie++ works When setting up ocfs2 on the /dev/drbd0
2007 Sep 04
3
Ocfs2 and debian
Hi. I'm pretty new to ocfs2 and clusters. I'm trying to get ocfs2 running over a drbd device. I know it's not the best solution, but for now I must deal with this. I set up drbd and it works perfectly. I set up ocfs and I'm not able to make it work. /etc/init.d/o2cb status: Module "configfs": Loaded Filesystem "configfs": Mounted Module
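For comparison, a minimal two-node /etc/ocfs2/cluster.conf of the shape o2cb expects (node names and IPs are placeholders; the stanza bodies are tab-indented and the file is whitespace-sensitive):

```conf
cluster:
	node_count = 2
	name = ocfs2

node:
	ip_port = 7777
	ip_address = 10.0.0.1
	number = 0
	name = node1
	cluster = ocfs2

node:
	ip_port = 7777
	ip_address = 10.0.0.2
	number = 1
	name = node2
	cluster = ocfs2
```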
2009 Jul 15
1
CentOS-5.3 + DRBD-8.2 + OCFS2-1.4
I've run into a problem mounting an OCFS2 filesystem on a DRBD device. I think it's the same one discussed at http://lists.linbit.com/pipermail/drbd-user/2007-April/006681.html When I try to mount the filesystem I get an ocfs2_hb_ctl I/O error: [root at node-6A ~]# mount -t ocfs2 /dev/drbd2 /cshare ocfs2_hb_ctl: I/O error on channel while starting heartbeat mount.ocfs2: Error when
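A heartbeat I/O error at mount time is commonly the DRBD device not being writable on that node; a diagnostic sketch under that assumption (the resource name r2 is hypothetical):

```shell
# OCFS2 heartbeat writes to the device, so DRBD must be Primary on this node
cat /proc/drbd              # look for ro:Primary/... ds:UpToDate/UpToDate
drbdadm primary r2          # promote this node if it is still Secondary
mount -t ocfs2 /dev/drbd2 /cshare
```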
2010 Jan 26
2
No space left on device in one node
Hi! We operate a 2-node cluster running OCFS2 on top of DRBD. It shows about 4.3 GB free space on the OCFS2 filesystem using df on both nodes, but one node can't even write 10 MB: df (output identical on both nodes) $ df -k /cluster Filesystem 1K-blocks Used Available Use% Mounted on /dev/drbd0 83883484 80071096 3812388 96% /cluster $ df -i /cluster
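Free blocks alone don't tell the whole story on OCFS2; a purely diagnostic sketch of what to check on the failing node:

```shell
df -k /cluster   # free blocks -- reportedly identical on both nodes here
df -i /cluster   # free inodes -- an exhausted inode count also yields ENOSPC
# OCFS2 also reserves a per-node "local alloc" window for new allocations;
# on a 96%-full, fragmented volume one node may fail to reserve a fresh
# window even while df still reports a few GB free.
```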
2011 May 10
3
DRBD, Xen, HVM and live migration
Hi, I want to combine all the above mentioned technologies. The Linbit pages warn not to use the drbd: VBD with HVM DomUs. This page however: http://publications.jbfavre.org/virtualisation/cluster-xen-corosync-pacemaker-drbd-ocfs2.en (thank you Jean), simply puts two DRBD devices in dual primary mode and starts Xen DomUs while pointing to the DRBD devices with phy: in the DomU config files.
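The two DomU disk styles being compared look roughly like this in a Xen config file (resource and device names are placeholders):

```conf
# /etc/xen/domu1.cfg fragment -- hypothetical names

# Style 1: point at the DRBD block device directly (what the linked HOWTO
# does; you manage dual-primary promotion yourself):
disk = [ 'phy:/dev/drbd0,xvda,w' ]

# Style 2: the drbd: VBD type, where Xen's block-drbd script promotes the
# resource on demand (the style the Linbit pages warn against for HVM DomUs):
# disk = [ 'drbd:r0,xvda,w' ]
```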
2010 Sep 30
10
using DRBD VBDs with Xen
Hi, Not totally new to Xen but still very green and meeting some problems. Feel free to kick me to the DRBD people if this is not relevant here. I'll be providing more info upon request, but for now I'll be brief. Debian/Squeeze running 2.6.32-5-xen-amd64 (2.6.32-21), Xen hypervisor 4.0.1~rc6-1, and drbd-8.3.8. One domU configured, with disk and swap image: root =
2011 Mar 03
1
OCFS2 1.4 + DRBD + iSCSI problem with DLM
An HTML attachment was scrubbed... URL: http://oss.oracle.com/pipermail/ocfs2-users/attachments/20110303/0fbefee6/attachment.html
2009 Nov 23
5
[OT] DRBD
Hello all, has someone worked with DRBD (http://www.drbd.org) for HA of mail storage? If so, does it have stability issues? Comments and experiences are appreciated :) Thanks, Rodolfo.
2011 Jan 19
8
Xen on two node DRBD cluster with Pacemaker
Hi all, could somebody point me to what is considered a sound way to offer Xen guests on a two-node DRBD cluster in combination with Pacemaker? I prefer block devices over images for the DomUs. I understand that for live migration DRBD 8.3 is needed, but I'm not sure what kind of resource agents/technologies are advised (LVM, cLVM, ...) and what kind of DRBD config
2007 Feb 21
1
Performance Problems while reading
Hi all, We are using a 2-node cluster with drbd 8 (primary/primary) and ocfs2. Reading a file on one node while it is being written on the other node is very slow. Reading a file on a node while it is being written on the same node is fast. In the first case the node that wants to read the file has to ask the other to downgrade the lock level. In my opinion this is a bottleneck if the files are
2008 Sep 24
1
help needed with ocfs2 on centos5.2
Hi, I was interested in a simple configuration with 2 machines sharing a drbd drive in primary-primary mode. I am done with drbd, but I am a newbie to ocfs, so I wanted some help. I have 2 machines with centos5.2, kernel 2.6.18-8.el5. I downloaded the following packages: ocfs2-2.6.18-92.el5-1.4.1-1.el5.x86_64.rpm ocfs2-tools-1.4.1-1.el5.x86_64.rpm ocfs2console-1.4.1-1.el5.x86_64.rpm (download
2011 Sep 11
1
[XCP] primary/primary DRBD 8.4.0-1 LVM-based shared SR (XCP 1.1) performance tuning
Hi all, we have followed the very good HOWTO by http://wherethebitsroam.com/blogs/jeffw/drbd-xcp-05 and set up DRBD on XCP 1.1 in primary/primary mode. It works fine, but I am wondering how to squeeze more performance out of the system (we currently use a crossover GB Ethernet connection). When writing a 1 GB file on a guest I get write performance of about 5MB/s (idle). We have disabled all
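5 MB/s over an idle GbE crossover leaves plenty of headroom; a few DRBD 8.4 tunables worth experimenting with (the values are generic starting points, not recommendations from the thread):

```conf
# /etc/drbd.d/r0.res fragment -- hypothetical resource name
resource r0 {
  net {
    max-buffers   8000;    # more in-flight requests
    sndbuf-size   512k;    # larger TCP send buffer
  }
  disk {
    al-extents    3389;    # bigger activity log, fewer metadata writes
  }
}
```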
2009 Jul 22
3
DRBD very slow....
Hello all, we have a new setup with Xen on CentOS 5.3. I run drbd from LVM volumes to mirror data between the two servers. Both servers are 1U NEC rack mounts with 8 GB RAM and 2x mirrored 1 TB Seagate SATAs. One is a dual-core Xeon, the other a quad-core Xeon. I have a gigabit crossover link between the two with an MTU of 9000 on each end. I currently have 6 drbds mirroring across that