Displaying 20 results from an estimated 300 matches similar to: "Remus DRBD frozen"
2013 Mar 28
1
Xen Remus DRBD dual primary frozen
Dear all,
I sent this problem earlier, but maybe without enough detail, so here I try to
describe it more fully. I hope somebody can help me point out the problem.
First of all, I use Ubuntu 12.04 x64 for both domain0 and domainU, with
modifications to run under the Xen hypervisor and work with Remus.
I followed and configured Remus with these notes
2013 Apr 05
0
DRBD + Remus High IO load frozen
Dear all,
I have installed DRBD 8.3.11, compiled from sources. However, the backend
block device freezes when there is high IO load. I use Remus to provide high
availability, and checkpointing is controlled by Remus every 400ms.
If I check iostat, I see the idle CPU drop sharply at every checkpoint, and
when it reaches 0% idle the local backing device freezes and damages the
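A rough sketch of how the pattern above could be observed: sample CPU and
device statistics once per second while Remus checkpointing is running
(iostat comes from the sysstat package, vmstat from procps).
# extended device statistics plus CPU utilisation, sampled every second
iostat -x 1
# or a CPU/run-queue only view
vmstat 1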
2011 Jul 28
0
remus / switches
hi all,
finally coming to the end: Remus. Everything works, but I still have some
questions.
1) I protect one guest (remus -i 100 server01 10.41.10.42). How can I protect
three guests? Do I just execute three commands, like the following (see also
the sketch after this message)?
remus -i 100 server01 10.41.10.42
remus -i 100 server02 10.41.10.42
remus -i 100 server03 10.41.10.42
2) How can I see whether Remus is working? Are there any docs for the
command-line switches?
3) does
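Regarding question 1: remus protects a single domain per invocation, so one
remus process per guest is needed. A minimal sketch, reusing the guest names
and backup address from the commands above (interval in milliseconds):
# start one background remus instance per protected domain
for dom in server01 server02 server03; do
    remus -i 100 "$dom" 10.41.10.42 &
done
Each instance then checkpoints its own domain to the backup host
independently.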
2013 Feb 02
3
Running XEN 4.1 64 bit on Ubuntu desktop without VTx
Dear all,
I am quite new to this virtualization area. I want to do some experiments
with live migration using Xen. However, I have a problem because my server
doesn't support VT-x. I am using Ubuntu desktop 12.04 64-bit with Xen 4.1
amd64, but when I reboot the machine it won't start. Since the Xen website
says it doesn't matter, paravirtualization works without VT-x support, I don't know
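Paravirtualized (PV) guests indeed do not require VT-x/AMD-V; only HVM guests
do. A minimal xm-style guest configuration, with all names, paths and sizes
as placeholder assumptions, might look like:
# write a minimal PV guest config and start it with the xm toolstack
cat > /etc/xen/pv-test.cfg <<'EOF'
name       = "pv-test"
memory     = 512
vcpus      = 1
bootloader = "pygrub"      # boots the PV kernel found inside the guest image
disk       = ["file:/var/lib/xen/images/pv-test.img,xvda,w"]
vif        = ["bridge=xenbr0"]
EOF
xm create /etc/xen/pv-test.cfg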
2013 Mar 04
1
Error: /usr/lib/xen/bin/xc_save 22 6 0 0 0 failed
Hi All,
Does anybody get this problem when trying to migrate a DomU:
Error: /usr/lib/xen/bin/xc_save 22 6 0 0 0 failed
I use Xen 4.2.2-pre compiled from sources, and my machine is Ubuntu
12.04 Linux with kernel 3.2.0-29-generic.
It was working perfectly; even Remus migration with DRBD support was running
successfully before. However, today, after I killed the remus process with
the command
pkill -USR1 remus
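As a hedged first step after an xc_save failure under the xm/xend toolstack,
the hypervisor and xend logs usually contain the underlying error (the log
location below assumes a default install):
# hypervisor messages from the failed save/migration attempt
xm dmesg | tail -n 50
# xend's own log, where xc_save errors are recorded
tail -n 100 /var/log/xen/xend.log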
2013 Apr 23
1
Xen with openvswitch integration
Dear all,
Again, I have a question here, and I hope someone has experience with this.
I have Xen 4.2.1 installed from sources and working with the xm toolstack. I
will use this Xen to run Remus high availability with DRBD.
Does anyone know about Open vSwitch integration with the xm toolstack? The
Xen wiki only confirms that XCP natively supports Open vSwitch
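Only as a sketch of the Open vSwitch side (bridge and vif names are
assumptions, and this does not by itself integrate with the xm hotplug
scripts): a guest interface can be attached to an OVS bridge by hand with the
standard ovs-vsctl commands.
# create an Open vSwitch bridge and attach a guest vif to it manually
ovs-vsctl add-br xenbr0
ovs-vsctl add-port xenbr0 vif1.0
ovs-vsctl show    # verify the bridge and port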
2006 Jun 12
1
kernel BUG at /usr/src/ocfs2-1.2.1/fs/ocfs2/file.c:494!
Hi,
First of all, I'm new to ocfs2 and drbd.
I set up two identical servers (Athlon64, 1GB RAM, GB-Ethernet) with Debian Etch, compiled my own kernel (2.6.16.20),
then compiled the drbd-modules and ocfs (modules and tools) from source.
The process of getting everything up and running was very easy.
I have one big 140GB partition that is synced with drbd (in c-mode) and has an ocfs2
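A rough DRBD 8-style sketch of a protocol C ("c-mode") resource like the one
described, with hostnames, disks and addresses as placeholders; syntax
differs between DRBD versions, and allow-two-primaries is only needed if the
OCFS2 volume is mounted on both nodes at once.
# write a minimal two-node resource definition
cat > /etc/drbd.conf <<'EOF'
resource r0 {
    protocol C;                      # synchronous replication
    net { allow-two-primaries; }     # needed for mounting OCFS2 on both nodes
    on nodeA {
        device    /dev/drbd0;
        disk      /dev/sda3;
        address   192.168.0.1:7789;
        meta-disk internal;
    }
    on nodeB {
        device    /dev/drbd0;
        disk      /dev/sda3;
        address   192.168.0.2:7789;
        meta-disk internal;
    }
}
EOF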
2007 Jul 12
0
No subject
Olle ?) aiming to unify logging, eventing, monitoring (AMI, SNMP, ...)
APIs.
I think that thread occurred when it was decided to include a version number
in the Manager interface.
I agree this is an interesting idea ...
The use case that made me ask this is the following:
I've got a running system which works OK up to the moment it stops dialing
out on ISDN-BRI spans (incoming calls are OK). When
2014 Sep 02
1
samba_spnupdate invoked oom-killer
Hello All,
Has anyone seen this before?
Did samba_spnupdate really cause the crash?
[ 49.753564] block drbd1: drbd_sync_handshake:
[ 49.753571] block drbd1: self
BB16E125AF60AEDC:0000000000000000:30D97136FB1DA7A3:30D87136FB1DA7A3 bits:0
flags:0
[ 49.753576] block drbd1: peer
6365B5AFF049F16D:BB16E125AF60AEDD:30D97136FB1DA7A2:30D87136FB1DA7A3 bits:1
flags:0
[ 49.753580] block drbd1:
2013 Feb 18
2
Kernel Error with Debian Squeeze, DRBD, 3.2.0-0.bpo.4-amd64 and Xen4.0
Hello List,
I am running Debian Squeeze, and I installed DRBD, the 3.2.0-0.bpo.4-amd64
kernel, and Xen 4.0 from backports.
Sometimes I get this ugly kernel message:
[257318.441757] BUG: unable to handle kernel paging request at ffff880025f19000
Log:
[256820.643918] xen-blkback:ring-ref 772, event-channel 16, protocol 1
(x86_64-abi)
[256830.802492] vif86.0: no IPv6 routers present
[256835.674481]
2007 Jun 29
0
centos drbd - mounts/ replication
Hi,
I would normally post this to the DRBD list, but it is so low-traffic and
low-volume (plus Austria might be asleep right now) that I figured I'd ask
someone here in case they have gotten DRBD working on CentOS. Right now my
system says I'm only the 971st person to even install it... It's been out
for years, so likely this just counts version 8. But you'll only see a
couple of posts
2011 Mar 03
1
OCFS2 1.4 + DRBD + iSCSI problem with DLM
2013 Sep 08
9
Re: IBM HS20 Xen 4.1 and 4.2 Critical Interrupt - Front panel NMI crash
Hello,
I have the same error: the server auto-reboots during every boot with the
Xen kernel. The HS20 with Debian Wheezy and Xen hangs, and the AMM management
shows the same errors described in previous mails. With Debian Wheezy and a
non-Xen kernel it boots correctly, so it seems the problem is with the Xen
kernel.
The same HS20 server with Debian Lenny + Xen 3.2 or Debian Squeeze + Xen 4.0
works perfectly.
Upgraded to Debian
2014 Aug 22
2
ocfs2 problem on ctdb cluster
Ubuntu 14.04, drbd
Hi
On a drbd Primary node, when attempting to mount our cluster partition:
sudo mount -t ocfs2 /dev/drbd1 /cluster
we get:
mount.ocfs2: Unable to access cluster service while trying to join the
group
We then call:
sudo dpkg-reconfigure ocfs2-tools
Setting cluster stack "o2cb": OK
Starting O2CB cluster ocfs2: OK
And all is well:
Aug 22 13:48:23 uc1 kernel: [
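A possible pre-mount check on the DRBD primary, sketched under the assumption
that the default o2cb init script from ocfs2-tools is in use: make sure the
o2cb cluster stack is online before mount.ocfs2 tries to join the group.
# check whether the o2cb stack and the configured cluster are online
sudo service o2cb status
# bring the configured cluster online if it is not
sudo service o2cb online
# then retry the mount
sudo mount -t ocfs2 /dev/drbd1 /cluster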
2011 Mar 23
3
EXT4 Filesystem Mount Failed (bad geometry: block count)
Dear All,
We are currently using RHEL 6 Linux with kernel version 2.6.32-71.el6.i686
and DRBD version 8.3.10.
DRBD is built from source and configured as a 2-node test with a simplex
setup:
Server 1: IP address 192.168.13.131, hostname "primary"
Server 2: IP address 192.168.13.132, hostname "secondary"
Finally, we found that mounting drbd0 and drbd1 fails.
Found some error messages
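One common cause of ext4's "bad geometry: block count" error on a DRBD device
is a filesystem whose recorded block count is larger than the device, for
example because it was created on the backing disk, which is slightly bigger
than /dev/drbdX once internal metadata is subtracted. A hedged way to compare
the two sizes (device names are placeholders):
# size of the DRBD device in bytes
blockdev --getsize64 /dev/drbd0
# block count and block size recorded in the ext4 superblock
dumpe2fs -h /dev/drbd0 2>/dev/null | grep -E 'Block count|Block size'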
2013 Mar 06
4
Task blocked for more than 120 seconds.
Hi all,
Today I hit the problem below; my domU became unresponsive and I had to
restart the PC to make it run properly again.
[ 240.172092] INFO: task kworker/u:0:5 blocked for more than 120 seconds.
[ 240.172110] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables
this message.
[ 240.172376] INFO: task jbd2/xvda1-8:153 blocked for more than 120
seconds.
[ 240.172388]
2012 Nov 29
0
Windows NLB crashing VM's
Hi All,
We have a somewhat serious issue around NLB on Windows 2012 and Xen.
First, let me describe our environment, and then I'll let you know what's
wrong.
2 x Debian Squeeze boxes running the latest provided amd64 Xen kernel, with
about 100GB of RAM.
These boxes are connected via InfiniBand, and DRBD runs over this (IPoIB).
Each VPS runs on mirrored DRBD devices.
Each
2011 Jan 12
1
Problems with fsck
Hi List,
I'd like to share with you what happened yesterday.
Kernel 2.6.36.1
ocfs2-tools 1.6.3 (latest).
I had an old OCFS2 partition created with a 2.6.32 kernel and ocfs2
tools 1.4.5.
I unmounted all partitions on all nodes in order to enable discontig-bg.
I then used tunefs to add discontig-bg, inline-data and indexed-dirs.
During indexed-dirs tunefs segfaulted and since then, fsck
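For reference, with ocfs2-tools 1.6 the feature change attempted above is
expressed roughly as below, and a forced check is the usual way to inspect
the volume afterwards. The device name is a placeholder, and re-running
tunefs on a half-converted volume may not be safe, so treat this purely as a
syntax sketch.
# enable the three features in one pass (syntax sketch only)
tunefs.ocfs2 --fs-features=discontig-bg,inline-data,indexed-dirs /dev/sdb1
# force a full filesystem check afterwards
fsck.ocfs2 -f /dev/sdb1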
2007 Jun 04
1
Problems with Xen HVM image on DRBD filesystem
Hi,
I have been trying to create a Xen HVM CentOS 4.4 image on CentOS 5
on a DRBD-backed device, installing from DVD. However, I get an IO error
on the filesystem during the CentOS installation process, which then
aborts.
The DRBD device seems to be set up correctly and is functioning as
a block device: I can run mkfs -t ext3 /dev/drbd1 and read and write
without error.
If I replace the disk
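Assuming DRBD 8.x, a few hypothetical sanity checks on the resource before
retrying the install (the resource name r0 is a placeholder):
# overall replication status
cat /proc/drbd
# connection state, e.g. Connected
drbdadm cstate r0
# disk state on both sides, e.g. UpToDate/UpToDate
drbdadm dstate r0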
2008 Mar 05
0
ocfs2 and another node is heartbeating in our slot
Hello,
I have one cluster with DRBD 8 + OCFS2.
If I mount the OCFS2 partition on node1 it works, but when I mount the
partition on node2 I see this in /var/log/messages:
-Mar 5 18:10:04 suse4 kernel: (2857,0):o2hb_do_disk_heartbeat:665 ERROR:
Device "drbd1": another node is heartbeating in our slot!
-Mar 5 18:10:04 suse4 kernel: WARNING: at include/asm/dma-mapping.h:44
dma_map_sg()
-Mar 5