Displaying 20 results from an estimated 600 matches similar to: "Xen Remus DRBD dual primary frozen"
2013 Apr 05
0
DRBD + Remus High IO load frozen
Dear all,
I have installed DRBD 8.3.11 compiled from source, but the backing block
device freezes whenever there is high IO load. I use Remus for high
availability, with checkpointing controlled by Remus every 400ms.
Watching iostat, the idle CPU drops sharply at every checkpoint, and once
it reaches 0% idle the local backing device freezes and damages the
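One way to see this per-checkpoint behaviour is to watch iostat alongside the
DRBD device; a minimal sketch, assuming the sysstat package is installed and
the backing device is exposed as /dev/drbd1:

  # extended per-device statistics plus CPU usage, refreshed every second
  iostat -x /dev/drbd1 1
  # %idle should dip at every Remus checkpoint; %util near 100 on drbd1
  # while %idle sits at 0 matches the freeze described above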
2013 Mar 19
0
Remus DRBD frozen
Hi all,
I am not sure whether my question is related to Xen at all, but I am
trying to use DRBD for disk replication while running Remus. Sometimes
when I run Remus my Dom-U freezes. The log file suggests it is caused by
DRBD freezing:
[ 875.616068] block drbd1: Local backing block device frozen?
[ 887.648072] block drbd1: Local backing block device
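When this happens it can help to capture the DRBD state at the moment of the
freeze; a small sketch, assuming DRBD 8.3 and device drbd1:

  cat /proc/drbd             # connection state (cs:), roles (ro:) and disk state (ds:)
  drbdadm dstate all         # disk state per resource
  dmesg | grep drbd1 | tail  # most recent kernel messages for the device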
2013 Mar 04
1
Error: /usr/lib/xen/bin/xc_save 22 6 0 0 0 failed
Hi All,
Does anybody else get this problem when trying to migrate a DomU:
Error: /usr/lib/xen/bin/xc_save 22 6 0 0 0 failed
I use Xen 4.2.2-pre compiled from source, and my machine runs Ubuntu
12.04 with kernel 3.2.0-29-generic.
It was working perfectly; even Remus with DRBD-backed migration ran
successfully before. However, today, after I killed the remus process with
the command
pkill -USR1 remus
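It may be worth checking whether the interrupted Remus run left the domain
paused or half-suspended before retrying; a rough sketch for the xm toolstack
(the domain name domU and the destination host are placeholders):

  xm list                            # check the State column (p = paused, s = shutdown)
  xm unpause domU                    # only if the domain was left paused
  xm migrate --live domU desthost    # then retry the live migration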
2013 Apr 23
1
Xen with openvswitch integration
Dear all,
Again, I have a question and I hope someone has experience with this. I
have Xen 4.2.1 installed from source and working with the xm toolstack. I
will use this Xen to run Remus high availability with DRBD.
Does anyone know about Open vSwitch integration with the xm toolstack? The
Xen wiki only confirms that XCP natively supports Open vSwitch
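As far as I know the stock xm hotplug scripts only handle Linux bridges, so
with Open vSwitch the guest vif usually has to be added to the OVS bridge
either by hand or from a custom vif script; a minimal manual sketch (the
bridge name ovsbr0 and vif name vif5.0 are just examples):

  ovs-vsctl add-br ovsbr0                        # create the OVS bridge in dom0
  # after the guest (domid 5 here) has started, attach its backend vif:
  ovs-vsctl --may-exist add-port ovsbr0 vif5.0
  ip link set vif5.0 up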
2013 Feb 18
2
Kernel Error with Debian Squeeze, DRBD, 3.2.0-0.bpo.4-amd64 and Xen4.0
Hello List,
I am running Debian Squeeze with DRBD, kernel 3.2.0-0.bpo.4-amd64 and
Xen 4.0 installed from backports.
Sometimes I get an ugly kernel message like this:
[257318.441757] BUG: unable to handle kernel paging request at ffff880025f19000
Log:
[256820.643918] xen-blkback: ring-ref 772, event-channel 16, protocol 1 (x86_64-abi)
[256830.802492] vif86.0: no IPv6 routers present
[256835.674481]
2014 Aug 22
2
ocfs2 problem on ctdb cluster
Ubuntu 14.04, drbd
Hi
On a DRBD primary node, when attempting to mount our cluster partition:
sudo mount -t ocfs2 /dev/drbd1 /cluster
we get:
mount.ocfs2: Unable to access cluster service while trying to join the
group
We then call:
sudo dpkg-reconfigure ocfs2-tools
Setting cluster stack "o2cb": OK
Starting O2CB cluster ocfs2: OK
And all is well:
Aug 22 13:48:23 uc1 kernel: [
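The 'Unable to access cluster service' error usually just means the O2CB
cluster was not online yet; before mounting, it can be checked and brought up
by hand, roughly like this (cluster name ocfs2 as in the output above):

  sudo /etc/init.d/o2cb status          # is the cluster registered and the heartbeat active?
  sudo /etc/init.d/o2cb online ocfs2    # bring the named cluster online if it is not
  sudo mount -t ocfs2 /dev/drbd1 /cluster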
2011 Mar 03
1
OCFS2 1.4 + DRBD + iSCSI problem with DLM
2011 Mar 23
3
EXT4 Filesystem Mount Failed (bad geometry: block count)
Dear All,
We are currently using RHEL6 with kernel 2.6.32-71.el6.i686 and DRBD 8.3.10.
DRBD is built from source and configured as a two-node test with a simplex setup:
Server 1: 192.168.13.131, hostname primary
Server 2: 192.168.13.132, hostname secondary
We finally found that mounting drbd0 and drbd1 fails.
Found some error messages
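A common cause of "bad geometry: block count" on DRBD is a filesystem that was
created on the backing partition instead of on /dev/drbdX, so the filesystem is
larger than the DRBD device (internal metadata takes space at the end of the
disk). A quick way to compare the two sizes, as a sketch:

  dumpe2fs -h /dev/drbd1 | grep -Ei 'block (count|size)'   # filesystem size in blocks
  blockdev --getsize64 /dev/drbd1                          # actual device size in bytes
  # if block count x block size exceeds the device size, the filesystem was
  # made on the lower-level device and will not mount through /dev/drbd1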
2011 Jan 12
1
Problems with fsck
Hi List,
I'd like to share with you what happened yesterday.
Kernel 2.6.36.1, ocfs2-tools 1.6.3 (latest).
I had an old OCFS2 partition created with a 2.6.32 kernel and ocfs2-tools
1.4.5.
I unmounted all partitions on all nodes in order to enable discontig-bg.
I then used tunefs to add discontig-bg, inline-data and indexed-dirs.
During indexed-dirs tunefs segfaulted and since then, fsck
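For reference, the usual way to run a full, non-interactive check on an OCFS2
volume after a tunefs crash like this is fsck.ocfs2 with the volume unmounted
on every node; a small sketch (the device name is a placeholder):

  fsck.ocfs2 -fy /dev/drbd0    # -f forces a full check, -y answers yes to all repairs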
2011 Apr 25
3
where is xen_blkback xen_netback.ko source code?
Where is the source code for xen_blkback.ko and xen_netback.ko?
thanks
2007 Jun 04
1
Problems with Xen HVM image on DRBD filesystem
Hi,
I have been trying to create a Xen HVM CentOS 4.4 image on CentOS 5
with a DRBD-backed disk, installing from DVD. However, I get an IO error
on the filesystem during the CentOS installation process, which then
aborts.
The DRBD device seems to be set up correctly and functions as a block
device: I can mkfs -t ext3 /dev/drbd1 and then read and write
without error.
If I replace the disk
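If the LINBIT helper script block-drbd is installed under /etc/xen/scripts (an
assumption, it does not ship with Xen itself), the guest config can reference
the DRBD resource directly, which also lets Xen promote the resource to primary
on the right node; a hedged sketch with a made-up resource name:

  disk = [ 'drbd:centos44,hda,w' ]    # uses /etc/xen/scripts/block-drbd; hda for an HVM guest
  # without that script, the plain phy: form also works as long as the
  # resource is already primary on this node:
  # disk = [ 'phy:/dev/drbd1,hda,w' ]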
2011 Jun 30
1
Xen with DRBD, mount DRBD device / Filesystem type?
Hi everyone,
I'm using Citrix XenServer with DRBD, but something went really wrong, so I
need a fresh install. My only question is: how can I mount the DRBD
partition? It is /dev/drbd1, but when I try to mount it I need to provide
the filesystem type. Does anyone know what I have to do? I only need to copy
some data from it; it doesn't matter if it gets destroyed afterwards. I tried GFS as
the filesystem
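Before guessing at the filesystem type it is worth asking the device itself
what is on it; a small read-only sketch, so nothing on /dev/drbd1 is modified:

  blkid /dev/drbd1     # prints TYPE=... if a known filesystem signature is found
  file -s /dev/drbd1   # also recognises LVM physical volumes and partition tables
  pvscan               # XenServer storage repositories are often LVM-based, so the
                       # data may live inside logical volumes rather than a plain filesystem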
2007 Sep 04
3
Ocfs2 and debian
Hi.
I'm pretty new to OCFS2 and clusters.
I'm trying to get OCFS2 running on top of a DRBD device.
I know it's not the best solution, but for now I must deal with it.
I set up DRBD and it works perfectly.
I set up OCFS2 and I'm not able to get it to work.
/etc/init.d/o2cb status:
Module "configfs": Loaded
Filesystem "configfs": Mounted
Module
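On Debian the piece that is most often missing at this point is
/etc/ocfs2/cluster.conf, which must be identical on both nodes and match the
cluster name that o2cb brings online; a minimal sketch with made-up node names
and addresses (the real file is whitespace-sensitive, the indented lines are
normally written with a tab):

cluster:
        node_count = 2
        name = ocfs2

node:
        ip_port = 7777
        ip_address = 192.168.0.1
        number = 0
        name = nodeA
        cluster = ocfs2

node:
        ip_port = 7777
        ip_address = 192.168.0.2
        number = 1
        name = nodeB
        cluster = ocfs2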
2013 Feb 02
3
Running XEN 4.1 64 bit on Ubuntu desktop without VTx
Dear all,
I am quite new to virtualization. I want to do some experiments with live
migration using Xen, but I have a problem because my server does not
support VT-x. I am using Ubuntu desktop 12.04 64-bit with Xen 4.1 amd64,
but when I reboot the machine it won't start, even though according to the
Xen website paravirtualization should work without VT-x support. I don't know
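Without VT-x only paravirtualized (PV) guests can run, so the guest has to be
created from a PV config rather than an HVM one; a minimal sketch for the xm
toolstack (names, paths and the disk device are placeholders):

  # /etc/xen/pvguest.cfg
  name       = "pvguest"
  memory     = 512
  vcpus      = 1
  bootloader = "pygrub"      # may need the full path to pygrub on some installs
  disk       = [ 'phy:/dev/vg0/pvguest,xvda,w' ]
  vif        = [ 'bridge=xenbr0' ]

The guest is then started with xm create /etc/xen/pvguest.cfg (add -c to attach
the console immediately).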
2006 Oct 12
5
AoE LVM2 DRBD Xen Setup
Hello everybody,
I am in the process of setting up a really cool Xen server farm. The
backend storage will be an LVM-managed AoE device on top of DRBD.
The goal is to make the backend storage completely redundant.
Picture:
 |RAID|            |RAID|
|DRBD1| <------>  |DRBD2|
       \          /
        |  VMAC  |
        |  AoE   |
    |global LVM VG|
      /     |     \
|Dom0a|  |Dom0b|  |Dom0c|
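For the AoE layer in a picture like this, the DRBD device is typically exported
from the active storage node with vblade and discovered on the Dom0s with
aoetools; a rough sketch (shelf/slot numbers and interface names are arbitrary):

  # on the active DRBD node: export /dev/drbd0 as AoE target e0.0 via eth0
  vbladed 0 0 eth0 /dev/drbd0
  # on each Dom0 (aoetools installed):
  aoe-discover
  aoe-stat        # the device then appears as /dev/etherd/e0.0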
2006 Jun 24
2
DRBD Problem
Hi all,
I've been wrestling with a problem with DRBD and CentOS. I have
successfully created one DRBD resource, but when I try to create a second
one, I get an error on one of the nodes:
Lower device is already mounted.
Command 'drbdsetup /dev/drbd1 disk /dev/hdd1 internal -1' terminated with
exit code 20
The partition is not mounted from fstab etc. and was newly created with
parted after
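Exit code 20 ('Lower device is already mounted') means drbdsetup found the
partition held open by something, even if it is not in fstab; a few quick
checks, as a sketch:

  grep hdd1 /proc/mounts        # is it mounted anywhere after all?
  cat /proc/swaps               # is it in use as swap?
  pvs 2>/dev/null | grep hdd1   # has LVM claimed it as a physical volume?
  lsof /dev/hdd1                # is some process still holding it open?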
2013 Mar 06
4
Task blocked for more than 120 seconds.
Hi all,
Today I hit the problem below; my domU became unresponsive and I had to
restart the PC to make it run properly again.
[ 240.172092] INFO: task kworker/u:0:5 blocked for more than 120 seconds.
[ 240.172110] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables
this message.
[ 240.172376] INFO: task jbd2/xvda1-8:153 blocked for more than 120
seconds.
[ 240.172388]
2006 Jun 12
1
kernel BUG at /usr/src/ocfs2-1.2.1/fs/ocfs2/file.c:494!
Hi,
First of all, I'm new to ocfs2 and drbd.
I set up two identical servers (Athlon64, 1GB RAM, GB-Ethernet) with Debian Etch, compiled my own kernel (2.6.16.20),
then compiled the drbd-modules and ocfs (modules and tools) from source.
The process of getting everything up and running was very easy.
I have one big 140GB partition that is synced with drbd (in c-mode) and has an ocfs2
2014 Sep 02
1
samba_spnupdate invoked oom-killer
Hello All,
Has anyone seen this before?
Did samba_spnupdate really cause the crash?
[ 49.753564] block drbd1: drbd_sync_handshake:
[ 49.753571] block drbd1: self
BB16E125AF60AEDC:0000000000000000:30D97136FB1DA7A3:30D87136FB1DA7A3 bits:0
flags:0
[ 49.753576] block drbd1: peer
6365B5AFF049F16D:BB16E125AF60AEDD:30D97136FB1DA7A2:30D87136FB1DA7A3 bits:1
flags:0
[ 49.753580] block drbd1:
2011 Feb 27
1
Recover botched drbd gfs2 setup
Hi.
The short story: a rush job, I had never done clustered file systems
before, and the VLAN didn't support multicast. So I ended up with DRBD
working fine between the two servers but cman/GFS2 not working, and what
was meant to be a DRBD primary/primary cluster ran as primary/secondary,
with GFS mounted on only one server, until the VLAN could be fixed. I got
the single server
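Since the root cause was a VLAN that did not pass multicast, that can be
verified directly before re-enabling cman/GFS2; omping (open multicast ping) is
commonly used for this, run on both nodes at the same time; a sketch with
placeholder hostnames:

  omping -c 20 nodeA nodeB   # run simultaneously on nodeA and nodeB; the
                             # multicast lines must show replies on both sides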