Displaying 19 results from an estimated 19 matches similar to: "mount.ocfs2: Invalid argument while mounting /dev/mapper/xenconfig_part1 on /etc/xen/vm/. Check 'dmesg' for more information on this error."
2010 Aug 20
1
ocfs2 hang writing until reboot the cluster-dlm: set_fs_notified: set_fs_notified no nodeid 1812048064#012
Hello,
I hope this is the right mailing list.
I have a Pacemaker cluster with a cloned OCFS2 resource, using
ocfs2-tools-1.4.1-25.6.x86_64
ocfs2-tools-o2cb-1.4.1-25.6.x86_64
on Opensuse 11.2
After a network problem on my switch, I see on one of the 4 nodes of
my cluster the following messages:
Aug 18 13:12:28 nodo1 openais[8462]: [TOTEM] The token was lost in the
OPERATIONAL state.
Aug 18 13:12:28
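A token-loss message like the one quoted above is usually followed by a burst of related [TOTEM] state transitions. A minimal shell sketch for pulling just those membership events out of a syslog excerpt; only the first sample line is taken from the post, the recovery lines are invented to illustrate a typical loss/recovery sequence:

```shell
# Filter corosync/openais membership events out of a syslog excerpt.
# Only the first sample line is quoted from the post above; the other
# two are invented to show a typical loss/recovery sequence.
log='Aug 18 13:12:28 nodo1 openais[8462]: [TOTEM] The token was lost in the OPERATIONAL state.
Aug 18 13:12:29 nodo1 openais[8462]: [TOTEM] entering GATHER state from 2.
Aug 18 13:12:33 nodo1 openais[8462]: [TOTEM] entering OPERATIONAL state.'

# Keep only the [TOTEM] lines and strip everything before the event text.
printf '%s\n' "$log" | grep -F '[TOTEM]' | sed 's/.*\[TOTEM\] //'
```

Pointing the same pipeline at /var/log/messages on each node makes it easy to line up when each node lost and regained membership.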
2008 Nov 18
1
[Patch 3/3] ocfs2-tools: Fix compilation of Pacemaker glue for ocfs2_controld
Fix compilation of Pacemaker glue for ocfs2_controld when the
underlying Pacemaker installation supports both the Heartbeat and
OpenAIS stack
Signed-off-by: Andrew Beekhof <abeekhof at suse.de>
--- upstream/ocfs2_controld/pacemaker.c 2008-09-11 16:51:11.000000000
+0200
+++ dev/ocfs2_controld/pacemaker.c 2008-10-23 13:14:56.000000000 +0200
@@ -20,8 +20,16 @@
#include
2012 Aug 15
1
ocfs2_controld binary
I have been reading loads of threads on different mailing lists about ocfs2_controld; has anyone ever built the cluster stack (openAIS, pacemaker, corosync + OCFS2 1.4) from source and got the o2cb agent working with pacemaker?
Got this from messages:
/var/log/messages:Aug 14 15:05:20 ip-172-16-2-12 o2cb(resO2CB:0)[4239]: ERROR: Setup problem: couldn't find command:
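When the o2cb agent reports that it can't find a command, the usual cause is that the controld binary was never installed or is not in the agent's PATH. A hedged sketch of a preflight check; the helper name `check_daemon` is my own invention, and the daemon names in the usage comment are the pcmk/cman variants shipped by ocfs2-tools:

```shell
# Hypothetical preflight check: print the first of the given daemons
# found in PATH, or fail if none is installed.
check_daemon() {
    for d in "$@"; do
        if command -v "$d" >/dev/null 2>&1; then
            printf '%s\n' "$d"
            return 0
        fi
    done
    echo "error: none of '$*' found in PATH" >&2
    return 1
}

# Example: check_daemon ocfs2_controld.pcmk ocfs2_controld.cman
```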
2009 Apr 08
1
ocfs2_controld.cman
If I start ocfs2_controld.cman in parallel on a few nodes, only one of them
starts up, the others exit with one of these errors:
call_section_read at 370: Reading from section "daemon_protocol" on checkpoint "ocfs2:controld" (try 1)
call_section_read at 387: Checkpoint "ocfs2:controld" does not have a section named "daemon_protocol"
call_section_read at
2009 Jun 15
1
Is Pacemaker integration ready to go?
I have seen many references online to using OCFS2 with
Pacemaker, but the documentation I have been able to find is very sparse.
I have kernel 2.6.29, and the latest DLM and Pacemaker (using openais)
and OCFS2-Tools from GIT. (As of June 13).
I was able to build ocfs2_controld.pcmk ... (With some minor changes to
the makefile for my install)
I noticed the OCF version of o2cb is not
2006 Nov 16
2
Porting ZFS, trouble with nvpair
Hi. I thought I'd take a stab at the first steps of porting ZFS to Darwin. I realize there are rumors that Apple is already doing this, but my contact at Apple has yet to get back to me to verify this. In the meantime, I wanted to see how hard it would be. I started with libzfs, and promptly ran into issues with libnvpair.
It wants sys/nvpair.h, but I can't find that in the
2010 Apr 12
0
ocfs2/o2cb problem with openais/pacemaker
Hi!
I'm on Debian Lenny, trying to run OCFS2 on a dual-primary
DRBD device. The DRBD device is already set up as msDRBD0.
To get dlm_controld.pcmk I installed it from source (from
cluster-suite-3.0.10).
I then configured a resource "resDLM" with 2 clones:
primitive resDLM ocf:pacemaker:controld op monitor interval="120s"
clone cloneDLM resDLM meta
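For reference, a complete configuration of this kind in crm shell syntax typically pairs the DLM clone with an o2cb clone plus ordering and colocation constraints. The snippet below is a sketch; the o2cb primitive, the meta attributes, and the constraint names are assumptions, since the original post is truncated:

```
primitive resDLM ocf:pacemaker:controld op monitor interval="120s"
primitive resO2CB ocf:ocfs2:o2cb op monitor interval="120s"
clone cloneDLM resDLM meta globally-unique="false" interleave="true"
clone cloneO2CB resO2CB meta globally-unique="false" interleave="true"
colocation colO2CB inf: cloneO2CB cloneDLM
order ordDLMO2CB inf: cloneDLM cloneO2CB
```

The ordering matters: o2cb needs a running dlm_controld on the same node before it can bring up the O2CB stack, hence the colocation plus order pair.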
2013 Oct 26
2
[PATCH] 1. changes for vdiskadm on illumos based platform
2. update ZFS in libfsimage from illumos for pygrub
diff -r 7c12aaa128e3 -r c2e11847cac0 tools/libfsimage/Rules.mk
--- a/tools/libfsimage/Rules.mk Thu Oct 24 22:46:20 2013 +0100
+++ b/tools/libfsimage/Rules.mk Sat Oct 26 20:03:06 2013 +0400
@@ -2,11 +2,19 @@ include $(XEN_ROOT)/tools/Rules.mk
CFLAGS += -Wno-unknown-pragmas -I$(XEN_ROOT)/tools/libfsimage/common/
2009 Mar 04
1
Patch to Pacemaker hooks in ocfs2_controld
Hi Guys,
I overhauled and simplified the Pacemaker hooks recently. This patch:
- Reuses more code from the Pacemaker libraries
- Escalates fencing to the cluster manager instead of initiating it
directly
Attached patch is against master, or you can pull the original patch
which is against an older version used by SUSE:
2011 Nov 17
0
OCFS2 + CMAN/PCMK Kernel Error
Hello Everyone,
Coming across this once in a while. It's the test environment, so not
very important, and it does take a lot of abuse. Maybe it's of interest
to someone.
Starting cluster:
Checking if cluster has been disabled at boot... [ OK ]
Checking Network Manager... [ OK ]
Global setup... [ OK ]
Loading kernel modules... [ OK ]
Mounting configfs... [ OK ]
2009 Apr 15
1
hang with fsdlm
Using fsdlm/ocfs2_controld.cman, I've rerun the test I've been having problems
with on 2.6.30-rc1. After running for several minutes in the same directory
on three nodes, the test hangs, and I collect the following information:
bull-01
-------
3053 S< [ocfs2dc] ocfs2_downconvert_thread
3054 S< [dlm_astd] dlm_astd
3055 S< [dlm_scand]
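The per-process state shown above can be gathered with a simple filter over `ps` output. A sketch, reusing the quoted listing as sample input; the final `sshd` line is invented so the filter has something to reject:

```shell
# Sample input modeled on the listing above (pid, state, comm, thread
# function); the sshd line is invented for illustration. The awk filter
# is what you would point at live `ps -eo pid,stat,comm` output on a
# hung node to pick out the ocfs2/dlm kernel threads.
sample='3053 S< [ocfs2dc] ocfs2_downconvert_thread
3054 S< [dlm_astd] dlm_astd
3055 S< [dlm_scand] dlm_scand
4012 S  sshd sshd'
printf '%s\n' "$sample" | awk '$3 ~ /ocfs2|dlm/'

# For each suspect PID, /proc/<pid>/stack (root only) shows where the
# thread is blocked in the kernel.
```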
2011 Dec 20
1
OCFS2 problems when connectivity lost
Hello,
We are having a problem with a 3-node cluster based on
Pacemaker/Corosync with 2 primary DRBD+OCFS2 nodes and a quorum node.
Nodes run on Debian Squeeze, all packages are from the stable branch
except for Corosync (which is from backports for udpu functionality).
Each node has a single network card.
When the network is up, everything works without any problems, graceful
shutdown of
2011 Sep 12
1
glusterfs, pacemaker and Filesystem RA
Hello List
due to a mistake my post from yesterday was cut off. That is why I am
sending my post again as a new thread. I hope it will work this time.
<---- Original posted mail starts here ---->
Hello Marcel, hello Samuel,
sorry for my late answer, but I was away for two months, so I could
only continue my tests last week.
First of all thank you for your patch of the
2011 Sep 20
9
XL: pv guests dont reboot after migration (xen4.1.2-rc2-pre)
A pv guest will not reboot after migration, the guest itself does
everything right, including the shutdown, but xl does not recreate the
guest, it just shuts it down.
This goes for 2.6.39 and 3.0.4 guest kernels; I haven't tried different
ones, and I also haven't tried different Xen versions.
Don't know whether this would affect HVM; probably not, since qemu leaves the
guest running and does a
2008 Nov 14
5
[RFC] Splitting cluster.git into separate projects/trees
Hi everybody,
as discussed and agreed at the Cluster Summit we need to split our tree
to make life easier in the long run (etc. etc.).
We need to decide how we want to do it, and there are different
approaches to that. I was able to think of three. There might be more, and I
might not have taken everything into consideration, so comments and ideas
are welcome.
At this point we haven't really
2009 Jul 06
1
lvb length issue [was Re: [ocfs2-tools-devel] question of ocfs2_controld (Jun 27)]
Since the discussion now moves to kernel space, I am moving the thread from
ocfs2-tools-devel to ocfs2-devel.
The original discussion can be found from
http://oss.oracle.com/pipermail/ocfs2-tools-devel/2009-June/001891.html
Joel Becker Wrote:
> On Sat, Jun 27, 2009 at 03:46:04AM +0800, Coly Li wrote:
>> Joel Becker Wrote:
>>> On Sat, Jun 27, 2009 at 03:00:05AM +0800, Coly Li wrote:
>>
2011 Feb 24
0
No subject
These pairs could be pinged and/or tested before the Filesystem RA tries to
connect to them. In case one of these nodes is not reachable or does not
respond to the connection attempt, the RA could try a connection with
the next nvpair.
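The fallback proposed here can be sketched in a few lines of shell. The helper names `probe` and `pick_host` are my own inventions: `probe` stands in for whatever reachability check the Filesystem RA would use (ping, a TCP connect, etc.), and `pick_host` walks the candidate list:

```shell
# Hypothetical reachability check; a stand-in for whatever test the RA
# would actually perform (here: a single ping with a 1 s timeout).
probe() {
    ping -c 1 -W 1 "$1" >/dev/null 2>&1
}

# Print the first reachable host from the argument list, or fail if
# none responds.
pick_host() {
    for h in "$@"; do
        if probe "$h"; then
            printf '%s\n' "$h"
            return 0
        fi
    done
    return 1
}
```

In the RA, the chosen host would then be substituted into the mount source before the mount is attempted, instead of blindly using the first nvpair.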
Background:
I would like to build an openais/pacemaker cluster consisting of three nodes.
Each node should run a gluster server providing a
2010 Oct 25
7
[PATCH 0/6] Ocfs2-tools: Add a new tool 'o2info'.
Now is a good time to introduce the new tool 'o2info', since the kernel
part of the OCFS2_IOC_INFO ioctl has been pulled upstream by Linus.
The following 6 patches have already got Sunil's SOBs, and they are now
trying to attract more reviewers before going to the central repo, with
a modification introducing manual pages.
2007 Oct 24
16
PATCH 0/10: Merge PV framebuffer & console into QEMU
The following series of 10 patches is a merge of the xenfb and xenconsoled
functionality into the qemu-dm code. The general approach taken is to have
qemu-dm provide two machine types - one for xen paravirt, the other for
fullyvirt. For compatibility the latter is the default. The goals overall
are to kill LibVNCServer, remove a lot of code duplication and/or parallel
impls of the same concepts, and