Displaying 20 results from an estimated 10000 matches similar to: "ext4 failure on cluster"
2006 Aug 09
1
Re: URGENT: OCFS2 hang - 32 node cluster POC
Run:
# top
# vmstat 1
# iostat -x /dev/emcpowerb 1
The latter two you can save to a file. For top, just monitor cpu usage
and see if any process is hogging all of it.
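A minimal sketch of capturing those three commands to files for a fixed window, so the numbers can be compared across nodes later (the device path /dev/emcpowerb is from the original post; substitute your own shared-storage device):

```shell
# Capture a short window of CPU, memory, and disk stats to files.
# DEV is the shared-storage device from the post; adjust for your hardware.
DEV=/dev/emcpowerb
OUT=/tmp/ocfs2-poc-stats
mkdir -p "$OUT"

top -b -n 3 -d 1     > "$OUT/top.log"    2>&1 &   # batch mode, 3 one-second samples
vmstat 1 3           > "$OUT/vmstat.log" 2>&1 &
iostat -x "$DEV" 1 3 > "$OUT/iostat.log" 2>&1 &
wait
ls -l "$OUT"
```

Lengthen the sample counts for a real run; three samples are just enough to verify the logging works.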
Colin Laird wrote:
> and the fstab settings:
>
> # This file is edited by fstab-sync - see 'man fstab-sync' for details
> /dev/VolGroup00/LogVol01 / ext3
> defaults 1 1
>
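For comparison, an OCFS2 volume would normally get its own fstab line with `_netdev`, so the mount waits for the network and cluster stack at boot; a sketch with a hypothetical device path and mountpoint:

```
/dev/mapper/ocfs2vol  /shared  ocfs2  _netdev,defaults  0 0
```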
2008 Mar 13
1
Clustered Samba and OCFS2
Hi,
I was reading the Clustered Samba Wiki and I noticed the following blurb
about OCFS2 and Samba (
http://wiki.samba.org/index.php/CTDB_Setup#Other_cluster_filesystems).
I was wondering if anyone can confirm that the "fileid:mapping" is or isn't
required for OCFS2. I was also wondering if anyone would be willing to
share their experiences with Clustered Samba OCFS2. Does it
2005 Nov 07
1
o2cb_ctl: Unable to access cluster service Cannot initialize cluster
Hi,
I've tried to run ocfs2 on debian stable (sarge). Have patched and
compiled a new kernel-package using vanilla 2.6.14 sources. No compiler
errors, all modules for ocfs2 built. Installed the Debian packages
ocfs2-tools_1.0.0-1_i386.deb and ocfs2console_1.0.0-1_i386.deb without any
problems. But now I can't initialize the cluster, whether through
ocfs2console or with
2017 Dec 15
0
OCFS2 cluster debian8 / debian9
Hi,
On 12/05/2017 11:19 PM, BASSAGET Cédric wrote:
> Hello
> Retried from scratch; and still have an error when trying to bring up
> the second cluster :
>
> root@LAB-virtm6:/# o2cb register-cluster ocfs2new
> o2cb: Internal logic failure while registering cluster 'ocfs2new'
>
> root@LAB-virtm6:/mnt/vol1_iscsi_san1# o2cb list-clusters
> ocfs2
>
2009 Nov 17
1
[PATCH 1/1] ocfs2/cluster: Make fence method configurable
By default, o2cb fences the box by calling emergency_restart(). While this
scheme works well in production, it comes in the way during testing as it
does not let the tester take stack/core dumps for analysis.
This patch allows the user to dynamically change the fence method to panic() by:
# echo "panic" > /sys/kernel/config/cluster/<clustername>/fence_method
Signed-off-by: Sunil
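Following the patch description, the fence method can be inspected and switched at runtime; a sketch that degrades to a message on a machine without the o2cb stack loaded ("mycluster" is a placeholder cluster name):

```shell
# Read the current o2cb fence method and switch it to panic() so the node
# leaves a usable crash dump instead of silently restarting. The configfs
# file only exists where the o2cb cluster stack is loaded; "mycluster" is
# a placeholder cluster name.
FENCE=/sys/kernel/config/cluster/mycluster/fence_method
if [ -f "$FENCE" ]; then
    cat "$FENCE"              # typically "reset" by default
    echo panic > "$FENCE"
else
    echo "no o2cb cluster registered on this node"
fi
```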
2006 Feb 21
1
[PATCH 07/14] ocfs2: actually free hb set on cluster removal
This patch actually frees the hb set when the cluster dir is removed.
fs/ocfs2/cluster/nodemanager.c | 1 +
1 files changed, 1 insertion(+)
Signed-off-by: Jeff Mahoney <jeffm at suse.com>
diff -ruNpX ../dontdiff linux-2.6.16-rc4.ocfs2-staging1/fs/ocfs2/cluster/nodemanager.c linux-2.6.16-rc4.ocfs2-staging2/fs/ocfs2/cluster/nodemanager.c
---
2008 Aug 06
1
[2.6 patch] ocfs2/cluster/tcp.c: make some functions static
Commit 0f475b2abed6cbccee1da20a0bef2895eb2a0edd
(ocfs2/net: Silence build warnings) made sense
insofar as it fixed compile warnings, but it did
not require making the functions global.
Signed-off-by: Adrian Bunk <bunk at kernel.org>
---
This patch has been sent on:
- 5 Jun 2008
fs/ocfs2/cluster/tcp.c | 44 ++++++++++++++++++++++++++------
fs/ocfs2/cluster/tcp_internal.h
2011 Apr 25
0
dovecot & OCFS2 Cluster
Am 25.04.2011 19:02, schrieb Osvaldo Alvarez Pozo:
> hi all
>
> We have an ocfs2 cluster composed of 4 Debian lenny servers which have
> access to an iSCSI LUN; we have created a partition on this LUN and
> formatted this partition as OCFS2.
> 2 servers do mail delivery (SMTP) and the other two are pop/imap
> servers. The smtp servers use dovecot LDA to deliver to mailboxes.
2009 Jun 24
3
Unexplained reboots in DRBD82 + OCFS2 setup
We're trying to set up a dual-primary DRBD environment, with a shared
disk with either OCFS2 or GFS. The environment is a Centos 5.3 with
DRBD82 (but also tried with DRBD83 from testing) .
Setting up a single primary disk and running bonnie++ on it works.
Setting up a dual-primary disk, only mounting it on one node (ext3) and
running bonnie++ works
When setting up ocfs2 on the /dev/drbd0
2006 Jul 03
1
What does cluster size mean?
2009 Jan 06
2
cluster member hangs during reboot
Hi All,
I inherited a 4-node ocfs2 cluster and recently 2 ocfs2 filesystems were added to be used as temp tablespace. One of the four nodes rebooted during the creation of the tablespace and hung at the message below... and it just sits there. If I put the server into rescue mode and comment out all the filesystems it boots up fine, and then I can mount the ocfs2 filesystem manually, but it cannot
2009 Apr 06
1
[PATCH] ocfs2: Reserve 1 more cluster in expanding_inline_dir for indexed dir.
In ocfs2_expand_inline_dir, we calculate whether we need 1 extra
cluster if we can't store the dx inline the root and save it in
dx_alloc. So add it when we call ocfs2_reserve_clusters.
Signed-off-by: Tao Ma <tao.ma at oracle.com>
---
fs/ocfs2/dir.c | 2 +-
1 files changed, 1 insertions(+), 1 deletions(-)
diff --git a/fs/ocfs2/dir.c b/fs/ocfs2/dir.c
index e71160c..07d8920 100644
2011 Apr 25
1
dovecot & OCFS2 Cluster
There is a bug in ocfs2 1.4; if you are using it you should be looking at
upgrading to ocfs2 1.6.
I have several performance problems with ocfs2, but now I guess the problem
is the webmail client that we are using.
I have several posts here about it.
I will write a new one as soon as I find the solution.
Anyway, why use lmtp over lda?
My setup has about 5000 active accounts, all in maildir.
2014 Aug 22
2
ocfs2 problem on ctdb cluster
Ubuntu 14.04, drbd
Hi
On a drbd Primary node, when attempting to mount our cluster partition:
sudo mount -t ocfs2 /dev/drbd1 /cluster
we get:
mount.ocfs2: Unable to access cluster service while trying to join the
group
We then call:
sudo dpkg-reconfigure ocfs2-tools
Setting cluster stack "o2cb": OK
Starting O2CB cluster ocfs2: OK
And all is well:
Aug 22 13:48:23 uc1 kernel: [
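The recovery sequence from the post can be scripted; this sketch is guarded so that on a box without ocfs2-tools it only reports that the tools are missing (device /dev/drbd1 and mountpoint /cluster are from the original report):

```shell
# Re-register the o2cb cluster via the package's debconf step, then retry
# the mount. Guarded: on a machine without ocfs2-tools this only prints a
# note instead of failing.
if command -v mount.ocfs2 >/dev/null 2>&1; then
    dpkg-reconfigure -f noninteractive ocfs2-tools   # re-runs "Starting O2CB cluster ocfs2"
    mount -t ocfs2 /dev/drbd1 /cluster
else
    echo "ocfs2-tools not installed; steps shown for reference only"
fi
```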
2007 Mar 30
1
HowTo recover ocfs2 in a 10g four node cluster
Hi All,
I needed to rebuild the operating system on one of the 4 nodes in my
cluster, but when I try to start up ocfs2, the return from the init
script is this:
how do I fix:
[root@kmloraper1 /]# /etc/init.d/ocfs2 restart
Stopping Oracle Cluster File System (OCFS2) [ OK ]
Starting Oracle Cluster File System (OCFS2) ocfs2_hb_ctl: Device name
specified was not found while reading
2014 Aug 21
1
Cluster blocked, so as to reboot all nodes to avoid it. Are there any patches for it? Thanks.
Hi, everyone
We have had the cluster block several times, and the log is always the same; we have to reboot all the nodes of the cluster to recover.
Is there any patch that fixes this bug?
[<ffffffff817539a5>] schedule_timeout+0x1e5/0x250
[<ffffffff81755a77>] wait_for_completion+0xa7/0x160
[<ffffffff8109c9b0>] ? try_to_wake_up+0x2c0/0x2c0
[<ffffffffa0564063>]
2006 Aug 14
1
2 node cluster, Heartbeat2, self-fencing
Hello everyone.
I am currently working on setting up new servers for my employer. Basically we
want two servers, both running several VEs (virtual environments,
OpenVZ) which can dynamically take over each other's jobs if necessary. Some
services will run concurrently on both servers like apache2 (load balancing),
so those need concurrent access to specific data.
We had a close look at