Similar to: "GFS2/OCFS2 scalability"

Displaying 20 results from an estimated 8000 matches similar to: "GFS2/OCFS2 scalability"

2009 Aug 03 (1 reply): CTDB+GFS2+CMAN. clean_start="0" or clean_start="1"?
Hi everybody, I have tested CTDB+GFS2+CMAN under Debian. It works well, but I do not understand some points. It is possible to run CTDB by defining it under the services section in cluster.conf, but starting it on the second node shuts down the process on the first one. My CTDB configuration uses 2 active-active nodes. Does CTDB care if the node starts with clean_start="0" or
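
For context, clean_start is an attribute of the fence_daemon element in cluster.conf, and putting CTDB under rgmanager's services section makes it a singleton service, which would explain the "second node shuts down the first" behaviour the poster describes. Below is a hedged, unverified sketch of both pieces; the cluster, node and service names are invented.

# Minimal sketch only, not a working cluster.conf (fence devices omitted).
cat > /etc/cluster/cluster.conf <<'EOF'
<?xml version="1.0"?>
<cluster name="ctdbgfs2" config_version="2">
  <!-- clean_start="0": assume peers may already be up and fence them at startup;
       clean_start="1": assume a clean start and skip startup fencing -->
  <fence_daemon clean_start="0" post_join_delay="30"/>
  <clusternodes>
    <clusternode name="node1" nodeid="1"/>
    <clusternode name="node2" nodeid="2"/>
  </clusternodes>
  <rm>
    <!-- rgmanager runs a <service> on one node at a time; for two always-on
         CTDB daemons, starting ctdb from init on both nodes is more common -->
    <service name="ctdb-svc" autostart="1">
      <script file="/etc/init.d/ctdb" name="ctdb"/>
    </service>
  </rm>
</cluster>
EOF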

2005 Dec 23 (1 reply): GFS2, OCFS2, and FUSE cause xenU to oops.
I really need to share a filesystem and I'd rather not have to export it from one domU to another so I tried mounting it with GFS2 and then OCFS2. Both caused the xenU kernel to oops just as the mount was attempted. I assumed that a FUSE-based solution would be a little less problematic (if only because it doesn't require kernel patches) but it also caused an oops right when

2009 Nov 08 (1 reply): [PATCH] appliance: Add support for btrfs, GFS, GFS2, JFS, HFS, HFS+, NILFS, OCFS2
I've tested all these filesystems here: http://rwmj.wordpress.com/2009/11/08/filesystem-metadata-overhead/ -- Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones virt-top is 'top' for virtual machines. Tiny program with many powerful monitoring features, net stats, disk stats, logging, etc. http://et.redhat.com/~rjones/virt-top

2013 Aug 05 (1 reply): Corrupted mboxes with v2.2.4, posix_fallocate and GFS2
Hi, on a clustered Dovecot server installation that was recently moved from a shared GPFS filesystem to GFS2, occasional corruptions in the users' INBOXes started appearing, where a new incoming message would be appended directly after a block of NUL bytes, and be scanned by dovecot as being glued to the preceding message. I traced this to the file extension operation performed in
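
The failure mode described here, a new message landing directly after a run of NUL bytes, can be reproduced outside Dovecot by pre-extending a file and then appending to it. A rough illustration follows; the mount point, file name and sizes are invented, and fallocate(1) stands in for the posix_fallocate() call mentioned in the report.

# Illustration only, assuming a GFS2 filesystem is mounted at /mnt/gfs2.
printf 'From alice  Mon Aug  5 10:00:00 2013\nSubject: one\n\nbody one\n' > /mnt/gfs2/testbox
fallocate -l 8192 /mnt/gfs2/testbox       # extend the file; the new region reads back as NUL bytes
printf 'From bob  Mon Aug  5 10:01:00 2013\nSubject: two\n\nbody two\n' >> /mnt/gfs2/testbox
hexdump -C /mnt/gfs2/testbox | head -n 20 # the second message now follows a block of zeros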

2013 Aug 21 (2 replies): Dovecot tuning for GFS2
Hello, I'm deploying a new email cluster using Dovecot over GFS2. Currently I'm using Courier over GFS. I'm testing Dovecot with these parameters: mmap_disable = yes mail_fsync = always mail_nfs_storage = yes mail_nfs_index = yes lock_method = fcntl Are they correct? Red Hat GFS supports mmap, so is it better to enable it or leave it disabled? The documentation suggests the
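
For reference, a hedged sketch of how those settings are usually adjusted when the shared filesystem is GFS2 rather than NFS; the file name below is invented, and the recommendations should be checked against the Dovecot shared-filesystem documentation for the version in use.

# Sketch only: mail_nfs_* are NFS-specific attribute-cache workarounds and are
# commonly left disabled on GFS2; the drop-in file name is hypothetical.
cat > /etc/dovecot/conf.d/99-gfs2-cluster.conf <<'EOF'
mmap_disable = yes      # safest default while several backends may touch the same indexes
mail_fsync = always     # flush writes so the other node sees consistent index/mbox data
lock_method = fcntl     # fcntl locks propagate cluster-wide through GFS2's DLM
mail_nfs_storage = no   # NFS-only workaround, not needed on GFS2
mail_nfs_index = no     # NFS-only workaround, not needed on GFS2
EOF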

2011 Dec 29 (0 replies): ocfs2 with RHCS and GNBD on RHEL?
Does anyone have OCFS2 running with the "Red Hat Cluster Suite" on RHEL? I'm trying to create a more or less completely fault tolerant solution with two storage servers syncing storage with dual-primary DRBD and offering it up via multipath to nodes for OCFS2. I was able to successfully multipath a dual-primary DRBD based GFS2 volume in this manner using RHCS and GNBD. But switched
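
For context, the dual-primary half of such a setup usually comes down to a couple of DRBD options; the sketch below uses DRBD 8.3-style syntax with invented resource, host and address values, and omits the fencing and split-brain policies a real deployment would need.

# Hedged sketch of a dual-primary DRBD resource; all names and addresses are placeholders.
cat > /etc/drbd.d/r0.res <<'EOF'
resource r0 {
  startup { become-primary-on both; }     # promote both storage servers at start
  net {
    allow-two-primaries;                  # required for OCFS2/GFS2 on top of DRBD
    after-sb-0pri discard-zero-changes;   # minimal split-brain policy
  }
  on storage1 { device /dev/drbd0; disk /dev/sdb1; address 192.168.10.1:7789; meta-disk internal; }
  on storage2 { device /dev/drbd0; disk /dev/sdb1; address 192.168.10.2:7789; meta-disk internal; }
}
EOF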

2023 May 08 (1 reply): [PATCH 2/2] ocfs2: add error handling path when jbd2 enter ABORT status
Sorry for the late reply, I have been a bit busy recently. On Fri, May 05, 2023 at 11:42:51AM +0800, Joseph Qi wrote: > > > On 5/5/23 12:20 AM, Heming Zhao wrote: > > On Thu, May 04, 2023 at 05:41:29PM +0800, Joseph Qi wrote: > >> > >> > >> On 5/4/23 4:02 PM, Heming Zhao wrote: > >>> On Thu, May 04, 2023 at 03:34:49PM +0800, Joseph Qi wrote:

2023 May 09 (1 reply): [PATCH 2/2] ocfs2: add error handling path when jbd2 enter ABORT status
On 5/9/23 12:40 AM, Heming Zhao wrote: > Sorry for reply late, I am a little bit busy recently. > > On Fri, May 05, 2023 at 11:42:51AM +0800, Joseph Qi wrote: >> >> >> On 5/5/23 12:20 AM, Heming Zhao wrote: >>> On Thu, May 04, 2023 at 05:41:29PM +0800, Joseph Qi wrote: >>>> >>>> >>>> On 5/4/23 4:02 PM, Heming Zhao wrote:

2008 Feb 12 (0 replies): Lustre-discuss Digest, Vol 25, Issue 17
Hi, I just want to know whether there are any alternative file systems to HP SFS. I heard that there is Cluster Gateway from Polyserve. Can anybody please help me find out more about this Cluster Gateway? Thanks and Regards, Ashok Bharat

2013 May 03 (1 reply): sanlockd, virtlock and GFS2
Hi, I'm trying to put in place a KVM cluster (using clvm and gfs2), but I'm running into some issues with either sanlock or virtlockd. All virtual machines are handled via the cluster (in /etc/cluster/cluster.conf) but I want some kind of locking to be in place as an extra security measure. Sanlock ======= At first I tried sanlock, but it seems that if one node goes down unexpectedly,
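
For context, moving to virtlockd generally means pointing libvirt's QEMU driver at the lockd plugin and keeping its lockspace on the shared GFS2 volume so leases are visible cluster-wide. A hedged sketch follows; the lockspace directory and the RHEL 6-era service names are assumptions, not taken from the original post.

# Sketch only: enable the virtlockd ("lockd") plugin with a lockspace on shared storage.
echo 'lock_manager = "lockd"' >> /etc/libvirt/qemu.conf
mkdir -p /gfs2/virtlockd                                # assumed shared GFS2 path
echo 'file_lockspace_dir = "/gfs2/virtlockd"' >> /etc/libvirt/qemu-lockd.conf
service virtlockd start && service libvirtd restart     # init-style service names assumed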

2023 May 05 (1 reply): [PATCH 2/2] ocfs2: add error handling path when jbd2 enter ABORT status
On 5/5/23 12:20 AM, Heming Zhao wrote: > On Thu, May 04, 2023 at 05:41:29PM +0800, Joseph Qi wrote: >> >> >> On 5/4/23 4:02 PM, Heming Zhao wrote: >>> On Thu, May 04, 2023 at 03:34:49PM +0800, Joseph Qi wrote: >>>> >>>> >>>> On 5/4/23 2:21 PM, Heming Zhao wrote: >>>>> On Thu, May 04, 2023 at 10:27:46AM +0800, Joseph

2007 Apr 05 (7 replies): Problems using GFS2 and clustered dovecot
I am trying to use dovecot. I've got a GFS2 shared volume on two servers with dovecot running on both. On one server at a time, it works. The test I am trying is to attach two mail programs (MUA) via IMAPS (Thunderbird and Evolution as it happens). I've attached one mail program to each IMAPS server. I am trying to move emails around in one program (from folder to folder), and then

2023 May 04 (1 reply): [PATCH 2/2] ocfs2: add error handling path when jbd2 enter ABORT status
On Thu, May 04, 2023 at 05:41:29PM +0800, Joseph Qi wrote: > > > On 5/4/23 4:02 PM, Heming Zhao wrote: > > On Thu, May 04, 2023 at 03:34:49PM +0800, Joseph Qi wrote: > >> > >> > >> On 5/4/23 2:21 PM, Heming Zhao wrote: > >>> On Thu, May 04, 2023 at 10:27:46AM +0800, Joseph Qi wrote: > >>>> > >>>> >

2011 Jun 11 (2 replies): mmap in GFS2 on rhel 6.1
Hello list, we continue our tests using Dovecot on a RHEL 6.1 cluster backend with GFS2; we are also using Dovecot as a director for user-to-node persistence. Everything was fine until we started stress-testing the solution with imaptest: we hit many deadlocks, cluster filesystem corruptions and hangs, especially in the index filesystem. We have configured the backend as if it were on an NFS-like setup

2011 Jun 20 (2 replies): ubuntu, ocfs2 with cman and ctdb
Hi guys, we're evaluating the available clustering options to get ctdb up and running for a highly available file server. We've set up both gluster and ocfs2 on separate 2-node setups. ocfs2 seems to provide better throughput and IOPS to Samba clients than gluster does, and that is comparing a single-node server to a ctdb-clustered 2-node server. The problem with ocfs2 is that I've

2010 Mar 27 (1 reply): DRBD, GFS2 and GNBD without all clustered cman stuff
Hi all, what I want to achieve: 1) two storage servers replicating a partition with DRBD 2) exporting the DRBD device via GNBD from the primary server, formatted with GFS2 3) importing the GNBD on some nodes and mounting it with GFS2 Assuming no logical errors in the points above, this is the situation: Server 1: LogVol09, DRBD configured as /dev/drbd0 replicated to Server 2. DRBD seems to work
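
Step by step, that plan usually looks like the sketch below; the resource, cluster and export names are invented, and note that a GFS2 filesystem created with lock_dlm still expects the cman/DLM stack to be running on every node that mounts it.

# Hedged sketch of the three steps above; gnbd_serv must be running on the exporting server.
drbdadm primary r0                           # on the primary storage server
mkfs.gfs2 -p lock_dlm -t mycluster:gnbdfs -j 4 /dev/drbd0
gnbd_serv                                    # start the GNBD server daemon
gnbd_export -d /dev/drbd0 -e gnbdfs          # export the DRBD device over GNBD
# on each client node:
gnbd_import -i storage1                      # import all exports from the storage server
mount -t gfs2 /dev/gnbd/gnbdfs /mnt/shared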

2010 Dec 14 (1 reply): Samba slowness serving SAN-based GFS2 filesystems
Ok, I'm experiencing slowness serving SAN-based GFS2 filesystems (of a specific SAN configuration). Here's my layout: I have a server cluster. OS = RHEL 5.4 (both nodes), kernel = 2.6.18-194.11.3.el5, Samba = samba-3.0.33-3.14.el5. On this cluster are 6 clustered GFS2 filesystems. 4 of these volumes belong to one huge LUN (1.8 TB), spanning 8 disks. The other 2 remaining volumes are 1

2008 Jun 26 (0 replies): CEBA-2008:0501 CentOS 5 i386 gfs2-kmod Update
CentOS Errata and Bugfix Advisory 2008:0501. Upstream details at: https://rhn.redhat.com/errata/RHBA-2008-0501.html The following updated files have been uploaded and are currently syncing to the mirrors: (md5sum Filename)
i386:
629f45a15a6cef05327f23d73524358d  kmod-gfs2-1.92-1.1.el5_2.2.i686.rpm
dcc5d2905e9c0cf4d424000ad24c6a5b  kmod-gfs2-PAE-1.92-1.1.el5_2.2.i686.rpm

2014 Mar 10 (1 reply): gfs2 and quotas - system crash
I have tried sending this before, but it did not appear to get through. Hello, when using gfs2 with quotas on a SAN that provides storage to two clustered systems running CentOS 6.5, one of the systems can crash. The crash appears to occur when a user tries to add something to a SAN disk after they have exceeded their quota on that disk. Sometimes a stack trace is produced in

2009 Mar 20 (1 reply): CentOS 5.2, 5.3 and GFS2
Hello, I will create a new Xen cluster using GFS2 (with Conga, ...). I know that GFS2 has been production-ready since RHEL 5.3. Do you know when CentOS 5.3 will be ready? Can I install my GFS2 FS with CentOS 5.2 and then "simply" upgrade to 5.3 without reinstallation? Thanks