similar to: GFS/GFS2 on CentOS

Displaying 20 results from an estimated 7000 matches similar to: "GFS/GFS2 on CentOS"

2013 Aug 21 (2 replies) - Dovecot tuning for GFS2
Hello, I'm deploying a new email cluster using Dovecot over GFS2. Currently I'm using Courier over GFS. I'm testing Dovecot with these parameters: mmap_disable = yes mail_fsync = always mail_nfs_storage = yes mail_nfs_index = yes lock_method = fcntl Are they correct? Red Hat GFS supports mmap, so is it better to enable it or leave it disabled? The documentation suggests the
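As an aside, a minimal dovecot.conf sketch of the shared-filesystem settings listed above, assuming mailboxes and indexes both live on GFS2; the values mirror the thread and the comments are only my reading of what each setting does:

mmap_disable = yes        # skip mmap entirely; the conservative default on cluster filesystems
mail_fsync = always       # fsync aggressively so other nodes see a consistent mailbox
lock_method = fcntl       # fcntl locks are propagated between GFS2 nodes
mail_nfs_storage = yes    # flush attribute caches; written for NFS, applied here as a precaution
mail_nfs_index = yes      # same, for the index files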
2011 Jun 11 (2 replies) - mmap in GFS2 on RHEL 6.1
Hello list, we are continuing our tests using Dovecot on a RHEL 6.1 cluster backend with GFS2; we are also using Dovecot as a director for user-to-node persistence. Everything was fine until we started stress-testing the solution with imaptest: we got many deadlocks, cluster filesystem corruptions and hangs, especially on the index filesystem. We have configured the backend as if it were on an NFS-like setup
2009 Apr 29 (3 replies) - GFS and Small Files
Hi all, We are running CentOS 5.2 64-bit as our file server. Currently we use GFS (with CLVM underneath it) as our filesystem (for our multiple 2TB SAN volume exports), since we plan to add more file servers (serving the same contents) later on. The issue we are facing at the moment is that commands such as 'ls' give a very slow response (e.g. 3-4 minutes for the output of ls
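Not from the thread itself, but a commonly suggested first step for this kind of metadata slowness is mounting GFS with atime updates disabled, so a plain 'ls' does not trigger a write-and-lock cycle per file. A hedged example fstab line; the device and mount point are placeholders:

/dev/mapper/vg_san-lv_data  /export/data  gfs  defaults,noatime,nodiratime  0 0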
2011 Feb 27 (1 reply) - Recover botched drbd gfs2 setup
Hi. The short story... Rush job, never done clustered file systems before, and the VLAN didn't support multicast. Thus I ended up with DRBD working OK between the two servers but cman/gfs2 not working, resulting in what was meant to be a DRBD primary/primary cluster being a primary/secondary cluster until the VLAN could be fixed, with GFS only mounted on the one server. I got the single server
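For reference, a hedged drbd.conf sketch of the dual-primary resource the poster was aiming for (DRBD 8.3-era syntax; host names, backing disks and addresses are placeholders, and dual-primary still needs working fencing/cman plus GFS2 on top):

resource r0 {
  protocol C;                   # synchronous replication, required for dual-primary
  net {
    allow-two-primaries;        # let both nodes hold the device in primary role at once
  }
  startup {
    become-primary-on both;     # promote both nodes when the resource comes up
  }
  on node1 {
    device    /dev/drbd0;
    disk      /dev/sdb1;        # placeholder backing device
    address   192.168.10.1:7788;
    meta-disk internal;
  }
  on node2 {
    device    /dev/drbd0;
    disk      /dev/sdb1;
    address   192.168.10.2:7788;
    meta-disk internal;
  }
}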
2008 Sep 24 (3 replies) - Dovecot performance on GFS clustered filesystem
Hello All, We are using Dovecot 1.1.3 to serve IMAP on a pair of clustered Postfix servers which share a fiber array via the GFS clustered filesystem. This all works very well for the most part, with the exception that certain operations are so inefficient on GFS that they generate significant I/O load and hurt performance. We are using the Maildir format on disk. We're also using
2009 Nov 08 (1 reply) - [PATCH] appliance: Add support for btrfs, GFS, GFS2, JFS, HFS, HFS+, NILFS, OCFS2
I've tested all these filesystems here: http://rwmj.wordpress.com/2009/11/08/filesystem-metadata-overhead/ -- Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones virt-top is 'top' for virtual machines. Tiny program with many powerful monitoring features, net stats, disk stats, logging, etc. http://et.redhat.com/~rjones/virt-top
2009 Aug 03 (1 reply) - CTDB+GFS2+CMAN. clean_start="0" or clean_start="1"?
Hi everybody, I have tested CTDB+GFS2+CMAN under Debian. It works well, but I do not understand some points. It is possible to run CTDB by defining it under the services section in cluster.conf, but starting it on the second node shuts down the process on the first one. My CTDB configuration assumes 2 active-active nodes. Does CTDB care if the node starts with clean_start="0" or
2007 Sep 08 (1 reply) - Xen VMs on GFS
Hello list, I've installed Cluster Suite with 8 physical nodes; these are connected to a SAN using CLVM and the AoE protocol. The cluster suite runs on the physical nodes/servers, in dom0. If I have to use GFS, where do I install it? What is the right approach to using GFS with Xen? I see 2 options: 1. I install GFS inside each domU (unprivileged domain, the actual VM). 2. I install GFS on
2011 Jan 18 (2 replies) - dovecot Digest, Vol 93, Issue 41
> From: Stan Hoeppner <stan at hardwarefreak.com> > Subject: Re: [Dovecot] SSD drives are really fast running Dovecot > > > Yes. Go with a cluster filesystem such as OCFS or GFS2 and an inexpensive SAN > storage unit that supports mixed SSD and spinning storage such as the Nexsan > SATABoy with 2GB cache: http://www.nexsan.com/sataboy.php I can't speak for
2008 May 29 (3 replies) - GFS
Hello: I am planning to implement GFS for my university as a summer project. I have 10 servers, each with SAN disks attached. I will be reading and writing many files for professors' research projects. Each file can be anywhere from 1 KB to 120 GB (fluid dynamics research images). The 10 servers will be using NIC bonding (1 Gb/s network links). So, would GFS be ideal for this? I have been reading a lot
2017 Sep 29 (1 reply) - Gluster geo replication volume is faulty
I am trying to set up geo-replication between two Gluster volumes. I have set up two replica 2 arbiter 1 volumes with 9 bricks.

[root@gfs1 ~]# gluster volume info
Volume Name: gfsvol
Type: Distributed-Replicate
Volume ID: c2fb4365-480b-4d37-8c7d-c3046bca7306
Status: Started
Snapshot Count: 0
Number of Bricks: 3 x (2 + 1) = 9
Transport-type: tcp
Bricks:
Brick1: gfs2:/gfs/brick1/gv0
Brick2:
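For context, the usual command sequence for creating such a session looks roughly like this; a sketch only, where the slave host, user and slave volume name are placeholders (a non-root slave user additionally needs mountbroker setup):

gluster volume geo-replication gfsvol geoaccount@slavehost::gfsvol_slave create push-pem
gluster volume geo-replication gfsvol geoaccount@slavehost::gfsvol_slave start
gluster volume geo-replication gfsvol geoaccount@slavehost::gfsvol_slave status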
2010 Mar 27 (1 reply) - DRBD, GFS2 and GNBD without all the clustered cman stuff
Hi all, what I want to achieve: 1) have two storage servers replicating a partition with DRBD; 2) export the DRBD device, formatted with GFS2, from the primary server via GNBD; 3) import the GNBD device on some nodes and mount it with GFS2. Assuming there is no logical error in the points above, this is the situation: Server 1: LogVol09, DRBD configured as /dev/drbd0, replicated to Server 2. DRBD seems to work
2010 Jul 19 (2 replies) - redundant networked secure file system recommendation
Hi all, We are currently running an NFS-based, server-centric setup. I would like to set up something where I can easily have more than one redundant server, with security/authentication (this part seemed a little flaky with NFS, at least several years ago), and with the capability to easily add/remove servers as necessary, take redundant servers down for maintenance, etc. The total volume we expect to run
2019 Dec 20 (1 reply) - GFS performance under heavy traffic
Hi David, Also consider using the mount option to specify backup servers via 'backupvolfile-server=server2:server3' (you can define more, but I don't think replica volumes greater than 3 are useful, except maybe in some special cases). That way, when the primary is lost, your client can reach a backup one without disruption. P.S.: The client may 'hang' if the primary server got
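A hedged example of a mount invocation using that option, with the server and volume names as placeholders and the option spelled exactly as quoted in the reply:

mount -t glusterfs -o backupvolfile-server=server2:server3 server1:/gfsvol /mnt/gfsvol

or the matching fstab line:

server1:/gfsvol  /mnt/gfsvol  glusterfs  defaults,_netdev,backupvolfile-server=server2:server3  0 0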
2008 Nov 14 (10 replies) - Shared volume: Software-iSCSI or GFS or OCFS2?
Hello list, I want to use shared volumes between several VMs and definitely don't want to use NFS or Samba! So I have three options: 1. simulated (software) iSCSI 2. GFS 3. OCFS2 What do you suggest and why? Kind regards, Florian
2011 May 05 (5 replies) - Dovecot imaptest on RHEL4/GFS1, RHEL6/GFS2, NFS and local storage results
We have done some benchmarking tests using Dovecot 2.0.12 to find the best shared filesystem for hosting many users; here I share the results with you. Notice the bad performance of all the shared filesystems compared to local storage. Is there any specific optimization/tuning in Dovecot for using GFS2 on RHEL 6? We have configured the director to make each user's mailbox persistent on a node; we will
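A rough sketch of the director piece mentioned here, as it is typically wired up in Dovecot 2.0; the addresses are placeholders and the backend list would point at the GFS2 nodes:

director_servers = 10.0.0.10 10.0.0.11        # proxies forming the director ring
director_mail_servers = 10.0.0.20 10.0.0.21   # backend nodes holding the mailboxes on GFS2
service director {
  unix_listener login/director {
    mode = 0666
  }
  inet_listener {
    port = 9090        # port the director ring members use to talk to each other
  }
}
service imap-login {
  executable = imap-login director   # route IMAP logins through the director
}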
2011 Dec 14 (1 reply) - [PATCH] mkfs: optimization and code cleanup
Optimizations by reducing the STREQ operations and do some code cleanup.

Signed-off-by: Wanlong Gao <gaowanlong at cn.fujitsu.com>
---
 daemon/mkfs.c | 29 +++++++++++++----------------
 1 files changed, 13 insertions(+), 16 deletions(-)

diff --git a/daemon/mkfs.c b/daemon/mkfs.c
index a2c2366..7757623 100644
--- a/daemon/mkfs.c
+++ b/daemon/mkfs.c
@@ -44,13 +44,16 @@ do_mkfs_opts (const
2010 Jan 14 (8 replies) - XCP - GFS - iSCSI
Hi everyone! I have 2 hosts + 1 iSCSI device. I want to create a shared storage repository that both hosts use together. I won't use NFS. Prepared SR:

xe sr-create host-uuid=xxx content-type=user name-label=NAS1 shared=true type=iscsi device-config:target=xxxx device-config:targetIQN=xxxx

The hosts see the iSCSI device:

scsi4 : iSCSI Initiator over TCP/IP
scsi 4:0:0:0: Direct-Access NAS
2006 Oct 12 (5 replies) - AoE LVM2 DRBD Xen Setup
Hello everybody, I am in the process of setting up a really cool Xen server farm. The backend storage will be an LVMed AoE device on top of DRBD. The goal is to have the backend storage completely redundant. Picture: [flattened ASCII diagram: two RAID arrays, one under each of DRBD1 <----> DRBD2, fronted by a shared VMAC and exported over AoE as a global LVM VG used by Dom0a, Dom0b and Dom0c]
2007 Apr 05 (7 replies) - Problems using GFS2 and clustered dovecot
I am trying to use dovecot. I've got a GFS2 shared volume on two servers with dovecot running on both. On one server at a time, it works. The test I am trying is to attach two mail programs (MUA) via IMAPS (Thunderbird and Evolution as it happens). I've attached one mail program to each IMAPS server. I am trying to move emails around in one program (from folder to folder), and then