Displaying 20 results from an estimated 10000 matches similar to: "Dovecot performance on GFS clustered filesystem"
2008 Sep 19
2
Possible bug involving LDA and Namespaces
Hi All,
Yesterday I began experimenting with using LDA to do delivery. To begin
with I'm using a .forward file entry for myself so as to avoid affecting
other users. When "deliver" is passed an item to deliver, the following
is logged:
> deliver(allen): Sep 18 19:28:52 Info: Namespace: type=private,
> prefix=INBOX., sep=., inbox=yes, hidden=no, list=yes, subscriptions=yes
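For context, routing a single user's mail through the LDA this way typically takes a one-line piped .forward entry such as the following (the deliver binary's path varies by distribution; /usr/libexec/dovecot/deliver is assumed here):

| "/usr/libexec/dovecot/deliver"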
2008 May 29
3
GFS
Hello:
I am planning to implement GFS for my university as a summer project. I have
10 servers, each with SAN disks attached. I will be reading and writing many
files for professors' research projects. Each file can be anywhere from 1 KB
to 120 GB (fluid dynamics research images). The 10 servers will be using NIC
bonding (1 Gb network). So, would GFS be ideal for this? I have been reading
a lot
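As a side note, the NIC-bonding half of such a setup is straightforward on CentOS/RHEL; a minimal sketch (interface names and addresses are illustrative, and the bonding mode must match what the switch supports):

# /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
BOOTPROTO=none
IPADDR=10.0.0.11
NETMASK=255.255.255.0
ONBOOT=yes
BONDING_OPTS="mode=802.3ad miimon=100"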
2010 Aug 10
1
GFS/GFS2 on CentOS
Hi all,
If you have had experience hosting GFS/GFS2 on CentOS machines, could
you share your general impression of it? Was it reliable? Fast? Any
issues or concerns?
Also, how feasible is it to start it on just one machine and then grow
it out if necessary?
Thanks.
Boris.
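For what it's worth, growing later is feasible as long as journals are planned for: GFS2 needs one journal per node that mounts the filesystem, and both the journal count and the filesystem size can be raised online. A sketch, assuming the filesystem is mounted at /mnt/gfs2:

gfs2_jadd -j 2 /mnt/gfs2   # add two journals, one per additional node
gfs2_grow /mnt/gfs2        # expand the filesystem to fill the enlarged volume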
2011 Jan 18
2
dovecot Digest, Vol 93, Issue 41
> From: Stan Hoeppner <stan at hardwarefreak.com>
> Subject: Re: [Dovecot] SSD drives are really fast running Dovecot
>
>
> Yes. Go with a cluster filesystem such as OCFS or GFS2 and an inexpensive
> SAN storage unit that supports mixed SSD and spinning storage such as the
> Nexsan SATABoy with 2GB cache: http://www.nexsan.com/sataboy.php
I can't speak for
2009 Nov 08
1
[PATCH] appliance: Add support for btrfs, GFS, GFS2, JFS, HFS, HFS+, NILFS, OCFS2
I've tested all these filesystems here:
http://rwmj.wordpress.com/2009/11/08/filesystem-metadata-overhead/
--
Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones
virt-top is 'top' for virtual machines. Tiny program with many
powerful monitoring features, net stats, disk stats, logging, etc.
http://et.redhat.com/~rjones/virt-top
2017 Sep 29
1
Gluster geo replication volume is faulty
I am trying to get geo-replication up between two Gluster volumes.
I have set up two replica 2 arbiter 1 volumes with 9 bricks.
[root at gfs1 ~]# gluster volume info
Volume Name: gfsvol
Type: Distributed-Replicate
Volume ID: c2fb4365-480b-4d37-8c7d-c3046bca7306
Status: Started
Snapshot Count: 0
Number of Bricks: 3 x (2 + 1) = 9
Transport-type: tcp
Bricks:
Brick1: gfs2:/gfs/brick1/gv0
Brick2:
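For reference, once the passwordless SSH/pem setup is in place, a geo-replication session is created and checked roughly like this (the slave user, host, and volume names are placeholders):

gluster volume geo-replication gfsvol geouser@slavehost::gfsvol_slave create push-pem
gluster volume geo-replication gfsvol geouser@slavehost::gfsvol_slave start
gluster volume geo-replication gfsvol geouser@slavehost::gfsvol_slave status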
2019 Dec 20
1
GFS performance under heavy traffic
Hi David,
Also consider using the mount option to specify backup servers via 'backupvolfile-server=server2:server3' (you can define more, but I don't think replica volumes greater than 3 are useful, except maybe in some special cases).
That way, when the primary is lost, your client can reach a backup one without disruption.
P.S.: Client may 'hang' - if the primary server got
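For example, a client mount using that option might look like the following (server and volume names are illustrative; newer clients also accept the plural spelling backup-volfile-servers):

mount -t glusterfs -o backupvolfile-server=server2:server3 server1:/gfsvol /mnt/gfsvol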
2013 Aug 21
2
Dovecot tuning for GFS2
Hello,
I'm deploying a new email cluster using Dovecot over GFS2. Currently I'm
using Courier over GFS.
At the moment I'm testing Dovecot with these parameters:
mmap_disable = yes
mail_fsync = always
mail_nfs_storage = yes
mail_nfs_index = yes
lock_method = fcntl
Are they correct?
Red Hat GFS supports mmap, so is it better to enable it or leave it disabled?
The documentation suggests the
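For comparison, a starting point often cited for true cluster filesystems like GFS2 (as opposed to NFS) would look something like the sketch below; treat it as an assumption to verify against the Dovecot documentation for your version:

mmap_disable = yes     # safe default on shared storage, even where mmap works
mail_fsync = always
lock_method = fcntl    # GFS2 propagates fcntl locks cluster-wide
mail_nfs_storage = no  # the mail_nfs_* attribute-cache flushes are NFS-specific
mail_nfs_index = no    # and should not be needed on a real cluster filesystem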
2007 May 29
1
GFS on 4.4 vs. C5
I am doing research on getting an operational GFS setup on the 3 servers I
have built with CentOS 4.4/64. I can of course upgrade them to 4.5, but is
it smarter to try CentOS 5 for this, or can I get away with getting this
working on 4.4/4.5? (Basically I am trying to eliminate the need to visit
the colo.)
-krb
2008 Nov 14
10
Shared volume: Software-ISCSI or GFS or OCFS2?
Hello list,
I want to use shared volumes between several VMs and definitely don't
want to use NFS or Samba!
So I have three options:
1. simulated (software) iSCSI
2. GFS
3. OCFS2
What do you suggest and why?
Kind regards, Florian
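If option 1 is chosen, the tgt flavour of software iSCSI needs only a short target definition; a sketch (the IQN, backing device, and network are illustrative):

# /etc/tgt/targets.conf
<target iqn.2008-11.local.storage:xenvols>
    backing-store /dev/vg0/shared
    initiator-address 192.168.1.0/24
</target>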
2006 Sep 27
2
GFS and samba
Hello,
We have two Fedora 5 Servers clustered with GFS. We installed samba
and exported the same shares in both of them.
All went fine at first, with people accessing their own files and so
on, but for some programs (Minitab, MATLAB, ...) people need to access
the same file at once. Then Samba begins to fail and clients hang; to
fix Samba it is necessary to restart the service.
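A likely culprit is oplocks: each node's smbd grants them independently, so two nodes can hand out conflicting locks on the same GFS file. One common mitigation is to disable oplocks on the affected shares; a sketch (share name and path are illustrative):

[research]
    path = /gfs/research
    oplocks = no
    level2 oplocks = no
    kernel oplocks = no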
2007 Apr 05
7
Problems using GFS2 and clustered dovecot
I am trying to use dovecot. I've got a GFS2 shared volume on two servers
with dovecot running on both. On one server at a time, it works.
The test I am trying is to attach two mail programs (MUA) via IMAPS
(Thunderbird and Evolution as it happens). I've attached one mail
program to each IMAPS server. I am trying to move emails around in one
program (from folder to folder), and then
2011 Jun 11
2
mmap in GFS2 on rhel 6.1
Hello list. We are continuing our tests using Dovecot on a RHEL 6.1 cluster
backend with GFS2, and we are also using Dovecot as a director for user-to-node
persistence. Everything was fine until we started stress testing the solution
with imaptest: we hit many deadlocks, cluster filesystem corruptions, and
hangs, especially in the index filesystem. We have configured the backend as
if it were on an NFS-like setup
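For anyone reproducing this, an imaptest run against the director looks roughly like the following (host, credentials, load figures, and the seed mbox file are illustrative):

imaptest host=director.example.com port=143 user=testuser%d pass=secret clients=50 secs=300 mbox=dovecot-crlf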
2011 Feb 27
1
Recover botched DRBD GFS2 setup
Hi.
The short story... Rush job, never done clustered file systems before, and
the VLAN didn't support multicast. Thus I ended up with DRBD working OK
between the two servers but cman/GFS2 not working, so what was meant to be
a DRBD primary/primary cluster ran as primary/secondary, with GFS mounted
on only one server, until the VLAN could be fixed. I got the single server
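Incidentally, if the VLAN can never carry multicast, cluster stacks from RHEL/CentOS 6.2 onwards can fall back to UDP unicast; an untested sketch of the relevant /etc/cluster/cluster.conf fragment:

<cman transport="udpu"/>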
2010 Mar 27
1
DRBD,GFS2 and GNBD without all clustered cman stuff
Hi all,
Where I want to end up:
1) having two storage servers replicating a partition with DRBD
2) exporting the DRBD device via GNBD from the primary server, with GFS2 on it
3) importing the GNBD on some nodes and mounting it with GFS2
Assuming there are no logical errors in the points above, this is the
situation:
Server 1: LogVol09, DRBD configured as /dev/drbd0, replicated to Server 2.
DRBD seems to work
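For point 1, a minimal DRBD resource definition along these lines would cover the replication side (hostnames, backing volume path, and addresses are placeholders):

# /etc/drbd.d/r0.res
resource r0 {
  protocol C;
  on server1 {
    device    /dev/drbd0;
    disk      /dev/VolGroup00/LogVol09;
    address   192.168.0.1:7788;
    meta-disk internal;
  }
  on server2 {
    device    /dev/drbd0;
    disk      /dev/VolGroup00/LogVol09;
    address   192.168.0.2:7788;
    meta-disk internal;
  }
}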
2009 Jun 05
2
Dovecot + DRBD/GFS mailstore
Hi guys,
I'm looking at the possibility of running a pair of servers with
Dovecot LDA/imap/pop3 using internal drives with DRBD and GFS (or
other clustered FS) for the mail storage and ext3 for the root drive.
I'm currently using maildrop for delivery and Dovecot imap/pop3 with
the stores over NFS. I'm looking for better performance but still
keeping the HA element I have now with
2011 Dec 14
1
[PATCH] mkfs: optimization and code cleanup
Optimize by reducing the number of STREQ operations, and do some
code cleanup.
Signed-off-by: Wanlong Gao <gaowanlong at cn.fujitsu.com>
---
daemon/mkfs.c | 29 +++++++++++++----------------
1 files changed, 13 insertions(+), 16 deletions(-)
diff --git a/daemon/mkfs.c b/daemon/mkfs.c
index a2c2366..7757623 100644
--- a/daemon/mkfs.c
+++ b/daemon/mkfs.c
@@ -44,13 +44,16 @@ do_mkfs_opts (const
2010 Feb 17
3
GlusterFs - Any new progress reports?
GlusterFS always strikes me as being "the solution" (one day...). It's
had a lot of growing pains, but a few on the list have had success using
it already.
Given some time has gone by since I last asked - has anyone got any more
recent experience with it and how has it worked out with particular
emphasis on Dovecot maildir storage? How has version 3 worked out for
2010 Mar 24
3
mounting gfs partition hangs
Hi,
I have configured two machines for testing GFS filesystems. They are
attached to an iSCSI device, and the CentOS versions are:
CentOS release 5.4 (Final)
Linux node1.fib.upc.es 2.6.18-164.el5 #1 SMP Thu Sep 3 03:33:56 EDT 2009
i686 i686 i386 GNU/Linux
The problem is that if I try to mount a GFS partition, it hangs.
[root at node2 ~]# cman_tool status
Version: 6.2.0
Config Version: 29
Cluster Name:
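When a GFS mount hangs while cman is otherwise quorate, the first things worth checking are usually fencing and the group state, e.g.:

cman_tool nodes        # both nodes should show status M (member)
group_tool ls          # fence/dlm/gfs groups stuck in JOIN indicate a problem
ps aux | grep fenced   # a pending fence operation blocks all GFS mounts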
2009 Jun 05
1
DRBD+GFS - Logical Volume problem
Hi list.
I am dealing with DRBD (with GFS, and its DLM, on top). A GFS
configuration needs a CLVMD configuration. So, after synchronizing my
(two) /dev/drbd0 block devices, I start the clvmd service and try to
create a clustered logical volume. I get this:
On "alice":
[root at alice ~]# pvcreate /dev/drbd0
Physical volume "/dev/drbd0" successfully created
[root at alice ~]# vgcreate
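For reference, the usual continuation, assuming clvmd is running on every node and lvm.conf has locking_type = 3 (the volume group and LV names here are illustrative):

vgcreate -cy vg_cluster /dev/drbd0           # -cy flags the VG as clustered
lvcreate -n lv_data -l 100%FREE vg_cluster   # clustered LV visible on all nodes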