Displaying 20 results from an estimated 3000 matches similar to: "dovecot Digest, Vol 93, Issue 41"
2011 May 05
5
Dovecot imaptest on RHEL4/GFS1, RHEL6/GFS2, NFS and local storage results
We have run some benchmarking tests with Dovecot 2.0.12 to find the best
shared filesystem for hosting many users. Here I share the results with you;
note the poor performance of all the shared filesystems compared to local
storage.
Is there any specific optimization/tuning in Dovecot for using GFS2 on
RHEL 6? We have configured the director to keep each user's mailbox persistent
on one node; we will
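For reference, a director setup of the kind described above is mostly a proxy
layer in front of the GFS2 backends; a minimal sketch in Dovecot 2.0 syntax,
with hypothetical addresses, looks roughly like this:

  # on the director(s): list of directors and of backend mail servers
  director_servers = 10.0.0.10 10.0.0.11
  director_mail_servers = 10.0.1.10 10.0.1.11

  # forward every login to the backend chosen by the user hash
  passdb {
    driver = static
    args = proxy=y nopassword=y
  }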
2012 Jan 04
1
GPFS for mail-storage (Was: Re: Compressing existing maildirs)
Great information, thank you. Could you remark on GPFS services hosting mail storage over a WAN between two geographically separated data centers?
----- Reply message -----
From: "Jan-Frode Myklebust" <janfrode at tanso.net>
To: "Stan Hoeppner" <stan at hardwarefreak.com>
Cc: "Timo Sirainen" <tss at iki.fi>, <dovecot at dovecot.org>
Subject:
2010 Jul 19
1
GFS performance issue
Two web servers, both virtualized with CentOS Xen servers as hosts
(residing on two different physical servers).
GFS is used to store home directories containing the web document roots.
The shared block device used by GFS is an iSCSI target, with the iSCSI
initiator residing on the Dom-0 and presented to the Dom-U webservers as
drives.
We also provide a second shared block device for a quorum disk.
If I hit the
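As a rough point of reference, a quorum disk on such a second shared block
device is normally labelled with mkqdisk and then referenced from cluster.conf;
a sketch with a hypothetical device and label:

  mkqdisk -c /dev/sdc1 -l webqdisk

  <!-- cluster.conf fragment -->
  <quorumd interval="1" tko="10" votes="1" label="webqdisk"/>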
2010 Nov 02
8
remote hot site, IMAP replication or cluster over WAN
Taking a survey.
1. How many of you have a remote site hot backup Dovecot IMAP server?
2. How are you replicating mailbox data to the hot backup system?
A. rsync
B. DRBD+GFS2
C. Other
Thanks.
--
Stan
2010 Feb 16
2
Highly Performance and Availability
Hello everyone,
I am currently running Dovecot as a high performance solution to a particular
kind of problem. My userbase is small, but it murders email servers. The volume
is moderate, but message retention requirements are stringent, to put it nicely.
Many users receive a high volume of email traffic, but want to keep every
message, and *search* them. This produces mail accounts up to
2008 Sep 24
3
Dovecot performance on GFS clustered filesystem
Hello All,
We are using Dovecot 1.1.3 to serve IMAP on a pair of clustered Postfix
servers which share a fiber array via the GFS clustered filesystem.
This all works very well for the most part, with the exception that
certain operations are so inefficient on GFS that they generate
significant I/O load and hurt performance. We are using the Maildir
format on disk. We're also using
2013 Aug 21
2
Dovecot tuning for GFS2
Hello,
I'm deploying a new email cluster using Dovecot over GFS2. Currently I'm
using Courier over GFS.
I'm testing Dovecot with these parameters:
mmap_disable = yes
mail_fsync = always
mail_nfs_storage = yes
mail_nfs_index = yes
lock_method = fcntl
Are they correct?
Red Hat GFS2 supports mmap, so is it better to enable it or leave it disabled?
The documentation suggests the
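For comparison, a sketch of the settings usually suggested for a cluster
filesystem such as GFS2 (the mail_nfs_* flushes are NFS-specific and are
generally left off for GFS2; treat this as an assumption to verify, not a
tested configuration):

  mmap_disable = yes      # safe default on shared filesystems
  mail_fsync = always
  lock_method = fcntl
  mail_nfs_storage = no   # NFS attribute-cache flushing, not needed here
  mail_nfs_index = no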
2011 Jun 11
2
mmap in GFS2 on rhel 6.1
Hello list, we are continuing our tests of Dovecot on a RHEL 6.1 cluster
backend with GFS2; we are also using Dovecot as a director for user-to-node
persistence. Everything was fine until we started stress testing the solution
with imaptest: we hit many deadlocks, cluster filesystem corruptions and
hangs, especially on the index filesystem. We have configured the backend as
if it were an NFS-like setup
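One common way to take index traffic off the shared volume in a director
setup like this is to keep Dovecot's index files on local disk per backend;
a minimal sketch (the index path is hypothetical):

  mail_location = maildir:~/Maildir:INDEX=/var/lib/dovecot/indexes/%u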
2012 Nov 27
6
CTDB / Samba / GFS2 - Performance - with Picture Link
Hello,
maybe someone can help and answer the question of why I get these network graphs on my CTDB clusters. I have two CTDB clusters, one physical and one in a VMware environment.
When I transfer (copy) files on a Samba share, I get network curves like these, with performance breaks. I don't see the transfer stop, but why is that so? Can I change anything, or does anybody know
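For context, a CTDB/Samba cluster of this kind usually combines at least the
following pieces (paths below are hypothetical):

  # smb.conf
  [global]
      clustering = yes

  # /etc/sysconfig/ctdb
  CTDB_RECOVERY_LOCK=/gfs2/ctdb/.recovery_lock   # must live on the shared GFS2
  CTDB_NODES=/etc/ctdb/nodes
  CTDB_PUBLIC_ADDRESSES=/etc/ctdb/public_addresses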
2017 Sep 29
1
Gluster geo replication volume is faulty
I am trying to set up geo-replication between two Gluster volumes.
I have set up two replica 2 arbiter 1 volumes with 9 bricks.
[root at gfs1 ~]# gluster volume info
Volume Name: gfsvol
Type: Distributed-Replicate
Volume ID: c2fb4365-480b-4d37-8c7d-c3046bca7306
Status: Started
Snapshot Count: 0
Number of Bricks: 3 x (2 + 1) = 9
Transport-type: tcp
Bricks:
Brick1: gfs2:/gfs/brick1/gv0
Brick2:
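The geo-replication session itself is normally created and started with the
gluster CLI along these lines (the slave host and slave volume names are
hypothetical):

  gluster system:: execute gsec_create
  gluster volume geo-replication gfsvol slavehost::gfsvol_slave create push-pem
  gluster volume geo-replication gfsvol slavehost::gfsvol_slave start
  gluster volume geo-replication gfsvol slavehost::gfsvol_slave status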
2009 Nov 08
1
[PATCH] appliance: Add support for btrfs, GFS, GFS2, JFS, HFS, HFS+, NILFS, OCFS2
I've tested all these filesystems here:
http://rwmj.wordpress.com/2009/11/08/filesystem-metadata-overhead/
--
Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones
virt-top is 'top' for virtual machines. Tiny program with many
powerful monitoring features, net stats, disk stats, logging, etc.
http://et.redhat.com/~rjones/virt-top
2008 May 29
3
GFS
Hello:
I am planning to implement GFS for my university as a summer project. I have
10 servers, each with SAN disks attached. I will be reading and writing many
files for professors' research projects. Each file can be anywhere from 1 KB
to 120 GB (fluid dynamics research images). The 10 servers will be using NIC
bonding (1 Gb network). So, would GFS be ideal for this? I have been reading
a lot
2007 Apr 05
7
Problems using GFS2 and clustered dovecot
I am trying to use dovecot. I've got a GFS2 shared volume on two servers
with dovecot running on both. On one server at a time, it works.
The test I am trying is to attach two mail programs (MUA) via IMAPS
(Thunderbird and Evolution as it happens). I've attached one mail
program to each IMAPS server. I am trying to move emails around in one
program (from folder to folder), and then
2008 Jun 18
1
mkfs.ocfs2: double free or corruption
Dear Sirs,
I get this error when running "mkfs.ocfs2":
=================================================================================
# mkfs.ocfs2 -b 4K -C 32K -N 255 -L backup_ocfs2_001 /dev/sdb1
mkfs.ocfs2 1.2.7
Filesystem label=backup_ocfs2_001
Block size=4096 (bits=12)
Cluster size=32768 (bits=15)
Volume size=6000488677376 (183120382 clusters) (1464963056 blocks)
5678 cluster
2011 Feb 27
1
Recover botched DRBD GFS2 setup
Hi.
The short story... Rush job, never done clustered filesystems before, and the
VLAN didn't support multicast. Thus I ended up with DRBD working OK between
the two servers but cman/GFS2 not working, so what was meant to be a DRBD
primary/primary cluster became a primary/secondary cluster until the VLAN
could be fixed, with GFS2 only mounted on the one server. I got the single
server
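Once fencing and cman are working again, the dual-primary side of DRBD is
normally enabled with a resource configuration roughly like the following
sketch (the resource name r0 is hypothetical, DRBD 8.x syntax):

  resource r0 {
    net {
      allow-two-primaries;
    }
    startup {
      become-primary-on both;
    }
  }

  # then, on each node:
  drbdadm adjust r0
  drbdadm primary r0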
2010 Aug 10
1
GFS/GFS2 on CentOS
Hi all,
If you have had experience hosting GFS/GFS2 on CentOS machines, could
you share your general impression of it? Was it reliable? Fast? Any
issues or concerns?
Also, how feasible is it to start it on just one machine and then grow
it out if necessary?
Thanks.
Boris.
2011 Dec 14
1
[PATCH] mkfs: optimization and code cleanup
Optimize by reducing the STREQ operations and do some code cleanup.
Signed-off-by: Wanlong Gao <gaowanlong at cn.fujitsu.com>
---
daemon/mkfs.c | 29 +++++++++++++----------------
1 files changed, 13 insertions(+), 16 deletions(-)
diff --git a/daemon/mkfs.c b/daemon/mkfs.c
index a2c2366..7757623 100644
--- a/daemon/mkfs.c
+++ b/daemon/mkfs.c
@@ -44,13 +44,16 @@ do_mkfs_opts (const
2009 Aug 03
1
CTDB+GFS2+CMAN. clean_start="0" or clean_start="1"?
Hi everybody,
I have tested CTDB+GFS2+CMAN under Debian. It works well, but I do not
understand some points.
It is possible to run CTDB by defining it under the services section in
cluster.conf, but starting it on the second node shuts down the process on the
first one. My CTDB configuration implies two active-active nodes.
Does CTDB care whether the node starts with clean_start="0" or
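For reference, clean_start is normally set as an attribute of the fence_daemon
element in cluster.conf; a two-node fragment might look roughly like this
sketch (the cluster name is hypothetical, and the clusternode entries are
omitted):

  <cluster name="ctdbcluster" config_version="1">
    <cman two_node="1" expected_votes="1"/>
    <fence_daemon clean_start="0" post_join_delay="30"/>
  </cluster>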
2012 Mar 07
1
[HELP!] GFS2 on Xen 4.1.2 does not work!
[This email is either empty or too large to be displayed at this time]
2010 Mar 27
1
DRBD,GFS2 and GNBD without all clustered cman stuff
Hi all,
What I want to achieve:
1) two storage servers replicating a partition with DRBD
2) exporting the DRBD device from the primary server via GNBD, formatted with GFS2
3) importing the GNBD device on some nodes and mounting it with GFS2
Assuming there is no logical error in the points above, this is the
situation:
Server 1: LogVol09, DRBD configured as /dev/drbd0, replicated to Server 2.
DRBD seems to work
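Assuming the DRBD side is healthy, the GNBD part of steps 2 and 3 above is
usually done along these lines (the export, cluster and filesystem names are
hypothetical):

  # on the primary storage server: start the GNBD server and export the DRBD device
  gnbd_serv
  gnbd_export -d /dev/drbd0 -e drbd_gfs2

  # on each client node: import the exports and mount the filesystem
  gnbd_import -i server1
  mkfs.gfs2 -p lock_dlm -t mycluster:gfs2data -j 3 /dev/gnbd/drbd_gfs2   # run once only
  mount -t gfs2 /dev/gnbd/drbd_gfs2 /mnt/data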