similar to: Maildirs++ on ext4

Displaying 20 results from an estimated 4000 matches similar to: "Maildirs++ on ext4"

2009 Nov 23
5
[OT] DRBD
Hello all, has anyone worked with DRBD (http://www.drbd.org) for HA of mail storage? If so, does it have stability issues? Comments and experiences are appreciated :) Thanks, Rodolfo.
2010 Feb 19
2
Best inode_ratio for maildir++ on ext4
Hi, This might be a silly question: which would be the best inode ratio for a 5 TB filesystem dedicated to Maildir++ storage? I use Ubuntu Server, which has a preconfigured setting for mkfs.ext4 called "news" with inode_ratio = 4096, and after formatting the fs with that setting and then with the default setting I see this difference of space (wasted space, but more inodes): 4328633696
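The tradeoff in the entry above can be estimated directly: ext4's inode_ratio is bytes-per-inode, so the inode count is roughly the filesystem size divided by the ratio. A minimal sketch, assuming a 5 TiB filesystem and comparing the "news" profile's ratio of 4096 against the common default of 16384:

```shell
# inode_ratio is bytes-per-inode: inodes ≈ fs_size / inode_ratio
fs_bytes=$((5 * 1024 * 1024 * 1024 * 1024))   # 5 TiB

echo "news profile (ratio 4096):     $((fs_bytes / 4096)) inodes"
echo "default profile (ratio 16384): $((fs_bytes / 16384)) inodes"

# Formatting with the preconfigured profile (device name is hypothetical):
# mkfs.ext4 -T news /dev/sdX
```

Each inode costs on-disk space (256 bytes by default on ext4), which accounts for the difference the poster observed: four times the inodes means correspondingly less usable space.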
2009 Nov 10
1
Multiple instances of dovecot in the same machine.
Hello all, I'm trying to run several instances of dovecot, each with a different config, following the instructions from here: http://wiki.dovecot.org/RunningDovecot I run the command dovecot -c /etc/dovecot/installN/dovecot.conf and get: --- Error: ssl_cert_file: Can't use /etc/ssl/certs/dovecot.pem: No such file or directory Fatal: Invalid configuration in
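The ssl_cert_file error above usually means the instance's config points at a certificate path that doesn't exist on disk. A minimal per-instance sketch, using dovecot 1.x option names (the paths are hypothetical examples, not from the original post):

```
# /etc/dovecot/installN/dovecot.conf
base_dir = /var/run/dovecot-installN/                 # each instance needs its own base_dir
ssl_cert_file = /etc/ssl/certs/dovecot-installN.pem   # file must exist...
ssl_key_file = /etc/ssl/private/dovecot-installN.pem
# ...or disable SSL for this instance instead:
# ssl_disable = yes
```

Each instance is then started with its own config, as in the excerpt: `dovecot -c /etc/dovecot/installN/dovecot.conf`.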
2006 Jan 24
2
Maildirquota
Hi, I'd like to use dovecot+postfix (with Maildir) for several virtual email servers, but one requirement is quota support. I'm aware of the lack of maildirquota support in dovecot, and I found a patch in the "unofficial patches" section which is said to provide maildirquota support to dovecot. How stable is this patch? Is it usable with dovecot 1.0b2? And, is there
2019 Mar 21
3
Maildirs on AWS EFS
Hello, One month ago AWS released an EFS option with a managed lifecycle, which means that files not accessed in the last 30 days are moved to a lower-cost storage tier. Currently I hold my e-mail, delivered to Maildir++ folders by postfix and retrieved with Dovecot, in standard EBS volumes. This has the disadvantage that I need to allocate more than enough space to ensure that the volume
2009 Dec 09
2
Mail.app + dovecot 1.2 + POP3
Hello all, I have a user with MacOS X 10.5 Mail.app 3.5, who connects using POP3 to a dovecot 1.2 server. He has a 512 kbps link, which he saturates downloading large files. While the bandwidth is saturated, Mail.app shows the POP3 account as "offline" (I haven't used that client, but it seems that it doesn't retry the connection if it fails at some point). He
2013 Nov 24
3
The state of btrfs RAID6 as of kernel 3.13-rc1
Hi What is the general state of btrfs RAID6 as of kernel 3.13-rc1 and the latest btrfs tools? More specifically: - Is it able to correct errors during scrubs? - Is it able to transparently handle disk failures without downtime? - Is it possible to convert btrfs RAID10 to RAID6 without recreating the fs? - Is it possible to add/remove drives to a RAID6 array? Regards, Hans-Kristian -- To
2013 Dec 10
2
gentoo linux, problem starting VMs when cache=none
hello mailinglist, on a gentoo system with qemu-1.6.1, libvirt 1.1.4, libvirt-glib-0.1.7, virt-manager 0.10.0-r1, when I set "cache=none" in the disk menu of a virtual machine, the machine fails to start with: << Error starting domain: internal error: process exited while connecting to monitor: qemu-system-x86_64: -drive
2013 Jun 11
1
cluster.min-free-disk working?
Hi, I have a system consisting of four bricks, using 3.3.2qa3. I used the command gluster volume set glusterKumiko cluster.min-free-disk 20% Two of the bricks were empty, and two were full to just under 80% when building the volume. Now, when syncing data (from a primary system) and using min-free-disk 20%, I thought new data would go to the two empty bricks, but gluster does not seem
2003 Aug 14
1
Re: Samba vs. Windows : significant difference intimestamp handling?
>> > > Fine. Use reiserfs and don't worry about ctime. >> > >> > Why? Does reiserfs handle ctime in a different >> > way than other linux filesystems? >> >> It's not supposed to, given the same instructions >> from clients, but it appears to, perhaps because it >> elicits a different kind of response from Office. >> Maybe
2001 May 02
1
ext3 versus Reiser
Greetings, I'll keep this as brief as possible: Is Redhat going to support ext3 or reiser (or both) in the next version of Redhat (I'm not looking for a commitment, just a roadmap)? I noticed (and used) the reiser utilities in the Wolverine beta (on a test box) to make reiser partitions, but upgrading to the 7.1 release failed as it only saw ext2 partitions. I understand that ext3 is
2009 Aug 05
3
RAID[56] with arbitrary numbers of "parity" stripes.
We discussed using the top bits of the chunk type field to store a number of redundant disks -- so instead of RAID5, RAID6, etc., we end up with a single 'RAID56' flag, and the amount of redundancy is stored elsewhere. This attempts it, but I hate it and don't really want to do it. The type field is designed as a bitmask, and _used_ as a bitmask in a number of
2012 Nov 13
1
mdX and mismatch_cnt when building an array
CentOS 6.3, x86_64. I have noticed when building a new software RAID-6 array on CentOS 6.3 that the mismatch_cnt grows monotonically while the array is building: # cat /proc/mdstat Personalities : [raid6] [raid5] [raid4] md11 : active raid6 sdg[5] sdf[4] sde[3] sdd[2] sdc[1] sdb[0] 3904890880 blocks super 1.2 level 6, 512k chunk, algorithm 2 [6/6] [UUUUUU]
2014 Aug 28
2
Random Disk I/O Tests
I have two openvz servers running Centos 6.x both with 32GB of RAM. One is an Intel Xeon E3-1230 quad core with two 4TB 7200 SATA drives in software RAID1. The other is an old HP DL380 dual quad core with 8 750GB 2.5" SATA drives in hardware RAID6. I want to figure out which one has better random I/O performance to host a busy container. The DL380 currently has one failed drive in the
2014 Jun 11
2
Re: libguestfs supermin error
On Wed, Jun 11, 2014 at 01:00:16PM +0530, abhishek jain wrote: > Hi RIch > > Below are the logs updated logs of libguestfs-test-tool on ubuntu powerpc... > > libguestfs-test-tool > ************************************************************ > * IMPORTANT NOTICE > * > * When reporting bugs, include the COMPLETE, UNEDITED >
2009 Sep 24
4
mdadm size issues
Hi, I am trying to create a 10 drive raid6 array. OS is Centos 5.3 (64 Bit) All 10 drives are 2T in size. device sd{a,b,c,d,e,f} are on my motherboard device sd{i,j,k,l} are on a pci express areca card (relevant lspci info below) #lspci 06:0e.0 RAID bus controller: Areca Technology Corp. ARC-1210 4-Port PCI-Express to SATA RAID Controller The controller is set to JBOD the drives. All
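For a sanity check on the entry above: RAID6 dedicates two drives' worth of capacity to parity regardless of array width, so a 10-drive array of 2 TB drives (as stated) should yield roughly:

```shell
# RAID6 usable capacity = (n - 2) * drive_size
n_drives=10
drive_tb=2
echo "expected usable capacity: $(( (n_drives - 2) * drive_tb )) TB"
```

If mdadm reports far less than that, a per-device limit is the usual suspect: for example, MSDOS/MBR partition tables cap partitions at 2 TiB, and some controllers truncate large JBOD drives.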
2009 Oct 19
2
Can't use string ("0") as a HASH ref while "strict refs" in use at /usr/local/sbin/courier-dovecot-migrate.pl line 300.
Hello, I need to migrate from courier-imap to dovecot. I'm trying to use courier-dovecot-migrate.pl to migrate the users' maildirs but I'm getting these messages: --- # courier-dovecot-migrate.pl --to-dovecot --recursive mydomain.com Testing conversion to Dovecot format Finding maildirs under mydomain.com mydomain.com/contact/./Maildir: No imap/pop3 uidlist files
2006 Mar 15
5
Possible bug with multiport?
Hi Folks: I am either using the multiport (-m/--match) option of iptables incorrectly, or there is a bug in it. Is anyone else using it without problems? This is the way I am trying to use it: my_ports=21,25,80 iptables -t nat -A PREROUTING -i $wan_addr -p tcp -m multiport --dports $my_ports -j DNAT --to $my_internal_address I have used this in the past successfully but that was a
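One detail worth checking in the rule above: iptables' `-i` takes an in-interface name (e.g. eth0), not an address, so a variable named $wan_addr may be the real problem rather than multiport. A sketch that only assembles and prints the corrected rule, with hypothetical values (applying it for real requires root):

```shell
# -i expects an interface name; DNAT --to takes the internal address
wan_if=eth0
my_ports=21,25,80
internal_addr=192.168.1.10

echo iptables -t nat -A PREROUTING -i "$wan_if" -p tcp \
    -m multiport --dports "$my_ports" -j DNAT --to "$internal_addr"
```

The multiport match itself accepts up to 15 comma-separated ports, so a three-port list like this is well within its limits.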
2003 Aug 13
1
Re: Samba vs. Windows : significant difference intimestamp handling ?
>> > > ... On my PCs the mtime remains unmodified. >> > > It's a weird thing if it happens under normal >> > > circumstances ... But if it only happens when >> > > you fake the identity from within the Office >> > > programs, well, I wouldn't bother really. >> > > >> > I totally agree ! >> >>
2005 Oct 04
2
what's the best filesystem
Curious what folks think the best choice of filesystem is, specifically when used for Samba and regular Unix users (SMB+NFS access). I'm using Reiser now but it's slow and doesn't handle XP attributes well. I've also used XFS (a couple of years ago) and liked it, but had some trouble with cross-platform Unix and the extended ACLs. What's the preference today? Thanks -- Eric A.