Displaying 20 results from an estimated 20000 matches similar to: "storage for mailserver"
2020 Sep 17
2
storage for mailserver
Hello Phil,
Wednesday, September 16, 2020, 7:40:24 PM, you wrote:
PP> You can achieve this with a hybrid RAID1 by mixing SSDs and HDDs, and
PP> marking the HDD members as --write-mostly, meaning most of the reads
PP> will come from the faster SSDs retaining much of the speed advantage,
PP> but you have the redundancy of both SSDs and HDDs in the array.
PP> Read performance is
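A minimal sketch of the hybrid array described above, assuming the SSD is
/dev/sdb1 and the HDD is /dev/sdc1 (device names are illustrative, not from
the thread):

    # RAID1 with the HDD member flagged write-mostly, so reads are served
    # from the SSD whenever both members are in sync
    mdadm --create /dev/md0 --level=1 --raid-devices=2 \
          /dev/sdb1 --write-mostly /dev/sdc1

    cat /proc/mdstat    # write-mostly members are shown with a (W) suffix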
2020 Sep 19
1
storage for mailserver
On 9/17/20 4:25 PM, Phil Perry wrote:
> On 17/09/2020 13:35, Michael Schumacher wrote:
>> Hello Phil,
>>
>> Wednesday, September 16, 2020, 7:40:24 PM, you wrote:
>>
>> PP> You can achieve this with a hybrid RAID1 by mixing SSDs and HDDs, and
>> PP> marking the HDD members as --write-mostly, meaning most of the reads
>> PP> will come from the
2020 Mar 24
2
Building a NFS server with a mix of HDD and SSD (for caching)
Hi list,
I'm building an NFS server on top of CentOS 8.
It has 8 x 8 TB HDDs and 2 x 500GB SSDs.
The spinning drives are in a RAID-6 array. They have a 4K sector size.
The SSDs are in a RAID-1 array with a 512-byte sector size.
I want to use the SSDs as a cache using dm-cache. So here is what I've done
so far:
/dev/sdb ==> SSD raid1 array
/dev/sdd ==> spinning raid6 array
I've
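A rough sketch of one way to wire the two arrays named above into lvmcache
(which drives dm-cache underneath); the VG name, LV names and sizes are
invented for illustration:

    pvcreate /dev/sdd /dev/sdb
    vgcreate nfsvg /dev/sdd /dev/sdb

    # origin LV on the spinning RAID-6
    lvcreate -n export -L 40T nfsvg /dev/sdd

    # cache pool on the SSD RAID-1, then attach it to the origin;
    # writethrough keeps the HDD array authoritative for every write
    lvcreate --type cache-pool -L 400G -n cpool nfsvg /dev/sdb
    lvconvert --type cache --cachepool nfsvg/cpool --cachemode writethrough nfsvg/export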
2020 Sep 16
0
storage for mailserver
On 16/09/2020 17:11, Michael Schumacher wrote:
> hi,
>
> I am planning to replace my old CentOS 6 mail server soon. Most details
> are quite obvious and do not need to be changed, but the old system
> was running on spinning discs and this is certainly not the best
> option for today's mail servers.
>
> With spinning discs, HW-RAID6 was the way to go to increase
2020 Sep 16
0
storage for mailserver
On Wed, 16 Sep 2020 at 12:12, Michael Schumacher <
michael.schumacher at pamas.de> wrote:
> hi,
>
> I am planning to replace my old CentOS 6 mail server soon. Most details
> are quite obvious and do not need to be changed, but the old system
> was running on spinning discs and this is certainly not the best
> option for today's mail servers.
>
> With spinning discs,
2017 Apr 08
2
lvm cache + qemu-kvm stops working after about 20GB of writes
Hello,
I would really appreciate some help/guidance with this problem. First of
all sorry for the long message. I would file a bug, but do not know if
it is my fault, dm-cache, qemu or (probably) a combination of both. And
I can imagine some of you have this setup up and running without
problems (or maybe you think it works, just like I did, but it does not):
PROBLEM
LVM cache writeback
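Not part of the original report, but when debugging this kind of setup it
helps to confirm what the cache is actually doing; two illustrative commands,
with vg/cached_lv standing in for whatever the affected LV is called:

    # dm-cache target status: block counts, dirty blocks, read/write hits and misses
    dmsetup status vg-cached_lv

    # switch the cache to writethrough to test whether writeback is the trigger
    lvchange --cachemode writethrough vg/cached_lv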
2012 Oct 02
2
new "large" fileserver config questions
Hi all,
I was recently charged with configuring a new fairly large (24x3TB
disks) fileserver for my group. I think I know mostly what I want to do
with it, but I did have two questions, at least one of which is directly
related to CentOS.
1) The controller node has two 90GB SSDs that I plan to use as a
bootable RAID1 system disk. What is the preferred method for laying
out the RAID array? I
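One common way to lay out such a pair of boot SSDs, sketched here with
made-up device names rather than as an answer from the thread: partition both
disks identically (a small /boot partition plus the remainder), then mirror
partition by partition:

    # /boot: metadata 1.0 keeps the md superblock at the end of the partition,
    # so the bootloader still sees an ordinary filesystem
    mdadm --create /dev/md0 --level=1 --raid-devices=2 --metadata=1.0 /dev/sda1 /dev/sdb1

    # the rest becomes a PV for root, swap, etc.
    mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
    pvcreate /dev/md1
    vgcreate sysvg /dev/md1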
2020 Sep 17
0
storage for mailserver
On 17/09/2020 13:35, Michael Schumacher wrote:
> Hello Phil,
>
> Wednesday, September 16, 2020, 7:40:24 PM, you wrote:
>
> PP> You can achieve this with a hybrid RAID1 by mixing SSDs and HDDs, and
> PP> marking the HDD members as --write-mostly, meaning most of the reads
> PP> will come from the faster SSDs retaining much of the speed advantage,
> PP> but you
2020 Apr 20
4
performance problems with notmuch new
Franz Fellner <alpine.art.de at gmail.com> writes:
> I also suffer from bad performance of notmuch new. I used notmuch
> some years ago and notmuch new always felt instantaneous. Had to stop
> using it because internet was too slow to sync my mails :/ Now (with
> better internet and a completely new setup using mbsync) indexing one
> mail takes at least 10 seconds,
2017 Sep 13
3
stripe size for SSDs? ( cyrus spool on btrfs?)
Stephen John Smoogen wrote:
> On 13 September 2017 at 09:25, hw <hw at gc-24.de> wrote:
>> John R Pierce wrote:
>>>
>>> On 9/9/2017 9:47 AM, hw wrote:
>>>>
>>>>
>>>> Isn't it easier for SSDs to write small chunks of data at a time?
>>>> The small chunk might fit into some free space more easily than
>>>> a
2009 Feb 13
3
Bonnie++ run with RAID-1 on a single SSD (2.6.29-rc4-224-g4b6136c)
Hi folks,
For people who might be interested, here is how btrfs performs
with two partitions on a single SSD drive in a RAID-1 mirror.
This is on a Dell E4200 with Core 2 Duo U9300 (1.2GHz), 2GB RAM
and a Samsung SSD (128GB Thin uSATA SSD).
Version 1.03c ------Sequential Output------ --Sequential Input- --Random-
-Per Chr- --Block-- -Rewrite- -Per Chr- --Block--
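For reference, results of this shape are typically produced along these
lines; the device names, sizes and mount point below are illustrative, not
taken from the original post:

    # btrfs with data and metadata both mirrored across two partitions of the same SSD
    mkfs.btrfs -d raid1 -m raid1 /dev/sda2 /dev/sda3
    mount /dev/sda2 /mnt/test

    # bonnie++ sized well above RAM so the page cache cannot hide the device
    bonnie++ -d /mnt/test -s 4g -n 0 -u root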
2014 Oct 09
3
dovecot replication (active-active) - server specs
Hello,
I have some questions about the new dovecot replication and mdbox format.
My company currently has 3 old dovecot 2.0.x fileservers/backends with ca. 120k mailboxes and ca. 6 TB of data used.
They are synchronised via drbd/corosync.
Each fileserver/backend has ca. 40k mailboxes in Maildir format.
Our MX server delivers ca. 30 GB of new mail per day.
Two IMAP proxy servers get the
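For orientation, the dsync-based replication being discussed is normally
enabled with settings along these lines on both backends; the host name, port
and password here are placeholders, not the poster's values:

    mail_plugins = $mail_plugins notify replication

    service replicator {
      process_min_avail = 1
    }

    plugin {
      mail_replica = tcp:backend2.example.com
    }

    # doveadm listener used by the replicator to reach the other backend
    service doveadm {
      inet_listener {
        port = 12345
      }
    }
    doveadm_port = 12345
    doveadm_password = secret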
2023 Mar 15
1
Kernel updates do not boot - always boots oldest kernel
>
>
> > I have only changed GRUB_DEFAULT from "saved" to "0"
> >
> > I have also run
> >
> > /usr/sbin/grub2-mkconfig -o /boot/efi/EFI/centos/grub.cfg
>
> I may be wrong here but IIRC, using grub2-mkconfig as described in the
> Grub docs didn't work for me when I tried to use it years ago.
>
> I think you have to find out
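For completeness, these are the stock commands usually used on a CentOS EFI
install to inspect and change which kernel boots by default (whether they
behave on this particular system is exactly what the thread is debating):

    grubby --default-kernel                          # show the kernel that will boot by default
    grubby --set-default=/boot/vmlinuz-<version>     # make a specific kernel the default
    grub2-set-default 0                              # or pick the first (newest) menu entry
    grub2-mkconfig -o /boot/efi/EFI/centos/grub.cfg  # regenerate the menu afterwards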
2023 Jan 11
1
Upgrading system from non-RAID to RAID1
I plan to upgrade an existing C7 computer which currently has one 256 GB SSD to use mdadm software RAID1 after adding two 4 TB M.2 SSDs, the rest of the system remaining the same. The system also has one additional internal and one external harddisk but these should not be touched. The system will continue to run C7.
If I remember correctly, the existing SSD does not use an M.2 slot so they
2017 Sep 08
4
cyrus spool on btrfs?
On Fri, September 8, 2017 12:56 pm, hw wrote:
> Valeri Galtsev wrote:
>>
>> On Fri, September 8, 2017 9:48 am, hw wrote:
>>> m.roth at 5-cent.us wrote:
>>>> hw wrote:
>>>>> Mark Haney wrote:
>>>> <snip>
>>>>>> BTRFS isn't going to impact I/O any more significantly than, say,
>>>>>> XFS.
2011 Nov 08
6
Couple of questions about ZFS on laptops
Hello all,
I am thinking about a new laptop. I see that there are
a number of higher-performance models (incidenatlly, they
are also marketed as "gamer" ones) which offer two SATA
2.5" bays and an SD flash card slot. Vendors usually
position the two-HDD bay part as either "get lots of
capacity with RAID0 over two HDDs, or get some capacity
and some performance by mixing one
2016 Jan 05
4
SSD drives for the OS - 1 or 2?
Preparing to build a small replacement server (initially built in 2005)
and normally for the OS I would buy 2x500GB drives and deploy in a RAID
1 configuration.
Now we have SSD drives available
- does just a single SSD drive offer the same reliability or is there an
advantage in deploying two in a RAID 1 config?
Also, what form factor / interface is best for the SSD OS boot device on
a server
2023 Jan 11
2
Upgrading system from non-RAID to RAID1
On 01/11/2023 02:09 AM, Simon Matter wrote:
> What I usually do is this: "cut" the large disk into several pieces of
> equal size and create individual RAID1 arrays. Then add them as LVM PVs to
> one large VG. The advantage is that with one error on one disk, you won't
> lose redundancy on the whole RAID mirror but only on a partial segment.
> You can even lose another
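A small sketch of that layout with invented device and VG names, assuming
each 4 TB disk has been split into two equal partitions (the thread does not
give concrete sizes):

    # one RAID1 per pair of equally sized partitions
    mdadm --create /dev/md10 --level=1 --raid-devices=2 /dev/nvme0n1p1 /dev/nvme1n1p1
    mdadm --create /dev/md11 --level=1 --raid-devices=2 /dev/nvme0n1p2 /dev/nvme1n1p2

    # all segments become PVs in one large VG
    pvcreate /dev/md10 /dev/md11
    vgcreate datavg /dev/md10 /dev/md11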
2017 Sep 08
3
cyrus spool on btrfs?
On 09/08/2017 01:31 PM, hw wrote:
> Mark Haney wrote:
>
> I/O is not heavy in that sense, that's why I said that's not the
> application.
> There is I/O which, as tests have shown, benefits greatly from low
> latency, which
> is where the idea to use SSDs for the relevant data has arisen from.
> This I/O
> only involves a small amount of data and is not sustained
2012 Jul 30
10
encfs on top of zfs
Dear ZFS-Users,
I want to switch to ZFS, but still want to encrypt my data. Native
encryption for ZFS was added in ZFS pool version 30
(http://en.wikipedia.org/wiki/ZFS#Release_history),
but I'm using ZFS on FreeBSD with pool version 28. My question is how
encfs (FUSE encryption) would affect ZFS-specific features like data
integrity and deduplication?
Regards
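A minimal sketch of the layering being asked about, with an invented dataset
and mount point: encfs keeps its encrypted files inside an ordinary ZFS
dataset, so ZFS checksums (and would deduplicate) only the ciphertext, while
the decrypted view is exposed through a FUSE mount:

    zfs create tank/encfs-backend
    encfs /tank/encfs-backend /mnt/plain    # prompts to initialise the encrypted directory on first use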