
Displaying 20 results from an estimated 8000 matches similar to: "Btrfs RAID-10 performance"

2020 Sep 10 · 0 · Btrfs RAID-10 performance
>>>>> "Miloslav" == Miloslav Hůla <miloslav.hula at gmail.com> writes: Miloslav> On 09.09.2020 at 17:52, John Stoffel wrote: Miloslav> There is one PCIe RAID controller in a chassis, an AVAGO Miloslav> MegaRAID SAS 9361-8i, and 16x SAS 15k drives connected to Miloslav> it. Because the controller does not support pass-through for Miloslav> the drives,
2020 Sep 10 · 2 · Btrfs RAID-10 performance
On 09.09.2020 at 17:52, John Stoffel wrote: > Miloslav> There is one PCIe RAID controller in a chassis, an AVAGO > Miloslav> MegaRAID SAS 9361-8i, and 16x SAS 15k drives connected to > Miloslav> it. Because the controller does not support pass-through for > Miloslav> the drives, we use 16x RAID-0 on the controller. So, we get > Miloslav> /dev/sda ... /dev/sdp (roughly) in
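For readers who have not done this on LSI/Broadcom hardware, the 16x single-drive RAID-0 workaround described above is typically scripted with storcli along the following lines. This is a rough sketch only; the controller ID (/c0) and enclosure ID (252) are placeholder assumptions, not values from the thread:

  # Create one single-drive RAID-0 virtual drive per physical slot
  # (check `storcli /c0 show` for your real enclosure:slot numbering).
  for slot in $(seq 0 15); do
      storcli /c0 add vd type=raid0 drives=252:$slot
  done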
2020 Sep 09 · 0 · Btrfs RAID-10 performance
>>>>> "Miloslav" == Miloslav Hůla <miloslav.hula at gmail.com> writes: Miloslav> Hi, thank you for your reply. I'll continue inline... Me too... please look for further comments, especially about 'fio' and Netapp usage. Miloslav> On 09.09.2020 at 3:15, John Stoffel wrote: Miloslav> Hello, Miloslav> I sent this to the Linux kernel Btrfs
2020 Sep 15 · 1 · Btrfs RAID-10 performance
On 10.09.2020 at 17:40, John Stoffel wrote: >>> So why not run the backend storage on the Netapp, and just keep the >>> indexes and such local to the system? I've run Netapps for many years >>> and they work really well. And then you'd get automatic backups using >>> scheduled snapshots. >>> >>> Keep the index files local on
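Keeping the indexes local while the mail data lives on the filer, as suggested above, is done in Dovecot via the INDEX parameter of mail_location; a minimal sketch, with both paths being made-up examples:

  # dovecot.conf: maildirs on the Netapp mount, indexes on local disk
  mail_location = maildir:/netapp/mail/%u/Maildir:INDEX=/var/dovecot/indexes/%u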
2020 Sep 09 · 4 · Btrfs RAID-10 performance
Hi, thank you for your reply. I'll continue inline... On 09.09.2020 at 3:15, John Stoffel wrote: > Miloslav> Hello, > Miloslav> I sent this to the Linux kernel Btrfs mailing list and I got the reply: > Miloslav> "RAID-1 would be preferable" > Miloslav> (https://lore.kernel.org/linux-btrfs/7b364356-7041-7d18-bd77-f60e0e2e2112 at lechevalier.se/T/). >
2020 Sep 09 · 0 · Btrfs RAID-10 performance
The 9361-8i does support pass-through (JBOD mode). Make sure you have the latest firmware. On Wednesday, 09/09/2020 at 03:55, Miloslav Hůla wrote: Hi, thank you for your reply. I'll continue inline... On 09.09.2020 at 3:15, John Stoffel wrote: > Miloslav> Hello, > Miloslav> I sent this to the Linux kernel Btrfs mailing list and I got the reply: > Miloslav> "RAID-1
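Assuming a reasonably recent firmware, enabling that JBOD pass-through with storcli usually looks roughly like this; the controller ID /c0 is an assumption, so verify against your own setup before running anything:

  storcli /c0 show jbod            # check whether JBOD mode is available/enabled
  storcli /c0 set jbod=on          # enable JBOD support on the controller
  storcli /c0/eall/sall set jbod   # expose all attached drives as pass-through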
2020 Sep 09 · 0 · Btrfs RAID-10 performance
>>>>> "Miloslav" == Miloslav Hůla <miloslav.hula at gmail.com> writes: Miloslav> Hello, Miloslav> I sent this to the Linux kernel Btrfs mailing list and I got the reply: Miloslav> "RAID-1 would be preferable" Miloslav> (https://lore.kernel.org/linux-btrfs/7b364356-7041-7d18-bd77-f60e0e2e2112 at lechevalier.se/T/). Miloslav> May I ask you
2020 Sep 07 · 0 · Btrfs RAID-10 performance
> On 7. Sep 2020, at 12.38, Miloslav Hůla <miloslav.hula at gmail.com> wrote: > > Hello, > > I sent this to the Linux kernel Btrfs mailing list and I got the reply: "RAID-1 would be preferable" (https://lore.kernel.org/linux-btrfs/7b364356-7041-7d18-bd77-f60e0e2e2112 at lechevalier.se/T/). May I ask for comments from people around Dovecot? > >
2017 Apr 10 · 0 · lvm cache + qemu-kvm stops working after about 20GB of writes
Adding Paolo and Miroslav. On Sat, Apr 8, 2017 at 4:49 PM, Richard Landsman - Rimote <richard at rimote.nl> wrote: > Hello, > > I would really appreciate some help/guidance with this problem. First of > all, sorry for the long message. I would file a bug, but do not know if it > is my fault, dm-cache, qemu, or (probably) a combination of both. And I can > imagine some of
2020 Sep 17 · 0 · storage for mailserver
On 17/09/2020 13:35, Michael Schumacher wrote: > Hello Phil, > > Wednesday, September 16, 2020, 7:40:24 PM, you wrote: > > PP> You can achieve this with a hybrid RAID1 by mixing SSDs and HDDs, and > PP> marking the HDD members as --write-mostly, meaning most of the reads > PP> will come from the faster SSDs retaining much of the speed advantage, > PP> but you
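For context, a hybrid RAID1 of the shape described is typically created with mdadm roughly as below; the device names are placeholders, and --write-mostly applies to the devices listed after it:

  # SSD listed first; the HDD is flagged write-mostly so reads prefer the SSD
  mdadm --create /dev/md0 --level=1 --raid-devices=2 \
        /dev/nvme0n1p1 --write-mostly /dev/sda1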
2017 Apr 20 · 2 · lvm cache + qemu-kvm stops working after about 20GB of writes
Hello everyone, has anybody had the chance to test out this setup and reproduce the problem? I assumed it would be something that's used often these days and a solution would benefit a lot of users. If I can be of any assistance, please contact me. -- Kind regards, Richard Landsman http://rimote.nl T: +31 (0)50 - 763 04 07 (Mon-Fri 9:00 to 18:00) 24/7 for outages: +31 (0)6 - 4388
2020 Sep 19 · 1 · storage for mailserver
On 9/17/20 4:25 PM, Phil Perry wrote: > On 17/09/2020 13:35, Michael Schumacher wrote: >> Hello Phil, >> >> Wednesday, September 16, 2020, 7:40:24 PM, you wrote: >> >> PP> You can achieve this with a hybrid RAID1 by mixing SSDs and HDDs, and >> PP> marking the HDD members as --write-mostly, meaning most of the reads >> PP> will come from the
2015 Apr 30 · 0 · Postpone email delivery with LMTP and Postfix
* Miloslav Hůla <miloslav.hula at gmail.com> 2015.04.29 22:47: > is there any way, based on a userdb/passwdb attribute, to postpone email delivery? The purpose is that I need to freeze an account (Maildir++) for a few minutes and new email must not be delivered, but emails must be delivered when the account is unfrozen. You can put the messages on hold and then release them
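The hold-and-release approach suggested here maps onto standard Postfix mechanisms: the access(5) HOLD action parks new mail for the frozen account, and postsuper releases it later. A sketch under the assumption that freezing is keyed on the recipient address; the map name and address are made up:

  # main.cf: consult a recipient access map
  #   smtpd_recipient_restrictions =
  #       check_recipient_access hash:/etc/postfix/frozen, ...
  # /etc/postfix/frozen:
  #   user@example.org   HOLD
  postmap /etc/postfix/frozen   # rebuild the map after editing
  postsuper -H ALL              # release all held mail once unfrozen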
2017 Oct 11 · 1 · Connection closed reason
Miloslav Hůla <miloslav.hula at gmail.com> > we have one user using the old Alpine client with IMAP. From time to time (3 > times per day or 3 times per week) he gets the error: "MAIL FOLDER INBOX > CLOSED DUE TO ACCESS ERROR" and he complains that the inbox stops > refreshing with new emails. Hmm, I could see that happening when using direct file access, but not when using IMAP, but I don't often
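When chasing an intermittent error like this, the usual first step is verbose Dovecot logging so the server-side reason for the disconnect shows up; a minimal sketch, with the log path as an example:

  # dovecot.conf: verbose mail-process logging
  mail_debug = yes
  log_path = /var/log/dovecot.log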
2015 Mar 27 · 0 · Migrating from Cyrus to Dovecot
> On 27 Mar 2015, at 10:19, Miloslav Hůla <miloslav.hula at gmail.com> wrote: > > Hi, > > we are migrating from Cyrus 2.3.7 to Dovecot 2.2.13. We have ~7000 maildirs with ~500GB. Our goal is to do the migration without users noticing and with the shortest possible service downtime. The users use IMAP (with shared folders and ACL), POP3 and sieve filters. > > As a first
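One commonly documented route for such a migration is pulling each mailbox from the old Cyrus server over IMAP with dsync/imapc; a sketch only, with host, user and password as placeholder assumptions:

  # Mirror one user's mail from Cyrus into Dovecot via the imapc backend
  doveadm -o imapc_host=cyrus.example.org \
          -o imapc_user=jdoe -o imapc_password=secret \
          backup -R -u jdoe imapc: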
2017 Apr 08 · 2 · lvm cache + qemu-kvm stops working after about 20GB of writes
Hello, I would really appreciate some help/guidance with this problem. First of all, sorry for the long message. I would file a bug, but do not know if it is my fault, dm-cache, qemu, or (probably) a combination of both. And I can imagine some of you have this setup up and running without problems (or maybe you think it works, just like I did, but it does not): PROBLEM LVM cache writeback
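For readers unfamiliar with the setup being debugged, an LVM writeback cache in front of a VM data volume is typically assembled roughly like this; the VG, LV and device names are invented for illustration, and the SSD must already be a PV in the volume group:

  lvcreate --type cache-pool -L 100G -n cpool vg0 /dev/nvme0n1   # pool on the SSD
  lvconvert --type cache --cachepool vg0/cpool vg0/vmdata        # attach to the LV
  lvchange --cachemode writeback vg0/vmdata                      # switch to writeback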
2020 Sep 07 · 4 · Btrfs RAID-10 performance
Hello, I sent this to the Linux kernel Btrfs mailing list and I got the reply: "RAID-1 would be preferable" (https://lore.kernel.org/linux-btrfs/7b364356-7041-7d18-bd77-f60e0e2e2112 at lechevalier.se/T/). May I ask for comments from people around Dovecot? We are using btrfs RAID-10 (/data, 4.7TB) on a physical Supermicro server with an Intel(R) Xeon(R) CPU E5-2620 v4 @
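For reference, the layout being discussed and the conversion suggested on the kernel list correspond to commands roughly like these; device names and the mount point are illustrative, not from the thread:

  mkfs.btrfs -d raid10 -m raid10 /dev/sd[a-p]                 # the RAID-10 layout in use
  btrfs balance start -dconvert=raid1 -mconvert=raid1 /data   # suggested move to RAID-1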
2016 Jun 22 · 3 · Mailboxes on NFS or iSCSI
Hello, we are running Dovecot (2.2.13-12~deb8u1) on Debian stable, configured with Maildir++, IMAP, POP3, LMTPD, Managesieved, ACL. Mailboxes are on a local 1.2TB RAID; it's about 5310 accounts. We are slowly running out of space and we are considering moving the mailboxes onto a Netapp disk array with two independent network connections. Are there some pitfalls? Not sure whether we should use NFS or
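If the maildirs do end up on NFS, the Dovecot documentation calls out a few settings for that case; a minimal sketch with the commonly recommended values (not taken from this thread):

  # dovecot.conf: typical settings for mail storage on NFS
  mmap_disable = yes
  mail_fsync = always
  mail_nfs_storage = yes   # needed only when several servers share the mount
  mail_nfs_index = yes     # needed only if indexes are on NFS too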
2023 Jan 11 · 1 · Upgrading system from non-RAID to RAID1
I plan to upgrade an existing C7 computer, which currently has one 256 GB SSD, to use mdadm software RAID1 after adding two 4 TB M.2 SSDs, the rest of the system remaining the same. The system also has one additional internal and one external hard disk, but these should not be touched. The system will continue to run C7. If I remember correctly, the existing SSD does not use an M.2 slot, so they
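The usual shape of such a setup is to partition the two new SSDs identically, mirror them, and persist the array; a sketch under the assumption that the new disks appear as nvme0n1/nvme1n1:

  mdadm --create /dev/md0 --level=1 --raid-devices=2 \
        /dev/nvme0n1p1 /dev/nvme1n1p1
  mkfs.xfs /dev/md0                           # or the filesystem of your choice
  mdadm --detail --scan >> /etc/mdadm.conf    # so the array assembles at boot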
2020 Sep 17 · 2 · storage for mailserver
Hello Phil, Wednesday, September 16, 2020, 7:40:24 PM, you wrote: PP> You can achieve this with a hybrid RAID1 by mixing SSDs and HDDs, and PP> marking the HDD members as --write-mostly, meaning most of the reads PP> will come from the faster SSDs retaining much of the speed advantage, PP> but you have the redundancy of both SSDs and HDDs in the array. PP> Read performance is
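Whether the write-mostly flag is active can be checked, and even toggled at runtime, through /proc and sysfs; a sketch assuming the array is md0 and the HDD member is sda1:

  cat /proc/mdstat    # write-mostly members are shown with a (W) flag
  echo writemostly  > /sys/block/md0/md/dev-sda1/state   # set at runtime
  echo -writemostly > /sys/block/md0/md/dev-sda1/state   # clear again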