similar to: Dbox and NVMe drives

Displaying 18 results from an estimated 9000 matches similar to: "Dbox and NVMe drives"

2015 Feb 27
0
users of dbox format
Andreas, > I am interested in finding out your experiences with using the dbox > format (especially mdbox) if you use this format. mdbox is THE reason why I am trying Dovecot. With mailboxes of several (tens of) GB and several thousand messages, I hope mdbox will speed up backups. Also SIS for attachments sounds very good, but it still doesn't follow the altstorage rules (while messages go to
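For readers wanting to try this, a minimal sketch of an mdbox setup with single-instance attachment storage (SIS) and an ALT storage path in dovecot.conf; all paths and thresholds below are illustrative assumptions, not values from the thread:

    # dovecot.conf excerpt (illustrative values)
    # mdbox as the mailbox format, with an alternate (slower) storage path
    mail_location = mdbox:~/mdbox:ALT=/slowstorage/%u/mdbox
    # Single-instance storage for attachments above a size threshold
    mail_attachment_dir = /var/vmail/attachments
    mail_attachment_min_size = 64k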
2023 Mar 26
1
hardware issues and new server advice
Hi, sorry if I hijack this, but maybe it's helpful for other gluster users... > pure NVMe-based volume will be a waste of money. Gluster excels when you have more servers and clients to consume that data. > I would choose LVM cache (NVMes) + HW RAID10 of SAS 15K disks to cope with the load. At least if you decide to go with more disks for the RAIDs, use several (not the built-in ones)
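For anyone wanting to try the LVM cache approach quoted above, a rough sketch of attaching an NVMe cache to an existing logical volume; the volume group and LV names are made up for illustration:

    # Assumes an existing VG "vg_gluster" with LV "brick1" on the HDD RAID
    pvcreate /dev/nvme0n1
    vgextend vg_gluster /dev/nvme0n1
    # Create a cache pool on the NVMe and attach it to the slow LV
    lvcreate --type cache-pool -L 800G -n brick1_cache vg_gluster /dev/nvme0n1
    lvconvert --type cache --cachepool vg_gluster/brick1_cache vg_gluster/brick1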
2023 Mar 30
2
Performance: lots of small files, hdd, nvme etc.
Hello there, as Strahil suggested, a separate thread might be better. Current state: - servers with 10TB HDDs - 2 HDDs make up a SW RAID1 - each RAID1 is a brick - so 5 bricks per server - Volume info (complete below): Volume Name: workdata Type: Distributed-Replicate Number of Bricks: 5 x 3 = 15 Bricks: Brick1: gls1:/gluster/md3/workdata Brick2: gls2:/gluster/md3/workdata Brick3:
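For context, a 5 x 3 distributed-replicate volume like the one listed is created roughly as below; only the first two replica triplets are written out, and the md4 brick paths are assumptions based on the naming above:

    gluster volume create workdata replica 3 \
        gls1:/gluster/md3/workdata gls2:/gluster/md3/workdata gls3:/gluster/md3/workdata \
        gls1:/gluster/md4/workdata gls2:/gluster/md4/workdata gls3:/gluster/md4/workdata
    gluster volume start workdata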
2023 Mar 30
1
Performance: lots of small files, hdd, nvme etc.
Well, you have *way* more files than we do... :) On 30/03/2023 11:26, Hu Bert wrote: > Just an observation: is there a performance difference between a sw > raid10 (10 disks -> one brick) and 5x raid1 (each raid1 a brick) Err... RAID10 is not 10 disks unless you stripe 5 mirrors of 2 disks. > with > the same disks (10TB hdd)? The heal processes on the 5xraid1-scenario >
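To make the comparison concrete, the two layouts under discussion would look roughly like this with mdadm (device names are placeholders):

    # Option A: one 10-disk RAID10 array -> one brick
    mdadm --create /dev/md10 --level=10 --raid-devices=10 /dev/sd[b-k]

    # Option B: five 2-disk RAID1 arrays -> five bricks
    mdadm --create /dev/md3 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
    mdadm --create /dev/md4 --level=1 --raid-devices=2 /dev/sdd /dev/sde
    # ...and so on for md5, md6, md7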
2020 Sep 17
0
storage for mailserver
On 17/09/2020 13:35, Michael Schumacher wrote: > Hello Phil, > > Wednesday, September 16, 2020, 7:40:24 PM, you wrote: > > PP> You can achieve this with a hybrid RAID1 by mixing SSDs and HDDs, and > PP> marking the HDD members as --write-mostly, meaning most of the reads > PP> will come from the faster SSDs retaining much of the speed advantage, > PP> but you
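A minimal sketch of the hybrid RAID1 being described, assuming one NVMe SSD and one HDD (device names are placeholders):

    # Mirror an SSD and an HDD; reads prefer the SSD because the HDD member
    # is marked write-mostly
    mdadm --create /dev/md0 --level=1 --raid-devices=2 \
        /dev/nvme0n1p1 --write-mostly /dev/sda1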
2020 Sep 19
1
storage for mailserver
On 9/17/20 4:25 PM, Phil Perry wrote: > On 17/09/2020 13:35, Michael Schumacher wrote: >> Hello Phil, >> >> Wednesday, September 16, 2020, 7:40:24 PM, you wrote: >> >> PP> You can achieve this with a hybrid RAID1 by mixing SSDs and HDDs, and >> PP> marking the HDD members as --write-mostly, meaning most of the reads >> PP> will come from the
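If the mirror already exists, the write-mostly flag can also be toggled at runtime through sysfs; a sketch, with the md device and member names assumed:

    # Mark the HDD member of /dev/md0 as write-mostly, or clear the flag again
    echo writemostly  > /sys/block/md0/md/dev-sda1/state
    echo -writemostly > /sys/block/md0/md/dev-sda1/state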
2013 Mar 15
0
[PATCH] btrfs-progs: mkfs: add missing raid5/6 description
Signed-off-by: Matias Bjørling <m@bjorling.me> --- man/mkfs.btrfs.8.in | 11 ++++++----- 1 file changed, 6 insertions(+), 5 deletions(-) diff --git a/man/mkfs.btrfs.8.in b/man/mkfs.btrfs.8.in index 41163e0..db8c57c 100644 --- a/man/mkfs.btrfs.8.in +++ b/man/mkfs.btrfs.8.in @@ -37,7 +37,7 @@ mkfs.btrfs uses all the available storage for the filesystem. .TP \fB\-d\fR, \fB\-\-data
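For reference, creating a filesystem with the raid5/raid6 profiles the patched man page describes looks like this (device names are placeholders; btrfs raid5/6 was still experimental at the time of the patch):

    # Data and metadata both as RAID5 across three devices
    mkfs.btrfs -d raid5 -m raid5 /dev/sdb /dev/sdc /dev/sdd
    # Data and metadata as RAID6 across four devices
    mkfs.btrfs -d raid6 -m raid6 /dev/sdb /dev/sdc /dev/sdd /dev/sde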
2020 Sep 10
0
Btrfs RAID-10 performance
"Miloslav" == Miloslav H?la <miloslav.hula at gmail.com> <miloslav.hula at gmail.com> writes: Miloslav> Dne 09.09.2020 v 17:52 John Stoffel napsal(a): Miloslav> There is a one PCIe RAID controller in a chasis. AVAGO Miloslav> MegaRAID SAS 9361-8i. And 16x SAS 15k drives conneced to Miloslav> it. Because the controller does not support pass-through for
2015 Dec 01
2
[RFC PATCH 0/9] vhost-nvme: new qemu nvme backend using nvme target
> What do you think about virtio-nvme+vhost-nvme? What would be the advantage over virtio-blk? Multiqueue is not supported by QEMU but it's already supported by Linux (commit 6a27b656fc). To me, the advantage of nvme is that it provides more than decent performance on unmodified Windows guests, and thanks to your vendor extension can be used on Linux as well with speeds comparable to
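For readers unfamiliar with the backends being compared: the guest-visible devices are configured on the QEMU command line roughly as follows (a sketch with made-up image paths; the out-of-tree vhost-nvme and google-ext pieces from the RFC are not shown):

    # Emulated NVMe controller (what the thread calls qemu-nvme)
    qemu-system-x86_64 ... -drive file=disk0.img,if=none,id=nvm0 \
        -device nvme,drive=nvm0,serial=nvme-0001

    # virtio-blk, the usual paravirtual block device
    qemu-system-x86_64 ... -drive file=disk1.img,if=none,id=vd0 \
        -device virtio-blk-pci,drive=vd0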
2023 Mar 24
2
hardware issues and new server advice
Actually, a pure NVMe-based volume will be a waste of money. Gluster excels when you have more servers and clients to consume that data. I would choose LVM cache (NVMes) + HW RAID10 of SAS 15K disks to cope with the load. At least if you decide to go with more disks for the RAIDs, use several (not the built-in ones) controllers. @Martin, in order to get a more reliable setup, you will have to
2015 Feb 27
4
users of dbox format
I am interested in finding out your experiences with using the dbox format (especially mdbox) if you use this format. I am contemplating changing my maildir setup to mdbox, but I still need to make a case for it against maildir, which has become a de facto standard and provides a sort of secure basis in case of software changes. Your input will be appreciated. -- Andreas Kasenides Senior IT
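One common way to make the maildir-to-mdbox switch is dsync, converting one user at a time; a rough sketch, assuming Dovecot 2.x with mail_location already pointing at the new mdbox location (the username is made up):

    # Pull the old maildir contents into the new mdbox store for one user
    dsync -u andreas mirror maildir:~/Maildir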
2020 Sep 16
0
storage for mailserver
On 16/09/2020 17:11, Michael Schumacher wrote: > hi, > > I am planning to replace my old CentOS 6 mail server soon. Most details > are quite obvious and do not need to be changed, but the old system > was running on spinning discs and this is certainly not the best > option for today's mail servers. > > With spinning discs, HW-RAID6 was the way to go to increase
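For comparison with the suggestions later in this thread, the spinning-disk baseline expressed as software RAID would be something like (device names are placeholders):

    # Classic 6-disk RAID6 on HDDs, i.e. the layout being replaced
    mdadm --create /dev/md0 --level=6 --raid-devices=6 /dev/sd[b-g]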
2015 Dec 01
0
[RFC PATCH 0/9] vhost-nvme: new qemu nvme backend using nvme target
On Tue, 2015-12-01 at 17:02 +0100, Paolo Bonzini wrote: > > On 01/12/2015 00:20, Ming Lin wrote: > > qemu-nvme: 148MB/s > > vhost-nvme + google-ext: 230MB/s > > qemu-nvme + google-ext + eventfd: 294MB/s > > virtio-scsi: 296MB/s > > virtio-blk: 344MB/s > > > > "vhost-nvme + google-ext" didn't get good enough performance. > >
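Sequential-throughput figures like those quoted are typically gathered inside the guest with fio; a sketch with assumed parameters, since the thread does not state the exact benchmark command:

    # 128k sequential reads against the guest block device, direct I/O
    fio --name=seqread --filename=/dev/vda --rw=read --bs=128k \
        --ioengine=libaio --iodepth=32 --direct=1 --runtime=60 --time_based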
2012 Aug 01
1
Windows DomU with SSDs
Hi Everyone, We are thinking of venturing into the world of hosting Windows DomUs on our Xen infrastructure. As Windows generally requires a lot more IOPS than Linux does, we are trying to do everything we can to improve performance. While using SSDs would solve the IOPS problem, SSDs suffer from limited write cycles. So, we have the idea of using Flashcache from Facebook to use a single SSD as
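The Flashcache setup hinted at would be created roughly as below; the device names are made up and the exact option spelling is recalled from the flashcache utilities, so treat it as an assumption:

    # Write-back cache: SSD partition in front of the slow LV used by the DomU
    flashcache_create -p back cachedev /dev/sdb1 /dev/vg0/win_domU
    # The resulting /dev/mapper/cachedev is then handed to the DomU as its disk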
2015 Dec 01
1
[RFC PATCH 0/9] vhost-nvme: new qemu nvme backend using nvme target
On 01/12/2015 00:20, Ming Lin wrote: > qemu-nvme: 148MB/s > vhost-nvme + google-ext: 230MB/s > qemu-nvme + google-ext + eventfd: 294MB/s > virtio-scsi: 296MB/s > virtio-blk: 344MB/s > > "vhost-nvme + google-ext" didn't get good enough performance. I'd expect it to be on par with qemu-nvme with ioeventfd but the question is: why should it be better? For
2015 Dec 02
0
[RFC PATCH 0/9] vhost-nvme: new qemu nvme backend using nvme target
On Tue, 2015-12-01 at 11:59 -0500, Paolo Bonzini wrote: > > What do you think about virtio-nvme+vhost-nvme? > > What would be the advantage over virtio-blk? Multiqueue is not supported > by QEMU but it's already supported by Linux (commit 6a27b656fc). I expect performance would be better. It seems the Google Cloud VMs use both nvme and virtio-scsi. Not sure if virtio-blk is also
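For completeness, the virtio-scsi configuration mentioned alongside NVMe is set up in QEMU roughly like this (a sketch; the image path is made up):

    qemu-system-x86_64 ... -drive file=disk0.img,if=none,id=sd0 \
        -device virtio-scsi-pci,id=scsi0 \
        -device scsi-hd,drive=sd0,bus=scsi0.0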
2015 Sep 17
0
[RFC PATCH 0/2] virtio nvme
Hi Ming & Co, On Thu, 2015-09-10 at 10:28 -0700, Ming Lin wrote: > On Thu, 2015-09-10 at 15:38 +0100, Stefan Hajnoczi wrote: > > On Thu, Sep 10, 2015 at 6:48 AM, Ming Lin <mlin at kernel.org> wrote: > > > These 2 patches added virtio-nvme to kernel and qemu, > > > basically modified from virtio-blk and nvme code. > > > > > > As title said,
2015 Sep 18
0
[RFC PATCH 0/2] virtio nvme
On Thu, 2015-09-17 at 16:31 -0700, Ming Lin wrote: > On Wed, 2015-09-16 at 23:10 -0700, Nicholas A. Bellinger wrote: > > Hi Ming & Co, > > > > On Thu, 2015-09-10 at 10:28 -0700, Ming Lin wrote: > > > On Thu, 2015-09-10 at 15:38 +0100, Stefan Hajnoczi wrote: > > > > On Thu, Sep 10, 2015 at 6:48 AM, Ming Lin <mlin at kernel.org> wrote: > >