Paolo Bonzini
2015-Dec-01 16:02 UTC
[RFC PATCH 0/9] vhost-nvme: new qemu nvme backend using nvme target
On 01/12/2015 00:20, Ming Lin wrote:
> qemu-nvme: 148MB/s
> vhost-nvme + google-ext: 230MB/s
> qemu-nvme + google-ext + eventfd: 294MB/s
> virtio-scsi: 296MB/s
> virtio-blk: 344MB/s
>
> "vhost-nvme + google-ext" didn't get good enough performance.

I'd expect it to be on par with qemu-nvme plus ioeventfd, but the question is: why should it be better? For vhost-net, the answer is that more zerocopy can be done if you put the data path in the kernel. But qemu-nvme already uses io_submit for the data path, so perhaps there's not much to gain from vhost-nvme...

Paolo

> Still tuning.
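For context, the io_submit() data path referred to above boils down to something like the minimal libaio sketch below. This is illustrative only, not qemu-nvme code; the file path, block size and queue depth are arbitrary. Build with: gcc -o aio-demo aio-demo.c -laio

/*
 * Illustrative libaio sketch: submit one asynchronous write with
 * io_submit() and reap its completion with io_getevents().
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <libaio.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    io_context_t ctx = 0;
    struct iocb cb, *cbs[1] = { &cb };
    struct io_event ev;
    void *buf;
    int fd, ret;

    /* Arbitrary demo file; O_DIRECT keeps the AIO truly asynchronous. */
    fd = open("/tmp/aio-demo.img", O_RDWR | O_CREAT | O_DIRECT, 0600);
    if (fd < 0) { perror("open"); return 1; }

    if (posix_memalign(&buf, 4096, 4096)) return 1;
    memset(buf, 0xab, 4096);

    ret = io_setup(64, &ctx);                 /* create the AIO context */
    if (ret < 0) { fprintf(stderr, "io_setup: %s\n", strerror(-ret)); return 1; }

    io_prep_pwrite(&cb, fd, buf, 4096, 0);    /* queue one 4K write at offset 0 */
    ret = io_submit(ctx, 1, cbs);             /* hand it to the kernel */
    if (ret != 1) { fprintf(stderr, "io_submit: %s\n", strerror(-ret)); return 1; }

    ret = io_getevents(ctx, 1, 1, &ev, NULL); /* reap the completion */
    if (ret == 1)
        printf("completed, res=%ld\n", (long)ev.res);

    io_destroy(ctx);
    close(fd);
    free(buf);
    return 0;
}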
Paolo Bonzini
2015-Dec-01 16:59 UTC
[RFC PATCH 0/9] vhost-nvme: new qemu nvme backend using nvme target
> What do you think about virtio-nvme+vhost-nvme?

What would be the advantage over virtio-blk? Multiqueue is not supported by QEMU, but it is already supported by Linux (commit 6a27b656fc).

To me, the advantage of NVMe is that it provides more than decent performance on unmodified Windows guests, and thanks to your vendor extension it can be used on Linux as well with speeds comparable to virtio-blk. So it's potentially a very good choice for a cloud provider that wants to support Windows guests (together with e.g. a fast emulated SAS controller to replace virtio-scsi, and emulated igb or ixgbe to replace virtio-net).

Which features are supported by NVMe and not by virtio-blk?

Paolo

> I also have a patch for virtio-nvme:
> https://git.kernel.org/cgit/linux/kernel/git/mlin/linux.git/log/?h=nvme-split/virtio
>
> Just need to change vhost-nvme to work with it.
>
>> Paolo
>>
>>> Still tuning.
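As a rough illustration of the multiqueue model mentioned above (one submission queue per CPU, so no lock is shared on the fast path), here is a toy, self-contained sketch. It is not the virtio_blk driver code; NR_CPUS and NR_QUEUES are made-up values.

/*
 * Toy illustration of per-CPU submission queues: each CPU submits
 * into "its own" queue, selected by a trivial modulo mapping.
 */
#include <stdio.h>

#define NR_CPUS   4
#define NR_QUEUES 4   /* assume one queue per CPU was negotiated */

struct submission_queue {
    unsigned int id;
    unsigned long submitted;
};

static struct submission_queue queues[NR_QUEUES];

/* Map a CPU to its queue; trivial when queues == CPUs. */
static struct submission_queue *queue_for_cpu(unsigned int cpu)
{
    return &queues[cpu % NR_QUEUES];
}

int main(void)
{
    for (unsigned int i = 0; i < NR_QUEUES; i++)
        queues[i].id = i;

    /* Pretend each CPU submits a few requests on its own queue. */
    for (unsigned int cpu = 0; cpu < NR_CPUS; cpu++)
        for (int req = 0; req < 3; req++)
            queue_for_cpu(cpu)->submitted++;

    for (unsigned int i = 0; i < NR_QUEUES; i++)
        printf("queue %u handled %lu requests\n",
               queues[i].id, queues[i].submitted);
    return 0;
}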