search for: admin_q

2015 Sep 10
6
[RFC PATCH 0/2] virtio nvme
Hi all, these two patches add virtio-nvme to the kernel and QEMU, mostly adapted from the existing virtio-blk and NVMe code. As the title says, this is a request for comments. Play with it in QEMU using:

-drive file=disk.img,format=raw,if=none,id=D22 \
-device virtio-nvme-pci,drive=D22,serial=1234,num_queues=4

The goal is to have a full NVMe stack from the VM guest (virtio-nvme) to the host (vhost_nvme) to LIO NVMe-over-fabrics.
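For reference, a complete invocation might look like the sketch below. Only the -drive and -device options come from the post; the binary name, KVM, memory, and CPU flags, and the disk.img path are assumed, generic choices rather than anything the author specified:

qemu-system-x86_64 -enable-kvm -m 2048 -smp 4 \
    -drive file=disk.img,format=raw,if=none,id=D22 \
    -device virtio-nvme-pci,drive=D22,serial=1234,num_queues=4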
2015 Nov 18
3
[RFC PATCH 0/2] Google extension to improve qemu-nvme performance
Hi Rob & Mihai, I wrote vhost-nvme patches on top of Christoph's NVMe target. vhost-nvme still uses MMIO, so the guest OS can run an unmodified NVMe driver. But the tests I have done didn't show competitive performance compared to virtio-blk/virtio-scsi; the bottleneck is in MMIO. Your NVMe vendor extension patches greatly reduce the number of MMIO writes, so I'd like to push it
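For readers following along, the mechanism at issue is a shadow doorbell: the guest driver publishes its submission-queue tail in shared memory and only falls back to a real MMIO doorbell write when the device signals, via an event index, that it needs a kick. Below is a minimal C sketch of that idea; all names here (struct nvme_sq, shadow_db, eventidx, need_mmio) are hypothetical and this is not the actual patch code:

/* Minimal sketch of a shadow-doorbell submission path.
 * All names are hypothetical; memory barriers between the shadow
 * write and the eventidx read are omitted for brevity, and a real
 * implementation needs them.
 */
#include <stdint.h>
#include <stdbool.h>

struct nvme_sq {
    uint32_t tail;                 /* new SQ tail after posting commands   */
    volatile uint32_t *shadow_db;  /* shared-memory copy of the doorbell   */
    volatile uint32_t *eventidx;   /* device tells us when it needs a kick */
    volatile uint32_t *mmio_db;    /* the real (expensive) MMIO doorbell   */
};

/* True if new_idx passed event_idx coming from old_idx, modulo wraparound. */
static bool need_mmio(uint32_t event_idx, uint32_t new_idx, uint32_t old_idx)
{
    return (uint32_t)(new_idx - event_idx - 1) < (uint32_t)(new_idx - old_idx);
}

static void nvme_ring_sq_doorbell(struct nvme_sq *sq, uint32_t old_tail)
{
    *sq->shadow_db = sq->tail;     /* cheap: plain store to shared memory  */
    /* Only touch MMIO when the device indicated it stopped polling. */
    if (need_mmio(*sq->eventidx, sq->tail, old_tail))
        *sq->mmio_db = sq->tail;   /* expensive: traps out to the host     */
}

The point of the design is that the common case becomes a plain memory store, so the guest only takes a VM exit when the device has actually stopped polling the queue.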