Displaying 2 results from an estimated 3 matches for "nvme_update_sq_tail".
2015 Nov 18
0
[PATCH -qemu] nvme: support Google vendor extension
...INVALID_OPCODE | NVME_DNR;
     }
 }
+static void nvme_update_sq_eventidx(const NvmeSQueue *sq)
+{
+    if (sq->eventidx_addr) {
+        pci_dma_write(&sq->ctrl->parent_obj, sq->eventidx_addr,
+                      &sq->tail, sizeof(sq->tail));
+    }
+}
+
+static void nvme_update_sq_tail(NvmeSQueue *sq)
+{
+    if (sq->db_addr) {
+        pci_dma_read(&sq->ctrl->parent_obj, sq->db_addr,
+                     &sq->tail, sizeof(sq->tail));
+    }
+}
+
 static void nvme_process_sq(void *opaque)
 {
     NvmeSQueue *sq = opaque;
@@ -561,6 +628,8 @@ static void...
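The eventidx written back by nvme_update_sq_eventidx lets the guest suppress doorbell MMIO writes in the same spirit as virtio's event index: the guest updates the shadow tail at db_addr and only performs a real MMIO doorbell write once the new tail has passed the eventidx the device last published. A minimal guest-side sketch of that test (not part of the patch; the function name and the wrap-safe comparison, borrowed from virtio's vring_need_event(), are assumptions here):

#include <stdbool.h>
#include <stdint.h>

/* Hypothetical helper: decide whether an MMIO doorbell kick is still
 * needed after advancing the SQ tail from old_tail to new_tail, given
 * the eventidx the device last published. The unsigned subtractions
 * keep the comparison correct across 16-bit wrap-around. */
static bool nvme_sq_need_mmio_kick(uint16_t new_tail, uint16_t old_tail,
                                   uint16_t eventidx)
{
    return (uint16_t)(new_tail - eventidx - 1) <
           (uint16_t)(new_tail - old_tail);
}

If the test returns false, updating the shadow tail in guest memory is enough; the device picks it up via nvme_update_sq_tail() on its next pass through the queue.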
2015 Nov 18
3
[RFC PATCH 0/2] Google extension to improve qemu-nvme performance
Hi Rob & Mihai,
I wrote vhost-nvme patches on top of Christoph's NVMe target.
vhost-nvme still uses MMIO, so the guest OS can run an unmodified NVMe
driver. But the tests I have done didn't show competitive performance
compared to virtio-blk/virtio-scsi. The bottleneck is MMIO. Your nvme
vendor extension patches greatly reduce the number of MMIO writes.
So I'd like to push it