search for: nvme_set_db_memory

Displaying 8 results from an estimated 8 matches for "nvme_set_db_memory".

2015 Nov 19
2
[PATCH -qemu] nvme: support Google vendor extension
On 18/11/2015 06:47, Ming Lin wrote:
> @@ -726,7 +798,11 @@ static void nvme_process_db(NvmeCtrl *n, hwaddr addr, int val)
>      }
>
>      start_sqs = nvme_cq_full(cq) ? 1 : 0;
> -    cq->head = new_head;
> +    /* When the mapped pointer memory area is setup, we don't rely on
> +     * the MMIO written values to update the head pointer. */
>
2015 Nov 20
0
[PATCH -qemu] nvme: support Google vendor extension
...E_CAP_DSTRD(n->bar.cap);
+
+    event_notifier_init(&sq->notifier, 0);
+    event_notifier_set_handler(&sq->notifier, nvme_sq_notifier);
+    memory_region_add_eventfd(&n->iomem,
+        0x1000 + offset, 4, true, sq->sqid * 2, &sq->notifier);
+}
+
 static uint16_t nvme_set_db_memory(NvmeCtrl *n, const NvmeCmd *cmd)
 {
     uint64_t db_addr = le64_to_cpu(cmd->prp1);
@@ -565,6 +603,7 @@ static uint16_t nvme_set_db_memory(NvmeCtrl *n, const NvmeCmd *cmd)
     /* Submission queue tail pointer location, 2 * QID * stride. */
     sq->db_addr = db_addr + 2 * i *...
2015 Nov 20
2
[PATCH -qemu] nvme: support Google vendor extension
...fier, nvme_sq_notifier);
> +    memory_region_add_eventfd(&n->iomem,
> +        0x1000 + offset, 4, true, sq->sqid * 2, &sq->notifier);

likewise should be

    0x1000 + offset, 4, false, 0, &sq->notifier

Otherwise looks good!

Paolo

> +}
> +
> static uint16_t nvme_set_db_memory(NvmeCtrl *n, const NvmeCmd *cmd)
> {
>     uint64_t db_addr = le64_to_cpu(cmd->prp1);
> @@ -565,6 +603,7 @@ static uint16_t nvme_set_db_memory(NvmeCtrl *n, const NvmeCmd *cmd)
>     /* Submission queue tail pointer location, 2 * QID * stride. */
>     sq->db...
2015 Nov 18
0
[PATCH -qemu] nvme: support Google vendor extension
...ix_vector_use(&n->parent_obj, cq->vector);
     n->cq[cqid] = cq;
     cq->timer = timer_new_ns(QEMU_CLOCK_VIRTUAL, nvme_post_cqes, cq);
@@ -528,6 +543,40 @@ static uint16_t nvme_set_feature(NvmeCtrl *n, NvmeCmd *cmd, NvmeRequest *req)
     return NVME_SUCCESS;
 }

+static uint16_t nvme_set_db_memory(NvmeCtrl *n, const NvmeCmd *cmd)
+{
+    uint64_t db_addr = le64_to_cpu(cmd->prp1);
+    uint64_t eventidx_addr = le64_to_cpu(cmd->prp2);
+    int i;
+
+    /* Addresses should not be NULL and should be page aligned. */
+    if (db_addr == 0 || db_addr & (n->page_size - 1) ||...
2015 Nov 18
3
[RFC PATCH 0/2] Google extension to improve qemu-nvme performance
Hi Rob & Mihai, I wrote vhost-nvme patches on top of Christoph's NVMe target. vhost-nvme still uses MMIO, so the guest OS can run an unmodified NVMe driver. But the tests I have done didn't show competitive performance compared to virtio-blk/virtio-scsi; the bottleneck is in MMIO. Your nvme vendor extension patches greatly reduce the number of MMIO writes, so I'd like to push it