Displaying 6 results from an estimated 6 matches for "qemu_clock_get_ns".
2015 Nov 21
1
[PATCH -qemu] nvme: support Google vendor extension
...culprit. In your nvme_cq_notifier you are not doing the
equivalent of:

    start_sqs = nvme_cq_full(cq) ? 1 : 0;
    cq->head = new_head;
    if (start_sqs) {
        NvmeSQueue *sq;
        QTAILQ_FOREACH(sq, &cq->sq_list, entry) {
            timer_mod(sq->timer, qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) + 500);
        }
        timer_mod(cq->timer, qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) + 500);
    }

Instead, you are just calling nvme_post_cqes, which is the equivalent of

    timer_mod(cq->timer, qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) + 500);

Adding a loop to n...
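
To make the suggestion concrete, here is a minimal sketch of an eventfd-driven
nvme_cq_notifier with that logic folded in. The EventNotifier field named
"notifier" on NvmeCQueue and the nvme_update_cq_head() helper are assumptions
for illustration, not code taken from the patch:

/* Sketch only: assumes a "notifier" EventNotifier on NvmeCQueue and a
 * helper nvme_update_cq_head() that reads the head value the guest wrote
 * to the shadow doorbell buffer at cq->db_addr. */
static void nvme_cq_notifier(EventNotifier *e)
{
    NvmeCQueue *cq = container_of(e, NvmeCQueue, notifier);
    NvmeSQueue *sq;
    int start_sqs;

    event_notifier_test_and_clear(e);

    /* Remember whether the CQ was full before the guest moved its head;
     * if it was, submission-queue processing had stalled. */
    start_sqs = nvme_cq_full(cq) ? 1 : 0;

    /* Pick up the new head from guest memory instead of MMIO. */
    nvme_update_cq_head(cq);

    if (start_sqs) {
        /* Restart every SQ that posts to this CQ. */
        QTAILQ_FOREACH(sq, &cq->sq_list, entry) {
            timer_mod(sq->timer, qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) + 500);
        }
    }

    /* Equivalent of the existing unconditional nvme_post_cqes call. */
    timer_mod(cq->timer, qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) + 500);
}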
2015 Nov 20
2
[PATCH -qemu] nvme: support Google vendor extension
On 20/11/2015 09:11, Ming Lin wrote:
> On Thu, 2015-11-19 at 11:37 +0100, Paolo Bonzini wrote:
>>
>> On 18/11/2015 06:47, Ming Lin wrote:
>>> @@ -726,7 +798,11 @@ static void nvme_process_db(NvmeCtrl *n, hwaddr addr, int val)
>>>          }
>>>
>>>          start_sqs = nvme_cq_full(cq) ? 1 : 0;
>>> -        cq->head = new_head;
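
The quoted diff is cut off at this point. By analogy with the sq->db_addr
handling quoted in the Nov 18 message below, the guarded head update in
nvme_process_db presumably ends up looking roughly like the following sketch
(the cq->db_addr field and the comment wording are assumptions):

        start_sqs = nvme_cq_full(cq) ? 1 : 0;
        /* When the shadow doorbell buffer is set up, the guest publishes
         * the CQ head there, so ignore the MMIO-written value (assumed
         * analogue of the sq->db_addr check in the same patch). */
        if (!cq->db_addr) {
            cq->head = new_head;
        }
        if (start_sqs) {
            NvmeSQueue *sq;
            QTAILQ_FOREACH(sq, &cq->sq_list, entry) {
                timer_mod(sq->timer, qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) + 500);
            }
            timer_mod(cq->timer, qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) + 500);
        }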
2015 Nov 18
0
[PATCH -qemu] nvme: support Google vendor extension
...}
-        sq->tail = new_tail;
+        /* When the mapped pointer memory area is setup, we don't rely on
+         * the MMIO written values to update the tail pointer. */
+        if (!sq->db_addr) {
+            sq->tail = new_tail;
+        }
         timer_mod(sq->timer, qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) + 500);
     }
 }
@@ -805,6 +885,8 @@ static int nvme_init(PCIDevice *pci_dev)
     pci_conf = pci_dev->config;
     pci_conf[PCI_INTERRUPT_PIN] = 1;
     pci_config_set_prog_interface(pci_dev->config, 0x2);
+    pci_config_set_vendor_id(pci_dev->config, n->vid);
+...
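
For context on what the db_addr field enables: once it is set, the device has
to fetch the tail value the guest wrote to that guest-memory location instead
of taking it from the doorbell MMIO write. A minimal sketch of such a helper,
assuming the db_addr field from the hunk above, QEMU's pci_dma_read(), and a
ctrl back-pointer on NvmeSQueue (the helper name is an assumption, not
necessarily what the patch uses):

/* Assumed helper: refresh sq->tail from the shadow doorbell buffer
 * instead of relying on an MMIO doorbell write from the guest. */
static void nvme_update_sq_tail(NvmeSQueue *sq)
{
    if (sq->db_addr) {
        uint32_t tail;

        /* Read the 32-bit tail the guest stored at sq->db_addr. */
        pci_dma_read(&sq->ctrl->parent_obj, sq->db_addr, &tail, sizeof(tail));
        sq->tail = le32_to_cpu(tail);
    }
}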
2015 Nov 18
3
[RFC PATCH 0/2] Google extension to improve qemu-nvme performance
Hi Rob & Mihai,
I wrote vhost-nvme patches on top of Christoph's NVMe target.
vhost-nvme still uses MMIO, so the guest OS can run an unmodified NVMe
driver. But the tests I have done didn't show competitive performance
compared to virtio-blk/virtio-scsi. The bottleneck is in MMIO. Your nvme
vendor extension patches greatly reduce the number of MMIO writes.
So I'd like to push it