Displaying 12 results from an estimated 12 matches for "start_sqs".
2015 Nov 21
1
[PATCH -qemu] nvme: support Google vendor extension
...onths away. In the meantime we can apply your patch as is,
apart from disabling the "if (new_head >= cq->size)" check and the
similar one for "if (new_tail >= sq->size)".
But I have a possible culprit: in your nvme_cq_notifier you are not
doing the equivalent of:
start_sqs = nvme_cq_full(cq) ? 1 : 0;
cq->head = new_head;
if (start_sqs) {
    NvmeSQueue *sq;
    QTAILQ_FOREACH(sq, &cq->sq_list, entry) {
        timer_mod(sq->timer, qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) + 500);
    }
    timer_mod(cq-...
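Applied to the eventfd path Paolo is pointing at, the fix would look
roughly like the sketch below. This is an assumption, not the posted
patch: the notifier field, the nvme_update_cq_head() helper, and the
surrounding declarations are guesses based on the discussion, and the
final timer_mod completes the cut-off snippet above. QEMU's event
notifier and timer APIs are assumed to be in scope.

    /* Hypothetical shape of the fix: after refreshing cq->head from
     * the shadow doorbell buffer, restart any submission queues that
     * were stalled because the CQ had been full. */
    static void nvme_cq_notifier(EventNotifier *e)
    {
        NvmeCQueue *cq = container_of(e, NvmeCQueue, notifier);
        int start_sqs;

        event_notifier_test_and_clear(e);

        /* Sample "full" state before the head moves. */
        start_sqs = nvme_cq_full(cq) ? 1 : 0;
        nvme_update_cq_head(cq);   /* re-read head from guest memory */
        if (start_sqs) {
            NvmeSQueue *sq;
            QTAILQ_FOREACH(sq, &cq->sq_list, entry) {
                timer_mod(sq->timer,
                          qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) + 500);
            }
            timer_mod(cq->timer,
                      qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) + 500);
        }
    }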
2015 Nov 20
2
[PATCH -qemu] nvme: support Google vendor extension
...15 09:11, Ming Lin wrote:
> On Thu, 2015-11-19 at 11:37 +0100, Paolo Bonzini wrote:
>>
>> On 18/11/2015 06:47, Ming Lin wrote:
>>> @@ -726,7 +798,11 @@ static void nvme_process_db(NvmeCtrl *n, hwaddr addr, int val)
>>>          }
>>>
>>>          start_sqs = nvme_cq_full(cq) ? 1 : 0;
>>> -        cq->head = new_head;
>>> +        /* When the mapped pointer memory area is set up, we don't rely on
>>> +         * the MMIO written values to update the head pointer. */
>>> +        if (!cq->db_addr) {
>>...
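The point of the new branch: once the guest registers a shadow
doorbell buffer (cq->db_addr), the device stops trusting the
MMIO-written value and instead reads the head from guest memory. A
minimal sketch of the helper referenced in the sketch above; the
field names and the use of QEMU's pci_dma_read follow the patch's
approach but are assumptions here:

    /* Sketch: fetch the CQ head from the guest-mapped shadow
     * doorbell buffer instead of using the MMIO-written value. */
    static void nvme_update_cq_head(NvmeCQueue *cq)
    {
        if (cq->db_addr) {
            pci_dma_read(&cq->ctrl->parent_obj, cq->db_addr,
                         &cq->head, sizeof(cq->head));
        }
    }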
2015 Nov 19
2
[PATCH -qemu] nvme: support Google vendor extension
On 18/11/2015 06:47, Ming Lin wrote:
> @@ -726,7 +798,11 @@ static void nvme_process_db(NvmeCtrl *n, hwaddr addr, int val)
>          }
>
>          start_sqs = nvme_cq_full(cq) ? 1 : 0;
> -        cq->head = new_head;
> +        /* When the mapped pointer memory area is set up, we don't rely on
> +         * the MMIO written values to update the head pointer. */
> +        if (!cq->db_addr) {
> +            cq->head = new_head...
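For reference, start_sqs is derived from the standard circular-buffer
full test, with one slot kept unused so that full and empty states
are distinguishable. A sketch matching the usual definition in QEMU's
nvme emulation:

    static uint8_t nvme_cq_full(NvmeCQueue *cq)
    {
        /* Full when advancing the tail would land on the head. */
        return (cq->tail + 1) % cq->size == cq->head;
    }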
2015 Nov 18
0
[PATCH -qemu] nvme: support Google vendor extension
...opaque)
         req->status = status;
         nvme_enqueue_req_completion(cq, req);
     }
+
+    nvme_update_sq_eventidx(sq);
+    nvme_update_sq_tail(sq);
 }
 }
@@ -726,7 +798,11 @@ static void nvme_process_db(NvmeCtrl *n, hwaddr addr, int val)
         }

         start_sqs = nvme_cq_full(cq) ? 1 : 0;
-        cq->head = new_head;
+        /* When the mapped pointer memory area is set up, we don't rely on
+         * the MMIO written values to update the head pointer. */
+        if (!cq->db_addr) {
+            cq->head = new_head;
+        }
         if...
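The two nvme_update_sq_* calls added at the end of SQ processing
suggest a virtio-style eventidx handshake: the device publishes how
far it has consumed the queue, then re-reads the tail the guest wrote
to its shadow buffer, so the guest can skip MMIO doorbell writes the
device would not need. A sketch only, under the assumption that the
patch stores per-queue eventidx_addr/db_addr guest addresses and uses
QEMU's pci_dma_read/pci_dma_write:

    /* Publish the device's progress so the guest can decide whether
     * an MMIO doorbell write is actually needed. */
    static void nvme_update_sq_eventidx(const NvmeSQueue *sq)
    {
        if (sq->eventidx_addr) {
            pci_dma_write(&sq->ctrl->parent_obj, sq->eventidx_addr,
                          &sq->tail, sizeof(sq->tail));
        }
    }

    /* Re-read the SQ tail from the guest-mapped shadow doorbell. */
    static void nvme_update_sq_tail(NvmeSQueue *sq)
    {
        if (sq->db_addr) {
            pci_dma_read(&sq->ctrl->parent_obj, sq->db_addr,
                         &sq->tail, sizeof(sq->tail));
        }
    }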
2015 Nov 18
3
[RFC PATCH 0/2] Google extension to improve qemu-nvme performance
Hi Rob & Mihai,
I wrote vhost-nvme patches on top of Christoph's NVMe target.
vhost-nvme still uses MMIO, so the guest OS can run an unmodified
NVMe driver. But the tests I have done didn't show competitive
performance compared to virtio-blk/virtio-scsi. The bottleneck is in
MMIO. Your nvme vendor extension patches greatly reduce the number
of MMIO writes.
So I'd like to push it
2015 Nov 20
0
[PATCH -qemu] nvme: support Google vendor extension
On Thu, 2015-11-19 at 11:37 +0100, Paolo Bonzini wrote:
>
> On 18/11/2015 06:47, Ming Lin wrote:
> > @@ -726,7 +798,11 @@ static void nvme_process_db(NvmeCtrl *n, hwaddr addr, int val)
> >          }
> >
> >          start_sqs = nvme_cq_full(cq) ? 1 : 0;
> > -        cq->head = new_head;
> > +        /* When the mapped pointer memory area is set up, we don't rely on
> > +         * the MMIO written values to update the head pointer. */
> > +        if (!cq->db_addr) {
> > +...
2015 Nov 20
0
[PATCH -qemu] nvme: support Google vendor extension
...On Thu, 2015-11-19 at 11:37 +0100, Paolo Bonzini wrote:
> >>
> >> On 18/11/2015 06:47, Ming Lin wrote:
> >>> @@ -726,7 +798,11 @@ static void nvme_process_db(NvmeCtrl *n, hwaddr addr, int val)
> >>>          }
> >>>
> >>>          start_sqs = nvme_cq_full(cq) ? 1 : 0;
> >>> -        cq->head = new_head;
> >>> +        /* When the mapped pointer memory area is set up, we don't rely on
> >>> +         * the MMIO written values to update the head pointer. */
> >>> +        if (!cq->...
2015 Nov 20
15
[RFC PATCH 0/9] vhost-nvme: new qemu nvme backend using nvme target
Hi,
This is the first attempt to add a new qemu nvme backend using the
in-kernel nvme target.
Most of the code is ported from qemu-nvme, with some code borrowed
from Hannes Reinecke's rts-megasas.
It's similar to vhost-scsi, but doesn't use virtio.
The advantage is that the guest can run an unmodified NVMe driver,
so the guest can be any OS that has an NVMe driver.
The goal is to get as good performance as