Displaying 20 results from an estimated 213 matches for "cq".
2015 Nov 19
2
[PATCH -qemu] nvme: support Google vendor extension
On 18/11/2015 06:47, Ming Lin wrote:
> @@ -726,7 +798,11 @@ static void nvme_process_db(NvmeCtrl *n, hwaddr addr, int val)
> }
>
> start_sqs = nvme_cq_full(cq) ? 1 : 0;
> - cq->head = new_head;
> + /* When the mapped pointer memory area is setup, we don't rely on
> + * the MMIO written values to update the head pointer. */
> + if (!cq->db_addr) {
> + cq->head = new_head;
> +...
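A self-contained toy model of the mechanism this hunk describes may help (all names here are illustrative, not QEMU's): once the guest has registered a shadow doorbell buffer (db_addr non-NULL), the head value written through MMIO is ignored and the device instead reads the head from guest memory.

#include <stdint.h>
#include <stdio.h>

typedef struct {
    uint16_t head;
    uint16_t tail;
    uint16_t size;
    uint16_t *db_addr;   /* NULL until the guest registers a shadow doorbell buffer */
} ToyCQ;

static int toy_cq_full(ToyCQ *cq)
{
    return (cq->tail + 1) % cq->size == cq->head;
}

/* Models the doorbell-write path from the quoted hunk: with the
 * extension active (db_addr set), the MMIO-written head is ignored. */
static void toy_process_cq_db(ToyCQ *cq, uint16_t new_head)
{
    int start_sqs = toy_cq_full(cq) ? 1 : 0;

    if (!cq->db_addr) {
        cq->head = new_head;       /* legacy path: MMIO carries the head */
    }
    /* else: the head is re-read from *cq->db_addr just before posting
     * completions, so stale MMIO values cannot move it. */

    if (start_sqs) {
        printf("CQ was full: restart the submission queues feeding it\n");
    }
}

int main(void)
{
    uint16_t shadow_head = 3;
    ToyCQ cq = { .head = 0, .tail = 7, .size = 8, .db_addr = NULL };

    toy_process_cq_db(&cq, 5);     /* legacy: head becomes 5 */
    cq.db_addr = &shadow_head;
    toy_process_cq_db(&cq, 6);     /* extension: the value 6 is ignored */
    printf("head=%u shadow=%u\n", (unsigned)cq.head, (unsigned)*cq.db_addr);
    return 0;
}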
2015 Nov 20
2
[PATCH -qemu] nvme: support Google vendor extension
...n wrote:
> On Thu, 2015-11-19 at 11:37 +0100, Paolo Bonzini wrote:
>>
>> On 18/11/2015 06:47, Ming Lin wrote:
>>> @@ -726,7 +798,11 @@ static void nvme_process_db(NvmeCtrl *n, hwaddr addr, int val)
>>> }
>>>
>>> start_sqs = nvme_cq_full(cq) ? 1 : 0;
>>> - cq->head = new_head;
>>> + /* When the mapped pointer memory area is setup, we don't rely on
>>> + * the MMIO written values to update the head pointer. */
>>> + if (!cq->db_addr) {
>>> +...
2015 Nov 20
0
[PATCH -qemu] nvme: support Google vendor extension
On Thu, 2015-11-19 at 11:37 +0100, Paolo Bonzini wrote:
>
> On 18/11/2015 06:47, Ming Lin wrote:
> > @@ -726,7 +798,11 @@ static void nvme_process_db(NvmeCtrl *n, hwaddr addr, int val)
> > }
> >
> > start_sqs = nvme_cq_full(cq) ? 1 : 0;
> > - cq->head = new_head;
> > + /* When the mapped pointer memory area is setup, we don't rely on
> > + * the MMIO written values to update the head pointer. */
> > + if (!cq->db_addr) {
> > + cq->...
2015 Nov 18
0
[PATCH -qemu] nvme: support Google vendor extension
...d[optional]>
*/
+#include <exec/memory.h>
#include <hw/block/block.h>
#include <hw/hw.h>
#include <hw/pci/msix.h>
@@ -158,6 +159,14 @@ static uint16_t nvme_dma_read_prp(NvmeCtrl *n, uint8_t *ptr, uint32_t len,
return NVME_SUCCESS;
}
+static void nvme_update_cq_head(NvmeCQueue *cq)
+{
+ if (cq->db_addr) {
+ pci_dma_read(&cq->ctrl->parent_obj, cq->db_addr,
+ &cq->head, sizeof(cq->head));
+ }
+}
+
static void nvme_post_cqes(void *opaque)
{
NvmeCQueue *cq = opaque;
@@ -168,6 +177,8 @@ static v...
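The hunk is cut off right at nvme_post_cqes(); presumably the new helper is called there so the device re-reads the guest's head pointer before checking whether the CQ is full. A hedged, self-contained sketch of that assumed usage, with pci_dma_read() stood in for by memcpy() and all other names invented for illustration:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

typedef struct {
    uint16_t head;
    uint16_t tail;
    uint16_t size;
    void *db_addr;   /* guest buffer holding the shadow head, or NULL */
} SketchCQ;

/* Mirrors nvme_update_cq_head() above, with pci_dma_read() replaced by
 * memcpy() so the sketch stands alone outside QEMU. */
static void sketch_update_cq_head(SketchCQ *cq)
{
    if (cq->db_addr) {
        memcpy(&cq->head, cq->db_addr, sizeof(cq->head));
    }
}

static int sketch_cq_full(SketchCQ *cq)
{
    return (cq->tail + 1) % cq->size == cq->head;
}

/* Assumed shape of the truncated nvme_post_cqes() change: refresh the
 * head from the shadow buffer before testing for a full CQ. */
static int sketch_can_post_cqe(SketchCQ *cq)
{
    sketch_update_cq_head(cq);
    return !sketch_cq_full(cq);
}

int main(void)
{
    uint16_t shadow_head = 2;
    SketchCQ cq = { .head = 0, .tail = 7, .size = 8, .db_addr = &shadow_head };

    printf("can post: %d (head refreshed to %u)\n",
           sketch_can_post_cqe(&cq), (unsigned)cq.head);
    return 0;
}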
2015 Nov 20
15
[RFC PATCH 0/9] vhost-nvme: new qemu nvme backend using nvme target
Hi,
This is the first attempt to add a new qemu nvme backend using the
in-kernel nvme target.
Most of the code is ported from qemu-nvme, and it also borrows code from
Hannes Reinecke's rts-megasas.
It's similar to vhost-scsi, but doesn't use virtio.
The advantage is that the guest can run an unmodified NVMe driver,
so the guest can be any OS that has an NVMe driver.
The goal is to get as good performance as
2015 Nov 18
3
[RFC PATCH 0/2] Google extension to improve qemu-nvme performance
Hi Rob & Mihai,
I wrote vhost-nvme patches on top of Christoph's NVMe target.
vhost-nvme still uses MMIO, so the guest OS can run an unmodified NVMe
driver. But the tests I have done didn't show competitive performance
compared to virtio-blk/virtio-scsi; the bottleneck is in MMIO. Your nvme
vendor extension patches greatly reduce the number of MMIO writes.
So I'd like to push it
2015 Nov 20
0
[PATCH -qemu] nvme: support Google vendor extension
...19 at 11:37 +0100, Paolo Bonzini wrote:
> >>
> >> On 18/11/2015 06:47, Ming Lin wrote:
> >>> @@ -726,7 +798,11 @@ static void nvme_process_db(NvmeCtrl *n, hwaddr addr, int val)
> >>> }
> >>>
> >>> start_sqs = nvme_cq_full(cq) ? 1 : 0;
> >>> - cq->head = new_head;
> >>> + /* When the mapped pointer memory area is setup, we don't rely on
> >>> + * the MMIO written values to update the head pointer. */
> >>> + if (!cq->db_addr)...
2015 Nov 21
1
[PATCH -qemu] nvme: support Google vendor extension
...Then it doesn't respond to input for almost 1 minute.
> Without this patch, the kernel loads quickly.
Interesting. I guess there's time to debug it, since QEMU 2.6 is still
a few months away. In the meantime we can apply your patch as is,
apart from disabling the "if (new_head >= cq->size)" check and the similar
one for "if (new_tail >= sq->size)".
But, I have a possible culprit. In your nvme_cq_notifier you are not doing the
equivalent of:
start_sqs = nvme_cq_full(cq) ? 1 : 0;
cq->head = new_head;
if (start_sqs) {
NvmeSQu...
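For readers following along, here is a self-contained toy of the pattern Paolo is pointing at (names are made up; the surrounding QEMU code would walk the CQ's list of NvmeSQueue structures instead): whenever a head update frees space in a previously-full CQ, the submission queues feeding that CQ have to be kicked again, whether the update arrived via the MMIO doorbell or via the shadow-buffer notifier.

#include <stdint.h>
#include <stdio.h>

typedef struct ToySQ {
    int id;
    struct ToySQ *next;
} ToySQ;

typedef struct {
    uint16_t head;
    uint16_t tail;
    uint16_t size;
    ToySQ *sq_list;   /* submission queues whose completions land here */
} ToyCQ;

static int toy_cq_full(ToyCQ *cq)
{
    return (cq->tail + 1) % cq->size == cq->head;
}

/* The pattern being described: any head update that frees space in a
 * previously-full CQ must also kick the submission queues again, or
 * they stay stalled. In QEMU the kick would presumably re-arm each
 * queue's processing; here a printf stands in for it. */
static void toy_cq_head_updated(ToyCQ *cq, uint16_t new_head)
{
    int start_sqs = toy_cq_full(cq) ? 1 : 0;

    cq->head = new_head;
    if (start_sqs) {
        for (ToySQ *sq = cq->sq_list; sq; sq = sq->next) {
            printf("restart SQ %d\n", sq->id);
        }
    }
}

int main(void)
{
    ToySQ sq1 = { 1, NULL };
    ToyCQ cq = { .head = 0, .tail = 7, .size = 8, .sq_list = &sq1 };

    toy_cq_head_updated(&cq, 4);   /* CQ was full, so SQ 1 is restarted */
    return 0;
}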
2011 Jul 15
1
Error Message Help: Differing Number of Rows
...number format) and
column 2 is an isotope ratio (e.g. -8.12)
Runoff_18o: same as above
Daily_Precip: 2 columns - column 1 is the same date format but column 2 is a
weekly bulk precipitation value (e.g. 10mm)
When running the script, I keep getting the following error message:
Error in data.frame(cQ.ou[ind.mea], cQ[cal.cQ:nrow(cQ), 2]) :
arguments imply differing number of rows: 42, 44
Now, I know it's not the script, as it runs perfectly for one site but not
the other; but having read previous threads on other forums, they suggest
that there aren't the same number of values in all...
2001 Jan 31
1
Reduced Numerical Precision in 1.2.x?
...6 architecture. I noticed the
difference when using the abcnon() function in the bootstrap package, but I
think you could see the problem in any routine that involves subtracting
relatively large numbers from each other.
For the case I am looking at (see code below), the abcnon parameters bhat
and cq should exactly equal 0.0. Version 1.1.1 did calculate these
parameters to be close to 0.0, but for versions 1.2.x the errors increase
as the size of the problem increases:
|----------------------+---------------+---------------|
|Case |Version 1.1.1 |Version 1.2.x |
|...
2005 Jun 30
1
spandsp fax out fails
I've a stock RH9 system with spandsp 0.18. Faxing out over a PRI to a
USRobotics modem on a stock Suse9.3 system with hylafax fails with the
following errors in the hylafax logs:
Jun 30 19:28:53.23: [ 608]: RECV/CQ: Bad 1D pixel count, row 0, got 595, expected 1728
Jun 30 19:28:53.23: [ 608]: RECV/CQ: Bad 1D pixel count, row 1, got 595, expected 1728
Jun 30 19:28:53.23: [ 608]: RECV/CQ: Bad 1D pixel count, row 2, got 595, expected 1728
etc...
I'm using the following call file as a test:
Channel: Zap/g...
2019 Apr 11
4
[RFC 0/3] VirtIO RDMA
...ation and a look forward on possible implementation
> > techniques.
> >
> > Open issues/Todo list:
> > The list is huge; this is only the starting point of the project.
> > Anyway, here is one example of an item in the list:
> > - Multi VirtQ: Every QP has two rings and every CQ has one. This means that
> > in order to support, for example, 32K QPs we will need 64K VirtQs. Not sure
> > that this is reasonable, so one option is to have one for all and
> > multiplex the traffic on it. This is not a good approach as by design it
> > introduces an op...
2017 Jul 19
1
[virtio-dev] packed ring layout proposal v2
...> An alternative is to use a producer index (PI).
> > Using the PI posted by the driver, and the Consumer Index (CI) maintained
> by the device, the device knows how much work it has outstanding, so it can
> do the prefetch accordingly.
> > There are a few options for the device to acquire the PI.
> > The most efficient will be to write the PI in the doorbell together with the queue
> number.
>
> Right. This was suggested in "Fwd: Virtio-1.1 Ring Layout".
> Or just the PI if we don't need the queue number.
>
> > I would like to raise the nee...
2017 Jul 16
1
[virtio-dev] packed ring layout proposal v2
...device do an efficient prefetch.
An alternative is to use a producer index (PI).
Using the PI posted by the driver, and the Consumer Index (CI) maintained by the device, the device knows how much work it has outstanding, so it can do the prefetch accordingly.
There are a few options for the device to acquire the PI.
The most efficient will be to write the PI in the doorbell together with the queue number.
I would like to raise the need for a Completion Queue (CQ).
Multiple Work Queues (which hold the work descriptors; WQ for short) can be connected to a single CQ.
So when the device completes the work on the d...
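As a purely illustrative reading of the two ideas above (the bit layout and names below are invented, not part of the proposal): the doorbell can carry both the queue number and the PI in one write, and each completion entry can name the WQ it belongs to so that a single CQ can serve many WQs.

#include <stdint.h>
#include <stdio.h>

/* Hypothetical 32-bit doorbell: queue number in the low 16 bits,
 * producer index in the high 16 bits, written in a single MMIO access. */
static inline uint32_t doorbell_encode(uint16_t queue, uint16_t pi)
{
    return (uint32_t)pi << 16 | queue;
}

static inline void doorbell_decode(uint32_t db, uint16_t *queue, uint16_t *pi)
{
    *queue = (uint16_t)(db & 0xffff);
    *pi    = (uint16_t)(db >> 16);
}

/* Hypothetical completion entry for a shared CQ: wq_id lets the driver
 * demultiplex completions from many WQs arriving on one queue. */
struct toy_cq_entry {
    uint16_t wq_id;    /* which work queue this completion belongs to */
    uint16_t wq_ci;    /* consumer index the device has reached on it */
    uint32_t status;
};

int main(void)
{
    uint16_t q, pi;
    uint32_t db = doorbell_encode(3, 42);

    doorbell_decode(db, &q, &pi);
    printf("queue=%u pi=%u\n", (unsigned)q, (unsigned)pi);
    return 0;
}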