Displaying 9 results for "cqe_size".
2018 May 18
2
[RFC v4 3/5] virtio_ring: add packed ring support
...; /* We assume a 1:1 mapping between CQEs and Rx descriptors, so Rx
> * descriptor offset can be deduced from the CQE index instead of
> * reading 'cqe->index' */
> index = cq->mcq.cons_index & ring->size_mask;
> cqe = mlx4_en_get_cqe(cq->buf, index, priv->cqe_size) + factor;
> ```
>
> It seems that although they have a completion
> queue, they are still using the ring in order.
I guess so (at least from the above bits). Git grep -i "out of order" in
drivers/net gives some hints. Looks like there are a few devices that do this.
> I gue...
2018 May 17
2
[RFC v4 3/5] virtio_ring: add packed ring support
On 2018/05/16 22:33, Tiwei Bie wrote:
> On Wed, May 16, 2018 at 10:05:44PM +0800, Jason Wang wrote:
>> On 2018/05/16 21:45, Tiwei Bie wrote:
>>> On Wed, May 16, 2018 at 08:51:43PM +0800, Jason Wang wrote:
>>>> On 2018/05/16 20:39, Tiwei Bie wrote:
>>>>> On Wed, May 16, 2018 at 07:50:16PM +0800, Jason Wang wrote:
>>>>>> On
2018 May 18
0
[RFC v4 3/5] virtio_ring: add packed ring support
...in mlx4_en_process_rx_cq():
```
/* We assume a 1:1 mapping between CQEs and Rx descriptors, so Rx
* descriptor offset can be deduced from the CQE index instead of
* reading 'cqe->index' */
index = cq->mcq.cons_index & ring->size_mask;
cqe = mlx4_en_get_cqe(cq->buf, index, priv->cqe_size) + factor;
```
It seems that although they have a completion
queue, they are still using the ring in order.
I guess maybe storage devices may want OOO (out-of-order completion).
Best regards,
Tiwei Bie
>
> Thanks
>
> >
> > > Not for the patch, but it looks like having an OUT_OF_ORDER feature bit...
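A minimal sketch of the in-order pattern the mlx4 excerpt above describes, assuming a 1:1 CQE-to-Rx-descriptor mapping and a power-of-two ring size (the types are simplified stand-ins, not the actual mlx4 structures):
```
/* Sketch only: in-order completion handling. Because completions arrive
 * in the same order the Rx descriptors were posted, the descriptor slot
 * can be derived from the driver's own consumer counter. */
struct cqe {
	unsigned int status;    /* completion status; no index field needed */
};

struct rx_ring {
	unsigned int size_mask; /* ring_size - 1, ring_size a power of two */
};

static void process_rx_cq(struct rx_ring *ring, struct cqe *cq_buf,
			  unsigned int *cons_index, unsigned int budget)
{
	while (budget--) {
		/* The wrapped consumer counter is the Rx descriptor slot,
		 * as in mlx4_en_process_rx_cq(); the driver never has to
		 * read an index out of the CQE itself. */
		unsigned int index = (*cons_index)++ & ring->size_mask;
		struct cqe *cqe = &cq_buf[index];

		/* ... process 'cqe' and refill Rx descriptor 'index' ... */
		(void)cqe;
	}
}
```
The point is that in-order completion makes a per-CQE descriptor index redundant; an out-of-order device would have to carry a descriptor id in each completion, which is what motivates the OUT_OF_ORDER feature discussion in this thread.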
2018 May 19
2
[RFC v4 3/5] virtio_ring: add packed ring support
...en CQEs and Rx descriptors, so Rx
>>> * descriptor offset can be deduced from the CQE index instead of
>>> * reading 'cqe->index' */
>>> index = cq->mcq.cons_index & ring->size_mask;
>>> cqe = mlx4_en_get_cqe(cq->buf, index, priv->cqe_size) + factor;
>>> ```
>>>
>>> It seems that although they have a completion
>>> queue, they are still using the ring in order.
>> I guess so (at least from the above bits). Git grep -i "out of order" in
>> drivers/net gives some hints. Looks li...
2018 May 18
0
[RFC v4 3/5] virtio_ring: add packed ring support
...mapping between CQEs and Rx descriptors, so Rx
> > * descriptor offset can be deduced from the CQE index instead of
> > * reading 'cqe->index' */
> > index = cq->mcq.cons_index & ring->size_mask;
> > cqe = mlx4_en_get_cqe(cq->buf, index, priv->cqe_size) + factor;
> > ```
> >
> > It seems that although they have a completion
> > queue, they are still using the ring in order.
>
> I guess so (at least from the above bits). Git grep -i "out of order" in
> drivers/net gives some hints. Looks like there'...
2018 May 19
0
[RFC v4 3/5] virtio_ring: add packed ring support
...so Rx
> > > > * descriptor offset can be deduced from the CQE index instead of
> > > > * reading 'cqe->index' */
> > > > index = cq->mcq.cons_index & ring->size_mask;
> > > > cqe = mlx4_en_get_cqe(cq->buf, index, priv->cqe_size) + factor;
> > > > ```
> > > >
> > > > It seems that although they have a completion
> > > > queue, they are still using the ring in order.
> > > I guess so (at least from the above bits). Git grep -i "out of order" in
> > > ...
2015 Nov 20
15
[RFC PATCH 0/9] vhost-nvme: new qemu nvme backend using nvme target
Hi,
This is the first attempt to add a new qemu nvme backend using
the in-kernel nvme target.
Most of the code is ported from qemu-nvme, with some borrowed from
Hannes Reinecke's rts-megasas.
It's similar to vhost-scsi, but doesn't use virtio.
The advantage is that the guest can run an unmodified NVMe driver,
so the guest can be any OS that has an NVMe driver.
The goal is to get as good performance as
2015 Sep 10
6
[RFC PATCH 0/2] virtio nvme
Hi all,
These two patches add virtio-nvme to the kernel and QEMU,
basically adapted from the virtio-blk and nvme code.
As the title says, this is a request for comments.
Try it in QEMU with:
-drive file=disk.img,format=raw,if=none,id=D22 \
-device virtio-nvme-pci,drive=D22,serial=1234,num_queues=4
The goal is to have a full NVMe stack from the VM guest (virtio-nvme)
to the host (vhost_nvme) to LIO NVMe-over-fabrics
2020 Jul 16
0
[PATCH vhost next 10/10] vdpa/mlx5: Add VDPA driver for supported mlx5 devices
..._RESET | VIRTIO_CONFIG_S_FAILED)
> +
> +struct mlx5_vdpa_net_resources {
> + u32 tisn;
> + u32 tdn;
> + u32 tirn;
> + u32 rqtn;
> + bool valid;
> +};
> +
> +struct mlx5_vdpa_cq_buf {
> + struct mlx5_frag_buf_ctrl fbc;
> + struct mlx5_frag_buf frag_buf;
> + int cqe_size;
> + int nent;
> +};
> +
> +struct mlx5_vdpa_cq {
> + struct mlx5_core_cq mcq;
> + struct mlx5_vdpa_cq_buf buf;
> + struct mlx5_db db;
> + int cqe;
> +};
> +
> +struct mlx5_vdpa_umem {
> + struct mlx5_frag_buf_ctrl fbc;
> + struct mlx5_frag_buf frag_buf;
>...
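As a note on the cqe_size field quoted above: completion queue entries are fixed-size records, so fetching the n-th CQE amounts to a scaled offset into the CQ buffer. The helper below is a hypothetical sketch modeled on mlx4_en_get_cqe() from the earlier threads; the actual mlx5 vdpa code walks a fragmented buffer through struct mlx5_frag_buf_ctrl rather than a flat array.
```
#include <stddef.h>

/* Hypothetical helper modeled on mlx4_en_get_cqe(): return the n-th CQE
 * in a flat CQ buffer, scaling by the per-entry size in bytes
 * (cqe_size is hardware-dependent, e.g. 32 or 64 bytes on mlx4). */
static inline void *get_cqe(void *cq_buf, unsigned int n,
			    unsigned int cqe_size)
{
	return (char *)cq_buf + (size_t)n * cqe_size;
}
```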