Displaying 20 results from an estimated 26 matches for "cqe".
2018 May 18
2
[RFC v4 3/5] virtio_ring: add packed ring support
...> hardware NIC drivers to support OOO (i.e. NICs
> will return the descriptors OOO):
>
> I'm not familiar with mlx4, maybe I'm wrong.
> I just had a quick glance. And I found below
> comments in mlx4_en_process_rx_cq():
>
> ```
> /* We assume a 1:1 mapping between CQEs and Rx descriptors, so Rx
> * descriptor offset can be deduced from the CQE index instead of
> * reading 'cqe->index' */
> index = cq->mcq.cons_index & ring->size_mask;
> cqe = mlx4_en_get_cqe(cq->buf, index, priv->cqe_size) + factor;
> ```
>
> I...
2018 May 17
2
[RFC v4 3/5] virtio_ring: add packed ring support
On 2018/05/16 22:33, Tiwei Bie wrote:
> On Wed, May 16, 2018 at 10:05:44PM +0800, Jason Wang wrote:
>> On 2018/05/16 21:45, Tiwei Bie wrote:
>>> On Wed, May 16, 2018 at 08:51:43PM +0800, Jason Wang wrote:
>>>> On 2018/05/16 20:39, Tiwei Bie wrote:
>>>>> On Wed, May 16, 2018 at 07:50:16PM +0800, Jason Wang wrote:
>>>>>> On
2018 May 18
0
[RFC v4 3/5] virtio_ring: add packed ring support
...you were saying it's quite common for
hardware NIC drivers to support OOO (i.e. NICs
will return the descriptors OOO):
I'm not familiar with mlx4, maybe I'm wrong.
I just had a quick glance. And I found below
comments in mlx4_en_process_rx_cq():
```
/* We assume a 1:1 mapping between CQEs and Rx descriptors, so Rx
* descriptor offset can be deduced from the CQE index instead of
* reading 'cqe->index' */
index = cq->mcq.cons_index & ring->size_mask;
cqe = mlx4_en_get_cqe(cq->buf, index, priv->cqe_size) + factor;
```
It seems that although they have a co...
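To make the in-order assumption in the snippet above concrete, here is a minimal sketch (simplified types and names, not the actual mlx4 code) contrasting the two ways a driver can locate the Rx descriptor for a completion:
```
#include <stdint.h>

/* Simplified CQE; the real mlx4 CQE carries much more state. */
struct cqe {
	uint32_t index;	/* descriptor index written by the device */
};

/* In-order assumption (what the quoted comment describes): the Nth
 * completion always belongs to the Nth posted descriptor, so the Rx
 * index is derived purely from the driver's consumer counter. */
static uint32_t rx_index_in_order(uint32_t cons_index, uint32_t size_mask)
{
	return cons_index & size_mask;
}

/* Out-of-order alternative: the device reports which descriptor
 * completed by writing its index into the CQE itself. */
static uint32_t rx_index_ooo(const struct cqe *cqe, uint32_t size_mask)
{
	return cqe->index & size_mask;
}
```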
2018 May 19
2
[RFC v4 3/5] virtio_ring: add packed ring support
...will return the descriptors OOO):
>>>
>>> I'm not familiar with mlx4, maybe I'm wrong.
>>> I just had a quick glance. And I found below
>>> comments in mlx4_en_process_rx_cq():
>>>
>>> ```
>>> /* We assume a 1:1 mapping between CQEs and Rx descriptors, so Rx
>>> * descriptor offset can be deduced from the CQE index instead of
>>> * reading 'cqe->index' */
>>> index = cq->mcq.cons_index & ring->size_mask;
>>> cqe = mlx4_en_get_cqe(cq->buf, index, priv->cqe_si...
2018 May 18
0
[RFC v4 3/5] virtio_ring: add packed ring support
...i.e. NICs
> > will return the descriptors OOO):
> >
> > I'm not familiar with mlx4, maybe I'm wrong.
> > I just had a quick glance. And I found below
> > comments in mlx4_en_process_rx_cq():
> >
> > ```
> > /* We assume a 1:1 mapping between CQEs and Rx descriptors, so Rx
> > * descriptor offset can be deduced from the CQE index instead of
> > * reading 'cqe->index' */
> > index = cq->mcq.cons_index & ring->size_mask;
> > cqe = mlx4_en_get_cqe(cq->buf, index, priv->cqe_size) + factor;...
2018 May 19
0
[RFC v4 3/5] virtio_ring: add packed ring support
...>
> > > > I'm not familiar with mlx4, maybe I'm wrong.
> > > > I just had a quick glance. And I found below
> > > > comments in mlx4_en_process_rx_cq():
> > > >
> > > > ```
> > > > /* We assume a 1:1 mapping between CQEs and Rx descriptors, so Rx
> > > > * descriptor offset can be deduced from the CQE index instead of
> > > > * reading 'cqe->index' */
> > > > index = cq->mcq.cons_index & ring->size_mask;
> > > > cqe = mlx4_en_get_cqe(cq->...
2019 Apr 13
1
[RFC 3/3] RDMA/virtio-rdma: VirtIO rdma driver
...> + VIRTIO_CMD_QUERY_PORT,
> + VIRTIO_CMD_CREATE_CQ,
> + VIRTIO_CMD_DESTROY_CQ,
> + VIRTIO_CMD_CREATE_PD,
> + VIRTIO_CMD_DESTROY_PD,
> + VIRTIO_CMD_GET_DMA_MR,
> +};
> +
> +struct cmd_query_port {
> + __u8 port;
> +};
> +
> +struct cmd_create_cq {
> + __u32 cqe;
> +};
> +
> +struct rsp_create_cq {
> + __u32 cqn;
> +};
> +
> +struct cmd_destroy_cq {
> + __u32 cqn;
> +};
> +
> +struct rsp_create_pd {
> + __u32 pdn;
> +};
> +
> +struct cmd_destroy_pd {
> + __u32 pdn;
> +};
> +
> +struct cmd_get_dma_mr...
2019 Apr 11
1
[RFC 3/3] RDMA/virtio-rdma: VirtIO rdma driver
..._u8 cmd;
+ __u8 status;
+};
+
+enum {
+ VIRTIO_CMD_QUERY_DEVICE = 10,
+ VIRTIO_CMD_QUERY_PORT,
+ VIRTIO_CMD_CREATE_CQ,
+ VIRTIO_CMD_DESTROY_CQ,
+ VIRTIO_CMD_CREATE_PD,
+ VIRTIO_CMD_DESTROY_PD,
+ VIRTIO_CMD_GET_DMA_MR,
+};
+
+struct cmd_query_port {
+ __u8 port;
+};
+
+struct cmd_create_cq {
+ __u32 cqe;
+};
+
+struct rsp_create_cq {
+ __u32 cqn;
+};
+
+struct cmd_destroy_cq {
+ __u32 cqn;
+};
+
+struct rsp_create_pd {
+ __u32 pdn;
+};
+
+struct cmd_destroy_pd {
+ __u32 pdn;
+};
+
+struct cmd_get_dma_mr {
+ __u32 pdn;
+ __u32 access_flags;
+};
+
+struct rsp_get_dma_mr {
+ __u32 mrn;
+ __u32 lkey;...
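As a hedged illustration of how these request/response structs would compose into a control-path exchange (struct virtio_rdma_dev and virtio_rdma_exec_cmd() below are hypothetical stand-ins; the RFC's actual control-virtqueue submission path may differ), creating a CQ pairs cmd_create_cq with rsp_create_cq:
```
/* Hypothetical driver-side helper; virtio_rdma_exec_cmd() stands in for
 * whatever control-virtqueue submission routine the driver actually uses. */
static int virtio_rdma_create_cq(struct virtio_rdma_dev *dev,
				 __u32 num_cqe, __u32 *cqn)
{
	struct cmd_create_cq cmd = { .cqe = num_cqe };	/* requested depth */
	struct rsp_create_cq rsp;
	int rc;

	rc = virtio_rdma_exec_cmd(dev, VIRTIO_CMD_CREATE_CQ,
				  &cmd, sizeof(cmd), &rsp, sizeof(rsp));
	if (rc)
		return rc;

	*cqn = rsp.cqn;	/* device-assigned CQ number; passed back in
			 * cmd_destroy_cq when tearing the CQ down */
	return 0;
}
```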
2019 Apr 11
9
[RFC 0/3] VirtIO RDMA
Data center backends use more and more RDMA or RoCE devices, and more and
more software runs in virtualized environments.
There is a need for a standard to enable RDMA/RoCE on Virtual Machines.
Virtio is the optimal solution since it is the de-facto para-virtualization
technology, and also because the Virtio specification
allows Hardware Vendors to support the Virtio protocol natively in order to
achieve
2019 Apr 11
1
[RFC 2/3] hw/virtio-rdma: VirtIO rdma device
...attr.page_size_cap = 4096;
+ attr.vendor_id = 1;
+ attr.vendor_part_id = 1;
+ attr.hw_ver = VIRTIO_RDMA_HW_VER;
+ attr.max_qp = 1024;
+ attr.max_qp_wr = 1024;
+ attr.device_cap_flags = 0;
+ attr.max_sge = 64;
+ attr.max_sge_rd = 64;
+ attr.max_cq = 1024;
+ attr.max_cqe = 64;
+ attr.max_mr = 1024;
+ attr.max_pd = 1024;
+ attr.max_qp_rd_atom = 0;
+ attr.max_ee_rd_atom = 0;
+ attr.max_res_rd_atom = 0;
+ attr.max_qp_init_rd_atom = 0;
+ attr.max_ee_init_rd_atom = 0;
+ attr.atomic_cap = IBV_ATOMIC_NONE;
+ attr.max_ee = 0;
+ attr.max_rdd =...
2024 Oct 31
16
[PATCH v3 00/15] NVKM GSP RPC kernel docs, cleanups and fixes
Hi folks:
Here is the leftover of the previous spin of NVKM GSP RPC fixes, which
is handling the return of large GSP message. PATCH 1 and 2 in the previous
spin were merged [1], and this spin is based on top of PATCH 1 and PATCH 2
in the previous spin.
Besides the support of the large GSP message, kernel doc and many cleanups
are introduced according to the comments in the previous spin [2].
2020 Jul 16
0
[PATCH vhost next 10/10] vdpa/mlx5: Add VDPA driver for supported mlx5 devices
..._RESET | VIRTIO_CONFIG_S_FAILED)
> +
> +struct mlx5_vdpa_net_resources {
> + u32 tisn;
> + u32 tdn;
> + u32 tirn;
> + u32 rqtn;
> + bool valid;
> +};
> +
> +struct mlx5_vdpa_cq_buf {
> + struct mlx5_frag_buf_ctrl fbc;
> + struct mlx5_frag_buf frag_buf;
> + int cqe_size;
> + int nent;
> +};
> +
> +struct mlx5_vdpa_cq {
> + struct mlx5_core_cq mcq;
> + struct mlx5_vdpa_cq_buf buf;
> + struct mlx5_db db;
> + int cqe;
> +};
> +
> +struct mlx5_vdpa_umem {
> + struct mlx5_frag_buf_ctrl fbc;
> + struct mlx5_frag_buf frag_buf;...
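A brief hedged sketch of how the sizing fields above relate (helper names are hypothetical; the real driver allocates through the mlx5 fragmented-buffer API, not one flat buffer): the CQ buffer holds nent entries of cqe_size bytes, and the consumer index wraps over nent:
```
/* Hypothetical flat-buffer view of the CQ sizing. */
static size_t vdpa_cq_buf_bytes(int nent, int cqe_size)
{
	return (size_t)nent * cqe_size;	/* e.g. 256 entries * 64 B = 16 KiB */
}

/* Slot lookup: mcq.cons_index wraps over the nent entries. */
static void *vdpa_cqe_at(void *buf, u32 cons_index, int nent, int cqe_size)
{
	return (char *)buf + (cons_index % nent) * (size_t)cqe_size;
}
```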
2023 Feb 28
1
[PATCH] io_uring: fix fget leak when fs don't support nowait buffered read
...ode];
/* assign early for deferred execution for non-fixed file */
- if (def->needs_file && !(req->flags & REQ_F_FIXED_FILE))
+ if (def->needs_file && !(req->flags & REQ_F_FIXED_FILE) && !req->file)
req->file = io_file_get_normal(req, req->cqe.fd);
if (!cdef->prep_async)
return 0;
--
2.24.4
2015 Nov 18
3
[RFC PATCH 0/2] Google extension to improve qemu-nvme performance
Hi Rob & Mihai,
I wrote vhost-nvme patches on top of Christoph's NVMe target.
vhost-nvme still uses MMIO, so the guest OS can run an unmodified NVMe
driver. But the tests I have done didn't show competitive performance
compared to virtio-blk/virtio-scsi. The bottleneck is in MMIO. Your nvme
vendor extension patches greatly reduce the number of MMIO writes.
So I'd like to push it
2017 Jul 16
1
[virtio-dev] packed ring layout proposal v2
...e PI in the doorbell together with the queue number.
I would like to raise the need for a Completion Queue (CQ).
Multiple Work Queues (which hold the work descriptors; WQ in short) can be connected to a single CQ.
So when the device completes the work on a descriptor, it writes a Completion Queue Entry (CQE) to the CQ.
CQEs are contiguous in memory, so prefetching by the driver is efficient, although the device might complete work descriptors out of order.
The interrupt handler is connected to the CQ, so an allocation of a single CQ per core, with a single interrupt handler, is possible although this co...
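A minimal sketch of the CQ scheme described above (all names are hypothetical, not taken from the proposal): the device writes CQEs into one contiguous ring in its own completion order, and the driver consumes them by polling a valid bit at its consumer index:
```
#include <stdint.h>
#include <stddef.h>

/* Hypothetical CQE: one per completed work descriptor, written by the
 * device possibly out of the order the descriptors were posted in. */
struct cqe {
	uint16_t wq_id;		/* which WQ the completed descriptor came from */
	uint16_t desc_id;	/* identifies the completed descriptor */
	uint32_t len;		/* bytes transferred */
	uint32_t flags;		/* bit 0: valid bit set by the device */
};

#define CQE_VALID	0x1

struct cq {
	struct cqe *ring;	/* contiguous CQE array, device-writable */
	uint32_t size;		/* number of entries, power of two */
	uint32_t cons;		/* driver's consumer index */
};

/* Poll one completion; returns NULL if nothing is pending. Because the
 * ring is contiguous, the next CQE is typically prefetched by the time
 * the current one is returned. A real implementation would also flip
 * the expected valid/phase value on each wrap-around and order the
 * valid-bit read before the payload reads; both are omitted here. */
static struct cqe *cq_poll(struct cq *cq)
{
	struct cqe *e = &cq->ring[cq->cons & (cq->size - 1)];

	if (!(e->flags & CQE_VALID))
		return NULL;	/* device has not filled this slot yet */
	cq->cons++;
	return e;
}
```
Since the handler owns the whole CQ, one interrupt handler per core can drain completions from many WQs without per-WQ locking, which is the consolidation the quoted proposal is arguing for.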