search for: vring_event_f_

Displaying 3 results from an estimated 3 matches for "vring_event_f_".

2018 Jul 04
1
[PATCH net-next 6/8] virtio: introduce packed ring defines
...ng Change Event Offset/Wrap Counter).
> + * Only valid if VIRTIO_F_RING_EVENT_IDX has been negotiated.
> + */
> +#define RING_EVENT_FLAGS_DESC 0x2

For the above three macros, maybe it's better to name them as:

VRING_EVENT_FLAGS_ENABLE
VRING_EVENT_FLAGS_DISABLE
VRING_EVENT_FLAGS_DESC

or

VRING_EVENT_F_ENABLE
VRING_EVENT_F_DISABLE
VRING_EVENT_F_DESC

VRING_EVENT_F_* will be more consistent with VIRTIO_F_*, VRING_DESC_F_*, etc.

> +/* The value 0x3 is reserved */
> +
> +struct vring_packed_desc_event {
> +	/* Descriptor Ring Change Event Offset and Wrap Counter */
> +	__virtio16 off_w...
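Purely as an illustrative sketch (not part of the patch itself): the suggested VRING_EVENT_F_* spelling could look roughly like this in the packed-ring header. The DESC value (0x2), the reserved value 0x3, and the off_wrap comment come from the excerpt above; the ENABLE/DISABLE values, the flags field, and the include are assumptions based on the virtio 1.1 packed-ring event suppression layout.

#include <linux/virtio_types.h>	/* for __virtio16 (assumed include) */

/* Enable events. */
#define VRING_EVENT_F_ENABLE	0x0
/* Disable events. */
#define VRING_EVENT_F_DISABLE	0x1
/*
 * Enable events for a specific descriptor
 * (as specified by the Descriptor Ring Change Event Offset/Wrap Counter).
 * Only valid if VIRTIO_F_RING_EVENT_IDX has been negotiated.
 */
#define VRING_EVENT_F_DESC	0x2
/* The value 0x3 is reserved. */

struct vring_packed_desc_event {
	/* Descriptor Ring Change Event Offset and Wrap Counter. */
	__virtio16 off_wrap;
	/* Descriptor Ring Change Event Flags. */
	__virtio16 flags;
};

As the reviewer notes, the VRING_EVENT_F_* form lines up with the existing VIRTIO_F_* and VRING_DESC_F_* naming.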
2018 Jul 03
12
[PATCH net-next 0/8] Packed virtqueue for vhost
Hi all: This series implements packed virtqueues. The code was tested with Tiwei's RFC V6 at https://lkml.org/lkml/2018/6/5/120. Pktgen tests for both RX and TX do not show an obvious difference from split virtqueues. The main bottleneck is the guest Linux driver, since it cannot stress vhost to 100% CPU utilization. A full TCP benchmark is ongoing. Will test virtio-net pmd as well when