search for: tx_can_batch

Displaying 11 results from an estimated 11 matches for "tx_can_batch".

2018 Jul 20
12
[PATCH net-next 0/9] TX used ring batched updating for vhost
...pful for future implementations like in-order completion. Please review. Thanks Jason Wang (9): vhost_net: drop unnecessary parameter vhost_net: introduce helper to initialize tx iov iter vhost_net: introduce vhost_exceeds_weight() vhost_net: introduce get_tx_bufs() vhost_net: introduce tx_can_batch() vhost_net: split out datacopy logic vhost_net: rename vhost_rx_signal_used() to vhost_net_signal_used() vhost_net: rename VHOST_RX_BATCH to VHOST_NET_BATCH vhost_net: batch update used ring for datacopy TX drivers/vhost/net.c | 249 +++++++++++++++++++++++++++++++++++++--------------- 1...
2018 Sep 06
2
[PATCH net-next 11/11] vhost_net: batch submitting XDP buffers to underlayer sockets
...&nvq->vq; > int ret; > > - ret = vhost_net_tx_get_vq_desc(net, nvq, out, in, busyloop_intr); > + ret = vhost_net_tx_get_vq_desc(net, nvq, out, in, msg, busyloop_intr); > > if (ret < 0 || ret == vq->num) > return ret; > @@ -540,6 +574,83 @@ static bool tx_can_batch(struct vhost_virtqueue *vq, size_t total_len) > !vhost_vq_avail_empty(vq->dev, vq); > } > > +#define VHOST_NET_RX_PAD (NET_IP_ALIGN + NET_SKB_PAD) I wonder whether NET_IP_ALIGN makes sense for XDP. > + > +static int vhost_net_build_xdp(struct vhost_net_virtqueue *n...
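
For context, the hunk header above cuts tx_can_batch() off at its final line. The complete helper is a short two-condition predicate; a sketch reconstructed from the hunk context (VHOST_NET_WEIGHT is the driver's existing per-run byte budget):

        /* Keep batching only while the byte budget is not exhausted and the
         * guest still has avail descriptors queued; otherwise flush now. */
        static bool tx_can_batch(struct vhost_virtqueue *vq, size_t total_len)
        {
                return total_len < VHOST_NET_WEIGHT &&
                       !vhost_vq_avail_empty(vq->dev, vq);
        }
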
2018 Sep 07
1
[PATCH net-next 11/11] vhost_net: batch submitting XDP buffers to underlayer sockets
...vhost_net *net, struct socket *sock) > > > break; > > > } > > > - vq->heads[nvq->done_idx].id = cpu_to_vhost32(vq, head); > > > - vq->heads[nvq->done_idx].len = 0; > > > - > > > total_len += len; > > > - if (tx_can_batch(vq, total_len)) > > > - msg.msg_flags |= MSG_MORE; > > > - else > > > - msg.msg_flags &= ~MSG_MORE; > > > + > > > + /* For simplicity, TX batching is only enabled if > > > + * sndbuf is unlimited. > > What if sndbuf changes whi...
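
The removed lines quoted here are the pre-XDP batching hint: tx_can_batch() drove the MSG_MORE flag on each sendmsg(). A condensed sketch of the loop-body fragment those lines came from (variable names follow the quoted hunk; this is a fragment of the TX handler, not a standalone function):

        /* one iteration of the old datacopy TX loop */
        total_len += len;
        if (tx_can_batch(vq, total_len))
                msg.msg_flags |= MSG_MORE;   /* more packets pending: let the socket defer */
        else
                msg.msg_flags &= ~MSG_MORE;  /* batch ends here: transmit now */
        err = sock->ops->sendmsg(sock, &msg, len);
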
2018 Sep 06
0
[PATCH net-next 11/11] vhost_net: batch submitting XDP buffers to underlayer sockets
...et *net, struct vhost_virtqueue *vq = &nvq->vq; int ret; - ret = vhost_net_tx_get_vq_desc(net, nvq, out, in, busyloop_intr); + ret = vhost_net_tx_get_vq_desc(net, nvq, out, in, msg, busyloop_intr); if (ret < 0 || ret == vq->num) return ret; @@ -540,6 +574,83 @@ static bool tx_can_batch(struct vhost_virtqueue *vq, size_t total_len) !vhost_vq_avail_empty(vq->dev, vq); } +#define VHOST_NET_RX_PAD (NET_IP_ALIGN + NET_SKB_PAD) + +static int vhost_net_build_xdp(struct vhost_net_virtqueue *nvq, + struct iov_iter *from) +{ + struct vhost_virtqueue *vq = &nvq-&...
2018 Sep 12
0
[PATCH net-next V2 11/11] vhost_net: batch submitting XDP buffers to underlayer sockets
...et *net, struct vhost_virtqueue *vq = &nvq->vq; int ret; - ret = vhost_net_tx_get_vq_desc(net, nvq, out, in, busyloop_intr); + ret = vhost_net_tx_get_vq_desc(net, nvq, out, in, msg, busyloop_intr); if (ret < 0 || ret == vq->num) return ret; @@ -540,6 +577,80 @@ static bool tx_can_batch(struct vhost_virtqueue *vq, size_t total_len) !vhost_vq_avail_empty(vq->dev, vq); } +#define VHOST_NET_RX_PAD (NET_IP_ALIGN + NET_SKB_PAD) + +static int vhost_net_build_xdp(struct vhost_net_virtqueue *nvq, + struct iov_iter *from) +{ + struct vhost_virtqueue *vq = &nvq-&...
2018 Sep 07
0
[PATCH net-next 11/11] vhost_net: batch submitting XDP buffers to underlayer sockets
...>> >> - ret = vhost_net_tx_get_vq_desc(net, nvq, out, in, busyloop_intr); >> + ret = vhost_net_tx_get_vq_desc(net, nvq, out, in, msg, busyloop_intr); >> >> if (ret < 0 || ret == vq->num) >> return ret; >> @@ -540,6 +574,83 @@ static bool tx_can_batch(struct vhost_virtqueue *vq, size_t total_len) >> !vhost_vq_avail_empty(vq->dev, vq); >> } >> >> +#define VHOST_NET_RX_PAD (NET_IP_ALIGN + NET_SKB_PAD) > I wonder whether NET_IP_ALIGN makes sense for XDP. XDP is not the only consumer; the socket may build an skb...
2018 Jul 22
0
[PATCH net-next 0/9] TX used ring batched updating for vhost
...plit, the mixed data path became hard to maintain. > Jason Wang (9): > vhost_net: drop unnecessary parameter > vhost_net: introduce helper to initialize tx iov iter > vhost_net: introduce vhost_exceeds_weight() > vhost_net: introduce get_tx_bufs() > vhost_net: introduce tx_can_batch() > vhost_net: split out datacopy logic > vhost_net: rename vhost_rx_signal_used() to vhost_net_signal_used() > vhost_net: rename VHOST_RX_BATCH to VHOST_NET_BATCH > vhost_net: batch update used ring for datacopy TX > > drivers/vhost/net.c | 249 +++++++++++++++++++++++++...
2018 Sep 06
22
[PATCH net-next 00/11] Vhost_net TX batching
Hi all: This series tries to batch submitting packets to the underlayer socket through msg_control during sendmsg(). This is done by: 1) Doing the userspace copy inside vhost_net 2) Building an XDP buff 3) Batching at most 64 (VHOST_NET_BATCH) XDP buffs and submitting them at once through msg_control during sendmsg(). 4) Underlayer sockets can use XDP buffs directly when XDP is enabled, or build skb based on XDP
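
Steps 1)-4) amount to an accumulate-and-flush loop. A minimal sketch of that shape, assuming hypothetical helpers more_tx_work() and build_xdp_from_desc() and a per-queue xdp array (the series' real function names differ):

        #define VHOST_NET_BATCH 64

        static void handle_tx_batched(struct vhost_net_virtqueue *nvq,
                                      struct socket *sock, struct msghdr *msg)
        {
                int n = 0;

                while (more_tx_work(nvq)) {                  /* hypothetical predicate */
                        /* steps 1+2: copy the payload out of the guest buffers
                         * and wrap it as an xdp_buff in the per-queue array */
                        build_xdp_from_desc(nvq, &nvq->xdp[n++]);
                        if (n == VHOST_NET_BATCH) {          /* step 3: flush a full batch */
                                msg->msg_control = nvq->xdp; /* batch rides in msg_control */
                                sock->ops->sendmsg(sock, msg, 0);
                                n = 0;
                        }
                }
                /* step 4 happens inside the socket: consume the xdp_buffs
                 * directly when XDP is enabled, else build skbs from them */
        }
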
2018 Nov 15
3
[PATCH net-next 1/2] vhost_net: mitigate page reference counting during page frag refill
...{ unsigned tx_zcopy_err; /* Flush in progress. Protected by tx vq lock. */ bool tx_flush; + /* Private page frag */ + struct page_frag page_frag; + /* Refcount bias of page frag */ + int refcnt_bias; }; static unsigned vhost_net_zcopy_mask __read_mostly; @@ -637,14 +641,53 @@ static bool tx_can_batch(struct vhost_virtqueue *vq, size_t total_len) !vhost_vq_avail_empty(vq->dev, vq); } +#define SKB_FRAG_PAGE_ORDER get_order(32768) + +static bool vhost_net_page_frag_refill(struct vhost_net *net, unsigned int sz, + struct page_frag *pfrag, gfp_t gfp) +{ + if (pfrag->p...
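
The new page_frag/refcnt_bias fields amortize page refcounting: take a large batch of references once, hand them out per fragment without touching the atomic counter, and return the unused remainder when the page is retired. A condensed sketch of the refill path under that scheme (simplified from the quoted hunk; the patch's high-order allocation fast path is omitted):

        static bool frag_refill_biased(struct vhost_net *net, unsigned int sz,
                                       struct page_frag *pfrag, gfp_t gfp)
        {
                if (pfrag->page) {
                        if (pfrag->offset + sz <= pfrag->size)
                                return true;    /* room left on the current page */
                        /* retire the page, returning the unused bias in one call */
                        __page_frag_cache_drain(pfrag->page, net->refcnt_bias);
                }

                pfrag->page = alloc_page(gfp);
                if (!pfrag->page)
                        return false;
                pfrag->size = PAGE_SIZE;
                pfrag->offset = 0;

                /* take USHRT_MAX references up front; per-packet allocations
                 * then consume the bias instead of an atomic page_ref each */
                net->refcnt_bias = USHRT_MAX;
                page_ref_add(pfrag->page, USHRT_MAX - 1);
                return true;
        }
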
2018 Sep 12
14
[PATCH net-next V2 00/11] vhost_net TX batching
Hi all: This series tries to batch submitting packets to the underlayer socket through msg_control during sendmsg(). This is done by: 1) Doing the userspace copy inside vhost_net 2) Building an XDP buff 3) Batching at most 64 (VHOST_NET_BATCH) XDP buffs and submitting them at once through msg_control during sendmsg(). 4) Underlayer sockets can use XDP buffs directly when XDP is enabled, or build skb based on XDP
2019 Jul 17
17
[PATCH V3 00/15] Packed virtqueue support for vhost
Hi all: This series implements packed virtqueues, which were described at [1]. In this version we try to address the performance regression seen with V2. The root cause is that packed virtqueues need more userspace memory accesses, which turn out to be very expensive. Thanks to 7f466032dc9e ("vhost: access vq metadata through kernel virtual address"), such overhead could be