search for: init_iov_it

Displaying 20 results from an estimated 25 matches for "init_iov_it".

Did you mean: init_iov_iter
2018 May 21
1
[RFC PATCH net-next 01/12] vhost_net: introduce helper to initialize tx iov iter
...host/net.c
> index c4b49fc..15d191a 100644
> --- a/drivers/vhost/net.c
> +++ b/drivers/vhost/net.c
> @@ -459,6 +459,26 @@ static bool vhost_exceeds_maxpend(struct vhost_net *net)
>  	       min_t(unsigned int, VHOST_MAX_PEND, vq->num >> 2);
>  }
>
> +static size_t init_iov_iter(struct vhost_virtqueue *vq, struct iov_iter *iter,
> +			    size_t hdr_size, int out)
> +{
> +	/* Skip header. TODO: support TSO. */
> +	size_t len = iov_length(vq->iov, out);
> +
> +	iov_iter_init(iter, WRITE, vq->iov, out, len);
> +	iov_iter_advance(iter, hdr_size);...
2018 May 21
0
[RFC PATCH net-next 01/12] vhost_net: introduce helper to initialize tx iov iter
...--git a/drivers/vhost/net.c b/drivers/vhost/net.c
index c4b49fc..15d191a 100644
--- a/drivers/vhost/net.c
+++ b/drivers/vhost/net.c
@@ -459,6 +459,26 @@ static bool vhost_exceeds_maxpend(struct vhost_net *net)
 	       min_t(unsigned int, VHOST_MAX_PEND, vq->num >> 2);
 }
 
+static size_t init_iov_iter(struct vhost_virtqueue *vq, struct iov_iter *iter,
+			    size_t hdr_size, int out)
+{
+	/* Skip header. TODO: support TSO. */
+	size_t len = iov_length(vq->iov, out);
+
+	iov_iter_init(iter, WRITE, vq->iov, out, len);
+	iov_iter_advance(iter, hdr_size);
+	/* Sanity check */
+	if (!iov_ite...
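Both copies of the snippet cut off inside the new helper; based on the visible pieces, the complete function plausibly reads as below (the sanity-check body and the -EFAULT return are reconstructed, not quoted from the patch):

/* Sketch of the full helper, reconstructed from the truncated snippet
 * above; the error path and return value are assumptions. */
static size_t init_iov_iter(struct vhost_virtqueue *vq, struct iov_iter *iter,
			    size_t hdr_size, int out)
{
	/* Skip header. TODO: support TSO. */
	size_t len = iov_length(vq->iov, out);

	iov_iter_init(iter, WRITE, vq->iov, out, len);
	iov_iter_advance(iter, hdr_size);
	/* Sanity check */
	if (!iov_iter_count(iter)) {
		vq_err(vq, "Unexpected header len for TX: %zd expected %zd\n",
		       len, hdr_size);
		return -EFAULT;
	}

	return iov_iter_count(iter);
}

Note the wrinkle reviewers flag further down in these results: the return type is size_t, so a -EFAULT error becomes a huge unsigned value and a caller's len < 0 test can never fire.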
2018 Jul 20
12
[PATCH net-next 0/9] TX used ring batched updating for vhost
Hi: This series implements batch updating of the used ring for TX. This helps to reduce cache contention on the used ring. The idea is to first split the datacopy path from zerocopy, and do batching only for datacopy. This is because zerocopy already supports its own batching. TX PPS increased by 25.8% and Netperf TCP does not show obvious differences. The split of the datapath will also be helpful for
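The cover letter only states the idea; a minimal sketch of what batching the used-ring update means in practice (done_idx and VHOST_NET_BATCH are illustrative names, not necessarily the series' own):

#define VHOST_NET_BATCH 64

static void vhost_net_signal_used(struct vhost_net_virtqueue *nvq)
{
	struct vhost_virtqueue *vq = &nvq->vq;
	struct vhost_dev *dev = vq->dev;

	if (!nvq->done_idx)
		return;

	/* One used-ring/used-index update for the whole batch instead of
	 * one per packet, so the contended cache line bounces far less. */
	vhost_add_used_and_signal_n(dev, vq, vq->heads, nvq->done_idx);
	nvq->done_idx = 0;
}

/* Per packet in the TX loop, only the local array is touched:
 *	vq->heads[nvq->done_idx].id = cpu_to_vhost32(vq, head);
 *	vq->heads[nvq->done_idx].len = 0;
 *	if (++nvq->done_idx >= VHOST_NET_BATCH)
 *		vhost_net_signal_used(nvq);
 */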
2018 May 21
1
[RFC PATCH net-next 02/12] vhost_net: introduce vhost_exceeds_weight()
... > drivers/vhost/net.c | 13 ++++++++-----
> 1 file changed, 8 insertions(+), 5 deletions(-)
>
> diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
> index 15d191a..de544ee 100644
> --- a/drivers/vhost/net.c
> +++ b/drivers/vhost/net.c
> @@ -479,6 +479,12 @@ static size_t init_iov_iter(struct vhost_virtqueue *vq, struct iov_iter *iter,
> 	return len;
> }
>
> +static bool vhost_exceeds_weight(int pkts, int total_len)
> +{
> +	return unlikely(total_len >= VHOST_NET_WEIGHT) ||
> +	       unlikely(pkts >= VHOST_NET_PKT_WEIGHT);

I was going to say jus...
2018 May 21
1
[RFC PATCH net-next 04/12] vhost_net: split out datacopy logic
...
> +	if (in) {
> +		vq_err(vq, "Unexpected descriptor format for TX: "
> +		       "out %d, int %d\n", out, in);

don't break strings, keep all the bits between " " together, even if it is
longer than 80 chars.

> +		break;
> +	}
> +
> +	len = init_iov_iter(vq, &msg.msg_iter, hdr_size, out);
> +	if (len < 0)
> +		break;

same comment as previous patch, len is a size_t which is unsigned.

> +
> +	total_len += len;
> +	if (total_len < VHOST_NET_WEIGHT &&
> +	    vhost_has_more_pkts(net, vq)) {
> +		msg.msg_...
2018 May 21
20
[RFC PATCH net-next 00/12] XDP batching for TUN/vhost_net
Hi all: We do not support XDP batching for TUN since it can only receive one packet at a time from vhost_net. This series tries to remove this limitation by:
- introducing a TUN-specific msg_control that can hold a pointer to an array of XDP buffs
- trying to copy and build the XDP buff in vhost_net
- storing XDP buffs in an array and submitting them once for every N packets from vhost_net
- since TUN can only
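For orientation, the TUN-specific msg_control in the first bullet eventually took roughly this shape; the sketch below is a reconstruction from memory of include/linux/if_tun.h, not text from this series:

#define TUN_MSG_UBUF 1	/* ptr carries a struct ubuf_info (zerocopy) */
#define TUN_MSG_PTR  2	/* ptr carries an array of XDP buffers */

struct tun_msg_ctl {
	unsigned short type;	/* TUN_MSG_UBUF or TUN_MSG_PTR */
	unsigned short num;	/* number of entries behind ptr */
	void *ptr;
};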
2018 May 21
0
[RFC PATCH net-next 02/12] vhost_net: introduce vhost_exceeds_weight()
...ng <jasowang at redhat.com>
---
 drivers/vhost/net.c | 13 ++++++++-----
 1 file changed, 8 insertions(+), 5 deletions(-)

diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
index 15d191a..de544ee 100644
--- a/drivers/vhost/net.c
+++ b/drivers/vhost/net.c
@@ -479,6 +479,12 @@ static size_t init_iov_iter(struct vhost_virtqueue *vq, struct iov_iter *iter,
 	return len;
 }
 
+static bool vhost_exceeds_weight(int pkts, int total_len)
+{
+	return unlikely(total_len >= VHOST_NET_WEIGHT) ||
+	       unlikely(pkts >= VHOST_NET_PKT_WEIGHT);
+}
+
 /* Expects to be always run from workqueue - which a...
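The call site this helper cleans up in handle_tx() presumably collapses to something like the fragment below (a sketch: sent_pkts and the vhost_poll_queue() requeue follow vhost_net convention, but are assumptions rather than text from the patch):

	total_len += len;
	if (vhost_exceeds_weight(++sent_pkts, total_len)) {
		/* Yield the kthread and reschedule the remaining work
		 * rather than hogging the CPU. */
		vhost_poll_queue(&vq->poll);
		break;
	}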
2019 Sep 25
0
[PATCH] vhost: It's better to use size_t for the 3rd parameter of vhost_exceeds_weight()
...
> As the following call chain shows, the second branch of iov_iter_advance()
> does not check whether i->count < size; when this happens, i->count -= size
> may cause len to exceed INT_MAX, and then total_len to exceed INT_MAX.
>
> handle_tx_copy() ->
>     get_tx_bufs(..., &len, ...) ->
>         init_iov_iter() ->
>             iov_iter_advance(iter, ...) // has 3 branches:
>                 pipe_advance()            // checks the size: if (unlikely(i->count < size)) size = i->count;
>                 iov_iter_is_discard() ... // no check.

Yes, but I don't think we use ITER_DISCARD.

Thanks

> it...
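The arithmetic under discussion is easy to demonstrate outside the kernel; a standalone C sketch of the unchecked i->count -= size wraparound (illustrative only, not the kernel code paths themselves):

#include <limits.h>
#include <stddef.h>
#include <stdio.h>

int main(void)
{
	size_t count = 10;	/* bytes left in the iterator */
	size_t size = 100;	/* an advance larger than what is left */

	/* pipe_advance() clamps first:
	 *	if (unlikely(i->count < size)) size = i->count;
	 * a branch without that check just subtracts: */
	count -= size;		/* wraps to a huge size_t value */

	printf("count after unchecked advance: %zu\n", count);
	printf("INT_MAX for comparison:        %d\n", INT_MAX);
	/* An int total_len accumulating such a len can overflow, which
	 * is why the patch proposes size_t for vhost_exceeds_weight(). */
	return 0;
}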
2018 May 21
0
[RFC PATCH net-next 04/12] vhost_net: split out datacopy logic
...f (unlikely(vhost_enable_notify(&net->dev, vq))) {
+				vhost_disable_notify(&net->dev, vq);
+				continue;
+			}
+			break;
+		}
+		if (in) {
+			vq_err(vq, "Unexpected descriptor format for TX: "
+			       "out %d, int %d\n", out, in);
+			break;
+		}
+
+		len = init_iov_iter(vq, &msg.msg_iter, hdr_size, out);
+		if (len < 0)
+			break;
+
+		total_len += len;
+		if (total_len < VHOST_NET_WEIGHT &&
+		    vhost_has_more_pkts(net, vq)) {
+			msg.msg_flags |= MSG_MORE;
+		} else {
+			msg.msg_flags &= ~MSG_MORE;
+		}
+
+		/* TODO: Check specific err...
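The snippet breaks off at the error-handling TODO; the remainder of a vhost_net TX loop conventionally looks like this (a reconstruction from the usual handle_tx() structure, not text from the patch):

		err = sock->ops->sendmsg(sock, &msg, len);
		if (unlikely(err < 0)) {
			/* Put the descriptor back so it can be retried. */
			vhost_discard_vq_desc(vq, 1);
			break;
		}
		if (err != len)
			pr_debug("Truncated TX packet: len %d != %zd\n",
				 err, len);
		vhost_add_used_and_signal(&net->dev, vq, head, 0);
		/* ...followed by the vhost_exceeds_weight() check shown
		 * earlier in these results. */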
2019 May 17
0
[PATCH V2 1/4] vhost: introduce vhost_exceeds_weight()
...drivers/vhost/vhost.h |  5 ++++-
 drivers/vhost/vsock.c | 12 +++++++++++-
 5 files changed, 48 insertions(+), 20 deletions(-)

diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
index df51a35..061a06d 100644
--- a/drivers/vhost/net.c
+++ b/drivers/vhost/net.c
@@ -604,12 +604,6 @@ static size_t init_iov_iter(struct vhost_virtqueue *vq, struct iov_iter *iter,
 	return iov_iter_count(iter);
 }
 
-static bool vhost_exceeds_weight(int pkts, int total_len)
-{
-	return total_len >= VHOST_NET_WEIGHT ||
-	       pkts >= VHOST_NET_PKT_WEIGHT;
-}
-
 static int get_tx_bufs(struct vhost_net *net,...
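The net.c-local helper is removed here because the series moves a generalized version into drivers/vhost/vhost.c so that net, scsi and vsock can all share it; its shape, per the cover letter's description, is roughly this (the per-device weight field names are assumptions):

bool vhost_exceeds_weight(struct vhost_virtqueue *vq,
			  int pkts, int total_len)
{
	struct vhost_dev *dev = vq->dev;

	if (unlikely(total_len >= dev->byte_weight) ||
	    unlikely(pkts >= dev->weight)) {
		/* Requeue instead of looping on: this is the fix for
		 * CVE-2019-3900's guest-triggerable CPU hogging. */
		vhost_poll_queue(&vq->poll);
		return true;
	}

	return false;
}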
2020 Jun 03
1
[PATCH RFC 08/13] vhost/net: convert to new API: heads->bufs
...st_get_vq_desc(tvq, tvq->iov, ARRAY_SIZE(tvq->iov),
> -				      out_num, in_num, NULL, NULL);
> +		r = vhost_get_avail_buf(tvq, buf, tvq->iov, ARRAY_SIZE(tvq->iov),
> +					out_num, in_num, NULL, NULL);
> 	}
>
> 	return r;
> @@ -607,6 +620,7 @@ static size_t init_iov_iter(struct vhost_virtqueue *vq, struct iov_iter *iter,
>
> static int get_tx_bufs(struct vhost_net *net,
> 		       struct vhost_net_virtqueue *nvq,
> +		       struct vhost_buf *buf,
> 		       struct msghdr *msg,
> 		       unsigned int *out, unsigned int *in,
>...
2020 Jun 07
17
[PATCH RFC v5 00/13] vhost: ring format independence
This adds the infrastructure required for supporting multiple ring formats. The idea is as follows: we convert descriptors to an independent format first, and process that, converting to an iov later. The used ring is similar: we fetch into an independent struct first and convert that to an IOV later. The point is that we have a tight loop that fetches descriptors, which is good for cache utilization. This will
2020 Jun 02
0
[PATCH RFC 08/13] vhost/net: convert to new API: heads->bufs
...q, busyloop_intr, false);
-		r = vhost_get_vq_desc(tvq, tvq->iov, ARRAY_SIZE(tvq->iov),
-				      out_num, in_num, NULL, NULL);
+		r = vhost_get_avail_buf(tvq, buf, tvq->iov, ARRAY_SIZE(tvq->iov),
+					out_num, in_num, NULL, NULL);
 	}
 
 	return r;
@@ -607,6 +620,7 @@ static size_t init_iov_iter(struct vhost_virtqueue *vq, struct iov_iter *iter,
 
 static int get_tx_bufs(struct vhost_net *net,
 		       struct vhost_net_virtqueue *nvq,
+		       struct vhost_buf *buf,
 		       struct msghdr *msg,
 		       unsigned int *out, unsigned int *in,
 		       size_t *len, bool *busyloop_intr)...
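The struct vhost_buf threaded through get_tx_bufs() is the ring-format-independent record the cover letters describe; a plausible minimal layout (an assumption, not quoted from the series):

/* Format-independent buffer record replacing raw ring "heads";
 * field names are a plausible reconstruction, not quoted code. */
struct vhost_buf {
	u32 out_len;	/* total bytes in driver-readable descriptors */
	u32 in_len;	/* total bytes in device-writable descriptors */
	u16 descs;	/* number of ring descriptors this buffer used */
	u16 id;		/* id to hand back via the used ring */
};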
2020 Jun 10
18
[PATCH RFC v7 00/14] vhost: ring format independence
This intentionally leaves "fixup" changes separate - hopefully that is enough to fix vhost-net crashes reported here, but it helps me keep track of what changed. I will naturally squash them later when we are done. This adds infrastructure required for supporting multiple ring formats. The idea is as follows: we convert descriptors to an independent format first, and process that
2019 May 16
6
[PATCH net 0/4] Prevent vhost kthread from hogging CPU
Hi: This series tries to prevent guest-triggerable CPU hogging through the vhost kthread. This is done by introducing and checking a weight after each request. The patch has been tested with reproducers for vsock and virtio-net. Only compile testing was done for vhost-scsi. Please review. This addresses CVE-2019-3900.

Jason Wang (4):
  vhost: introduce vhost_exceeds_weight()
  vhost_net: fix possible
2020 Jun 02
21
[PATCH RFC 00/13] vhost: format independence
We let the specifics of the ring format seep through to vhost API callers - mostly because there was only one format, so it was hard to imagine what an independent API would look like. Now that there's an alternative in the form of the packed ring, it's easier to see the issues, and fixing them is perhaps the cleanest way to add support for more formats. This patchset does this by introducing
2020 Jun 08
14
[PATCH RFC v6 00/11] vhost: ring format independence
This adds the infrastructure required for supporting multiple ring formats. The idea is as follows: we convert descriptors to an independent format first, and process that, converting to an iov later. The used ring is similar: we fetch into an independent struct first and convert that to an IOV later. The point is that we have a tight loop that fetches descriptors, which is good for cache utilization. This will
2019 May 17
9
[PATCH V2 0/4] Prevent vhost kthread from hogging CPU
Hi: This series tries to prevent guest-triggerable CPU hogging through the vhost kthread. This is done by introducing and checking a weight after each request. The patch has been tested with reproducers for vsock and virtio-net. Only compile testing was done for vhost-scsi. Please review. This addresses CVE-2019-3900.

Changes from V1:
- fix use-after-free in the vsock patch

Jason Wang (4):
  vhost: