search for: tx_flush

Displaying 19 results from an estimated 19 matches for "tx_flush".

2019 Jun 06
1
memory leak in vhost_net_ioctl
...hanged, 7 insertions(+), 1 deletion(-)

diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
index 3beb401..dcf20b6 100644
--- a/drivers/vhost/net.c
+++ b/drivers/vhost/net.c
@@ -141,6 +141,7 @@ struct vhost_net {
 	unsigned tx_zcopy_err;
 	/* Flush in progress. Protected by tx vq lock. */
 	bool tx_flush;
+	bool ld;	/* Last dinner */
 	/* Private page frag */
 	struct page_frag page_frag;
 	/* Refcount bias of page frag */
@@ -1283,6 +1284,7 @@ static int vhost_net_open(struct inode *inode, struct file *f)
 	n = kvmalloc(sizeof *n, GFP_KERNEL | __GFP_RETRY_MAYFAIL);
 	if (!n)
 		return -ENOMEM;
+	n...
2019 Jun 13
0
memory leak in vhost_net_ioctl
...iff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
> index 3beb401..dcf20b6 100644
> --- a/drivers/vhost/net.c
> +++ b/drivers/vhost/net.c
> @@ -141,6 +141,7 @@ struct vhost_net {
>     unsigned tx_zcopy_err;
>     /* Flush in progress. Protected by tx vq lock. */
>     bool tx_flush;
> +   bool ld;    /* Last dinner */
>     /* Private page frag */
>     struct page_frag page_frag;
>     /* Refcount bias of page frag */
> @@ -1283,6 +1284,7 @@ static int vhost_net_open(struct inode *inode,
> struct file *f)
>     n = kvmalloc(sizeof *n, GFP_KERNEL | __GF...
2017 Sep 30
2
[PATCH net-next] vhost_net: do not stall on zerocopy depletion
...n.c

    if (iov_iter_npages(&i, INT_MAX) <= MAX_SKB_FRAGS)

Generating packets with up to 21 frags. I'm not yet sure why, or what fraction of the packets these are. But this in turn can disable zcopy_used in vhost_net_tx_select_zcopy for a larger share of packets:

    return !net->tx_flush &&
           net->tx_packets / 64 >= net->tx_zcopy_err;

Because the numbers of copied and zerocopy packets are the same before and after the patch, so are the overall throughput numbers.
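The heuristic quoted in this excerpt is small enough to run standalone: zerocopy stays enabled only while no flush is in progress and fewer than roughly 1 in 64 recent transmits ended in a zerocopy error. A minimal userspace sketch; struct vhost_net_stats and the driver in main() are invented for illustration, and only the return expression mirrors the excerpt:

    /* Standalone sketch of the zerocopy selection heuristic above.
     * Field names mirror drivers/vhost/net.c, but this is an
     * illustration, not the kernel code itself. */
    #include <stdbool.h>
    #include <stdio.h>

    struct vhost_net_stats {
    	bool     tx_flush;      /* flush in progress */
    	unsigned tx_packets;    /* packets sent since last backend reset */
    	unsigned tx_zcopy_err;  /* zerocopy completions that failed */
    };

    static bool tx_select_zcopy(const struct vhost_net_stats *net)
    {
    	/* Back off to the copy path when errors exceed ~1/64 of
    	 * transmitted packets. */
    	return !net->tx_flush &&
    	       net->tx_packets / 64 >= net->tx_zcopy_err;
    }

    int main(void)
    {
    	struct vhost_net_stats ok  = { false, 6400, 10 };  /* low error rate */
    	struct vhost_net_stats bad = { false, 6400, 200 }; /* too many errors */

    	printf("low error rate  -> zcopy: %d\n", tx_select_zcopy(&ok));
    	printf("high error rate -> zcopy: %d\n", tx_select_zcopy(&bad));
    	return 0;
    }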
2013 Apr 27
0
[PATCH] vhost: Move vhost-net zerocopy support fields to net.c
...e_idx;
+	/* an array of userspace buffers info */
+	struct ubuf_info *ubuf_info;
+	/* Reference counting for outstanding ubufs.
+	 * Protected by vq mutex. Writers must also take device mutex. */
+	struct vhost_ubuf_ref *ubufs;
 };

 struct vhost_net {
@@ -92,6 +108,88 @@ struct vhost_net {
 	bool tx_flush;
 };

+static unsigned vhost_zcopy_mask __read_mostly;
+
+void vhost_enable_zcopy(int vq)
+{
+	vhost_zcopy_mask |= 0x1 << vq;
+}
+
+static void vhost_zerocopy_done_signal(struct kref *kref)
+{
+	struct vhost_ubuf_ref *ubufs = container_of(kref, struct vhost_ubuf_ref,
+						    kref);
+	wake...
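The vhost_zerocopy_done_signal() fragment is the usual kref release pattern: the callback receives only the embedded kref and recovers the enclosing object with container_of() before waking waiters. A rough userspace analogue, assuming C11 atomics in place of struct kref and with all names invented:

    #include <stdatomic.h>
    #include <stddef.h>
    #include <stdio.h>

    /* Userspace stand-in for the kernel's container_of(): recover the
     * struct that embeds 'member' from a pointer to that member. */
    #define container_of(ptr, type, member) \
    	((type *)((char *)(ptr) - offsetof(type, member)))

    struct ref {
    	atomic_int count;
    };

    struct ubuf_ref {
    	struct ref kref;     /* embedded refcount, as in the excerpt */
    	int        pending;  /* stand-in for per-ubuf state */
    };

    /* Release callback: invoked when the count hits zero. It only sees
     * the embedded ref, so it must climb back to the container. */
    static void done_signal(struct ref *r)
    {
    	struct ubuf_ref *ubufs = container_of(r, struct ubuf_ref, kref);
    	printf("last reference dropped, pending=%d; would wake waiters\n",
    	       ubufs->pending);
    }

    static void ref_put(struct ref *r, void (*release)(struct ref *))
    {
    	if (atomic_fetch_sub(&r->count, 1) == 1)
    		release(r);
    }

    int main(void)
    {
    	struct ubuf_ref u = { .kref = { .count = 2 }, .pending = 0 };

    	ref_put(&u.kref, done_signal); /* one holder left, nothing happens */
    	ref_put(&u.kref, done_signal); /* count reaches zero -> callback   */
    	return 0;
    }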
2014 Feb 13
2
[PATCH net v2] vhost: fix ref cnt checking deadlock
...nvq->upend_idx = (nvq->upend_idx + 1) % UIO_MAXIOV;
 } else {
 	msg.msg_control = NULL;
@@ -785,7 +784,7 @@ static void vhost_net_flush(struct vhost_net *n)
 		vhost_net_ubuf_put_and_wait(n->vqs[VHOST_NET_VQ_TX].ubufs);
 		mutex_lock(&n->vqs[VHOST_NET_VQ_TX].vq.mutex);
 		n->tx_flush = false;
-		kref_init(&n->vqs[VHOST_NET_VQ_TX].ubufs->kref);
+		atomic_set(&n->vqs[VHOST_NET_VQ_TX].ubufs->refcount, 1);
 		mutex_unlock(&n->vqs[VHOST_NET_VQ_TX].vq.mutex);
 	}
 }

-- 
MST
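The hunk swaps kref_init() for a plain atomic_set() when vhost_net_flush() re-arms the ubuf counter after waiting out all in-flight buffers. A userspace analogue of that wait-then-re-arm sequence, with names and the busy-wait invented for illustration (vhost sleeps on a waitqueue instead):

    #include <stdatomic.h>
    #include <stdio.h>

    struct ubufs {
    	atomic_int refcount;
    };

    /* Drop one reference; returns the remaining count so callers can
     * tell when the last outstanding buffer completed. */
    static int ubuf_put(struct ubufs *u)
    {
    	return atomic_fetch_sub(&u->refcount, 1) - 1;
    }

    static void flush_and_rearm(struct ubufs *u)
    {
    	/* Drop the flush's own reference, then wait until completions
    	 * from other contexts have dropped theirs. */
    	ubuf_put(u);
    	while (atomic_load(&u->refcount) != 0)
    		; /* busy-wait placeholder for a waitqueue sleep */

    	/* Analogue of the '+' line in the hunk: re-arm to 1 with a
    	 * plain atomic store rather than re-running kref_init(). */
    	atomic_store(&u->refcount, 1);
    }

    int main(void)
    {
    	struct ubufs u = { .refcount = 1 }; /* base reference */

    	flush_and_rearm(&u);
    	printf("refcount after flush: %d\n", atomic_load(&u.refcount));
    	return 0;
    }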
2017 Jan 26
2
[BUG/RFC] vhost: net: big endian vring access despite virtio 1
...if (r)
		goto err_used;

==>	oldubufs = nvq->ubufs;	/* here oldubufs might become != 0 */
	nvq->ubufs = ubufs;

	n->tx_packets = 0;
	n->tx_zcopy_err = 0;
	n->tx_flush = false;
}

mutex_unlock(&vq->mutex);

if (oldubufs) {
	vhost_net_ubuf_put_wait_and_free(oldubufs);
	mutex_lock(&vq->mutex);
==>	vhost_zerocopy_signal_used(n, vq);	/* tries to update virtqueue structures; endianness i...
2017 Jan 26
0
[BUG/RFC] vhost: net: big endian vring access despite virtio 1
...goto err_used;
>
> ==>	oldubufs = nvq->ubufs;
> 	/* here oldubufs might become != 0 */
> 	nvq->ubufs = ubufs;
>
> 	n->tx_packets = 0;
> 	n->tx_zcopy_err = 0;
> 	n->tx_flush = false;
> }
> mutex_unlock(&vq->mutex);
>
> if (oldubufs) {
> 	vhost_net_ubuf_put_wait_and_free(oldubufs);
> 	mutex_lock(&vq->mutex);
> ==>	vhost_zerocopy_signal_used(n, vq);
> 	/* tries to u...
2018 Nov 15
3
[PATCH net-next 1/2] vhost_net: mitigate page reference counting during page frag refill
...sertions(+), 3 deletions(-)

diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
index ab11b2bee273..d919284f103b 100644
--- a/drivers/vhost/net.c
+++ b/drivers/vhost/net.c
@@ -141,6 +141,10 @@ struct vhost_net {
 	unsigned tx_zcopy_err;
 	/* Flush in progress. Protected by tx vq lock. */
 	bool tx_flush;
+	/* Private page frag */
+	struct page_frag page_frag;
+	/* Refcount bias of page frag */
+	int refcnt_bias;
 };

 static unsigned vhost_net_zcopy_mask __read_mostly;
@@ -637,14 +641,53 @@ static bool tx_can_batch(struct vhost_virtqueue *vq, size_t total_len)
	!vhost_vq_avail_empty(vq->...
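The two fields added here support a refcount-bias scheme: take a large batch of page references in one atomic operation, then pay individual allocations out of a plain local counter. A standalone sketch of that idea; the batch size and all names are illustrative, not the kernel's:

    #include <stdatomic.h>
    #include <stdio.h>

    #define REFCNT_BATCH 256 /* illustrative batch size */

    struct page_frag_cache {
    	atomic_int page_refcount; /* the shared, atomic counter    */
    	int        refcnt_bias;   /* references we pre-paid locally */
    };

    static void frag_refill(struct page_frag_cache *c)
    {
    	/* One atomic op buys REFCNT_BATCH future allocations. */
    	atomic_fetch_add(&c->page_refcount, REFCNT_BATCH);
    	c->refcnt_bias = REFCNT_BATCH;
    }

    static void frag_alloc(struct page_frag_cache *c)
    {
    	if (c->refcnt_bias == 0)
    		frag_refill(c);
    	c->refcnt_bias--; /* cheap non-atomic decrement per allocation */
    }

    int main(void)
    {
    	struct page_frag_cache c = { .page_refcount = 0, .refcnt_bias = 0 };

    	for (int i = 0; i < 1000; i++)
    		frag_alloc(&c);

    	/* 1000 allocations cost only four atomic ops (ceil(1000/256)). */
    	printf("atomic count=%d, bias left=%d\n",
    	       atomic_load(&c.page_refcount), c.refcnt_bias);
    	return 0;
    }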
2013 Apr 27
2
[PATCH v6 0/2] tcm_vhost flush
Changes in v6:
- Allow device-specific fields per vq
- Track cmd per vq
- Do not track evt
- Switch to a static array for inflight allocation, completely getting rid of the pain of handling inflight allocation failure.

Asias He (2):
  vhost: Allow device specific fields per vq
  tcm_vhost: Wait for pending requests in vhost_scsi_flush()

 drivers/vhost/net.c | 60 +++++++++++--------
2013 May 06
13
[PATCH v2 00/11] vhost cleanups
MST,

This is on top of [PATCH 0/2] vhost-net fix ubuf.

Asias He (11):
  vhost: Remove vhost_enable_zcopy in vhost.h
  vhost: Move VHOST_NET_FEATURES to net.c
  vhost: Make vhost a separate module
  vhost: Remove comments for hdr in vhost.h
  vhost: Simplify dev->vqs[i] access
  vhost-net: Cleanup vhost_ubuf and vhost_zcopy
  vhost-scsi: Remove unnecessary forward struct vhost_scsi declaration
2017 Sep 28
9
[PATCH net-next] vhost_net: do not stall on zerocopy depletion
From: Willem de Bruijn <willemb at google.com>

Vhost-net has a hard limit on the number of zerocopy skbs in flight. When reached, transmission stalls. Stalls cause latency, as well as head-of-line blocking of other flows that do not use zerocopy.

Instead of stalling, revert to copy-based transmission.

Tested by sending two udp flows from guest to host, one with payload of
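The policy this cover letter describes reduces to a bounded in-flight budget with a copy fallback. A minimal sketch, assuming an invented ZCOPY_MAX_INFLIGHT limit and tx_state struct (vhost itself tracks in-flight ubufs via upend_idx/done_idx in drivers/vhost/net.c):

    #include <stdbool.h>
    #include <stdio.h>

    #define ZCOPY_MAX_INFLIGHT 64 /* illustrative in-flight budget */

    struct tx_state {
    	int zcopy_inflight; /* zerocopy sends awaiting completion */
    };

    static bool use_zerocopy(const struct tx_state *tx)
    {
    	/* Fall back to the copy path instead of stalling when the
    	 * in-flight budget is exhausted. */
    	return tx->zcopy_inflight < ZCOPY_MAX_INFLIGHT;
    }

    int main(void)
    {
    	struct tx_state idle = { .zcopy_inflight = 3 };
    	struct tx_state full = { .zcopy_inflight = ZCOPY_MAX_INFLIGHT };

    	printf("idle ring -> zerocopy: %d\n", use_zerocopy(&idle));
    	printf("full ring -> copy:     %d\n", !use_zerocopy(&full));
    	return 0;
    }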