Displaying 20 results from an estimated 36 matches for "tx_zcopy_err".
2017 Sep 30
2
[PATCH net-next] vhost_net: do not stall on zerocopy depletion
...ting packets with up to 21 frags. I'm not sure yet why, or
what the fraction of these packets is. But this in turn can
disable zcopy_used in vhost_net_tx_select_zcopy for a
larger share of packets:
	return !net->tx_flush &&
	       net->tx_packets / 64 >= net->tx_zcopy_err;
Because the numbers of copied and zerocopy packets are the
same before and after the patch, so are the overall throughput
numbers.
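In isolation, the quoted test works as follows. A minimal sketch in plain C
(the struct is a simplified stand-in for the vhost_net fields named above,
not the kernel's definition):

#include <stdbool.h>

/* Simplified stand-in for the vhost_net fields referenced above. */
struct net_state {
	unsigned tx_packets;   /* recently sent packets, reset periodically */
	unsigned tx_zcopy_err; /* zerocopy completions that fell back to copy */
	bool tx_flush;         /* flush in progress */
};

/* Mirrors the quoted condition: zerocopy stays selected only while
 * fewer than roughly 1 in 64 recent packets hit a zerocopy error,
 * so a burst of failures disables it for a larger share of traffic. */
static bool tx_select_zcopy(const struct net_state *net)
{
	return !net->tx_flush &&
	       net->tx_packets / 64 >= net->tx_zcopy_err;
}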
2017 Sep 01
2
[PATCH net-next] virtio-net: invoke zerocopy callback on xmit path if no tx napi
...is that reverting to copying in these cases increases
> cycle cost. I think that that is a trade-off worth making compared to
> the alternative drop in throughput. It probably would be good to be
> able to measure this without kernel instrumentation: export
> counters similar to net->tx_zcopy_err and net->tx_packets (though
> without reset to zero, as in vhost_net_tx_packet).
>
>> 1) sndbuf is not INT_MAX
>
> You mean the case where the device stalls, later zerocopy notifications
> are queued, but these are never cleaned in free_old_xmit_skbs,
> because it require...
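A minimal sketch of that suggestion (hypothetical names, not an existing
API; vhost's own tx_packets/tx_zcopy_err pair is windowed and reset by
vhost_net_tx_packet, while these would be monotonic):

#include <stdbool.h>

/* Hypothetical monotonic counters for observability, kept separate
 * from the windowed pair that the zerocopy selection heuristic uses
 * and periodically resets. */
struct tx_zcopy_stats {
	unsigned long long tx_packets_total;   /* never reset */
	unsigned long long tx_zcopy_err_total; /* never reset */
};

static void tx_stats_account(struct tx_zcopy_stats *s, bool zcopy_err)
{
	s->tx_packets_total++;
	if (zcopy_err)
		s->tx_zcopy_err_total++;
}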
2018 Jul 02
1
[PATCH vhost] vhost_net: Fix too many vring kick on busypoll
...ay choose not to use
>>> zerocopy. So it was probably something in your setup or a bug somewhere.
>> Thanks for the hint!
It seems zerocopy packets are always nonlinear, and
netif_receive_generic_xdp() calls skb_linearize(), in which
__pskb_pull_tail() calls skb_zcopy_clear(). So tx_zcopy_err is
always incremented when zerocopy is used with XDP in my environment.
--
Toshiaki Makita
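A toy model of the chain Toshiaki describes, with the counter effect made
explicit (the names are illustrative stand-ins, not kernel code):

#include <stdbool.h>

static unsigned tx_zcopy_err; /* sender-side fallback counter */

struct toy_skb {
	bool nonlinear; /* zerocopy tx skbs carry frags, so true */
	bool zcopy;     /* payload still references sender memory */
};

/* Stand-in for skb_zcopy_clear(): completing a zerocopy buffer via
 * the copy path is what the sender counts as a zerocopy error. */
static void toy_skb_zcopy_clear(struct toy_skb *skb)
{
	if (skb->zcopy) {
		skb->zcopy = false;
		tx_zcopy_err++;
	}
}

/* Stand-in for skb_linearize()/__pskb_pull_tail(): pulling the frags
 * into the linear area drops the zerocopy state, so every zerocopy
 * packet that hits generic XDP takes this path. */
static void toy_skb_linearize(struct toy_skb *skb)
{
	if (skb->nonlinear) {
		skb->nonlinear = false;
		toy_skb_zcopy_clear(skb);
	}
}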
2017 Sep 05
1
[PATCH net-next] virtio-net: invoke zerocopy callback on xmit path if no tx napi
...these cases increases
>>> cycle cost. I think that that is a trade-off worth making compared to
>>> the alternative drop in throughput. It probably would be good to be
>>> able to measure this without kernel instrumentation: export
>>> counters similar to net->tx_zcopy_err and net->tx_packets (though
>>> without reset to zero, as in vhost_net_tx_packet).
>
>
> I think it's acceptable to spend extra cycles if we detect HOL anyhow.
>
>>>
>>>> 1) sndbuf is not INT_MAX
>>>
>>> You mean the case where...
2012 Oct 31
8
[PATCHv2 net-next 0/8] enable/disable zero copy tx dynamically
tun has supported zero copy transmit since commit
0690899b4d4501b3505be069b9a687e68ccbe15b; however, you can only enable
this mode if you know your workload does not trigger heavy
guest-to-host/host-to-guest traffic - otherwise you get a (minor)
performance regression.
This patchset addresses the problem by notifying the owner
device when the callback is invoked because of a data copy.
This makes it possible to
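The notification idea in the cover letter can be sketched as follows
(illustrative names, not the patchset's actual API): the completion callback
reports whether the buffer really went out zerocopy or had to be copied, and
the owner counts copies to drive the enable/disable decision.

#include <stdbool.h>

struct owner_stats {
	unsigned tx_packets;   /* sends in the current window */
	unsigned tx_zcopy_err; /* completions that required a copy */
};

/* Illustrative completion record: 'success' is false when the stack
 * had to copy the data instead of transmitting it zerocopy. */
struct ubuf_done {
	void (*callback)(struct ubuf_done *ubuf, bool success);
	struct owner_stats *owner;
};

/* Owner-side callback: counting copies lets the transmit path stop
 * requesting zerocopy once copies dominate, as in the 1/64 test of
 * vhost_net_tx_select_zcopy quoted in the results above. */
static void zcopy_done(struct ubuf_done *ubuf, bool success)
{
	if (!success)
		ubuf->owner->tx_zcopy_err++;
}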
2012 Oct 29
9
[PATCH net-next 0/8] enable/disable zero copy tx dynamically
tun has supported zero copy transmit since commit
0690899b4d4501b3505be069b9a687e68ccbe15b; however, you can only enable
this mode if you know your workload does not trigger heavy
guest-to-host/host-to-guest traffic - otherwise you get a (minor)
performance regression.
This patchset addresses the problem by notifying the owner
device when the callback is invoked because of a data copy.
This makes it possible to
2012 Nov 01
9
[PATCHv3 net-next 0/8] enable/disable zero copy tx dynamically
tun has supported zero copy transmit since commit
0690899b4d4501b3505be069b9a687e68ccbe15b; however, you can only enable
this mode if you know your workload does not trigger heavy
guest-to-host/host-to-guest traffic - otherwise you get a (minor)
performance regression.
This patchset addresses the problem by notifying the owner
device when the callback is invoked because of a data copy.
This makes it possible to
2012 Dec 03
1
[PATCH] vhost-net: initialize zcopy packet counters
...et.c b/drivers/vhost/net.c
index 67898fa..ff6c9199 100644
--- a/drivers/vhost/net.c
+++ b/drivers/vhost/net.c
@@ -823,6 +823,9 @@ static long vhost_net_set_backend(struct vhost_net *n, unsigned index, int fd)
 		r = vhost_init_used(vq);
 		if (r)
 			goto err_vq;
+
+		n->tx_packets = 0;
+		n->tx_zcopy_err = 0;
 	}
 	mutex_unlock(&vq->mutex);
--
MST
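For intuition on why the initialization matters (hypothetical numbers, not
from the thread): counter values surviving a backend switch bias the 1/64
selection test quoted in the first result.

#include <stdio.h>

int main(void)
{
	/* Hypothetical leftover state right after setting a backend. */
	unsigned tx_packets = 10;   /* few packets sent so far */
	unsigned tx_zcopy_err = 40; /* stale errors from before */

	/* Prints "disabled": the copy path is forced until tx_packets
	 * catches up, even though the new backend saw no errors. */
	printf("zerocopy %s\n",
	       tx_packets / 64 >= tx_zcopy_err ? "selected" : "disabled");
	return 0;
}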
2019 Jun 06
1
memory leak in vhost_net_ioctl
...le release.
Thanks
Hillf
---
drivers/vhost/net.c | 8 +++++++-
1 file changed, 7 insertions(+), 1 deletion(-)
diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
index 3beb401..dcf20b6 100644
--- a/drivers/vhost/net.c
+++ b/drivers/vhost/net.c
@@ -141,6 +141,7 @@ struct vhost_net {
 	unsigned tx_zcopy_err;
 	/* Flush in progress. Protected by tx vq lock. */
 	bool tx_flush;
+	bool ld; /* Last dinner */
 	/* Private page frag */
 	struct page_frag page_frag;
 	/* Refcount bias of page frag */
@@ -1283,6 +1284,7 @@ static int vhost_net_open(struct inode *inode, struct file *f)
 	n = kvmalloc(sizeof *n...
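The quoted patch is truncated above; independent of its exact hunks, the
usual cleanup pattern for a private page frag cache like the one in this
struct is to drain it when the device goes away. A kernel-context sketch
(the refcnt_bias field name and this helper are assumptions based on the
fields shown, not the patch itself):

/* Sketch only: release the references held by the private page frag
 * cache so its page is not leaked across device release. Relies on
 * the kernel's __page_frag_cache_drain() helper. */
static void vhost_net_drain_frag(struct vhost_net *n)
{
	if (n->page_frag.page)
		__page_frag_cache_drain(n->page_frag.page, n->refcnt_bias);
}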
2017 Jan 26
2
[BUG/RFC] vhost: net: big endian viring access despite virtio 1
...r = vhost_net_enable_vq(n, vq);
		if (r)
			goto err_used;
==>		oldubufs = nvq->ubufs;
		/* here oldubufs might become != 0 */
		nvq->ubufs = ubufs;

		n->tx_packets = 0;
		n->tx_zcopy_err = 0;
		n->tx_flush = false;
	}
	mutex_unlock(&vq->mutex);

	if (oldubufs) {
		vhost_net_ubuf_put_wait_and_free(oldubufs);
		mutex_lock(&vq->mutex);
==>		vhost_zerocopy_signal_used(n, vq);
		/* tries to updat...
2017 Sep 04
0
[PATCH net-next] virtio-net: invoke zerocopy callback on xmit path if no tx napi
...g to copying in these cases increases
>> cycle cost. I think that that is a trade-off worth making compared to
>> the alternative drop in throughput. It probably would be good to be
>> able to measure this without kernel instrumentation: export
>> counters similar to net->tx_zcopy_err and net->tx_packets (though
>> without reset to zero, as in vhost_net_tx_packet).
I think it's acceptable to spend extra cycles if we detect HOL anyhow.
>>
>>> 1) sndbuf is not INT_MAX
>> You mean the case where the device stalls, later zerocopy notification...