search for: destructor_arg

Displaying 20 results from an estimated 21 matches for "destructor_arg".

2018 Sep 06
1
[PATCH net-next 08/11] tun: switch to new type of msg_control
...i;
>
> 	copylen = vnet_hdr.hdr_len ?
> @@ -724,11 +724,11 @@ static ssize_t tap_get_user(struct tap_queue *q, struct msghdr *m,
> 	tap = rcu_dereference(q->tap);
> 	/* copy skb_ubuf_info for callback when skb has no error */
> 	if (zerocopy) {
> -		skb_shinfo(skb)->destructor_arg = m->msg_control;
> +		skb_shinfo(skb)->destructor_arg = msg_control;
> 		skb_shinfo(skb)->tx_flags |= SKBTX_DEV_ZEROCOPY;
> 		skb_shinfo(skb)->tx_flags |= SKBTX_SHARED_FRAG;
> -	} else if (m && m->msg_control) {
> -		struct ubuf_info *uarg = m->msg_contro...
2018 Sep 06
0
[PATCH net-next 08/11] tun: switch to new type of msg_control
...CK_ZEROCOPY)) {
	struct iov_iter i;
	copylen = vnet_hdr.hdr_len ?
@@ -724,11 +724,11 @@ static ssize_t tap_get_user(struct tap_queue *q, struct msghdr *m,
	tap = rcu_dereference(q->tap);
	/* copy skb_ubuf_info for callback when skb has no error */
	if (zerocopy) {
-		skb_shinfo(skb)->destructor_arg = m->msg_control;
+		skb_shinfo(skb)->destructor_arg = msg_control;
		skb_shinfo(skb)->tx_flags |= SKBTX_DEV_ZEROCOPY;
		skb_shinfo(skb)->tx_flags |= SKBTX_SHARED_FRAG;
-	} else if (m && m->msg_control) {
-		struct ubuf_info *uarg = m->msg_control;
+	} else if (msg_contr...
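For context, the hunk above amounts to the zerocopy branch taking an already-extracted msg_control pointer instead of reading m->msg_control. A minimal sketch of that branch, assuming the caller has pulled the pointer out of the msghdr beforehand (tap_attach_ubuf is a hypothetical helper, not a function in the tree):

#include <linux/skbuff.h>

/* Hypothetical helper sketching the zerocopy/copy split shown in the hunk
 * above; msg_control is assumed to have been extracted from the msghdr by
 * the caller before the skb was built.
 */
static void tap_attach_ubuf(struct sk_buff *skb, void *msg_control,
			    bool zerocopy)
{
	if (zerocopy) {
		/* Hand the ubuf_info to the completion path via shinfo. */
		skb_shinfo(skb)->destructor_arg = msg_control;
		skb_shinfo(skb)->tx_flags |= SKBTX_DEV_ZEROCOPY;
		skb_shinfo(skb)->tx_flags |= SKBTX_SHARED_FRAG;
	} else if (msg_control) {
		/* Data was copied, so complete the ubuf right away. */
		struct ubuf_info *uarg = msg_control;

		uarg->callback(uarg, false);
	}
}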
2014 Feb 26
2
[PATCH net] vhost: net: switch to use data copy if pending DMAs exceed the limit
...> I think this happens when a device is removed. > > Thoughts? > Agree, vhost net removal should not be blocked by a skb. But since the skbs could be queued in many places, just destroying them may need extra locks. Haven't thought about this deeply, but another possible solution is to rcuify destructor_arg and assign it to NULL during vhost_net removal. >> --- >> drivers/vhost/net.c | 17 +++++++---------- >> 1 file changed, 7 insertions(+), 10 deletions(-) >> >> diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c >> index a0fa5de..3e96e47 100644 >> --...
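A rough sketch of the "rcuify destructor_arg" idea floated here, using standard RCU primitives; the struct zc_ctx wrapper and its field are hypothetical, since the in-tree skb_shared_info keeps a plain void *destructor_arg:

#include <linux/rcupdate.h>
#include <linux/skbuff.h>

struct zc_ctx {
	struct ubuf_info __rcu *uarg;	/* stands in for destructor_arg */
};

/* Completion path: tolerate teardown having cleared the pointer. */
static void zc_complete(struct zc_ctx *ctx, bool success)
{
	struct ubuf_info *uarg;

	rcu_read_lock();
	uarg = rcu_dereference(ctx->uarg);
	if (uarg)
		uarg->callback(uarg, success);
	rcu_read_unlock();
}

/* vhost_net removal: detach the ubuf and wait out in-flight readers. */
static void zc_detach(struct zc_ctx *ctx)
{
	RCU_INIT_POINTER(ctx->uarg, NULL);
	synchronize_rcu();	/* no completion can reach vhost_net after this */
}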
2014 Feb 26
2
[PATCH net] vhost: net: switch to use data copy if pending DMAs exceed the limit
...> Thoughts? >>> >> >> Agree, vhost net removal should not be blocked by a skb. But since the >> skbs could be queued in many places, just destroying them may need extra locks. >> >> Haven't thought about this deeply, but another possible solution is to rcuify >> destructor_arg and assign it to NULL during vhost_net removal. > > Xen treats it with a timer: for those skbs which have been delivered for a > while, netback would exchange the page of the zero-copy skb with dom0's page. > > But there is still a race between another host process handling the sk...
2014 Feb 27
1
[PATCH net] vhost: net: switch to use data copy if pending DMAs exceed the limit
...ocked by a skb. But since the >>>> > >>skbs could be queued in many places, just destroying them may need extra locks. >>>> > >> >>>> > >>Haven't thought about this deeply, but another possible solution is to rcuify >>>> > >>destructor_arg and assign it to NULL during vhost_net removal. >>> > > >>> > >Xen treats it with a timer: for those skbs which have been delivered for a >>> > >while, netback would exchange the page of the zero-copy skb with dom0's page. >>> > > >>&...
2014 Feb 26
0
[PATCH net] vhost: net: switch to use data copy if pending DMAs exceed the limit
...> >> > >>Agree, vhost net removal should not be blocked by a skb. But since the > >>skbs could be queued in many places, just destroying them may need extra locks. > >> > >>Haven't thought about this deeply, but another possible solution is to rcuify > >>destructor_arg and assign it to NULL during vhost_net removal. > > > >Xen treats it with a timer: for those skbs which have been delivered for a > >while, netback would exchange the page of the zero-copy skb with dom0's page. > > > >But there is still a race between another host...
2014 Feb 26
0
[PATCH net] vhost: net: switch to use data copy if pending DMAs exceed the limit
...is removed. >> >> Thoughts? >> > > Agree, vhost net removal should not be blocked by a skb. But since the > skbs could be queued in many places, just destroying them may need extra locks. > > Haven't thought about this deeply, but another possible solution is to rcuify > destructor_arg and assign it to NULL during vhost_net removal. Xen treats it with a timer: for those skbs which have been delivered for a while, netback would exchange the page of the zero-copy skb with dom0's page. But there is still a race between another host process handling the skb and netback exchanging...
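A sketch of the timer-style fallback described above, under the assumption that a deadline handler may simply copy the user pages out of a long-pending skb; zc_deadline_copy is a hypothetical name, and the locking needed to close the race mentioned here is left out:

#include <linux/skbuff.h>
#include <linux/printk.h>

static void zc_deadline_copy(struct sk_buff *skb)
{
	/* Nothing to do if the skb already completed or was never zerocopy. */
	if (!(skb_shinfo(skb)->tx_flags & SKBTX_DEV_ZEROCOPY))
		return;

	/* skb_copy_ubufs() replaces the userspace frags with kernel copies
	 * and fires the ubuf_info callback, so the guest pages are released
	 * even though the skb itself is still in flight.
	 */
	if (skb_copy_ubufs(skb, GFP_ATOMIC))
		pr_warn_ratelimited("zerocopy deadline copy failed\n");
}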
2012 Oct 31
8
[PATCHv2 net-next 0/8] enable/disable zero copy tx dynamically
tun has supported zero copy transmit since 0690899b4d4501b3505be069b9a687e68ccbe15b; however, you can only enable this mode if you know your workload does not trigger heavy guest to host/host to guest traffic - otherwise you get a (minor) performance regression. This patchset addresses this problem by notifying the owner device when the callback is invoked because of a data copy. This makes it possible to
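The notification amounts to the completion callback telling its owner whether the packet really went out zerocopy; a sketch of the resulting heuristic with hypothetical names (the counters and the 1/64 threshold are illustrative, not the exact in-tree logic):

#include <linux/types.h>

struct zc_owner {
	unsigned long tx_packets;	/* zerocopy completions seen */
	unsigned long tx_copied;	/* completions that fell back to a copy */
};

/* Called from the ubuf completion path with the outcome of this packet. */
static void zc_owner_complete(struct zc_owner *o, bool zerocopy_success)
{
	o->tx_packets++;
	if (!zerocopy_success)
		o->tx_copied++;
}

/* Keep zerocopy enabled only while copy-backed completions stay rare. */
static bool zc_owner_use_zerocopy(const struct zc_owner *o)
{
	return o->tx_copied * 64 < o->tx_packets;
}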
2012 Oct 29
9
[PATCH net-next 0/8] enable/disable zero copy tx dynamically
tun has supported zero copy transmit since 0690899b4d4501b3505be069b9a687e68ccbe15b; however, you can only enable this mode if you know your workload does not trigger heavy guest to host/host to guest traffic - otherwise you get a (minor) performance regression. This patchset addresses this problem by notifying the owner device when the callback is invoked because of a data copy. This makes it possible to
2012 Nov 01
9
[PATCHv3 net-next 0/8] enable/disable zero copy tx dynamically
tun has supported zero copy transmit since 0690899b4d4501b3505be069b9a687e68ccbe15b; however, you can only enable this mode if you know your workload does not trigger heavy guest to host/host to guest traffic - otherwise you get a (minor) performance regression. This patchset addresses this problem by notifying the owner device when the callback is invoked because of a data copy. This makes it possible to
2018 Sep 06
22
[PATCH net-next 00/11] Vhost_net TX batching
Hi all: This series tries to batch submitting packets to the underlying socket through msg_control during sendmsg(). This is done by: 1) doing the userspace copy inside vhost_net, 2) building XDP buffs, 3) batching at most 64 (VHOST_NET_BATCH) XDP buffs and submitting them at once through msg_control during sendmsg(), 4) letting the underlying sockets use the XDP buffs directly when XDP is enabled, or build skbs based on XDP
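A sketch of the msg_control batching described above, assuming a control structure along the lines of the tun_msg_ctl this series introduces (exact field names and the TUN_MSG_PTR value may differ); vhost_tx_flush_batch is a hypothetical helper:

#include <linux/net.h>
#include <linux/socket.h>
#include <net/xdp.h>

#define VHOST_NET_BATCH	64

struct tun_msg_ctl {
	unsigned short type;	/* e.g. TUN_MSG_PTR for an xdp_buff array */
	unsigned short num;	/* number of buffs behind ptr */
	void *ptr;
};

/* Submit a batch of XDP buffs to the underlying socket in one sendmsg(). */
static int vhost_tx_flush_batch(struct socket *sock, struct msghdr *msg,
				struct xdp_buff *xdp, unsigned short n)
{
	struct tun_msg_ctl ctl = {
		.type = 2,	/* assumed TUN_MSG_PTR */
		.num  = n,
		.ptr  = xdp,
	};

	msg->msg_control = &ctl;
	return sock->ops->sendmsg(sock, msg, 0);
}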
2014 Feb 25
2
[PATCH net] vhost: net: switch to use data copy if pending DMAs exceed the limit
We used to stop the handling of tx when the number of pending DMAs exceeds VHOST_MAX_PEND. This is used to reduce the memory occupation of both host and guest. But it was too aggressive in some cases, since any delay or blocking of a single packet may delay or block the guest transmission. Consider the following setup:
+-----+    +-----+
| VM1 |    | VM2 |
+--+--+
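A sketch of the fallback this patch argues for, with a hypothetical helper and illustrative constants: once too many zerocopy DMAs are outstanding, later packets simply go through the ordinary copy path instead of stalling TX.

#include <linux/types.h>

#define VHOST_MAX_PEND		128	/* illustrative in-flight budget */
#define VHOST_GOODCOPY_LEN	256	/* small packets are copied anyway */

/* Decide per packet whether zerocopy is still worth it. */
static bool tx_use_zerocopy(unsigned int pending_dmas, size_t len)
{
	return pending_dmas < VHOST_MAX_PEND && len >= VHOST_GOODCOPY_LEN;
}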
2018 Sep 12
14
[PATCH net-next V2 00/11] vhost_net TX batching
Hi all: This series tries to batch submitting packets to the underlying socket through msg_control during sendmsg(). This is done by: 1) doing the userspace copy inside vhost_net, 2) building XDP buffs, 3) batching at most 64 (VHOST_NET_BATCH) XDP buffs and submitting them at once through msg_control during sendmsg(), 4) letting the underlying sockets use the XDP buffs directly when XDP is enabled, or build skbs based on XDP