similar to: SKB paged fragment lifecycle on receive

Displaying 15 results from an estimated 10000 matches similar to: "SKB paged fragment lifecycle on receive"

2013 Jan 04
31
xennet: skb rides the rocket: 20 slots
Hi Ian, Today I fired up an old VM with a bittorrent client, trying to download some torrents. I seem to be hitting the unlikely case of "xennet: skb rides the rocket: xx slots", and this results in some dropped packets in domU; I don't see any warnings in dom0. I have added some extra info, but I don't have enough knowledge to tell whether this could/should be prevented from
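The warning fires when a single skb would need more frontend ring slots than xen-netfront is prepared to handle, which is why large frames from a bittorrent workload can trigger it. Below is a minimal sketch of the kind of slot accounting involved; the helper name and the exact limit check are illustrative, not the actual driver code.

#include <linux/kernel.h>
#include <linux/mm.h>
#include <linux/skbuff.h>

/* Illustrative only: count how many page-sized slots an skb would occupy,
 * covering both the linear header area and every paged fragment. */
static unsigned int count_skb_slots(const struct sk_buff *skb)
{
	unsigned int slots, i;
	unsigned long offset = offset_in_page(skb->data); /* linear data may start mid-page */

	slots = DIV_ROUND_UP(offset + skb_headlen(skb), PAGE_SIZE);

	for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
		const skb_frag_t *frag = &skb_shinfo(skb)->frags[i];

		slots += DIV_ROUND_UP(skb_frag_off(frag) + skb_frag_size(frag),
				      PAGE_SIZE);
	}
	return slots;
}

/* The transmit path would then compare the result against the ring's limit,
 * roughly: if (slots > some_max_slots) { warn ("rides the rocket") and drop }. */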
2014 Feb 26
2
[PATCH net] vhost: net: switch to use data copy if pending DMAs exceed the limit
On 02/26/2014 02:32 PM, Qin Chuanyu wrote: > On 2014/2/26 13:53, Jason Wang wrote: >> On 02/25/2014 09:57 PM, Michael S. Tsirkin wrote: >>> On Tue, Feb 25, 2014 at 02:53:58PM +0800, Jason Wang wrote: >>>> We used to stop the handling of tx when the number of pending DMAs >>>> exceeds VHOST_MAX_PEND. This is used to reduce the memory occupation
2014 Feb 26
2
[PATCH net] vhost: net: switch to use data copy if pending DMAs exceed the limit
On 02/25/2014 09:57 PM, Michael S. Tsirkin wrote: > On Tue, Feb 25, 2014 at 02:53:58PM +0800, Jason Wang wrote: >> We used to stop the handling of tx when the number of pending DMAs >> exceeds VHOST_MAX_PEND. This is used to reduce the memory occupation >> of both host and guest. But it was too aggressive in some cases, since >> any delay or blocking of a single packet
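The behaviour under discussion is roughly this: the tx path tracks how many zero-copy DMAs are still in flight and, rather than pausing tx once the count reaches VHOST_MAX_PEND, the patch falls back to copying the data. A hedged sketch of that policy follows; VHOST_MAX_PEND comes from the thread, while the value shown and the struct and function names are invented for illustration.

#include <stdbool.h>

#define VHOST_MAX_PEND 128          /* limit on in-flight zero-copy DMAs (value illustrative) */

struct tx_state {
	unsigned int upend_idx;     /* next zero-copy slot to use */
	unsigned int done_idx;      /* first slot whose DMA has not completed */
};

static bool too_many_pending(const struct tx_state *s)
{
	return s->upend_idx - s->done_idx >= VHOST_MAX_PEND;
}

static void handle_tx_packet(struct tx_state *s /* , the packet, ... */)
{
	if (too_many_pending(s)) {
		/* copy the payload into a host buffer and submit it;
		 * the guest buffer can be released immediately */
	} else {
		/* set up zero-copy DMA; the completion callback will
		 * eventually advance done_idx */
		s->upend_idx++;
	}
}

The point of the copy fallback is that a single slow completion no longer stalls the whole queue; the cost is extra copies while the backlog persists.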
2014 Feb 27
1
[PATCH net] vhost: net: switch to use data copy if pending DMAs exceed the limit
On 02/26/2014 05:23 PM, Michael S. Tsirkin wrote: > On Wed, Feb 26, 2014 at 03:11:21PM +0800, Jason Wang wrote: >> On 02/26/2014 02:32 PM, Qin Chuanyu wrote: >>> On 2014/2/26 13:53, Jason Wang wrote: >>>> On 02/25/2014 09:57 PM, Michael S. Tsirkin wrote: >>>>> On Tue, Feb 25, 2014 at 02:53:58PM +0800, Jason
2013 Jun 12
26
Interesting observation with network event notification and batching
Hi all, I'm hacking on netback, trying to identify whether TLB flushes cause a heavy performance penalty on the Tx path. The hack is quite nasty (you would not want to know, trust me). Basically what it does is: 1) alter the network protocol to pass along mfns instead of grant references, 2) when the backend sees a new mfn, map it RO and cache it in its own address space. With this
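The caching step described above amounts to a lookup table keyed by mfn, so a page only has to be mapped the first time the backend sees it. A bare sketch of such a lookup-or-map cache follows; it is purely illustrative and leaves out the actual Xen mapping call.

#include <stdint.h>

/* Hypothetical cache entry: machine frame number -> local read-only mapping. */
struct mfn_map {
	uint64_t mfn;
	void *vaddr;
};

#define MFN_CACHE_SIZE 1024
static struct mfn_map mfn_cache[MFN_CACHE_SIZE];

/* Look an mfn up in a trivial direct-mapped cache; on a miss, map the page
 * read-only (the real mapping hypercall is abstracted behind map_ro). */
static void *lookup_or_map(uint64_t mfn, void *(*map_ro)(uint64_t))
{
	struct mfn_map *e = &mfn_cache[mfn % MFN_CACHE_SIZE];

	if (!e->vaddr || e->mfn != mfn) {
		/* a real implementation would unmap any evicted entry here */
		e->vaddr = map_ro(mfn);
		e->mfn = mfn;
	}
	return e->vaddr;
}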
2013 Jul 09
20
[PATCH 1/1] xen/netback: correctly calculate required slots of skb.
When counting the required slots for an skb, netback directly uses DIV_ROUND_UP to get the slots required by the header data. This is wrong when the offset of the header data within its page is not zero, and it is also inconsistent with the subsequent slot calculation in netbk_gop_skb. In netbk_gop_skb, required slots are calculated based on the offset and length of the header data within the page. It is possible that required slots
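The arithmetic error is easy to see in isolation: a buffer that starts at a non-zero offset within its page can cross one more page boundary than DIV_ROUND_UP(len, PAGE_SIZE) accounts for. The small standalone example below illustrates the difference (the helper names are made up for the example).

#include <stdio.h>

#define PAGE_SIZE 4096UL
#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

/* Wrong: ignores where the header data starts within its first page. */
static unsigned long slots_ignoring_offset(unsigned long offset, unsigned long len)
{
	(void)offset;
	return DIV_ROUND_UP(len, PAGE_SIZE);
}

/* Consistent with netbk_gop_skb: count based on offset and length in the page. */
static unsigned long slots_with_offset(unsigned long offset, unsigned long len)
{
	return DIV_ROUND_UP(offset + len, PAGE_SIZE);
}

int main(void)
{
	/* 100 bytes of header data starting 4000 bytes into a page crosses a
	 * page boundary, so it really needs 2 slots, not 1. */
	printf("ignoring offset: %lu, with offset: %lu\n",
	       slots_ignoring_offset(4000, 100), slots_with_offset(4000, 100));
	return 0;
}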
2013 Apr 30
6
[PATCH net-next 2/2] xen-netback: avoid allocating variable size array on stack
Tune xen_netbk_count_requests so that it does not touch the working array beyond the limit, which lets us make the working array size constant. Signed-off-by: Wei Liu <wei.liu2@citrix.com> --- drivers/net/xen-netback/netback.c | 26 +++++++++++++++++++++----- 1 file changed, 21 insertions(+), 5 deletions(-) diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c index
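The shape of the fix is to bound the request-counting loop by the size of a fixed array rather than sizing an on-stack array from a request-derived value. A rough sketch of that pattern is below; the names, the bound, and the error handling are illustrative, not the actual patch.

#include <errno.h>

#define MAX_WORK_SLOTS 18   /* an assumed fixed upper bound on slots per skb */

struct tx_request { unsigned int size; /* ... */ };

/* Copy up to max_slots requests into a caller-provided, fixed-size array,
 * refusing to go past it no matter how much work the ring advertises. */
static int count_requests(struct tx_request *txp, unsigned int max_slots,
			  unsigned int work_to_do)
{
	unsigned int slots = 0;

	while (work_to_do--) {
		if (slots >= max_slots)
			return -E2BIG;          /* never write past the array */
		/* fetch the next ring request into txp[slots] here */
		slots++;
	}
	return slots;
}

/* The caller can then keep a constant-size array on the stack:
 *   struct tx_request txreqs[MAX_WORK_SLOTS];
 *   int n = count_requests(txreqs, MAX_WORK_SLOTS, work_to_do);
 */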
2010 Dec 03
7
skb_checksum_setup() placement in pv-ops vs. legacy kernel
Ian, Jeremy, although I know pretty little about networking, it nevertheless seems to me that the different placement of skb_checksum_setup() (in the receive paths of pv-ops vs. in various transmit paths in the legacy kernel) poses a compatibility problem (nothing is done on either side when sending from pv-ops to legacy, and it is done on both ends when sending from legacy to pv-ops). Am I overlooking something here? Thanks,
2013 Nov 28
4
[PATCH net] xen-netback: fix fragment detection in checksum setup
The code to detect fragments in checksum_setup() was missing for IPv4 and too eager for IPv6. (It transpires that Windows seems to send IPv6 packets with a fragment header even if they are not a fragment - i.e. offset is zero, and M bit is not set). Signed-off-by: Paul Durrant <paul.durrant@citrix.com> Cc: Wei Liu <wei.liu2@citrix.com> Cc: Ian Campbell <ian.campbell@citrix.com>
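For reference, the IPv6 case hinges on the fragment extension header: if the fragment offset is zero and the M ("more fragments") bit is clear, the packet is effectively unfragmented and checksum setup can still proceed. A hedged sketch of that test follows; the struct mirrors the on-wire fragment header layout and the masks follow the usual 13-bit-offset/M-flag split, but the names are invented for the example.

#include <stdbool.h>
#include <stdint.h>
#include <arpa/inet.h>          /* ntohs */

/* IPv6 fragment extension header (RFC 8200, section 4.5). */
struct ipv6_frag_hdr {
	uint8_t  nexthdr;
	uint8_t  reserved;
	uint16_t frag_off;      /* 13-bit fragment offset, 2 reserved bits, M flag */
	uint32_t identification;
};

#define IP6_FRAG_OFFSET 0xFFF8  /* fragment offset mask (host order) */
#define IP6_FRAG_MF     0x0001  /* "more fragments" flag */

/* A header with offset zero and M clear (as Windows apparently sends) does
 * not describe a real fragment. */
static bool is_real_fragment(const struct ipv6_frag_hdr *fh)
{
	uint16_t off = ntohs(fh->frag_off);

	return (off & (IP6_FRAG_OFFSET | IP6_FRAG_MF)) != 0;
}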
2023 Mar 28
8
[PATCH net-next 0/8] virtio_net: refactor xdp codes
For historical reasons, the implementation of XDP in virtio-net is relatively chaotic. For example, the processing of XDP actions has two copies of similar code, such as the page and xdp_page handling. The purpose of this patch set is to refactor this code and reduce the difficulty of subsequent maintenance, so that subsequent developers will not introduce new bugs because of some complex logical
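One concrete example of the duplication is the handling of the XDP verdict, which appears in both the small-buffer and mergeable-buffer receive paths. A generic sketch of pulling that into one helper is shown below; it is not the virtio-net code, the return values are hypothetical, and the driver-specific TX and tracepoint hooks are left as comments.

#include <linux/netdevice.h>
#include <linux/filter.h>
#include <net/xdp.h>

/* Hypothetical unified verdict handler: run the XDP program once and tell the
 * caller what to do with the buffer. */
enum rx_xdp_result { RX_XDP_PASS, RX_XDP_CONSUMED, RX_XDP_DROP };

static enum rx_xdp_result run_xdp(struct net_device *dev,
				  struct bpf_prog *prog, struct xdp_buff *xdp)
{
	u32 act = bpf_prog_run_xdp(prog, xdp);

	switch (act) {
	case XDP_PASS:
		return RX_XDP_PASS;               /* caller builds an skb as usual */
	case XDP_TX:
		/* hand the buffer to the driver's XDP transmit path here */
		return RX_XDP_CONSUMED;
	case XDP_REDIRECT:
		if (xdp_do_redirect(dev, xdp, prog))
			return RX_XDP_DROP;
		return RX_XDP_CONSUMED;
	case XDP_ABORTED:
	case XDP_DROP:
	default:
		/* unexpected or aborted actions are treated as drops;
		 * real drivers also emit a tracepoint here */
		return RX_XDP_DROP;
	}
}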
2023 Mar 22
9
[PATCH net-next 0/8] virtio_net: refactor xdp codes
For historical reasons, the implementation of XDP in virtio-net is relatively chaotic. For example, the processing of XDP actions has two copies of similar code, such as the page and xdp_page handling. The purpose of this patch set is to refactor this code and reduce the difficulty of subsequent maintenance, so that subsequent developers will not introduce new bugs because of some complex logical
2012 Aug 13
9
[PATCH RFC] xen/netback: Count ring slots properly when larger MTU sizes are used
Hi, I ran into an issue where the netback driver crashes with BUG_ON(npo.meta_prod > ARRAY_SIZE(netbk->meta)). It happens on an Intel 10Gbps network when larger MTU values are used. The problem seems to be the way the slots are counted. After applying this patch things ran fine in my environment. I request that you validate my changes. Thanks Siva
2023 Mar 15
10
[RFC net-next 0/8] virtio_net: refactor xdp codes
For historical reasons, the implementation of XDP in virtio-net is relatively chaotic. For example, the processing of XDP actions has two copies of similar code, such as the page and xdp_page handling. The purpose of this patch set is to refactor this code and reduce the difficulty of subsequent maintenance, so that subsequent developers will not introduce new bugs because of some complex logical
2012 Nov 26
1
[net-next RFC] pktgen: don't wait for the device who doesn't free skb immediately after sent
Some devices do not free old tx skbs immediately after they have been sent (usually in the tx interrupt). One such example is virtio-net, which optimizes for virtualization and only frees possible old tx skbs during the next packet transmission. This would lead pktgen to wait forever on the refcount of the skb if no other packet will be sent afterwards. Solve this issue by introducing a new flag
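The stall is easy to model: pktgen reuses one skb and, between sends, spins until the device has dropped its reference, which never happens if the device only releases old tx skbs lazily. A simplified userspace model of the wait and of the proposed opt-out flag is below; the flag name is hypothetical and the real change lives in pktgen itself.

#include <stdatomic.h>
#include <sched.h>

struct fake_skb { atomic_int users; };    /* stand-in for the skb refcount */

#define F_NO_SKB_WAIT (1u << 0)           /* hypothetical pktgen flag */

static void wait_for_skb_release(struct fake_skb *skb, unsigned int flags)
{
	if (flags & F_NO_SKB_WAIT)
		return;   /* device frees tx skbs lazily; don't spin on it */

	/* classic behaviour: spin until only pktgen's own reference remains */
	while (atomic_load(&skb->users) != 1)
		sched_yield();
}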
2011 Jan 06
2
Flow Control and Port Mirroring Revisited
Hi, Back in October I reported that I noticed a problem whereby flow control breaks down when openvswitch is configured to mirror a port[1]. I have (finally) looked into this further and the problem appears to relate to cloning of skbs, as Jesse Gross originally suspected. More specifically, in do_execute_actions[2] the first n-1 times that an skb needs to be transmitted it is cloned first and
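The cloning detail matters for flow control because a clone does not carry the sender's socket accounting, so only the final transmission of the original skb exerts backpressure on the source. Below is a bare sketch of the clone-for-all-but-the-last pattern described for do_execute_actions; the names are simplified and this is not the openvswitch code.

#include <linux/skbuff.h>
#include <linux/netdevice.h>

/* Illustrative only: send one skb out of n ports, cloning for every port
 * except the last, which gets the original skb. */
static void send_to_ports(struct sk_buff *skb,
			  struct net_device **ports, int n,
			  void (*do_output)(struct sk_buff *, struct net_device *))
{
	int i;

	for (i = 0; i < n - 1; i++) {
		struct sk_buff *clone = skb_clone(skb, GFP_ATOMIC);

		if (clone)
			do_output(clone, ports[i]);  /* clones: no socket backpressure */
	}
	do_output(skb, ports[n - 1]);                /* original goes to the last port */
}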