similar to: performance regression in virtio-net in 2.6.32-rc4

Displaying 20 results from an estimated 3000 matches similar to: "performance regression in virtio-net in 2.6.32-rc4"

2009 Sep 04
2
Xen & netperf
First, I apologize if this message has been received multiple times. I'm having problems subscribing to this mailing list: Hi xen-users, I am trying to decide whether I should run a game server inside a Xen domain. My primary reason for wanting to virtualize is that I want to isolate this environment from the rest of my server. I really like the idea of isolating the game server
2011 Oct 27
0
No subject
box. I'll send an updated KVM tools patch in a bit as well. Before: # netperf -H 192.168.33.4,ipv4 -t TCP_RR MIGRATED TCP REQUEST/RESPONSE TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 192.168.33.4 (192.168.33.4) port 0 AF_INET : first burst 0 Local /Remote Socket Size Request Resp. Elapsed Trans. Send Recv Size Size Time Rate bytes Bytes bytes bytes
2013 Oct 31
0
[PATCH net-next 2/2] virtio-net: coalesce rx frags when possible during rx
Commit 2613af0ed18a11d5c566a81f9a6510b73180660a (virtio_net: migrate mergeable rx buffers to page frag allocators) tried to increase the payload/truesize ratio for MTU-sized traffic. But this introduced extra overhead for received GSO packets because of the frag list. This commit reduces that overhead by coalescing rx frags where possible during rx. Test results show about a 15%
2013 Oct 31
0
[PATCH net-next V2 2/2] virtio-net: coalesce rx frags when possible during rx
Commit 2613af0ed18a11d5c566a81f9a6510b73180660a (virtio_net: migrate mergeable rx buffers to page frag allocators) tried to increase the payload/truesize ratio for MTU-sized traffic. But this introduced extra overhead for received GSO packets because of the frag list. This commit reduces that overhead by coalescing rx frags where possible during rx. Test results show about a 15%
2013 Nov 01
0
[PATCH net-next V3 2/2] virtio-net: coalesce rx frags when possible during rx
Commit 2613af0ed18a11d5c566a81f9a6510b73180660a (virtio_net: migrate mergeable rx buffers to page frag allocators) tried to increase the payload/truesize ratio for MTU-sized traffic. But this introduced extra overhead for received GSO packets because of the frag list. This commit reduces that overhead by coalescing rx frags where possible during rx. Test results show about a 15%
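
The idea in the patch series above, as a hedged sketch: during mergeable rx, if the incoming buffer sits directly after the last fragment already attached to the skb (same page, contiguous offset), grow that fragment instead of consuming a new frag slot. The helper below is illustrative, not the driver's actual receive path; skb_coalesce_rx_frag() is the helper introduced by the companion patch.

    #include <linux/skbuff.h>

    /* Illustrative only: merge a newly received buffer into the skb's last
     * frag when it is physically contiguous with it, so GSO packets do not
     * burn one frag slot (or a frag-list entry) per receive buffer. */
    static void rx_add_or_coalesce(struct sk_buff *skb, struct page *page,
                                   unsigned int offset, unsigned int len,
                                   unsigned int truesize)
    {
            int i = skb_shinfo(skb)->nr_frags;

            if (i > 0) {
                    skb_frag_t *last = &skb_shinfo(skb)->frags[i - 1];

                    /* Contiguous with the previous frag? Just extend it. */
                    if (page == skb_frag_page(last) &&
                        offset == skb_frag_off(last) + skb_frag_size(last)) {
                            skb_coalesce_rx_frag(skb, i - 1, len, truesize);
                            return;
                    }
            }
            skb_add_rx_frag(skb, i, page, offset, len, truesize);
    }
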
2009 Jun 10
5
trouble with maxbw
Folks, I'm playing with maxbw on links (as opposed to flows) in Crossbow, and I have a couple of questions. First, the limits seem to be only advisory. The first example has the main host talking to a zone that has 172.16.17.100 configured on znic0. When there is no maxbw, the throughput is as expected; when maxbw is 55M the throughput only drops to 76 Mbps: # netperf -H
2011 Nov 29
1
[PATCH] virtio-ring: Use threshold for switching to indirect descriptors
Currently if VIRTIO_RING_F_INDIRECT_DESC is enabled we will use indirect descriptors even if we have plenty of space in the ring. This means that we take a performance hit at all times due to the overhead of creating indirect descriptors. With this patch, we will use indirect descriptors only if we have fewer than either 16 descriptors, or 12% of the total number of descriptors, available. I did basic
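
A rough sketch of the heuristic described above, reading "less than either 16, or 12%" as a max() of the two bounds; the function name, parameters, and that reading are assumptions, not the patch's code:

    #include <stdbool.h>

    /* Illustrative only: switch to indirect descriptors when free ring
     * space drops below max(16, 12% of the ring size), per the patch
     * description above. */
    static bool should_use_indirect(unsigned int num_free,
                                    unsigned int ring_size,
                                    unsigned int needed_descs)
    {
            unsigned int threshold = ring_size * 12 / 100;

            if (threshold < 16)
                    threshold = 16;

            /* Indirect only pays off for multi-descriptor requests, and
             * only when direct slots are scarce. */
            return needed_descs > 1 && num_free < threshold;
    }
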
2020 Jun 11
0
[PATCH RFC v7 03/14] vhost: use batched get_vq_desc version
On Wed, Jun 10, 2020 at 06:18:32PM +0200, Eugenio Perez Martin wrote: > On Wed, Jun 10, 2020 at 5:13 PM Michael S. Tsirkin <mst at redhat.com> wrote: > > > > On Wed, Jun 10, 2020 at 02:37:50PM +0200, Eugenio Perez Martin wrote: > > > > +/* This function returns a value > 0 if a descriptor was found, or 0 if none were found. > > > > + * A negative
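
The comment quoted in this review describes a three-way return convention; a minimal illustrative sketch of that convention (the struct and names here are hypothetical, not vhost's actual code):

    #include <errno.h>

    struct demo_queue {
            unsigned int avail_idx;  /* next descriptor made available */
            unsigned int used_idx;   /* next descriptor we will consume */
            int          broken;     /* set when the queue hit an error */
    };

    /* Returns > 0 if a descriptor was found, 0 if none were found, and a
     * negative errno on failure, matching the convention quoted above. */
    static int demo_get_desc(struct demo_queue *vq)
    {
            if (vq->broken)
                    return -EFAULT;
            if (vq->avail_idx == vq->used_idx)
                    return 0;
            vq->used_idx++;
            return 1;
    }
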
2013 Jun 06
4
[PATCH] virtio-net: put virtio net header inline with data
For small packets we can simplify xmit processing by linearizing buffers with the header: most packets seem to have enough headroom that we can use for this purpose. Since some older hypervisors (e.g. qemu before version 1.5) required that the header be the first s/g element, we need a feature bit for this. Signed-off-by: Michael S. Tsirkin <mst at redhat.com> --- This is a repost of my old
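
A hedged sketch of the transmit-side idea, simplified from what the actual driver would do: when the skb has enough headroom (and the device tolerates the layout, per the feature bit mentioned above), push the virtio-net header directly in front of the packet so header plus data form one linear buffer. The helper name is illustrative.

    #include <linux/skbuff.h>
    #include <linux/string.h>
    #include <linux/virtio_net.h>

    /* Illustrative only: prepend the virtio-net header into existing
     * headroom so header and packet go out as a single s/g element. */
    static struct virtio_net_hdr *push_vnet_hdr(struct sk_buff *skb)
    {
            struct virtio_net_hdr *hdr;

            if (skb_headroom(skb) < sizeof(*hdr))
                    return NULL;  /* no room: use a separate s/g element */

            hdr = (struct virtio_net_hdr *)skb_push(skb, sizeof(*hdr));
            memset(hdr, 0, sizeof(*hdr));
            return hdr;
    }
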
2020 Jun 16
0
[PATCH RFC v7 03/14] vhost: use batched get_vq_desc version
On Tue, Jun 16, 2020 at 05:23:43PM +0200, Eugenio Perez Martin wrote: > On Mon, Jun 15, 2020 at 6:05 PM Eugenio Pérez <eperezma at redhat.com> wrote: > > > > On Thu, 2020-06-11 at 07:30 -0400, Michael S. Tsirkin wrote: > > > On Wed, Jun 10, 2020 at 06:18:32PM +0200, Eugenio Perez Martin wrote: > > > > On Wed, Jun 10, 2020 at 5:13 PM Michael S. Tsirkin
2017 Dec 07
2
[PATCH net-next] virtio_net: Disable interrupts if napi_complete_done rescheduled napi
Since commit 39e6c8208d7b ("net: solve a NAPI race") napi can be rescheduled within napi_complete_done() even in the non-busypoll case, but virtnet_poll() always enabled interrupts before completing, and when napi was rescheduled within napi_complete_done() it did not disable them again. This caused extra interrupts when event idx is disabled. According to commit cbdadbbf0c79
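
The fix described above follows the standard NAPI pattern; a hedged sketch, with the poll routine simplified and the extra virtqueue parameter added only for illustration: re-enable the virtqueue callback only when napi_complete_done() confirms NAPI actually completed.

    #include <linux/netdevice.h>
    #include <linux/virtio.h>

    /* Illustrative poll routine: if napi_complete_done() reschedules us
     * (returns false), leave virtqueue interrupts disabled rather than
     * re-arming them unconditionally. */
    static int demo_poll(struct napi_struct *napi, int budget,
                         struct virtqueue *vq)
    {
            int work_done = 0;

            /* ... receive up to budget packets, counting in work_done ... */

            if (work_done < budget) {
                    if (napi_complete_done(napi, work_done))
                            virtqueue_enable_cb(vq);  /* safe to re-arm */
                    /* else: rescheduled; keep callbacks disabled */
            }
            return work_done;
    }
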
2013 Jun 07
0
[PATCH] virtio-net: put virtio net header inline with data
On 06/06/2013 05:55 PM, Michael S. Tsirkin wrote: > For small packets we can simplify xmit processing by linearizing buffers > with the header: most packets seem to have enough headroom that we can use > for this purpose. > > Since some older hypervisors (e.g. qemu before version 1.5) > required that the header be the first s/g element, > we need a feature bit for this. > >
2013 Nov 01
5
[PATCH net-next V3 1/2] net: introduce skb_coalesce_rx_frag()
Sometimes we need to coalesce rx frags to avoid a frag list. One example is the virtio-net driver, which tries to use small frags for both MTU-sized packets and GSO packets. So this patch introduces skb_coalesce_rx_frag() to do this. Cc: Rusty Russell <rusty at rustcorp.com.au> Cc: Michael S. Tsirkin <mst at redhat.com> Cc: Michael Dalton <mwdalton at google.com> Cc: Eric Dumazet
2013 Oct 31
4
[PATCH net-next V2 1/2] net: introduce skb_coalesce_rx_frag()
Sometimes we need to coalesce rx frags to avoid a frag list. One example is the virtio-net driver, which tries to use small frags for both MTU-sized packets and GSO packets. So this patch introduces skb_coalesce_rx_frag() to do this. Cc: Rusty Russell <rusty at rustcorp.com.au> Cc: Michael S. Tsirkin <mst at redhat.com> Cc: Michael Dalton <mwdalton at google.com> Cc: Eric Dumazet
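
For reference, the helper these patches introduce amounts to growing an existing rx frag in place rather than attaching a new one; a sketch consistent with the description above (the exact upstream body may differ):

    #include <linux/skbuff.h>

    /* Grow frag i of an rx skb by size bytes instead of adding a new frag,
     * keeping the skb's len, data_len and truesize accounting in step. */
    void skb_coalesce_rx_frag(struct sk_buff *skb, int i, int size,
                              unsigned int truesize)
    {
            skb_frag_t *frag = &skb_shinfo(skb)->frags[i];

            skb_frag_size_add(frag, size);
            skb->len += size;
            skb->data_len += size;
            skb->truesize += truesize;
    }
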