Displaying 20 results from an estimated 10000 matches similar to: "Xen & netperf"
2009 Jun 10
5
trouble with maxbw
Folks,
I'm playing with maxbw on links (as opposed to flows) in Crossbow, and I
have a couple of questions. First, the limits seem to be only advisory. The first
example has the main host talking to a zone that has 172.16.17.100
configured on znic0. When there is no maxbw, the throughput is
as expected; when maxbw is set to 55M, the throughput only drops to 76 Mbps:
# netperf -H
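The excerpt cuts off before the actual commands; a rough sketch of the kind of setup being described, assuming the zone's VNIC is managed with dladm from the global zone (the link name and address are taken from the excerpt, the rest is illustrative):
# cap the zone's VNIC at 55 Mbps; maxbw here is a link property, not a flow
dladm set-linkprop -p maxbw=55M znic0
dladm show-linkprop -p maxbw znic0
# then measure throughput from the global zone into the zone
netperf -H 172.16.17.100 -t TCP_STREAM -l 30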
2011 Oct 27
0
No subject
box.
I'll send an updated KVM tools patch in a bit as well.
Before:
# netperf -H 192.168.33.4,ipv4 -t TCP_RR
MIGRATED TCP REQUEST/RESPONSE TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET
to 192.168.33.4 (192.168.33.4) port 0 AF_INET : first burst 0
Local /Remote
Socket Size   Request  Resp.   Elapsed  Trans.
Send   Recv   Size     Size    Time     Rate
bytes  Bytes  bytes    bytes
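For comparison, a hedged sketch of the same kind of transaction-rate test with the request/response sizes spelled out explicitly; the address is taken from the excerpt, the sizes and run length are illustrative:
# one-byte request/response round trips for 30 seconds
netperf -H 192.168.33.4,ipv4 -t TCP_RR -l 30 -- -r 1,1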
2009 Oct 26
2
performance regression in virtio-net in 2.6.32-rc4
Hi!
I noticed a performance regression in virtio-net: going from
2.6.31 to 2.6.32-rc4 I see the following, for guest-to-host communication:
[mst at tuck ~]$ ssh robin sh streamtest1
TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 11.0.0.3
(11.0.0.3) port 0 AF_INET : demo
Recv   Send    Send
Socket Socket  Message  Elapsed
Size   Size    Size     Time     Throughput
bytes  bytes   bytes    secs.
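The streamtest1 script itself is not shown; a minimal guess at something equivalent, sweeping a few message sizes against the host at 11.0.0.3 (the sizes and run length are assumptions):
#!/bin/sh
# guest-to-host TCP_STREAM runs at several send sizes
for m in 256 1024 4096 16384 65536; do
    netperf -H 11.0.0.3 -t TCP_STREAM -l 30 -- -m $m
done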
2013 Nov 01
5
[PATCH net-next V3 1/2] net: introduce skb_coalesce_rx_frag()
Sometimes we need to coalesce the rx frags to avoid a frag list. One example is
the virtio-net driver, which tries to use small frags for both MTU-sized packets
and GSO packets. So this patch introduces skb_coalesce_rx_frag() to do this.
Cc: Rusty Russell <rusty at rustcorp.com.au>
Cc: Michael S. Tsirkin <mst at redhat.com>
Cc: Michael Dalton <mwdalton at google.com>
Cc: Eric Dumazet
2017 Dec 07
2
[PATCH net-next] virtio_net: Disable interrupts if napi_complete_done rescheduled napi
Since commit 39e6c8208d7b ("net: solve a NAPI race"), napi can be
rescheduled within napi_complete_done() even in the non-busypoll case,
but virtnet_poll() always enabled interrupts before completing, and when
napi was rescheduled within napi_complete_done() it did not disable
interrupts again.
This caused more interrupts when event idx is disabled.
According to commit cbdadbbf0c79
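One hedged way to observe the effect described here from inside the guest is to compare the virtio-net queue interrupt counters around a netperf run (the peer address and run length are illustrative):
# count virtio interrupts before and after a request/response run
PEER=10.0.0.1                      # traffic peer running netserver (illustrative)
grep virtio /proc/interrupts > /tmp/irq.before
netperf -H "$PEER" -t TCP_RR -l 30
grep virtio /proc/interrupts > /tmp/irq.after
diff /tmp/irq.before /tmp/irq.after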
2013 Oct 31
4
[PATCH net-next V2 1/2] net: introduce skb_coalesce_rx_frag()
Sometimes we need to coalesce the rx frags to avoid a frag list. One example is
the virtio-net driver, which tries to use small frags for both MTU-sized packets
and GSO packets. So this patch introduces skb_coalesce_rx_frag() to do this.
Cc: Rusty Russell <rusty at rustcorp.com.au>
Cc: Michael S. Tsirkin <mst at redhat.com>
Cc: Michael Dalton <mwdalton at google.com>
Cc: Eric Dumazet
2013 Oct 31
6
[PATCH net-next 1/2] net: introduce skb_coalesce_rx_frag()
Sometimes we need to coalesce the rx frags to avoid a frag list. One example is
the virtio-net driver, which tries to use small frags for both MTU-sized packets
and GSO packets. So this patch introduces skb_coalesce_rx_frag() to do this.
Cc: Rusty Russell <rusty at rustcorp.com.au>
Cc: Michael S. Tsirkin <mst at redhat.com>
Cc: Michael Dalton <mwdalton at google.com>
Cc: Eric Dumazet
2013 Oct 31
0
[PATCH net-next 2/2] virtio-net: coalesce rx frags when possible during rx
Commit 2613af0ed18a11d5c566a81f9a6510b73180660a (virtio_net: migrate mergeable
rx buffers to page frag allocators) tries to increase the payload/truesize ratio for
MTU-sized traffic. But this introduces extra overhead for received GSO packets
because of the frag list. This commit tries to reduce that overhead by coalescing
rx frags when possible during rx. Test results show about 15%
2013 Oct 31
0
[PATCH net-next V2 2/2] virtio-net: coalesce rx frags when possible during rx
Commit 2613af0ed18a11d5c566a81f9a6510b73180660a (virtio_net: migrate mergeable
rx buffers to page frag allocators) tries to increase the payload/truesize ratio for
MTU-sized traffic. But this introduces extra overhead for received GSO packets
because of the frag list. This commit tries to reduce that overhead by coalescing
rx frags when possible during rx. Test results show about 15%
2013 Nov 01
0
[PATCH net-next V3 2/2] virtio-net: coalesce rx frags when possible during rx
Commit 2613af0ed18a11d5c566a81f9a6510b73180660a (virtio_net: migrate mergeable
rx buffers to page frag allocators) tries to increase the payload/truesize ratio for
MTU-sized traffic. But this introduces extra overhead for received GSO packets
because of the frag list. This commit tries to reduce that overhead by coalescing
rx frags when possible during rx. Test results show about 15%
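The ~15% figure quoted in these commit messages is for GSO-sized receives in the guest; a hedged sketch of a test that exercises that path (the guest address and message size are assumptions, not taken from the patches):
# in the guest: start the netperf server
netserver
# on the host: stream large messages at the guest so its rx path sees GSO-sized packets
netperf -H 192.168.122.10 -t TCP_STREAM -l 60 -- -m 65536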
2012 Jan 03
7
Low performance
Hi!
I run rsync between two machines. The throughput is only 2 MByte/sec.
Each machine is a Supermicro server with
2 x 8 Core Opteron 6128
64 GByte of ECC RAM
1 LSI MegaRAID SAS 9280-24i4e
24 x 2TByte SATA Disks as a RAID6
2 Intel Corporation 82599EB 10-Gigabit SFI/SFP+ Network-cards
Both run Ubuntu 11.04, 64-bit.
Both use rsync version 3.0.7, protocol version 30.
There are no
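2 MByte/sec on this class of hardware usually means the bottleneck is the network path, the RAID, or rsync's own CPU cost, so a hedged first step is to measure the first two in isolation (host names, mount points, and sizes below are illustrative):
# raw TCP throughput between the two boxes (netserver running on the peer)
netperf -H backup-host -t TCP_STREAM -l 30
# raw sequential write speed of the RAID6 volume, bypassing the page cache
dd if=/dev/zero of=/raid/ddtest bs=1M count=4096 oflag=direct
# rsync again with whole-file copies to take the delta algorithm out of the picture
rsync -a -W --progress /raid/data/ backup-host:/raid/data/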
2010 Sep 13
2
TCP flow latency graphs
Hello,
we have an application which gets some data from our database, but
to print just one result it connects to the database many times over the WAN.
Of course there are some performance-related problems with this type of workload.
I'd like to analyze (not necessarily online) its TCP flow, especially its
latency (using a port redirect on the switch).
Do you know any software (similar to tcpdump or so) which
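One hedged offline workflow that fits this description: capture the mirrored database traffic with tcpdump and post-process the pcap with a tool such as tcptrace for per-connection RTT statistics (the interface, host name, and port below are placeholders):
# capture full packets of the mirrored DB traffic to a file
tcpdump -i eth1 -s 0 -w dbflow.pcap host db-server and port 5432
# offline: per-connection round-trip-time statistics from the capture
tcptrace -l -r dbflow.pcap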
2013 Jun 06
4
[PATCH] virtio-net: put virtio net header inline with data
For small packets we can simplify xmit processing by linearizing buffers
with the header: most packets seem to have enough headroom that we can use
for this purpose.
Since some older hypervisors (e.g. qemu before version 1.5)
required that the header be the first s/g element,
we need a feature bit for this.
Signed-off-by: Michael S. Tsirkin <mst at redhat.com>
---
This is a repost of my old
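A hedged way to check whether a given setup is affected, based only on what the excerpt states: confirm the hypervisor version and look at the feature bits the guest negotiated for its virtio-net device (the device path is illustrative, and which bit index corresponds to this feature is not stated in the excerpt):
# hypervisor side: qemu 1.5 or later no longer needs the header as the first s/g element
qemu-system-x86_64 --version
# guest side: negotiated feature bits for the virtio device, one character per bit
cat /sys/bus/virtio/devices/virtio0/features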