Displaying 20 results from an estimated 439 matches for "netperf".
2010 Aug 18
0
Re: [netperf-talk] How do I configure my firewall to run netperf? I use shorewall (an iptables firewall) on Debian
On 16/08/2010 19:20, Rick Jones wrote:
> Klein Stéphane wrote:
>> Hi,
>>
>> I've two computers:
>> * A: it's a server with a firewall
>> * B: a computer on the internet
>>
>> I've installed netserver on host A.
>> I use netperf on host B.
>>
>> On host B, I launch :
>>
>> $ netperf -H host_A_address_IP
>>
>> If I stop the firewall on host A, everything works fine.
>> It doesn't work when the firewall is enabled.
>>
>> In the firewall rules, I've opened the default netserv...
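For reference, a minimal iptables sketch for host A that should let such a test through. It assumes netserver's default control port (TCP 12865) and pins the data connection to an arbitrarily chosen port (12866 here) via netperf's test-specific -P option; the port numbers are placeholders, the -P local,remote syntax is from memory (verify against the netperf manual), and shorewall users would express the same rules in /etc/shorewall/rules rather than with raw iptables:

# on host A (the netserver side):
iptables -A INPUT -p tcp --dport 12865 -j ACCEPT   # netperf control connection
iptables -A INPUT -p tcp --dport 12866 -j ACCEPT   # pinned data connection

# on host B, ask for that remote data port explicitly:
$ netperf -H host_A_address_IP -- -P ,12866

Without pinning, the data connection lands on an ephemeral port, which is usually what a stateless firewall ends up dropping.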
2006 Oct 17
3
Large differences between netperf results on every run
Hi all,
the throughput measured by netperf differs greatly from run to run. The
changeset was xen-unstable.hg C/S 11760. I observed this when running
netperf on DomU connecting to a netserver on Dom0 in the same box. The
observed throughput ranged from 185 Mbps to 3854 Mbps. I have never
seen such variation on ia64.
Regards,
Hiroya...
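One way to quantify this kind of run-to-run variance is netperf's built-in confidence-interval support: the global -I option sets the confidence level and interval width, and -i sets the maximum and minimum number of iterations. A minimal sketch (the Dom0 address is a placeholder):

$ netperf -H <dom0_ip> -t TCP_STREAM -I 99,5 -i 30,3

If netperf cannot reach the requested interval within the iteration limit it warns that the confidence level was not achieved, which is itself a useful signal that the numbers are unstable.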
2011 Oct 27
0
No subject
box.
I'll send an updated KVM tools patch in a bit as well.
Before:
# netperf -H 192.168.33.4,ipv4 -t TCP_RR
MIGRATED TCP REQUEST/RESPONSE TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET
to 192.168.33.4 (192.168.33.4) port 0 AF_INET : first burst 0
Local /Remote
Socket Size   Request  Resp.   Elapsed  Trans.
Send   Recv   Size     Size    Time     Rate
bytes  Bytes  byte...
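For completeness, a run like the one quoted above needs netserver listening on the target first; a minimal sketch (the address is taken from the quoted output, the port and test length are just illustrative defaults):

# on the target (192.168.33.4 in the run above):
netserver -p 12865

# on the client:
netperf -H 192.168.33.4,ipv4 -t TCP_RR -l 30 -- -r 1,1

The ",ipv4" suffix on -H forces the address family, and the test-specific -r sets the request/response sizes in bytes.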
2005 Jul 11
9
HTB Rate and Prio (continued)
Hi again,
I keep posting about my problem with HTB ->
http://mailman.ds9a.nl/pipermail/lartc/2005q3/016611.html
With a bit of searching I recently found the exact same problem I have in
the 2004 archives, with some graphs that explain it far better than I did ->
http://mailman.ds9a.nl/pipermail/lartc/2004q4/014519.html
and
http://mailman.ds9a.nl/pipermail/lartc/2004q4/014568.html
2018 Jun 30
1
[PATCH net-next v3 4/4] net: vhost: add rx busy polling in tx path
...ck receive queue at the
> same time. handle_rx does that in the same way.
>
> We set poll-us=100us and use iperf3 to test
Where/how do you configure poll-us=100us?
Are you talking about /proc/sys/net/core/busy_poll?
p.s. Nice performance boost! :-)
> its bandwidth, and netperf to test throughput and mean
> latency. When running the tests, the vhost-net kthread of
> that VM is always at 100% CPU. The commands are shown below.
>
> iperf3 -s -D
> iperf3 -c IP -i 1 -P 1 -t 20 -M 1400
>
> or
> netserver
> netperf -H IP -t TCP_RR -l 20 -- -O &quo...
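On the reviewer's question: the generic kernel knobs for socket busy polling are the net.core.busy_poll and net.core.busy_read sysctls, both in microseconds. Whether the patch's poll-us parameter maps onto these or is a separate vhost-side setting is not clear from the snippet, so treat this only as a sketch of the sysctl route:

# poll up to 100us in poll()/select() and in blocking reads:
sysctl -w net.core.busy_poll=100
sysctl -w net.core.busy_read=100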
2009 Sep 04
2
Xen & netperf
...There will only be a single virtual machine, and I am, for
all practical purposes, the only person with access to the box.
Anyway, the game server is more network intensive than CPU intensive,
and that will be my primary criterion for deciding whether to virtualize.
I ran some naive benchmarks with netperf on my Dom0 (debian lenny w/
xen 3.2.1), DomU, and my Linux box at home. Dom0 and DomU are
connected by a public network (100 Mbps link) and a private network (1
Gbps link). All netperf tests were run with defaults (w/o any extra
options).
Dom0 to Dom0 (local)
TCP STREAM TEST from 0.0.0.0 (0.0.0....
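When a pair of hosts is connected by both a 100 Mbps public link and a 1 Gbps private link, a default netperf run uses whichever path routing picks for the given address. To be sure a test exercises the private link, netperf's global -L option can pin the local end and -H the remote end to the private addresses; a sketch with placeholder addresses:

$ netperf -L <local_private_ip> -H <remote_private_ip> -t TCP_STREAM -l 30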
2014 Aug 21
2
[PATCH] vhost: Add polling mode
"Michael S. Tsirkin" <mst at redhat.com> wrote on 20/08/2014 01:57:10 PM:
> > Results:
> >
> > Netperf, 1 vm:
> > The polling patch improved throughput by ~33% (1516 MB/sec -> 2046 MB/sec).
> > Number of exits/sec decreased 6x.
> > The same improvement was shown when I tested with 3 VMs running netperf
> > (4086 MB/sec -> 5545 MB/sec).
> >
> > filebench,...
2016 Apr 06
1
[virtio-dev] virtio-vsock live migration
On Wed, Mar 16, 2016 at 05:05:19PM +0200, Michael S. Tsirkin wrote:
> > > > NFS and netperf are the first two protocols I looked
> > > > at and both transmit address information across the connection...
> > >
> > >
> > > Does netperf really attempt to get local IP
> > > and then send that inline within the connection?
> >
> &g...
2008 Sep 10
0
netperf strange issue
Hi,
I'm not able to get netperf working for a virtual PCI (vif) domU.
However, I'm able to get it working when doing pass-through I/O. I'm
not sure what the problem is. However, I am able to ping the
destination machine using the vif domU. Is there any port, etc., that
needs to be unblocked here?
Regards,
Asim
_...
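Since ping works but netperf does not, the usual first check is that the control port is actually reachable from the client. A quick sketch, assuming netserver runs on the vif domU with its default port:

# on the host running netserver:
ss -ltn | grep 12865

# from the netperf side, a basic reachability probe:
nc -vz <netserver_ip> 12865

If the control connection succeeds but the test still stalls, the data connection is the next suspect (see the firewall discussion in the first thread above).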
2009 Jun 10
5
trouble with maxbw
...ssbow, and I
have a couple of questions. First, the limits seem to be only advisory. The first
example has the main host talking to a zone that has 172.16.17.100
configured on znic0. When there is no maxbw, the throughput is
as expected; when maxbw is 55M, the throughput only drops to 76 Mbps:
# netperf -H 172.16.17.100
TCP STREAM TEST from ::ffff:0.0.0.0 (0.0.0.0) port 0 AF_INET to ::ffff:172.16.17.100 (172.16.17.100) port 0 AF_INET
Recv   Send    Send
Socket Socket  Message  Elapsed
Size   Size    Size     Time     Throughpu...
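For context, the maxbw cap in Crossbow is normally set and inspected through dladm link properties. A sketch, assuming the zone's VNIC really is znic0 and that the 55M value takes an explicit unit as written in the post (syntax from memory, so verify against dladm(1M)):

# cap the VNIC at 55 Mbps:
dladm set-linkprop -p maxbw=55M znic0

# confirm what the kernel thinks the effective value is:
dladm show-linkprop -p maxbw znic0

A netperf TCP_STREAM run against 172.16.17.100 afterwards should then show whether the cap is enforced or merely advisory.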
2008 Jun 24
5
Reg: Throughput b/w domU & dom0
Hi all,
I used netperf to measure throughput between dom0 & domU.
The throughput from dom0 -> domU was 256.00 Mb/sec, and from
domU -> dom0 it was 401.15 Mb/sec.
The throughput between dom0 & domU seems to be very asymmetric. To my
surprise the throughput between domU ->...
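When checking for asymmetry like this, it helps to measure both directions from the same endpoint so the host doing the sending/receiving work stays fixed; netperf's TCP_MAERTS test is the reverse of TCP_STREAM (data flows from the netserver back to the netperf client). A sketch with a placeholder address:

$ netperf -H <dom0_ip> -t TCP_STREAM -l 30    # this host transmits
$ netperf -H <dom0_ip> -t TCP_MAERTS -l 30    # this host receives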
2016 Mar 16
3
[virtio-dev] virtio-vsock live migration
...guest<->guest
communication requires a virtio spec change. If packets contain
source/destination CIDs then allowing/forbidding guest<->host or
guest<->guest communication is purely a host policy decision. I think
it's worth keeping that in from the start.
> > NFS and netperf are the first two protocols I looked
> > at and both transmit address information across the connection...
>
>
> Does netperf really attempt to get local IP
> and then send that inline within the connection?
Yes, netperf has separate control and data sockets. I think part o...
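The two sockets are easy to see from the client side while a test is running; a sketch, with the netserver address as a placeholder:

$ ss -tn dst <netserver_ip>

One connection goes to the control port (12865 by default) and carries test setup and results, while a second, separate connection carries the benchmark data, which is why addressing information ends up being exchanged across the control connection.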
2016 Oct 24
2
NFS help
....
> I'm certain I'm missing something, but the fundamental architecture
> doesn't make sense to me given what I understand of the process flow.
>
> Were you able to run some basic network testing tools between the C6
> and C7 machines? I'm interested specifically in netperf, which does
> round trip packet testing, both TCP and UDP. I would look for packet
> drops with UDP, and/or major performance outliers with TCP, and/or any
> kind of timeouts with either protocol.
netperf is not installed.
> How is name resolution working on both machines? Do you ad...
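The round-trip tests suggested above are just netperf's TCP_RR and UDP_RR tests run against a netserver on the other machine; a minimal sketch with a placeholder host name:

$ netperf -H <c7_host> -t TCP_RR -l 30
$ netperf -H <c7_host> -t UDP_RR -l 30

TCP_RR exposes latency outliers and timeouts, while UDP_RR has no retransmission, so packet loss shows up directly as a stalled or sharply reduced transaction rate.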
2013 Jun 12
26
Interesting observation with network event notification and batching
Hi all
I'm hacking on netback, trying to identify whether TLB flushes cause a
heavy performance penalty on the Tx path. The hack is quite nasty (you would
not want to know, trust me).
Basically what it does is: 1) alter the network protocol to pass along
mfns instead of grant references; 2) when the backend sees a new mfn,
map it RO and cache it in its own address space.
With this
2008 Aug 28
4
Samba ignoring socket options?
...onsist of 2 6-disk raid5 sets with fast
disks on them, running LVM and XFS for a filesystem. I can do a dd of
a multi-gigabyte file to /dev/null and get roughly 500-600 MB/s
transfer rates through the filesystem, so I don't think the raid array
and file system are a bottleneck.
I have run netperf tests
between the server and the clients to see if I had some network
plumbing problems. With default socket settings for netperf (8182
buffer size), I get about 300 Mbps transfer rates between the clients
and the server (which matches approximately the 30 MB/s transfer
rates). With 65536 byte bu...
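For reference, the socket buffer and message sizes netperf uses are controlled by test-specific options, so a comparison like the one described can be made explicit rather than relying on defaults. A sketch with a placeholder server name (-s/-S set the local/remote socket buffer sizes and -m the send message size, in bytes):

$ netperf -H <server> -t TCP_STREAM -- -m 65536 -s 65536 -S 65536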
2014 Aug 17
2
[PATCH] vhost: Add polling mode
...any number of explanations besides polling
> improving the efficiency. For example, increasing system load might
> disable host power management.
>
Hi Michael,
I re-ran the tests, this time with the "turbo mode" and "C-states"
features off.
No Polling:
1 VM running netperf (msg size 64B): 1107 Mbits/sec
Polling:
1 VM running netperf (msg size 64B): 1572 Mbits/sec
As you can see from the new results, the numbers are lower,
but the relative improvement (polling on vs. off) is unchanged.
Thank you,
Razya
>
> > > --
> > > MST
> > >...
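For anyone repeating this kind of comparison, turning off turbo and deep C-states is host-specific; one common sketch on an Intel host with the intel_pstate driver and the cpupower tool installed (both assumptions) is:

# disable turbo boost (intel_pstate driver only):
echo 1 > /sys/devices/system/cpu/intel_pstate/no_turbo

# disable all idle states with wakeup latency greater than 0us (leaves only the polling state):
cpupower idle-set -D 0

This keeps clock frequency and wakeup latency roughly constant across the polling-on and polling-off runs, so throughput differences are easier to attribute to the patch itself.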