Displaying 20 results from an estimated 439 matches for "netperfs".
2010 Aug 18 (0 replies)
Re: [netperf-talk] How do I configure my firewall to run netperf? I use shorewall (an iptables firewall) on Debian
On 16/08/2010 19:20, Rick Jones wrote:
> Klein Stéphane wrote:
>> Hi,
>>
>> I've two computers:
>> * A: a server with a firewall
>> * B: a computer on the internet
>>
>> I've installed netserver on host A.
>> I use netperf on host B.
>>
>> On host B, I launch:
>>
>> $ netperf -H
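A minimal sketch of one way to get netperf through a restrictive firewall, assuming netperf's default control port of 12865; the data port 12866, the TCP_STREAM test, and the name host-A are illustrative choices, and on this poster's setup the raw iptables rules would be written as shorewall rules instead:

# on host A (as root): open the control and data ports, start netserver
$ iptables -A INPUT -p tcp --dport 12865 -j ACCEPT   # control connection
$ iptables -A INPUT -p tcp --dport 12866 -j ACCEPT   # data connection
$ netserver -p 12865

# on host B: use the fixed control port and pin the data connection to a
# known port pair with the test-specific -P option
$ netperf -H host-A -p 12865 -t TCP_STREAM -- -P 12866,12866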
2006 Oct 17 (3 replies)
Much difference between netperf results on every run
Hi all,
the throughput measured by netperf varies widely from run to run. The
changeset was xen-unstable.hg C/S 11760. I observed this when running
netperf in a DomU against a netserver in Dom0 on the same box. The
observed throughput ranged from 185 Mbps to 3854 Mbps. I have never
seen such variation on ia64.
Regards,
Hiroya
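A hedged sketch of how to quantify this kind of run-to-run variance with netperf itself (the Dom0 address and the 30-second run length are placeholders): the global -I option requests a confidence interval, and -i bounds how many iterations netperf may run to reach it.

# -I 99,5 asks for a 99% confidence interval no wider than +/- 2.5%;
# -i 10,3 lets netperf run between 3 and 10 iterations to get there
$ netperf -H dom0-ip -t TCP_STREAM -l 30 -I 99,5 -i 10,3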
2011 Oct 27 (0 replies)
No subject
box.
I'll send an updated KVM tools patch in a bit as well.
Before:
# netperf -H 192.168.33.4,ipv4 -t TCP_RR
MIGRATED TCP REQUEST/RESPONSE TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET
to 192.168.33.4 (192.168.33.4) port 0 AF_INET : first burst 0
Local /Remote
Socket Size   Request  Resp.   Elapsed  Trans.
Send   Recv   Size     Size    Time     Rate
bytes  Bytes  bytes    bytes   secs.    per sec
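For reference: in a TCP_RR test the Trans. Rate column is transactions per second, and with "first burst 0" only one transaction is in flight at a time, so mean round-trip latency is roughly 1/rate (for example, 20,000 trans/sec corresponds to about 50 us per transaction).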
2005 Jul 11 (9 replies)
HTB Rate and Prio (continued)
Hi again,
I keep posting about my problem with HTB ->
http://mailman.ds9a.nl/pipermail/lartc/2005q3/016611.html
With a bit of searching I recently found the exact same problem in the
2004 archives, with some graphs that explain it far better than I did ->
http://mailman.ds9a.nl/pipermail/lartc/2004q4/014519.html
and
http://mailman.ds9a.nl/pipermail/lartc/2004q4/014568.html
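A hedged sketch of the kind of HTB setup these threads discuss (device, rates, and class IDs are illustrative): rate is the guaranteed bandwidth, ceil is how far a class may borrow, and prio decides which class is offered spare bandwidth first, which is exactly where the surprising interactions tend to show up.

# root HTB qdisc; unclassified traffic falls into class 1:20
$ tc qdisc add dev eth0 root handle 1: htb default 20
$ tc class add dev eth0 parent 1:  classid 1:1  htb rate 100mbit
# interactive class: guaranteed 30mbit, may borrow up to the full link
$ tc class add dev eth0 parent 1:1 classid 1:10 htb rate 30mbit ceil 100mbit prio 0
# bulk class: guaranteed 70mbit, borrows only after prio 0 is satisfied
$ tc class add dev eth0 parent 1:1 classid 1:20 htb rate 70mbit ceil 100mbit prio 1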
2018 Jun 30 (1 reply)
[PATCH net-next v3 4/4] net: vhost: add rx busy polling in tx path
On Fri, 29 Jun 2018 23:33:58 -0700
xiangxia.m.yue at gmail.com wrote:
> From: Tonghao Zhang <xiangxia.m.yue at gmail.com>
>
> This patch improves the guest receive and transmit performance.
> On the handle_tx side, we poll the sock receive queue at the
> same time; handle_rx does the same.
>
> We set poll-us=100us and use iperf3 to test
Where/how do
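A minimal sketch of the kind of iperf3 measurement the patch description refers to (the guest address, run length, and stream count are illustrative):

# server inside the guest
guest$ iperf3 -s
# client on the host: 30-second run, 4 parallel streams
host$  iperf3 -c 192.168.122.10 -t 30 -P 4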
2009 Sep 04 (2 replies)
Xen & netperf
First, I apologize if this message has been received multiple times;
I'm having problems subscribing to this mailing list:
Hi xen-users,
I am trying to decide whether I should run a game server inside a Xen
domain. My primary reason for wanting to virtualize is because I want
to isolate this environment from the rest of my server. I really like
the idea of isolating the game server
2014 Aug 21 (2 replies)
[PATCH] vhost: Add polling mode
"Michael S. Tsirkin" <mst at redhat.com> wrote on 20/08/2014 01:57:10 PM:
> > Results:
> >
> > Netperf, 1 vm:
> > The polling patch improved throughput by ~33% (1516 MB/sec -> 2046 MB/sec).
> > Number of exits/sec decreased 6x.
> > The same improvement was shown when I tested with 3 vms running netperf
> > (4086 MB/sec -> 5545
2016 Apr 06 (1 reply)
[virtio-dev] virtio-vsock live migration
On Wed, Mar 16, 2016 at 05:05:19PM +0200, Michael S. Tsirkin wrote:
> > > > NFS and netperf are the first two protocols I looked
> > > > at and both transmit address information across the connection...
> > >
> > >
> > > Does netperf really attempt to get local IP
> > > and then send that inline within the connection?
> >
2008 Sep 10 (0 replies)
netperf strange issue
Hi,
I''m not able to get netperf working for a virtual PCI(vif) domU.
However, I''m able to get it working while doing pass-through I/O. I''m
not sure what is the problem. However, I am able to ping the
destination machine using the vif domU. Is there any port etc. that
needs to be unblocked here.
Regards,
Asim
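A hedged first check, assuming netperf's default control port (the domU address is a placeholder): ping working while netperf fails often means the control port, or the separately negotiated data port, is blocked.

# is the netperf control port (12865 by default) reachable from the client?
$ nc -vz domU-ip 12865
# the data connection uses a second, dynamically chosen port; pin it to a
# known value with the test-specific -P option if only fixed ports are open
$ netperf -H domU-ip -t TCP_STREAM -- -P 12866,12866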
2009 Jun 10 (5 replies)
trouble with maxbw
Folks,
I'm playing with maxbw on links (as opposed to flows) in Crossbow, and I
have a couple of questions. First, the limits seem to be only advisory. In
the first example, the main host talks to a zone that has 172.16.17.100
configured on znic0. When there is no maxbw, the throughput is as
expected; when maxbw is 55M the throughput only drops to 76 Mbps:
# netperf -H
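A sketch of the Crossbow commands in question, run as root (znic0 and the 55M value are from the post; the exact sequence is a plausible reconstruction, not the poster's session):

# cap the link at 55 Mbps, then confirm the property took effect
$ dladm set-linkprop -p maxbw=55M znic0
$ dladm show-linkprop -p maxbw znic0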
2008 Jun 24 (5 replies)
Reg: Throughput b/w domU & dom0
Hi all,
I used netperf to measure throughput between dom0 & domU.
The throughput from dom0 -> domU was 256.00 Mb/sec;
from domU -> dom0 it was 401.15 Mb/sec.
The throughput between dom0 & domU seems to be very asymmetric. To my
surprise, domU -> dom0 is the faster direction. The values I measured
are consistent across runs. Is
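A minimal sketch of measuring the two directions separately so the asymmetry can be pinned down (addresses and run length are placeholders):

# run netserver on both sides, then test each direction
domU$ netperf -H dom0-ip -t TCP_STREAM -l 30   # domU -> dom0
dom0$ netperf -H domU-ip -t TCP_STREAM -l 30   # dom0 -> domU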
2016 Mar 16 (3 replies)
[virtio-dev] virtio-vsock live migration
On Tue, Mar 15, 2016 at 06:12:55PM +0200, Michael S. Tsirkin wrote:
> On Tue, Mar 15, 2016 at 03:15:29PM +0000, Stefan Hajnoczi wrote:
> > On Mon, Mar 14, 2016 at 01:13:24PM +0200, Michael S. Tsirkin wrote:
> > > On Thu, Mar 03, 2016 at 03:37:37PM +0000, Stefan Hajnoczi wrote:
> > > > Michael pointed out that the virtio-vsock draft specification does not
> >
2016 Oct 24 (2 replies)
NFS help
On Mon, Oct 24, 2016 at 5:25 PM, Matt Garman <matthew.garman at gmail.com> wrote:
> On Mon, Oct 24, 2016 at 2:42 PM, Larry Martell <larry.martell at gmail.com> wrote:
>>> At any rate, what I was looking at was seeing if there was any way to
>>> simplify this process, and cut NFS out of the picture. If you need
>>> only to push these files around, what
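A hedged sketch of one way to cut NFS out of a push-only workflow (paths, host, and user are placeholders): rsync over ssh moves the files without any shared filesystem.

# push new and changed files one way; --partial keeps interrupted transfers
$ rsync -av --partial /data/outgoing/ user@collector:/data/incoming/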
2013 Jun 12 (26 replies)
Interesting observation with network event notification and batching
Hi all
I'm hacking on netback, trying to identify whether TLB flushes cause a
heavy performance penalty on the Tx path. The hack is quite nasty (you
would not want to know, trust me).
Basically what it does is: 1) alter the network protocol to pass along
mfns instead of grant references; 2) when the backend sees a new mfn,
map it RO and cache it in its own address space.
With this
2008 Aug 28 (4 replies)
Samba ignoring socket options?
Hi everyone. I am running Samba 3.2.0-22.1 (as packaged by openSUSE
11.0) on a storage server connected to multiple Windows clients over a
gigabit ethernet link. The server has a quad-core Intel CPU and an
Intel e1000-based gigabit ethernet controller, and is plugged into a
common gigabit ethernet switch with the Windows clients.
I am seeing performance issues on transfers
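A hedged sketch of the setting in question (the buffer values are illustrative, and on modern kernels leaving them unset is often faster): socket options lives in the [global] section of smb.conf, and testparm shows what smbd actually parsed, which is the quickest way to tell whether the options are really being ignored.

# in smb.conf
[global]
    socket options = TCP_NODELAY SO_RCVBUF=131072 SO_SNDBUF=131072

# verify what smbd actually parsed
$ testparm -sv 2>/dev/null | grep "socket options"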
2014 Aug 17 (2 replies)
[PATCH] vhost: Add polling mode
> >
> > Hi Michael,
> >
> > Sorry for the delay, I had some problems with my mailbox, and I realized
> > just now that my reply wasn't sent.
> > The vm indeed ALWAYS utilized 100% cpu, whether polling was enabled or
> > not.
> > The vhost thread utilized less than 100% (of the other cpu) when polling
> > was disabled.
>
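A hedged sketch of one way to measure this split (the process name pattern is illustrative): the vcpu threads belong to the qemu process, while the vhost worker is a separate kernel thread named vhost-<qemu pid>, so the two are sampled separately.

# per-thread CPU usage of qemu (includes the vcpu threads), every second
$ pidstat -t -p $(pgrep -of qemu) 1
# the vhost kernel worker thread(s), matched by name
$ pidstat -p $(pgrep -d, vhost) 1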