Displaying 20 results from an estimated 23 matches for "800mbps".
2008 Jan 09
2
[PATCH] Increase the tx queue to 512 descriptors to fix performance problem.
Now that we have a host-timer-based tx wakeup, the host waits for 64
packets or a timeout before processing them.
This can cause the guest to run out of tx buffers while the host
holds them up.
This is a temporary solution to quickly bring performance back to 800Mbps.
A better fix will be sent soon (this is not the only problem).
Signed-off-by: Dor Laor <dor.laor@qumranet.com>
---
qemu/hw/virtio-net.c | 2 +-
1 files changed, 1 insertions(+), 1 deletions(-)
diff --git a/qemu/hw/virtio-net.c b/qemu/hw/virtio-net.c
index 777fe2c..3d07b65 100644
--- a/...
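For reference, the change described above amounts to enlarging the TX virtqueue when the device is created. A minimal sketch of that kind of one-line change, assuming the qemu virtio API of that era (the field and handler names are illustrative, not quoted from the actual patch):

    /* virtio-net device init: size the TX virtqueue at 512 descriptors so the
     * guest does not run out of tx buffers while the host-side timer batches
     * up to 64 packets before processing them */
    n->tx_vq = virtio_add_queue(&n->vdev, 512, virtio_net_handle_tx);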
2015 Dec 02
1
[PATCH] Receive multiple packets at a time
Dave Taht, on Wed 02 Dec 2015 14:41:35 +0100, wrote:
> I guess my meta point is driven by my headaches. Getting per packet
> processing to scale up past 100Mbit is hard without offloads even on
> embedded hardware considered "high end".
In my tests I was getting 800Mbps with common laptop Gb ethernet
devices.
> >> > At least for now we could commit the recvmmsg part?
> >>
> >> Not my call. It is a linux only thing, so far as I know.
> >
> > ATM yes. I wouldn't be surprised that other OSes adopt it, though.
>
>...
2007 Dec 21
1
[Virtio-for-kvm] [PATCH 0/7] userspace virtio
...chset updates kvm repository with Anthony's virtio
implementation along
with rx performance improvements and guest reset handling.
The original code was sent to qemu devel list 2 weeks ago.
It contains support for network & block devices.
Using the performance improvements I was able to do 800Mbps for rx/tx.
The performance improvements are not implemented in standard qemu code;
the intention is to first let the community have decent performance with
a merged virtio, and afterwards do the polishing (there is dma support
on the way for qemu).
Enjoy [with the courtesy of Anthony & Rus...
2015 Dec 10
3
[PATCH] Receive multiple packets at a time
10.12.2015 09:20, Michael Tokarev wrote:
> 10.12.2015 03:35, Samuel Thibault wrote:
> []
>
> I suggest reducing ifdeffery in handle_incoming_vpn_data(), especially
> the error checking code.
>
> The function isn't that large now, it might be much better to have
> two different implementations. Like this (untested, patch attached):
>
> void
2006 Jan 27
23
5,000 concurrent calls system rollout question
Hi,
We are currently considering different options for rolling out a large-scale IP PBX to handle around 3,000+ concurrent calls.
Can this be done with Asterisk? Has it been done before?
I would really like some input on this.
Thanks!
2016 Jun 10
1
icecast relay server performance testing
Hi Philipp
Thank you for chiming in.
The only reason I use the -kh fork is that it seemed to be more recent. I had this performance issue with the regular version and tried the fork.
I realize that the problem could be with the TCP stack parameters. I had no problem getting about 800Mbps between these machines when using iperf. Certainly, the workload is completely different, but at least I know that the TCP stack is somewhat operational.
Do you happen to know specifically what’s broken?
I don’t have access to a physical data center, which is why I would like to use either EC2 or Azur...
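For anyone reproducing that kind of baseline check, a typical iperf run looks like the sketch below (the address is a placeholder; the flags are standard iperf2 options):

    # on the receiving machine
    iperf -s
    # on the sending machine, run a 30-second TCP test toward the receiver
    iperf -c 203.0.113.10 -t 30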
2007 Dec 21
2
[Virtio-for-kvm] [PATCH 7/7] userspace virtio
...kets one at
a time and also to copy them into a temporary buffer.
This patch prevents qemu handlers from reading the tap and instead
selects the tap descriptors for the virtio devices.
This eliminates copies and also batches guest notifications (interrupts).
With this patch, rx performance reaches 800Mbps.
The patch does not follow qemu's api since the intention is first to have
better io in kvm and then to polish it properly.
Signed-off-by: Dor Laor <dor.laor@qumranet.com>
---
qemu/hw/pc.h | 2 +-
qemu/hw/virtio-net.c | 114
+++++++++++++++++++++++++++++++++++------------...
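The mechanism described above, sketched very roughly below, hands the tap file descriptor to the main select() loop on behalf of the virtio device and drains it in batches. qemu_set_fd_handler() and virtio_notify() follow qemu's API of that era, while guest_has_rx_buffers() and tap_read_packet_into_guest() are hypothetical helpers standing in for the virtqueue copy logic; this is not the actual patch:

    /* called by qemu's select() loop whenever the tap fd becomes readable */
    static void virtio_net_tap_readable(void *opaque)
    {
        VirtIONet *n = opaque;

        /* drain the tap: keep reading packets straight into guest rx buffers
         * as long as the guest has descriptors available */
        while (guest_has_rx_buffers(n) && tap_read_packet_into_guest(n) > 0)
            ;

        /* one interrupt for the whole batch instead of one per packet */
        virtio_notify(&n->vdev, n->rx_vq);
    }

    /* at device init: route tap readability to the virtio device directly,
     * bypassing the generic qemu net handlers (and their extra copy) */
    qemu_set_fd_handler(n->tap_fd, virtio_net_tap_readable, NULL, n);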
2006 May 19
1
New HPN Patch Released
...nts in bulk data
throughput performance are achieved.
In other words, transfers over the internet are a lot faster with this
patch. Increases in performance of more than an order of magnitude are
pretty common. If you make use of the now-included NONE cipher, I've hit
data rates of more than 800Mbps between Pittsburgh and Chicago and
650Mbps between Pittsburgh and San Diego. The NONE cipher is only used
during the bulk data transfer and *not* during authentication. This
version of the patch makes the NONE cipher a server-side configuration
option, so you don't have to enable it if you d...
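For context, enabling this is a two-sided configuration. The option names below are as I recall them from the HPN patch documentation, so treat this as an unverified sketch and check the patch's README:

    # server side (sshd_config): allow clients to request the NONE cipher
    NoneEnabled yes

    # client side: authenticate normally, then switch to NONE for bulk data
    ssh -oNoneEnabled=yes -oNoneSwitch=yes user@example.org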
2015 Dec 02
0
[PATCH] Receive multiple packets at a time
...thus compiled the same as before), and makes the code index into the
> arrays. You may want to use interdiff -w /dev/null patch to better see
> what changes the patch makes.
>
> With this patch, I saw the non-ciphered bandwidth achieved over direct
> ethernet improve from 680Mbps to 800Mbps (or conversely, reduce the CPU
> usage for the same bandwidth).
That's great! It would be good though to split
handle_incoming_vpn_data() into a function that does the actual
recvfrom/mmsg() and one that processes each individual packet, to reduce
the level of indentation and make the funct...
2015 Dec 10
0
[PATCH] Receive multiple packets at a time
Michael Tokarev, on Thu 10 Dec 2015 09:34:45 +0300, wrote:
> it is highly unlikely we'll have 256 packets in queue to process.
I did get a bit more than a hundred packets in the queue at 800Mbps. But
we can reduce to 64, yes. I wouldn't recommend using a static buffer,
since we'd want to go threaded at some point. Allocating an array on the
stack is very cheap anyway.
Samuel
2007 Dec 21
0
[kvm-devel] [Virtio-for-kvm] [PATCH 0/13] [Mostly resend] virtio additions
...s kvm repository with Rusty's/Anthony's virtio
implementation, along with the already-sent tx performance bug fix and
new module reload fixes.
The first 9 patches are Rusty's and Anthony's work and are pending in
Rusty's tree.
This code, together with the userspace patches, is able to do 800Mbps for
network rx/tx.
Enjoy [with the courtesy of Anthony & Rusty],
Dor
2015 Dec 02
5
[PATCH] Receive multiple packets at a time
...sg is not available, and
thus compiled the same as before), and makes the code index into the
arrays. You may want to use interdiff -w /dev/null patch to better see
what changes the patch makes.
With this patch, I saw the non-ciphered bandwidth achieved over direct
ethernet improve from 680Mbps to 800Mbps (or conversely, reduce the CPU
usage for the same bandwidth).
More is yet to come: I'll have a look at extending the tun/tap interface
to send/receive several packets at a time, and then also using sendmmsg
will again improve performance.
Samuel
--- configure.ac.original 2015-10-02 17:06:31....
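A minimal self-contained sketch of the technique (Linux-only recvmmsg(); BATCH, BUFSZ, and handle_packet() are illustrative rather than tinc's actual code, and the arrays live on the stack, as suggested elsewhere in the thread):

    #define _GNU_SOURCE
    #include <stdint.h>
    #include <string.h>
    #include <sys/uio.h>
    #include <sys/socket.h>

    #define BATCH 64      /* datagrams pulled per syscall */
    #define BUFSZ 1518    /* per-packet buffer, roughly one Ethernet frame */

    /* per-packet processing (decrypt, route, forward, ...) */
    static void handle_packet(const uint8_t *buf, size_t len,
                              const struct sockaddr_storage *from)
    {
        (void)buf; (void)len; (void)from;
    }

    /* pull up to BATCH datagrams with one recvmmsg() call, then hand each
     * one to the per-packet handler */
    static void receive_batch(int sock)
    {
        uint8_t bufs[BATCH][BUFSZ];
        struct iovec iov[BATCH];
        struct mmsghdr msgs[BATCH];
        struct sockaddr_storage addrs[BATCH];
        memset(msgs, 0, sizeof msgs);

        for (int i = 0; i < BATCH; i++) {
            iov[i].iov_base = bufs[i];
            iov[i].iov_len = BUFSZ;
            msgs[i].msg_hdr.msg_iov = &iov[i];
            msgs[i].msg_hdr.msg_iovlen = 1;
            msgs[i].msg_hdr.msg_name = &addrs[i];
            msgs[i].msg_hdr.msg_namelen = sizeof addrs[i];
        }

        int n = recvmmsg(sock, msgs, BATCH, MSG_DONTWAIT, NULL);
        for (int i = 0; i < n; i++)   /* n is -1 on error, so the loop is skipped */
            handle_packet(bufs[i], msgs[i].msg_len, &addrs[i]);
    }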
2015 Dec 10
2
[PATCH] Receive multiple packets at a time
On Thu, Dec 10, 2015 at 10:15:19AM +0100, Samuel Thibault wrote:
> I did get a bit more than a hundred packets in the queue at 800Mbps. But
> we can reduce to 64, yes. I wouldn't recommend using a static buffer,
> since we'd want to go threaded at some point. Allocating an array on the
> stack is very cheap anyway.
I assumed that one would only get more than 1 packet at a time under
heavy load, but apparently tr...
2015 Dec 02
2
[PATCH] Receive multiple packets at a time
Dave Taht, on Wed 02 Dec 2015 14:13:27 +0100, wrote:
> More recently Tom Herbert was working on udp encapsulation methods in
> the kernel "foo over udp"
>
> https://www.netdev01.org/docs/herbert-UDP-Encapsulation-Linux.pdf
>
> https://lwn.net/Articles/614348/
>
> which preserve things important at high rates like GRO/GSO.
Yes, FOU will probably get the highest
2020 Oct 27
1
Looking for a guide to collect all e-mail from the ISP mail server
2015 Dec 02
0
[PATCH] Receive multiple packets at a time
...thus compiled the same as before), and makes the code index into the
> arrays. You may want to use interdiff -w /dev/null patch to better see
> what changes the patch makes.
>
> With this patch, I saw the non-ciphered bandwidth achieved over direct
> ethernet improve from 680Mbps to 800Mbps (or conversely, reduce the CPU
> usage for the same bandwidth).
>
> More is yet to come: I'll have a look at extending the tun/tap interface
> to send/receive several packets at a time, and then also using sendmmsg
> will again improve performance.
>
> Samuel
>
> ---...