Displaying 20 results from an estimated 3000 matches similar to: "[PATCH] virtio-ring: Use threshold for switching to indirect descriptors"
2012 Aug 30
2
[PATCH v3 1/2] virtio-ring: Use threshold for switching to indirect descriptors
Currently if VIRTIO_RING_F_INDIRECT_DESC is enabled we will use indirect
descriptors even if we have plenty of space in the ring. This means that
we take a performance hit at all times due to the overhead of creating
indirect descriptors.
Instead, switch to indirect descriptors only once the number of free descriptors in the ring drops below a configurable threshold.
Signed-off-by: Sasha Levin <levinsasha928 at gmail.com>
---
drivers/block/virtio_blk.c
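As a rough illustration of the policy this patch describes, here is a minimal userspace sketch of the switch-over decision. The struct, the field names (e.g. indirect_threshold) and the helper are assumptions made for the example, not code taken from the patch.

#include <stdbool.h>
#include <stdio.h>

/* Minimal model of the threshold policy: only build an indirect
 * descriptor table when the request spans several buffers AND the
 * ring is running low on free descriptors. Names are illustrative. */
struct ring_state {
	unsigned int num_free;           /* free descriptors left in the ring */
	unsigned int indirect_threshold; /* configurable switch-over point */
	bool has_indirect;               /* VIRTIO_RING_F_INDIRECT_DESC negotiated */
};

static bool use_indirect(const struct ring_state *r, unsigned int total_sg)
{
	return r->has_indirect && total_sg > 1 &&
	       r->num_free < r->indirect_threshold;
}

int main(void)
{
	struct ring_state r = { .num_free = 200, .indirect_threshold = 16,
				.has_indirect = true };

	/* Roomy ring: spend direct descriptors, avoid the allocation cost. */
	printf("roomy ring   -> indirect? %d\n", use_indirect(&r, 4));
	/* Nearly full ring: fall back to a single indirect descriptor. */
	r.num_free = 8;
	printf("crowded ring -> indirect? %d\n", use_indirect(&r, 4));
	return 0;
}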
2012 Jun 18
2
[RFC 1/2] virtio-ring: Use threshold for switching to indirect descriptors
Currently if VIRTIO_RING_F_INDIRECT_DESC is enabled we will use indirect
descriptors even if we have plenty of space in the ring. This means that
we take a performance hit at all times due to the overhead of creating
indirect descriptors.
Instead, switch to indirect descriptors only once the number of free descriptors in the ring drops below a configurable threshold.
Signed-off-by: Sasha Levin <levinsasha928 at gmail.com>
---
drivers/block/virtio_blk.c
2012 Aug 28
3
[PATCH v2 1/2] virtio-ring: Use threshold for switching to indirect descriptors
Currently if VIRTIO_RING_F_INDIRECT_DESC is enabled we will use indirect
descriptors even if we have plenty of space in the ring. This means that
we take a performance hit at all times due to the overhead of creating
indirect descriptors.
Instead, switch to indirect descriptors only once the number of free descriptors in the ring drops below a configurable threshold.
Signed-off-by: Sasha Levin <levinsasha928 at gmail.com>
---
drivers/block/virtio_blk.c
2009 Sep 04
2
Xen & netperf
First, I apologize if this message has been received multiple times.
I'm having problems subscribing to this mailing list:
Hi xen-users,
I am trying to decide whether I should run a game server inside a Xen
domain. My primary reason for wanting to virtualize is because I want
to isolate this environment from the rest of my server. I really like
the idea of isolating the game server
2009 Oct 26
2
performance regression in virtio-net in 2.6.32-rc4
Hi!
I noticed a performance regression in virtio net: going from
2.6.31 to 2.6.32-rc4 I see this, for guest to host communication:
[mst at tuck ~]$ ssh robin sh streamtest1
TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 11.0.0.3 (11.0.0.3) port 0 AF_INET : demo
Recv   Send    Send
Socket Socket  Message  Elapsed
Size   Size    Size     Time     Throughput
bytes  bytes   bytes    secs.
2013 Nov 01
5
[PATCH net-next V3 1/2] net: introduce skb_coalesce_rx_frag()
Sometimes we need to coalesce the rx frags to avoid a frag list. One example is
the virtio-net driver, which tries to use small frags for both MTU-sized packets
and GSO packets. So this patch introduces skb_coalesce_rx_frag() to do this.
Cc: Rusty Russell <rusty at rustcorp.com.au>
Cc: Michael S. Tsirkin <mst at redhat.com>
Cc: Michael Dalton <mwdalton at google.com>
Cc: Eric Dumazet
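The coalescing idea is easiest to see in a small standalone model: when freshly received data lands directly after the last rx fragment, grow that fragment instead of appending a new array entry. The sketch below is a hedged userspace analogy, not the kernel API; the struct, the fragment-array cap, and the contiguity check (which a real driver would perform in the caller) are all assumptions for illustration.

#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

#define MAX_FRAGS 17	/* illustrative cap on the fragment array */

struct frag { const char *base; size_t off; size_t len; };

struct rx_frags {
	struct frag frags[MAX_FRAGS];
	int nr;
};

/* Add 'len' bytes at base+off. If the data is contiguous with the last
 * fragment, coalesce into it; otherwise append a new fragment. Returns
 * false when a new fragment would overflow the array (the case a real
 * driver would otherwise have to handle with a frag list). */
static bool add_rx_data(struct rx_frags *rx, const char *base,
			size_t off, size_t len)
{
	if (rx->nr > 0) {
		struct frag *last = &rx->frags[rx->nr - 1];

		if (last->base == base && last->off + last->len == off) {
			last->len += len;	/* coalesce, no new entry */
			return true;
		}
	}
	if (rx->nr == MAX_FRAGS)
		return false;
	rx->frags[rx->nr++] = (struct frag){ base, off, len };
	return true;
}

int main(void)
{
	static const char page[4096];
	struct rx_frags rx = { .nr = 0 };

	add_rx_data(&rx, page, 0, 1500);	/* first chunk */
	add_rx_data(&rx, page, 1500, 1500);	/* contiguous: coalesced */
	printf("fragments used: %d, first frag len: %zu\n",
	       rx.nr, rx.frags[0].len);		/* prints: 1, 3000 */
	return 0;
}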
2011 Oct 27
0
No subject
box.
I'll send an updated KVM tools patch in a bit as well.
Before:
# netperf -H 192.168.33.4,ipv4 -t TCP_RR
MIGRATED TCP REQUEST/RESPONSE TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 192.168.33.4 (192.168.33.4) port 0 AF_INET : first burst 0
Local /Remote
Socket Size   Request  Resp.   Elapsed  Trans.
Send   Recv   Size     Size    Time     Rate
bytes  Bytes  bytes    bytes
2013 Oct 31
4
[PATCH net-next V2 1/2] net: introduce skb_coalesce_rx_frag()
Sometimes we need to coalesce the rx frags to avoid a frag list. One example is
the virtio-net driver, which tries to use small frags for both MTU-sized packets
and GSO packets. So this patch introduces skb_coalesce_rx_frag() to do this.
Cc: Rusty Russell <rusty at rustcorp.com.au>
Cc: Michael S. Tsirkin <mst at redhat.com>
Cc: Michael Dalton <mwdalton at google.com>
Cc: Eric Dumazet
2013 Oct 31
6
[PATCH net-next 1/2] net: introduce skb_coalesce_rx_frag()
Sometimes we need to coalesce the rx frags to avoid a frag list. One example is
the virtio-net driver, which tries to use small frags for both MTU-sized packets
and GSO packets. So this patch introduces skb_coalesce_rx_frag() to do this.
Cc: Rusty Russell <rusty at rustcorp.com.au>
Cc: Michael S. Tsirkin <mst at redhat.com>
Cc: Michael Dalton <mwdalton at google.com>
Cc: Eric Dumazet
2009 Jun 10
5
trouble with maxbw
Folks,
I'm playing with maxbw on links (as opposed to flows) in Crossbow, and I
have a couple of questions. First, the limits seem only advisory. The first
example has the main host talking to a zone that has 172.16.17.100
configured on znic0. When there is no maxbw, the throughput is
as expected; when maxbw is 55M the throughput only drops to 76 Mbps:
# netperf -H
2012 Jan 03
7
Low performance
Hi!
I do an rsync between 2 machines. The throughput is only 2 MByte/Sec.
Each machine is a Supermicro server with
2 x 8 Core Opteron 6128
64 GByte of ECC RAM
1 LSI MegaRAID SAS 9280-24i4e
24 x 2TByte SATA Disks as a RAID6
2 Intel Corporation 82599EB 10-Gigabit SFI/SFP+ Network-cards
Both run Ubuntu 11.04 64Bit.
Both use rsync version 3.0.7 protocol version 30
There are no
2019 Sep 06
0
[PATCH] virtio_ring: fix unmap of indirect descriptors
On Fri, Sep 06, 2019 at 02:06:59PM +0200, Matthias Lange wrote:
> The function virtqueue_add_split() DMA-maps the scatterlist buffers. In
> case a mapping error occurs the already mapped buffers must be unmapped.
> This happens by jumping to the 'unmap_release' label.
>
> In case of indirect descriptors the release is wrong and may leak kernel
> memory. Because the
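The unwind pattern under discussion is easier to follow in isolation: map buffers one by one and, on failure, release only the mappings that actually succeeded, in reverse order. The following is a small hedged sketch of that idea in plain C; map_buf()/unmap_buf() are stand-ins for the real DMA mapping helpers, and the failure on buffer 2 is staged for the demo.

#include <stdbool.h>
#include <stdio.h>

/* Stand-ins for dma_map_*()-style helpers; buffer 2 fails on purpose. */
static bool map_buf(int i)
{
	if (i == 2)
		return false;
	printf("mapped   buffer %d\n", i);
	return true;
}

static void unmap_buf(int i)
{
	printf("unmapped buffer %d\n", i);
}

/* Map n buffers; on error, undo exactly the mappings made so far. */
static int map_all(int n)
{
	int i;

	for (i = 0; i < n; i++) {
		if (!map_buf(i))
			goto unmap_release;
	}
	return 0;

unmap_release:
	/* Walk back over buffers 0..i-1 only; releasing the wrong set of
	 * descriptors is the kind of error the patch above fixes for the
	 * indirect case. */
	while (--i >= 0)
		unmap_buf(i);
	return -1;
}

int main(void)
{
	return map_all(4) ? 1 : 0;
}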