search for: tcp_maert

2011 Jun 19
2
RFT: virtio_net: limit xmit polling
OK, different people seem to test different trees. In the hope of getting everyone onto the same page, I created several variants of this patch so they can be compared. Whoever's interested, please check out the following and tell me how these compare: kernel: git://git.kernel.org/pub/scm/linux/kernel/git/mst/vhost.git virtio-net-limit-xmit-polling/base - this is the net-next baseline to test
2015 Oct 22
1
[PATCH net-next RFC 2/2] vhost_net: basic polling support
> ...Is there a measurable increase in CPU utilization with busyloop_timeout = 0?

And since a netperf TCP_RR test is involved, be careful about what netperf reports for CPU util if that increase isn't in the context of the guest OS.

For completeness, looking at the effect on TCP_STREAM and TCP_MAERTS, aggregate _RR and even aggregate _RR/packets per second for many VMs on the same system would be in order.

happy benchmarking,

rick jones
2011 Jun 09
0
No subject
...0.38  7,700.45  7,856.76

4K:
1  8,976.14  9,026.77  9,147.32  9,095.58
4  7,532.25  7,410.80  7,683.81  7,524.94

16K:
1  8,991.61  9,045.10  9,124.58  9,238.34
4  7,406.10  7,626.81  7,711.62  7,345.37

Here's the remote host-to-guest summary for 1 VM doing TCP_MAERTS with 256, 1K, 4K and 16K message sizes in Mbps:

256:
Instances  Base      V0        V1        V2
1          1,165.69  1,181.92  1,152.20  1,104.68
4          2,580.46  2,545.22  2,436.30  2,601.74

1K:
1          2,393.34  2,457.22  2,128.86  2,258.92
4          7,152.57  7,606.60  8,004.64  7,576.85
...
2015 Oct 22
0
[PATCH net-next RFC 2/2] vhost_net: basic polling support
> ...ease in CPU utilization with busyloop_timeout = 0?
>
> And since a netperf TCP_RR test is involved, be careful about what netperf reports for CPU util if that increase isn't in the context of the guest OS.
>
> For completeness, looking at the effect on TCP_STREAM and TCP_MAERTS, aggregate _RR and even aggregate _RR/packets per second for many VMs on the same system would be in order.
>
> happy benchmarking,
>
> rick jones

Absolutely, merging a new kernel API just for a specific benchmark doesn't make sense. I'm guessing this is just an ea...
2018 Nov 27
0
[PATCH v5 5/7] iommu: Add virtio-iommu driver
> ...aged too: better to prepare for kick, enable interrupts, then kick.

That was on my list of things to look at, because it could relax things for device drivers that don't call us with interrupts disabled. I just tried it and I can see some performance improvement (7% and 4% on tcp_stream and tcp_maerts respectively, +/-2.5%).

Since it's an optimization I'll leave it for later (ACPI and module support is higher on my list). The resulting change is complicated because we now need to deal with threads adding new requests while sync() is running. With my current prototype one thread could e...
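[A minimal sketch of the ordering discussed above, "prepare the kick, re-enable interrupts, then notify", using the generic Linux virtqueue API. The viommu_send_req_sketch name, the lock, and the surrounding structure are illustrative assumptions, not code from the patch; only the virtqueue_* calls are the real API.]

#include <linux/virtio.h>
#include <linux/spinlock.h>

/* Illustrative submission path: add buffers, decide whether a kick is
 * needed, re-enable callbacks, then notify outside the critical section. */
static void viommu_send_req_sketch(struct virtqueue *vq, spinlock_t *lock)
{
	unsigned long flags;
	bool do_notify;

	spin_lock_irqsave(lock, flags);

	/* ... virtqueue_add_sgs() calls for the request would go here ... */

	/* Check whether the device needs a notification, but don't kick yet. */
	do_notify = virtqueue_kick_prepare(vq);

	/* Re-enable used-buffer callbacks (the bool return, indicating a
	 * completion is already pending, is ignored in this sketch). */
	virtqueue_enable_cb(vq);

	spin_unlock_irqrestore(lock, flags);

	/* Kick the device last, after interrupts have been restored. */
	if (do_notify)
		virtqueue_notify(vq);
}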
2015 Oct 22
4
[PATCH net-next RFC 2/2] vhost_net: basic polling support
On Thu, Oct 22, 2015 at 01:27:29AM -0400, Jason Wang wrote:
> This patch tries to poll for newly added tx buffers for a while at the end of tx processing. The maximum time spent on polling is limited through a module parameter. To avoid blocking rx, the loop ends as soon as other work is queued on vhost, so in fact the socket receive queue is also polled.
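[To make the polling idea concrete, here is a rough sketch of a bounded tx busy-poll loop. The busyloop_timeout_us parameter and the tx_avail()/other_work_pending() helpers are illustrative placeholders, not the names used in the actual vhost_net patch.]

#include <linux/module.h>
#include <linux/types.h>
#include <linux/ktime.h>
#include <linux/timekeeping.h>
#include <asm/processor.h>

/* Illustrative module parameter bounding the busy-poll, in microseconds. */
static unsigned int busyloop_timeout_us = 50;
module_param(busyloop_timeout_us, uint, 0644);

/* Placeholder predicates: a real implementation would check the tx
 * virtqueue for new descriptors and the vhost work list respectively. */
static inline bool tx_avail(void)           { return false; }
static inline bool other_work_pending(void) { return false; }

/* After a tx run, keep polling for more tx buffers, but give up once the
 * timeout expires or other vhost work (e.g. rx) is queued. */
static bool busy_poll_for_tx(void)
{
	ktime_t end = ktime_add_us(ktime_get(), busyloop_timeout_us);

	while (!tx_avail()) {
		if (other_work_pending())
			return false;   /* don't starve rx and other work */
		if (ktime_after(ktime_get(), end))
			return false;   /* bounded by the module parameter */
		cpu_relax();
	}
	return true;                    /* new tx buffers showed up */
}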
2018 Nov 27
2
[PATCH v5 5/7] iommu: Add virtio-iommu driver
On Tue, Nov 27, 2018 at 05:55:20PM +0000, Jean-Philippe Brucker wrote:
> On 23/11/2018 21:56, Michael S. Tsirkin wrote:
> >> +config VIRTIO_IOMMU
> >> +	bool "Virtio IOMMU driver"
> >> +	depends on VIRTIO=y
> >> +	select IOMMU_API
> >> +	select INTERVAL_TREE
> >> +	select ARM_DMA_USE_IOMMU if ARM
> >> +	help
> >> ...
2018 Dec 12
2
[virtio-dev] Re: [PATCH v5 5/7] iommu: Add virtio-iommu driver
> ...igned to the guest kernel, which corresponds to case (2) above, with nesting page tables and without the lazy mode. The host's only job is forwarding invalidation to the HW SMMU.
>
> vhost-iommu performed on average 1.8x and 5.5x better than vSMMU on netperf TCP_STREAM and TCP_MAERTS respectively (~200 samples). I think this can be further optimized (that was still polling under the vq lock), and unlike vSMMU, virtio-iommu offers the possibility of multi-queue for improved scalability. In addition, the guest will need to send both TLB and ATC invalidations...
2018 Dec 07
0
[virtio-dev] Re: [PATCH v5 5/7] iommu: Add virtio-iommu driver
...derX2 with a 10Gb NIC assigned to the guest kernel, which corresponds to case (2) above, with nesting page tables and without the lazy mode. The host's only job is forwarding invalidation to the HW SMMU. vhost-iommu performed on average 1.8x and 5.5x better than vSMMU on netperf TCP_STREAM and TCP_MAERTS respectively (~200 samples). I think this can be further optimized (that was still polling under the vq lock), and unlike vSMMU, virtio-iommu offers the possibility of multi-queue for improved scalability. In addition, the guest will need to send both TLB and ATC invalidations with vSMMU, but virt...
2018 Nov 27
2
[PATCH v5 5/7] iommu: Add virtio-iommu driver
> ...prepare for kick, enable interrupts, then kick.
>
> That was on my list of things to look at, because it could relax things for device drivers that don't call us with interrupts disabled. I just tried it and I can see some performance improvement (7% and 4% on tcp_stream and tcp_maerts respectively, +/-2.5%).
>
> Since it's an optimization I'll leave it for later (ACPI and module support is higher on my list). The resulting change is complicated because we now need to deal with threads adding new requests while sync() is running. With my current pro...
2011 Nov 29
4
[RFC] virtio: use mandatory barriers for remote processor vdevs
Virtio uses memory barriers to control the ordering of references to the vrings on SMP systems. When the guest is compiled with SMP support, virtio uses only SMP barriers in order to avoid the overhead of mandatory barriers. Lately, though, virtio is increasingly being used in inter-processor communication scenarios too, which involve running two (separate)
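[The SMP-vs-mandatory distinction above boils down to a runtime choice of barrier primitive. The helper below is a simplified illustration of that idea; the weak_barriers flag and the example_virtio_wmb name are assumptions for this sketch, not a claim about the interface that was eventually merged.]

#include <linux/types.h>
#include <asm/barrier.h>

/* Pick the write barrier at runtime: SMP barriers are enough when the
 * other side of the vring is another CPU in the same SMP domain, but a
 * remote processor (or hardware device) needs a mandatory barrier. */
static inline void example_virtio_wmb(bool weak_barriers)
{
	if (weak_barriers)
		smp_wmb();      /* cheap: orders only against other CPUs */
	else
		wmb();          /* mandatory: also orders against agents
				 * outside the SMP domain */
}

A transport that knows it is talking to a remote processor would pass weak_barriers = false, while the common SMP guest case keeps the cheaper smp_wmb().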
2018 Dec 13
1
[virtio-dev] Re: [PATCH v5 5/7] iommu: Add virtio-iommu driver
>>> ...corresponds to case (2) above, with nesting page tables and without the lazy mode. The host's only job is forwarding invalidation to the HW SMMU.
>>>
>>> vhost-iommu performed on average 1.8x and 5.5x better than vSMMU on netperf TCP_STREAM and TCP_MAERTS respectively (~200 samples). I think this can be further optimized (that was still polling under the vq lock), and unlike vSMMU, virtio-iommu offers the possibility of multi-queue for improved scalability. In addition, the guest will need to send...
2018 Dec 12
0
[virtio-dev] Re: [PATCH v5 5/7] iommu: Add virtio-iommu driver
>> ...kernel, which corresponds to case (2) above, with nesting page tables and without the lazy mode. The host's only job is forwarding invalidation to the HW SMMU.
>>
>> vhost-iommu performed on average 1.8x and 5.5x better than vSMMU on netperf TCP_STREAM and TCP_MAERTS respectively (~200 samples). I think this can be further optimized (that was still polling under the vq lock), and unlike vSMMU, virtio-iommu offers the possibility of multi-queue for improved scalability. In addition, the guest will need to send both TLB and AT...
2018 Nov 23
3
[PATCH v5 5/7] iommu: Add virtio-iommu driver
On Thu, Nov 22, 2018 at 07:37:59PM +0000, Jean-Philippe Brucker wrote:
> The virtio IOMMU is a para-virtualized device that allows IOMMU requests such as map/unmap to be sent over virtio transport without emulating page tables. This implementation handles ATTACH, DETACH, MAP and UNMAP requests.
>
> The bulk of the code transforms calls coming from the IOMMU API into...
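[As a rough illustration of what transforming an IOMMU API call into a virtio request can look like: a simplified sketch of a MAP operation being packed into a request and queued. The struct layout, field names, opcode value, and example_* names are assumptions made for this sketch, not the driver's actual definitions; a real driver would also wait for the device to complete the request and check the returned status.]

#include <linux/kernel.h>
#include <linux/errno.h>
#include <linux/slab.h>
#include <linux/scatterlist.h>
#include <linux/virtio.h>
#include <asm/byteorder.h>

/* Illustrative MAP request: an IOVA range, the physical address it maps
 * to, and a status byte written back by the device (not the real UAPI). */
struct example_iommu_req_map {
	u8	type;		/* hypothetical MAP opcode */
	u8	reserved[3];
	__le32	domain;		/* domain the endpoint is attached to */
	__le64	virt_start;
	__le64	virt_end;
	__le64	phys_start;
	__le32	flags;		/* read/write permissions */
	u8	status;		/* filled in by the device */
};

/* Sketch of the IOMMU-API-to-virtio translation for a map() call. */
static int example_viommu_map(struct virtqueue *vq, u32 domain,
			      u64 iova, u64 paddr, u64 size, u32 flags)
{
	struct example_iommu_req_map *req;
	struct scatterlist req_sg, status_sg, *sgs[2];
	int ret;

	req = kzalloc(sizeof(*req), GFP_ATOMIC);
	if (!req)
		return -ENOMEM;

	req->type       = 3;	/* placeholder opcode for MAP */
	req->domain     = cpu_to_le32(domain);
	req->virt_start = cpu_to_le64(iova);
	req->virt_end   = cpu_to_le64(iova + size - 1);
	req->phys_start = cpu_to_le64(paddr);
	req->flags      = cpu_to_le32(flags);

	/* Device-readable request followed by a device-writable status. */
	sg_init_one(&req_sg, req, offsetof(struct example_iommu_req_map, status));
	sg_init_one(&status_sg, &req->status, sizeof(req->status));
	sgs[0] = &req_sg;
	sgs[1] = &status_sg;

	ret = virtqueue_add_sgs(vq, sgs, 1, 1, req, GFP_ATOMIC);
	if (!ret)
		virtqueue_kick(vq);

	return ret;
}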