Ohad Ben-Cohen
2011-Nov-29 12:31 UTC
[RFC] virtio: use mandatory barriers for remote processor vdevs
Virtio is using memory barriers to control the ordering of references to the vrings on SMP systems. When the guest is compiled with SMP support, virtio uses only SMP barriers, in order to avoid incurring the overhead involved with mandatory barriers.

Lately, though, virtio is increasingly being used in inter-processor communication scenarios too, which involve running two (separate) instances of operating systems on two (separate) processors, each of which might be either UP or SMP.

To control the ordering of memory references when the vrings are shared between two external processors, we must always use mandatory barriers.

A trivial, albeit sub-optimal, solution would be to simply revert commit d57ed95 "virtio: use smp_XX barriers on SMP". Obviously, though, that would have a negative impact on the performance of SMP-based virtualization use cases.

A different approach, as demonstrated by this patch, picks the type of memory barrier at run time, according to the requirements of the virtio device. This way, both SMP virtualization scenarios and inter-processor communication use cases run correctly, without making any performance compromises (except for those incurred by an additional branch or level of indirection).

This patch introduces VIRTIO_RING_F_REMOTEPROC, a new virtio transport feature, which should be used by virtio devices that run on remote processors. The CONFIG_SMP variant of virtio_{mb, rmb, wmb} is then changed to use SMP barriers only if VIRTIO_RING_F_REMOTEPROC is absent.

Signed-off-by: Ohad Ben-Cohen <ohad at wizery.com>
---
Alternatively, we could also introduce some kind of virtio_mb_ops, set according to the nature of the vdev, with handlers that just do the right thing, instead of introducing that branch.

Though I also wonder how big the performance gain of d57ed95 really is?
 drivers/virtio/virtio_ring.c |   78 +++++++++++++++++++++++++++++-------------
 include/linux/virtio_ring.h  |    6 +++
 2 files changed, 60 insertions(+), 24 deletions(-)

diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
index c7a2c20..cf66a2d 100644
--- a/drivers/virtio/virtio_ring.c
+++ b/drivers/virtio/virtio_ring.c
@@ -23,24 +23,6 @@
 #include <linux/slab.h>
 #include <linux/module.h>
 
-/* virtio guest is communicating with a virtual "device" that actually runs on
- * a host processor. Memory barriers are used to control SMP effects. */
-#ifdef CONFIG_SMP
-/* Where possible, use SMP barriers which are more lightweight than mandatory
- * barriers, because mandatory barriers control MMIO effects on accesses
- * through relaxed memory I/O windows (which virtio does not use). */
-#define virtio_mb() smp_mb()
-#define virtio_rmb() smp_rmb()
-#define virtio_wmb() smp_wmb()
-#else
-/* We must force memory ordering even if guest is UP since host could be
- * running on another CPU, but SMP barriers are defined to barrier() in that
- * configuration. So fall back to mandatory barriers instead. */
-#define virtio_mb() mb()
-#define virtio_rmb() rmb()
-#define virtio_wmb() wmb()
-#endif
-
 #ifdef DEBUG
 /* For development, we want to crash whenever the ring is screwed. */
 #define BAD_RING(_vq, fmt, args...) \
@@ -86,6 +68,9 @@ struct vring_virtqueue
 	/* Host publishes avail event idx */
 	bool event;
 
+	/* Host runs on a remote processor */
+	bool rproc;
+
 	/* Number of free buffers */
 	unsigned int num_free;
 	/* Head of free buffer list. */
@@ -108,6 +93,48 @@ struct vring_virtqueue
 	void *data[];
 };
 
+/*
+ * virtio guest is communicating with a virtual "device" that may either run
+ * on the host processor, or on an external processor. The former requires
+ * memory barriers in order to control SMP effects, but the latter must
+ * use mandatory barriers.
+ */
+#ifdef CONFIG_SMP
+/* Where possible, use SMP barriers which are more lightweight than mandatory
+ * barriers, because mandatory barriers control MMIO effects on accesses
+ * through relaxed memory I/O windows. */
+static inline void virtio_mb(struct vring_virtqueue *vq)
+{
+	if (vq->rproc)
+		mb();
+	else
+		smp_mb();
+}
+
+static inline void virtio_rmb(struct vring_virtqueue *vq)
+{
+	if (vq->rproc)
+		rmb();
+	else
+		smp_rmb();
+}
+
+static inline void virtio_wmb(struct vring_virtqueue *vq)
+{
+	if (vq->rproc)
+		wmb();
+	else
+		smp_wmb();
+}
+#else
+/* We must force memory ordering even if guest is UP since host could be
+ * running on another CPU, but SMP barriers are defined to barrier() in that
+ * configuration. So fall back to mandatory barriers instead. */
+static inline void virtio_mb(struct vring_virtqueue *vq) { mb(); }
+static inline void virtio_rmb(struct vring_virtqueue *vq) { rmb(); }
+static inline void virtio_wmb(struct vring_virtqueue *vq) { wmb(); }
+#endif
+
 #define to_vvq(_vq) container_of(_vq, struct vring_virtqueue, vq)
 
 /* Set up an indirect table of descriptors and add it to the queue. */
@@ -245,14 +272,14 @@ void virtqueue_kick(struct virtqueue *_vq)
 	START_USE(vq);
 	/* Descriptors and available array need to be set before we expose the
 	 * new available array entries. */
-	virtio_wmb();
+	virtio_wmb(vq);
 
 	old = vq->vring.avail->idx;
 	new = vq->vring.avail->idx = old + vq->num_added;
 	vq->num_added = 0;
 
 	/* Need to update avail index before checking if we should notify */
-	virtio_mb();
+	virtio_mb(vq);
 
 	if (vq->event ?
 	    vring_need_event(vring_avail_event(&vq->vring), new, old) :
@@ -314,7 +341,7 @@ void *virtqueue_get_buf(struct virtqueue *_vq, unsigned int *len)
 	}
 
 	/* Only get used array entries after they have been exposed by host. */
-	virtio_rmb();
+	virtio_rmb(vq);
 
 	i = vq->vring.used->ring[vq->last_used_idx%vq->vring.num].id;
 	*len = vq->vring.used->ring[vq->last_used_idx%vq->vring.num].len;
@@ -337,7 +364,7 @@ void *virtqueue_get_buf(struct virtqueue *_vq, unsigned int *len)
 	 * the read in the next get_buf call. */
 	if (!(vq->vring.avail->flags & VRING_AVAIL_F_NO_INTERRUPT)) {
 		vring_used_event(&vq->vring) = vq->last_used_idx;
-		virtio_mb();
+		virtio_mb(vq);
 	}
 
 	END_USE(vq);
@@ -366,7 +393,7 @@ bool virtqueue_enable_cb(struct virtqueue *_vq)
 	 * entry. Always do both to keep code simple. */
 	vq->vring.avail->flags &= ~VRING_AVAIL_F_NO_INTERRUPT;
 	vring_used_event(&vq->vring) = vq->last_used_idx;
-	virtio_mb();
+	virtio_mb(vq);
 	if (unlikely(more_used(vq))) {
 		END_USE(vq);
 		return false;
@@ -393,7 +420,7 @@ bool virtqueue_enable_cb_delayed(struct virtqueue *_vq)
 	/* TODO: tune this threshold */
 	bufs = (u16)(vq->vring.avail->idx - vq->last_used_idx) * 3 / 4;
 	vring_used_event(&vq->vring) = vq->last_used_idx + bufs;
-	virtio_mb();
+	virtio_mb(vq);
 	if (unlikely((u16)(vq->vring.used->idx - vq->last_used_idx) > bufs)) {
 		END_USE(vq);
 		return false;
@@ -486,6 +513,7 @@ struct virtqueue *vring_new_virtqueue(unsigned int num,
 
 	vq->indirect = virtio_has_feature(vdev, VIRTIO_RING_F_INDIRECT_DESC);
 	vq->event = virtio_has_feature(vdev, VIRTIO_RING_F_EVENT_IDX);
+	vq->rproc = virtio_has_feature(vdev, VIRTIO_RING_F_REMOTEPROC);
 
 	/* No callback? Tell other side not to bother us. */
 	if (!callback)
@@ -522,6 +550,8 @@ void vring_transport_features(struct virtio_device *vdev)
 			break;
 		case VIRTIO_RING_F_EVENT_IDX:
 			break;
+		case VIRTIO_RING_F_REMOTEPROC:
+			break;
 		default:
 			/* We don't understand this bit. */
 			clear_bit(i, vdev->features);
diff --git a/include/linux/virtio_ring.h b/include/linux/virtio_ring.h
index 36be0f6..9839593 100644
--- a/include/linux/virtio_ring.h
+++ b/include/linux/virtio_ring.h
@@ -58,6 +58,12 @@
  * at the end of the used ring. Guest should ignore the used->flags field. */
 #define VIRTIO_RING_F_EVENT_IDX	29
 
+/*
+ * The device we're talking to resides on a remote processor, so we must always
+ * use mandatory memory barriers.
+ */
+#define VIRTIO_RING_F_REMOTEPROC	30
+
 /* Virtio ring descriptors: 16 bytes. These can chain together via "next". */
 struct vring_desc {
 	/* Address (guest-physical). */
-- 
1.7.5.4
Michael S. Tsirkin
2011-Nov-29 13:11 UTC
[RFC] virtio: use mandatory barriers for remote processor vdevs
On Tue, Nov 29, 2011 at 02:31:26PM +0200, Ohad Ben-Cohen wrote:
> Virtio is using memory barriers to control the ordering of
> references to the vrings on SMP systems. When the guest is compiled
> with SMP support, virtio is only using SMP barriers in order to
> avoid incurring the overhead involved with mandatory barriers.
>
> Lately, though, virtio is being increasingly used with inter-processor
> communication scenarios too, which involve running two (separate)
> instances of operating systems on two (separate) processors, each of
> which might either be UP or SMP.

Is that using virtio-mmio? If yes, would the extra serialization belong at that layer?

> To control the ordering of memory references when the vrings are shared
> between two external processors, we must always use mandatory barriers.

Sorry, could you pls explain what are 'two external processors'? I think I know that if two x86 CPUs in an SMP system run kernels built in an SMP configuration, smp_*mb barriers are enough.

Documentation/memory-barriers.txt says:

	Mandatory barriers should not be used to control SMP effects ...
	They may, however, be used to control MMIO effects on accesses
	through relaxed memory I/O windows.

We don't control MMIO/relaxed memory I/O windows here, so what exactly is the issue? Could you please give an example of a setup that is currently broken?

> A trivial, albeit sub-optimal, solution would be to simply revert
> commit d57ed95 "virtio: use smp_XX barriers on SMP". Obviously, though,
> that's going to have a negative impact on performance of SMP-based
> virtualization use cases.
>
> A different approach, as demonstrated by this patch, would pick the type
> of memory barriers, in run time, according to the requirements of the
> virtio device. This way, both SMP virtualization scenarios and inter-
> processor communication use cases would run correctly, without making
> any performance compromises (except for those incurred by an additional
> branch or level of indirection).

Is an extra branch faster or slower than reverting d57ed95?

> This patch introduces VIRTIO_RING_F_REMOTEPROC, a new virtio transport
> feature, which should be used by virtio devices that run on remote
> processors. The CONFIG_SMP variant of virtio_{mb, rmb, wmb} is then changed
> to use SMP barriers only if VIRTIO_RING_F_REMOTEPROC was absent.

One wonders how the remote side knows enough to set this flag?

> Signed-off-by: Ohad Ben-Cohen <ohad at wizery.com>
> ---
> Alternatively, we can also introduce some kind of virtio_mb_ops and set it
> according to the nature of the vdev with handlers that just do the right
> thing, instead of introducing that branch.
>
> Though I also wonder how big really is the performance gain of d57ed95 ?

Want to check and tell us?
Benjamin Herrenschmidt
2011-Dec-02 23:09 UTC
[RFC] virtio: use mandatory barriers for remote processor vdevs
On Tue, 2011-11-29 at 14:31 +0200, Ohad Ben-Cohen wrote:
> Virtio is using memory barriers to control the ordering of
> references to the vrings on SMP systems. When the guest is compiled
> with SMP support, virtio is only using SMP barriers in order to
> avoid incurring the overhead involved with mandatory barriers.
>
> Lately, though, virtio is being increasingly used with inter-processor
> communication scenarios too, which involve running two (separate)
> instances of operating systems on two (separate) processors, each of
> which might either be UP or SMP.
>
> To control the ordering of memory references when the vrings are shared
> between two external processors, we must always use mandatory barriers.
>
> A trivial, albeit sub-optimal, solution would be to simply revert
> commit d57ed95 "virtio: use smp_XX barriers on SMP". Obviously, though,
> that's going to have a negative impact on performance of SMP-based
> virtualization use cases.

Have you measured the impact of using normal barriers (non-SMP ones) like we use on normal HW drivers unconditionally?

IE. If the difference is small enough I'd say just go for it and avoid the bloat.

Ben.
Amos Kong
2011-Dec-12 03:06 UTC
[RFC] virtio: use mandatory barriers for remote processor vdevs
On 12/12/11 06:27, Benjamin Herrenschmidt wrote:
> On Sun, 2011-12-11 at 14:25 +0200, Michael S. Tsirkin wrote:
>
>> Forwarding some results by Amos, who ran multiple netperf streams in
>> parallel, from an external box to the guest. TCP_STREAM results were
>> noisy. This could be due to buffering done by TCP, where packet size
>> varies even as message size is constant.
>>
>> TCP_RR results were consistent. In this benchmark, after switching
>> to mandatory barriers, CPU utilization increased by up to 35% while
>> throughput went down by up to 14%. The normalized throughput/cpu
>> regressed consistently, between 7 and 35%.
>>
>> The "fix" applied was simply this:
>
> What machine & processor was this?

Pinned guest memory to numa node 1:

# numactl -m 1 qemu-kvm ...

Pinned guest vcpu threads and the vhost thread to single cpus of numa node 1:

# taskset -p 0x10 8348 (vhost_net thread)
# taskset -p 0x20 8353 (vcpu 1 thread)
# taskset -p 0x40 8357 (vcpu 2 thread)

Pinned cpu/memory of the netperf client process to node 1:

# numactl --cpunodebind=1 --membind=1 netperf ...

8 cores
-------
processor	: 7
vendor_id	: GenuineIntel
cpu family	: 6
model		: 44
model name	: Intel(R) Xeon(R) CPU E5620 @ 2.40GHz
stepping	: 2
microcode	: 0xc
cpu MHz		: 1596.000
cache size	: 12288 KB
physical id	: 1
siblings	: 4
core id		: 10
cpu cores	: 4
apicid		: 52
initial apicid	: 52
fpu		: yes
fpu_exception	: yes
cpuid level	: 11
wp		: yes
flags		: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm pcid dca sse4_1 sse4_2 popcnt aes lahf_lm ida arat epb dts tpr_shadow vnmi flexpriority ept vpid
bogomips	: 4787.76
clflush size	: 64
cache_alignment	: 64
address sizes	: 40 bits physical, 48 bits virtual
power management:

# cat /proc/meminfo
MemTotal:       16446616 kB
MemFree:        15874092 kB
Buffers:           30404 kB
Cached:           238640 kB
SwapCached:            0 kB
Active:           100204 kB
Inactive:         184312 kB
Active(anon):      15724 kB
Inactive(anon):        4 kB
Active(file):      84480 kB
Inactive(file):   184308 kB
Unevictable:           0 kB
Mlocked:               0 kB
SwapTotal:       8388604 kB
SwapFree:        8388604 kB
Dirty:                56 kB
Writeback:             0 kB
AnonPages:         15548 kB
Mapped:            11540 kB
Shmem:               256 kB
Slab:              82444 kB
SReclaimable:      19220 kB
SUnreclaim:        63224 kB
KernelStack:        1224 kB
PageTables:         2256 kB
NFS_Unstable:          0 kB
Bounce:                0 kB
WritebackTmp:          0 kB
CommitLimit:    16611912 kB
Committed_AS:     209068 kB
VmallocTotal:   34359738367 kB
VmallocUsed:      224244 kB
VmallocChunk:   34351073668 kB
HardwareCorrupted:     0 kB
AnonHugePages:         0 kB
HugePages_Total:       0
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
DirectMap4k:        9876 kB
DirectMap2M:     2070528 kB
DirectMap1G:    14680064 kB

# numactl --hardware
available: 2 nodes (0-1)
node 0 cpus: 0 1 2 3
node 0 size: 8175 MB
node 0 free: 7706 MB
node 1 cpus: 4 5 6 7
node 1 size: 8192 MB
node 1 free: 7796 MB
node distances:
node   0   1
  0:  10  20
  1:  20  10

# numactl --show
policy: default
preferred node: current
physcpubind: 0 1 2 3 4 5 6 7
cpubind: 0 1
nodebind: 0 1
membind: 0 1

> Cheers,
> Ben.
>
>> diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
>> index 3198f2e..fdccb77 100644
>> --- a/drivers/virtio/virtio_ring.c
>> +++ b/drivers/virtio/virtio_ring.c
>> @@ -23,7 +23,7 @@
>>
>>  /* virtio guest is communicating with a virtual "device" that actually runs on
>>   * a host processor. Memory barriers are used to control SMP effects. */
>> -#ifdef CONFIG_SMP
>> +#if 0
>>  /* Where possible, use SMP barriers which are more lightweight than mandatory
>>   * barriers, because mandatory barriers control MMIO effects on accesses
>>   * through relaxed memory I/O windows (which virtio does not use). */
Amos Kong
2011-Dec-19 07:21 UTC
[RFC] virtio: use mandatory barriers for remote processor vdevs
On 19/12/11 10:41, Benjamin Herrenschmidt wrote:
> On Mon, 2011-12-19 at 10:19 +0800, Amos Kong wrote:
>
>> I tested with the same environment and scenarios.
>> Each scenario was tested three times and the average computed, for more
>> precision.
>>
>> Thanks, Amos
>>
>> --------- compare results -----------
>> Mon Dec 19 09:51:09 2011
>>
>> 1 - avg-old.netperf.exhost_guest.txt
>> 2 - avg-fixed.netperf.exhost_guest.txt
>
> The output is word wrapped and generally unreadable. Any chance you can
> provide us with a summary of the outcome?
>
> Cheers,
> Ben.

Hi Ben,

The change in TCP_RR throughput is very small.

external host -> guest:
  Some of the TCP_STREAM and TCP_MAERTS throughput numbers dropped a little.
local host -> guest:
  Some of the TCP_STREAM and TCP_MAERTS throughput numbers increased a little.

About the compare result format:
---------------------------
>> 1 - avg-old.netperf.exhost_guest.txt
averaged result (3 runs) file of test 1
>> 2 - avg-fixed.netperf.exhost_guest.txt
averaged result file of test 2
>>
>> ===== TCP_STREAM
^^^ protocol
>> sessions| size|throughput| cpu| normalize| #tx-pkts| #rx-pkts| #tx-byts| #rx-byts| #re-trans| #tx-intr| #rx-intr| #io_exit| #irq_inj|#tpkt/#exit| #rpkt/#irq
>> 1 1| 64| 1073.54| 10.50| 102| ....
^^^ averaged result with the old kernel; netserver runs in the guest, netperf client(s) run on the external host
>> 2 1| 64| 1079.44| 10.29| 104| ....
^^^ averaged result with the fixed kernel
>> % | 0.0| +0.5| -2.0| +2.0| ....
^^^ percent change between test 1 and test 2
--------
sessions: number of netperf clients
size: request/response sizes
#rx-pkts: number of received packets
#rx-byts: number of received bytes
#rx-intr: number of receive interrupts
#io_exit: number of io exits
#irq_inj: number of injected irqs

Thanks, Amos.