Quan Xu
2017-Nov-14 07:02 UTC
[PATCH RFC v3 1/6] x86/paravirt: Add pv_idle_ops to paravirt ops
On 2017/11/13 18:53, Juergen Gross wrote:
> On 13/11/17 11:06, Quan Xu wrote:
>> From: Quan Xu <quan.xu0 at gmail.com>
>>
>> So far, pv_idle_ops.poll is the only ops for pv_idle. .poll is called
>> in idle path which will poll for a while before we enter the real idle
>> state.
>>
>> In virtualization, idle path includes several heavy operations
>> includes timer access(LAPIC timer or TSC deadline timer) which will
>> hurt performance especially for latency intensive workload like message
>> passing task. The cost is mainly from the vmexit which is a hardware
>> context switch between virtual machine and hypervisor. Our solution is
>> to poll for a while and do not enter real idle path if we can get the
>> schedule event during polling.
>>
>> Poll may cause the CPU waste so we adopt a smart polling mechanism to
>> reduce the useless poll.
>>
>> Signed-off-by: Yang Zhang <yang.zhang.wz at gmail.com>
>> Signed-off-by: Quan Xu <quan.xu0 at gmail.com>
>> Cc: Juergen Gross <jgross at suse.com>
>> Cc: Alok Kataria <akataria at vmware.com>
>> Cc: Rusty Russell <rusty at rustcorp.com.au>
>> Cc: Thomas Gleixner <tglx at linutronix.de>
>> Cc: Ingo Molnar <mingo at redhat.com>
>> Cc: "H. Peter Anvin" <hpa at zytor.com>
>> Cc: x86 at kernel.org
>> Cc: virtualization at lists.linux-foundation.org
>> Cc: linux-kernel at vger.kernel.org
>> Cc: xen-devel at lists.xenproject.org
>
> Hmm, is the idle entry path really so critical to performance that a new
> pvops function is necessary?

Juergen, here is the data we get when running the netperf benchmark:

 1. w/o patch and disable kvm dynamic poll (halt_poll_ns=0):
    29031.6 bit/s -- 76.1 %CPU

 2. w/ patch and disable kvm dynamic poll (halt_poll_ns=0):
    35787.7 bit/s -- 129.4 %CPU

 3. w/ kvm dynamic poll:
    35735.6 bit/s -- 200.0 %CPU

 4. w/ patch and w/ kvm dynamic poll:
    42225.3 bit/s -- 198.7 %CPU

 5. idle=poll:
    37081.7 bit/s -- 998.1 %CPU

With this patch we improve performance by 23%; we could even improve
performance by 45.4% if we use the patch together with kvm dynamic poll.
The CPU cost is also much lower than in the 'idle=poll' case.

> Wouldn't a function pointer, maybe guarded
> by a static key, be enough? A further advantage would be that this would
> work on other architectures, too.

I assume this feature will be ported to other archs. A new pvops makes the
code clean and easy to maintain. I also tried to add it to an existing
pvops, but it doesn't match.

Quan
Alibaba Cloud

> Juergen
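For context, a minimal sketch of what a pv_idle_ops.poll hook and its
idle-path call site could look like; only pv_idle_ops.poll is named in the
patch description, so the helper names and the call site below are
illustrative assumptions, not the actual patch contents:

struct pv_idle_ops {
	void (*poll)(void);		/* poll briefly before real idle */
};

extern struct pv_idle_ops pv_idle_ops;

static inline void paravirt_idle_poll(void)
{
	if (pv_idle_ops.poll)
		pv_idle_ops.poll();
}

/* Illustrative call site in the idle entry path: */
static void idle_enter(void)
{
	paravirt_idle_poll();		/* may pick up a pending resched event */
	if (!need_resched())
		arch_cpu_idle();	/* real idle: halt, may vmexit in a guest */
}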
Wanpeng Li
2017-Nov-14 07:12 UTC
[PATCH RFC v3 1/6] x86/paravirt: Add pv_idle_ops to paravirt ops
2017-11-14 15:02 GMT+08:00 Quan Xu <quan.xu0 at gmail.com>:
>
> On 2017/11/13 18:53, Juergen Gross wrote:
>>
>> On 13/11/17 11:06, Quan Xu wrote:
>>>
>>> From: Quan Xu <quan.xu0 at gmail.com>
>>>
>>> So far, pv_idle_ops.poll is the only ops for pv_idle. .poll is called
>>> in idle path which will poll for a while before we enter the real idle
>>> state.
>>>
>>> In virtualization, idle path includes several heavy operations
>>> includes timer access(LAPIC timer or TSC deadline timer) which will
>>> hurt performance especially for latency intensive workload like message
>>> passing task. The cost is mainly from the vmexit which is a hardware
>>> context switch between virtual machine and hypervisor. Our solution is
>>> to poll for a while and do not enter real idle path if we can get the
>>> schedule event during polling.
>>>
>>> Poll may cause the CPU waste so we adopt a smart polling mechanism to
>>> reduce the useless poll.
>>>
>>> Signed-off-by: Yang Zhang <yang.zhang.wz at gmail.com>
>>> Signed-off-by: Quan Xu <quan.xu0 at gmail.com>
>>> Cc: Juergen Gross <jgross at suse.com>
>>> Cc: Alok Kataria <akataria at vmware.com>
>>> Cc: Rusty Russell <rusty at rustcorp.com.au>
>>> Cc: Thomas Gleixner <tglx at linutronix.de>
>>> Cc: Ingo Molnar <mingo at redhat.com>
>>> Cc: "H. Peter Anvin" <hpa at zytor.com>
>>> Cc: x86 at kernel.org
>>> Cc: virtualization at lists.linux-foundation.org
>>> Cc: linux-kernel at vger.kernel.org
>>> Cc: xen-devel at lists.xenproject.org
>>
>> Hmm, is the idle entry path really so critical to performance that a new
>> pvops function is necessary?
>
> Juergen, Here is the data we get when running benchmark netperf:
> 1. w/o patch and disable kvm dynamic poll (halt_poll_ns=0):
>    29031.6 bit/s -- 76.1 %CPU
>
> 2. w/ patch and disable kvm dynamic poll (halt_poll_ns=0):
>    35787.7 bit/s -- 129.4 %CPU
>
> 3. w/ kvm dynamic poll:
>    35735.6 bit/s -- 200.0 %CPU

Actually, we can reduce the CPU utilization by sleeping for a period of
time, as has already been done in the poll logic of the IO subsystem;
then we can improve the algorithm in kvm instead of introducing another,
duplicate one in the kvm guest.

Regards,
Wanpeng Li

> 4. w/patch and w/ kvm dynamic poll:
>    42225.3 bit/s -- 198.7 %CPU
>
> 5. idle=poll
>    37081.7 bit/s -- 998.1 %CPU
>
> w/ this patch, we will improve performance by 23%.. even we could improve
> performance by 45.4%, if we use w/patch and w/ kvm dynamic poll. also the
> cost of CPU is much lower than 'idle=poll' case..
>
>> Wouldn't a function pointer, maybe guarded
>> by a static key, be enough? A further advantage would be that this would
>> work on other architectures, too.
>
> I assume this feature will be ported to other archs.. a new pvops makes code
> clean and easy to maintain. also I tried to add it into existed pvops, but
> it doesn't match.
>
> Quan
> Alibaba Cloud
>>
>> Juergen
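For context, the kvm dynamic poll being referred to adapts its poll window
with a grow/shrink heuristic. The following is a simplified, illustrative
sketch of that idea only; it is not the actual kvm_main.c code, and the
names and constants are assumptions:

#define POLL_NS_CEILING   500000ULL	/* assumed upper bound on the window */
#define POLL_GROW_FACTOR  2
#define POLL_SHRINK_DIV   2

static unsigned long long halt_poll_window_ns = 200000;	/* current window */

/*
 * Called after a vCPU halt: grow the window when the wakeup arrived
 * shortly after polling gave up, shrink it when the vCPU blocked for
 * much longer than the window (polling was pure waste).
 */
static void adjust_poll_window(int woken_during_poll,
			       unsigned long long blocked_ns)
{
	if (woken_during_poll)
		return;					/* window was large enough */

	if (blocked_ns > POLL_NS_CEILING)
		halt_poll_window_ns /= POLL_SHRINK_DIV;
	else if (halt_poll_window_ns * POLL_GROW_FACTOR <= POLL_NS_CEILING)
		halt_poll_window_ns *= POLL_GROW_FACTOR;
}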
Juergen Gross
2017-Nov-14 07:30 UTC
[PATCH RFC v3 1/6] x86/paravirt: Add pv_idle_ops to paravirt ops
On 14/11/17 08:02, Quan Xu wrote:
>
> On 2017/11/13 18:53, Juergen Gross wrote:
>> On 13/11/17 11:06, Quan Xu wrote:
>>> From: Quan Xu <quan.xu0 at gmail.com>
>>>
>>> So far, pv_idle_ops.poll is the only ops for pv_idle. .poll is called
>>> in idle path which will poll for a while before we enter the real idle
>>> state.
>>>
>>> In virtualization, idle path includes several heavy operations
>>> includes timer access(LAPIC timer or TSC deadline timer) which will
>>> hurt performance especially for latency intensive workload like message
>>> passing task. The cost is mainly from the vmexit which is a hardware
>>> context switch between virtual machine and hypervisor. Our solution is
>>> to poll for a while and do not enter real idle path if we can get the
>>> schedule event during polling.
>>>
>>> Poll may cause the CPU waste so we adopt a smart polling mechanism to
>>> reduce the useless poll.
>>>
>>> Signed-off-by: Yang Zhang <yang.zhang.wz at gmail.com>
>>> Signed-off-by: Quan Xu <quan.xu0 at gmail.com>
>>> Cc: Juergen Gross <jgross at suse.com>
>>> Cc: Alok Kataria <akataria at vmware.com>
>>> Cc: Rusty Russell <rusty at rustcorp.com.au>
>>> Cc: Thomas Gleixner <tglx at linutronix.de>
>>> Cc: Ingo Molnar <mingo at redhat.com>
>>> Cc: "H. Peter Anvin" <hpa at zytor.com>
>>> Cc: x86 at kernel.org
>>> Cc: virtualization at lists.linux-foundation.org
>>> Cc: linux-kernel at vger.kernel.org
>>> Cc: xen-devel at lists.xenproject.org
>>
>> Hmm, is the idle entry path really so critical to performance that a new
>> pvops function is necessary?
>
> Juergen, Here is the data we get when running benchmark netperf:
> 1. w/o patch and disable kvm dynamic poll (halt_poll_ns=0):
>    29031.6 bit/s -- 76.1 %CPU
>
> 2. w/ patch and disable kvm dynamic poll (halt_poll_ns=0):
>    35787.7 bit/s -- 129.4 %CPU
>
> 3. w/ kvm dynamic poll:
>    35735.6 bit/s -- 200.0 %CPU
>
> 4. w/patch and w/ kvm dynamic poll:
>    42225.3 bit/s -- 198.7 %CPU
>
> 5. idle=poll
>    37081.7 bit/s -- 998.1 %CPU
>
> w/ this patch, we will improve performance by 23%.. even we could improve
> performance by 45.4%, if we use w/patch and w/ kvm dynamic poll. also the
> cost of CPU is much lower than 'idle=poll' case..

I don't question the general idea. I just think pvops isn't the best way
to implement it.

>> Wouldn't a function pointer, maybe guarded
>> by a static key, be enough? A further advantage would be that this would
>> work on other architectures, too.
>
> I assume this feature will be ported to other archs.. a new pvops makes
> code clean and easy to maintain. also I tried to add it into existed
> pvops, but it doesn't match.

You are aware that pvops is x86 only?

I really don't see the big difference in maintainability compared to the
static key / function pointer variant:

void (*guest_idle_poll_func)(void);
struct static_key guest_idle_poll_key __read_mostly;

static inline void guest_idle_poll(void)
{
	if (static_key_false(&guest_idle_poll_key))
		guest_idle_poll_func();
}

And KVM would just need to set guest_idle_poll_func and enable the
static key. Works on non-x86 architectures, too.


Juergen
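As a usage note on the variant above, a hedged sketch of how KVM guest init
could wire it up; kvm_guest_poll() and kvm_setup_idle_poll() are assumed
names for illustration, not existing kernel functions:

/* Illustrative only: wiring up the static-key variant from KVM guest setup.
 * kvm_guest_poll() stands in for whatever poll loop the guest implements. */
static void kvm_guest_poll(void)
{
	/* spin briefly, checking for a pending reschedule event */
}

static void __init kvm_setup_idle_poll(void)
{
	guest_idle_poll_func = kvm_guest_poll;
	static_key_slow_inc(&guest_idle_poll_key);	/* enable the fast path */
}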
Quan Xu
2017-Nov-14 08:15 UTC
[PATCH RFC v3 1/6] x86/paravirt: Add pv_idle_ops to paravirt ops
On 2017/11/14 15:12, Wanpeng Li wrote:
> 2017-11-14 15:02 GMT+08:00 Quan Xu <quan.xu0 at gmail.com>:
>>
>> On 2017/11/13 18:53, Juergen Gross wrote:
>>> On 13/11/17 11:06, Quan Xu wrote:
>>>> From: Quan Xu <quan.xu0 at gmail.com>
>>>>
>>>> So far, pv_idle_ops.poll is the only ops for pv_idle. .poll is called
>>>> in idle path which will poll for a while before we enter the real idle
>>>> state.
>>>>
>>>> In virtualization, idle path includes several heavy operations
>>>> includes timer access(LAPIC timer or TSC deadline timer) which will
>>>> hurt performance especially for latency intensive workload like message
>>>> passing task. The cost is mainly from the vmexit which is a hardware
>>>> context switch between virtual machine and hypervisor. Our solution is
>>>> to poll for a while and do not enter real idle path if we can get the
>>>> schedule event during polling.
>>>>
>>>> Poll may cause the CPU waste so we adopt a smart polling mechanism to
>>>> reduce the useless poll.
>>>>
>>>> Signed-off-by: Yang Zhang <yang.zhang.wz at gmail.com>
>>>> Signed-off-by: Quan Xu <quan.xu0 at gmail.com>
>>>> Cc: Juergen Gross <jgross at suse.com>
>>>> Cc: Alok Kataria <akataria at vmware.com>
>>>> Cc: Rusty Russell <rusty at rustcorp.com.au>
>>>> Cc: Thomas Gleixner <tglx at linutronix.de>
>>>> Cc: Ingo Molnar <mingo at redhat.com>
>>>> Cc: "H. Peter Anvin" <hpa at zytor.com>
>>>> Cc: x86 at kernel.org
>>>> Cc: virtualization at lists.linux-foundation.org
>>>> Cc: linux-kernel at vger.kernel.org
>>>> Cc: xen-devel at lists.xenproject.org
>>>
>>> Hmm, is the idle entry path really so critical to performance that a new
>>> pvops function is necessary?
>>
>> Juergen, Here is the data we get when running benchmark netperf:
>> 1. w/o patch and disable kvm dynamic poll (halt_poll_ns=0):
>>    29031.6 bit/s -- 76.1 %CPU
>>
>> 2. w/ patch and disable kvm dynamic poll (halt_poll_ns=0):
>>    35787.7 bit/s -- 129.4 %CPU
>>
>> 3. w/ kvm dynamic poll:
>>    35735.6 bit/s -- 200.0 %CPU
>
> Actually we can reduce the CPU utilization by sleeping a period of
> time as what has already been done in the poll logic of IO subsystem,
> then we can improve the algorithm in kvm instead of introduing another
> duplicate one in the kvm guest.

We really appreciate upstream's kvm dynamic poll mechanism, which is
really helpful for a lot of scenarios.

However, as the description said, in virtualization the idle path includes
several heavy operations, including timer access (LAPIC timer or TSC
deadline timer), which hurt performance especially for latency-intensive
workloads like message passing tasks. The cost is mainly from the vmexit,
which is a hardware context switch between the virtual machine and the
hypervisor.

For upstream's kvm dynamic poll mechanism, even if you could provide a
better algorithm, how could you bypass the timer access (LAPIC timer or
TSC deadline timer), or the hardware context switch between the virtual
machine and the hypervisor? I know there is a tradeoff.

Furthermore, here is the data we get when running the contextswitch
benchmark to measure latency (lower is better):

 1. w/o patch and disable kvm dynamic poll (halt_poll_ns=0):
    3402.9 ns/ctxsw -- 199.8 %CPU

 2. w/ patch and disable kvm dynamic poll:
    1163.5 ns/ctxsw -- 205.5 %CPU

 3. w/ kvm dynamic poll:
    2280.6 ns/ctxsw -- 199.5 %CPU

So these two solutions are quite similar, but not duplicates. That is also
why we add a generic idle poll before entering the real idle path: when a
reschedule event is pending, we can bypass the real idle path.

Quan
Alibaba Cloud

> Regards,
> Wanpeng Li
>
>> 4. w/patch and w/ kvm dynamic poll:
>>    42225.3 bit/s -- 198.7 %CPU
>>
>> 5. idle=poll
>>    37081.7 bit/s -- 998.1 %CPU
>>
>> w/ this patch, we will improve performance by 23%.. even we could improve
>> performance by 45.4%, if we use w/patch and w/ kvm dynamic poll. also the
>> cost of CPU is much lower than 'idle=poll' case..
>>
>>> Wouldn't a function pointer, maybe guarded
>>> by a static key, be enough? A further advantage would be that this would
>>> work on other architectures, too.
>>
>> I assume this feature will be ported to other archs.. a new pvops makes code
>> clean and easy to maintain. also I tried to add it into existed pvops, but
>> it doesn't match.
>>
>> Quan
>> Alibaba Cloud
>>>
>>> Juergen
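To make the "bypass the real idle path" idea concrete, here is a hedged
sketch of a bounded guest-side poll loop; the names, time source and limit
handling are assumptions for illustration, not the patch itself:

/* Illustrative sketch: spin for a bounded window before real idle and bail
 * out as soon as a reschedule event is pending, so the expensive halt
 * (and its vmexit) can be skipped entirely. */
static unsigned long poll_limit_ns = 200000;	/* tuned by a grow/shrink policy */

static void guest_idle_poll(void)
{
	u64 start = ktime_get_ns();

	while (!need_resched() &&
	       ktime_get_ns() - start < poll_limit_ns)
		cpu_relax();

	/* caller checks need_resched() and skips halt if it is now set */
}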
Quan Xu
2017-Nov-14 09:38 UTC
[PATCH RFC v3 1/6] x86/paravirt: Add pv_idle_ops to paravirt ops
On 2017/11/14 15:30, Juergen Gross wrote:
> On 14/11/17 08:02, Quan Xu wrote:
>>
>> On 2017/11/13 18:53, Juergen Gross wrote:
>>> On 13/11/17 11:06, Quan Xu wrote:
>>>> From: Quan Xu <quan.xu0 at gmail.com>
>>>>
>>>> So far, pv_idle_ops.poll is the only ops for pv_idle. .poll is called
>>>> in idle path which will poll for a while before we enter the real idle
>>>> state.
>>>>
>>>> In virtualization, idle path includes several heavy operations
>>>> includes timer access(LAPIC timer or TSC deadline timer) which will
>>>> hurt performance especially for latency intensive workload like message
>>>> passing task. The cost is mainly from the vmexit which is a hardware
>>>> context switch between virtual machine and hypervisor. Our solution is
>>>> to poll for a while and do not enter real idle path if we can get the
>>>> schedule event during polling.
>>>>
>>>> Poll may cause the CPU waste so we adopt a smart polling mechanism to
>>>> reduce the useless poll.
>>>>
>>>> Signed-off-by: Yang Zhang <yang.zhang.wz at gmail.com>
>>>> Signed-off-by: Quan Xu <quan.xu0 at gmail.com>
>>>> Cc: Juergen Gross <jgross at suse.com>
>>>> Cc: Alok Kataria <akataria at vmware.com>
>>>> Cc: Rusty Russell <rusty at rustcorp.com.au>
>>>> Cc: Thomas Gleixner <tglx at linutronix.de>
>>>> Cc: Ingo Molnar <mingo at redhat.com>
>>>> Cc: "H. Peter Anvin" <hpa at zytor.com>
>>>> Cc: x86 at kernel.org
>>>> Cc: virtualization at lists.linux-foundation.org
>>>> Cc: linux-kernel at vger.kernel.org
>>>> Cc: xen-devel at lists.xenproject.org
>>>
>>> Hmm, is the idle entry path really so critical to performance that a new
>>> pvops function is necessary?
>>
>> Juergen, Here is the data we get when running benchmark netperf:
>> 1. w/o patch and disable kvm dynamic poll (halt_poll_ns=0):
>>    29031.6 bit/s -- 76.1 %CPU
>>
>> 2. w/ patch and disable kvm dynamic poll (halt_poll_ns=0):
>>    35787.7 bit/s -- 129.4 %CPU
>>
>> 3. w/ kvm dynamic poll:
>>    35735.6 bit/s -- 200.0 %CPU
>>
>> 4. w/patch and w/ kvm dynamic poll:
>>    42225.3 bit/s -- 198.7 %CPU
>>
>> 5. idle=poll
>>    37081.7 bit/s -- 998.1 %CPU
>>
>> w/ this patch, we will improve performance by 23%.. even we could improve
>> performance by 45.4%, if we use w/patch and w/ kvm dynamic poll. also the
>> cost of CPU is much lower than 'idle=poll' case..
>
> I don't question the general idea. I just think pvops isn't the best way
> to implement it.
>
>>> Wouldn't a function pointer, maybe guarded
>>> by a static key, be enough? A further advantage would be that this would
>>> work on other architectures, too.
>>
>> I assume this feature will be ported to other archs.. a new pvops makes

Sorry, a typo: s/other archs/other hypervisors/
(it refers to hypervisors like Xen, HyperV and VMware).

>> code clean and easy to maintain. also I tried to add it into existed
>> pvops, but it doesn't match.
>
> You are aware that pvops is x86 only?

Yes, I'm aware.

> I really don't see the big difference in maintainability compared to the
> static key / function pointer variant:
>
> void (*guest_idle_poll_func)(void);
> struct static_key guest_idle_poll_key __read_mostly;
>
> static inline void guest_idle_poll(void)
> {
>     if (static_key_false(&guest_idle_poll_key))
>         guest_idle_poll_func();
> }

Thank you for your sample code :) I agree there is no big difference.

I think we are discussing two things:
 1) x86 VMs on different hypervisors
 2) VMs of different archs on the kvm hypervisor

What I want to do is x86 VMs on different hypervisors, such as kvm / xen /
hyperv.

> And KVM would just need to set guest_idle_poll_func and enable the
> static key. Works on non-x86 architectures, too.

Referring to 'pv_mmu_ops': HyperV and Xen can implement their own
functions for 'pv_mmu_ops'. I think it is the same for pv_idle_ops.

With the above explanation, do you still think I need to define the
static key / function pointer variant?

BTW, any interest in porting it to Xen HVM guests? :)

Quan
Alibaba Cloud
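To illustrate the pv_mmu_ops analogy being drawn here, each hypervisor's
guest-setup code could install its own pv_idle_ops.poll along the following
lines; the function names are illustrative assumptions, not the actual
patch or existing KVM/Xen code:

/* Illustrative only: per-hypervisor registration of pv_idle_ops.poll,
 * analogous to how pv_mmu_ops entries are filled in. */
static void kvm_idle_poll(void)
{
	/* KVM-guest-specific poll loop */
}

static void xen_hvm_idle_poll(void)
{
	/* Xen-HVM-specific poll loop */
}

void __init kvm_guest_idle_init(void)
{
	pv_idle_ops.poll = kvm_idle_poll;
}

void __init xen_hvm_idle_init(void)
{
	pv_idle_ops.poll = xen_hvm_idle_poll;
}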