Raghavendra K T
2013-Jun-01 19:21 UTC
[PATCH RFC V9 0/19] Paravirtualized ticket spinlocks
This series replaces the existing paravirtualized spinlock mechanism with a
paravirtualized ticketlock mechanism. The series provides implementations for
both Xen and KVM.

Changes in V9:
- Changed SPIN_THRESHOLD to 32k to avoid excess halt exits that were causing
  undercommit degradation (after the PLE handler improvement).
- Added kvm_irq_delivery_to_apic (suggested by Gleb).
- Optimized the halt exit path to use the PLE handler.

V8 of PV spinlock was posted last year. After Avi's suggestion to look at PLE
handler improvements, various optimizations in PLE handling have been tried.
With this series we see that we can get a little more improvement on top of
that.

Ticket locks have an inherent problem in a virtualized case, because the
vCPUs are scheduled rather than running concurrently (ignoring gang-scheduled
vCPUs). This can result in catastrophic performance collapses when the vCPU
scheduler doesn't schedule the correct "next" vCPU, and ends up scheduling a
vCPU which burns its entire timeslice spinning. (Note that this is not the
same problem as lock-holder preemption, which this series also addresses;
that's also a problem, but not catastrophic.)

(See Thomas Friebel's talk "Prevent Guests from Spinning Around"
http://www.xen.org/files/xensummitboston08/LHP.pdf for more details.)

Currently we deal with this by having PV spinlocks, which adds a layer of
indirection in front of all the spinlock functions, and defines a completely
new implementation for Xen (and for other pvops users, but there are none at
present).

PV ticketlocks keep the existing ticketlock implementation (fastpath) as-is,
but add a couple of pvops for the slow paths:

- If a CPU has been waiting for a spinlock for SPIN_THRESHOLD iterations,
  then call out to the __ticket_lock_spinning() pvop, which allows a backend
  to block the vCPU rather than spinning. This pvop can set the lock into
  "slowpath state".

- When releasing a lock, if it is in "slowpath state", then call
  __ticket_unlock_kick() to kick the next vCPU in line awake. If the lock is
  no longer in contention, it also clears the slowpath flag.

The "slowpath state" is stored in the LSB of the lock's tail ticket. This has
the effect of halving the maximum number of CPUs (so a "small ticket" can
deal with 128 CPUs, and a "large ticket" with 32768).

For KVM, one hypercall is introduced in the hypervisor that allows a vCPU to
kick another vCPU out of halt state. The blocking of a vCPU is done using
halt() in the (lock_spinning) slowpath.

Overall, it results in a large reduction in code, it makes the native and
virtualized cases closer, and it removes a layer of indirection around all
the spinlock functions. The fast path (taking an uncontended lock which isn't
in "slowpath" state) is optimal, identical to the non-paravirtualized case.
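As a rough illustration of the KVM mechanism described above (this is only a
sketch of the approach; kvm_lock_spinning, kvm_unlock_kick and the per-cpu
lock_waiting bookkeeping are assumed names for illustration, and the actual
guest patch later in the series differs in detail), the guest side looks
roughly like:

	struct kvm_lock_waiting {
		struct arch_spinlock *lock;
		__ticket_t want;
	};
	static DEFINE_PER_CPU(struct kvm_lock_waiting, lock_waiting);

	/* lock_spinning pvop: called after SPIN_THRESHOLD iterations */
	static void kvm_lock_spinning(struct arch_spinlock *lock, __ticket_t want)
	{
		unsigned long flags;

		local_irq_save(flags);		/* don't halt with irqs enabled */

		/* advertise which ticket this cpu is waiting for */
		__this_cpu_write(lock_waiting.lock, lock);
		__this_cpu_write(lock_waiting.want, want);

		/* recheck in case the unlocker has already kicked us */
		if (ACCESS_ONCE(lock->tickets.head) != want)
			halt();	/* woken by KVM_HC_KICK_CPU from the unlocker */

		__this_cpu_write(lock_waiting.lock, NULL);
		local_irq_restore(flags);
	}

	/* unlock_kick pvop: find the cpu waiting for 'ticket' and kick it */
	static void kvm_unlock_kick(struct arch_spinlock *lock, __ticket_t ticket)
	{
		int cpu;

		for_each_cpu(cpu, cpu_online_mask) {
			struct kvm_lock_waiting *w = &per_cpu(lock_waiting, cpu);

			if (ACCESS_ONCE(w->lock) == lock && w->want == ticket) {
				kvm_hypercall2(KVM_HC_KICK_CPU, 0,
					       per_cpu(x86_cpu_to_apicid, cpu));
				break;
			}
		}
	}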
The inner part of the ticket lock code becomes:

	inc = xadd(&lock->tickets, inc);
	inc.tail &= ~TICKET_SLOWPATH_FLAG;

	if (likely(inc.head == inc.tail))
		goto out;

	for (;;) {
		unsigned count = SPIN_THRESHOLD;

		do {
			if (ACCESS_ONCE(lock->tickets.head) == inc.tail)
				goto out;
			cpu_relax();
		} while (--count);
		__ticket_lock_spinning(lock, inc.tail);
	}
out:	barrier();

which results in:

	push   %rbp
	mov    %rsp,%rbp
	mov    $0x200,%eax
	lock xadd %ax,(%rdi)
	movzbl %ah,%edx
	cmp    %al,%dl
	jne    1f	# Slowpath if lock in contention
	pop    %rbp
	retq

	### SLOWPATH START
1:	and    $-2,%edx
	movzbl %dl,%esi
2:	mov    $0x800,%eax
	jmp    4f
3:	pause
	sub    $0x1,%eax
	je     5f
4:	movzbl (%rdi),%ecx
	cmp    %cl,%dl
	jne    3b
	pop    %rbp
	retq
5:	callq  *__ticket_lock_spinning
	jmp    2b
	### SLOWPATH END

With CONFIG_PARAVIRT_SPINLOCKS=n, the code changes only slightly: the
fastpath case is straight through (taking the lock without contention), and
the spin loop is out of line:

	push   %rbp
	mov    %rsp,%rbp
	mov    $0x100,%eax
	lock xadd %ax,(%rdi)
	movzbl %ah,%edx
	cmp    %al,%dl
	jne    1f
	pop    %rbp
	retq

	### SLOWPATH START
1:	pause
	movzbl (%rdi),%eax
	cmp    %dl,%al
	jne    1b
	pop    %rbp
	retq
	### SLOWPATH END

The unlock code is complicated by the need to both add to the lock's "head"
and fetch the slowpath flag from "tail". This version of the patch uses a
locked add to do this, followed by a test to see if the slowpath flag is set.
The lock prefix acts as a full memory barrier, so we can be sure that other
CPUs will have seen the unlock before we read the flag (without the barrier
the read could be fetched from the store queue before it hits memory, which
could result in a deadlock).

Since this is all unnecessary complication if you're not using PV ticket
locks, it also uses the jump-label machinery to use the standard "add"-based
unlock in the non-PV case:

	if (TICKET_SLOWPATH_FLAG &&
	    static_key_false(&paravirt_ticketlocks_enabled)) {
		arch_spinlock_t prev;

		prev = *lock;
		add_smp(&lock->tickets.head, TICKET_LOCK_INC);

		/* add_smp() is a full mb() */

		if (unlikely(lock->tickets.tail & TICKET_SLOWPATH_FLAG))
			__ticket_unlock_slowpath(lock, prev);
	} else
		__add(&lock->tickets.head, TICKET_LOCK_INC, UNLOCK_LOCK_PREFIX);

which generates:

	push   %rbp
	mov    %rsp,%rbp

	nop5	# replaced by 5-byte jmp 2f when PV enabled

	# non-PV unlock
	addb   $0x2,(%rdi)
1:	pop    %rbp
	retq

	### PV unlock ###
2:	movzwl (%rdi),%esi	# Fetch prev lock
	addb   $0x2,(%rdi)	# Do unlock
	testb  $0x1,0x1(%rdi)	# Test flag
	je     1b		# Finished if not set

	### Slow path ###
	add    $2,%sil		# Add "head" in old lock state
	mov    %esi,%edx
	and    $0xfe,%dh	# clear slowflag for comparison
	movzbl %dh,%eax
	cmp    %dl,%al		# If head == tail (uncontended)
	je     4f		# clear slowpath flag

	# Kick next CPU waiting for lock
3:	movzbl %sil,%esi
	callq  *pv_lock_ops.kick
	pop    %rbp
	retq

	# Lock no longer contended - clear slowflag
4:	mov    %esi,%eax
	lock cmpxchg %dx,(%rdi)	# cmpxchg to clear flag
	cmp    %si,%ax
	jne    3b		# If clear failed, then kick
	pop    %rbp
	retq

So when not using PV ticketlocks, the unlock sequence just has a 5-byte nop
added to it, and the PV case is reasonably straightforward aside from
requiring a "lock add".
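For reference, the __ticket_unlock_slowpath() used above is added by the
slowpath-logic patch later in the series; a sketch of its behaviour (not a
verbatim quote of that patch) is:

	static void __ticket_unlock_slowpath(arch_spinlock_t *lock,
					     arch_spinlock_t old)
	{
		arch_spinlock_t new;

		/* Perform the unlock on the "before" copy */
		old.tickets.head += TICKET_LOCK_INC;

		/* Clear the slowpath flag so head can be compared to a clean tail */
		new.head_tail = old.head_tail &
				~(TICKET_SLOWPATH_FLAG << TICKET_SHIFT);

		/*
		 * If the lock is uncontended, try to clear the flag with a
		 * cmpxchg; if someone is still queued (or the cmpxchg raced
		 * with a new locker), kick the next waiter instead.
		 */
		if (new.tickets.head != new.tickets.tail ||
		    cmpxchg(&lock->head_tail, old.head_tail,
			    new.head_tail) != old.head_tail)
			__ticket_unlock_kick(lock, old.tickets.head);
	}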
Results:
========
base    = 3.10-rc2 kernel
patched = base + this series

The test was on a 32-core (model: Intel(R) Xeon(R) CPU X7560) machine with HT
disabled, running a 32-vCPU KVM guest with 8GB RAM.

+-----------+-----------+-----------+------------+-----------+
               ebizzy (records/sec)  higher is better
+-----------+-----------+-----------+------------+-----------+
         base       stdev     patched       stdev  %improvement
+-----------+-----------+-----------+------------+-----------+
1x   5574.9000    237.4997   5618.0000     94.0366     0.77311
2x   2741.5000    561.3090   3332.0000    102.4738    21.53930
3x   2146.2500    216.7718   2302.3333     76.3870     7.27237
4x   1663.0000    141.9235   1753.7500     83.5220     5.45701
+-----------+-----------+-----------+------------+-----------+

+-----------+-----------+-----------+------------+-----------+
               dbench (Throughput)  higher is better
+-----------+-----------+-----------+------------+-----------+
         base       stdev     patched       stdev  %improvement
+-----------+-----------+-----------+------------+-----------+
1x  14111.5600    754.4525  14645.9900    114.3087     3.78718
2x   2481.6270     71.2665   2667.1280     73.8193     7.47498
3x   1510.2483     31.8634   1503.8792     36.0777    -0.42173
4x   1029.4875     16.9166   1039.7069     43.8840     0.99267
+-----------+-----------+-----------+------------+-----------+

Your suggestions and comments are welcome.

github link: https://github.com/ktraghavendra/linux/tree/pvspinlock_v9

Please note that we set SPIN_THRESHOLD = 32k with this series; this eats up a
little of the overcommit performance on PLE machines and of the overall
performance on non-PLE machines.

The older series was tested by Attilio for the Xen implementation [1].

Jeremy Fitzhardinge (9):
  x86/spinlock: Replace pv spinlocks with pv ticketlocks
  x86/ticketlock: Collapse a layer of functions
  xen: Defer spinlock setup until boot CPU setup
  xen/pvticketlock: Xen implementation for PV ticket locks
  xen/pvticketlocks: Add xen_nopvspin parameter to disable xen pv ticketlocks
  x86/pvticketlock: Use callee-save for lock_spinning
  x86/pvticketlock: When paravirtualizing ticket locks, increment by 2
  x86/ticketlock: Add slowpath logic
  xen/pvticketlock: Allow interrupts to be enabled while blocking

Andrew Jones (1):
  Split jumplabel ratelimit

Stefano Stabellini (1):
  xen: Enable PV ticketlocks on HVM Xen

Srivatsa Vaddagiri (3):
  kvm hypervisor : Add a hypercall to KVM hypervisor to support pv-ticketlocks
  kvm guest : Add configuration support to enable debug information for KVM Guests
  kvm : Paravirtual ticketlocks support for linux guests running on KVM hypervisor

Raghavendra K T (5):
  x86/ticketlock: Don't inline _spin_unlock when using paravirt spinlocks
  kvm : Fold pv_unhalt flag into GET_MP_STATE ioctl to aid migration
  Simplify kvm_for_each_vcpu with kvm_irq_delivery_to_apic
  Documentation/kvm : Add documentation on Hypercalls and features used for PV spinlock
  Add directed yield in vcpu block path

---
The link in V8 has links to the previous patch series and also the whole
history.
V8 PV Ticketspinlock for Xen/KVM link:
[1] https://lkml.org/lkml/2012/5/2/119

 Documentation/virtual/kvm/cpuid.txt      |    4 +
 Documentation/virtual/kvm/hypercalls.txt |   13 ++
 arch/ia64/include/asm/kvm_host.h         |    5 +
 arch/powerpc/include/asm/kvm_host.h      |    5 +
 arch/s390/include/asm/kvm_host.h         |    5 +
 arch/x86/Kconfig                         |   10 +
 arch/x86/include/asm/kvm_host.h          |    7 +-
 arch/x86/include/asm/kvm_para.h          |   14 +-
 arch/x86/include/asm/paravirt.h          |   32 +--
 arch/x86/include/asm/paravirt_types.h    |   10 +-
 arch/x86/include/asm/spinlock.h          |  128 +++++++----
 arch/x86/include/asm/spinlock_types.h    |   16 +-
 arch/x86/include/uapi/asm/kvm_para.h     |    1 +
 arch/x86/kernel/kvm.c                    |  256 +++++++++++++++++++++
 arch/x86/kernel/paravirt-spinlocks.c     |   18 +-
 arch/x86/kvm/cpuid.c                     |    3 +-
 arch/x86/kvm/lapic.c                     |    5 +-
 arch/x86/kvm/x86.c                       |   39 +++-
 arch/x86/xen/smp.c                       |    3 +-
 arch/x86/xen/spinlock.c                  |  384 ++++++++++---------------------
 include/linux/jump_label.h               |   26 +--
 include/linux/jump_label_ratelimit.h     |   34 +++
 include/linux/kvm_host.h                 |    2 +-
 include/linux/perf_event.h               |    1 +
 include/uapi/linux/kvm_para.h            |    1 +
 kernel/jump_label.c                      |    1 +
 virt/kvm/kvm_main.c                      |    6 +-
 27 files changed, 645 insertions(+), 384 deletions(-)
Raghavendra K T
2013-Jun-01 19:21 UTC
[PATCH RFC V9 1/19] x86/spinlock: Replace pv spinlocks with pv ticketlocks
x86/spinlock: Replace pv spinlocks with pv ticketlocks

From: Jeremy Fitzhardinge <jeremy.fitzhardinge at citrix.com>

Rather than outright replacing the entire spinlock implementation in
order to paravirtualize it, keep the ticket lock implementation but add
a couple of pvops hooks on the slow path (long spin on lock, unlocking
a contended lock).

Ticket locks have a number of nice properties, but they also have some
surprising behaviours in virtual environments. They enforce a strict
FIFO ordering on cpus trying to take a lock; however, if the hypervisor
scheduler does not schedule the cpus in the correct order, the system
can waste a huge amount of time spinning until the next cpu can take
the lock.

(See Thomas Friebel's talk "Prevent Guests from Spinning Around"
http://www.xen.org/files/xensummitboston08/LHP.pdf for more details.)

To address this, we add two hooks:

- __ticket_spin_lock, which is called after the cpu has been spinning
  on the lock for a significant number of iterations but has failed to
  take the lock (presumably because the cpu holding the lock has been
  descheduled). The lock_spinning pvop is expected to block the cpu
  until it has been kicked by the current lock holder.

- __ticket_spin_unlock, which, on releasing a contended lock (there are
  more cpus with tail tickets), looks to see if the next cpu is blocked
  and wakes it if so.

When compiled with CONFIG_PARAVIRT_SPINLOCKS disabled, a set of stub
functions causes all the extra code to go away.

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge at citrix.com>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk at oracle.com>
Tested-by: Attilio Rao <attilio.rao at citrix.com>
[ Raghavendra: Changed SPIN_THRESHOLD ]
Signed-off-by: Raghavendra K T <raghavendra.kt at linux.vnet.ibm.com>
---
 arch/x86/include/asm/paravirt.h       |   32 ++++----------------
 arch/x86/include/asm/paravirt_types.h |   10 ++----
 arch/x86/include/asm/spinlock.h       |   53 +++++++++++++++++++++++++++------
 arch/x86/include/asm/spinlock_types.h |    4 --
 arch/x86/kernel/paravirt-spinlocks.c  |   15 +--------
 arch/x86/xen/spinlock.c               |    8 ++++-
 6 files changed, 61 insertions(+), 61 deletions(-)

diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h
index cfdc9ee..040e72d 100644
--- a/arch/x86/include/asm/paravirt.h
+++ b/arch/x86/include/asm/paravirt.h
@@ -712,36 +712,16 @@ static inline void __set_fixmap(unsigned /* enum fixed_addresses */ idx,
 
 #if defined(CONFIG_SMP) && defined(CONFIG_PARAVIRT_SPINLOCKS)
 
-static inline int arch_spin_is_locked(struct arch_spinlock *lock)
+static __always_inline void __ticket_lock_spinning(struct arch_spinlock *lock,
+						__ticket_t ticket)
 {
-	return PVOP_CALL1(int, pv_lock_ops.spin_is_locked, lock);
+	PVOP_VCALL2(pv_lock_ops.lock_spinning, lock, ticket);
 }
 
-static inline int arch_spin_is_contended(struct arch_spinlock *lock)
+static __always_inline void ____ticket_unlock_kick(struct arch_spinlock *lock,
+						__ticket_t ticket)
 {
-	return PVOP_CALL1(int, pv_lock_ops.spin_is_contended, lock);
-}
-#define arch_spin_is_contended	arch_spin_is_contended
-
-static __always_inline void arch_spin_lock(struct arch_spinlock *lock)
-{
-	PVOP_VCALL1(pv_lock_ops.spin_lock, lock);
-}
-
-static __always_inline void arch_spin_lock_flags(struct arch_spinlock *lock,
-						  unsigned long flags)
-{
-	PVOP_VCALL2(pv_lock_ops.spin_lock_flags, lock, flags);
-}
-
-static __always_inline int arch_spin_trylock(struct arch_spinlock *lock)
-{
-	return PVOP_CALL1(int, pv_lock_ops.spin_trylock, lock);
-}
-
-static __always_inline void arch_spin_unlock(struct arch_spinlock *lock)
-{
-	PVOP_VCALL1(pv_lock_ops.spin_unlock, lock);
+	PVOP_VCALL2(pv_lock_ops.unlock_kick, lock, ticket);
 }
 
 #endif
diff --git a/arch/x86/include/asm/paravirt_types.h b/arch/x86/include/asm/paravirt_types.h
index 0db1fca..d5deb6d 100644
--- a/arch/x86/include/asm/paravirt_types.h
+++ b/arch/x86/include/asm/paravirt_types.h
@@ -327,13 +327,11 @@ struct pv_mmu_ops {
 };
 
 struct arch_spinlock;
+#include <asm/spinlock_types.h>
+
 struct pv_lock_ops {
-	int (*spin_is_locked)(struct arch_spinlock *lock);
-	int (*spin_is_contended)(struct arch_spinlock *lock);
-	void (*spin_lock)(struct arch_spinlock *lock);
-	void (*spin_lock_flags)(struct arch_spinlock *lock, unsigned long flags);
-	int (*spin_trylock)(struct arch_spinlock *lock);
-	void (*spin_unlock)(struct arch_spinlock *lock);
+	void (*lock_spinning)(struct arch_spinlock *lock, __ticket_t ticket);
+	void (*unlock_kick)(struct arch_spinlock *lock, __ticket_t ticket);
 };
 
 /* This contains all the paravirt structures: we get a convenient
diff --git a/arch/x86/include/asm/spinlock.h b/arch/x86/include/asm/spinlock.h
index 33692ea..4d54244 100644
--- a/arch/x86/include/asm/spinlock.h
+++ b/arch/x86/include/asm/spinlock.h
@@ -34,6 +34,35 @@
 # define UNLOCK_LOCK_PREFIX
 #endif
 
+/* How long a lock should spin before we consider blocking */
+#define SPIN_THRESHOLD	(1 << 15)
+
+#ifndef CONFIG_PARAVIRT_SPINLOCKS
+
+static __always_inline void __ticket_lock_spinning(struct arch_spinlock *lock,
+						__ticket_t ticket)
+{
+}
+
+static __always_inline void ____ticket_unlock_kick(struct arch_spinlock *lock,
+						__ticket_t ticket)
+{
+}
+
+#endif	/* CONFIG_PARAVIRT_SPINLOCKS */
+
+
+/*
+ * If a spinlock has someone waiting on it, then kick the appropriate
+ * waiting cpu.
+ */
+static __always_inline void __ticket_unlock_kick(struct arch_spinlock *lock,
+						__ticket_t next)
+{
+	if (unlikely(lock->tickets.tail != next))
+		____ticket_unlock_kick(lock, next);
+}
+
 /*
  * Ticket locks are conceptually two parts, one indicating the current head of
  * the queue, and the other indicating the current tail. The lock is acquired
@@ -47,19 +76,24 @@
  * in the high part, because a wide xadd increment of the low part would carry
  * up and contaminate the high part.
  */
-static __always_inline void __ticket_spin_lock(arch_spinlock_t *lock)
+static __always_inline void __ticket_spin_lock(struct arch_spinlock *lock)
 {
 	register struct __raw_tickets inc = { .tail = 1 };
 
 	inc = xadd(&lock->tickets, inc);
 
 	for (;;) {
-		if (inc.head == inc.tail)
-			break;
-		cpu_relax();
-		inc.head = ACCESS_ONCE(lock->tickets.head);
+		unsigned count = SPIN_THRESHOLD;
+
+		do {
+			if (inc.head == inc.tail)
+				goto out;
+			cpu_relax();
+			inc.head = ACCESS_ONCE(lock->tickets.head);
+		} while (--count);
+		__ticket_lock_spinning(lock, inc.tail);
 	}
-	barrier();		/* make sure nothing creeps before the lock is taken */
+out:	barrier();	/* make sure nothing creeps before the lock is taken */
 }
 
 static __always_inline int __ticket_spin_trylock(arch_spinlock_t *lock)
@@ -78,7 +112,10 @@ static __always_inline int __ticket_spin_trylock(arch_spinlock_t *lock)
 
 static __always_inline void __ticket_spin_unlock(arch_spinlock_t *lock)
 {
+	__ticket_t next = lock->tickets.head + 1;
+
 	__add(&lock->tickets.head, 1, UNLOCK_LOCK_PREFIX);
+	__ticket_unlock_kick(lock, next);
 }
 
 static inline int __ticket_spin_is_locked(arch_spinlock_t *lock)
@@ -95,8 +132,6 @@ static inline int __ticket_spin_is_contended(arch_spinlock_t *lock)
 	return (__ticket_t)(tmp.tail - tmp.head) > 1;
 }
 
-#ifndef CONFIG_PARAVIRT_SPINLOCKS
-
 static inline int arch_spin_is_locked(arch_spinlock_t *lock)
 {
 	return __ticket_spin_is_locked(lock);
@@ -129,8 +164,6 @@ static __always_inline void arch_spin_lock_flags(arch_spinlock_t *lock,
 	arch_spin_lock(lock);
 }
 
-#endif	/* CONFIG_PARAVIRT_SPINLOCKS */
-
 static inline void arch_spin_unlock_wait(arch_spinlock_t *lock)
 {
 	while (arch_spin_is_locked(lock))
diff --git a/arch/x86/include/asm/spinlock_types.h b/arch/x86/include/asm/spinlock_types.h
index ad0ad07..83fd3c7 100644
--- a/arch/x86/include/asm/spinlock_types.h
+++ b/arch/x86/include/asm/spinlock_types.h
@@ -1,10 +1,6 @@
 #ifndef _ASM_X86_SPINLOCK_TYPES_H
 #define _ASM_X86_SPINLOCK_TYPES_H
 
-#ifndef __LINUX_SPINLOCK_TYPES_H
-# error "please don't include this file directly"
-#endif
-
 #include <linux/types.h>
 
 #if (CONFIG_NR_CPUS < 256)
diff --git a/arch/x86/kernel/paravirt-spinlocks.c b/arch/x86/kernel/paravirt-spinlocks.c
index 676b8c7..c2e010e 100644
--- a/arch/x86/kernel/paravirt-spinlocks.c
+++ b/arch/x86/kernel/paravirt-spinlocks.c
@@ -7,21 +7,10 @@
 
 #include <asm/paravirt.h>
 
-static inline void
-default_spin_lock_flags(arch_spinlock_t *lock, unsigned long flags)
-{
-	arch_spin_lock(lock);
-}
-
 struct pv_lock_ops pv_lock_ops = {
 #ifdef CONFIG_SMP
-	.spin_is_locked = __ticket_spin_is_locked,
-	.spin_is_contended = __ticket_spin_is_contended,
-
-	.spin_lock = __ticket_spin_lock,
-	.spin_lock_flags = default_spin_lock_flags,
-	.spin_trylock = __ticket_spin_trylock,
-	.spin_unlock = __ticket_spin_unlock,
+	.lock_spinning = paravirt_nop,
+	.unlock_kick = paravirt_nop,
 #endif
 };
 EXPORT_SYMBOL(pv_lock_ops);
diff --git a/arch/x86/xen/spinlock.c b/arch/x86/xen/spinlock.c
index 3002ec1..d6481a9 100644
--- a/arch/x86/xen/spinlock.c
+++ b/arch/x86/xen/spinlock.c
@@ -138,6 +138,9 @@ struct xen_spinlock {
 	xen_spinners_t spinners;	/* count of waiting cpus */
 };
 
+static DEFINE_PER_CPU(int, lock_kicker_irq) = -1;
+
+#if 0
 static int xen_spin_is_locked(struct arch_spinlock *lock)
 {
 	struct xen_spinlock *xl = (struct xen_spinlock *)lock;
@@ -165,7 +168,6 @@ static int xen_spin_trylock(struct arch_spinlock *lock)
 	return old == 0;
 }
 
-static DEFINE_PER_CPU(int, lock_kicker_irq) = -1;
 static DEFINE_PER_CPU(struct xen_spinlock *, lock_spinners);
 
 /*
@@ -352,6 +354,7 @@ static void xen_spin_unlock(struct arch_spinlock *lock)
 	if (unlikely(xl->spinners))
 		xen_spin_unlock_slow(xl);
 }
+#endif
 
 static irqreturn_t dummy_handler(int irq, void *dev_id)
 {
@@ -413,13 +416,14 @@ void __init xen_init_spinlocks(void)
 		return;
 
 	BUILD_BUG_ON(sizeof(struct xen_spinlock) > sizeof(arch_spinlock_t));
-
+#if 0
 	pv_lock_ops.spin_is_locked = xen_spin_is_locked;
 	pv_lock_ops.spin_is_contended = xen_spin_is_contended;
 	pv_lock_ops.spin_lock = xen_spin_lock;
 	pv_lock_ops.spin_lock_flags = xen_spin_lock_flags;
 	pv_lock_ops.spin_trylock = xen_spin_trylock;
 	pv_lock_ops.spin_unlock = xen_spin_unlock;
+#endif
 }
 
 #ifdef CONFIG_XEN_DEBUG_FS
Raghavendra K T
2013-Jun-01 19:22 UTC
[PATCH RFC V9 2/19] x86/ticketlock: Don't inline _spin_unlock when using paravirt spinlocks
x86/ticketlock: Don't inline _spin_unlock when using paravirt spinlocks

From: Raghavendra K T <raghavendra.kt at linux.vnet.ibm.com>

The code size expands somewhat, and it's better to just call a function
rather than inline it.

Thanks to Jeremy for the original version of the ARCH_NOINLINE_SPIN_UNLOCK
config patch, which has been simplified here.

Suggested-by: Linus Torvalds <torvalds at linux-foundation.org>
Signed-off-by: Raghavendra K T <raghavendra.kt at linux.vnet.ibm.com>
---
 arch/x86/Kconfig |    1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 685692c..80fcc4b 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -621,6 +621,7 @@ config PARAVIRT_DEBUG
 config PARAVIRT_SPINLOCKS
 	bool "Paravirtualization layer for spinlocks"
 	depends on PARAVIRT && SMP
+	select UNINLINE_SPIN_UNLOCK
 	---help---
 	  Paravirtualized spinlocks allow a pvops backend to replace the
 	  spinlock implementation with something virtualization-friendly
Raghavendra K T
2013-Jun-01 19:22 UTC
[PATCH RFC V9 4/19] xen: Defer spinlock setup until boot CPU setup
xen: Defer spinlock setup until boot CPU setup

From: Jeremy Fitzhardinge <jeremy.fitzhardinge at citrix.com>

There's no need to do it at very early init, and doing it there makes
it impossible to use the jump_label machinery.

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge at citrix.com>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk at oracle.com>
Signed-off-by: Raghavendra K T <raghavendra.kt at linux.vnet.ibm.com>
---
 arch/x86/xen/smp.c |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/xen/smp.c b/arch/x86/xen/smp.c
index 8ff3799..dcdc91c 100644
--- a/arch/x86/xen/smp.c
+++ b/arch/x86/xen/smp.c
@@ -246,6 +246,7 @@ static void __init xen_smp_prepare_boot_cpu(void)
 
 	xen_filter_cpu_maps();
 	xen_setup_vcpu_info_placement();
+	xen_init_spinlocks();
 }
 
 static void __init xen_smp_prepare_cpus(unsigned int max_cpus)
@@ -647,7 +648,6 @@ void __init xen_smp_init(void)
 {
 	smp_ops = xen_smp_ops;
 	xen_fill_possible_map();
-	xen_init_spinlocks();
 }
 
 static void __init xen_hvm_smp_prepare_cpus(unsigned int max_cpus)
Raghavendra K T
2013-Jun-01 19:23 UTC
[PATCH RFC V9 7/19] x86/pvticketlock: Use callee-save for lock_spinning
x86/pvticketlock: Use callee-save for lock_spinning

From: Jeremy Fitzhardinge <jeremy.fitzhardinge at citrix.com>

Although the lock_spinning calls in the spinlock code are on the
uncommon path, their presence can cause the compiler to generate many
more register save/restores in the function pre/postamble, which is in
the fast path. To avoid this, convert it to using the pvops callee-save
calling convention, which defers all the save/restores until the actual
function is called, keeping the fastpath clean.

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge at citrix.com>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk at oracle.com>
Tested-by: Attilio Rao <attilio.rao at citrix.com>
Signed-off-by: Raghavendra K T <raghavendra.kt at linux.vnet.ibm.com>
---
 arch/x86/include/asm/paravirt.h       |    2 +-
 arch/x86/include/asm/paravirt_types.h |    2 +-
 arch/x86/kernel/paravirt-spinlocks.c  |    2 +-
 arch/x86/xen/spinlock.c               |    3 ++-
 4 files changed, 5 insertions(+), 4 deletions(-)

diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h
index 040e72d..7131e12c 100644
--- a/arch/x86/include/asm/paravirt.h
+++ b/arch/x86/include/asm/paravirt.h
@@ -715,7 +715,7 @@ static inline void __set_fixmap(unsigned /* enum fixed_addresses */ idx,
 static __always_inline void __ticket_lock_spinning(struct arch_spinlock *lock,
 						__ticket_t ticket)
 {
-	PVOP_VCALL2(pv_lock_ops.lock_spinning, lock, ticket);
+	PVOP_VCALLEE2(pv_lock_ops.lock_spinning, lock, ticket);
 }
 
 static __always_inline void ____ticket_unlock_kick(struct arch_spinlock *lock,
diff --git a/arch/x86/include/asm/paravirt_types.h b/arch/x86/include/asm/paravirt_types.h
index d5deb6d..350d017 100644
--- a/arch/x86/include/asm/paravirt_types.h
+++ b/arch/x86/include/asm/paravirt_types.h
@@ -330,7 +330,7 @@ struct arch_spinlock;
 #include <asm/spinlock_types.h>
 
 struct pv_lock_ops {
-	void (*lock_spinning)(struct arch_spinlock *lock, __ticket_t ticket);
+	struct paravirt_callee_save lock_spinning;
 	void (*unlock_kick)(struct arch_spinlock *lock, __ticket_t ticket);
 };
 
diff --git a/arch/x86/kernel/paravirt-spinlocks.c b/arch/x86/kernel/paravirt-spinlocks.c
index c2e010e..4251c1d 100644
--- a/arch/x86/kernel/paravirt-spinlocks.c
+++ b/arch/x86/kernel/paravirt-spinlocks.c
@@ -9,7 +9,7 @@
 
 struct pv_lock_ops pv_lock_ops = {
 #ifdef CONFIG_SMP
-	.lock_spinning = paravirt_nop,
+	.lock_spinning = __PV_IS_CALLEE_SAVE(paravirt_nop),
 	.unlock_kick = paravirt_nop,
 #endif
 };
diff --git a/arch/x86/xen/spinlock.c b/arch/x86/xen/spinlock.c
index 3de6805..5502fda 100644
--- a/arch/x86/xen/spinlock.c
+++ b/arch/x86/xen/spinlock.c
@@ -171,6 +171,7 @@ out:
 	local_irq_restore(flags);
 	spin_time_accum_blocked(start);
 }
+PV_CALLEE_SAVE_REGS_THUNK(xen_lock_spinning);
 
 static void xen_unlock_kick(struct arch_spinlock *lock, __ticket_t next)
 {
@@ -254,7 +255,7 @@ void __init xen_init_spinlocks(void)
 		return;
 	}
 
-	pv_lock_ops.lock_spinning = xen_lock_spinning;
+	pv_lock_ops.lock_spinning = PV_CALLEE_SAVE(xen_lock_spinning);
 	pv_lock_ops.unlock_kick = xen_unlock_kick;
 }
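For what it's worth, a backend other than Xen hooks in through the same
callee-save convention; roughly as in the sketch below (kvm_lock_spinning and
kvm_unlock_kick are assumed names used for illustration, not quoted from the
later KVM patch):

	PV_CALLEE_SAVE_REGS_THUNK(kvm_lock_spinning);

	void __init kvm_spinlock_init(void)
	{
		if (!kvm_para_available())
			return;

		pv_lock_ops.lock_spinning = PV_CALLEE_SAVE(kvm_lock_spinning);
		pv_lock_ops.unlock_kick = kvm_unlock_kick;
	}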
Raghavendra K T
2013-Jun-01 19:24 UTC
[PATCH RFC V9 8/19] x86/pvticketlock: When paravirtualizing ticket locks, increment by 2
x86/pvticketlock: When paravirtualizing ticket locks, increment by 2

From: Jeremy Fitzhardinge <jeremy.fitzhardinge at citrix.com>

Increment ticket head/tails by 2 rather than 1 to leave the LSB free to
store a "is in slowpath state" bit. This halves the number of possible
CPUs for a given ticket size, but this shouldn't matter in practice -
kernels built for 32k+ CPU systems are probably specially built for the
hardware rather than a generic distro kernel.

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge at citrix.com>
Tested-by: Attilio Rao <attilio.rao at citrix.com>
Signed-off-by: Raghavendra K T <raghavendra.kt at linux.vnet.ibm.com>
---
 arch/x86/include/asm/spinlock.h       |   10 +++++-----
 arch/x86/include/asm/spinlock_types.h |   10 +++++++++-
 2 files changed, 14 insertions(+), 6 deletions(-)

diff --git a/arch/x86/include/asm/spinlock.h b/arch/x86/include/asm/spinlock.h
index 7442410..04a5cd5 100644
--- a/arch/x86/include/asm/spinlock.h
+++ b/arch/x86/include/asm/spinlock.h
@@ -78,7 +78,7 @@ static __always_inline void __ticket_unlock_kick(struct arch_spinlock *lock,
  */
 static __always_inline void arch_spin_lock(struct arch_spinlock *lock)
 {
-	register struct __raw_tickets inc = { .tail = 1 };
+	register struct __raw_tickets inc = { .tail = TICKET_LOCK_INC };
 
 	inc = xadd(&lock->tickets, inc);
 
@@ -104,7 +104,7 @@ static __always_inline int arch_spin_trylock(arch_spinlock_t *lock)
 	if (old.tickets.head != old.tickets.tail)
 		return 0;
 
-	new.head_tail = old.head_tail + (1 << TICKET_SHIFT);
+	new.head_tail = old.head_tail + (TICKET_LOCK_INC << TICKET_SHIFT);
 
 	/* cmpxchg is a full barrier, so nothing can move before it */
 	return cmpxchg(&lock->head_tail, old.head_tail, new.head_tail) == old.head_tail;
@@ -112,9 +112,9 @@ static __always_inline int arch_spin_trylock(arch_spinlock_t *lock)
 
 static __always_inline void arch_spin_unlock(arch_spinlock_t *lock)
 {
-	__ticket_t next = lock->tickets.head + 1;
+	__ticket_t next = lock->tickets.head + TICKET_LOCK_INC;
 
-	__add(&lock->tickets.head, 1, UNLOCK_LOCK_PREFIX);
+	__add(&lock->tickets.head, TICKET_LOCK_INC, UNLOCK_LOCK_PREFIX);
 	__ticket_unlock_kick(lock, next);
 }
 
@@ -129,7 +129,7 @@ static inline int arch_spin_is_contended(arch_spinlock_t *lock)
 {
 	struct __raw_tickets tmp = ACCESS_ONCE(lock->tickets);
 
-	return (__ticket_t)(tmp.tail - tmp.head) > 1;
+	return (__ticket_t)(tmp.tail - tmp.head) > TICKET_LOCK_INC;
 }
 #define arch_spin_is_contended	arch_spin_is_contended
 
diff --git a/arch/x86/include/asm/spinlock_types.h b/arch/x86/include/asm/spinlock_types.h
index 83fd3c7..e96fcbd 100644
--- a/arch/x86/include/asm/spinlock_types.h
+++ b/arch/x86/include/asm/spinlock_types.h
@@ -3,7 +3,13 @@
 
 #include <linux/types.h>
 
-#if (CONFIG_NR_CPUS < 256)
+#ifdef CONFIG_PARAVIRT_SPINLOCKS
+#define __TICKET_LOCK_INC	2
+#else
+#define __TICKET_LOCK_INC	1
+#endif
+
+#if (CONFIG_NR_CPUS < (256 / __TICKET_LOCK_INC))
 typedef u8  __ticket_t;
 typedef u16 __ticketpair_t;
 #else
@@ -11,6 +17,8 @@ typedef u16 __ticket_t;
 typedef u32 __ticketpair_t;
 #endif
 
+#define TICKET_LOCK_INC	((__ticket_t)__TICKET_LOCK_INC)
+
 #define TICKET_SHIFT	(sizeof(__ticket_t) * 8)
 
 typedef struct arch_spinlock {
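To make the encoding concrete (illustrative only; TICKET_SLOWPATH_FLAG and the
helper below are introduced/assumed by the slowpath-logic patch that follows,
not by this one): with an increment of 2, ticket values advance 0, 2, 4, ...,
so bit 0 of the tail is never part of a ticket number and is free to carry the
"in slowpath" flag:

	#define TICKET_SLOWPATH_FLAG	((__ticket_t)1)

	/* set by the backend's lock_spinning handler, tested at unlock time */
	static inline bool __ticket_in_slowpath(arch_spinlock_t *lock)
	{
		return !!(ACCESS_ONCE(lock->tickets.tail) & TICKET_SLOWPATH_FLAG);
	}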
Raghavendra K T
2013-Jun-01 19:25 UTC
[PATCH RFC V9 15/19] kvm guest : Add configuration support to enable debug information for KVM Guests
kvm guest : Add configuration support to enable debug information for KVM Guests

From: Srivatsa Vaddagiri <vatsa at linux.vnet.ibm.com>

Signed-off-by: Srivatsa Vaddagiri <vatsa at linux.vnet.ibm.com>
Signed-off-by: Suzuki Poulose <suzuki at in.ibm.com>
Signed-off-by: Raghavendra K T <raghavendra.kt at linux.vnet.ibm.com>
---
 arch/x86/Kconfig |    9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 80fcc4b..f8ff42d 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -646,6 +646,15 @@ config KVM_GUEST
 	  underlying device model, the host provides the guest with
 	  timing infrastructure such as time of day, and system time
 
+config KVM_DEBUG_FS
+	bool "Enable debug information for KVM Guests in debugfs"
+	depends on KVM_GUEST && DEBUG_FS
+	default n
+	---help---
+	  This option enables collection of various statistics for KVM guest.
+	  Statistics are displayed in debugfs filesystem. Enabling this option
+	  may incur significant overhead.
+
 source "arch/x86/lguest/Kconfig"
 
 config PARAVIRT_TIME_ACCOUNTING
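A minimal sketch of what this option enables on the guest side (the directory
and counter names here are hypothetical, not the exact statistics added by
the later KVM guest patch):

	#ifdef CONFIG_KVM_DEBUG_FS
	#include <linux/debugfs.h>

	static u32 kick_stats;	/* e.g. bumped whenever a vcpu gets kicked */

	static int __init kvm_spinlock_debugfs(void)
	{
		struct dentry *d = debugfs_create_dir("kvm-guest", NULL);

		if (!IS_ERR_OR_NULL(d))
			debugfs_create_u32("kicked", 0444, d, &kick_stats);
		return 0;
	}
	fs_initcall(kvm_spinlock_debugfs);
	#endif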
Raghavendra K T
2013-Jun-01 19:26 UTC
[PATCH RFC V9 18/19] Documentation/kvm : Add documentation on Hypercalls and features used for PV spinlock
Documentation/kvm : Add documentation on Hypercalls and features used for PV spinlock

From: Raghavendra K T <raghavendra.kt at linux.vnet.ibm.com>

The KVM_HC_KICK_CPU hypercall is added to wake up a halted vcpu in a
paravirtual spinlock enabled guest. KVM_FEATURE_PV_UNHALT lets the guest
check whether pv spinlocks can be enabled in the guest.

Thanks to Vatsa for rewriting KVM_HC_KICK_CPU.

Signed-off-by: Srivatsa Vaddagiri <vatsa at linux.vnet.ibm.com>
Signed-off-by: Raghavendra K T <raghavendra.kt at linux.vnet.ibm.com>
---
 Documentation/virtual/kvm/cpuid.txt      |    4 ++++
 Documentation/virtual/kvm/hypercalls.txt |   13 +++++++++++++
 2 files changed, 17 insertions(+)

diff --git a/Documentation/virtual/kvm/cpuid.txt b/Documentation/virtual/kvm/cpuid.txt
index 83afe65..654f43c 100644
--- a/Documentation/virtual/kvm/cpuid.txt
+++ b/Documentation/virtual/kvm/cpuid.txt
@@ -43,6 +43,10 @@ KVM_FEATURE_CLOCKSOURCE2           ||     3 || kvmclock available at msrs
 KVM_FEATURE_ASYNC_PF               ||     4 || async pf can be enabled by
                                    ||       || writing to msr 0x4b564d02
 ------------------------------------------------------------------------------
+KVM_FEATURE_PV_UNHALT              ||     6 || guest checks this feature bit
+                                   ||       || before enabling paravirtualized
+                                   ||       || spinlock support.
+------------------------------------------------------------------------------
 KVM_FEATURE_CLOCKSOURCE_STABLE_BIT ||    24 || host will warn if no guest-side
                                    ||       || per-cpu warps are expected in
                                    ||       || kvmclock.
diff --git a/Documentation/virtual/kvm/hypercalls.txt b/Documentation/virtual/kvm/hypercalls.txt
index ea113b5..2a4da11 100644
--- a/Documentation/virtual/kvm/hypercalls.txt
+++ b/Documentation/virtual/kvm/hypercalls.txt
@@ -64,3 +64,16 @@ Purpose: To enable communication between the hypervisor and guest there is a
 shared page that contains parts of supervisor visible register state.
 The guest can map this shared page to access its supervisor register through
 memory using this hypercall.
+
+5. KVM_HC_KICK_CPU
+------------------------
+Architecture: x86
+Status: active
+Purpose: Hypercall used to wakeup a vcpu from HLT state
+Usage example : A vcpu of a paravirtualized guest that is busywaiting in guest
+kernel mode for an event to occur (ex: a spinlock to become available) can
+execute HLT instruction once it has busy-waited for more than a threshold
+time-interval. Execution of HLT instruction would cause the hypervisor to put
+the vcpu to sleep until occurrence of an appropriate event. Another vcpu of the
+same guest can wakeup the sleeping vcpu by issuing KVM_HC_KICK_CPU hypercall,
+specifying APIC ID of the vcpu to be woken up.
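For illustration, the guest side of this hypercall (as used by the KVM
ticketlock patch elsewhere in this series) boils down to something like the
following sketch; kvm_kick_cpu is an assumed helper name:

	static void kvm_kick_cpu(int cpu)
	{
		unsigned long flags = 0;
		int apicid = per_cpu(x86_cpu_to_apicid, cpu);

		kvm_hypercall2(KVM_HC_KICK_CPU, flags, apicid);
	}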
Raghavendra K T
2013-Jun-01 19:26 UTC
[PATCH RFC V9 19/19] kvm hypervisor: Add directed yield in vcpu block path
kvm hypervisor: Add directed yield in vcpu block path

From: Raghavendra K T <raghavendra.kt at linux.vnet.ibm.com>

We use the improved PLE handler logic in the vcpu block path for
scheduling, rather than a plain schedule(), so that we can make
intelligent decisions about whom to yield to.

Signed-off-by: Raghavendra K T <raghavendra.kt at linux.vnet.ibm.com>
---
 arch/ia64/include/asm/kvm_host.h    |    5 +++++
 arch/powerpc/include/asm/kvm_host.h |    5 +++++
 arch/s390/include/asm/kvm_host.h    |    5 +++++
 arch/x86/include/asm/kvm_host.h     |    2 +-
 arch/x86/kvm/x86.c                  |    8 ++++++++
 include/linux/kvm_host.h            |    2 +-
 virt/kvm/kvm_main.c                 |    6 ++++--
 7 files changed, 29 insertions(+), 4 deletions(-)

diff --git a/arch/ia64/include/asm/kvm_host.h b/arch/ia64/include/asm/kvm_host.h
index 989dd3f..999ab15 100644
--- a/arch/ia64/include/asm/kvm_host.h
+++ b/arch/ia64/include/asm/kvm_host.h
@@ -595,6 +595,11 @@ int kvm_emulate_halt(struct kvm_vcpu *vcpu);
 int kvm_pal_emul(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run);
 void kvm_sal_emul(struct kvm_vcpu *vcpu);
 
+static inline void kvm_do_schedule(struct kvm_vcpu *vcpu)
+{
+	schedule();
+}
+
 #define __KVM_HAVE_ARCH_VM_ALLOC 1
 struct kvm *kvm_arch_alloc_vm(void);
 void kvm_arch_free_vm(struct kvm *kvm);
diff --git a/arch/powerpc/include/asm/kvm_host.h b/arch/powerpc/include/asm/kvm_host.h
index af326cd..1aeecc0 100644
--- a/arch/powerpc/include/asm/kvm_host.h
+++ b/arch/powerpc/include/asm/kvm_host.h
@@ -628,4 +628,9 @@ struct kvm_vcpu_arch {
 #define __KVM_HAVE_ARCH_WQP
 #define __KVM_HAVE_CREATE_DEVICE
 
+static inline void kvm_do_schedule(struct kvm_vcpu *vcpu)
+{
+	schedule();
+}
+
 #endif /* __POWERPC_KVM_HOST_H__ */
diff --git a/arch/s390/include/asm/kvm_host.h b/arch/s390/include/asm/kvm_host.h
index 16bd5d1..db09a56 100644
--- a/arch/s390/include/asm/kvm_host.h
+++ b/arch/s390/include/asm/kvm_host.h
@@ -266,4 +266,9 @@ struct kvm_arch{
 };
 
 extern int sie64a(struct kvm_s390_sie_block *, u64 *);
+static inline void kvm_do_schedule(struct kvm_vcpu *vcpu)
+{
+	schedule();
+}
+
 #endif
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 95702de..72ff791 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1042,5 +1042,5 @@ int kvm_pmu_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info);
 int kvm_pmu_read_pmc(struct kvm_vcpu *vcpu, unsigned pmc, u64 *data);
 void kvm_handle_pmu_event(struct kvm_vcpu *vcpu);
 void kvm_deliver_pmi(struct kvm_vcpu *vcpu);
-
+void kvm_do_schedule(struct kvm_vcpu *vcpu);
 #endif /* _ASM_X86_KVM_HOST_H */
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index b963c86..d26c4be 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -7281,6 +7281,14 @@ bool kvm_arch_can_inject_async_page_present(struct kvm_vcpu *vcpu)
 		kvm_x86_ops->interrupt_allowed(vcpu);
 }
 
+void kvm_do_schedule(struct kvm_vcpu *vcpu)
+{
+	/* We try to yield to a kicked vcpu else do a schedule */
+	if (kvm_vcpu_on_spin(vcpu) <= 0)
+		schedule();
+}
+EXPORT_SYMBOL_GPL(kvm_do_schedule);
+
 EXPORT_TRACEPOINT_SYMBOL_GPL(kvm_exit);
 EXPORT_TRACEPOINT_SYMBOL_GPL(kvm_inj_virq);
 EXPORT_TRACEPOINT_SYMBOL_GPL(kvm_page_fault);
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index f0eea07..39efc18 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -565,7 +565,7 @@ void mark_page_dirty_in_slot(struct kvm *kvm, struct kvm_memory_slot *memslot,
 void kvm_vcpu_block(struct kvm_vcpu *vcpu);
 void kvm_vcpu_kick(struct kvm_vcpu *vcpu);
 bool kvm_vcpu_yield_to(struct kvm_vcpu *target);
-void kvm_vcpu_on_spin(struct kvm_vcpu *vcpu);
+bool kvm_vcpu_on_spin(struct kvm_vcpu *vcpu);
 void kvm_resched(struct kvm_vcpu *vcpu);
 void kvm_load_guest_fpu(struct kvm_vcpu *vcpu);
 void kvm_put_guest_fpu(struct kvm_vcpu *vcpu);
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 302681c..8387247 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -1685,7 +1685,7 @@ void kvm_vcpu_block(struct kvm_vcpu *vcpu)
 		if (signal_pending(current))
 			break;
 
-		schedule();
+		kvm_do_schedule(vcpu);
 	}
 
 	finish_wait(&vcpu->wq, &wait);
@@ -1786,7 +1786,7 @@ bool kvm_vcpu_eligible_for_directed_yield(struct kvm_vcpu *vcpu)
 }
 #endif
 
-void kvm_vcpu_on_spin(struct kvm_vcpu *me)
+bool kvm_vcpu_on_spin(struct kvm_vcpu *me)
 {
 	struct kvm *kvm = me->kvm;
 	struct kvm_vcpu *vcpu;
@@ -1835,6 +1835,8 @@ void kvm_vcpu_on_spin(struct kvm_vcpu *me)
 
 	/* Ensure vcpu is not eligible during next spinloop */
 	kvm_vcpu_set_dy_eligible(me, false);
+
+	return yielded;
 }
 EXPORT_SYMBOL_GPL(kvm_vcpu_on_spin);
FWIW I use the paravirt spinlock ops for adding lock elision to the
spinlocks. This needs to be done at the top level (so the level you're
removing).

However, I don't like the pv mechanism very much and would be fine with
using a static key hook in the main path, like I do for all the other lock
types. It also uses interrupt ops patching; for that, the pv mechanism would
still be needed though.

-Andi
-- 
ak at linux.intel.com -- Speaking for myself only.
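A rough sketch of the static-key style hook being described (lock_elision_enabled
and elide_lock() are hypothetical names for illustration, not existing kernel
symbols):

	static struct static_key lock_elision_enabled = STATIC_KEY_INIT_FALSE;

	static __always_inline void arch_spin_lock(arch_spinlock_t *lock)
	{
		if (static_key_false(&lock_elision_enabled) && elide_lock(lock))
			return;			/* lock speculatively elided */

		__ticket_spin_lock(lock);	/* normal ticket lock path */
	}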