Raghavendra K T
2014-Feb-27 15:22 UTC
[PATCH RFC v5 7/8] pvqspinlock, x86: Add qspinlock para-virtualization support
On 02/27/2014 08:15 PM, Paolo Bonzini wrote:
[...]
>> But neither of the VCPUs being kicked here are halted -- they're either
>> running or runnable (descheduled by the hypervisor).
>
> /me actually looks at Waiman's code...
>
> Right, this is really different from pvticketlocks, where the *unlock*
> primitive wakes up a sleeping VCPU.  It is more similar to PLE
> (pause-loop exiting).

Adding to the discussion, I see there are two possibilities here,
considering that in undercommit cases we should not exceed
HEAD_SPIN_THRESHOLD:

1. the looping vcpu in pv_head_spin_check() should do halt(),
   considering that we have done enough spinning (more than typical
   lock-hold time), and hence we are in potential overcommit.

2. multiplex kick_cpu to do directed yield in the qspinlock case.
   But this may result in some ping-ponging?
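For concreteness, a minimal sketch of option 1: the queue head gives up
and halts once it has spun past HEAD_SPIN_THRESHOLD.  The helper name
pv_halt_cpu() and the shape of the check are assumptions for
illustration only; this is not the code from the patch series.

	/*
	 * Sketch only -- pv_halt_cpu() is an assumed name for the pv
	 * backend's halt primitive, not a function from the series.
	 * The head spins up to HEAD_SPIN_THRESHOLD iterations (roughly
	 * a typical lock-hold time); past that, we assume overcommit
	 * and sleep until the lock holder kicks us.
	 */
	static void pv_head_spin_check(int *count)
	{
		if (++(*count) < HEAD_SPIN_THRESHOLD)
			return;				/* keep spinning */

		*count = 0;				/* reset for the next round */
		pv_halt_cpu(smp_processor_id());	/* woken by kick_cpu() */
	}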
Paolo Bonzini
2014-Feb-27 15:50 UTC
[PATCH RFC v5 7/8] pvqspinlock, x86: Add qspinlock para-virtualization support
On 27/02/2014 16:22, Raghavendra K T wrote:
> On 02/27/2014 08:15 PM, Paolo Bonzini wrote:
> [...]
>>> But neither of the VCPUs being kicked here are halted -- they're either
>>> running or runnable (descheduled by the hypervisor).
>>
>> /me actually looks at Waiman's code...
>>
>> Right, this is really different from pvticketlocks, where the *unlock*
>> primitive wakes up a sleeping VCPU.  It is more similar to PLE
>> (pause-loop exiting).
>
> Adding to the discussion, I see there are two possibilities here,
> considering that in undercommit cases we should not exceed
> HEAD_SPIN_THRESHOLD:
>
> 1. the looping vcpu in pv_head_spin_check() should do halt(),
>    considering that we have done enough spinning (more than typical
>    lock-hold time), and hence we are in potential overcommit.
>
> 2. multiplex kick_cpu to do directed yield in the qspinlock case.
>    But this may result in some ping-ponging?

Actually, I think the qspinlock can work roughly the same as the
pvticketlock, using the same lock_spinning and unlock_lock hooks.

The x86-specific codepath can use bit 1 in the ->wait byte as "I have
halted, please kick me":

	value = _QSPINLOCK_WAITING;
	i = 0;
	do
		cpu_relax();
	while (ACCESS_ONCE(slock->lock) && i++ < BUSY_WAIT);

	if (ACCESS_ONCE(slock->lock)) {
		value |= _QSPINLOCK_HALTED;
		xchg(&slock->wait, value >> 8);
		if (ACCESS_ONCE(slock->lock)) {
			... call lock_spinning hook ...
		}
	}

	/*
	 * Set the lock bit & clear the halted+waiting bits.
	 */
	if (cmpxchg(&slock->lock_wait, value, _QSPINLOCK_LOCKED) == value)
		return -1;	/* Got the lock */

	__atomic_and(&slock->lock_wait, ~_QSPINLOCK_HALTED);

The lock_spinning/unlock_lock code can probably be much simpler,
because you do not need to keep a list of all spinning locks.
Unlock_lock can just use the CPU number to wake up the right CPU.

Paolo
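Filling in the other half of this sketch, under the same layout
assumptions: an unlock path that releases the lock byte and kicks the
waiter only if it advertised the halted bit.  pv_kick_cpu() and the
waiter_cpu field are invented names for illustration, not part of the
posted patches.

	/*
	 * Hypothetical unlock side for the sketch above.  pv_kick_cpu()
	 * and the waiter_cpu field are invented for illustration.  The
	 * halted bit lives in the high byte of lock_wait, so within the
	 * ->wait byte it is _QSPINLOCK_HALTED >> 8.
	 */
	static void queue_spin_unlock_slow(struct qspinlock *slock)
	{
		u8 wait;

		ACCESS_ONCE(slock->lock) = 0;	/* release the lock byte */
		smp_mb();			/* order release vs. wait-byte read */

		wait = ACCESS_ONCE(slock->wait);
		if (wait & (_QSPINLOCK_HALTED >> 8))	/* waiter asleep? */
			pv_kick_cpu(ACCESS_ONCE(slock->waiter_cpu));
	}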
Waiman Long
2014-Feb-27 20:50 UTC
[PATCH RFC v5 7/8] pvqspinlock, x86: Add qspinlock para-virtualization support
On 02/27/2014 10:22 AM, Raghavendra K T wrote:
> On 02/27/2014 08:15 PM, Paolo Bonzini wrote:
> [...]
>>> But neither of the VCPUs being kicked here are halted -- they're either
>>> running or runnable (descheduled by the hypervisor).
>>
>> /me actually looks at Waiman's code...
>>
>> Right, this is really different from pvticketlocks, where the *unlock*
>> primitive wakes up a sleeping VCPU.  It is more similar to PLE
>> (pause-loop exiting).
>
> Adding to the discussion, I see there are two possibilities here,
> considering that in undercommit cases we should not exceed
> HEAD_SPIN_THRESHOLD:
>
> 1. the looping vcpu in pv_head_spin_check() should do halt(),
>    considering that we have done enough spinning (more than typical
>    lock-hold time), and hence we are in potential overcommit.
>
> 2. multiplex kick_cpu to do directed yield in the qspinlock case.
>    But this may result in some ping-ponging?

In the current code, the lock holder can't easily locate the CPU # of
the queue head when in the unlock path.  That is why I try to keep the
queue head alive as long as possible, so that it can take over as soon
as the lock is freed.

I am trying out new code that lets the waiting CPUs, other than the
first two, go into halt, to see if that helps the overcommit case.

-Longman
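A rough sketch of what halting the later queue waiters might look like,
with an invented node layout (the real v5 queue node differs): waiters
beyond the first two publish a halted flag and sleep until their
predecessor kicks them on handoff.  pv_halt_cpu() and pv_kick_cpu() are
again assumed names.

	/*
	 * Illustrative only -- invented node layout, not the v5 code.
	 * Queue waiters beyond the first two halt instead of spinning;
	 * the predecessor checks the halted flag on handoff and kicks.
	 */
	struct pv_qnode {
		struct pv_qnode	*next;
		int		cpu;	/* CPU owning this node */
		u8		wait;	/* cleared by predecessor on handoff */
		u8		halted;	/* set before going to sleep */
	};

	static void pv_queue_wait(struct pv_qnode *node, int depth)
	{
		while (ACCESS_ONCE(node->wait)) {
			if (depth <= 2) {	/* first two waiters spin */
				cpu_relax();
				continue;
			}
			ACCESS_ONCE(node->halted) = 1;
			smp_mb();
			if (ACCESS_ONCE(node->wait))	/* recheck, then sleep */
				pv_halt_cpu(node->cpu);
			ACCESS_ONCE(node->halted) = 0;
		}
	}

	static void pv_queue_handoff(struct pv_qnode *next)
	{
		ACCESS_ONCE(next->wait) = 0;	/* pass the MCS lock on */
		smp_mb();
		if (ACCESS_ONCE(next->halted))	/* successor asleep? */
			pv_kick_cpu(next->cpu);
	}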
David Vrabel
2014-Mar-03 11:06 UTC
[Xen-devel] [PATCH RFC v5 7/8] pvqspinlock, x86: Add qspinlock para-virtualization support
On 27/02/14 15:50, Paolo Bonzini wrote:
> On 27/02/2014 16:22, Raghavendra K T wrote:
>> On 02/27/2014 08:15 PM, Paolo Bonzini wrote:
>> [...]
>>>> But neither of the VCPUs being kicked here are halted -- they're either
>>>> running or runnable (descheduled by the hypervisor).
>>>
>>> /me actually looks at Waiman's code...
>>>
>>> Right, this is really different from pvticketlocks, where the *unlock*
>>> primitive wakes up a sleeping VCPU.  It is more similar to PLE
>>> (pause-loop exiting).
>>
>> Adding to the discussion, I see there are two possibilities here,
>> considering that in undercommit cases we should not exceed
>> HEAD_SPIN_THRESHOLD:
>>
>> 1. the looping vcpu in pv_head_spin_check() should do halt(),
>>    considering that we have done enough spinning (more than typical
>>    lock-hold time), and hence we are in potential overcommit.
>>
>> 2. multiplex kick_cpu to do directed yield in the qspinlock case.
>>    But this may result in some ping-ponging?
>
> Actually, I think the qspinlock can work roughly the same as the
> pvticketlock, using the same lock_spinning and unlock_lock hooks.

This is the approach I would like to see.  It would also work for Xen
PV guests: the current implementation depends on hardware PLE, which
Xen PV guests do not support, and I'm not sure whether other
architectures have anything similar (e.g., do ARM's virtualization
extensions have PLE?).

David