2014-05-07 11:01-0400, Waiman Long:
> From: Peter Zijlstra <peterz at infradead.org>
>
> Because the qspinlock needs to touch a second cacheline; add a pending
> bit and allow a single in-word spinner before we punt to the second
> cacheline.

I think there is an unwanted scenario on virtual machines:
 1) VCPU sets the pending bit and starts spinning.
 2) Pending VCPU gets descheduled.
     - we have PLE and lock holder isn't running [1]
     - the hypervisor randomly preempts us
 3) Lock holder unlocks while pending VCPU is waiting in queue.
 4) Subsequent lockers will see a free lock with the pending bit set and
    will loop in trylock's 'for (;;)'
     - the worst case is lock starvation [2]
     - PLE can save us from wasting the whole timeslice

A retry threshold is the easiest solution, regardless of its ugliness [4].

Another minor design flaw is that the formerly first VCPU gets appended
to the tail when it decides to queue [3];
is the performance gain worth it?

Thanks.

---
1: Pause Loop Exiting is almost certain to vmexit in that case: we
   default to 4096 TSC cycles on KVM, and the pending loop is longer
   than 4 (4096/PSPIN_THRESHOLD).
   We would also vmexit if the critical section was longer than 4k.

2: In this example, VCPUs 1 and 2 use the lock while 3 never gets there.

   VCPU:        1           2           3
              lock()                           // we are the holder
                          pend()               // we have the pending bit
                          vmexit               // while in PSPIN_THRESHOLD loop
              unlock()
                                      vmentry
                                      SPINNING // for (;;) loop
                                      vmexit
                          vmentry
                          lock()
              pend()
              vmexit
                          unlock()
                                      vmentry
                                      SPINNING
                                      vmexit
              vmentry
   --- loop ---

   The window is (should be) too small for this to happen on bare metal.

3: The pending VCPU was first in line, but when it decides to queue, it
   must go to the tail.

4: The idea is to prevent unfairness by queueing after a while of
   useless looping.  The magic value should be set a bit above the time
   it takes an active pending-bit holder to go through the loop; 4 looks
   enough.  We can use either pv_qspinlock_enabled() or
   cpu_has_hypervisor.
   I presume that we never want this to happen in a VM and that we won't
   have pv_qspinlock_enabled() without cpu_has_hypervisor.

diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
index 37b5c7f..cd45c27 100644
--- a/kernel/locking/qspinlock.c
+++ b/kernel/locking/qspinlock.c
@@ -573,7 +573,7 @@ static __always_inline int get_qlock(struct qspinlock *lock)
 static inline int trylock_pending(struct qspinlock *lock, u32 *pval)
 {
         u32 old, new, val = *pval;
-        int retry = 1;
+        int retry = 0;
 
         /*
          * trylock || pending
@@ -595,9 +595,9 @@ static inline int trylock_pending(struct qspinlock *lock, u32 *pval)
                  * a while to see if that either bit will be cleared.
                  * If that is no change, we return and be queued.
                  */
-                if (!retry)
+                if (retry)
                         return 0;
-                retry--;
+                retry++;
                 cpu_relax();
                 cpu_relax();
                 *pval = val = atomic_read(&lock->val);
@@ -608,7 +608,11 @@ static inline int trylock_pending(struct qspinlock *lock, u32 *pval)
                  * Assuming that the pending bit holder is going to
                  * set the lock bit and clear the pending bit soon,
                  * it is better to wait than to exit at this point.
+                 * Our assumption does not hold on hypervisors, where
+                 * the pending bit holder doesn't have to be running.
                  */
+                if (cpu_has_hypervisor && ++retry > MAGIC)
+                        return 0;
                 cpu_relax();
                 *pval = val = atomic_read(&lock->val);
                 continue;
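
Outside of the diff context, the intent of the bounded wait can be shown
with a small self-contained userspace toy.  MAGIC, has_hypervisor and the
two-bit lock word below are stand-ins invented for this sketch; it is not
the kernel code.

/* pending_wait_sketch.c: toy model of the bounded wait in the diff above. */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

#define LOCKED   0x01u          /* stands in for the locked byte */
#define PENDING  0x02u          /* stands in for the pending bit */
#define MAGIC    4              /* retry threshold; "4 looks enough", see [4] */

static atomic_uint lockval;
static const bool has_hypervisor = true;   /* stand-in for cpu_has_hypervisor */

/*
 * Models step 4) above: the lock word shows "free but pending".  On bare
 * metal we expect the pending holder to take the lock almost immediately,
 * so we just re-read; in a VM that holder may be descheduled, so we stop
 * after MAGIC re-reads and tell the caller to queue (return 0) instead of
 * spinning away a whole timeslice.
 */
static int wait_for_pending_holder(void)
{
        unsigned int val = atomic_load(&lockval);
        int retry = 0;

        while ((val & PENDING) && !(val & LOCKED)) {
                if (has_hypervisor && ++retry > MAGIC)
                        return 0;               /* give up, go queue */
                /* cpu_relax() here in the kernel */
                val = atomic_load(&lockval);
        }
        return 1;                               /* state changed, re-evaluate */
}

int main(void)
{
        atomic_store(&lockval, PENDING);        /* orphaned pending bit */
        printf("queue instead of spinning: %s\n",
               wait_for_pending_holder() ? "no" : "yes");
        return 0;
}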
On Mon, May 12, 2014 at 05:22:08PM +0200, Radim Krčmář wrote:
> 2014-05-07 11:01-0400, Waiman Long:
> > From: Peter Zijlstra <peterz at infradead.org>
> >
> > Because the qspinlock needs to touch a second cacheline; add a pending
> > bit and allow a single in-word spinner before we punt to the second
> > cacheline.
>
> I think there is an unwanted scenario on virtual machines:
>  1) VCPU sets the pending bit and starts spinning.
>  2) Pending VCPU gets descheduled.
>      - we have PLE and lock holder isn't running [1]
>      - the hypervisor randomly preempts us
>  3) Lock holder unlocks while pending VCPU is waiting in queue.
>  4) Subsequent lockers will see a free lock with the pending bit set and
>     will loop in trylock's 'for (;;)'
>      - the worst case is lock starvation [2]
>      - PLE can save us from wasting the whole timeslice
>
> A retry threshold is the easiest solution, regardless of its ugliness [4].
>
> Another minor design flaw is that the formerly first VCPU gets appended
> to the tail when it decides to queue [3];
> is the performance gain worth it?

This is all for real hardware; I've not yet stared at the (para)virt
crap.  My primary concern is that native hardware runs well and that
the (para)virt support doesn't wreck the code -- so far it's failing
hard on the second.
On 05/12/2014 11:22 AM, Radim Krčmář wrote:
> 2014-05-07 11:01-0400, Waiman Long:
>> From: Peter Zijlstra <peterz at infradead.org>
>>
>> Because the qspinlock needs to touch a second cacheline; add a pending
>> bit and allow a single in-word spinner before we punt to the second
>> cacheline.
>
> I think there is an unwanted scenario on virtual machines:
> 1) VCPU sets the pending bit and starts spinning.
> 2) Pending VCPU gets descheduled.
>     - we have PLE and lock holder isn't running [1]
>     - the hypervisor randomly preempts us
> 3) Lock holder unlocks while pending VCPU is waiting in queue.
> 4) Subsequent lockers will see a free lock with the pending bit set and
>    will loop in trylock's 'for (;;)'
>     - the worst case is lock starvation [2]
>     - PLE can save us from wasting the whole timeslice
>
> A retry threshold is the easiest solution, regardless of its ugliness [4].

Yes, that can be a real issue.  Some sort of retry threshold, as you
said, should be able to handle it.

BTW, the relevant patch should be 16/19, where the PV spinlock stuff
should be discussed.  This patch is perfectly fine.

> Another minor design flaw is that the formerly first VCPU gets appended
> to the tail when it decides to queue [3];
> is the performance gain worth it?
>
> Thanks.

Yes, the performance gain is worth it.  The primary goal is to be no
worse than the ticket spinlock in the light-load situation, which is
the most common case.  This feature is needed to achieve that.

-Longman
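
To make the light-load argument concrete: the zero-waiter and one-waiter
cases are meant to stay entirely within the lock word.  The userspace toy
below sketches just those two tiers; the bit names, the missing tail field
and the missing queue fallback are simplifications for illustration, not
the code from this series.

/* light_load_sketch.c: why the pending bit keeps light load cheap. */
#include <stdatomic.h>
#include <stdbool.h>

#define Q_LOCKED   0x01u
#define Q_PENDING  0x02u

/* Handles the no-contention and single-waiter cases entirely within one
 * lock word (one cacheline); returns false when a queue would be needed. */
static bool lock_word_only(atomic_uint *lock)
{
        unsigned int old = 0;

        /* Tier 1: uncontended -- a single cmpxchg on the lock word. */
        if (atomic_compare_exchange_strong(lock, &old, Q_LOCKED))
                return true;

        /* Tier 2: first waiter -- claim the pending slot and spin on the
         * cacheline we already have, instead of touching a queue node. */
        old = atomic_fetch_or(lock, Q_PENDING);
        if (old & Q_PENDING)
                return false;   /* second waiter: the real code queues here */

        while (atomic_load(lock) & Q_LOCKED)
                ;               /* cpu_relax() here in the kernel */

        /* In this toy only one thread can hold the pending slot, so the
         * word is exactly Q_PENDING now; move from pending to locked. */
        atomic_store(lock, Q_LOCKED);
        return true;
}

static void unlock_word(atomic_uint *lock)
{
        atomic_fetch_and(lock, ~Q_LOCKED);      /* leave any pending bit alone */
}

int main(void)
{
        atomic_uint lock = 0;

        if (lock_word_only(&lock))              /* takes tier 1 here */
                unlock_word(&lock);
        return 0;
}

Tier 2 is what the light-load claim rests on: the single waiter keeps
spinning on the cacheline it already owns, much like a ticket-lock waiter
would, and only a second waiter has to touch a queue node elsewhere.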
2014-05-13 15:47-0400, Waiman Long:
> On 05/12/2014 11:22 AM, Radim Krčmář wrote:
> >I think there is an unwanted scenario on virtual machines:
> >1) VCPU sets the pending bit and starts spinning.
> >2) Pending VCPU gets descheduled.
> >    - we have PLE and lock holder isn't running [1]
> >    - the hypervisor randomly preempts us
> >3) Lock holder unlocks while pending VCPU is waiting in queue.
> >4) Subsequent lockers will see a free lock with the pending bit set and
> >   will loop in trylock's 'for (;;)'
> >    - the worst case is lock starvation [2]
> >    - PLE can save us from wasting the whole timeslice
> >
> >A retry threshold is the easiest solution, regardless of its ugliness [4].
>
> Yes, that can be a real issue.  Some sort of retry threshold, as you
> said, should be able to handle it.
>
> BTW, the relevant patch should be 16/19, where the PV spinlock stuff
> should be discussed.  This patch is perfectly fine.

Ouch, my apology to Peter didn't make it ...
Agreed, I should have split the comment across patches [06/19] (the part
quoted above; it does not depend on PV), [16/19] (the part quoted below)
and [17/19] (general doubts).

> >Another minor design flaw is that the formerly first VCPU gets appended
> >to the tail when it decides to queue [3];
> >is the performance gain worth it?
> >
> >Thanks.
>
> Yes, the performance gain is worth it.  The primary goal is to be no
> worse than the ticket spinlock in the light-load situation, which is
> the most common case.  This feature is needed to achieve that.

Ok.
I've seen merit in pvqspinlock even with a slightly slower first waiter,
so I would have happily sacrificed those horrible branches.
(I prefer elegant to optimized code, but I can see why we want to be
 strictly better than ticketlock.)
Peter mentioned that we are focusing on the bare-metal patches, so I'll
withhold my other paravirt rants until they are polished.

And to forcefully bring this thread a little bit back on topic:

The pending bit is effectively a lock within the lock, so I was wondering
why we don't use more pending bits; the advantages are the same, just
diminished by the probability of having an ideally contended lock:
 - a waiter won't be blocked on a RAM access if the critical section
   (or more) ends sooner
 - some unlucky cacheline is not forgotten
 - faster unlock (no need for tail operations)
 (- ?)
and the disadvantages are magnified:
 - increased complexity
 - intense cacheline sharing
   (I thought that this was the main disadvantage of ticketlock.)
 (- ?)

One bit already improved performance; is it the best we've got?

Thanks.
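
To put the "more pending bits" question into concrete terms, one way to
model it is an in-word spinner *counter*.  The field layout, MAX_PENDING
and every name below are invented for this toy; it is not a proposal for
the series.

/* pending_count_sketch.c: thought experiment -- a pending counter. */
#include <stdatomic.h>
#include <stdbool.h>

#define LOCKED_MASK    0x000000ffu              /* locked byte */
#define PENDING_SHIFT  8
#define PENDING_MASK   0x0000ff00u              /* in-word spinner count */
#define PENDING_ONE    (1u << PENDING_SHIFT)
#define MAX_PENDING    4u                       /* arbitrary cap */

/* Try to become one of up to MAX_PENDING in-word spinners; return false
 * when the slots are full and the caller should queue instead. */
static bool in_word_spin(atomic_uint *lock)
{
        unsigned int val = atomic_load(lock);

        for (;;) {
                if (((val & PENDING_MASK) >> PENDING_SHIFT) >= MAX_PENDING)
                        return false;
                /* Reserve a slot by bumping the counter. */
                if (atomic_compare_exchange_weak(lock, &val, val + PENDING_ONE))
                        break;
        }

        /* All in-word spinners now contend on the same word: whenever the
         * locked byte is free, race to set it and release our slot. */
        for (;;) {
                val = atomic_load(lock);
                if (!(val & LOCKED_MASK) &&
                    atomic_compare_exchange_weak(lock, &val,
                                                 (val - PENDING_ONE) | 1u))
                        return true;
                /* cpu_relax() here in a real implementation */
        }
}

int main(void)
{
        atomic_uint lock = 0;

        return in_word_spin(&lock) ? 0 : 1;     /* trivially takes the free lock */
}

Unlock would still be a plain clear of the locked byte, but the second
loop is where the "intense cacheline sharing" shows up: every in-word
spinner keeps hitting the same word, ticket-style, and that traffic grows
with the number of pending slots.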