Peter Zijlstra
2015-Apr-29 18:11 UTC
[PATCH v16 13/14] pvqspinlock: Improve slowpath performance by avoiding cmpxchg
On Fri, Apr 24, 2015 at 02:56:42PM -0400, Waiman Long wrote:
> In the pv_scan_next() function, the slow cmpxchg atomic operation is
> performed even if the other CPU is not even close to being halted. This
> extra cmpxchg can harm slowpath performance.
>
> This patch introduces the new mayhalt flag to indicate if the other
> spinning CPU is close to being halted or not. The current threshold
> for x86 is 2k cpu_relax() calls. If this flag is not set, the other
> spinning CPU will have at least 2k more cpu_relax() calls before
> it can enter the halt state. This should give enough time for the
> setting of the locked flag in struct mcs_spinlock to propagate to
> that CPU without using atomic op.

Yuck! I'm not at all sure you can make assumptions like that. And the
worst part is, if it goes wrong the borkage is subtle and painful.
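A minimal sketch of the waiter/holder interaction the patch description is
talking about may help follow the discussion. This is illustrative
pseudo-kernel code, not the actual patch: pn->state, pn->mayhalt,
vcpu_halted, cpu_relax() and SPIN_THRESHOLD appear in the real code, while
MAYHALT_THRESHOLD and pv_hash_and_kick() are made-up names standing in for
the ~2k-cpu_relax() window and the existing hash-and-kick path.

    /*
     * Waiter side: spin on the MCS locked flag for a bounded number of
     * cpu_relax() calls; only when the halt is imminent does it raise
     * the mayhalt flag, then it records vcpu_halted and goes to sleep.
     */
    for (count = SPIN_THRESHOLD; count; count--) {
            if (READ_ONCE(node->locked))
                    return;
            if (count == MAYHALT_THRESHOLD)
                    xchg(&pn->mayhalt, true);   /* the pattern criticized below */
            cpu_relax();
    }
    (void)xchg(&pn->state, vcpu_halted);
    pv_wait(&pn->state, vcpu_halted);

    /*
     * Lock-holder side (pv_scan_next()): skip the cmpxchg when the waiter
     * has not signalled that it may halt soon, relying on the plain store
     * to node->locked reaching it within the remaining spin window -- the
     * timing assumption being questioned here.
     */
    if (READ_ONCE(pn->mayhalt) &&
        cmpxchg(&pn->state, vcpu_halted, vcpu_hashed) == vcpu_halted)
            pv_hash_and_kick(lock, pn);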
Linus Torvalds
2015-Apr-29 18:27 UTC
[PATCH v16 13/14] pvqspinlock: Improve slowpath performance by avoiding cmpxchg
On Wed, Apr 29, 2015 at 11:11 AM, Peter Zijlstra <peterz at infradead.org> wrote:
> On Fri, Apr 24, 2015 at 02:56:42PM -0400, Waiman Long wrote:
>> In the pv_scan_next() function, the slow cmpxchg atomic operation is
>> performed even if the other CPU is not even close to being halted. This
>> extra cmpxchg can harm slowpath performance.
>>
>> This patch introduces the new mayhalt flag to indicate if the other
>> spinning CPU is close to being halted or not. The current threshold
>> for x86 is 2k cpu_relax() calls. If this flag is not set, the other
>> spinning CPU will have at least 2k more cpu_relax() calls before
>> it can enter the halt state. This should give enough time for the
>> setting of the locked flag in struct mcs_spinlock to propagate to
>> that CPU without using atomic op.
>
> Yuck! I'm not at all sure you can make assumptions like that. And the
> worst part is, if it goes wrong the borkage is subtle and painful.

I have to agree with Peter.

But it goes beyond this particular patch. Patterns like this:

    xchg(&pn->mayhalt, true);

are just evil and disgusting. Even before this patch, that code had

    (void)xchg(&pn->state, vcpu_halted);

which is *wrong* and should never be done.

If you want it to be "set_mb()" (which sets a value and has a memory
barrier), then use set_mb(). Yes, it happens to use a "xchg()" to do
so, but dammit, it documents that whole "this is a memory barrier" in
the name.

Also, anybody who does this should damn well document why the memory
barrier is needed. The xchg(&pn->state, vcpu_halted) at least is
preceded by a comment about the barriers. The new mayhalt has no sane
comment in it, and the reason seems to be that no sane comment is
possible. The xchg() seems to be some black magic thing.

Let's not introduce magic stuff in our locking primitives. At least
not undocumented magic that makes no sense.

Linus
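To make the suggested substitution concrete: set_mb(var, value) stores
value into var and then issues a full memory barrier, so the change Linus
is asking for would look roughly like the snippet below. The field names
come from the patch; the comments are illustrative, and the exact ordering
requirement is presumably the vcpu_halted store versus the subsequent
re-check of the MCS locked flag.

    /* Before: the barrier is buried inside an xchg() whose return value
     * is discarded -- nothing in the code says "this is a memory barrier". */
    (void)xchg(&pn->state, vcpu_halted);

    /* After: same store-then-full-barrier semantics, but the intent is
     * spelled out in the primitive's name; a comment should still say
     * which later access the barrier orders this store against. */
    set_mb(pn->state, vcpu_halted);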
Waiman Long
2015-Apr-30 18:49 UTC
[PATCH v16 13/14] pvqspinlock: Improve slowpath performance by avoiding cmpxchg
On 04/29/2015 02:11 PM, Peter Zijlstra wrote:
> On Fri, Apr 24, 2015 at 02:56:42PM -0400, Waiman Long wrote:
>> In the pv_scan_next() function, the slow cmpxchg atomic operation is
>> performed even if the other CPU is not even close to being halted. This
>> extra cmpxchg can harm slowpath performance.
>>
>> This patch introduces the new mayhalt flag to indicate if the other
>> spinning CPU is close to being halted or not. The current threshold
>> for x86 is 2k cpu_relax() calls. If this flag is not set, the other
>> spinning CPU will have at least 2k more cpu_relax() calls before
>> it can enter the halt state. This should give enough time for the
>> setting of the locked flag in struct mcs_spinlock to propagate to
>> that CPU without using atomic op.
>
> Yuck! I'm not at all sure you can make assumptions like that. And the
> worst part is, if it goes wrong the borkage is subtle and painful.

I do think the code is OK. However, you are right that if my reasoning is
incorrect, the resulting bug will be really subtle. So I am going to
withdraw this particular patch as it has no functional impact on the
overall patch series. Please let me know if you have any other comments
on other parts of the series and I will send out a new series without
this particular patch.

Cheers,
Longman
Waiman Long
2015-Apr-30 18:56 UTC
[PATCH v16 13/14] pvqspinlock: Improve slowpath performance by avoiding cmpxchg
On 04/29/2015 02:27 PM, Linus Torvalds wrote:
> On Wed, Apr 29, 2015 at 11:11 AM, Peter Zijlstra <peterz at infradead.org> wrote:
>> On Fri, Apr 24, 2015 at 02:56:42PM -0400, Waiman Long wrote:
>>> In the pv_scan_next() function, the slow cmpxchg atomic operation is
>>> performed even if the other CPU is not even close to being halted. This
>>> extra cmpxchg can harm slowpath performance.
>>>
>>> This patch introduces the new mayhalt flag to indicate if the other
>>> spinning CPU is close to being halted or not. The current threshold
>>> for x86 is 2k cpu_relax() calls. If this flag is not set, the other
>>> spinning CPU will have at least 2k more cpu_relax() calls before
>>> it can enter the halt state. This should give enough time for the
>>> setting of the locked flag in struct mcs_spinlock to propagate to
>>> that CPU without using atomic op.
>>
>> Yuck! I'm not at all sure you can make assumptions like that. And the
>> worst part is, if it goes wrong the borkage is subtle and painful.
>
> I have to agree with Peter.
>
> But it goes beyond this particular patch. Patterns like this:
>
>     xchg(&pn->mayhalt, true);
>
> are just evil and disgusting. Even before this patch, that code had
>
>     (void)xchg(&pn->state, vcpu_halted);
>
> which is *wrong* and should never be done.
>
> If you want it to be "set_mb()" (which sets a value and has a memory
> barrier), then use set_mb(). Yes, it happens to use a "xchg()" to do
> so, but dammit, it documents that whole "this is a memory barrier" in
> the name.
>
> Also, anybody who does this should damn well document why the memory
> barrier is needed. The xchg(&pn->state, vcpu_halted) at least is
> preceded by a comment about the barriers. The new mayhalt has no sane
> comment in it, and the reason seems to be that no sane comment is
> possible. The xchg() seems to be some black magic thing.
>
> Let's not introduce magic stuff in our locking primitives. At least
> not undocumented magic that makes no sense.
>
> Linus

Thanks for the comments. I will withdraw this patch and use set_mb() in
the code as suggested for better readability.

Cheers,
Longman
Peter Zijlstra
2015-May-04 14:05 UTC
[PATCH v16 13/14] pvqspinlock: Improve slowpath performance by avoiding cmpxchg
On Thu, Apr 30, 2015 at 02:49:26PM -0400, Waiman Long wrote:
> On 04/29/2015 02:11 PM, Peter Zijlstra wrote:
>> On Fri, Apr 24, 2015 at 02:56:42PM -0400, Waiman Long wrote:
>>> In the pv_scan_next() function, the slow cmpxchg atomic operation is
>>> performed even if the other CPU is not even close to being halted. This
>>> extra cmpxchg can harm slowpath performance.
>>>
>>> This patch introduces the new mayhalt flag to indicate if the other
>>> spinning CPU is close to being halted or not. The current threshold
>>> for x86 is 2k cpu_relax() calls. If this flag is not set, the other
>>> spinning CPU will have at least 2k more cpu_relax() calls before
>>> it can enter the halt state. This should give enough time for the
>>> setting of the locked flag in struct mcs_spinlock to propagate to
>>> that CPU without using atomic op.
>>
>> Yuck! I'm not at all sure you can make assumptions like that. And the
>> worst part is, if it goes wrong the borkage is subtle and painful.
>
> I do think the code is OK. However, you are right that if my reasoning is
> incorrect, the resulting bug will be really subtle.

So I do not think it's correct; imagine the fabrics used for the 4096-cpu
SGI machine, now add some serious traffic to them. There is no saying your
random 2k relax loop will be enough to propagate the change.

Equally, another arch (this is generic code) might have starvation issues
on its inter-cpu fabric and delay the store just long enough.

The thing is, one should _never_ rely on timing for correctness, _ever_.

> So I am going to withdraw this particular patch as it has no functional
> impact on the overall patch series. Please let me know if you have any
> other comments on other parts of the series and I will send out a new
> series without this particular patch.

Please wait a little while, I've queued the 'basic' patches; once that
settles in tip we can look at the others.

Also, I have some local changes (sorry, I could not help myself) I should
post, I've been somewhat delayed by illness.