Oleg Nesterov
2015-Feb-11 17:24 UTC
[PATCH] x86 spinlock: Fix memory corruption on completing completions
On 02/10, Jeremy Fitzhardinge wrote:
>
> On 02/10/2015 05:26 AM, Oleg Nesterov wrote:
> > On 02/10, Raghavendra K T wrote:
> >> Unfortunately xadd could result in head overflow as tail is high.
> >>
> >> The other option was repeated cmpxchg which is bad I believe.
> >> Any suggestions?
> >
> > Stupid question... what if we simply move SLOWPATH from .tail to .head?
> > In this case arch_spin_unlock() could do xadd(tickets.head) and check
> > the result
>
> Well, right now, "tail" is manipulated by locked instructions by CPUs
> who are contending for the ticketlock, but head can be manipulated
> unlocked by the CPU which currently owns the ticketlock. If SLOWPATH
> moved into head, then non-owner CPUs would be touching head, requiring
> everyone to use locked instructions on it.
>
> That's the theory, but I don't see much (any?) code which depends on that.
>
> Ideally we could find a way so that pv ticketlocks could use a plain
> unlocked add for the unlock like the non-pv case, but I just don't see a
> way to do it.

I agree, and I have to admit I am not sure I fully understand why
unlock uses the locked add. Except we need a barrier to avoid the race
with the enter_slowpath() users, of course. Perhaps this is the only
reason?

Anyway, I suggested this to avoid the overflow if we use xadd(), and I
guess we need the locked insn anyway if we want to eliminate the unsafe
read-after-unlock...

> > BTW. If we move "clear slowpath" into "lock" path, then probably trylock
> > should be changed too? Something like below, we just need to clear SLOWPATH
> > before cmpxchg.
>
> How important / widely used is trylock these days?

I am not saying this is that important. Just this looks more consistent
imo and we can do this for free.

Oleg.
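The "Something like below" snippet from Oleg's earlier mail is not part of this excerpt. Purely as an illustration, not Oleg's actual code, clearing the flag as part of trylock's cmpxchg could look roughly like the following; the shape of arch_spin_trylock() and the helpers (cmpxchg(), READ_ONCE(), TICKET_LOCK_INC, TICKET_SHIFT) follow the x86 ticketlock code of that era, but the details here are reconstructed assumptions.

    /*
     * Illustration only -- not the snippet referred to above.  With
     * TICKET_SLOWPATH_FLAG still kept in .tail, "clear SLOWPATH before
     * cmpxchg" in trylock could look roughly like this:
     */
    static __always_inline int arch_spin_trylock(arch_spinlock_t *lock)
    {
            arch_spinlock_t old, new;

            old.head_tail = READ_ONCE(lock->head_tail);
            if (old.tickets.head != (old.tickets.tail & ~TICKET_SLOWPATH_FLAG))
                    return 0;               /* lock is held */

            /* take the next ticket, and drop any stale SLOWPATH flag in the
             * same cmpxchg, so that "clear slowpath" lives in the lock path */
            new.head_tail = old.head_tail + (TICKET_LOCK_INC << TICKET_SHIFT);
            new.head_tail &= ~((__ticketpair_t)TICKET_SLOWPATH_FLAG << TICKET_SHIFT);

            /* cmpxchg is a full barrier, nothing can leak above the acquire */
            return cmpxchg(&lock->head_tail, old.head_tail, new.head_tail) == old.head_tail;
    }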
Jeremy Fitzhardinge
2015-Feb-11 23:15 UTC
[PATCH] x86 spinlock: Fix memory corruption on completing completions
On 02/11/2015 09:24 AM, Oleg Nesterov wrote:
> I agree, and I have to admit I am not sure I fully understand why
> unlock uses the locked add. Except we need a barrier to avoid the race
> with the enter_slowpath() users, of course. Perhaps this is the only
> reason?

Right now it needs to be a locked operation to prevent read-reordering.
x86 memory ordering rules state that all writes are seen in a globally
consistent order, and are globally ordered wrt reads *on the same
addresses*, but reads to different addresses can be reordered wrt to writes.

So, if the unlocking add were not a locked operation:

        __add(&lock->tickets.head, TICKET_LOCK_INC);    /* not locked */

        if (unlikely(lock->tickets.tail & TICKET_SLOWPATH_FLAG))
                __ticket_unlock_slowpath(lock, prev);

Then the read of lock->tickets.tail can be reordered before the unlock,
which introduces a race:

        /* read reordered here */
        if (unlikely(lock->tickets.tail & TICKET_SLOWPATH_FLAG)) /* false */
                /* ... */;

        /* other CPU sets SLOWPATH and blocks */

        __add(&lock->tickets.head, TICKET_LOCK_INC);    /* not locked */

        /* other CPU hung */

So it doesn't *have* to be a locked operation. This should also work:

        __add(&lock->tickets.head, TICKET_LOCK_INC);    /* not locked */

        lfence();       /* prevent read reordering */
        if (unlikely(lock->tickets.tail & TICKET_SLOWPATH_FLAG))
                __ticket_unlock_slowpath(lock, prev);

but in practice a locked add is cheaper than an lfence (or at least was).

This *might* be OK, but I think it's on dubious ground:

        __add(&lock->tickets.head, TICKET_LOCK_INC);    /* not locked */

        /* read overlaps write, and so is ordered */
        if (unlikely(lock->head_tail & (TICKET_SLOWPATH_FLAG << TICKET_SHIFT)))
                __ticket_unlock_slowpath(lock, prev);

because I think Intel and AMD differed in interpretation about how
overlapping but different-sized reads & writes are ordered (or it simply
isn't architecturally defined).

If the slowpath flag is moved to head, then it would always have to be
locked anyway, because it needs to be atomic against other CPUs' RMW
operations setting the flag.

    J
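For context on why the read in that last variant "overlaps" the write: the x86 arch_spinlock_t of that era (spinlock_types.h) puts head_tail and the two tickets in a union. The following is a trimmed sketch of that layout; the exact typedefs depend on CONFIG_NR_CPUS and the paravirt configuration.

    /* Trimmed sketch of the era's x86 arch_spinlock_t: head_tail aliases
     * the {head, tail} pair, so a full-width read of lock->head_tail
     * overlaps the narrower write to lock->tickets.head. */
    typedef u8  __ticket_t;         /* u16 when many CPUs are configured */
    typedef u16 __ticketpair_t;

    #define TICKET_SHIFT    (sizeof(__ticket_t) * 8)

    typedef struct arch_spinlock {
            union {
                    __ticketpair_t head_tail;
                    struct __raw_tickets {
                            __ticket_t head, tail;
                    } tickets;
            };
    } arch_spinlock_t;

On little-endian x86, head occupies the low bits of head_tail and tail the high bits, which is why the flag test above shifts TICKET_SLOWPATH_FLAG by TICKET_SHIFT.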
Linus Torvalds
2015-Feb-11 23:28 UTC
[PATCH] x86 spinlock: Fix memory corruption on completing completions
On Feb 11, 2015 3:15 PM, "Jeremy Fitzhardinge" <jeremy at goop.org> wrote:
>
> Right now it needs to be a locked operation to prevent read-reordering.
> x86 memory ordering rules state that all writes are seen in a globally
> consistent order, and are globally ordered wrt reads *on the same
> addresses*, but reads to different addresses can be reordered wrt to
> writes.

The modern x86 rules are actually much tighter than that. Every store is
a release, and every load is an acquire. So a non-atomic store is
actually a perfectly fine unlock. All preceding stores will be seen by
other CPUs before the unlock, and while reads can pass stores, they only
pass *earlier* stores.

For *taking* a lock you need an atomic access, because otherwise loads
inside the locked region could bleed out to before the store that takes
the lock.

    Linus
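As a stand-alone illustration of the rule Linus states, here is a toy test-and-set lock using C11 atomics (not the kernel's ticket lock): the unlock is a plain release store, which on x86 compiles to an ordinary mov, while the lock side is an atomic read-modify-write with acquire semantics so that accesses inside the critical section cannot bleed out above the point where the lock is taken.

    /* Toy TSO-friendly spinlock sketch (C11); nothing here is kernel code. */
    #include <stdatomic.h>

    typedef struct {
            atomic_int locked;      /* 0 = free, 1 = held */
    } toy_spinlock_t;

    static inline void toy_lock(toy_spinlock_t *l)
    {
            /* atomic RMW with acquire semantics: loads/stores inside the
             * critical section cannot move above the acquisition */
            while (atomic_exchange_explicit(&l->locked, 1, memory_order_acquire))
                    while (atomic_load_explicit(&l->locked, memory_order_relaxed))
                            ;       /* spin read-only until it looks free */
    }

    static inline void toy_unlock(toy_spinlock_t *l)
    {
            /* plain release store: on x86 this is an ordinary mov, i.e. the
             * "non-atomic store is a perfectly fine unlock" above */
            atomic_store_explicit(&l->locked, 0, memory_order_release);
    }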
Oleg Nesterov
2015-Feb-12 14:18 UTC
[PATCH] x86 spinlock: Fix memory corruption on completing completions
On 02/11, Jeremy Fitzhardinge wrote:
>
> On 02/11/2015 09:24 AM, Oleg Nesterov wrote:
> > I agree, and I have to admit I am not sure I fully understand why
> > unlock uses the locked add. Except we need a barrier to avoid the race
> > with the enter_slowpath() users, of course. Perhaps this is the only
> > reason?
>
> Right now it needs to be a locked operation to prevent read-reordering.
> x86 memory ordering rules state that all writes are seen in a globally
> consistent order, and are globally ordered wrt reads *on the same
> addresses*, but reads to different addresses can be reordered wrt to writes.
>
> So, if the unlocking add were not a locked operation:
>
>         __add(&lock->tickets.head, TICKET_LOCK_INC);    /* not locked */
>
>         if (unlikely(lock->tickets.tail & TICKET_SLOWPATH_FLAG))
>                 __ticket_unlock_slowpath(lock, prev);
>
> Then the read of lock->tickets.tail can be reordered before the unlock,
> which introduces a race:

Yes, yes, thanks, but this is what I meant. We need a barrier. Even if
"Every store is a release" as Linus mentioned.

> This *might* be OK, but I think it's on dubious ground:
>
>         __add(&lock->tickets.head, TICKET_LOCK_INC);    /* not locked */
>
>         /* read overlaps write, and so is ordered */
>         if (unlikely(lock->head_tail & (TICKET_SLOWPATH_FLAG << TICKET_SHIFT)))
>                 __ticket_unlock_slowpath(lock, prev);
>
> because I think Intel and AMD differed in interpretation about how
> overlapping but different-sized reads & writes are ordered (or it simply
> isn't architecturally defined).

I can't comment, I simply do not know how the hardware works.

> If the slowpath flag is moved to head, then it would always have to be
> locked anyway, because it needs to be atomic against other CPUs' RMW
> operations setting the flag.

Yes, this is true. But again, if we want to avoid the read-after-unlock,
we need to update the lock and read SLOWPATH atomically, so it seems
we can't avoid the locked insn.

Oleg.
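To make that last point concrete, here is a hypothetical sketch (not the actual patch) of the unlock Oleg is arguing for: with the flag kept in .head, a single locked xadd both releases the lock and returns the previous head, so SLOWPATH is observed without any load issued after the unlock. xadd() is the kernel's fetch-and-add wrapper; the kick helper and the exact ticket arithmetic are assumptions modelled on the existing pv-ticketlock unlock path.

    /* Hypothetical: TICKET_SLOWPATH_FLAG stored in .head rather than .tail. */
    static __always_inline void arch_spin_unlock(arch_spinlock_t *lock)
    {
            /* locked xadd: releases the lock and hands back the old head,
             * so the flag check needs no read-after-unlock */
            __ticket_t old = xadd(&lock->tickets.head, TICKET_LOCK_INC);

            if (unlikely(old & TICKET_SLOWPATH_FLAG))
                    /* a waiter went into the slowpath while we held the lock;
                     * kick whoever now owns the head (helper assumed) */
                    __ticket_unlock_kick(lock, (old & ~TICKET_SLOWPATH_FLAG) +
                                               TICKET_LOCK_INC);
    }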