search for: __ticket_unlock_slowpath

Displaying 20 results from an estimated 91 matches for "__ticket_unlock_slowpath".

2015 Feb 11
3
[PATCH] x86 spinlock: Fix memory corruption on completing completions
On 02/10, Jeremy Fitzhardinge wrote: > > On 02/10/2015 05:26 AM, Oleg Nesterov wrote: > > On 02/10, Raghavendra K T wrote: > >> Unfortunately xadd could result in head overflow as tail is high. > >> > >> The other option was repeated cmpxchg which is bad I believe. > >> Any suggestions? > > Stupid question... what if we simply move SLOWPATH
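
Context for this result (not part of the quoted mail): on x86 the ticket lock packs head and tail into a single word, so the "head overflow" presumably refers to doing one xadd on the combined word to fetch the old value atomically, where a wrapping head carries into the tail bits. A standalone illustration of that carry, with simplified field widths and TICKET_LOCK_INC value (both assumptions, not taken from the thread):

    /* Illustration only: emulate an xadd of TICKET_LOCK_INC on the combined
     * head_tail word and watch a wrapping head carry into tail.
     * Assumes 16-bit tickets, little-endian layout, TICKET_LOCK_INC == 1. */
    #include <stdint.h>
    #include <stdio.h>

    typedef uint16_t __ticket_t;

    typedef union {
            uint32_t head_tail;                         /* both fields in one word */
            struct { __ticket_t head, tail; } tickets;  /* head is the low half    */
    } fake_lock_t;

    #define TICKET_LOCK_INC 1

    int main(void)
    {
            fake_lock_t lock = { .tickets = { .head = 0xffff, .tail = 0x0003 } };

            /* What a single xadd on the whole word would do to the memory: */
            lock.head_tail += TICKET_LOCK_INC;

            /* head wrapped to 0 and the carry bumped tail to 0x0004. */
            printf("head=%#x tail=%#x\n", lock.tickets.head, lock.tickets.tail);
            return 0;
    }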
2015 Feb 11
0
[PATCH] x86 spinlock: Fix memory corruption on completing completions
...ame addresses*, but reads to different addresses can be reordered wrt to writes. So, if the unlocking add were not a locked operation: __add(&lock->tickets.head, TICKET_LOCK_INC); /* not locked */ if (unlikely(lock->tickets.tail & TICKET_SLOWPATH_FLAG)) __ticket_unlock_slowpath(lock, prev); Then the read of lock->tickets.tail can be reordered before the unlock, which introduces a race: /* read reordered here */ if (unlikely(lock->tickets.tail & TICKET_SLOWPATH_FLAG)) /* false */ /* ... */; /* other CPU sets SLOWPATH and blocks */...
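
The fragment quoted in that snippet, reflowed here for readability (same code, identifiers as used in the kernel's x86 ticket-lock headers), followed by the interleaving the mail describes:

    /* Hypothetical non-locked unlock discussed above: no full barrier, so the
     * CPU is free to read lock->tickets.tail before the head store is visible. */
    __add(&lock->tickets.head, TICKET_LOCK_INC);                 /* not locked */
    if (unlikely(lock->tickets.tail & TICKET_SLOWPATH_FLAG))     /* may be read early */
            __ticket_unlock_slowpath(lock, prev);

    /* The race spelled out in the mail, with the tail read hoisted:
     *
     *   unlocking CPU                       waiting CPU
     *   -------------                       -----------
     *   reads tail, SLOWPATH still clear
     *                                       sets SLOWPATH in tail, blocks
     *   stores head (lock released)
     *   skips __ticket_unlock_slowpath()    never gets kicked for this release
     */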
2015 Feb 06
10
[PATCH] x86 spinlock: Fix memory corruption on completing completions
...lock. As explained by Linus currently it does: prev = *lock; add_smp(&lock->tickets.head, TICKET_LOCK_INC); /* add_smp() is a full mb() */ if (unlikely(lock->tickets.tail & TICKET_SLOWPATH_FLAG)) __ticket_unlock_slowpath(lock, prev); which is *exactly* the kind of things you cannot do with spinlocks, because after you've done the "add_smp()" and released the spinlock for the fast-path, you can't access the spinlock any more. Exactly because a fast-path lock might come in, and release the whole...
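
The [PATCH V2]/[PATCH V3]/[PATCH V5] and stable entries below are iterations on fixing exactly this: the unlocker must not touch the lock word after the releasing store, because the lock may already have been taken, dropped, and freed. One shape such a fix can take, in line with the "move SLOWPATH" question in the first result above, is to keep the flag in the head so the unlocking xadd already returns everything the slow path needs. A sketch only, reconstructed for illustration and not quoted from any of the posted patches:

    /* Sketch, not a verbatim quote of the posted patches: with the SLOWPATH
     * flag kept in tickets.head, the xadd that releases the lock also returns
     * the flag, so nothing is read from the lock after it may be freed. */
    static __always_inline void arch_spin_unlock(arch_spinlock_t *lock)
    {
            if (TICKET_SLOWPATH_FLAG &&
                static_key_false(&paravirt_ticketlocks_enabled)) {
                    __ticket_t head;

                    head = xadd(&lock->tickets.head, TICKET_LOCK_INC);

                    if (unlikely(head & TICKET_SLOWPATH_FLAG)) {
                            head &= ~TICKET_SLOWPATH_FLAG;
                            /* only the ticket value and the lock pointer itself
                             * are needed to kick the right waiter */
                            __ticket_unlock_kick(lock, head + TICKET_LOCK_INC);
                    }
            } else {
                    __add(&lock->tickets.head, TICKET_LOCK_INC, UNLOCK_LOCK_PREFIX);
            }
    }

The "repeated cmpxchg" mentioned in the first result is the alternative way to clear the flag without this layout change, and the trade-off between the two is what the rest of these threads argue about.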
2015 Feb 08
0
[PATCH] x86 spinlock: Fix memory corruption on completing completions
...ntly it does: > prev = *lock; > add_smp(&lock->tickets.head, TICKET_LOCK_INC); > > /* add_smp() is a full mb() */ > > if (unlikely(lock->tickets.tail & TICKET_SLOWPATH_FLAG)) > __ticket_unlock_slowpath(lock, prev); > > > which is *exactly* the kind of things you cannot do with spinlocks, > because after you've done the "add_smp()" and released the spinlock > for the fast-path, you can't access the spinlock any more. Exactly > because a fast-path lock might com...
2015 Feb 24
2
[PATCH for stable] x86/spinlocks/paravirt: Fix memory corruption on unlock
...lock. As explained by Linus currently it does: prev = *lock; add_smp(&lock->tickets.head, TICKET_LOCK_INC); /* add_smp() is a full mb() */ if (unlikely(lock->tickets.tail & TICKET_SLOWPATH_FLAG)) __ticket_unlock_slowpath(lock, prev); which is *exactly* the kind of things you cannot do with spinlocks, because after you've done the "add_smp()" and released the spinlock for the fast-path, you can't access the spinlock any more. Exactly because a fast-path lock might come in, and release the whole d...
2015 Feb 09
2
[PATCH V2] x86 spinlock: Fix memory corruption on completing completions
...lock. As explained by Linus currently it does: prev = *lock; add_smp(&lock->tickets.head, TICKET_LOCK_INC); /* add_smp() is a full mb() */ if (unlikely(lock->tickets.tail & TICKET_SLOWPATH_FLAG)) __ticket_unlock_slowpath(lock, prev); which is *exactly* the kind of things you cannot do with spinlocks, because after you've done the "add_smp()" and released the spinlock for the fast-path, you can't access the spinlock any more. Exactly because a fast-path lock might come in, and release the whole...
2015 Feb 15
7
[PATCH V5] x86 spinlock: Fix memory corruption on completing completions
...lock. As explained by Linus currently it does: prev = *lock; add_smp(&lock->tickets.head, TICKET_LOCK_INC); /* add_smp() is a full mb() */ if (unlikely(lock->tickets.tail & TICKET_SLOWPATH_FLAG)) __ticket_unlock_slowpath(lock, prev); which is *exactly* the kind of things you cannot do with spinlocks, because after you've done the "add_smp()" and released the spinlock for the fast-path, you can't access the spinlock any more. Exactly because a fast-path lock might come in, and release the whole d...
2015 Feb 24
4
[PATCH for stable] x86/spinlocks/paravirt: Fix memory corruption on unlock
...prev = *lock; > > add_smp(&lock->tickets.head, TICKET_LOCK_INC); > > > > /* add_smp() is a full mb() */ > > > > if (unlikely(lock->tickets.tail & TICKET_SLOWPATH_FLAG)) > > __ticket_unlock_slowpath(lock, prev); > > > > which is *exactly* the kind of things you cannot do with spinlocks, > > because after you've done the "add_smp()" and released the spinlock > > for the fast-path, you can't access the spinlock any more. Exactly > > because a fas...
2015 Feb 12
8
[PATCH V3] x86 spinlock: Fix memory corruption on completing completions
...lock. As explained by Linus currently it does: prev = *lock; add_smp(&lock->tickets.head, TICKET_LOCK_INC); /* add_smp() is a full mb() */ if (unlikely(lock->tickets.tail & TICKET_SLOWPATH_FLAG)) __ticket_unlock_slowpath(lock, prev); which is *exactly* the kind of things you cannot do with spinlocks, because after you've done the "add_smp()" and released the spinlock for the fast-path, you can't access the spinlock any more. Exactly because a fast-path lock might come in, and release the whole...
2015 Feb 15
0
[PATCH V5] x86 spinlock: Fix memory corruption on completing completions
...lock. As explained by Linus currently it does: prev = *lock; add_smp(&lock->tickets.head, TICKET_LOCK_INC); /* add_smp() is a full mb() */ if (unlikely(lock->tickets.tail & TICKET_SLOWPATH_FLAG)) __ticket_unlock_slowpath(lock, prev); which is *exactly* the kind of things you cannot do with spinlocks, because after you've done the "add_smp()" and released the spinlock for the fast-path, you can't access the spinlock any more. Exactly because a fast-path lock might come in, and release the whole d...