search for: enter_slowpath

Displaying 8 results from an estimated 16 matches for "enter_slowpath".

2015 Feb 11
3
[PATCH] x86 spinlock: Fix memory corruption on completing completions
> ...find a way so that pv ticketlocks could use a plain
> unlocked add for the unlock like the non-pv case, but I just don't see a
> way to do it.

I agree, and I have to admit I am not sure I fully understand why unlock uses the locked add. Except we need a barrier to avoid the race with the enter_slowpath() users, of course. Perhaps this is the only reason?

Anyway, I suggested this to avoid the overflow if we use xadd(), and I guess we need the locked insn anyway if we want to eliminate the unsafe read-after-unlock...

> BTW. If we move "clear slowpath" into "lock" path,...
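To make the point concrete, here is a minimal user-space model of the unlock path under discussion. The ticket layout, the TICKET_LOCK_INC value, and the helper names are assumptions for illustration, not the kernel's actual definitions:

    #include <stdatomic.h>
    #include <stdint.h>

    #define TICKET_LOCK_INC      2   /* assumed increment with paravirt enabled */
    #define TICKET_SLOWPATH_FLAG 1

    struct ticketlock {
        _Atomic uint8_t head;   /* ticket now being served */
        _Atomic uint8_t tail;   /* next ticket to hand out; bit 0 is the flag here */
    };

    /* The locked add (a seq_cst RMW here) is a full barrier on x86, so the
     * tail read below cannot be satisfied before the head store is visible.
     * With a plain add, the read could pass the store and miss a waiter
     * that is concurrently setting the slowpath flag. */
    static void ticket_unlock(struct ticketlock *lock)
    {
        atomic_fetch_add(&lock->head, TICKET_LOCK_INC);     /* locked add */
        if (atomic_load(&lock->tail) & TICKET_SLOWPATH_FLAG) {
            /* slowpath: kick the waiter (a paravirt hypercall in the kernel) */
        }
    }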
2015 Feb 16
1
[Xen-devel] [PATCH V5] x86 spinlock: Fix memory corruption on completing completions
> ...cpu = smp_processor_id();
>  	u64 start;
> +	__ticket_t head;
>  	unsigned long flags;
>
>  	/* If kicker interrupts not initialized yet, just spin */
> @@ -159,11 +160,15 @@ __visible void xen_lock_spinning(struct arch_spinlock *lock, __ticket_t want)
>  	 */
>  	__ticket_enter_slowpath(lock);
>
> +	/* make sure enter_slowpath, which is atomic does not cross the read */
> +	smp_mb__after_atomic();
> +
>  	/*
>  	 * check again make sure it didn't become free while
>  	 * we weren't looking
>  	 */
> -	if (ACCESS_ONCE(lock->tickets.head) == w...
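A condensed model of what the added smp_mb__after_atomic() enforces on the waiter side. The kernel's set_bit() carries no ordering guarantee of its own, so it is modeled here as a relaxed RMW followed by a seq_cst fence; which byte of the lock actually holds the flag is the subject of the patch and is abstracted away, and all names are illustrative:

    #include <stdatomic.h>
    #include <stdint.h>

    #define TICKET_SLOWPATH_FLAG 1

    /* Waiter about to block: set the slowpath flag, then re-check that the
     * lock did not become free in the meantime. The fence keeps the flag
     * set from being ordered after the head re-read. */
    static int lock_spinning(_Atomic uint8_t *head, _Atomic uint8_t *flag,
                             uint8_t want)
    {
        /* models __ticket_enter_slowpath(): set_bit() implies no barrier */
        atomic_fetch_or_explicit(flag, TICKET_SLOWPATH_FLAG,
                                 memory_order_relaxed);

        /* models smp_mb__after_atomic(): the flag set must not cross the read */
        atomic_thread_fence(memory_order_seq_cst);

        /* check again: the lock may have become free while we set the flag */
        if (atomic_load_explicit(head, memory_order_relaxed) == want)
            return 1;   /* got the lock; skip blocking */

        return 0;       /* otherwise block until the unlocker kicks us */
    }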
2015 Feb 15
7
[PATCH V5] x86 spinlock: Fix memory corruption on completing completions
...diff --git a/arch/x86/include/asm/spinlock.h b/arch/x86/include/asm/spinlock.h
index 625660f..4413315 100644
--- a/arch/x86/include/asm/spinlock.h
+++ b/arch/x86/include/asm/spinlock.h
@@ -46,7 +46,8 @@ static __always_inline bool static_key_false(struct static_key *key);
 static inline void __ticket_enter_slowpath(arch_spinlock_t *lock)
 {
-	set_bit(0, (volatile unsigned long *)&lock->tickets.tail);
+	set_bit(0, (volatile unsigned long *)&lock->tickets.head);
+	barrier();
 }
 
 #else /* !CONFIG_PARAVIRT_SPINLOCKS */
@@ -60,10 +61,30 @@ static inline void __ticket_unlock_kick(arch_spinlock_t *l...
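Moving the flag into the head byte changes what every reader of head must do: tickets now advance in steps of two, bit 0 carries the flag, and comparisons have to mask it out. A small sketch of that invariant (names and values are assumptions; the kernel's masked compare is in a part of the patch the snippet cuts off):

    #include <stdatomic.h>
    #include <stdint.h>

    #define TICKET_SLOWPATH_FLAG 1   /* bit 0 of head */
    #define TICKET_LOCK_INC      2   /* tickets step by 2 so bit 0 stays free */

    /* models set_bit(0, &lock->tickets.head) from the hunk above */
    static void enter_slowpath(_Atomic uint8_t *head)
    {
        atomic_fetch_or(head, TICKET_SLOWPATH_FLAG);
    }

    /* any ticket comparison must ignore the flag bit */
    static int tickets_equal(uint8_t one, uint8_t two)
    {
        return !((one ^ two) & ~TICKET_SLOWPATH_FLAG);
    }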
2015 Feb 11
0
[PATCH] x86 spinlock: Fix memory corruption on completing completions
On 02/11/2015 09:24 AM, Oleg Nesterov wrote:
> I agree, and I have to admit I am not sure I fully understand why
> unlock uses the locked add. Except we need a barrier to avoid the race
> with the enter_slowpath() users, of course. Perhaps this is the only
> reason?

Right now it needs to be a locked operation to prevent read-reordering. x86 memory ordering rules state that all writes are seen in a globally consistent order, and are globally ordered wrt reads *on the same addresses*, but reads to differ...
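The reordering being described is the classic store-buffering pattern. A small stand-alone program (illustrative only, not kernel code) where both relaxed loads may legally pass the other thread's store:

    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdio.h>

    /* head models the unlocker's store, flag the waiter's store */
    static _Atomic int head, flag;
    static int r_unlocker, r_waiter;

    static void *unlocker(void *arg)
    {
        (void)arg;
        atomic_store_explicit(&head, 1, memory_order_relaxed); /* unlock */
        r_unlocker = atomic_load_explicit(&flag, memory_order_relaxed);
        return NULL;
    }

    static void *waiter(void *arg)
    {
        (void)arg;
        atomic_store_explicit(&flag, 1, memory_order_relaxed); /* enter slowpath */
        r_waiter = atomic_load_explicit(&head, memory_order_relaxed);
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, unlocker, NULL);
        pthread_create(&t2, NULL, waiter, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        /* r_unlocker == 0 && r_waiter == 0 is allowed: each load was
         * satisfied before the other thread's store became visible, so
         * nobody kicks and nobody notices the unlock. The kernel avoids
         * this by making both stores locked RMW operations, which are
         * full barriers on x86. */
        printf("r_unlocker=%d r_waiter=%d\n", r_unlocker, r_waiter);
        return 0;
    }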
2015 Feb 15
0
[PATCH V5] x86 spinlock: Fix memory corruption on completing completions
Well, I regret I mentioned the lack of barrier after enter_slowpath ;)

On 02/15, Raghavendra K T wrote:
>
> @@ -46,7 +46,8 @@ static __always_inline bool static_key_false(struct static_key *key);
>
> static inline void __ticket_enter_slowpath(arch_spinlock_t *lock)
> {
> -	set_bit(0, (volatile unsigned long *)&lock->tickets.tail);
> +...
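The distinction at stake here: barrier() constrains only the compiler, while the ordering the slowpath needs is against the CPU too, which is why the other posts in this thread spell it smp_mb__after_atomic(). Roughly, simplified from the kernel's definitions:

    /* compiler-only: forbids the compiler from moving memory accesses
     * across this point, but does not constrain the CPU at all */
    #define barrier() __asm__ __volatile__("" ::: "memory")

    /* On x86, set_bit() is a locked RMW and therefore already a full CPU
     * barrier, so smp_mb__after_atomic() expands to barrier() there; on
     * architectures whose atomics imply no ordering, it is a real fence. */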
2015 Feb 15
0
[PATCH V5] x86 spinlock: Fix memory corruption on completing completions
...diff --git a/arch/x86/include/asm/spinlock.h b/arch/x86/include/asm/spinlock.h
index 625660f..cf87de3 100644
--- a/arch/x86/include/asm/spinlock.h
+++ b/arch/x86/include/asm/spinlock.h
@@ -46,7 +46,7 @@ static __always_inline bool static_key_false(struct static_key *key);
 static inline void __ticket_enter_slowpath(arch_spinlock_t *lock)
 {
-	set_bit(0, (volatile unsigned long *)&lock->tickets.tail);
+	set_bit(0, (volatile unsigned long *)&lock->tickets.head);
 }
 
 #else /* !CONFIG_PARAVIRT_SPINLOCKS */
@@ -60,10 +60,30 @@ static inline void __ticket_unlock_kick(arch_spinlock_t *lock,
 }
 #e...
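The second hunk is cut off by the snippet; per the discussion above, the idea it implements is that the flag is cleared in the lock path once contention has died down. A hedged user-space sketch of that shape (the field packing and every name here are assumptions, not the verbatim patch):

    #include <stdatomic.h>
    #include <stdint.h>

    #define TICKET_SLOWPATH_FLAG 1
    #define TICKET_LOCK_INC      2

    struct ticketlock { _Atomic uint16_t head_tail; };  /* head low, tail high */

    static uint16_t mk(uint8_t head, uint8_t tail)
    {
        return (uint16_t)head | ((uint16_t)tail << 8);
    }

    /* A new lock holder saw the flag in its head value: if it is the only
     * contender (tail is exactly one increment ahead), clear the flag with
     * one cmpxchg over the whole word. Failure just means new contention
     * arrived, and the flag should simply stay set. */
    static void check_and_clear_slowpath(struct ticketlock *lock, uint8_t head)
    {
        if (head & TICKET_SLOWPATH_FLAG) {
            uint8_t  clean    = head & ~TICKET_SLOWPATH_FLAG;
            uint16_t old_word = mk(head,  (uint8_t)(clean + TICKET_LOCK_INC));
            uint16_t new_word = mk(clean, (uint8_t)(clean + TICKET_LOCK_INC));
            atomic_compare_exchange_strong(&lock->head_tail, &old_word, new_word);
        }
    }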
2015 Feb 24
2
[PATCH for stable] x86/spinlocks/paravirt: Fix memory corruption on unlock
...diff --git a/arch/x86/include/asm/spinlock.h b/arch/x86/include/asm/spinlock.h
index 625660f..cf87de3 100644
--- a/arch/x86/include/asm/spinlock.h
+++ b/arch/x86/include/asm/spinlock.h
@@ -46,7 +46,7 @@ static __always_inline bool static_key_false(struct static_key *key);
 static inline void __ticket_enter_slowpath(arch_spinlock_t *lock)
 {
-	set_bit(0, (volatile unsigned long *)&lock->tickets.tail);
+	set_bit(0, (volatile unsigned long *)&lock->tickets.head);
 }
 
 #else /* !CONFIG_PARAVIRT_SPINLOCKS */
@@ -60,10 +60,30 @@ static inline void __ticket_unlock_kick(arch_spinlock_t *lock,
 }
 #e...
2015 Feb 10
4
[PATCH] x86 spinlock: Fix memory corruption on completing completions
On 02/10, Raghavendra K T wrote:
>
> On 02/10/2015 06:23 AM, Linus Torvalds wrote:
>
>> add_smp(&lock->tickets.head, TICKET_LOCK_INC);
>> if (READ_ONCE(lock->tickets.tail) & TICKET_SLOWPATH_FLAG) ..
>>
>> into something like
>>
>> val = xadd((&lock->ticket.head_tail, TICKET_LOCK_INC << TICKET_SHIFT);
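What this converged toward, roughly: a single xadd that both releases the lock and returns the pre-increment value, so the slowpath flag is sampled from the same locked operation and the lock word is never read after release. A user-space sketch under the flag-in-head layout (assumed, as in the sketches above):

    #include <stdatomic.h>
    #include <stdint.h>

    #define TICKET_SLOWPATH_FLAG 1
    #define TICKET_LOCK_INC      2

    static void ticket_unlock_xadd(_Atomic uint8_t *head)
    {
        /* fetch-and-add: bump head and get its old value in one locked op */
        uint8_t old = atomic_fetch_add(head, TICKET_LOCK_INC);

        if (old & TICKET_SLOWPATH_FLAG) {
            /* a waiter entered the slowpath: kick it (hypercall in-kernel) */
        }
    }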