search for: __ticket_check_and_clear_slowpath

Displaying 11 results from an estimated 35 matches for "__ticket_check_and_clear_slowpath".

2015 Feb 09
2
[PATCH V2] x86 spinlock: Fix memory corruption on completing completions
.../asm/spinlock.h index 625660f..7fc50d7 100644 --- a/arch/x86/include/asm/spinlock.h +++ b/arch/x86/include/asm/spinlock.h @@ -49,6 +49,23 @@ static inline void __ticket_enter_slowpath(arch_spinlock_t *lock) set_bit(0, (volatile unsigned long *)&lock->tickets.tail); } +static inline void __ticket_check_and_clear_slowpath(arch_spinlock_t *lock) +{ + arch_spinlock_t old, new; + __ticket_t diff; + + old.tickets = READ_ONCE(lock->tickets); + diff = (old.tickets.tail & ~TICKET_SLOWPATH_FLAG) - old.tickets.head; + + /* try to clear slowpath flag when there are no contenders */ + if ((old.tickets.tail & TICKET_...
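The snippet above cuts off mid-condition. As a rough, self-contained C model of the logic it describes (not the kernel code itself; the union layout, the TICKET_LOCK_INC comparison, and the closing cmpxchg are assumptions inferred from the visible fragment):

#include <stdint.h>

#define TICKET_SLOWPATH_FLAG ((uint8_t)1)   /* bit 0 of .tail in this scheme */
#define TICKET_LOCK_INC      ((uint8_t)2)   /* tickets advance by 2 so bit 0 stays free */

typedef uint8_t __ticket_t;

typedef union {
    uint16_t head_tail;                         /* whole word, for one-shot cmpxchg */
    struct { __ticket_t head, tail; } tickets;  /* little-endian layout, as on x86 */
} arch_spinlock_model_t;

/* Clear the slowpath flag only when the lock is held by exactly one CPU
 * with no waiters queued (tail, minus the flag, is one increment ahead). */
static void check_and_clear_slowpath(arch_spinlock_model_t *lock)
{
    arch_spinlock_model_t old, new;
    __ticket_t diff;

    old.head_tail = __atomic_load_n(&lock->head_tail, __ATOMIC_RELAXED);
    diff = (__ticket_t)((old.tickets.tail & ~TICKET_SLOWPATH_FLAG) - old.tickets.head);

    /* try to clear slowpath flag when there are no contenders */
    if ((old.tickets.tail & TICKET_SLOWPATH_FLAG) && diff == TICKET_LOCK_INC) {
        new = old;
        new.tickets.tail &= ~TICKET_SLOWPATH_FLAG;
        /* a failed exchange just means a new contender showed up; fine */
        __atomic_compare_exchange_n(&lock->head_tail, &old.head_tail,
                                    new.head_tail, 0,
                                    __ATOMIC_RELAXED, __ATOMIC_RELAXED);
    }
}

A failed cmpxchg is deliberately ignored: if the word changed underneath, a contender arrived and the flag should stay set.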
2015 Feb 09
0
[PATCH V2] x86 spinlock: Fix memory corruption on completing completions
On 02/09, Raghavendra K T wrote: > > +static inline void __ticket_check_and_clear_slowpath(arch_spinlock_t *lock) > +{ > + arch_spinlock_t old, new; > + __ticket_t diff; > + > + old.tickets = READ_ONCE(lock->tickets); > + diff = (old.tickets.tail & ~TICKET_SLOWPATH_FLAG) - old.tickets.head; > + > + /* try to clear slowpath flag when there are no contenders...
2015 Apr 30
0
[PATCH 2/6] x86: move decision about clearing slowpath flag into arch_spin_lock()
The decision whether the slowpath flag is to be cleared for paravirtualized spinlocks is located in __ticket_check_and_clear_slowpath() today. Move that decision into arch_spin_lock() and add an unlikely attribute to it to avoid calling a function in case the compiler chooses not to inline __ticket_check_and_clear_slowpath() and the slowpath flag isn't set. Signed-off-by: Juergen Gross <jgross at suse.com> --- arch/x...
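In model form (reusing the types and helper from the sketch above; the lock body here is an assumption, since the diff itself is truncated), the shape of the change is:

#define unlikely(x) __builtin_expect(!!(x), 0)

/* After the patch, the flag test sits in the lock path itself, so the
 * helper (which the compiler may choose not to inline) is only called
 * in the unlikely case that the slowpath flag is actually set. */
static void spin_lock_model(arch_spinlock_model_t *lock)
{
    arch_spinlock_model_t inc = { .tickets = { .head = 0, .tail = TICKET_LOCK_INC } };
    arch_spinlock_model_t cur;

    /* take a ticket: one xadd on the whole word bumps .tail */
    cur.head_tail = __atomic_fetch_add(&lock->head_tail, inc.head_tail,
                                       __ATOMIC_ACQUIRE);
    /* spin until our ticket (old tail, flag masked off) comes up */
    while (cur.tickets.head != (cur.tickets.tail & ~TICKET_SLOWPATH_FLAG))
        cur.tickets.head = __atomic_load_n(&lock->tickets.head, __ATOMIC_ACQUIRE);

    if (unlikely(cur.tickets.tail & TICKET_SLOWPATH_FLAG))
        check_and_clear_slowpath(lock);
}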
2015 Feb 08
0
[PATCH] x86 spinlock: Fix memory corruption on completing completions
...f86 100644 > --- a/arch/x86/include/asm/spinlock.h > +++ b/arch/x86/include/asm/spinlock.h > @@ -49,6 +49,23 @@ static inline void __ticket_enter_slowpath(arch_spinlock_t *lock) > set_bit(0, (volatile unsigned long *)&lock->tickets.tail); > } > > +static inline void __ticket_check_and_clear_slowpath(arch_spinlock_t *lock) > +{ > + arch_spinlock_t old, new; > + __ticket_t diff; > + > + old.tickets = READ_ONCE(lock->tickets); Couldn't the caller pass in the lock state that it read rather than re-reading it? > + diff = (old.tickets.tail & ~TICKET_SLOWPATH_FLAG) - ol...
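The reviewer's suggestion, in the same model (the parameter-passing shape is an assumption; only the idea of skipping the second read comes from the mail):

/* Variant per the review comment: the caller already holds a snapshot
 * of the ticket pair, so take it as a parameter instead of issuing a
 * second READ_ONCE-style load. */
static void check_and_clear_slowpath_given(arch_spinlock_model_t *lock,
                                           arch_spinlock_model_t old)
{
    __ticket_t diff = (__ticket_t)((old.tickets.tail & ~TICKET_SLOWPATH_FLAG)
                                   - old.tickets.head);

    if ((old.tickets.tail & TICKET_SLOWPATH_FLAG) && diff == TICKET_LOCK_INC) {
        arch_spinlock_model_t new = old;
        new.tickets.tail &= ~TICKET_SLOWPATH_FLAG;
        __atomic_compare_exchange_n(&lock->head_tail, &old.head_tail,
                                    new.head_tail, 0,
                                    __ATOMIC_RELAXED, __ATOMIC_RELAXED);
    }
}

The trade-off being debated: passing the snapshot saves a load, but a stale snapshot makes the cmpxchg fail more often; correctness holds either way because the exchange verifies the full word.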
2015 Feb 06
10
[PATCH] x86 spinlock: Fix memory corruption on completing completions
.../asm/spinlock.h index 625660f..0829f86 100644 --- a/arch/x86/include/asm/spinlock.h +++ b/arch/x86/include/asm/spinlock.h @@ -49,6 +49,23 @@ static inline void __ticket_enter_slowpath(arch_spinlock_t *lock) set_bit(0, (volatile unsigned long *)&lock->tickets.tail); } +static inline void __ticket_check_and_clear_slowpath(arch_spinlock_t *lock) +{ + arch_spinlock_t old, new; + __ticket_t diff; + + old.tickets = READ_ONCE(lock->tickets); + diff = (old.tickets.tail & ~TICKET_SLOWPATH_FLAG) - old.tickets.head; + + /* try to clear slowpath flag when there are no contenders */ + if ((old.tickets.tail & TICKET_...
2015 Feb 12
8
[PATCH V3] x86 spinlock: Fix memory corruption on completing completions
...+++++++++++++--------------------- arch/x86/kernel/kvm.c | 4 +- 2 files changed, 45 insertions(+), 46 deletions(-) potential TODO: * The whole patch could be split into: 1. move the slowpath flag, 2. fix the memory corruption in the completion problem? * Maybe we could directly pass inc for __ticket_check_and_clear_slowpath, but I hope the current code is more readable. Changes since V2: - Move the slowpath flag to head; this enables xadd usage in unlock code and in turn we can get rid of the read/write after unlock (Oleg) - usage of ticket_equals (Oleg) Changes since V1: - Add missing TICKET_LOCK_INC before unl...
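What "move the slowpath flag to head; this enables xadd usage in unlock" means, sketched in the same model (the kick hook is a hypothetical stub; V3's real unlock also keeps a plain-add fast path behind a static key):

/* Hypothetical stand-in for the paravirt wakeup of the next ticket holder. */
static void pv_kick_stub(arch_spinlock_model_t *lock, __ticket_t next)
{
    (void)lock; (void)next;
}

/* With the flag in .head, unlock is a single xadd whose return value
 * already tells us whether a waiter is sleeping. No load or store of
 * the lock word happens after the releasing increment, which is what
 * closes the use-after-unlock window this thread is about. */
static void spin_unlock_head_model(arch_spinlock_model_t *lock)
{
    __ticket_t prev = __atomic_fetch_add(&lock->tickets.head, TICKET_LOCK_INC,
                                         __ATOMIC_RELEASE);
    if (prev & TICKET_SLOWPATH_FLAG) {
        prev &= ~TICKET_SLOWPATH_FLAG;
        pv_kick_stub(lock, (__ticket_t)(prev + TICKET_LOCK_INC));
    }
}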
2015 Feb 09
3
[PATCH] x86 spinlock: Fix memory corruption on completing completions
...ch/x86/include/asm/spinlock.h >> +++ b/arch/x86/include/asm/spinlock.h >> @@ -49,6 +49,23 @@ static inline void __ticket_enter_slowpath(arch_spinlock_t *lock) >> set_bit(0, (volatile unsigned long *)&lock->tickets.tail); >> } >> >> +static inline void __ticket_check_and_clear_slowpath(arch_spinlock_t *lock) >> +{ >> + arch_spinlock_t old, new; >> + __ticket_t diff; >> + >> + old.tickets = READ_ONCE(lock->tickets); > > Couldn't the caller pass in the lock state that it read rather than > re-reading it? > Yes, we could. Do you mean...
2015 Feb 10
4
[PATCH] x86 spinlock: Fix memory corruption on completing completions
...ult in head overflow as tail is high. > > The other option was repeated cmpxchg which is bad I believe. > Any suggestions? Stupid question... what if we simply move SLOWPATH from .tail to .head? In this case arch_spin_unlock() could do xadd(tickets.head) and check the result. In this case __ticket_check_and_clear_slowpath() really needs to cmpxchg the whole .head_tail. Plus obviously more boring changes. This needs a separate patch even _if_ this can work. BTW. If we move "clear slowpath" into "lock" path, then probably trylock should be changed too? Something like below, we just need to clear...
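Something like the trylock adjustment Oleg hints at, continued in the model (all details past "trylock should be changed too" are assumptions):

/* If "clear slowpath" moves into the lock path, trylock has to cope
 * with a stale flag too: treat the lock as free when head (flag masked)
 * has caught up with tail, and clear the flag as part of the same
 * cmpxchg that acquires the lock. */
static int spin_trylock_model(arch_spinlock_model_t *lock)
{
    arch_spinlock_model_t old, new;

    old.head_tail = __atomic_load_n(&lock->head_tail, __ATOMIC_RELAXED);
    if ((__ticket_t)(old.tickets.head & ~TICKET_SLOWPATH_FLAG) != old.tickets.tail)
        return 0;                               /* held, or waiters queued */

    new = old;
    new.tickets.head &= ~TICKET_SLOWPATH_FLAG;  /* drop any stale flag */
    new.tickets.tail += TICKET_LOCK_INC;        /* take our ticket */

    return __atomic_compare_exchange_n(&lock->head_tail, &old.head_tail,
                                       new.head_tail, 0,
                                       __ATOMIC_ACQUIRE, __ATOMIC_RELAXED);
}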
2015 Feb 11
1
[PATCH] x86 spinlock: Fix memory corruption on completing completions
On 02/11, Raghavendra K T wrote: > > On 02/10/2015 06:56 PM, Oleg Nesterov wrote: > >> In this case __ticket_check_and_clear_slowpath() really needs to cmpxchg >> the whole .head_tail. Plus obviously more boring changes. This needs a >> separate patch even _if_ this can work. > > Correct, but apart from this, before doing xadd in unlock, > we would have to make sure the LSB is cleared so that we can live with...
2015 Feb 12
0
[PATCH V3] x86 spinlock: Fix memory corruption on completing completions
Damn, sorry for noise, forgot to mention... On 02/12, Raghavendra K T wrote: > > +static inline void __ticket_check_and_clear_slowpath(arch_spinlock_t *lock, > + __ticket_t head) > +{ > + if (head & TICKET_SLOWPATH_FLAG) { > + arch_spinlock_t old, new; > + > + old.tickets.head = head; > + new.tickets.head = head & ~TICKET_SLOWPATH_FLAG; > + old.tickets.tail = new.tickets.head + TICKET_LOCK...
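The V3 helper being quoted truncates right after the old/new setup; continued in the model, with the closing cmpxchg assumed from that setup:

/* V3 shape: the flag now lives in .head and the caller passes the head
 * it already read. The expected 'old' encodes "one holder, no waiters,
 * flag set"; a single cmpxchg clears the flag only in exactly that state. */
static void check_and_clear_slowpath_v3(arch_spinlock_model_t *lock,
                                        __ticket_t head)
{
    if (head & TICKET_SLOWPATH_FLAG) {
        arch_spinlock_model_t old, new;

        old.tickets.head = head;
        new.tickets.head = head & ~TICKET_SLOWPATH_FLAG;
        old.tickets.tail = (__ticket_t)(new.tickets.head + TICKET_LOCK_INC);
        new.tickets.tail = old.tickets.tail;

        /* fails harmlessly if a new waiter arrived in the meantime */
        __atomic_compare_exchange_n(&lock->head_tail, &old.head_tail,
                                    new.head_tail, 0,
                                    __ATOMIC_RELAXED, __ATOMIC_RELAXED);
    }
}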
2015 Feb 11
0
[PATCH] x86 spinlock: Fix memory corruption on completing completions
...was repeated cmpxchg which is bad I believe. >> Any suggestions? > > Stupid question... what if we simply move SLOWPATH from .tail to .head? > In this case arch_spin_unlock() could do xadd(tickets.head) and check > the result. It is a good idea. Trying this now. > In this case __ticket_check_and_clear_slowpath() really needs to cmpxchg > the whole .head_tail. Plus obviously more boring changes. This needs a > separate patch even _if_ this can work. Correct, but apart from this, before doing xadd in unlock, we would have to make sure the LSB is cleared so that we can live with 1 bit overflow to ta...