search for: kvm_lock_spin

Displaying 12 results from an estimated 98 matches for "kvm_lock_spin".

2015 Feb 13
3
[PATCH V4] x86 spinlock: Fix memory corruption on completing completions
....c
@@ -609,7 +609,7 @@ static inline void check_zero(void)
 	u8 ret;
 	u8 old;

-	old = ACCESS_ONCE(zero_stats);
+	old = READ_ONCE(zero_stats);
 	if (unlikely(old)) {
 		ret = cmpxchg(&zero_stats, old, 0);
 		/* This ensures only one fellow resets the stat */
@@ -727,6 +727,7 @@ __visible void kvm_lock_spinning(struct arch_spinlock *lock, __ticket_t want)
 	int cpu;
 	u64 start;
 	unsigned long flags;
+	__ticket_t head;

 	if (in_nmi())
 		return;
@@ -772,7 +773,8 @@ __visible void kvm_lock_spinning(struct arch_spinlock *lock, __ticket_t want)
 	 * check again make sure it didn't become free whil...
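The change that recurs through all of these hunks is the same pair: snapshot head once with READ_ONCE(), then compare it to want with the slowpath flag masked off. A standalone sketch of that comparison; __ticket_t's width and the flag/increment values mirror the paravirt configuration but are assumptions for this demo:

#include <assert.h>
#include <stdint.h>

typedef uint16_t __ticket_t;

#define TICKET_SLOWPATH_FLAG	((__ticket_t)1)
#define TICKET_LOCK_INC		((__ticket_t)2)

/* Equal ignoring the slowpath flag, as introduced by the patch. */
static int __tickets_equal(__ticket_t one, __ticket_t two)
{
	return !((one ^ two) & ~TICKET_SLOWPATH_FLAG);
}

int main(void)
{
	__ticket_t want = 4;				/* our ticket */
	__ticket_t head = 4 | TICKET_SLOWPATH_FLAG;	/* lock now free, flag set */

	/* A plain "head == want" misses this case and would halt the vCPU
	 * on a lock that is already ours; the masked compare does not. */
	assert(head != want);
	assert(__tickets_equal(head, want));
	return 0;
}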
2014 Mar 13
2
[PATCH RFC v6 10/11] pvqspinlock, x86: Enable qspinlock PV support for KVM
..._init kvm_spinlock_init(void)
>	if (!kvm_para_has_feature(KVM_FEATURE_PV_UNHALT))
>		return;
>
> +#ifdef CONFIG_QUEUE_SPINLOCK
> +	pv_lock_ops.kick_cpu = kvm_kick_cpu_type;
> +	pv_lock_ops.hibernate = kvm_hibernate;
> +#else
>	pv_lock_ops.lock_spinning = PV_CALLEE_SAVE(kvm_lock_spinning);
>	pv_lock_ops.unlock_kick = kvm_unlock_kick;
> +#endif

This should also disable the unfair path.

Paolo
2015 Feb 11
1
[PATCH] x86 spinlock: Fix memory corruption on completing completions
...H_FLAG)
		__ticket_unlock_kick(head);

so it can't overflow to .tail? But probably I missed your concern.

And if we do this, probably it makes sense to add something like

	bool tickets_equal(__ticket_t one, __ticket_t two)
	{
		return !((one ^ two) & ~TICKET_SLOWPATH_FLAG);
	}

and change kvm_lock_spinning() to use tickets_equal(tickets.head, want), plus it can have more users in asm/spinlock.h.

Oleg.
2015 Feb 15
1
[PATCH V4] x86 spinlock: Fix memory corruption on completing completions
...ssion that slowpath bit is in tail.

You are right, the situation could lead to a positive max and may report false contention.

> And the "(__ticket_t)" typecast looks unnecessary, it only adds more
> confusion, but this is cosmetic too.

Done.

>> @@ -772,7 +773,8 @@ __visible void kvm_lock_spinning(struct arch_spinlock *lock, __ticket_t want)
>>	 * check again make sure it didn't become free while
>>	 * we weren't looking.
>>	 */
>> -	if (ACCESS_ONCE(lock->tickets.head) == want) {
>> +	head = READ_ONCE(lock->tickets.head);
>> +	if...
2015 Feb 12
8
[PATCH V3] x86 spinlock: Fix memory corruption on completing completions
...|
-		tmp.head != head)
+		if (__tickets_equal(tmp.head, tmp.tail) || tmp.head != head)
			break;

		cpu_relax();

diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
index 94f6434..e758b46 100644
--- a/arch/x86/kernel/kvm.c
+++ b/arch/x86/kernel/kvm.c
@@ -727,6 +727,7 @@ __visible void kvm_lock_spinning(struct arch_spinlock *lock, __ticket_t want)
 	int cpu;
 	u64 start;
 	unsigned long flags;
+	__ticket_t head;

 	if (in_nmi())
 		return;
@@ -772,7 +773,8 @@ __visible void kvm_lock_spinning(struct arch_spinlock *lock, __ticket_t want)
 	 * check again make sure it didn't become free whil...
2015 Feb 06
10
[PATCH] x86 spinlock: Fix memory corruption on completing completions
The paravirt spinlock clears the slowpath flag after doing the unlock. As explained by Linus, it currently does:

	prev = *lock;
	add_smp(&lock->tickets.head, TICKET_LOCK_INC);

	/* add_smp() is a full mb() */

	if (unlikely(lock->tickets.tail & TICKET_SLOWPATH_FLAG))
		__ticket_unlock_slowpath(lock, prev);

which
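Spelled out, the danger is everything the unlocker does after the add. An annotated sketch of the quoted sequence; the flat struct and plain increment are simplifications for a standalone file, not the kernel's definitions (in the kernel, add_smp() is an atomic add with full-barrier semantics and the fields live in lock->tickets):

#include <stdint.h>

typedef uint16_t __ticket_t;

typedef struct {
	__ticket_t head, tail;
} arch_spinlock_t;

#define TICKET_LOCK_INC		((__ticket_t)2)
#define TICKET_SLOWPATH_FLAG	((__ticket_t)1)

static void buggy_unlock(arch_spinlock_t *lock)
{
	arch_spinlock_t prev = *lock;

	/* The lock is free from this point on: any waiter may take it. */
	lock->head += TICKET_LOCK_INC;

	/*
	 * Racy window: another CPU can now acquire the lock, release it,
	 * and free the object embedding it.  The load below -- and the
	 * stores __ticket_unlock_slowpath() would make based on the stale
	 * prev snapshot -- then hit freed memory.  That is the corruption
	 * this series fixes by never touching *lock after the release.
	 */
	if (lock->tail & TICKET_SLOWPATH_FLAG)
		(void)prev;	/* stands in for __ticket_unlock_slowpath(lock, prev) */
}

int main(void)
{
	/* A held lock (head != tail) with a waiter in the slowpath. */
	arch_spinlock_t lock = { 4, (__ticket_t)(6 | TICKET_SLOWPATH_FLAG) };

	buggy_unlock(&lock);
	return 0;
}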
2014 Mar 13
0
[PATCH RFC v6 10/11] pvqspinlock, x86: Enable qspinlock PV support for KVM
...vm_para_has_feature(KVM_FEATURE_PV_UNHALT))
>>		return;
>>
>> +#ifdef CONFIG_QUEUE_SPINLOCK
>> +	pv_lock_ops.kick_cpu = kvm_kick_cpu_type;
>> +	pv_lock_ops.hibernate = kvm_hibernate;
>> +#else
>>	pv_lock_ops.lock_spinning = PV_CALLEE_SAVE(kvm_lock_spinning);
>>	pv_lock_ops.unlock_kick = kvm_unlock_kick;
>> +#endif
>
> This should also disable the unfair path.
>
> Paolo

The unfair lock uses a different jump label and does not require any special PV ops. There is a separate init function for that.

-Longman
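For context, "a different jump label" refers to the kernel's static-key mechanism: the unfair path is compiled in but live-patched on only when running as a guest. A hypothetical sketch of such a separate init function; the identifier names are invented for illustration, not taken from the patch series:

#include <linux/init.h>
#include <linux/jump_label.h>
#include <linux/kvm_para.h>

struct static_key paravirt_unfairlocks_enabled = STATIC_KEY_INIT_FALSE;

static __init int kvm_unfair_locks_init_jump(void)
{
	/* Flip the key only when actually running as a KVM guest; bare
	 * metal keeps the fair queued fastpath with zero overhead. */
	if (!kvm_para_available())
		return 0;

	static_key_slow_inc(&paravirt_unfairlocks_enabled);
	return 0;
}
early_initcall(kvm_unfair_locks_init_jump);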
2014 Jun 15
0
[PATCH 11/11] qspinlock, kvm: Add paravirt support
...pv_lock_ops.kick_node = PV_CALLEE_SAVE(__pv_kick_node);
+
+	pv_lock_ops.wait_head = PV_CALLEE_SAVE(__pv_wait_head);
+	pv_lock_ops.queue_unlock = PV_CALLEE_SAVE(__pv_queue_unlock);
+
+	pv_lock_ops.wait = kvm_wait;
+	pv_lock_ops.kick = kvm_kick_cpu;
+#else
 	pv_lock_ops.lock_spinning = PV_CALLEE_SAVE(kvm_lock_spinning);
 	pv_lock_ops.unlock_kick = kvm_unlock_kick;
+#endif
 }

 static __init int kvm_spinlock_init_jump(void)
Index: linux-2.6/kernel/Kconfig.locks
===================================================================
--- linux-2.6.orig/kernel/Kconfig.locks
+++ linux-2.6/kernel/Kconfig.locks
@@ -22...
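The wait/kick pair registered above is the whole paravirt interface the queued lock needs: wait(ptr, val) halts the vCPU only while *ptr still equals val, and kick wakes it again. A userspace analogue of that contract, with a futex standing in for HLT plus the KVM unhalt; this is a sketch of the semantics, not the kernel code, and uses uint32_t because a futex word is 32-bit (the kernel's kvm_wait() takes a u8):

#include <linux/futex.h>
#include <stdint.h>
#include <sys/syscall.h>
#include <unistd.h>

static void pv_wait(uint32_t *ptr, uint32_t val)
{
	/* Sleeps only while *ptr still equals val: a kick arriving between
	 * the caller's check and the sleep is not lost, for the same reason
	 * kvm_wait() re-checks with interrupts disabled before halting. */
	syscall(SYS_futex, ptr, FUTEX_WAIT, val, NULL, NULL, 0);
}

static void pv_kick(uint32_t *ptr)
{
	/* Wakes one waiter, like the unhalt of a halted vCPU. */
	syscall(SYS_futex, ptr, FUTEX_WAKE, 1, NULL, NULL, 0);
}

int main(void)
{
	uint32_t slot = 0;

	pv_kick(&slot);		/* no waiter: harmless, like a spurious kick */
	pv_wait(&slot, 1);	/* slot != 1: returns immediately, no hang */
	return 0;
}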
2015 Feb 12
0
[PATCH V3] x86 spinlock: Fix memory corruption on completing completions
On 02/12, Raghavendra K T wrote:
>
> @@ -772,7 +773,8 @@ __visible void kvm_lock_spinning(struct arch_spinlock *lock, __ticket_t want)
>	 * check again make sure it didn't become free while
>	 * we weren't looking.
>	 */
> -	if (ACCESS_ONCE(lock->tickets.head) == want) {
> +	head = ACCESS_ONCE(lock->tickets.head);
> +	if (__tickets_equal(head, w...
2015 Feb 13
0
[PATCH V4] x86 spinlock: Fix memory corruption on completing completions
..._ticket_t)(tmp.tail - tmp.head) > TICKET_LOCK_INC can be true because of TICKET_SLOWPATH_FLAG in .head, even if it is actually unlocked.

And the "(__ticket_t)" typecast looks unnecessary, it only adds more confusion, but this is cosmetic too.

> @@ -772,7 +773,8 @@ __visible void kvm_lock_spinning(struct arch_spinlock *lock, __ticket_t want)
>	 * check again make sure it didn't become free while
>	 * we weren't looking.
>	 */
> -	if (ACCESS_ONCE(lock->tickets.head) == want) {
> +	head = READ_ONCE(lock->tickets.head);
> +	if (__tickets_equal(head, wan...
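Oleg's first point is easy to see with concrete numbers: once the slowpath flag lives in .head, the unsigned subtraction underflows on a free lock. A worked example, with the constants assumed as in the paravirt configuration:

#include <assert.h>
#include <stdint.h>

typedef uint16_t __ticket_t;

#define TICKET_SLOWPATH_FLAG	((__ticket_t)1)
#define TICKET_LOCK_INC		((__ticket_t)2)

int main(void)
{
	/* Unlocked, but with a stale slowpath flag left in .head. */
	__ticket_t head = 4 | TICKET_SLOWPATH_FLAG;	/* == 5 */
	__ticket_t tail = 4;

	/* The contention check underflows: (__ticket_t)(4 - 5) == 0xffff,
	 * which is > TICKET_LOCK_INC, so a free lock looks contended. */
	assert((__ticket_t)(tail - head) > TICKET_LOCK_INC);

	/* Masking the flag first shows the truth: the lock is free. */
	assert((head & ~TICKET_SLOWPATH_FLAG) == tail);
	return 0;
}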
2015 Feb 15
0
[PATCH V5] x86 spinlock: Fix memory corruption on completing completions
....c
@@ -609,7 +609,7 @@ static inline void check_zero(void)
 	u8 ret;
 	u8 old;

-	old = ACCESS_ONCE(zero_stats);
+	old = READ_ONCE(zero_stats);
 	if (unlikely(old)) {
 		ret = cmpxchg(&zero_stats, old, 0);
 		/* This ensures only one fellow resets the stat */
@@ -727,6 +727,7 @@ __visible void kvm_lock_spinning(struct arch_spinlock *lock, __ticket_t want)
 	int cpu;
 	u64 start;
 	unsigned long flags;
+	__ticket_t head;

 	if (in_nmi())
 		return;
@@ -768,11 +769,15 @@ __visible void kvm_lock_spinning(struct arch_spinlock *lock, __ticket_t want)
 	 */
 	__ticket_enter_slowpath(lock);

+	/* make sure...
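The truncated comment above ("make sure...") marks the ordering fix V5 adds: the flag store must be globally visible before the head re-read. That the final patch uses smp_mb() exactly as sketched below is an assumption here, but some full barrier is required, or this store-then-load pair and the unlocker's mirror-image pair can both read stale values and the kick is lost. A minimal C11 analogue of the pattern:

#include <stdatomic.h>
#include <stdio.h>

static atomic_int slowpath_flag;	/* __ticket_enter_slowpath() target */
static atomic_int head;			/* lock->tickets.head stand-in */

int main(void)
{
	int waiter_sees_head, unlocker_sees_flag;

	/* Waiter side: set the flag, full barrier, re-check the head.
	 * seq_cst atomics stand in for smp_mb()/x86 locked ops. */
	atomic_store(&slowpath_flag, 1);
	waiter_sees_head = atomic_load(&head);

	/* Unlocker side: bump the head, full barrier, check the flag. */
	atomic_store(&head, 1);
	unlocker_sees_flag = atomic_load(&slowpath_flag);

	/* Run concurrently on two CPUs, seq_cst forbids the lost-wakeup
	 * outcome waiter_sees_head == 0 && unlocker_sees_flag == 0:
	 * at least one side must observe the other's store. */
	printf("waiter sees head=%d, unlocker sees flag=%d\n",
	       waiter_sees_head, unlocker_sees_flag);
	return 0;
}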
2014 Feb 26
0
[PATCH RFC v5 8/8] pvqspinlock, x86: Enable KVM to use qspinlock's PV support
...ck_ops to exploit KVM_FEATURE_PV_UNHALT if present.

@@ -807,8 +857,12 @@ void __init kvm_spinlock_init(void)
 	if (!kvm_para_has_feature(KVM_FEATURE_PV_UNHALT))
 		return;

+#ifdef CONFIG_QUEUE_SPINLOCK
+	pv_lock_ops.kick_cpu = kvm_kick_cpu_type;
+#else
 	pv_lock_ops.lock_spinning = PV_CALLEE_SAVE(kvm_lock_spinning);
 	pv_lock_ops.unlock_kick = kvm_unlock_kick;
+#endif
 }

 static __init int kvm_spinlock_init_jump(void)
diff --git a/kernel/Kconfig.locks b/kernel/Kconfig.locks
index f185584..a70fdeb 100644
--- a/kernel/Kconfig.locks
+++ b/kernel/Kconfig.locks
@@ -229,4 +229,4 @@ config ARCH_USE_QUEUE_SPIN...