search for: taken_slow_pickup

Displaying 20 results from an estimated 103 matches for "taken_slow_pickup".

2015 Feb 13
3
[PATCH V4] x86 spinlock: Fix memory corruption on completing completions
...m_lock_spinning(struct arch_spinlock *lock, __ticket_t want)
	 * check again make sure it didn't become free while
	 * we weren't looking.
	 */
-	if (ACCESS_ONCE(lock->tickets.head) == want) {
+	head = READ_ONCE(lock->tickets.head);
+	if (__tickets_equal(head, want)) {
		add_stats(TAKEN_SLOW_PICKUP, 1);
		goto out;
	}
@@ -803,8 +805,8 @@ static void kvm_unlock_kick(struct arch_spinlock *lock, __ticket_t ticket)
	add_stats(RELEASED_SLOW, 1);
	for_each_cpu(cpu, &waiting_cpus) {
		const struct kvm_lock_waiting *w = &per_cpu(klock_waiting, cpu);
-		if (ACCESS_ONCE(w->lock) == lock...
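The helper this diff leans on, __tickets_equal(), compares two tickets while ignoring the slowpath-flag bit, so a head that has the flag set still matches the waiter's ticket. A standalone sketch of that idea (the typedef, flag value, and demo main() are assumptions for illustration; on x86 the ticket type is u8 or u16 depending on NR_CPUS):

#include <stdio.h>

typedef unsigned short __ticket_t;            /* assumed 16-bit ticket */
#define TICKET_SLOWPATH_FLAG ((__ticket_t)1)  /* low bit marks the slowpath */

/* Equal iff the tickets differ at most in the slowpath flag bit. */
static inline int __tickets_equal(__ticket_t one, __ticket_t two)
{
	return !((one ^ two) & ~TICKET_SLOWPATH_FLAG);
}

int main(void)
{
	/* Same ticket with and without the flag still matches ... */
	printf("%d\n", __tickets_equal(0x42, 0x42 | TICKET_SLOWPATH_FLAG)); /* 1 */
	/* ... while genuinely different tickets do not. */
	printf("%d\n", __tickets_equal(0x42, 0x44));                        /* 0 */
	return 0;
}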
2015 Feb 15
1
[PATCH V4] x86 spinlock: Fix memory corruption on completing completions
...
>> 	 * check again make sure it didn't become free while
>> 	 * we weren't looking.
>> 	 */
>> -	if (ACCESS_ONCE(lock->tickets.head) == want) {
>> +	head = READ_ONCE(lock->tickets.head);
>> +	if (__tickets_equal(head, want)) {
>> 		add_stats(TAKEN_SLOW_PICKUP, 1);
>> 		goto out;
>
> This is off-topic, but with or without this change perhaps it makes sense
> to add smp_mb__after_atomic(). It is nop on x86, just to make this code
> more understandable for those (for me ;) who can never remember even the
> x86 rules.
> Hope you m...
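The suggestion quoted above concerns ordering between the atomic operation that sets the slowpath flag and the subsequent re-read of the head. A loose userspace analogue of that pattern, compilable but with all names invented for the sketch (this is not the kernel code):

#include <stdatomic.h>

static _Atomic unsigned short head;   /* stands in for lock->tickets.head */
static _Atomic unsigned short flags;  /* where the slowpath flag would live */

/* Publish "this cpu will block" with an atomic RMW, then re-check the
 * head before actually halting.  The fence spells out the RMW -> load
 * ordering; on x86 a locked RMW is already a full barrier, which is why
 * smp_mb__after_atomic() is a nop there and serves mostly as documentation. */
static int should_halt(unsigned short want)
{
	atomic_fetch_or(&flags, 1);                 /* set the slowpath flag */
	atomic_thread_fence(memory_order_seq_cst);  /* smp_mb__after_atomic() analogue */
	return atomic_load(&head) != want;          /* lock still taken: go halt */
}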
2015 Feb 16
1
[Xen-devel] [PATCH V5] x86 spinlock: Fix memory corruption on completing completions
...atomic();
> +
> 	/*
> 	 * check again make sure it didn't become free while
> 	 * we weren't looking
> 	 */
> -	if (ACCESS_ONCE(lock->tickets.head) == want) {
> +	head = READ_ONCE(lock->tickets.head);
> +	if (__tickets_equal(head, want)) {
> 		add_stats(TAKEN_SLOW_PICKUP, 1);
> 		goto out;
> 	}
> @@ -204,8 +209,8 @@ static void xen_unlock_kick(struct arch_spinlock *lock, __ticket_t next)
> 	const struct xen_lock_waiting *w = &per_cpu(lock_waiting, cpu);
>
> 	/* Make sure we read lock before want */
> -	if (ACCESS_ONCE(w->lock) =...
2012 Jan 14
14
[PATCH RFC V4 0/5] kvm : Paravirt-spinlock support for KVM guests
The 5-patch series to follow this email extends the KVM hypervisor and a Linux guest running on it to support pv-ticket spinlocks, based on Xen's implementation. One hypercall is introduced in the KVM hypervisor that allows a vcpu to kick another vcpu out of halt state. Blocking of a vcpu is done using halt() in the (lock_spinning) slowpath.

Changes in V4:
- rebased to 3.2.0 pre.
- use APIC
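In outline, the mechanism this cover letter describes: a waiter records which lock and ticket it is blocked on, halts, and the releaser scans the waiters and issues the kick hypercall for the matching vcpu. A condensed, compilable model of that flow (every name here is invented for the sketch; the real code uses per-cpu kernel state):

#include <stdatomic.h>
#include <stddef.h>

#define NCPUS 4

struct waiter { _Atomic(void *) lock; _Atomic unsigned want; };
static struct waiter waiters[NCPUS];

static void halt_vcpu(int cpu) { (void)cpu; /* stand-in for guest halt() */ }
static void kick_vcpu(int cpu) { (void)cpu; /* stand-in for the kick hypercall */ }

/* Waiter slowpath: record the ticket, then publish the lock pointer,
 * re-check the head once, and block until kicked. */
static void lock_spinning(int cpu, void *lock, unsigned want,
                          _Atomic unsigned *head)
{
	atomic_store_explicit(&waiters[cpu].want, want, memory_order_relaxed);
	atomic_store_explicit(&waiters[cpu].lock, lock, memory_order_release);

	if (atomic_load(head) != want)    /* didn't become free meanwhile */
		halt_vcpu(cpu);

	atomic_store(&waiters[cpu].lock, NULL);
}

/* Release side: find the vcpu blocked on this ticket and kick it awake.
 * Reading lock before want mirrors the waiter's write order above. */
static void unlock_kick(void *lock, unsigned next)
{
	for (int cpu = 0; cpu < NCPUS; cpu++) {
		if (atomic_load_explicit(&waiters[cpu].lock, memory_order_acquire) == lock &&
		    atomic_load_explicit(&waiters[cpu].want, memory_order_relaxed) == next) {
			kick_vcpu(cpu);
			break;
		}
	}
}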
2013 Aug 06
16
[PATCH V12 0/14] Paravirtualized ticket spinlocks
This series replaces the existing paravirtualized spinlock mechanism with a paravirtualized ticketlock mechanism. The series provides implementations for both Xen and KVM. The current set of patches is for the Xen/x86 spinlock/KVM guest side, to be included against -tip. I'll be sending a separate patchset for the KVM host based on the kvm tree. Please note I have added the below performance result
2013 Jun 24
19
[PATCH RFC V10 0/18] Paravirtualized ticket spinlocks
This series replaces the existing paravirtualized spinlock mechanism with a paravirtualized ticketlock mechanism. The series provides implementations for both Xen and KVM.

Changes in V10 (addressed Konrad's review comments):
- Added a break in patch 5, since we now know the exact cpu to wake up
- Dropped patch 12; Konrad needs to revert two patches to enable xen on hvm: 70dd4998, f10cd522c
-
2015 Feb 12
8
[PATCH V3] x86 spinlock: Fix memory corruption on completing completions
...lock_spinning(struct arch_spinlock *lock, __ticket_t want)
	 * check again make sure it didn't become free while
	 * we weren't looking.
	 */
-	if (ACCESS_ONCE(lock->tickets.head) == want) {
+	head = ACCESS_ONCE(lock->tickets.head);
+	if (__tickets_equal(head, want)) {
		add_stats(TAKEN_SLOW_PICKUP, 1);
		goto out;
	}
--
1.7.11.7
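V3 still reads the head with ACCESS_ONCE(); later revisions switch to READ_ONCE(). Both boil down to a volatile access that forces the compiler to emit exactly one load rather than caching or refetching the value. A simplified rendering of the idiom (the kernel's real READ_ONCE is more elaborate and, unlike the old ACCESS_ONCE, also copes with non-scalar types):

#include <stdio.h>

/* Simplified sketch of the ACCESS_ONCE()/READ_ONCE() idiom: a volatile
 * cast that forces a single, uncached load of x. */
#define READ_ONCE_SKETCH(x) (*(volatile __typeof__(x) *)&(x))

static unsigned short head;  /* stands in for lock->tickets.head */

int main(void)
{
	unsigned short snapshot = READ_ONCE_SKETCH(head);  /* one real load */
	printf("head snapshot: %u\n", snapshot);
	return 0;
}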
2015 Feb 15
7
[PATCH V5] x86 spinlock: Fix memory corruption on completing completions
...m_lock_spinning(struct arch_spinlock *lock, __ticket_t want)
	 * check again make sure it didn't become free while
	 * we weren't looking.
	 */
-	if (ACCESS_ONCE(lock->tickets.head) == want) {
+	head = READ_ONCE(lock->tickets.head);
+	if (__tickets_equal(head, want)) {
		add_stats(TAKEN_SLOW_PICKUP, 1);
		goto out;
	}
@@ -803,8 +805,8 @@ static void kvm_unlock_kick(struct arch_spinlock *lock, __ticket_t ticket)
	add_stats(RELEASED_SLOW, 1);
	for_each_cpu(cpu, &waiting_cpus) {
		const struct kvm_lock_waiting *w = &per_cpu(klock_waiting, cpu);
-		if (ACCESS_ONCE(w->lock) == lock...
2013 Jul 22
21
[PATCH RFC V11 0/18] Paravirtualized ticket spinlocks
This series replaces the existing paravirtualized spinlock mechanism with a paravirtualized ticketlock mechanism. The series provides implementations for both Xen and KVM.

Changes in V11:
- use safe_halt in the lock_spinning path to avoid a potential problem in case irq handlers take the lock in the slowpath (Gleb)
- add a0 flag for the kick hypercall for future extension (Gleb)
- add stubs for
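The safe_halt change matters because of how x86 sequences sti and hlt: sti takes effect only after the next instruction, so the "sti; hlt" pair leaves no window in which the wakeup interrupt could be delivered before the hlt and the kick be missed. Roughly what the kernel's native_safe_halt() does:

/* Enable interrupts and halt with no gap in between: sti's effect is
 * delayed by one instruction, so a pending wakeup interrupt cannot slip
 * in after sti but before hlt -- it wakes the cpu out of the hlt instead. */
static inline void safe_halt(void)
{
	asm volatile("sti; hlt" : : : "memory");
}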