Displaying 20 results from an estimated 111 matches for "lock_spinning".
2015 Apr 30
0
[PATCH 3/6] x86: introduce new pvops function clear_slowpath
...--git a/arch/x86/include/asm/paravirt_types.h b/arch/x86/include/asm/paravirt_types.h
index f7b0b5c..3432713 100644
--- a/arch/x86/include/asm/paravirt_types.h
+++ b/arch/x86/include/asm/paravirt_types.h
@@ -336,6 +336,7 @@ typedef u16 __ticket_t;
struct pv_lock_ops {
struct paravirt_callee_save lock_spinning;
void (*unlock_kick)(struct arch_spinlock *lock, __ticket_t ticket);
+ void (*clear_slowpath)(arch_spinlock_t *lock, __ticket_t head);
};
/* This contains all the paravirt structures: we get a convenient
diff --git a/arch/x86/include/asm/spinlock.h b/arch/x86/include/asm/spinlock.h
index 268b...
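For context, a rough sketch of what a native clear_slowpath hook could look like: it clears TICKET_SLOWPATH_FLAG from the head only when the lock appears uncontended. The head_tail union, TICKET_SLOWPATH_FLAG and TICKET_LOCK_INC are assumed from the existing x86 ticketlock headers; the code in the actual patch may differ.

static void native_clear_slowpath(arch_spinlock_t *lock, __ticket_t head)
{
    arch_spinlock_t old, new;

    /* Expected state: we hold the lock and nobody else is queued. */
    old.tickets.head = head;
    new.tickets.head = head & ~TICKET_SLOWPATH_FLAG;
    old.tickets.tail = new.tickets.head + TICKET_LOCK_INC;
    new.tickets.tail = old.tickets.tail;

    /* Clear the flag only if head/tail still match, i.e. no waiters. */
    cmpxchg(&lock->head_tail, old.head_tail, new.head_tail);
}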
2013 Jun 01
11
[PATCH RFC V9 0/19] Paravirtualized ticket spinlocks
...Xen (and for other pvops users, but
there are none at present).
PV ticketlocks keep the existing ticketlock implementation
(fastpath) as-is, but add a couple of pvops for the slow paths:
- If a CPU has been waiting for a spinlock for SPIN_THRESHOLD
iterations, then call out to the __ticket_lock_spinning() pvop,
which allows a backend to block the vCPU rather than spinning. This
pvop can set the lock into "slowpath state".
- When releasing a lock, if it is in "slowpath state", then call
__ticket_unlock_kick() to kick the next vCPU in line awake. If the
lock is no longe...
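To make the two slow-path pvops above concrete, here is a rough sketch of the contended lock path (paraphrased from the series; slowpath-flag masking and memory barriers are omitted for brevity):

register struct __raw_tickets inc = { .tail = TICKET_LOCK_INC };

inc = xadd(&lock->tickets, inc);        /* take the next ticket */
if (likely(inc.head == inc.tail))
    goto out;                           /* uncontended fast path */

for (;;) {
    unsigned count = SPIN_THRESHOLD;

    do {
        if (ACCESS_ONCE(lock->tickets.head) == inc.tail)
            goto out;                   /* our turn has come up */
        cpu_relax();
    } while (--count);

    /* Spun for SPIN_THRESHOLD iterations: let the backend
     * (Xen/KVM) block this vCPU until it is kicked. */
    __ticket_lock_spinning(lock, inc.tail);
}
out:
barrier();

The unlock side then calls __ticket_unlock_kick() only when the lock was left in slowpath state, so the uncontended case stays as cheap as a native ticketlock.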
2014 Feb 27
3
[PATCH RFC v5 7/8] pvqspinlock, x86: Add qspinlock para-virtualization support
On 02/27/2014 08:15 PM, Paolo Bonzini wrote:
[...]
>> But neither of the VCPUs being kicked here are halted -- they're either
>> running or runnable (descheduled by the hypervisor).
>
> /me actually looks at Waiman's code...
>
> Right, this is really different from pvticketlocks, where the *unlock*
> primitive wakes up a sleeping VCPU. It is more similar to PLE
2014 Feb 27
0
[PATCH RFC v5 7/8] pvqspinlock, x86: Add qspinlock para-virtualization support
...g (more than typical
> lock-hold time), and hence we are in potential overcommit.
>
> 2. multiplex kick_cpu to do directed yield in qspinlock case.
> But this may result in some ping ponging?
Actually, I think the qspinlock can work roughly the same as the
pvticketlock, using the same lock_spinning and unlock_kick hooks.
The x86-specific codepath can use bit 1 in the ->wait byte as "I have
halted, please kick me".
value = _QSPINLOCK_WAITING;
i = 0;
do {
    cpu_relax();
} while (ACCESS_ONCE(slock->lock) && i++ < BUSY_WAIT);
if (ACCESS_ONCE(slock->lock)) {
    val...
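One possible shape of the halt/kick handshake being described, with bit 1 of the ->wait byte as the "I have halted, please kick me" flag (hypothetical sketch: _QSPINLOCK_HALTED and qspinlock_kick() are made-up names, not code from this thread):

/* Waiter: advertise the halt, re-check the lock, then block in the
 * hypervisor until kicked. */
ACCESS_ONCE(slock->wait) = _QSPINLOCK_WAITING | _QSPINLOCK_HALTED;
smp_mb();
if (ACCESS_ONCE(slock->lock))
    halt();

/* Unlocker: pay for a kick only when a waiter advertised a halt. */
if (ACCESS_ONCE(slock->wait) & _QSPINLOCK_HALTED)
    qspinlock_kick(slock);              /* hypothetical kick_cpu() wrapper */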
2013 Aug 06
16
[PATCH V12 0/14] Paravirtualized ticket spinlocks
...needs to revert the two patches below to enable xen on hvm
70dd4998, f10cd522c
Changes in V12
- split out the uapi header patch.
- bail out of lock spinning in case of NMI (Gleb)
- dropped patch 18, whose benefits were inconclusive (Gleb, Ingo)
Changes in V11:
- use safe_halt in the lock_spinning path to avoid a potential problem
when irq handlers take a lock in the slowpath (Gleb)
- add a0 flag for the kick hypercall for future extension (Gleb)
- add stubs for missing architecture for kvm_vcpu_schedule() (Gleb)
- Change hypercall documentation.
- Rebased to 3.11-rc1
Changes in V10...
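The safe_halt change noted under V11 above roughly amounts to blocking like this in the lock_spinning slow path (paraphrased, not an exact quote from the series):

/* safe_halt() is "sti; hlt": interrupts are re-enabled before halting,
 * so the wakeup kick can be delivered and irq handlers that take a
 * lock in the slowpath are not locked out. */
if (arch_irqs_disabled_flags(flags))
    halt();                             /* caller ran with irqs off */
else
    safe_halt();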
2012 Mar 21
15
[PATCH RFC V6 0/11] Paravirtualized ticketlocks
...Xen (and for other pvops users, but
there are none at present).
PV ticketlocks keep the existing ticketlock implementation
(fastpath) as-is, but add a couple of pvops for the slow paths:
- If a CPU has been waiting for a spinlock for SPIN_THRESHOLD
iterations, then call out to the __ticket_lock_spinning() pvop,
which allows a backend to block the vCPU rather than spinning. This
pvop can set the lock into "slowpath state".
- When releasing a lock, if it is in "slowpath state", then call
__ticket_unlock_kick() to kick the next vCPU in line awake. If the
lock is no longe...
2012 Apr 19
13
[PATCH RFC V7 0/12] Paravirtualized ticketlocks
...Xen (and for other pvops users, but
there are none at present).
PV ticketlocks keep the existing ticketlock implementation
(fastpath) as-is, but add a couple of pvops for the slow paths:
- If a CPU has been waiting for a spinlock for SPIN_THRESHOLD
iterations, then call out to the __ticket_lock_spinning() pvop,
which allows a backend to block the vCPU rather than spinning. This
pvop can set the lock into "slowpath state".
- When releasing a lock, if it is in "slowpath state", then call
__ticket_unlock_kick() to kick the next vCPU in line awake. If the
lock is no longe...
2010 Nov 16
23
[PATCH 00/14] PV ticket locks without expanding spinlock
...ticketlock: make __ticket_spin_lock common
x86/ticketlock: make __ticket_spin_trylock common
x86/spinlocks: replace pv spinlocks with pv ticketlocks
x86/ticketlock: collapse a layer of functions
xen/pvticketlock: Xen implementation for PV ticket locks
x86/pvticketlock: use callee-save for lock_spinning
x86/ticketlock: don't inline _spin_unlock when using paravirt
spinlocks
x86/ticketlocks: when paravirtualizing ticket locks, increment by 2
x86/ticketlock: add slowpath logic
x86/ticketlocks: tidy up __ticket_unlock_kick()
arch/x86/Kconfig | 3 +
arch/x86/i...
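The "increment by 2" patch in the list above frees the low bit of the ticket so it can serve as the slowpath flag, roughly (paraphrasing the resulting x86 spinlock_types.h):

#ifdef CONFIG_PARAVIRT_SPINLOCKS
#define __TICKET_LOCK_INC    2                  /* tickets step by 2 ... */
#define TICKET_SLOWPATH_FLAG ((__ticket_t)1)    /* ... so bit 0 is free */
#else
#define __TICKET_LOCK_INC    1
#define TICKET_SLOWPATH_FLAG ((__ticket_t)0)
#endif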
2010 Nov 03
25
[PATCH 00/20] x86: ticket lock rewrite and paravirtualization
...6/ticketlock: make __ticket_spin_trylock common
x86/spinlocks: replace pv spinlocks with pv ticketlocks
x86/ticketlock: collapse a layer of functions
xen/pvticketlock: Xen implementation for PV ticket locks
x86/pvticketlock: keep count of blocked cpus
x86/pvticketlock: use callee-save for lock_spinning
x86/pvticketlock: use callee-save for unlock_kick as well
x86/pvticketlock: make sure unlock is seen by everyone before
checking waiters
x86/ticketlock: loosen ordering restraints on unlock
x86/ticketlock: prevent compiler reordering into locked region
x86/ticketlock: don't inline...
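The two callee-save patches in the list above register the slow-path hooks through the pvops callee-save machinery, so the call sites inlined into every lock/unlock clobber fewer registers. Roughly (registration paraphrased, not quoted from the series):

PV_CALLEE_SAVE_REGS_THUNK(xen_lock_spinning);
PV_CALLEE_SAVE_REGS_THUNK(xen_unlock_kick);

pv_lock_ops.lock_spinning = PV_CALLEE_SAVE(xen_lock_spinning);
pv_lock_ops.unlock_kick   = PV_CALLEE_SAVE(xen_unlock_kick);

(The 2015 pv_lock_ops quoted at the top of this page shows unlock_kick as a plain function pointer, so later versions appear to have kept only lock_spinning callee-save.)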