2015 Apr 30
[PATCH 4/6] x86: introduce new pvops function spin_unlock
...ravirt.h
index 318f077..2f39129 100644
--- a/arch/x86/include/asm/paravirt.h
+++ b/arch/x86/include/asm/paravirt.h
@@ -730,6 +730,11 @@ static __always_inline void __ticket_clear_slowpath(arch_spinlock_t *lock,
PVOP_VCALL2(pv_lock_ops.clear_slowpath, lock, head);
}
+static __always_inline void __ticket_unlock(arch_spinlock_t *lock)
+{
+ PVOP_VCALL1_LOCK(pv_lock_ops.unlock, lock);
+}
+
void pv_lock_activate(void);
#endif
@@ -843,6 +848,7 @@ static inline notrace unsigned long arch_local_irq_save(void)
#undef PVOP_VCALL0
#undef PVOP_CALL0
#undef PVOP_VCALL1
+#undef PVOP_VCALL1_LOCK
#undef PVOP_CAL...
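The hunk above routes `__ticket_unlock` through a pvops call. As a rough illustration of what that indirection amounts to (a minimal sketch, not the kernel's actual macros: the `arch_spinlock_t` layout, `native_ticket_unlock`, and the plain function-pointer call standing in for `PVOP_VCALL1_LOCK` are all simplifications here), the structure is a table of lock operations that boot code points at either the native or the paravirtualized implementation:

```c
#include <assert.h>

/* Simplified stand-in for the kernel's ticket lock word. */
typedef struct {
    unsigned short head;   /* ticket currently being served */
    unsigned short tail;   /* next ticket to hand out */
} arch_spinlock_t;

/* Table of lock operations, filled in at boot.  On bare metal the
 * slot points at the native implementation, so the remaining cost is
 * one indirect call (which the real PVOP_* machinery can patch out). */
struct pv_lock_ops {
    void (*unlock)(arch_spinlock_t *lock);
};

static void native_ticket_unlock(arch_spinlock_t *lock)
{
    lock->head++;          /* serve the next waiter */
}

static struct pv_lock_ops pv_lock_ops = {
    .unlock = native_ticket_unlock,
};

/* Conceptual equivalent of PVOP_VCALL1_LOCK(pv_lock_ops.unlock, lock). */
static void __ticket_unlock(arch_spinlock_t *lock)
{
    pv_lock_ops.unlock(lock);
}
```

A hypervisor would install its own `unlock` that also kicks any vCPU spinning on the lock; on bare metal the native slot keeps the fast path small enough to inline.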
2015 Apr 30
[PATCH 0/6] x86: reduce paravirtualized spinlock overhead
Paravirtualized spinlocks produce some overhead even when the kernel is
running on bare metal. The main reason is the increased complexity of the
locking and unlocking functions. Unlocking in particular is no longer a
single instruction but has become so complex that it is no longer inlined.
This patch series addresses this issue by adding two more pvops
functions to reduce the size of the inlined spinlock functions. When
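The cover letter's point about unlocking once being a single instruction can be seen in a plain ticket lock. The sketch below is a hypothetical user-space model (the `ticket_lock` struct and function names are illustrative, not the kernel's): without paravirtualization, unlock is just one atomic increment of the head field, which is trivially inlinable.

```c
#include <stdatomic.h>

/* Minimal ticket lock model: 'head' is the ticket being served,
 * 'tail' is the next ticket to hand out. */
struct ticket_lock {
    atomic_ushort head;
    atomic_ushort tail;
};

static void ticket_lock_acquire(struct ticket_lock *l)
{
    /* Take a ticket, then spin until it is being served. */
    unsigned short me = atomic_fetch_add(&l->tail, 1);
    while (atomic_load(&l->head) != me)
        ;
}

static void ticket_lock_release(struct ticket_lock *l)
{
    /* The whole unlock path: a single atomic increment --
     * on x86 this compiles down to one locked add. */
    atomic_fetch_add(&l->head, 1);
}
```

The pvops variants have to do more work (e.g. detect and kick waiters under a hypervisor), which is what pushes unlock out of line and motivates the series.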