Displaying 3 results from an estimated 3 matches for "start__unlock2".
2015 Apr 30 | 0 | [PATCH 4/6] x86: introduce new pvops function spin_unlock
...en)
	return 0;
}
+unsigned paravirt_patch_unlock(void *insnbuf, unsigned len)
+{
+	switch (sizeof(__ticket_t)) {
+	case __X86_CASE_B:
+		return paravirt_patch_insns(insnbuf, len,
+					    start__unlock1, end__unlock1);
+	case __X86_CASE_W:
+		return paravirt_patch_insns(insnbuf, len,
+					    start__unlock2, end__unlock2);
+	default:
+		__unlock_wrong_size();
+	}
+	return 0;
+}
+
unsigned native_patch(u8 type, u16 clobbers, void *ibuf,
		      unsigned long addr, unsigned len)
{
diff --git a/arch/x86/kernel/paravirt_patch_64.c b/arch/x86/kernel/paravirt_patch_64.c
index a1da673..dc4d9af 100644
---...
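
For context (not part of the quoted patch): paravirt_patch_unlock above picks a pre-assembled unlock instruction template by ticket size and hands it to paravirt_patch_insns, which copies it over the patch site when it fits. The following user-space sketch only illustrates that template-copy idea; the names patch_insns, unlock_byte_tmpl and the "return 0 when it does not fit" convention are made up for illustration, not the kernel's API.

#include <stdio.h>
#include <string.h>

/*
 * Toy model of instruction-template patching: a pre-assembled byte
 * sequence delimited by start/end pointers is copied over the patch
 * site, provided it fits into the room available there.
 */
static unsigned patch_insns(void *insnbuf, unsigned len,
			    const unsigned char *start,
			    const unsigned char *end)
{
	unsigned insn_len = (unsigned)(end - start);

	if (start == NULL || insn_len > len)
		return 0;	/* template too big: leave the call in place */

	memcpy(insnbuf, start, insn_len);
	return insn_len;	/* the rest of the site would then be padded with NOPs */
}

/* Stand-in for a start__unlock1/end__unlock1 style template: incb (%rdi). */
static const unsigned char unlock_byte_tmpl[] = { 0xfe, 0x07 };

int main(void)
{
	unsigned char site[8];
	unsigned n = patch_insns(site, sizeof(site), unlock_byte_tmpl,
				 unlock_byte_tmpl + sizeof(unlock_byte_tmpl));

	printf("patched %u byte(s) at the call site\n", n);
	return 0;
}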
2015 Apr 30 | 12 | [PATCH 0/6] x86: reduce paravirtualized spinlock overhead
Paravirtualized spinlocks produce some overhead even if the kernel is
running on bare metal. The main reason is the more complex locking
and unlocking functions; unlocking in particular is no longer a single
instruction but has become complex enough that it is no longer inlined.
This patch series addresses this by adding two more pvops functions to
reduce the size of the inlined spinlock functions. When
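
To make the overhead the cover letter describes concrete, here is a simplified, self-contained sketch of the shape of the two unlock paths. All names and the lock layout (ticket_lock, SLOWPATH_FLAG, hypervisor_kick_waiter, ticket_unlock_native/_pv) are hypothetical, not kernel code: the point is only that the bare-metal unlock is one atomic increment, while the paravirtualized variant also checks for sleeping waiters and kicks them via the hypervisor, which is what pushes it out of line.

#include <stdatomic.h>
#include <stdio.h>

/* Hypothetical ticket-lock layout, used only for this illustration. */
typedef unsigned char ticket_t;
#define SLOWPATH_FLAG 0x80	/* set in 'tail' when a waiter is sleeping */

struct ticket_lock {
	_Atomic ticket_t head;	/* ticket currently being served */
	_Atomic ticket_t tail;	/* next ticket to be handed out  */
};

/* Stub standing in for the hypervisor call that wakes a blocked vCPU. */
static void hypervisor_kick_waiter(struct ticket_lock *lock, ticket_t ticket)
{
	(void)lock;
	printf("kicking waiter holding ticket %u\n", (unsigned)ticket);
}

/* Bare metal: unlock is one atomic increment and inlines trivially. */
static inline void ticket_unlock_native(struct ticket_lock *lock)
{
	atomic_fetch_add_explicit(&lock->head, 1, memory_order_release);
}

/* Paravirtualized: the extra slow-path check is what grows the function. */
static void ticket_unlock_pv(struct ticket_lock *lock)
{
	ticket_t prev = atomic_fetch_add_explicit(&lock->head, 1,
						  memory_order_seq_cst);

	/* A waiter that went to sleep flagged itself; wake it up. */
	if (atomic_load_explicit(&lock->tail, memory_order_relaxed) &
	    SLOWPATH_FLAG)
		hypervisor_kick_waiter(lock, (ticket_t)(prev + 1));
}

int main(void)
{
	struct ticket_lock lock = { 0, 0 };

	ticket_unlock_native(&lock);
	ticket_unlock_pv(&lock);
	return 0;
}

With pv spinlocks compiled in but the kernel running on bare metal, the extra branch and the larger, no-longer-inlined unlock body are pure overhead, which is what the series aims to trim.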