2015 Apr 30
0
[PATCH 4/6] x86: introduce new pvops function spin_unlock
...rg in %eax, return in %eax */
@@ -24,6 +34,21 @@ unsigned paravirt_patch_ident_64(void *insnbuf, unsigned len)
return 0;
}
+unsigned paravirt_patch_unlock(void *insnbuf, unsigned len)
+{
+ switch (sizeof(__ticket_t)) {
+ case __X86_CASE_B:
+ return paravirt_patch_insns(insnbuf, len,
+ start__unlock1, end__unlock1);
+ case __X86_CASE_W:
+ return paravirt_patch_insns(insnbuf, len,
+ start__unlock2, end__unlock2);
+ default:
+ __unlock_wrong_size();
+ }
+ return 0;
+}
+
unsigned native_patch(u8 type, u16 clobbers, void *ibuf,
unsigned long addr, unsigned len)
{
diff --git a/a...
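The hunk above dispatches on `sizeof(__ticket_t)` and hands a pre-assembled byte- or word-sized unlock template to `paravirt_patch_insns()`, which copies it over the call site. As a rough illustration of that copying step, here is a minimal user-space sketch (the name `patch_insns` and the bounds handling are illustrative assumptions, not the kernel's actual implementation, which BUG()s when the template does not fit):

```c
#include <stddef.h>
#include <string.h>

/*
 * Illustrative sketch: copy a pre-assembled instruction template
 * (start..end) into the patch buffer when it fits, returning the
 * number of bytes written.  Loosely modeled on the kernel's
 * paravirt_patch_insns(); simplified, not the real implementation.
 */
static unsigned patch_insns(void *insnbuf, unsigned len,
                            const char *start, const char *end)
{
	unsigned insn_len = (unsigned)(end - start);

	if (insn_len > len)	/* template larger than the call site */
		return 0;	/* (the kernel treats this as a bug)  */
	memcpy(insnbuf, start, insn_len);
	return insn_len;
}
```

The caller (here, `paravirt_patch_unlock()`) then knows how many bytes of the original indirect-call site were replaced by the inlined unlock instruction; any remaining bytes are padded with NOPs by the generic patching code.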
2015 Apr 30
12
[PATCH 0/6] x86: reduce paravirtualized spinlock overhead
Paravirtualized spinlocks produce some overhead even when the kernel is
running on bare metal. The main reason is the more complex locking
and unlocking functions; unlocking in particular is no longer a single
instruction, but complex enough that it is no longer inlined.
This patch series addresses this issue by adding two more pvops
functions to reduce the size of the inlined spinlock functions. When