search for: xen_qlock_kick

Displaying results from an estimated 36 matches for "xen_qlock_kick".

2020 Aug 11
3
[PATCH] x86/paravirt: Add missing noinstr to arch_local*() helpers
> ...nd safe_halt() are more paravirt calls, but given we're in
> a KVM paravirt call already, I suppose we can directly use native_*()
> here.
>
> Something like so then... I suppose, but then the Xen variants need TLC
> too.

Just to be sure I understand you correct:

You mean that xen_qlock_kick() and xen_qlock_wait() and all functions
called by those should gain the "notrace" attribute, right?

I am not sure why the kick variants need it, though. IMO those are
called only after the lock has been released, so they should be fine
without notrace.

And again: we shouldn't forge...
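For orientation: "notrace" keeps a function out of ftrace, while "noinstr" (the subject of the patch) is stricter still, forbidding any instrumentation. A minimal sketch of the annotation being debated, applied to the kick helper whose body appears in the diffs further down this page; whether the kick side actually needs it is exactly the open question in this thread:

    /* Illustration only, not the patch under review: the same one-line
     * body as in the -v15 diff below, with the "notrace" attribute the
     * thread is discussing.  The counter-argument above is that this
     * runs only after the lock byte is released, so tracing it should
     * be harmless. */
    static void notrace xen_qlock_kick(int cpu)
    {
            xen_send_IPI_one(cpu, XEN_SPIN_UNLOCK_VECTOR);
    }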
2015 Mar 19
0
[Xen-devel] [PATCH 0/9] qspinlock stuff -v15
...nclude <asm/qspinlock.h>
+
+PV_CALLEE_SAVE_REGS_THUNK(__pv_queue_spin_unlock);
+
+static void xen_qlock_wait(u8 *ptr, u8 val)
+{
+	int irq = __this_cpu_read(lock_kicker_irq);
+
+	xen_clear_irq_pending(irq);
+
+	barrier();
+
+	if (READ_ONCE(*ptr) == val)
+		xen_poll_irq(irq);
+}
+
+static void xen_qlock_kick(int cpu)
+{
+	xen_send_IPI_one(cpu, XEN_SPIN_UNLOCK_VECTOR);
+}
+
+#else
+
 struct xen_lock_waiting {
 	struct arch_spinlock *lock;
 	__ticket_t want;
 };
-static DEFINE_PER_CPU(int, lock_kicker_irq) = -1;
-static DEFINE_PER_CPU(char *, irq_name);
 static DEFINE_PER_CPU(struct xen_lock_waiting, l...
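Together these two hooks implement the pv-qspinlock contract: wait(ptr, val) blocks the vCPU while the byte at ptr still holds val, and kick(cpu) wakes it again. Note the ordering above: the pending bit is cleared before the lock byte is re-checked, so a kick that lands in between leaves the IRQ pending and xen_poll_irq() returns at once instead of losing the wakeup. A heavily abridged sketch of the generic caller side, borrowing names (pv_wait, pv_kick, trylock_clear_pending, _Q_SLOW_VAL, struct pv_node) from kernel/locking/qspinlock_paravirt.h; the real slow path tracks considerably more state:

    /* Waiter side: spin a bounded number of times, then yield the vCPU. */
    static void pv_wait_head_sketch(struct qspinlock *lock)
    {
            int loop;

            for (;;) {
                    for (loop = SPIN_THRESHOLD; loop; loop--) {
                            if (trylock_clear_pending(lock))
                                    return;	/* lock acquired */
                            cpu_relax();
                    }
                    /* Sleep until kicked; on Xen this is xen_qlock_wait(). */
                    pv_wait(&lock->locked, _Q_SLOW_VAL);
            }
    }

    /* Unlocker side: release the byte, then wake the queue head;
     * on Xen the kick ends up in xen_qlock_kick(). */
    static void pv_unlock_sketch(struct qspinlock *lock, struct pv_node *node)
    {
            smp_store_release(&lock->locked, 0);
            pv_kick(node->cpu);
    }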
2015 Apr 07
0
[PATCH v15 12/15] pvqspinlock, x86: Enable PV qspinlock for Xen
...lock.c
@@ -17,6 +17,55 @@
 #include "xen-ops.h"
 #include "debugfs.h"

+static DEFINE_PER_CPU(int, lock_kicker_irq) = -1;
+static DEFINE_PER_CPU(char *, irq_name);
+static bool xen_pvspin = true;
+
+#ifdef CONFIG_QUEUE_SPINLOCK
+
+#include <asm/qspinlock.h>
+
+static void xen_qlock_kick(int cpu)
+{
+	xen_send_IPI_one(cpu, XEN_SPIN_UNLOCK_VECTOR);
+}
+
+/*
+ * Halt the current CPU & release it back to the host
+ */
+static void xen_qlock_wait(u8 *byte, u8 val)
+{
+	int irq = __this_cpu_read(lock_kicker_irq);
+
+	/* If kicker interrupts not initialized yet, just spin */
+	if (ir...
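The snippet cuts off mid-condition; based on the version of this function that landed in mainline arch/x86/xen/spinlock.c, it plausibly continues along these lines (a reconstruction from memory, not the literal patch text):

    static void xen_qlock_wait(u8 *byte, u8 val)
    {
            int irq = __this_cpu_read(lock_kicker_irq);

            /* If kicker interrupts not initialized yet, just spin */
            if (irq == -1)
                    return;

            /* clear pending */
            xen_clear_irq_pending(irq);
            barrier();

            /*
             * Re-check the byte only after clearing the pending bit: if
             * an unlock kick arrived in between, the IRQ is pending again
             * and xen_poll_irq() returns immediately rather than blocking.
             */
            if (READ_ONCE(*byte) == val)
                    xen_poll_irq(irq);
    }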
2016 Oct 28
1
[Xen-devel] [PATCH v6 10/11] x86, xen: support vcpu preempted check
> ...ons due to us
>  * using paravirt patching and jump labels patching and having to do
> @@ -137,6 +136,8 @@ void __init xen_init_spinlocks(void)
>  	pv_lock_ops.queued_spin_unlock = PV_CALLEE_SAVE(__pv_queued_spin_unlock);
>  	pv_lock_ops.wait = xen_qlock_wait;
>  	pv_lock_ops.kick = xen_qlock_kick;
> +
> +	pv_lock_ops.vcpu_is_preempted = xen_vcpu_stolen;
> }
>
> /*
> --
> 2.4.11
2016 Oct 20
0
[PATCH v5 7/9] x86, xen: support vcpu preempted check
...split in two init functions due to us
 * using paravirt patching and jump labels patching and having to do
@@ -137,6 +136,8 @@ void __init xen_init_spinlocks(void)
 	pv_lock_ops.queued_spin_unlock = PV_CALLEE_SAVE(__pv_queued_spin_unlock);
 	pv_lock_ops.wait = xen_qlock_wait;
 	pv_lock_ops.kick = xen_qlock_kick;
+
+	pv_lock_ops.vcpu_is_preempted = xen_vcpu_stolen;
 }

 /*
--
2.4.11
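The new vcpu_is_preempted hook lets lock waiters stop spinning on a holder whose vCPU is not actually running. For Xen it points at xen_vcpu_stolen(), which (per drivers/xen/time.c, modulo kernel version) boils down to a runstate check, roughly:

    /* Sketch of the target helper: a vCPU that is runnable but not
     * currently running has been preempted by the hypervisor. */
    bool xen_vcpu_stolen(int vcpu)
    {
            return per_cpu(xen_runstate, vcpu).state == RUNSTATE_runnable;
    }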
2016 Oct 28
0
[PATCH v6 10/11] x86, xen: support vcpu preempted check
...split in two init functions due to us
 * using paravirt patching and jump labels patching and having to do
@@ -137,6 +136,8 @@ void __init xen_init_spinlocks(void)
 	pv_lock_ops.queued_spin_unlock = PV_CALLEE_SAVE(__pv_queued_spin_unlock);
 	pv_lock_ops.wait = xen_qlock_wait;
 	pv_lock_ops.kick = xen_qlock_kick;
+
+	pv_lock_ops.vcpu_is_preempted = xen_vcpu_stolen;
 }

 /*
--
2.4.11
2020 Aug 11
0
[PATCH] x86/paravirt: Add missing noinstr to arch_local*() helpers
> > ...iven we're in
> > a KVM paravirt call already, I suppose we can directly use native_*()
> > here.
> >
> > Something like so then... I suppose, but then the Xen variants need TLC
> > too.
>
> Just to be sure I understand you correct:
>
> You mean that xen_qlock_kick() and xen_qlock_wait() and all functions
> called by those should gain the "notrace" attribute, right?
>
> I am not sure why the kick variants need it, though. IMO those are
> called only after the lock has been released, so they should be fine
> without notrace.

The issu...
2016 Nov 15
2
[PATCH v7 06/11] x86, paravirt: Add interface to support kvm/xen vcpu preempted check
...split in two init functions due to us
 * using paravirt patching and jump labels patching and having to do
@@ -136,8 +138,7 @@ void __init xen_init_spinlocks(void)
 	pv_lock_ops.queued_spin_unlock = PV_CALLEE_SAVE(__pv_queued_spin_unlock);
 	pv_lock_ops.wait = xen_qlock_wait;
 	pv_lock_ops.kick = xen_qlock_kick;
-
-	pv_lock_ops.vcpu_is_preempted = xen_vcpu_stolen;
+	pv_lock_ops.vcpu_is_preempted = PV_CALLEE_SAVE(xen_vcpu_stolen);
 }

 /*
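The v7 change wraps the hook with PV_CALLEE_SAVE() so call sites may assume all caller-clobbered registers survive the call; that in turn requires an asm thunk generated by PV_CALLEE_SAVE_REGS_THUNK() for the target function. Abridged from arch/x86/include/asm/paravirt_types.h (details vary by kernel version):

    /* A distinct type marks function pointers whose callees preserve
     * all registers, so callers can skip the usual save/restore. */
    struct paravirt_callee_save {
            void *func;
    };

    /* Wraps the register-saving thunk that
     * PV_CALLEE_SAVE_REGS_THUNK(xen_vcpu_stolen) would emit. */
    #define PV_CALLEE_SAVE(func) \
            ((struct paravirt_callee_save) { __raw_callee_save_##func })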
2017 Feb 10
3
[PATCH v2] x86/paravirt: Don't make vcpu_is_preempted() a callee-save function
...split in two init functions due to us
 * using paravirt patching and jump labels patching and having to do
@@ -138,7 +136,7 @@ void __init xen_init_spinlocks(void)
 	pv_lock_ops.queued_spin_unlock = PV_CALLEE_SAVE(__pv_queued_spin_unlock);
 	pv_lock_ops.wait = xen_qlock_wait;
 	pv_lock_ops.kick = xen_qlock_kick;
-	pv_lock_ops.vcpu_is_preempted = PV_CALLEE_SAVE(xen_vcpu_stolen);
+	pv_lock_ops.vcpu_is_preempted = xen_vcpu_stolen;
 }

 static __init int xen_parse_nopvspin(char *arg)
--
1.8.3.1
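Whichever calling convention wins, the consumer side looks the same. For context, a typical caller, abridged from the optimistic-spinning code in kernel/locking/osq_lock.c, gives up spinning when the CPU it is waiting on has been preempted:

    /* Abridged from osq_lock(): bail out of the spin loop if we need
     * to reschedule, or if the previous waiter's (v)CPU was preempted
     * by the hypervisor and so cannot make progress. */
    if (need_resched() || vcpu_is_preempted(node_cpu(node->prev)))
            goto unqueue;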
2020 Aug 11
2
[PATCH] x86/paravirt: Add missing noinstr to arch_local*() helpers
>>> ... a KVM paravirt call already, I suppose we can directly use native_*()
>>> here.
>>>
>>> Something like so then... I suppose, but then the Xen variants need TLC
>>> too.
>>
>> Just to be sure I understand you correct:
>>
>> You mean that xen_qlock_kick() and xen_qlock_wait() and all functions
>> called by those should gain the "notrace" attribute, right?
>>
>> I am not sure why the kick variants need it, though. IMO those are
>> called only after the lock has been released, so they should be fine
>> without...
2016 Nov 16
0
[PATCH v7 06/11] x86, paravirt: Add interface to support kvm/xen vcpu preempted check
> ...ons due to us
>  * using paravirt patching and jump labels patching and having to do
> @@ -136,8 +138,7 @@ void __init xen_init_spinlocks(void)
>  	pv_lock_ops.queued_spin_unlock = PV_CALLEE_SAVE(__pv_queued_spin_unlock);
>  	pv_lock_ops.wait = xen_qlock_wait;
>  	pv_lock_ops.kick = xen_qlock_kick;
> -
> -	pv_lock_ops.vcpu_is_preempted = xen_vcpu_stolen;
> +	pv_lock_ops.vcpu_is_preempted = PV_CALLEE_SAVE(xen_vcpu_stolen);
> }
>
> /*
2017 Feb 08
4
[PATCH 1/2] x86/paravirt: Don't make vcpu_is_preempted() a callee-save function
...split in two init functions due to us
 * using paravirt patching and jump labels patching and having to do
@@ -138,7 +136,7 @@ void __init xen_init_spinlocks(void)
 	pv_lock_ops.queued_spin_unlock = PV_CALLEE_SAVE(__pv_queued_spin_unlock);
 	pv_lock_ops.wait = xen_qlock_wait;
 	pv_lock_ops.kick = xen_qlock_kick;
-	pv_lock_ops.vcpu_is_preempted = PV_CALLEE_SAVE(xen_vcpu_stolen);
+	pv_lock_ops.vcpu_is_preempted = xen_vcpu_stolen;
 }

 static __init int xen_parse_nopvspin(char *arg)
--
1.8.3.1
2018 Aug 10
0
[PATCH 04/10] x86/paravirt: use a single ops structure
...d)
 		printk(KERN_DEBUG "xen: PV spinlocks enabled\n");

 	__pv_init_lock_hash();
-	pv_lock_ops.queued_spin_lock_slowpath = __pv_queued_spin_lock_slowpath;
-	pv_lock_ops.queued_spin_unlock = PV_CALLEE_SAVE(__pv_queued_spin_unlock);
-	pv_lock_ops.wait = xen_qlock_wait;
-	pv_lock_ops.kick = xen_qlock_kick;
-	pv_lock_ops.vcpu_is_preempted = PV_CALLEE_SAVE(xen_vcpu_stolen);
+	pv_ops.pv_lock_ops.queued_spin_lock_slowpath =
+		__pv_queued_spin_lock_slowpath;
+	pv_ops.pv_lock_ops.queued_spin_unlock =
+		PV_CALLEE_SAVE(__pv_queued_spin_unlock);
+	pv_ops.pv_lock_ops.wait = xen_qlock_wait;
+	pv_ops.pv_lock_...
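This refactoring folds the previously separate pv_*_ops globals into one pv_ops instance, which is why every assignment above gains a pv_ops. prefix. A sketch of the resulting shape, with member naming as in the snippet (mainline later shortened the members, e.g. to pv_ops.lock.wait):

    /* One structure holding every paravirt operations group. */
    struct paravirt_patch_template {
            struct pv_init_ops pv_init_ops;
            struct pv_time_ops pv_time_ops;
            struct pv_cpu_ops  pv_cpu_ops;
            struct pv_irq_ops  pv_irq_ops;
            struct pv_mmu_ops  pv_mmu_ops;
            struct pv_lock_ops pv_lock_ops;
    };

    extern struct paravirt_patch_template pv_ops;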