search for: __asm_spinlock_h

Displaying 20 results from an estimated 20 matches for "__asm_spinlock_h".

2020 Jul 02
0
[PATCH 5/8] powerpc/64s: implement queued spinlocks and rwlocks
...neric/qspinlock.h>
+
+#endif /* _ASM_POWERPC_QSPINLOCK_H */
diff --git a/arch/powerpc/include/asm/spinlock.h b/arch/powerpc/include/asm/spinlock.h
index 21357fe05fe0..434615f1d761 100644
--- a/arch/powerpc/include/asm/spinlock.h
+++ b/arch/powerpc/include/asm/spinlock.h
@@ -3,7 +3,12 @@
 #define __ASM_SPINLOCK_H
 #ifdef __KERNEL__
 
+#ifdef CONFIG_PPC_QUEUED_SPINLOCKS
+#include <asm/qspinlock.h>
+#include <asm/qrwlock.h>
+#else
 #include <asm/simple_spinlock.h>
+#endif
 
 #endif /* __KERNEL__ */
 #endif /* __ASM_SPINLOCK_H */
diff --git a/arch/powerpc/include/asm/spinlock_types.h b/arch/p...
2020 Jul 06
0
[PATCH v3 4/6] powerpc/64s: implement queued spinlocks and rwlocks
...neric/qspinlock.h>
+
+#endif /* _ASM_POWERPC_QSPINLOCK_H */
diff --git a/arch/powerpc/include/asm/spinlock.h b/arch/powerpc/include/asm/spinlock.h
index 21357fe05fe0..434615f1d761 100644
--- a/arch/powerpc/include/asm/spinlock.h
+++ b/arch/powerpc/include/asm/spinlock.h
@@ -3,7 +3,12 @@
 #define __ASM_SPINLOCK_H
 #ifdef __KERNEL__
 
+#ifdef CONFIG_PPC_QUEUED_SPINLOCKS
+#include <asm/qspinlock.h>
+#include <asm/qrwlock.h>
+#else
 #include <asm/simple_spinlock.h>
+#endif
 
 #endif /* __KERNEL__ */
 #endif /* __ASM_SPINLOCK_H */
diff --git a/arch/powerpc/include/asm/spinlock_types.h b/arch/p...
2020 Jul 09
4
[PATCH v3 5/6] powerpc/pseries: implement paravirt qspinlocks for SPLPAR
...6ec72282888d 100644
--- a/arch/powerpc/include/asm/spinlock.h
+++ b/arch/powerpc/include/asm/spinlock.h
@@ -10,5 +10,9 @@
 #include <asm/simple_spinlock.h>
 #endif
 
+#ifndef CONFIG_PARAVIRT_SPINLOCKS
+static inline void pv_spinlocks_init(void) { }
+#endif
+
 #endif /* __KERNEL__ */
 #endif /* __ASM_SPINLOCK_H */

cheers
2020 Jul 24
8
[PATCH v4 0/6] powerpc: queued spinlocks and rwlocks
Updated with everybody's feedback (thanks all), and more performance results. What I've found is I might have been measuring the worst load point for the paravirt case, and by looking at a range of loads it's clear that queued spinlocks are overall better even on PV, doubly so when you look at the generally much improved worst case latencies. I have defaulted it to N even though
2020 Jul 09
0
[PATCH v3 5/6] powerpc/pseries: implement paravirt qspinlocks for SPLPAR
...k.h
> +++ b/arch/powerpc/include/asm/spinlock.h
> @@ -10,5 +10,9 @@
>  #include <asm/simple_spinlock.h>
>  #endif
> 
> +#ifndef CONFIG_PARAVIRT_SPINLOCKS
> +static inline void pv_spinlocks_init(void) { }
> +#endif
> +
>  #endif /* __KERNEL__ */
>  #endif /* __ASM_SPINLOCK_H */
> 
> cheers

We don't really need to do a pv_spinlocks_init() if pv_kick() isn't supported.

Cheers,
Longman
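As a side note on that remark, here is a hedged sketch (not code from the patch series) of why pv_spinlocks_init() only matters when the hypervisor backs pv_wait()/pv_kick(): the init step exists solely to set up the hash that lets an unlocker kick a sleeping waiter. The is_shared_processor() gate and the function body are assumptions for illustration; __pv_init_lock_hash() is the generic pv-qspinlock setup hook.

/*
 * Illustrative sketch only, not code from the patch series.
 * Without CONFIG_PARAVIRT_SPINLOCKS the empty inline stub shown in the
 * diff above is used instead and none of this exists.
 */
#ifdef CONFIG_PARAVIRT_SPINLOCKS
extern void __pv_init_lock_hash(void);	/* generic pv-qspinlock hash setup */

void pv_spinlocks_init(void)
{
	/*
	 * Assumption: only switch to the paravirt slow path when vCPUs can
	 * really be preempted (shared-processor LPAR), i.e. when pv_wait()
	 * and pv_kick() are backed by hypervisor calls.
	 */
	if (!is_shared_processor())
		return;

	__pv_init_lock_hash();
}
#endif /* CONFIG_PARAVIRT_SPINLOCKS */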
2020 Jul 06
0
[PATCH v3 3/6] powerpc: move spinlock implementation to simple_spinlock
...rwlock_t;
+
+#define __ARCH_RW_LOCK_UNLOCKED { 0 }
+
+#endif
diff --git a/arch/powerpc/include/asm/spinlock.h b/arch/powerpc/include/asm/spinlock.h
index 79be9bb10bbb..21357fe05fe0 100644
--- a/arch/powerpc/include/asm/spinlock.h
+++ b/arch/powerpc/include/asm/spinlock.h
@@ -3,290 +3,7 @@
 #define __ASM_SPINLOCK_H
 #ifdef __KERNEL__
 
-/*
- * Simple spin lock operations.
- *
- * Copyright (C) 2001-2004 Paul Mackerras <paulus at au.ibm.com>, IBM
- * Copyright (C) 2001 Anton Blanchard <anton at au.ibm.com>, IBM
- * Copyright (C) 2002 Dave Engebretsen <engebret at us.ibm.com>, IBM
- * Rework...
2019 Dec 26
0
[PATCH v2 5/6] KVM: arm64: Add interface to support VCPU preempted check
...;
 #include <asm/qspinlock.h>
+#include <asm/paravirt.h>
 
 /* See include/linux/spinlock.h */
 #define smp_mb__after_spinlock() smp_mb()
 
+#define vcpu_is_preempted vcpu_is_preempted
+static inline bool vcpu_is_preempted(long cpu)
+{
+	return pv_vcpu_is_preempted(cpu);
+}
+
 #endif /* __ASM_SPINLOCK_H */
diff --git a/arch/arm64/kernel/Makefile b/arch/arm64/kernel/Makefile
index fc6488660f64..b23cdae433a4 100644
--- a/arch/arm64/kernel/Makefile
+++ b/arch/arm64/kernel/Makefile
@@ -50,7 +50,7 @@ obj-$(CONFIG_ARMV8_DEPRECATED) += armv8_deprecated.o
 obj-$(CONFIG_ACPI) += acpi.o
 obj-$(CONFIG_ACP...
2020 Jul 03
7
[PATCH v2 0/6] powerpc: queued spinlocks and rwlocks
v2 is updated to account for feedback from Will, Peter, and Waiman (thank you), and trims off a couple of RFC and unrelated patches.

Thanks,
Nick

Nicholas Piggin (6):
  powerpc/powernv: must include hvcall.h to get PAPR defines
  powerpc/pseries: move some PAPR paravirt functions to their own file
  powerpc: move spinlock implementation to simple_spinlock
  powerpc/64s: implement queued
2020 Jul 06
13
[PATCH v3 0/6] powerpc: queued spinlocks and rwlocks
v3 is updated to use __pv_queued_spin_unlock, noticed by Waiman (thank you).

Thanks,
Nick

Nicholas Piggin (6):
  powerpc/powernv: must include hvcall.h to get PAPR defines
  powerpc/pseries: move some PAPR paravirt functions to their own file
  powerpc: move spinlock implementation to simple_spinlock
  powerpc/64s: implement queued spinlocks and rwlocks
  powerpc/pseries: implement paravirt
2019 Dec 26
1
[PATCH v2 5/6] KVM: arm64: Add interface to support VCPU preempted check
Hi Zengruan,

Thank you for the patch! Yet something to improve:

[auto build test ERROR on kvmarm/next]
[also build test ERROR on kvm/linux-next linus/master v5.5-rc3 next-20191220]
[cannot apply to arm64/for-next/core]
[if your patch is applied to the wrong git tree, please drop us a note to help improve the system. BTW, we also suggest to use '--base' option to specify the base tree in
2020 Jul 02
12
[PATCH 0/8] powerpc: queued spinlocks and rwlocks
This series adds an option to use queued spinlocks for powerpc, and makes it the default for the Book3S-64 subarch. This effort starts with the generic code so it's very simple but still very performant. There are optimisations that can be made to slowpaths, but I think it's better to attack those incrementally if/when we find things, and try to add the improvements to generic code as
2019 Dec 26
7
[PATCH v2 0/6] KVM: arm64: VCPU preempted check support
This patch set aims to support the vcpu_is_preempted() functionality under KVM/arm64, which allows the guest to check whether a VCPU is currently running or not. This will enhance lock performance on overcommitted hosts (more runnable VCPUs than physical CPUs in the system), as doing busy waits for preempted VCPUs will hurt system performance far worse than early yielding. We have observed some
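To make the mechanism concrete, the sketch below (illustrative only, not code from this series) shows the pattern the cover letter describes: a guest-side spin loop checks vcpu_is_preempted() and gives up early instead of busy-waiting on a lock whose holder's VCPU has been scheduled out by the host. The spin_on_owner() helper and its arguments are hypothetical; vcpu_is_preempted() is the generic interface from include/linux/sched.h that these patches hook up for arm64.

#include <linux/atomic.h>	/* atomic_t, atomic_read() */
#include <linux/sched.h>	/* vcpu_is_preempted() */
#include <asm/processor.h>	/* cpu_relax() */

/*
 * Hypothetical helper: spin while *locked is set, on the assumption that
 * owner_cpu is the CPU holding the lock. Return false once spinning stops
 * being useful so the caller can yield or sleep instead.
 */
static bool spin_on_owner(int owner_cpu, atomic_t *locked)
{
	while (atomic_read(locked)) {
		/*
		 * The owner's VCPU is not running on the host right now, so
		 * every further iteration is wasted work: take the slow path.
		 */
		if (vcpu_is_preempted(owner_cpu))
			return false;
		cpu_relax();
	}
	return true;
}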
2019 Dec 17
10
[PATCH 0/5] KVM: arm64: vcpu preempted check support
From: Zengruan Ye <yezengruan at huawei.com>

This patch set aims to support the vcpu_is_preempted() functionality under KVM/arm64, which allows the guest to check whether a vcpu is currently running or not. This will enhance lock performance on overcommitted hosts (more runnable vcpus than physical cpus in the system), as doing busy waits for preempted vcpus will hurt system performance far
2013 Feb 22
48
[PATCH v3 00/46] initial arm v8 (64-bit) support
This round implements all of the review comments from V2 and all patches are now acked. Unless there are any objections I intend to apply later this morning. Ian.
2012 Jan 09
39
[PATCH v4 00/25] xen: ARMv7 with virtualization extensions
Hello everyone, this is the fourth version of the patch series that introduces ARMv7 with virtualization extensions support in Xen. The series allows Xen and Dom0 to boot on a Cortex-A15 based Versatile Express simulator. See the following announce email for more information about what we are trying to achieve, as well as the original git history: See
2011 Dec 06
57
[PATCH RFC 00/25] xen: ARMv7 with virtualization extensions
Hello everyone, this is the very first version of the patch series that introduces ARMv7 with virtualization extensions support in Xen. The series allows Xen and Dom0 to boot on a Cortex-A15 based Versatile Express simulator. See the following announce email for more information about what we are trying to achieve, as well as the original git history: See
2013 Jan 23
132
[PATCH 00/45] initial arm v8 (64-bit) support
First off, apologies for the massive patch series... This series boots a 32-bit dom0 kernel to a command prompt on an ARMv8 (AArch64) model. The kernel is the same one as I am currently using with the 32-bit hypervisor. I haven't yet tried starting a guest or anything super advanced like that ;-). Also there is no real support for 64-bit domains at all, although in one or two places I