search for: config_ppc64

Displaying 20 results from an estimated 27 matches for "config_ppc64".

2020 Jul 06
0
[PATCH v3 3/6] powerpc: move spinlock implementation to simple_spinlock
...ngebretsen <engebret at us.ibm.com>, IBM
+ * Rework to support virtual processors
+ *
+ * Type of int is used as a full 64b word is not necessary.
+ *
+ * (the type definitions are in asm/simple_spinlock_types.h)
+ */
+#include <linux/irqflags.h>
+#include <asm/paravirt.h>
+#ifdef CONFIG_PPC64
+#include <asm/paca.h>
+#endif
+#include <asm/synch.h>
+#include <asm/ppc-opcode.h>
+
+#ifdef CONFIG_PPC64
+/* use 0x800000yy when locked, where yy == CPU number */
+#ifdef __BIG_ENDIAN__
+#define LOCK_TOKEN (*(u32 *)(&get_paca()->lock_token))
+#else
+#define LOCK_TOKEN (*(...
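The CONFIG_PPC64 block above encodes the owner in the lock word: a held lock reads 0x800000yy, where yy is the holder's CPU number, which lets paravirt slowpaths see which CPU to yield to. A rough userspace analogue of that token idea, using a C11 compare-and-swap in place of the kernel's lwarx/stwcx. sequences (all names here are hypothetical, not from the patch):

#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>

static _Atomic uint32_t lock_word; /* 0 == unlocked */

static uint32_t lock_token(unsigned int cpu)
{
	/* A held lock reads 0x800000yy, yy == owner's CPU number. */
	return 0x80000000u | (cpu & 0xffu);
}

static int try_lock(unsigned int cpu)
{
	uint32_t expected = 0;
	/* One CAS stands in for the kernel's lwarx/stwcx. retry loop. */
	return atomic_compare_exchange_strong(&lock_word, &expected,
					      lock_token(cpu));
}

static unsigned int lock_owner(void)
{
	/* The low byte of a held lock is the owner's CPU number. */
	return atomic_load(&lock_word) & 0xffu;
}

int main(void)
{
	if (try_lock(3))
		printf("locked by cpu %u\n", lock_owner());
	atomic_store(&lock_word, 0); /* unlock */
	return 0;
}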
2020 Jul 03
7
[PATCH v2 0/6] powerpc: queued spinlocks and rwlocks
v2 is updated to account for feedback from Will, Peter, and Waiman (thank you), and trims off a couple of RFC and unrelated patches. Thanks, Nick

Nicholas Piggin (6):
  powerpc/powernv: must include hvcall.h to get PAPR defines
  powerpc/pseries: move some PAPR paravirt functions to their own file
  powerpc: move spinlock implementation to simple_spinlock
  powerpc/64s: implement queued
2020 Jul 02
12
[PATCH 0/8] powerpc: queued spinlocks and rwlocks
This series adds an option to use queued spinlocks for powerpc, and makes it the default for the Book3S-64 subarch. This effort starts with the generic code so it's very simple but still very performant. There are optimisations that can be made to slowpaths, but I think it's better to attack those incrementally if/when we find things, and try to add the improvements to generic code as
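For context on what "queued" buys here: instead of every CPU spinning on the one shared lock word, each waiter spins on its own queue node and the lock is handed off in FIFO order. A minimal MCS-style sketch of that idea in C11 (the kernel's qspinlock compresses this queue state into a single 32-bit word; this is only the concept, not the series' code):

#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

struct mcs_node {
	struct mcs_node *_Atomic next;
	atomic_bool locked;
};

static struct mcs_node *_Atomic tail;

static void mcs_lock(struct mcs_node *me)
{
	atomic_store(&me->next, NULL);
	atomic_store(&me->locked, true);
	/* Join the queue: atomically make ourselves the new tail. */
	struct mcs_node *prev = atomic_exchange(&tail, me);
	if (prev) {
		atomic_store(&prev->next, me);
		/* Spin on our OWN node, not the shared lock word: this is
		 * what makes the lock FIFO-fair and contention-friendly. */
		while (atomic_load(&me->locked))
			;
	}
}

static void mcs_unlock(struct mcs_node *me)
{
	struct mcs_node *next = atomic_load(&me->next);
	if (!next) {
		struct mcs_node *expected = me;
		/* No visible successor: try to reset the queue to empty. */
		if (atomic_compare_exchange_strong(&tail, &expected, NULL))
			return;
		/* A successor is mid-enqueue; wait for it to link in. */
		while (!(next = atomic_load(&me->next)))
			;
	}
	atomic_store(&next->locked, false); /* hand off the lock */
}

int main(void)
{
	struct mcs_node me;
	mcs_lock(&me);
	/* critical section */
	mcs_unlock(&me);
	return 0;
}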
2020 Jul 06
0
[PATCH v3 2/6] powerpc/pseries: move some PAPR paravirt functions to their own file
...ndex 000000000000..7a8546660a63
--- /dev/null
+++ b/arch/powerpc/include/asm/paravirt.h
@@ -0,0 +1,61 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+#ifndef __ASM_PARAVIRT_H
+#define __ASM_PARAVIRT_H
+#ifdef __KERNEL__
+
+#include <linux/jump_label.h>
+#include <asm/smp.h>
+#ifdef CONFIG_PPC64
+#include <asm/paca.h>
+#include <asm/hvcall.h>
+#endif
+
+#ifdef CONFIG_PPC_SPLPAR
+DECLARE_STATIC_KEY_FALSE(shared_processor);
+
+static inline bool is_shared_processor(void)
+{
+	return static_branch_unlikely(&shared_processor);
+}
+
+/* If bit 0 is set, the cpu has been preempte...
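The is_shared_processor() test above is what lets lock slowpaths behave differently on shared-processor LPARs. A hedged userspace sketch of the pattern it enables, with stand-ins for the static key and the hypervisor yield (the real pseries code uses hypercalls such as H_CONFER; the names below are hypothetical):

#include <stdatomic.h>
#include <stdbool.h>

/* Stand-ins (hypothetical): the kernel uses a jump-label static key
 * and a real hypervisor call here. */
static bool shared_processor;
static bool is_shared_processor(void) { return shared_processor; }
static void yield_to_hypervisor(void) { /* e.g. H_CONFER on pseries */ }

static atomic_int lock;

static void spin_lock_sketch(void)
{
	int expected = 0;
	while (!atomic_compare_exchange_weak(&lock, &expected, 1)) {
		expected = 0; /* CAS rewrites 'expected' on failure */
		/* On a shared-processor LPAR, busy-waiting burns cycles the
		 * lock holder's vCPU could be using; give them back instead. */
		if (is_shared_processor())
			yield_to_hypervisor();
	}
}

int main(void)
{
	spin_lock_sketch();
	atomic_store(&lock, 0); /* unlock */
	return 0;
}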
2020 Jul 06
13
[PATCH v3 0/6] powerpc: queued spinlocks and rwlocks
v3 is updated to use __pv_queued_spin_unlock, noticed by Waiman (thank you). Thanks, Nick

Nicholas Piggin (6):
  powerpc/powernv: must include hvcall.h to get PAPR defines
  powerpc/pseries: move some PAPR paravirt functions to their own file
  powerpc: move spinlock implementation to simple_spinlock
  powerpc/64s: implement queued spinlocks and rwlocks
  powerpc/pseries: implement paravirt
2020 Jul 24
8
[PATCH v4 0/6] powerpc: queued spinlocks and rwlocks
Updated with everybody's feedback (thanks all), and more performance results. What I've found is I might have been measuring the worst load point for the paravirt case, and by looking at a range of loads it's clear that queued spinlocks are overall better even on PV, doubly so when you look at the generally much improved worst case latencies. I have defaulted it to N even though
2016 Apr 28
2
[PATCH resend] powerpc: enable qspinlock and its virtualization support
From: Pan Xinhui <xinhui.pan at linux.vnet.ibm.com>

This patch aims to enable qspinlock on PPC. On the pseries platform, it also supports paravirt qspinlock.

Signed-off-by: Pan Xinhui <xinhui.pan at linux.vnet.ibm.com>
---
 arch/powerpc/include/asm/qspinlock.h          | 37 +++++++++++++++
 arch/powerpc/include/asm/qspinlock_paravirt.h | 36 +++++++++++++++
2016 May 17
6
[PATCH v3 0/6] powerpc use pv-qspinlock instead of spinlock
Changes from v1: separated into 6 patches from one patch; some minor code changes. Benchmark test results are below. Ran 3 tests on a pseries IBM,8408-E8E with 32 cpus, 64GB memory:

 perf bench futex hash
 perf bench futex lock-pi
 perf record -advRT || perf bench sched messaging -g 1000 || perf report

summary:
 test       | spinlock   | pv-qspinlock
 futex hash | 556370 ops |
2016 May 25
10
[PATCH v3 0/6] powerpc use pv-qspinlock as the default spinlock implementation
Changes from v2: __spin_yield_cpu() will yield slices to the lpar if the target cpu is running; removed an unnecessary rmb() in __spin_yield/wake_cpu; __pv_wait() will check that *ptr == val (see the sketch after this entry); some commit message changes. Changes from v1: separated into 6 patches from one patch; some minor code changes. I ran several tests on a pseries IBM,8408-E8E with 32 cpus, 64GB memory; benchmark test results are below. 2
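On the "__pv_wait() will check that *ptr == val" point: a paravirt wait must re-check the watched location immediately before parking the vCPU, or a wakeup that lands in between is lost and the waiter sleeps forever. A userspace sketch of that contract (stand-in names, not the patch's code; the real parking primitive would be a "sleep until kicked" hypercall):

#include <stdatomic.h>
#include <stdint.h>

static void hypervisor_park(void)
{
	/* stand-in for a "sleep until kicked" hypercall */
}

static void pv_wait_sketch(_Atomic uint8_t *ptr, uint8_t val)
{
	/* The waker updates *ptr before kicking, so this re-check closes
	 * the race between "decide to sleep" and "actually sleep". */
	if (atomic_load(ptr) != val)
		return; /* state already changed: wakeup happened, don't park */
	hypervisor_park();
}

int main(void)
{
	_Atomic uint8_t locked = 1;
	pv_wait_sketch(&locked, 1);
	return 0;
}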
2016 Jun 02
9
[PATCH v5 0/6] powerpc/pseries use pv-qspinlock as the default spinlock implementation
Changes from v4: BUG FIX, thanks to boqun for reporting this issue: struct __qspinlock has a different layout on big-endian machines, so native_queued_spin_unlock() may write the value to a wrong address; now fixed (see the sketch after this entry). Sorry for not even testing on a big-endian machine before! Changes from v3: a big change in [PATCH v4 4/6] pv-qspinlock: powerpc support pv-qspinlock; no other patch changed. and the patch
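On the big-endian layout bug: within the 32-bit lock word, the locked byte sits at byte 0 on little-endian but byte 3 on big-endian, so an unlock that blindly stores to the first byte of the word corrupts the tail field instead of releasing the lock. A self-contained sketch modeled loosely on the old struct __qspinlock (field names illustrative, from memory of that era's kernel code, not the exact layout):

#include <stdint.h>
#include <stdio.h>

union qspinlock_sketch {
	uint32_t val;
#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
	struct { uint8_t locked; uint8_t pending; uint16_t tail; };
#else
	struct { uint16_t tail; uint8_t pending; uint8_t locked; };
#endif
};

int main(void)
{
	union qspinlock_sketch l = { .val = 0 };
	l.locked = 1;           /* take the lock via the struct: always correct */
	*(uint8_t *)&l.val = 0; /* buggy unlock: on big-endian this writes the
	                           first byte of tail, NOT the locked byte */
	printf("still locked? %d (val = 0x%08x)\n", l.locked, (unsigned)l.val);
	return 0;
}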
2016 Jun 02
8
[PATCH v5 0/6] powerpc/pseries use pv-qspinlock as the default spinlock implementation
From: root <root at ltcalpine2-lp13.aus.stglabs.ibm.com> Changes from v4: BUG FIX, thanks to boqun for reporting this issue: struct __qspinlock has a different layout on big-endian machines, so native_queued_spin_unlock() may write the value to a wrong address; now fixed. Changes from v3: a big change in [PATCH v4 4/6] pv-qspinlock: powerpc support pv-qspinlock; no other patch changed. and the patch
2016 Dec 06
6
[PATCH v9 0/6] Implement qspinlock/pv-qspinlock on ppc
Hi All, this is the fair-lock patchset. You can apply the patches and build successfully; they are based on linux-next. qspinlock can avoid the waiter-starvation issue. It has about the same speed single-threaded, and it can be much faster in high-contention situations, especially when the spinlock is embedded within the data structure to be protected. v8 -> v9: moved the qspinlock config entry to
2016 Dec 05
9
[PATCH v8 0/6] Implement qspinlock/pv-qspinlock on ppc
Hi All, this is the fair-lock patchset. You can apply the patches and build successfully; they are based on linux-next. qspinlock can avoid the waiter-starvation issue. It has about the same speed single-threaded, and it can be much faster in high-contention situations, especially when the spinlock is embedded within the data structure to be protected. v7 -> v8: added one patch to drop a function call