search for: holder_cpu

Displaying 20 results from an estimated 37 matches for "holder_cpu".

2016 Dec 06
1
[PATCH v8 3/6] powerpc: lib/locks.c: Add cpu yield/wake helper function
...+ * confer our slices to a specified cpu and return. If it is in running state > + * or cpu is -1, then we will check confer. If confer is NULL, we will return > + * otherwise we confer our slices to lpar. > + */ > +void __spin_yield_cpu(int cpu, int confer) > +{ > + unsigned int holder_cpu = cpu, yield_count; As I said at: https://marc.info/?l=linux-kernel&m=147455748619343&w=2 @holder_cpu is not necessary and doesn't help anything. > + > + if (cpu == -1) > + goto yield_to_lpar; > + > + BUG_ON(holder_cpu >= nr_cpu_ids); > + yield_count = be32_to_...
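For clarity, the reviewer's point in isolation: @holder_cpu is a verbatim copy of @cpu, so the bounds check and the yield-count read can operate on @cpu directly. A minimal before/after fragment (kernel context assumed; a sketch of the suggested simplification, not standalone code):

	/* As posted: @cpu is copied into @holder_cpu for no gain. */
	unsigned int holder_cpu = cpu, yield_count;

	BUG_ON(holder_cpu >= nr_cpu_ids);
	yield_count = be32_to_cpu(lppaca_of(holder_cpu).yield_count);

	/* Equivalent without the extra local, as the review suggests
	 * (the cast keeps the negative-value check that the unsigned
	 * copy performed implicitly): */
	unsigned int yield_count;

	BUG_ON((unsigned int)cpu >= nr_cpu_ids);
	yield_count = be32_to_cpu(lppaca_of(cpu).yield_count);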
2020 Jul 06
0
[PATCH v3 2/6] powerpc/pseries: move some PAPR paravirt functions to their own file
...rch_spinlock_t *lock) { if (is_shared_processor()) diff --git a/arch/powerpc/lib/locks.c b/arch/powerpc/lib/locks.c index 6440d5943c00..04165b7a163f 100644 --- a/arch/powerpc/lib/locks.c +++ b/arch/powerpc/lib/locks.c @@ -27,14 +27,14 @@ void splpar_spin_yield(arch_spinlock_t *lock) return; holder_cpu = lock_value & 0xffff; BUG_ON(holder_cpu >= NR_CPUS); - yield_count = be32_to_cpu(lppaca_of(holder_cpu).yield_count); + + yield_count = yield_count_of(holder_cpu); if ((yield_count & 1) == 0) return; /* virtual cpu is currently running */ rmb(); if (lock->slock != lock_val...
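Background for the parity test in this and the other excerpts: the hypervisor increments a per-vcpu yield count when it preempts the vcpu and again when it dispatches it, so an odd value means the vcpu is preempted right now (the kernel reads the counter from the lppaca; yield_count_of() in the patch wraps that read). A tiny standalone model of the convention, using a plain array as a hypothetical stand-in for the lppaca:

	#include <stdbool.h>
	#include <stdint.h>
	#include <stdio.h>

	/* Hypothetical stand-in for the per-vcpu lppaca yield counter. */
	static uint32_t yield_counts[8];

	static bool vcpu_is_preempted_model(int cpu)
	{
		/* Odd: preempted and not yet dispatched again. */
		return (yield_counts[cpu] & 1) != 0;
	}

	int main(void)
	{
		yield_counts[0] = 4;	/* even: running */
		yield_counts[1] = 5;	/* odd: preempted */
		printf("cpu0 preempted: %d\n", vcpu_is_preempted_model(0));
		printf("cpu1 preempted: %d\n", vcpu_is_preempted_model(1));
		return 0;
	}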
2016 Dec 05
0
[PATCH v8 3/6] powerpc: lib/locks.c: Add cpu yield/wake helper function
...nclude <asm/smp.h> +/* + * confer our slices to a specified cpu and return. If it is in running state + * or cpu is -1, then we will check confer. If confer is NULL, we will return + * otherwise we confer our slices to lpar. + */ +void __spin_yield_cpu(int cpu, int confer) +{ + unsigned int holder_cpu = cpu, yield_count; + + if (cpu == -1) + goto yield_to_lpar; + + BUG_ON(holder_cpu >= nr_cpu_ids); + yield_count = be32_to_cpu(lppaca_of(holder_cpu).yield_count); + + /* if cpu is running, confer slices to lpar conditionally*/ + if ((yield_count & 1) == 0) + goto yield_to_lpar; + + plpar_h...
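A self-contained sketch (not kernel code) of the control flow in the v8 helper above: confer slices to a specific preempted vcpu, otherwise optionally confer to the lpar as a whole. confer_to_vcpu() and confer_to_lpar() are hypothetical stand-ins for the hcalls the real code issues via plpar_hcall_norets() (truncated in the excerpt):

	#include <assert.h>
	#include <stdio.h>

	#define NR_CPU_IDS 2048	/* placeholder for the kernel's nr_cpu_ids */

	/* Hypothetical stand-ins for the PAPR hcalls. */
	static void confer_to_vcpu(int cpu) { printf("confer -> vcpu %d\n", cpu); }
	static void confer_to_lpar(void)    { printf("confer -> lpar\n"); }

	/* Stand-in for be32_to_cpu(lppaca_of(cpu).yield_count). */
	static unsigned int yield_count_of(int cpu)
	{
		static unsigned int counts[NR_CPU_IDS] = { [1] = 5 }; /* cpu 1 preempted */
		return counts[cpu];
	}

	/* Same shape as the v8 __spin_yield_cpu(). */
	static void spin_yield_cpu_sketch(int cpu, int confer)
	{
		unsigned int yield_count;

		if (cpu == -1)
			goto yield_to_lpar;

		assert(cpu >= 0 && cpu < NR_CPU_IDS);	/* BUG_ON() in the kernel */
		yield_count = yield_count_of(cpu);
		if ((yield_count & 1) == 0)		/* even: target is running */
			goto yield_to_lpar;

		confer_to_vcpu(cpu);			/* odd: target is preempted */
		return;

	yield_to_lpar:
		if (confer)
			confer_to_lpar();
	}

	int main(void)
	{
		spin_yield_cpu_sketch(1, 0);	/* preempted target: confer to vcpu 1 */
		spin_yield_cpu_sketch(0, 1);	/* running target: confer to the lpar */
		spin_yield_cpu_sketch(-1, 1);	/* no target: confer to the lpar */
		return 0;
	}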
2016 May 17
0
[PATCH v2 3/6] powerpc: lib/locks.c: cpu yield/wake helper function
...R 0 #endif diff --git a/arch/powerpc/lib/locks.c b/arch/powerpc/lib/locks.c index a9ebd71..4109679a 100644 --- a/arch/powerpc/lib/locks.c +++ b/arch/powerpc/lib/locks.c @@ -23,6 +23,38 @@ #include <asm/hvcall.h> #include <asm/smp.h> +void __spin_yield_cpu(int cpu) +{ + unsigned int holder_cpu = cpu, yield_count; + + if (cpu == -1) { + plpar_hcall_norets(H_CEDE); + return; + } + BUG_ON(holder_cpu >= nr_cpu_ids); + yield_count = be32_to_cpu(lppaca_of(holder_cpu).yield_count); + if ((yield_count & 1) == 0) + return; /* virtual cpu is currently running */ + rmb(); + plpar_hcall_n...
2016 Apr 28
0
[PATCH] powerpc: enable qspinlock and its virtualization support
...kick; + } +} diff --git a/arch/powerpc/lib/locks.c b/arch/powerpc/lib/locks.c index f7deebd..6e9d3bb 100644 --- a/arch/powerpc/lib/locks.c +++ b/arch/powerpc/lib/locks.c @@ -23,6 +23,35 @@ #include <asm/hvcall.h> #include <asm/smp.h> +void __spin_yield_cpu(int cpu) +{ + unsigned int holder_cpu = cpu, yield_count; + + if (cpu == -1) { + plpar_hcall_norets(H_CEDE); + return; + } + BUG_ON(holder_cpu >= nr_cpu_ids); + yield_count = be32_to_cpu(lppaca_of(holder_cpu).yield_count); + if ((yield_count & 1) == 0) + return; /* virtual cpu is currently running */ + rmb(); + plpar_hcall_n...
2016 Dec 05
9
[PATCH v8 0/6] Implement qspinlock/pv-qspinlock on ppc
Hi all, this is the fairlock patchset. You can apply the patches and build successfully; they are based on linux-next. qspinlock can avoid the waiter-starvation issue. It has about the same speed single-threaded, and it can be much faster in high-contention situations, especially when the spinlock is embedded within the data structure to be protected. v7 -> v8: add one patch to drop a function call
2016 Apr 28
2
[PATCH resend] powerpc: enable qspinlock and its virtualization support
...kick; + } +} diff --git a/arch/powerpc/lib/locks.c b/arch/powerpc/lib/locks.c index f7deebd..6e9d3bb 100644 --- a/arch/powerpc/lib/locks.c +++ b/arch/powerpc/lib/locks.c @@ -23,6 +23,35 @@ #include <asm/hvcall.h> #include <asm/smp.h> +void __spin_yield_cpu(int cpu) +{ + unsigned int holder_cpu = cpu, yield_count; + + if (cpu == -1) { + plpar_hcall_norets(H_CEDE); + return; + } + BUG_ON(holder_cpu >= nr_cpu_ids); + yield_count = be32_to_cpu(lppaca_of(holder_cpu).yield_count); + if ((yield_count & 1) == 0) + return; /* virtual cpu is currently running */ + rmb(); + plpar_hcall_n...
2016 May 17
6
[PATCH v3 0/6] powerpc use pv-qspinlock instead of spinlock
change from v1: separate into 6 patches from one patch; some minor code changes. benchmark test results are below. ran 3 tests on pseries IBM,8408-E8E with 32 cpus, 64GB memory: perf bench futex hash; perf bench futex lock-pi; perf record -advRT || perf bench sched messaging -g 1000 || perf report. summary: _____test________________spinlock______________pv-qspinlock_____ |futex hash | 556370 ops |
2020 Jul 02
12
[PATCH 0/8] powerpc: queued spinlocks and rwlocks
This series adds an option to use queued spinlocks for powerpc, and makes it the default for the Book3S-64 subarch. This effort starts with the generic code so it's very simple but still very performant. There are optimisations that can be made to slowpaths, but I think it's better to attack those incrementally if/when we find things, and try to add the improvements to generic code as
2016 May 25
10
[PATCH v3 0/6] powerpc use pv-qspinlock as the default spinlock implementation
change from v2: __spin_yield_cpu() will yield slices to the lpar if the target cpu is running. remove unnecessary rmb() in __spin_yield/wake_cpu. __pv_wait() will check that *ptr == val. some commit message changes. change from v1: separate into 6 patches from one patch; some minor code changes. I ran several tests on pseries IBM,8408-E8E with 32 cpus, 64GB memory. benchmark test results are below. 2
2016 Jun 02
8
[PATCH v5 0/6] powerpc/pSeries use pv-qspinlock as the default spinlock implementation
From: root <root at ltcalpine2-lp13.aus.stglabs.ibm.com> change from v4: BUG FIX. thanks to Boqun for reporting this issue. struct __qspinlock has a different layout on big-endian machines. native_queued_spin_unlock() may write the value to a wrong address; now fixed. change from v3: a big change in [PATCH v4 4/6] pv-qspinlock: powerpc support pv-qspinlock; no other patch changed. and the patch
2016 Jun 02
9
[PATCH v5 0/6] powerpc/pSeries use pv-qspinlock as the default spinlock implementation
change from v4: BUG FIX. thanks to Boqun for reporting this issue. struct __qspinlock has a different layout on big-endian machines. native_queued_spin_unlock() may write the value to a wrong address; now fixed. sorry for not even doing a test on a big-endian machine before!!! change from v3: a big change in [PATCH v4 4/6] pv-qspinlock: powerpc support pv-qspinlock; no other patch changed. and the patch