search for: h_confer

Displaying 16 results from an estimated 32 matches for "h_confer".

2020 Jul 02
0
[PATCH 2/8] powerpc/pseries: use smp_rmb() in H_CONFER spin yield
...in_yield(arch_spinlock_t *lock)
 	yield_count = be32_to_cpu(lppaca_of(holder_cpu).yield_count);
 	if ((yield_count & 1) == 0)
 		return;		/* virtual cpu is currently running */
-	rmb();
+	smp_rmb();
 	if (lock->slock != lock_value)
 		return;		/* something has changed */
 	plpar_hcall_norets(H_CONFER,
@@ -56,7 +56,7 @@ void splpar_rw_yield(arch_rwlock_t *rw)
 	yield_count = be32_to_cpu(lppaca_of(holder_cpu).yield_count);
 	if ((yield_count & 1) == 0)
 		return;		/* virtual cpu is currently running */
-	rmb();
+	smp_rmb();
 	if (rw->lock != lock_value)
 		return;		/* something has changed...
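For context, a sketch of the function this hunk touches as the patch leaves it. The snippet truncates the start, so the derivation of lock_value and holder_cpu below is an assumption about the surrounding code, not verbatim source:

	/*
	 * Sketch of splpar_spin_yield() after this patch. The barrier only
	 * orders the yield_count load against the re-load of the lock word;
	 * both are plain cacheable loads, so smp_rmb() (lwsync on powerpc)
	 * suffices and the heavier rmb() (sync) was overkill.
	 */
	void splpar_spin_yield(arch_spinlock_t *lock)
	{
		unsigned int lock_value, holder_cpu, yield_count;

		lock_value = lock->slock;
		if (lock_value == 0)
			return;			/* lock was just released */
		holder_cpu = lock_value & 0xffff; /* assumed holder encoding */
		yield_count = be32_to_cpu(lppaca_of(holder_cpu).yield_count);
		if ((yield_count & 1) == 0)
			return;		/* virtual cpu is currently running */
		smp_rmb();
		if (lock->slock != lock_value)
			return;		/* something has changed */
		plpar_hcall_norets(H_CONFER,
			get_hard_smp_processor_id(holder_cpu), yield_count);
	}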
2020 Jul 02
1
[PATCH 2/8] powerpc/pseries: use smp_rmb() in H_CONFER spin yield
On Thu, Jul 02, 2020 at 05:48:33PM +1000, Nicholas Piggin wrote:
> There is no need for rmb(); this allows the faster lwsync here.

Since you determined this, I'm thinking you actually understand the ordering here. How about recording this understanding in a comment? Also, should the lock->slock load not use READ_ONCE()?
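Applied to the hunk above, the two suggestions might look like this (a hypothetical follow-up sketch, not a merged change):

	/*
	 * Hypothetical sketch of the suggested change: the smp_rmb() keeps
	 * the yield_count load above from being reordered after the lock-word
	 * re-load below, so the count we pass to H_CONFER was observed while
	 * the same holder still held the lock. READ_ONCE() makes the racy
	 * re-load of the lock word explicit.
	 */
	smp_rmb();
	if (READ_ONCE(lock->slock) != lock_value)
		return;		/* something has changed */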
2020 Jul 05
1
[PATCH v2 5/6] powerpc/pseries: implement paravirt qspinlocks for SPLPAR
.../arch/powerpc/include/asm/paravirt.h
> index 7a8546660a63..f2d51f929cf5 100644
> --- a/arch/powerpc/include/asm/paravirt.h
> +++ b/arch/powerpc/include/asm/paravirt.h
> @@ -29,6 +29,16 @@ static inline void yield_to_preempted(int cpu, u32 yield_count)
> {
> 	plpar_hcall_norets(H_CONFER, get_hard_smp_processor_id(cpu), yield_count);
> }
> +
> +static inline void prod_cpu(int cpu)
> +{
> +	plpar_hcall_norets(H_PROD, get_hard_smp_processor_id(cpu));
> +}
> +
> +static inline void yield_to_any(void)
> +{
> +	plpar_hcall_norets(H_CONFER, -1, 0);
> +}...
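These two helpers are the wait/kick primitives the qspinlock patch builds on: H_CONFER with cpu -1 donates the caller's time slice to any preempted vcpu in the partition, and H_PROD makes a specific vcpu runnable again. A hedged usage sketch follows; the function names and polling structure are illustrative, not the patch's actual pv_wait/pv_kick wiring:

	/* Illustrative only: a waiter parks by conferring its slice,
	 * and the releaser wakes it by prodding. */
	static void pv_wait_sketch(u8 *ptr, u8 val)
	{
		if (READ_ONCE(*ptr) != val)
			return;		/* state changed, don't sleep */
		yield_to_any();		/* H_CONFER(-1, 0) */
	}

	static void pv_kick_sketch(int cpu)
	{
		prod_cpu(cpu);		/* H_PROD(hard cpu id) */
	}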
2020 Jul 06
0
[PATCH v3 2/6] powerpc/pseries: move some PAPR paravirt functions to their own file
...ssor);
+}
+
+/* If bit 0 is set, the cpu has been preempted */
+static inline u32 yield_count_of(int cpu)
+{
+	__be32 yield_count = READ_ONCE(lppaca_of(cpu).yield_count);
+	return be32_to_cpu(yield_count);
+}
+
+static inline void yield_to_preempted(int cpu, u32 yield_count)
+{
+	plpar_hcall_norets(H_CONFER, get_hard_smp_processor_id(cpu), yield_count);
+}
+#else
+static inline bool is_shared_processor(void)
+{
+	return false;
+}
+
+static inline u32 yield_count_of(int cpu)
+{
+	return 0;
+}
+
+extern void ___bad_yield_to_preempted(void);
+static inline void yield_to_preempted(int cpu, u32 yield_count...
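The yield count is the whole protocol here: per the comment above, bit 0 is set while the vcpu is preempted, and the full value acts as a sequence number that lets H_CONFER fail harmlessly if the target has since been dispatched. A small sketch of the intended calling pattern (the function name is illustrative):

	static void yield_to_lock_owner_sketch(int owner_cpu)
	{
		u32 yield_count = yield_count_of(owner_cpu);

		if ((yield_count & 1) == 0)
			return;	/* owner vcpu is running; keep spinning */

		/*
		 * Owner is preempted: donate our slice. Passing the count
		 * we observed makes the hypercall a no-op if the owner has
		 * been dispatched again in the meantime.
		 */
		yield_to_preempted(owner_cpu, yield_count);
	}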
2016 Dec 06
1
[PATCH v8 3/6] powerpc: lib/locks.c: Add cpu yield/wake helper function
...lpar;
> +
> +	BUG_ON(holder_cpu >= nr_cpu_ids);
> +	yield_count = be32_to_cpu(lppaca_of(holder_cpu).yield_count);
> +
> +	/* if cpu is running, confer slices to lpar conditionally */
> +	if ((yield_count & 1) == 0)
> +		goto yield_to_lpar;
> +
> +	plpar_hcall_norets(H_CONFER,
> +		get_hard_smp_processor_id(holder_cpu), yield_count);
> +	return;
> +
> +yield_to_lpar:
> +	if (confer)
> +		plpar_hcall_norets(H_CONFER, -1, 0);
> +}
> +EXPORT_SYMBOL_GPL(__spin_yield_cpu);
> +
> +void __spin_wake_cpu(int cpu)
> +{
> +	unsigned int holder_c...
2016 Dec 05
0
[PATCH v8 3/6] powerpc: lib/locks.c: Add cpu yield/wake helper function
...d_count;
+
+	if (cpu == -1)
+		goto yield_to_lpar;
+
+	BUG_ON(holder_cpu >= nr_cpu_ids);
+	yield_count = be32_to_cpu(lppaca_of(holder_cpu).yield_count);
+
+	/* if cpu is running, confer slices to lpar conditionally */
+	if ((yield_count & 1) == 0)
+		goto yield_to_lpar;
+
+	plpar_hcall_norets(H_CONFER,
+		get_hard_smp_processor_id(holder_cpu), yield_count);
+	return;
+
+yield_to_lpar:
+	if (confer)
+		plpar_hcall_norets(H_CONFER, -1, 0);
+}
+EXPORT_SYMBOL_GPL(__spin_yield_cpu);
+
+void __spin_wake_cpu(int cpu)
+{
+	unsigned int holder_cpu = cpu;
+
+	BUG_ON(holder_cpu >= nr_cpu_ids);
+	/*
+	 *...
2020 Jul 02
12
[PATCH 0/8] powerpc: queued spinlocks and rwlocks
...ess of getting numbers and testing, but the implementation turned out to be surprisingly simple and we have a config option, so I think we could merge it fairly soon. Thanks, Nick

Nicholas Piggin (8):
  powerpc/powernv: must include hvcall.h to get PAPR defines
  powerpc/pseries: use smp_rmb() in H_CONFER spin yield
  powerpc/pseries: move some PAPR paravirt functions to their own file
  powerpc: move spinlock implementation to simple_spinlock
  powerpc/64s: implement queued spinlocks and rwlocks
  powerpc/pseries: implement paravirt qspinlocks for SPLPAR
  powerpc/qspinlock: optimised atomic_try_cm...
2020 Jul 02
0
[PATCH 6/8] powerpc/pseries: implement paravirt qspinlocks for SPLPAR
...powerpc/include/asm/paravirt.h b/arch/powerpc/include/asm/paravirt.h
index 7a8546660a63..5fae9dfa6fe9 100644
--- a/arch/powerpc/include/asm/paravirt.h
+++ b/arch/powerpc/include/asm/paravirt.h
@@ -29,6 +29,16 @@ static inline void yield_to_preempted(int cpu, u32 yield_count)
 {
 	plpar_hcall_norets(H_CONFER, get_hard_smp_processor_id(cpu), yield_count);
 }
+
+static inline void prod_cpu(int cpu)
+{
+	plpar_hcall_norets(H_PROD, get_hard_smp_processor_id(cpu));
+}
+
+static inline void yield_to_any(void)
+{
+	plpar_hcall_norets(H_CONFER, -1, 0);
+}
 #else
 static inline bool is_shared_processor(void)
 {...
2020 Jul 03
0
[PATCH v2 5/6] powerpc/pseries: implement paravirt qspinlocks for SPLPAR
...powerpc/include/asm/paravirt.h b/arch/powerpc/include/asm/paravirt.h
index 7a8546660a63..f2d51f929cf5 100644
--- a/arch/powerpc/include/asm/paravirt.h
+++ b/arch/powerpc/include/asm/paravirt.h
@@ -29,6 +29,16 @@ static inline void yield_to_preempted(int cpu, u32 yield_count)
 {
 	plpar_hcall_norets(H_CONFER, get_hard_smp_processor_id(cpu), yield_count);
 }
+
+static inline void prod_cpu(int cpu)
+{
+	plpar_hcall_norets(H_PROD, get_hard_smp_processor_id(cpu));
+}
+
+static inline void yield_to_any(void)
+{
+	plpar_hcall_norets(H_CONFER, -1, 0);
+}
 #else
 static inline bool is_shared_processor(void)
 {...
2020 Jul 03
7
[PATCH v2 0/6] powerpc: queued spinlocks and rwlocks
v2 is updated to account for feedback from Will, Peter, and Waiman (thank you), and trims off a couple of RFC and unrelated patches. Thanks, Nick

Nicholas Piggin (6):
  powerpc/powernv: must include hvcall.h to get PAPR defines
  powerpc/pseries: move some PAPR paravirt functions to their own file
  powerpc: move spinlock implementation to simple_spinlock
  powerpc/64s: implement queued
2020 Jul 06
0
[PATCH v3 5/6] powerpc/pseries: implement paravirt qspinlocks for SPLPAR
...powerpc/include/asm/paravirt.h b/arch/powerpc/include/asm/paravirt.h
index 7a8546660a63..f2d51f929cf5 100644
--- a/arch/powerpc/include/asm/paravirt.h
+++ b/arch/powerpc/include/asm/paravirt.h
@@ -29,6 +29,16 @@ static inline void yield_to_preempted(int cpu, u32 yield_count)
 {
 	plpar_hcall_norets(H_CONFER, get_hard_smp_processor_id(cpu), yield_count);
 }
+
+static inline void prod_cpu(int cpu)
+{
+	plpar_hcall_norets(H_PROD, get_hard_smp_processor_id(cpu));
+}
+
+static inline void yield_to_any(void)
+{
+	plpar_hcall_norets(H_CONFER, -1, 0);
+}
 #else
 static inline bool is_shared_processor(void)
 {...
2020 Jul 24
8
[PATCH v4 0/6] powerpc: queued spinlocks and rwlocks
Updated with everybody's feedback (thanks all), and more performance results. What I've found is I might have been measuring the worst load point for the paravirt case, and by looking at a range of loads it's clear that queued spinlocks are overall better even on PV, doubly so when you look at the generally much improved worst case latencies. I have defaulted it to N even though
2016 May 25
10
[PATCH v3 0/6] powerpc use pv-qspinlock as the default spinlock implementation
change from v2: __spin_yield_cpu() will yield slices to lpar if the target cpu is running. remove unnecessary rmb() in __spin_yield/wake_cpu. __pv_wait() will check that *ptr == val. some commit message changes. change from v1: separate into 6 patches from one patch; some minor code changes. I ran several tests on a pseries IBM,8408-E8E with 32 cpus, 64GB memory. benchmark test results are below. 2
2016 May 17
0
[PATCH v2 3/6] powerpc: lib/locks.c: cpu yield/wake helper function
..., yield_count;
+
+	if (cpu == -1) {
+		plpar_hcall_norets(H_CEDE);
+		return;
+	}
+	BUG_ON(holder_cpu >= nr_cpu_ids);
+	yield_count = be32_to_cpu(lppaca_of(holder_cpu).yield_count);
+	if ((yield_count & 1) == 0)
+		return;		/* virtual cpu is currently running */
+	rmb();
+	plpar_hcall_norets(H_CONFER,
+		get_hard_smp_processor_id(holder_cpu), yield_count);
+}
+EXPORT_SYMBOL_GPL(__spin_yield_cpu);
+
+void __spin_wake_cpu(int cpu)
+{
+	unsigned int holder_cpu = cpu, yield_count;
+
+	BUG_ON(holder_cpu >= nr_cpu_ids);
+	yield_count = be32_to_cpu(lppaca_of(holder_cpu).yield_count);
+	if ((yield_c...
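Note the cpu == -1 path cedes the processor outright via H_CEDE rather than conferring to a specific vcpu. The snippet truncates __spin_wake_cpu(); judging from the visible lines and the later H_PROD helper, its body is plausibly along these lines (a guess, not the v2 source):

	/* Guess at the truncated wake side, hedged: check the target's
	 * yield count and prod it only if it is actually preempted. */
	void __spin_wake_cpu(int cpu)
	{
		unsigned int holder_cpu = cpu, yield_count;

		BUG_ON(holder_cpu >= nr_cpu_ids);
		yield_count = be32_to_cpu(lppaca_of(holder_cpu).yield_count);
		if ((yield_count & 1) == 0)
			return;	/* target vcpu is running; nothing to wake */
		plpar_hcall_norets(H_PROD,
			get_hard_smp_processor_id(holder_cpu));
	}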
2016 Jun 02
9
[PATCH v5 0/6] powerPC/pSeries use pv-qspinlock as the default spinlock implementation
change from v4: BUG FIX. thanks to boqun for reporting this issue. struct __qspinlock has a different layout on big-endian machines; native_queued_spin_unlock() may write the value to a wrong address. now fixed. sorry for not even doing a test on a big-endian machine before!!! change from v3: a big change in [PATCH v4 4/6] pv-qspinlock: powerpc support pv-qspinlock. no other patch changed. and the patch
2016 Jun 02
8
[PATCH v5 0/6] powerPC/pSeries use pv-qspinlock as the default spinlock implementation
From: root <root at ltcalpine2-lp13.aus.stglabs.ibm.com> change from v4: BUG FIX. thanks to boqun for reporting this issue. struct __qspinlock has a different layout on big-endian machines; native_queued_spin_unlock() may write the value to a wrong address. now fixed. change from v3: a big change in [PATCH v4 4/6] pv-qspinlock: powerpc support pv-qspinlock. no other patch changed. and the patch