search for: yield_to_any

Displaying 10 results from an estimated 12 matches for "yield_to_any".

2020 Jul 09
4
[PATCH v3 5/6] powerpc/pseries: implement paravirt qspinlocks for SPLPAR
...100644
> --- a/arch/powerpc/include/asm/paravirt.h
> +++ b/arch/powerpc/include/asm/paravirt.h
> @@ -45,6 +55,19 @@ static inline void yield_to_preempted(int cpu, u32 yield_count)
> {
> 	___bad_yield_to_preempted(); /* This would be a bug */
> }
> +
> +extern void ___bad_yield_to_any(void);
> +static inline void yield_to_any(void)
> +{
> +	___bad_yield_to_any(); /* This would be a bug */
> +}

Why do we do that rather than just not defining yield_to_any() at all and letting the build fail on that? There's a condition somewhere that we know will be false at compile...
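The idiom being questioned here is a link-time assertion: ___bad_yield_to_any() is declared but deliberately never defined, so the header always compiles, and a stray call only fails at link time if it survives dead-code elimination. A minimal standalone sketch of the pattern follows; every name in it is illustrative, not taken from the patch.

    /* link_assert.c - sketch of the "undefined extern" build-bug idiom.
     * ___this_is_a_bug() has no definition anywhere, so the program only
     * links if every call to it sits on a branch the compiler can prove
     * dead and discard. Build with: gcc -O2 link_assert.c
     */
    extern void ___this_is_a_bug(void);     /* intentionally undefined */

    #define IS_SHARED_PROCESSOR 0           /* stand-in compile-time constant */

    static inline void yield_sketch(void)
    {
            if (IS_SHARED_PROCESSOR)
                    ___this_is_a_bug();     /* folded away when the guard is 0 */
    }

    int main(void)
    {
            yield_sketch();                 /* links: the dead call was removed */
            return 0;
    }

Flip the constant to 1 and the link fails with an undefined reference to ___this_is_a_bug, which is exactly the "This would be a bug" case the patch comments call out: the helper must only be reachable on paths guarded by a condition the compiler can prove false in that configuration.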
2020 Jul 05
1
[PATCH v2 5/6] powerpc/pseries: implement paravirt qspinlocks for SPLPAR
...eempted(int cpu, u32 yield_count)
> {
> 	plpar_hcall_norets(H_CONFER, get_hard_smp_processor_id(cpu), yield_count);
> }
> +
> +static inline void prod_cpu(int cpu)
> +{
> +	plpar_hcall_norets(H_PROD, get_hard_smp_processor_id(cpu));
> +}
> +
> +static inline void yield_to_any(void)
> +{
> +	plpar_hcall_norets(H_CONFER, -1, 0);
> +}
> #else
> static inline bool is_shared_processor(void)
> {
> @@ -45,6 +55,19 @@ static inline void yield_to_preempted(int cpu, u32 yield_count)
> {
> 	___bad_yield_to_preempted(); /* This would be a bug */...
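For context, the two helpers added here are thin wrappers over PAPR hypercalls: H_CONFER with a CPU of -1 donates the caller's remaining time slice to any preempted vCPU in the partition, while H_PROD wakes one specific vCPU. Below is a hedged sketch of how a paravirt spinlock wait/kick pair might use them; the loop and the SPIN_LIMIT budget are illustrative assumptions, and the real pv-qspinlock slowpath is considerably more involved.

    /* Illustrative only: a simplified wait/kick pair built on the new
     * helpers. SPIN_LIMIT is a hypothetical spin budget, not a kernel
     * constant.
     */
    #define SPIN_LIMIT      (1 << 15)

    static void pv_wait_sketch(u8 *ptr, u8 val)
    {
            int loops = SPIN_LIMIT;

            /* Spin briefly in the hope the lock holder is still running. */
            while (READ_ONCE(*ptr) == val && --loops)
                    cpu_relax();

            /* Still blocked on a shared processor: cede our time slice to
             * whichever preempted vCPU the hypervisor picks, which may be
             * the lock holder. */
            if (is_shared_processor() && READ_ONCE(*ptr) == val)
                    yield_to_any();         /* H_CONFER(-1, 0) */
    }

    static void pv_kick_sketch(int cpu)
    {
            if (is_shared_processor())
                    prod_cpu(cpu);          /* H_PROD: wake the chosen waiter */
    }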
2020 Jul 02
0
[PATCH 6/8] powerpc/pseries: implement paravirt qspinlocks for SPLPAR
...t.h
@@ -29,6 +29,16 @@ static inline void yield_to_preempted(int cpu, u32 yield_count)
 {
 	plpar_hcall_norets(H_CONFER, get_hard_smp_processor_id(cpu), yield_count);
 }
+
+static inline void prod_cpu(int cpu)
+{
+	plpar_hcall_norets(H_PROD, get_hard_smp_processor_id(cpu));
+}
+
+static inline void yield_to_any(void)
+{
+	plpar_hcall_norets(H_CONFER, -1, 0);
+}
 #else
 static inline bool is_shared_processor(void)
 {
@@ -45,6 +55,19 @@ static inline void yield_to_preempted(int cpu, u32 yield_count)
 {
 	___bad_yield_to_preempted(); /* This would be a bug */
 }
+
+extern void ___bad_yield_to_any(void);
+sta...
2020 Jul 03
0
[PATCH v2 5/6] powerpc/pseries: implement paravirt qspinlocks for SPLPAR
...t.h
@@ -29,6 +29,16 @@ static inline void yield_to_preempted(int cpu, u32 yield_count)
 {
 	plpar_hcall_norets(H_CONFER, get_hard_smp_processor_id(cpu), yield_count);
 }
+
+static inline void prod_cpu(int cpu)
+{
+	plpar_hcall_norets(H_PROD, get_hard_smp_processor_id(cpu));
+}
+
+static inline void yield_to_any(void)
+{
+	plpar_hcall_norets(H_CONFER, -1, 0);
+}
 #else
 static inline bool is_shared_processor(void)
 {
@@ -45,6 +55,19 @@ static inline void yield_to_preempted(int cpu, u32 yield_count)
 {
 	___bad_yield_to_preempted(); /* This would be a bug */
 }
+
+extern void ___bad_yield_to_any(void);
+sta...
2020 Jul 06
0
[PATCH v3 5/6] powerpc/pseries: implement paravirt qspinlocks for SPLPAR
...t.h
@@ -29,6 +29,16 @@ static inline void yield_to_preempted(int cpu, u32 yield_count)
 {
 	plpar_hcall_norets(H_CONFER, get_hard_smp_processor_id(cpu), yield_count);
 }
+
+static inline void prod_cpu(int cpu)
+{
+	plpar_hcall_norets(H_PROD, get_hard_smp_processor_id(cpu));
+}
+
+static inline void yield_to_any(void)
+{
+	plpar_hcall_norets(H_CONFER, -1, 0);
+}
 #else
 static inline bool is_shared_processor(void)
 {
@@ -45,6 +55,19 @@ static inline void yield_to_preempted(int cpu, u32 yield_count)
 {
 	___bad_yield_to_preempted(); /* This would be a bug */
 }
+
+extern void ___bad_yield_to_any(void);
+sta...
2020 Jul 09
0
[PATCH v3 5/6] powerpc/pseries: implement paravirt qspinlocks for SPLPAR
.../include/asm/paravirt.h
>> +++ b/arch/powerpc/include/asm/paravirt.h
>> @@ -45,6 +55,19 @@ static inline void yield_to_preempted(int cpu, u32 yield_count)
>> {
>> 	___bad_yield_to_preempted(); /* This would be a bug */
>> }
>> +
>> +extern void ___bad_yield_to_any(void);
>> +static inline void yield_to_any(void)
>> +{
>> +	___bad_yield_to_any(); /* This would be a bug */
>> +}
> Why do we do that rather than just not defining yield_to_any() at all
> and letting the build fail on that?
>
> There's a condition somewhere...
2020 Jul 03
7
[PATCH v2 0/6] powerpc: queued spinlocks and rwlocks
v2 is updated to account for feedback from Will, Peter, and Waiman (thank you), and trims off a couple of RFC and unrelated patches. Thanks, Nick Nicholas Piggin (6): powerpc/powernv: must include hvcall.h to get PAPR defines powerpc/pseries: move some PAPR paravirt functions to their own file powerpc: move spinlock implementation to simple_spinlock powerpc/64s: implement queued
2020 Jul 02
12
[PATCH 0/8] powerpc: queued spinlocks and rwlocks
This series adds an option to use queued spinlocks for powerpc, and makes it the default for the Book3S-64 subarch. This effort starts with the generic code so it's very simple but still very performant. There are optimisations that can be made to slowpaths, but I think it's better to attack those incrementally if/when we find things, and try to add the improvements to generic code as
2020 Jul 06
13
[PATCH v3 0/6] powerpc: queued spinlocks and rwlocks
v3 is updated to use __pv_queued_spin_unlock, noticed by Waiman (thank you). Thanks, Nick Nicholas Piggin (6): powerpc/powernv: must include hvcall.h to get PAPR defines powerpc/pseries: move some PAPR paravirt functions to their own file powerpc: move spinlock implementation to simple_spinlock powerpc/64s: implement queued spinlocks and rwlocks powerpc/pseries: implement paravirt
2020 Jul 24
8
[PATCH v4 0/6] powerpc: queued spinlocks and rwlocks
Updated with everybody's feedback (thanks all), and more performance results. What I've found is I might have been measuring the worst load point for the paravirt case, and by looking at a range of loads it's clear that queued spinlocks are overall better even on PV, doubly so when you look at the generally much improved worst case latencies. I have defaulted it to N even though