search for: __this_cpu_dec

Displaying 9 results from an estimated 16 matches for "__this_cpu_dec".

2020 Jul 21
2
[PATCH v3 0/6] powerpc: queued spinlocks and rwlocks
...32 val;
+	int idx;
+
+	BUILD_BUG_ON(CONFIG_NR_CPUS >= (1U << _Q_TAIL_CPU_BITS));
+
 	node = this_cpu_ptr(&qnodes[0].mcs);
 	idx = node->count++;
 	tail = encode_tail(smp_processor_id(), idx);
@@ -559,7 +577,6 @@ void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
 	 */
 	__this_cpu_dec(qnodes[0].mcs.count);
 }
-EXPORT_SYMBOL(queued_spin_lock_slowpath);

 /*
  * Generate the paravirt code for queued_spin_unlock_slowpath().
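For context, a condensed sketch of the per-CPU MCS node bookkeeping this hunk touches, following kernel/locking/qspinlock.c (heavily trimmed; MAX_NODES, encode_tail() and struct mcs_spinlock come from that file and its headers):

struct qnode {
	struct mcs_spinlock mcs;
};

/* A few nodes per CPU, one per nesting context (task, softirq, hardirq, NMI). */
static DEFINE_PER_CPU_ALIGNED(struct qnode, qnodes[MAX_NODES]);

	/* In the slowpath, with preemption already disabled: */
	node = this_cpu_ptr(&qnodes[0].mcs);
	idx = node->count++;	/* claim the next nesting level on this CPU */
	tail = encode_tail(smp_processor_id(), idx);

	/* ... queue behind the current tail, spin, take the lock ... */

	/*
	 * Release the node. The cheap __this_cpu_dec() (no preemption
	 * checks) is correct here because the task cannot have migrated
	 * between the increment and this decrement.
	 */
	__this_cpu_dec(qnodes[0].mcs.count);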
2020 Nov 03
0
[patch V3 19/37] mm/highmem: Remove the old kmap_atomic cruft
...WARN_ON_ONCE(in_irq() && !irqs_disabled());
-	BUG_ON(idx >= KM_TYPE_NR);
-#endif
-	return idx;
-}
-
-static inline int kmap_atomic_idx(void)
-{
-	return __this_cpu_read(__kmap_atomic_idx) - 1;
-}
-static inline void kmap_atomic_idx_pop(void)
-{
-#ifdef CONFIG_DEBUG_HIGHMEM
-	int idx = __this_cpu_dec_return(__kmap_atomic_idx);
-
-	BUG_ON(idx < 0);
-#else
-	__this_cpu_dec(__kmap_atomic_idx);
-#endif
-}
-#endif
-#endif
+#endif /* CONFIG_HIGHMEM */

 /*
  * Prevent people trying to call kunmap_atomic() as if it were kunmap()
--- a/mm/highmem.c
+++ b/mm/highmem.c
@@ -31,12 +31,6 @@
 #include <...
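For reference, the old per-CPU index stack being deleted here, reassembled into readable form from the removed lines (the opening of kmap_atomic_idx_push() is cut off by the excerpt and is filled in from the pre-5.11 include/linux/highmem.h):

DECLARE_PER_CPU(int, __kmap_atomic_idx);

static inline int kmap_atomic_idx_push(void)
{
	/* Reserve the next slot on this CPU's fixmap index stack. */
	int idx = __this_cpu_inc_return(__kmap_atomic_idx) - 1;

#ifdef CONFIG_DEBUG_HIGHMEM
	WARN_ON_ONCE(in_irq() && !irqs_disabled());
	BUG_ON(idx >= KM_TYPE_NR);
#endif
	return idx;
}

static inline int kmap_atomic_idx(void)
{
	return __this_cpu_read(__kmap_atomic_idx) - 1;
}

static inline void kmap_atomic_idx_pop(void)
{
#ifdef CONFIG_DEBUG_HIGHMEM
	int idx = __this_cpu_dec_return(__kmap_atomic_idx);

	BUG_ON(idx < 0);
#else
	/* Non-debug build: just drop the top of the stack. */
	__this_cpu_dec(__kmap_atomic_idx);
#endif
}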
2020 Jul 21
0
[PATCH v3 0/6] powerpc: queued spinlocks and rwlocks
...IG_NR_CPUS >= (1U << _Q_TAIL_CPU_BITS));
> +
>  	node = this_cpu_ptr(&qnodes[0].mcs);
>  	idx = node->count++;
>  	tail = encode_tail(smp_processor_id(), idx);
> @@ -559,7 +577,6 @@ void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
>  	 */
>  	__this_cpu_dec(qnodes[0].mcs.count);
>  }
> -EXPORT_SYMBOL(queued_spin_lock_slowpath);
>
>  /*
>   * Generate the paravirt code for queued_spin_unlock_slowpath().
>

I would prefer to extract the pending bit handling code out into a
separate helper function which can be overridden by t...
2020 Jul 23
2
[PATCH v3 0/6] powerpc: queued spinlocks and rwlocks
...; _Q_TAIL_CPU_BITS));
>> +
>>  	node = this_cpu_ptr(&qnodes[0].mcs);
>>  	idx = node->count++;
>>  	tail = encode_tail(smp_processor_id(), idx);
>> @@ -559,7 +577,6 @@ void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
>>  	 */
>>  	__this_cpu_dec(qnodes[0].mcs.count);
>>  }
>> -EXPORT_SYMBOL(queued_spin_lock_slowpath);
>>
>>  /*
>>   * Generate the paravirt code for queued_spin_unlock_slowpath().
>>
> I would prefer to extract the pending bit handling code out into a
> separate helper fun...
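Sketched, the refactoring being asked for here (the helper name and its return convention are illustrative, not taken from the code that was eventually merged):

/*
 * Keep the pending-bit fast path in a helper the generic slowpath
 * calls, so an architecture can override it (e.g. by defining
 * queued_spin_lock_pending before pulling in the generic code)
 * without duplicating the MCS queueing logic.
 */
#ifndef queued_spin_lock_pending
static __always_inline bool queued_spin_lock_pending(struct qspinlock *lock)
{
	/*
	 * The generic pending-bit handling (set pending, spin until
	 * the owner releases, take the lock) would live here; it is
	 * elided in this sketch.
	 */
	return false;	/* placeholder: always fall back to queueing */
}
#endif

void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
{
	if (queued_spin_lock_pending(lock))
		return;	/* acquired via the pending fast path */

	/* ... MCS tail encoding and queueing, as in the hunks above ... */
}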
2020 Nov 03
0
[patch V3 06/37] highmem: Provide generic variant of kmap_atomic*
...tomic(pfn_to_page(pfn))
-
 #define kmap_flush_unused()	do {} while(0)
 #endif /* CONFIG_HIGHMEM */

+#if !defined(CONFIG_KMAP_LOCAL)
 #if defined(CONFIG_HIGHMEM) || defined(CONFIG_X86_32)
 DECLARE_PER_CPU(int, __kmap_atomic_idx);
@@ -196,22 +247,21 @@ static inline void kmap_atomic_idx_pop(v
 	__this_cpu_dec(__kmap_atomic_idx);
 #endif
 }
-
+#endif
 #endif

 /*
  * Prevent people trying to call kunmap_atomic() as if it were kunmap()
  * kunmap_atomic() should get the return value of kmap_atomic, not the page.
  */
-#define kunmap_atomic(addr)	\
-do {...
2020 Jul 07
6
[PATCH v3 0/6] powerpc: queued spinlocks and rwlocks
Excerpts from Waiman Long's message of July 7, 2020 4:39 am:
> On 7/6/20 12:35 AM, Nicholas Piggin wrote:
>> v3 is updated to use __pv_queued_spin_unlock, noticed by Waiman (thank you).
>>
>> Thanks,
>> Nick
>>
>> Nicholas Piggin (6):
>>   powerpc/powernv: must include hvcall.h to get PAPR defines
>>   powerpc/pseries: move some PAPR
2016 Dec 06
6
[PATCH v9 0/6] Implement qspinlock/pv-qspinlock on ppc
Hi all, this is the fairlock patchset. You can apply the patches and build successfully; they are based on linux-next. qspinlock avoids the waiter-starvation issue. It has about the same speed single-threaded, and it can be much faster in high-contention situations, especially when the spinlock is embedded within the data structure it protects. v8 -> v9: move the qspinlock config entry to
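The "embedded within the data structure" point is easy to picture with a sketch (generic kernel usage, not code from this patchset):

/*
 * The lock and the data it protects share a cache line. With a
 * simple test-and-set lock every waiter hammers that line; with
 * qspinlock each waiter spins on its own per-CPU MCS node, so the
 * hot line stays mostly with the current owner.
 */
struct counter {
	spinlock_t lock;	/* qspinlock-backed when CONFIG_QUEUED_SPINLOCKS=y */
	u64 value;		/* protected data, same cache line */
};

static void counter_inc(struct counter *c)
{
	spin_lock(&c->lock);
	c->value++;
	spin_unlock(&c->lock);
}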
2016 Dec 05
9
[PATCH v8 0/6] Implement qspinlock/pv-qspinlock on ppc
Hi all, this is the fairlock patchset. You can apply the patches and build successfully; they are based on linux-next. qspinlock avoids the waiter-starvation issue. It has about the same speed single-threaded, and it can be much faster in high-contention situations, especially when the spinlock is embedded within the data structure it protects. v7 -> v8: add one patch to drop a function call
2020 Nov 03
45
[patch V3 00/37] mm/highmem: Preemptible variant of kmap_atomic & friends
Following up on the discussion in:
https://lore.kernel.org/r/20200914204209.256266093@linutronix.de
and the second version of this:
https://lore.kernel.org/r/20201029221806.189523375@linutronix.de
this series provides a preemptible variant of kmap_atomic & related interfaces. This is achieved by:
- Removing the RT dependency from migrate_disable/enable()
- Consolidating all
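In caller terms, a before/after sketch of what the series changes (kmap_local_page()/kunmap_local() are the interfaces this line of work led to; the two copy helpers are hypothetical):

#include <linux/highmem.h>
#include <linux/string.h>

/* Hypothetical helper, old style: preemption (and pagefaults) stay
 * disabled for the whole copy. */
static void copy_from_page_atomic(void *dst, struct page *page, size_t len)
{
	void *src = kmap_atomic(page);

	memcpy(dst, src, len);
	kunmap_atomic(src);
}

/* Hypothetical helper, preemptible variant: the mapping is CPU-local
 * but survives preemption on the owning CPU. */
static void copy_from_page_local(void *dst, struct page *page, size_t len)
{
	void *src = kmap_local_page(page);

	memcpy(dst, src, len);
	kunmap_local(src);
}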