search for: arch_spin_is_lock

Displaying 20 results from an estimated 114 matches for "arch_spin_is_lock".

Did you mean: arch_spin_is_locked
2013 May 09
4
[PATCH] mini-os: eliminate duplicated definition of spin_unlock_wait
...changed, 2 insertions(+), 2 deletions(-) diff --git a/extras/mini-os/include/spinlock.h b/extras/mini-os/include/spinlock.h index 70cf20f..6604e3c 100644 --- a/extras/mini-os/include/spinlock.h +++ b/extras/mini-os/include/spinlock.h @@ -30,7 +30,7 @@ typedef struct { #define spin_is_locked(x) arch_spin_is_locked(x) -#define spin_unlock_wait(x) do { barrier(); } while(spin_is_locked(x)) +#define spin_unlock_wait(x) arch_spin_unlock_wait(x) #define _spin_trylock(lock) ({_raw_spin_trylock(lock) ? \ diff --git a/extras/mini-os/include/x86/arch_spinlock.h b/extras/mini-os/include/x86/arch_spinlock....
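For readers skimming this result: the hunk simply delegates the busy-wait to the architecture helper instead of open-coding a barrier loop. A minimal sketch of the two pieces involved, assuming a simple byte lock (the names and layout below are illustrative, not the actual mini-os definitions):

typedef struct { volatile unsigned char slock; } arch_spinlock_t;

/* Non-zero slock means "held" in this toy layout. */
#define arch_spin_is_locked(x)  ((x)->slock != 0)

/* Spin until the lock is observed free, without trying to take it. */
static inline void arch_spin_unlock_wait(arch_spinlock_t *lock)
{
    while (arch_spin_is_locked(lock))
        ;   /* a real port would insert cpu_relax()/pause here */
}

/* The generic macro then just forwards to the arch helper: */
#define spin_unlock_wait(x) arch_spin_unlock_wait(x)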
2016 Jun 03
2
[PATCH v5 1/6] qspinlock: powerpc support qspinlock
...arrier() > -#define __rw_yield(x) barrier() > -#define SHARED_PROCESSOR 0 > -#endif > - > static inline void arch_spin_lock(arch_spinlock_t *lock) > { > CLEAR_IO_SYNC; > @@ -169,6 +171,7 @@ extern void arch_spin_unlock_wait(arch_spinlock_t > *lock); > do { while (arch_spin_is_locked(lock)) cpu_relax(); } while > (0) > #endif > > +#endif /* !CONFIG_QUEUED_SPINLOCKS */ > /* > * Read-write spinlocks, allowing multiple readers > * but only one writer. > diff --git a/arch/powerpc/include/asm/spinlock_types.h > b/arch/powerpc/include/asm/spinlock...
2015 Feb 13
3
[PATCH V4] x86 spinlock: Fix memory corruption on completing completions
...lowpath(lock, prev); + if (unlikely(head & TICKET_SLOWPATH_FLAG)) { + head &= ~TICKET_SLOWPATH_FLAG; + __ticket_unlock_kick(lock, (head + TICKET_LOCK_INC)); + } } else __add(&lock->tickets.head, TICKET_LOCK_INC, UNLOCK_LOCK_PREFIX); } @@ -164,7 +161,7 @@ static inline int arch_spin_is_locked(arch_spinlock_t *lock) { struct __raw_tickets tmp = READ_ONCE(lock->tickets); - return tmp.tail != tmp.head; + return tmp.tail != (tmp.head & ~TICKET_SLOWPATH_FLAG); } static inline int arch_spin_is_contended(arch_spinlock_t *lock) @@ -183,16 +180,16 @@ static __always_inline void...
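The functional part of the hunk above is the masking in arch_spin_is_locked(): with paravirt ticketlocks, bit 0 of the head ticket is borrowed as TICKET_SLOWPATH_FLAG, so comparing the raw head against tail can report a free lock as held. A small stand-alone model of that off-by-flag problem (user-space C, not the kernel header; the constants and struct layout are simplified):

#include <stdio.h>

#define TICKET_SLOWPATH_FLAG 0x1   /* bit 0 of head, only set under paravirt */

struct raw_tickets { unsigned short head, tail; };

static int is_locked_old(struct raw_tickets t)
{
    return t.tail != t.head;                            /* flag leaks into the compare */
}

static int is_locked_fixed(struct raw_tickets t)
{
    return t.tail != (t.head & ~TICKET_SLOWPATH_FLAG);  /* mask the flag first */
}

int main(void)
{
    /* Lock is actually free (head == tail), but the slowpath flag is set. */
    struct raw_tickets t = { .head = 4 | TICKET_SLOWPATH_FLAG, .tail = 4 };
    printf("old: %d  fixed: %d\n", is_locked_old(t), is_locked_fixed(t));
    return 0;
}

Running this prints "old: 1  fixed: 0", i.e. only the masked comparison sees the free lock as free.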
2015 Feb 13
0
[PATCH V4] x86 spinlock: Fix memory corruption on completing completions
On 02/13, Raghavendra K T wrote: > > @@ -164,7 +161,7 @@ static inline int arch_spin_is_locked(arch_spinlock_t *lock) > { > struct __raw_tickets tmp = READ_ONCE(lock->tickets); > > - return tmp.tail != tmp.head; > + return tmp.tail != (tmp.head & ~TICKET_SLOWPATH_FLAG); > } Well, this can probably use __tickets_equal() too. But this is cosmetic. It seems that...
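For reference, a __tickets_equal()-style helper of the kind Oleg mentions would compare two ticket values with the slowpath bit masked on both sides, so call sites need not mask by hand. A hedged sketch of that shape (the helper actually used by the series may differ in detail):

#define TICKET_SLOWPATH_FLAG 0x1

/* Equal once the paravirt slowpath bit is ignored on either operand. */
static inline int __tickets_equal(unsigned short one, unsigned short two)
{
    return !((one ^ two) & ~TICKET_SLOWPATH_FLAG);
}

/* arch_spin_is_locked() could then be written as:
 *     return !__tickets_equal(tmp.head, tmp.tail);
 */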
2015 Feb 15
1
[PATCH V4] x86 spinlock: Fix memory corruption on completing completions
On 02/13/2015 09:02 PM, Oleg Nesterov wrote: > On 02/13, Raghavendra K T wrote: >> >> @@ -164,7 +161,7 @@ static inline int arch_spin_is_locked(arch_spinlock_t *lock) >> { >> struct __raw_tickets tmp = READ_ONCE(lock->tickets); >> >> - return tmp.tail != tmp.head; >> + return tmp.tail != (tmp.head & ~TICKET_SLOWPATH_FLAG); >> } > > Well, this can probably use __tickets_equal() too....
2020 Jul 06
0
[PATCH v3 3/6] powerpc: move spinlock implementation to simple_spinlock
...LOCK_TOKEN (*(u32 *)(&get_paca()->lock_token)) +#else +#define LOCK_TOKEN (*(u32 *)(&get_paca()->paca_index)) +#endif +#else +#define LOCK_TOKEN 1 +#endif + +static __always_inline int arch_spin_value_unlocked(arch_spinlock_t lock) +{ + return lock.slock == 0; +} + +static inline int arch_spin_is_locked(arch_spinlock_t *lock) +{ + smp_mb(); + return !arch_spin_value_unlocked(*lock); +} + +/* + * This returns the old value in the lock, so we succeeded + * in getting the lock if the return value is 0. + */ +static inline unsigned long __arch_spin_trylock(arch_spinlock_t *lock) +{ + unsigned long t...
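The pattern visible in that hunk is a by-value predicate plus a barrier-protected by-pointer wrapper. Roughly, in portable C11 (atomic_thread_fence stands in for the kernel's smp_mb(); the lock-word layout follows the snippet, everything else is illustrative):

#include <stdatomic.h>

typedef struct { volatile unsigned int slock; } arch_spinlock_t;

/* Works on a snapshot, so it can also be applied to a copied lock value. */
static inline int arch_spin_value_unlocked(arch_spinlock_t lock)
{
    return lock.slock == 0;   /* zero token means no owner */
}

/* Full barrier first, then reread the live lock word. */
static inline int arch_spin_is_locked(arch_spinlock_t *lock)
{
    atomic_thread_fence(memory_order_seq_cst);   /* stand-in for smp_mb() */
    return !arch_spin_value_unlocked(*lock);
}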
2016 Jun 02
0
[PATCH v5 1/6] qspinlock: powerpc support qspinlock
...k); -#else /* SPLPAR */ -#define __spin_yield(x) barrier() -#define __rw_yield(x) barrier() -#define SHARED_PROCESSOR 0 -#endif - static inline void arch_spin_lock(arch_spinlock_t *lock) { CLEAR_IO_SYNC; @@ -169,6 +171,7 @@ extern void arch_spin_unlock_wait(arch_spinlock_t *lock); do { while (arch_spin_is_locked(lock)) cpu_relax(); } while (0) #endif +#endif /* !CONFIG_QUEUED_SPINLOCKS */ /* * Read-write spinlocks, allowing multiple readers * but only one writer. diff --git a/arch/powerpc/include/asm/spinlock_types.h b/arch/powerpc/include/asm/spinlock_types.h index 2351adc..bd7144e 100644 --- a/...
2016 Jun 03
0
[PATCH v5 1/6] qspinlock: powerpc support qspinlock
...; -#define SHARED_PROCESSOR 0 > > -#endif > > - > > static inline void arch_spin_lock(arch_spinlock_t *lock) > > { > > CLEAR_IO_SYNC; > > @@ -169,6 +171,7 @@ extern void > > arch_spin_unlock_wait(arch_spinlock_t > > *lock); > > do { while (arch_spin_is_locked(lock)) cpu_relax(); } > > while > > (0) > > #endif > > > > +#endif /* !CONFIG_QUEUED_SPINLOCKS */ > > /* > > * Read-write spinlocks, allowing multiple readers > > * but only one writer. > > diff --git a/arch/powerpc/include/asm/spinloc...
2010 Nov 03
25
[PATCH 00/20] x86: ticket lock rewrite and paravirtualization
From: Jeremy Fitzhardinge <jeremy.fitzhardinge at citrix.com> Hi all, This series does two major things: 1. It converts the bulk of the implementation to C, and makes the "small ticket" and "large ticket" code common. Only the actual size-dependent asm instructions are specific to the ticket size. The resulting generated asm is very similar to the current
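As a reminder of what a plain-C ticket lock looks like (my own user-space rendering with C11 atomics, not the code from this series, which keeps size-dependent asm for the actual ticket increments):

#include <stdatomic.h>

struct ticket_lock {
    atomic_uint head;   /* ticket currently being served */
    atomic_uint tail;   /* next ticket to hand out */
};

static void ticket_lock(struct ticket_lock *l)
{
    /* Take a ticket, then wait until it is being served. */
    unsigned int me = atomic_fetch_add_explicit(&l->tail, 1,
                                                memory_order_relaxed);
    while (atomic_load_explicit(&l->head, memory_order_acquire) != me)
        ;   /* a real implementation would cpu_relax() here */
}

static void ticket_unlock(struct ticket_lock *l)
{
    atomic_fetch_add_explicit(&l->head, 1, memory_order_release);
}

static int ticket_is_locked(struct ticket_lock *l)
{
    return atomic_load_explicit(&l->head, memory_order_relaxed) !=
           atomic_load_explicit(&l->tail, memory_order_relaxed);
}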
2012 Mar 21
15
[PATCH RFC V6 0/11] Paravirtualized ticketlocks
From: Jeremy Fitzhardinge <jeremy.fitzhardinge at citrix.com> Changes since last posting: (Raghavendra K T) [ - Rebased to linux-3.3-rc6. - used function+enum in place of macro (better type checking) - use cmpxchg while resetting zero status for possible race [suggested by Dave Hansen for KVM patches ] ] This series replaces the existing paravirtualized spinlock mechanism with a
2012 Apr 19
13
[PATCH RFC V7 0/12] Paravirtualized ticketlocks
From: Jeremy Fitzhardinge <jeremy.fitzhardinge at citrix.com> This series replaces the existing paravirtualized spinlock mechanism with a paravirtualized ticketlock mechanism. (targeted for 3.5 window) Changes in V7: - Rebased patches to 3.4-rc3 - Added jumplabel split patch (originally from Andrew Jones, rebased to 3.4-rc3) - jumplabel changes from Ingo and Jason taken and now using