search for: locked

Displaying 20 results from an estimated 23448 matches for "locked".

2009 Jan 20
3
Re: problem with running mysql on glusterfs
Hello. I would like to ask about having MySQL data hosted on GlusterFS; please see my issue below. I have posted DEBUG log information on pastebin. I am using GlusterFS version glusterfs 1.4.0rc7 and FUSE version fuse-2.7.3glfs10. My issue is that when hosting MySQL data on GlusterFS, the first time I start the glusterfsd server using [root@mohan ~]#
2014 Jun 11
3
[PATCH v11 09/16] qspinlock, x86: Allow unfair spinlock in a virtual guest
...> + * @lock : Pointer to queue spinlock structure
> + * Return: 1 if lock acquired, 0 if failed
> + */
> +static __always_inline int queue_spin_trylock_unfair(struct qspinlock *lock)
> +{
> +	union arch_qspinlock *qlock = (union arch_qspinlock *)lock;
> +
> +	if (!qlock->locked && (cmpxchg(&qlock->locked, 0, _Q_LOCKED_VAL) == 0))
> +		return 1;
> +	return 0;
> +}
> +
> +/**
> + * queue_spin_lock_unfair - acquire a queue spinlock unfairly
> + * @lock: Pointer to queue spinlock structure
> + */
> +static __always_inline void queue_s...
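The trylock quoted above boils down to a peek at the lock byte followed by a single compare-and-swap. As a rough user-space sketch of the same idea, using C11 atomics rather than the kernel's cmpxchg() and with purely hypothetical names, it might look like this:

#include <stdatomic.h>

/* Toy lock word standing in for the kernel's arch_qspinlock (assumed layout). */
struct toy_qspinlock {
    atomic_int locked;          /* 0 == unlocked, 1 == locked */
};

/* Try to take the lock unfairly: peek first, then claim it with one CAS. */
static inline int toy_trylock_unfair(struct toy_qspinlock *lock)
{
    int expected = 0;

    /* Cheap peek, mirroring the !qlock->locked test in the quote. */
    if (atomic_load_explicit(&lock->locked, memory_order_relaxed) != 0)
        return 0;

    /* One CAS to claim the lock; acquire ordering on success. */
    return atomic_compare_exchange_strong_explicit(&lock->locked, &expected, 1,
                                                   memory_order_acquire,
                                                   memory_order_relaxed);
}

The "unfair" part is simply that this path never queues: whichever vCPU happens to run the CAS first wins, regardless of how long others have been waiting.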
2016 Dec 06
1
[PATCH v8 1/6] powerpc/qspinlock: powerpc support qspinlock
...AM -0500, Pan Xinhui wrote:
> This patch adds basic code to enable qspinlock on powerpc. qspinlock is
> one kind of fair-lock implementation, and we have seen some performance
> improvement under some scenarios.
>
> queued_spin_unlock() releases the lock with just one write of NULL to the
> ::locked field, which sits at a different place depending on the system's
> endianness.
>
> We override some arch_spin_XXX functions because powerpc has io_sync logic
> which makes sure the I/O operations are correctly protected by the lock.
>
> There is another special case, see commit
> 2c610022711 ("lo...
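For readers unfamiliar with the endianness point made here: if the low 8 bits of a 32-bit lock word hold the locked value, a byte-wide store in unlock must hit byte 0 on little-endian but byte 3 on big-endian. A purely illustrative layout (not the kernel's actual struct) could be:

#include <stdint.h>

/* Illustrative only: where an 8-bit "locked" field lands inside a
 * 32-bit lock word depends on the machine's byte order. */
struct toy_qspinlock {
    union {
        uint32_t val;
        struct {
#if defined(__BIG_ENDIAN__)
            uint8_t pad[3];
            uint8_t locked;     /* low-order byte is the last byte in memory */
#else
            uint8_t locked;     /* low-order byte is the first byte in memory */
            uint8_t pad[3];
#endif
        };
    };
};

/* Unlock by storing 0 to just the locked byte, with release ordering. */
static inline void toy_queued_spin_unlock(struct toy_qspinlock *lock)
{
    __atomic_store_n(&lock->locked, 0, __ATOMIC_RELEASE);
}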
2017 Sep 27
2
nbdkit 1.1.15 -- test-python failure
Hi, when I tested building nbdkit 1.1.15 in a current Debian chroot, I ran into the following test failure. A repeated build went fine through the tests, and so far I have not been able to reproduce it with the previous version. The failing build was done using a clean Debian/sid, amd64 chroot spawned by sbuild. Cheers, -Hilko

FAIL: test-python
=================
2010 Nov 16
23
[PATCH 00/14] PV ticket locks without expanding spinlock
...e size, but at the cost of halving the max number of CPUs (127 for an 8-bit ticket, and a hard max of 32767 overall). The extra bit (well, two, but one is unused) indicates whether the lock has gone into "slowpath state", which means one of its lockers has entered its slowpath and has blocked in the hypervisor. This means the current lock-holder needs to make sure the blocked locker gets kicked out of the hypervisor on unlock. The spinlock remains in slowpath state until the last unlock happens (i.e. there are no more queued lockers). This code survives for a while under moderate testing (make -j 100...
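For orientation, the scheme described here reserves a flag bit inside the ticket itself, so tickets advance in steps of two and the low bit marks "slowpath". A rough sketch under those assumptions (constant names are illustrative, not quoted from the series):

/* Low bit of the tail is the "slowpath" flag; real tickets move in
 * steps of TOY_TICKET_INC, which is why only 127 CPUs fit in 8 bits. */
#define TOY_SLOWPATH_FLAG  ((unsigned char)0x1)
#define TOY_TICKET_INC     ((unsigned char)0x2)

static inline int toy_lock_in_slowpath(unsigned char tail)
{
    return tail & TOY_SLOWPATH_FLAG;   /* some waiter blocked in the hypervisor */
}

static inline unsigned char toy_next_ticket(unsigned char tail)
{
    return tail + TOY_TICKET_INC;      /* skip over the flag bit */
}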
2020 Jul 06
0
[PATCH v3 3/6] powerpc: move spinlock implementation to simple_spinlock
...the type definitions are in asm/simple_spinlock_types.h)
+ */
+#include <linux/irqflags.h>
+#include <asm/paravirt.h>
+#ifdef CONFIG_PPC64
+#include <asm/paca.h>
+#endif
+#include <asm/synch.h>
+#include <asm/ppc-opcode.h>
+
+#ifdef CONFIG_PPC64
+/* use 0x800000yy when locked, where yy == CPU number */
+#ifdef __BIG_ENDIAN__
+#define LOCK_TOKEN   (*(u32 *)(&get_paca()->lock_token))
+#else
+#define LOCK_TOKEN   (*(u32 *)(&get_paca()->paca_index))
+#endif
+#else
+#define LOCK_TOKEN   1
+#endif
+
+static __always_inline int arch_spin_value_unlocked(arch_spinlock_t...
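The "0x800000yy" comment above means the lock word stores a token identifying the owning CPU, which is useful when inspecting a crash dump. A hypothetical helper (not part of the patch; masking to one byte here purely for illustration) expressing the same encoding:

#include <stdint.h>

/* Encode "locked by CPU yy" as 0x800000yy; 0 means unlocked. */
static inline uint32_t toy_lock_token(unsigned int cpu)
{
    return UINT32_C(0x80000000) | (cpu & 0xffu);
}

/* A lock value is "unlocked" when no token is stored in it. */
static inline int toy_value_unlocked(uint32_t lockval)
{
    return lockval == 0;
}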
2014 Mar 13
3
[PATCH v6 05/11] pvqspinlock, x86: Allow unfair spinlock in a PV guest
On 12/03/14 18:54, Waiman Long wrote:
> Locking is always an issue in a virtualized environment, as the virtual
> CPU that is waiting on a lock may get scheduled out and hence block
> any progress in lock acquisition even when the lock has been freed.
>
> One solution to this problem is to allow unfair locks in a
> para-virtualized environment. In this case, a new lock acquirer
2010 Nov 03
25
[PATCH 00/20] x86: ticket lock rewrite and paravirtualization
...largely unaffected. There are still some overheads, however:
 - When locking, there are some extra tests to count the spin iterations. There are no extra instructions in the uncontended case, though.
 - When unlocking, there are two ways to detect when it is necessary to kick a blocked CPU:
   - With an unmodified struct spinlock, it can check whether head == tail after unlock; if not, then someone else is trying to lock, and we can do a kick. Unfortunately this generates a very high level of redundant kicks, because the waiting CPU m...
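The head == tail detection described in that last item can be sketched roughly as follows (hypothetical helper names and a split head/tail pair; the real series uses an xadd on a packed ticket word):

#include <stdatomic.h>

/* Toy ticket lock: head is the ticket now being served, tail is the next
 * ticket to hand out.  head == tail means nobody is waiting. */
struct toy_ticketlock {
    atomic_uchar head;
    atomic_uchar tail;
};

/* Hypothetical stand-in for the hypervisor kick described above. */
static inline void toy_kick_waiters(void)
{
    /* would hypercall here to wake vCPUs blocked in the hypervisor */
}

static inline void toy_unlock_and_maybe_kick(struct toy_ticketlock *lock)
{
    /* Release the lock by advancing head. */
    unsigned char head =
        atomic_fetch_add_explicit(&lock->head, 1, memory_order_release) + 1;

    /* If tail has moved past head, someone queued up while we held the lock. */
    if (atomic_load_explicit(&lock->tail, memory_order_relaxed) != head)
        toy_kick_waiters();
}

As the excerpt notes, this simple check kicks whenever anyone is queued at all, which is exactly why it produces a high rate of redundant kicks.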
2015 Feb 06
10
[PATCH] x86 spinlock: Fix memory corruption on completing completions
..._t *lock, __ticket_t ticket)
@@ -59,6 +76,10 @@ static inline void __ticket_unlock_kick(arch_spinlock_t *lock,
 {
 }
+static inline void __ticket_check_and_clear_slowpath(arch_spinlock_t *lock)
+{
+}
+
 #endif /* CONFIG_PARAVIRT_SPINLOCKS */
 static __always_inline int arch_spin_value_unlocked(arch_spinlock_t lock)
@@ -84,7 +105,7 @@ static __always_inline void arch_spin_lock(arch_spinlock_t *lock)
 	register struct __raw_tickets inc = { .tail = TICKET_LOCK_INC };
 	inc = xadd(&lock->tickets, inc);
-	if (likely(inc.head == inc.tail))
+	if (likely(inc.head == (inc.tail & ~TIC...
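The last hunk is truncated, but the shape of the change is that the fast-path comparison has to ignore the slowpath flag bit carried in the tail. A sketch of that comparison with an assumed flag name (the quoted macro name is cut off):

/* Assumed flag value for illustration; the diff's macro name is truncated. */
#define TOY_SLOWPATH_FLAG  ((unsigned char)0x1)

/* Fast-path success test: we own the lock if head matches our ticket
 * once the slowpath flag bit has been masked out of the tail. */
static inline int toy_got_lock_fast_path(unsigned char head, unsigned char tail)
{
    return head == (tail & (unsigned char)~TOY_SLOWPATH_FLAG);
}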
2014 May 30
19
[PATCH v11 00/16] qspinlock: a 4-byte queue spinlock with PV support
v10->v11:
 - Use a simple test-and-set unfair lock to simplify the code, though performance may suffer a bit for large guests with many CPUs.
 - Take out Raghavendra KT's test results, as the unfair lock changes may render some of his results invalid.
 - Add PV support without increasing the size of the core queue node structure.
 - Other minor changes to address some of the
2014 May 07
32
[PATCH v10 00/19] qspinlock: a 4-byte queue spinlock with PV support
v9->v10:
 - Make some minor changes to qspinlock.c to accommodate review feedback.
 - Change author to PeterZ for 2 of the patches.
 - Include Raghavendra KT's test results in patch 18.

v8->v9:
 - Integrate PeterZ's version of the queue spinlock patch with some modification: http://lkml.kernel.org/r/20140310154236.038181843@infradead.org
 - Break the more complex