Displaying 20 results from an estimated 23527 matches for "locking".
2009 Jan 20
3
Re: problem with running mysql on glusterfs
Hello. I would like to ask about hosting mysql data on glusterfs; please see
my issue below. I have posted DEBUG log output on pastebin.
I am using:
GlusterFS version: glusterfs 1.4.0rc7
FUSE version: fuse-2.7.3glfs10
My issue: when hosting mysql data on glusterfs, the first time I start the
glusterfsd server using
[root@mohan ~]#
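A quick way to verify whether the glusterfs mount supports the POSIX
byte-range locks that mysqld requires is a small fcntl() probe. This is a
minimal sketch; the mount point /mnt/gluster and the file name are
placeholders, not taken from the report:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
        /* request an exclusive write lock over the whole file */
        struct flock fl = { .l_type = F_WRLCK, .l_whence = SEEK_SET };
        int fd = open("/mnt/gluster/locktest", O_RDWR | O_CREAT, 0600);

        if (fd < 0 || fcntl(fd, F_SETLK, &fl) < 0) {
                perror("lock probe");   /* fails if POSIX locking is broken */
                return 1;
        }
        puts("fcntl write lock acquired OK");
        close(fd);
        return 0;
}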
2014 Jun 11
3
[PATCH v11 09/16] qspinlock, x86: Allow unfair spinlock in a virtual guest
...airlocks_enabled))
> + return queue_spin_trylock_unfair(lock);
> + else
> + return queue_spin_trylock(lock);
> +}
So I really don't see the point of all this. Why do you need special
{try,}lock paths for this case? Are you worried about the upper 24 bits?
> diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
> index ae1b19d..3723c83 100644
> --- a/kernel/locking/qspinlock.c
> +++ b/kernel/locking/qspinlock.c
> @@ -217,6 +217,14 @@ static __always_inline int try_set_locked(struct qspinlock *lock)
> {
> struct __qspinlock *l = (void *)lock;
> ...
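For context, a hedged reconstruction of the two paths being questioned
(kernel-style sketch using the names from the snippet, not the patch's exact
bodies): the fair trylock cmpxchg()es the whole 32-bit word, so it fails
whenever the upper 24 bits holding the queue tail are non-zero, while the
unfair variant touches only the locked byte and can therefore steal the lock
past queued waiters.

static __always_inline int queue_spin_trylock(struct qspinlock *lock)
{
        /* whole-word cmpxchg: respects the queued waiters in the tail */
        return !atomic_read(&lock->val) &&
               atomic_cmpxchg(&lock->val, 0, _Q_LOCKED_VAL) == 0;
}

static __always_inline int queue_spin_trylock_unfair(struct qspinlock *lock)
{
        struct __qspinlock *l = (void *)lock;

        /* byte-sized cmpxchg: ignores the tail, hence "unfair" */
        return !READ_ONCE(l->locked) && cmpxchg(&l->locked, 0, 1) == 0;
}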
2016 Dec 06
1
[PATCH v8 1/6] powerpc/qspinlock: powerpc support qspinlock
...ed field, which sits at different places in the two endianness
> systems.
>
> We override some arch_spin_XXX calls, as powerpc has io_sync handling that
> makes sure the I/O operations are correctly protected by the lock.
>
> There is another special case, see commit
> 2c610022711 ("locking/qspinlock: Fix spin_unlock_wait() some more")
>
> Signed-off-by: Pan Xinhui <xinhui.pan at linux.vnet.ibm.com>
> ---
> arch/powerpc/include/asm/qspinlock.h | 66 +++++++++++++++++++++++++++++++
> arch/powerpc/include/asm/spinlock.h | 31 +++++++++------
> a...
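The endianness remark is about the layout of the 4-byte lock word: the byte
fields that overlay it swap position with the byte order. An abridged sketch
of the union (field names as in the generic qspinlock type):

struct __qspinlock {
        union {
                atomic_t val;
#ifdef __LITTLE_ENDIAN
                struct { u8 locked; u8 pending; u16 tail; }; /* locked is byte 0 */
#else
                struct { u16 tail; u8 pending; u8 locked; }; /* locked is byte 3 */
#endif
        };
};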
2017 Sep 27
2
nbdkit 1.1.15 -- test-python failure
..."execute": "query-qmp-schema" }' '{ "execute": "quit" }' | "/usr/bin/qemu-system-x86_64" -display none -machine "accel=kvm:tcg" -qmp stdio
libguestfs: saving test results
libguestfs: qemu version: 2.10
libguestfs: qemu mandatory locking: yes
libguestfs: trace: get_sockdir
libguestfs: trace: get_sockdir = "/tmp"
libguestfs: finished testing qemu features
libguestfs: trace: get_backend_setting "gdb"
libguestfs: trace: get_backend_setting = NULL (error)
libguestfs: command: run: dmesg | grep -Eoh 'lpj=[[:digit...
2010 Nov 16
23
[PATCH 00/14] PV ticket locks without expanding spinlock
From: Jeremy Fitzhardinge <jeremy.fitzhardinge at citrix.com>
Hi all,
This is a revised version of the pvticket lock series.
The early part of the series is mostly unchanged: it converts the bulk
of the ticket lock code into C and makes the "small" and "large"
ticket code common. The only changes are the incorporation of various
review comments.
The latter part of
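As a reference point, a minimal ticket lock in portable C11 showing the shape
of the logic the series lifts out of assembly; the 16-bit fields correspond
to the "large" ticket variant, and this is a sketch rather than the series'
code:

#include <stdatomic.h>
#include <stdint.h>

struct ticket_lock {
        _Atomic uint16_t head;  /* ticket now being served */
        _Atomic uint16_t tail;  /* next free ticket */
};

static void ticket_lock(struct ticket_lock *l)
{
        uint16_t me = atomic_fetch_add(&l->tail, 1);    /* take a ticket */
        while (atomic_load_explicit(&l->head, memory_order_acquire) != me)
                ;                                       /* spin until served */
}

static void ticket_unlock(struct ticket_lock *l)
{
        atomic_fetch_add_explicit(&l->head, 1, memory_order_release);
}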
2020 Jul 06
0
[PATCH v3 3/6] powerpc: move spinlock implementation to simple_spinlock
To prepare for queued spinlocks. This is a simple rename, except for updating
the preprocessor guard name and a file reference.
Signed-off-by: Nicholas Piggin <npiggin at gmail.com>
---
arch/powerpc/include/asm/simple_spinlock.h | 292 ++++++++++++++++++
.../include/asm/simple_spinlock_types.h | 21 ++
arch/powerpc/include/asm/spinlock.h | 285 +----------------
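Illustratively, the preprocessor-guard update is just the include guard
tracking the new file name; the guard names below are assumptions, not the
patch's exact ones:

-#ifndef __ASM_SPINLOCK_H
-#define __ASM_SPINLOCK_H
+#ifndef __ASM_SIMPLE_SPINLOCK_H
+#define __ASM_SIMPLE_SPINLOCK_H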
2014 Mar 13
3
[PATCH v6 05/11] pvqspinlock, x86: Allow unfair spinlock in a PV guest
On 12/03/14 18:54, Waiman Long wrote:
> Locking is always an issue in a virtualized environment as the virtual
> CPU that is waiting on a lock may get scheduled out and hence block
> any progress in lock acquisition even when the lock has been freed.
>
> One solution to this problem is to allow unfair lock in a
> para-virtualized...
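A plausible sketch of how such an unfair mode is enabled only when actually
running as a guest; the static key name appears in the series, but this init
hook is an assumption:

struct static_key paravirt_unfairlocks_enabled = STATIC_KEY_INIT_FALSE;

static __init int unfair_locks_init_jump(void)
{
        /* only guests (X86_FEATURE_HYPERVISOR set) opt into unfairness */
        if (boot_cpu_has(X86_FEATURE_HYPERVISOR))
                static_key_slow_inc(&paravirt_unfairlocks_enabled);
        return 0;
}
early_initcall(unfair_locks_init_jump);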
2010 Nov 03
25
[PATCH 00/20] x86: ticket lock rewrite and paravirtualization
...the normal unlock, and then
checks to see if it needs to do a special "kick" to wake the next
CPU.
The net result is that the pv-op calls are restricted to the slow
paths, and the normal fast-paths are largely unaffected.
There are still some overheads, however:
- When locking, there are some extra tests to count the spin iterations (see
the sketch after this list).
There are no extra instructions in the uncontended case though.
- When unlocking, there are two ways to detect when it is necessary
to kick a blocked CPU:
- with an unmodified struct spinlock, it can check to see if...
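The "extra tests to count the spin iterations" amount to a threshold check in
the lock loop. A sketch in the style of the C11 ticket lock above, where
pv_wait() stands in for the hypervisor blocking call and is purely
hypothetical, as is the threshold value:

#define SPIN_THRESHOLD (1 << 11)        /* illustrative value */

static void pv_wait(struct ticket_lock *l, uint16_t me) { }  /* hypothetical pv-op stub */

static void pv_ticket_lock(struct ticket_lock *l)
{
        uint16_t me = atomic_fetch_add(&l->tail, 1);
        unsigned spins = SPIN_THRESHOLD;

        while (atomic_load_explicit(&l->head, memory_order_acquire) != me) {
                if (--spins == 0) {     /* slow path: block in the host */
                        pv_wait(l, me);
                        spins = SPIN_THRESHOLD;
                }
        }
}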
2015 Feb 06
10
[PATCH] x86 spinlock: Fix memory corruption on completing completions
The paravirt spinlock code clears the slowpath flag after doing the unlock.
As explained by Linus, currently it does:
prev = *lock;
add_smp(&lock->tickets.head, TICKET_LOCK_INC);
/* add_smp() is a full mb() */
if (unlikely(lock->tickets.tail & TICKET_SLOWPATH_FLAG))
        __ticket_unlock_slowpath(lock, prev);
which
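Annotated, the danger in that sequence is the window after the lock word is
released; a sketch using the excerpt's names, where the comments are the
point rather than the exact code:

static void buggy_ticket_unlock(arch_spinlock_t *lock)
{
        arch_spinlock_t prev = *lock;                   /* snapshot taken while still owner */
        add_smp(&lock->tickets.head, TICKET_LOCK_INC);  /* lock is released here */
        /* a new owner can now acquire the lock, complete(), and free the
         * object containing it (e.g. a completion on another stack)... */
        if (unlikely(lock->tickets.tail & TICKET_SLOWPATH_FLAG))
                __ticket_unlock_slowpath(lock, prev);   /* ...so this read can be a use-after-free */
}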
2014 May 30
19
[PATCH v11 00/16] qspinlock: a 4-byte queue spinlock with PV support
...ion.
- Add an optimized x86 code path for 2 contending tasks to improve
low contention performance.
v2->v3:
- Simplify the code by using numerous mode only without an unfair option.
- Use the latest smp_load_acquire()/smp_store_release() barriers.
- Move the queue spinlock code to kernel/locking.
- Make the use of queue spinlock the default for x86-64 without user
configuration.
- Additional performance tuning.
v1->v2:
- Add some more comments to document what the code does.
- Add a numerous CPU mode to support >= 16K CPUs
- Add a configuration option to allow lock stealing...
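For reference, the 4-byte word packs the lock state and the waiter-queue tail
into one 32-bit value; the offsets below follow the mainline layout for fewer
than 16K CPUs and should be read as illustrative for this series version:

#define _Q_LOCKED_OFFSET        0       /* bits 0-7:   locked byte */
#define _Q_PENDING_OFFSET       8       /* bits 8-15:  pending */
#define _Q_TAIL_IDX_OFFSET      16      /* bits 16-17: per-cpu node index */
#define _Q_TAIL_CPU_OFFSET      18      /* bits 18-31: cpu number + 1 */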
2014 May 07
32
[PATCH v10 00/19] qspinlock: a 4-byte queue spinlock with PV support
...ion.
- Add an optimized x86 code path for 2 contending tasks to improve
low contention performance.
v2->v3:
- Simplify the code by using numerous mode only without an unfair option.
- Use the latest smp_load_acquire()/smp_store_release() barriers.
- Move the queue spinlock code to kernel/locking.
- Make the use of queue spinlock the default for x86-64 without user
configuration.
- Additional performance tuning.
v1->v2:
- Add some more comments to document what the code does.
- Add a numerous CPU mode to support >= 16K CPUs
- Add a configuration option to allow lock stealing...
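The smp_load_acquire()/smp_store_release() item refers to the lock handoff
pairing; a minimal sketch of the release side, assuming the byte overlay of
the lock word shown earlier:

static inline void queue_spin_unlock(struct qspinlock *lock)
{
        struct __qspinlock *l = (void *)lock;

        /* release store pairs with the next waiter's acquire load */
        smp_store_release(&l->locked, 0);
}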