2014 May 29
0
[RFC] Implement Batched (group) ticket lock
On 05/28/2014 08:16 AM, Raghavendra K T wrote:
>
> TODO:
> - we need an intelligent way to nullify the effect of batching for baremetal
> (because the extra cmpxchg is not required).
To do this, you will need to have 2 slightly different algorithms
depending on the paravirt_ticketlocks_enabled jump label.
>
> - My kernbench/ebizzy test on baremetal (32 cpu +ht sandybridge) did
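A minimal userspace sketch of the jump-label dispatch suggested above. All names here (spin_lock_dispatch, native_ticket_lock, batched_ticket_lock) are hypothetical, and the kernel's static key is modeled as a plain bool; the point is only the shape: bare metal keeps the plain ticket algorithm and never pays the extra cmpxchg.

#include <stdatomic.h>
#include <stdbool.h>

struct ticket_lock { atomic_uint head, tail; };

/* In the kernel this would be static_key_false(&paravirt_ticketlocks_enabled),
 * which patches the branch away at runtime; a bool stands in for it here. */
static bool paravirt_ticketlocks_enabled;

/* Plain FIFO ticket lock: one fetch_add to take a ticket, then spin. */
static void native_ticket_lock(struct ticket_lock *l)
{
        unsigned int me = atomic_fetch_add(&l->tail, 1);
        while (atomic_load(&l->head) != me)
                ;       /* spin until our ticket is served */
}

static void native_ticket_unlock(struct ticket_lock *l)
{
        atomic_fetch_add(&l->head, 1);  /* serve the next ticket */
}

void batched_ticket_lock(struct ticket_lock *l);  /* sketched after the v2 posting below */

static void spin_lock_dispatch(struct ticket_lock *l)
{
        if (paravirt_ticketlocks_enabled)
                batched_ticket_lock(l);  /* virtualized: batching is worth the cmpxchg */
        else
                native_ticket_lock(l);   /* bare metal: no extra cmpxchg */
}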
2014 May 28
0
[RFC] Implement Batched (group) ticket lock
On 05/28/2014 08:16 AM, Raghavendra K T wrote:
This patch looks very promising.
> TODO:
> - we need an intelligent way to nullify the effect of batching for baremetal
> (because the extra cmpxchg is not required).
On (larger?) NUMA systems, the unfairness may be a nice performance
benefit, reducing cache line bouncing through the system, and it
could well outweigh the extra cmpxchg at
2014 Jun 28
2
[RFC PATCH v2] Implement Batched (group) ticket lock
In a virtualized environment there are mainly three problems
related to spinlocks that affect performance.
1. Lock Holder Preemption (LHP)
2. Lock Waiter Preemption (LWP)
3. Starvation/fairness
Though ticketlocks solve the fairness problem, they worsen the LWP and LHP
problems. Though pv-ticketlocks tried to address these problems, we can further
improve at the cost of relaxed fairness. The following patch
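A compilable sketch of the batching idea just described, under stated assumptions: tickets advance by 2 so bit 0 of head can serve as a lock bit, the batch size of 4 is assumed here (the RFC treats it as tunable), and all names are illustrative rather than the patch's own.

#include <stdatomic.h>

#define TICKET_LOCK_INC   2u    /* tickets occupy bits 1..; bit 0 is the lock bit */
#define TICKET_LOCK_BIT   1u
#define BATCH_SIZE        4u    /* assumed here; the RFC makes batch size tunable */
#define BATCH_MASK        (~(BATCH_SIZE * TICKET_LOCK_INC - 1u))

struct ticket_lock { atomic_uint head, tail; };

void batched_ticket_lock(struct ticket_lock *l)
{
        unsigned int me = atomic_fetch_add(&l->tail, TICKET_LOCK_INC);

        for (;;) {
                unsigned int h = atomic_load(&l->head);

                /* spin until our ticket's batch is the active one ... */
                if ((h & BATCH_MASK) != (me & BATCH_MASK))
                        continue;
                /* ... then compete with the rest of the batch for the lock
                 * bit; this cmpxchg is the relaxed-fairness (and the
                 * baremetal-cost) part of the scheme */
                if (!(h & TICKET_LOCK_BIT) &&
                    atomic_compare_exchange_weak(&l->head, &h,
                                                 h | TICKET_LOCK_BIT))
                        return;
        }
}

void batched_ticket_unlock(struct ticket_lock *l)
{
        /* head is odd while the lock is held; adding 1 clears the lock bit
         * and advances the serving ticket by TICKET_LOCK_INC in one add */
        atomic_fetch_add(&l->head, 1);
}

Within a batch the cmpxchg winners are unordered, which is exactly the relaxed fairness that relieves LWP/LHP; across batches strict FIFO order is preserved, which bounds starvation.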
2015 Feb 09
2
[PATCH V2] x86 spinlock: Fix memory corruption on completing completions
Paravirt spinlock clears the slowpath flag after doing the unlock.
As explained by Linus, currently it does:
prev = *lock;
add_smp(&lock->tickets.head, TICKET_LOCK_INC);
/* add_smp() is a full mb() */
if (unlikely(lock->tickets.tail & TICKET_SLOWPATH_FLAG))
        __ticket_unlock_slowpath(lock, prev);
which
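The corruption window this series targets can be seen in miniature below. This is a hedged sketch, not the patch itself: head, tail and the flag are compressed into a single word for brevity, the flag placement is assumed, and slowpath_kick() is a hypothetical stand-in for __ticket_unlock_slowpath().

#include <stdatomic.h>

#define TICKET_LOCK_INC       2u
#define TICKET_SLOWPATH_FLAG  1u        /* assumed placement for this sketch */

static void slowpath_kick(atomic_uint *lockword)
{
        /* hypothetical stand-in; in the pv ticketlock scheme the pointer
         * identifies which waiters to wake rather than being dereferenced */
        (void)lockword;
}

static void ticket_unlock_sketch(atomic_uint *lockword)
{
        /* Buggy shape (roughly what the patch description quotes):
         *
         *         atomic_fetch_add(lockword, TICKET_LOCK_INC);  // released
         *         if (atomic_load(lockword) & TICKET_SLOWPATH_FLAG)
         *                 slowpath_kick(lockword);
         *
         * Between the add and the load, the next owner may take the lock
         * and free the memory containing it (e.g. a completion on a stack
         * frame the woken waiter unwinds), making the load a use-after-free.
         */

        /* Safer shape: fetch_add returns the pre-release value, so the flag
         * is read from the very operation that releases the lock and the
         * unlock path never loads the lock word after releasing it. */
        unsigned int prev = atomic_fetch_add(lockword, TICKET_LOCK_INC);
        if (prev & TICKET_SLOWPATH_FLAG)
                slowpath_kick(lockword);
}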
2015 Feb 13
3
[PATCH V4] x86 spinlock: Fix memory corruption on completing completions
Paravirt spinlock clears the slowpath flag after doing the unlock.
As explained by Linus, currently it does:
prev = *lock;
add_smp(&lock->tickets.head, TICKET_LOCK_INC);
/* add_smp() is a full mb() */
if (unlikely(lock->tickets.tail & TICKET_SLOWPATH_FLAG))
        __ticket_unlock_slowpath(lock, prev);
which is
2015 Feb 15
7
[PATCH V5] x86 spinlock: Fix memory corruption on completing completions
Paravirt spinlock clears the slowpath flag after doing the unlock.
As explained by Linus, currently it does:
prev = *lock;
add_smp(&lock->tickets.head, TICKET_LOCK_INC);
/* add_smp() is a full mb() */
if (unlikely(lock->tickets.tail & TICKET_SLOWPATH_FLAG))
        __ticket_unlock_slowpath(lock, prev);
which is
2015 Feb 12
8
[PATCH V3] x86 spinlock: Fix memory corruption on completing completions
Paravirt spinlock clears the slowpath flag after doing the unlock.
As explained by Linus, currently it does:
prev = *lock;
add_smp(&lock->tickets.head, TICKET_LOCK_INC);
/* add_smp() is a full mb() */
if (unlikely(lock->tickets.tail & TICKET_SLOWPATH_FLAG))
        __ticket_unlock_slowpath(lock, prev);
which
2015 Feb 06
10
[PATCH] x86 spinlock: Fix memory corruption on completing completions
Paravirt spinlock clears the slowpath flag after doing the unlock.
As explained by Linus, currently it does:
prev = *lock;
add_smp(&lock->tickets.head, TICKET_LOCK_INC);
/* add_smp() is a full mb() */
if (unlikely(lock->tickets.tail & TICKET_SLOWPATH_FLAG))
        __ticket_unlock_slowpath(lock, prev);
which
2015 Apr 30
0
[PATCH 4/6] x86: introduce new pvops function spin_unlock
To speed up paravirtualized spinlock handling when running on bare
metal, introduce a new pvops function "spin_unlock". This is a simple
add instruction (possibly with lock prefix) when the kernel is running
on bare metal.
As the patched instruction includes a lock prefix in some
configurations, annotate this location as subject to lock prefix
patching. This works even if
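The dispatch shape being described, reduced to a hedged userspace sketch: the real series patches the call site itself (down to a single add instruction on bare metal) rather than calling through a pointer, and all names here are illustrative.

#include <stdatomic.h>

struct ticket_lock { atomic_uint head, tail; };

struct pv_lock_ops_sketch {
        void (*spin_unlock)(struct ticket_lock *lock);
};

/* Native case: the whole unlock is one atomic add on the head ticket. */
static void native_spin_unlock(struct ticket_lock *lock)
{
        atomic_fetch_add(&lock->head, 1);
}

/* A hypervisor guest would install a variant that also kicks waiters. */
static struct pv_lock_ops_sketch pv_lock_ops = {
        .spin_unlock = native_spin_unlock,
};

static inline void spin_unlock_dispatch(struct ticket_lock *lock)
{
        pv_lock_ops.spin_unlock(lock);
}

On bare metal the kernel goes further than this sketch and replaces the indirect call with the add instruction itself, which is why the quoted text annotates the site for lock prefix patching.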
2015 Apr 30
12
[PATCH 0/6] x86: reduce paravirtualized spinlock overhead
Paravirtualized spinlocks produce some overhead even if the kernel is
running on bare metal. The main reason is the more complex locking
and unlocking functions; unlocking especially is no longer just one
instruction, but so complex that it is no longer inlined.
This patch series addresses the issue by adding two more pvops
functions to reduce the size of the inlined spinlock functions. When
2015 Feb 15
0
[PATCH V5] x86 spinlock: Fix memory corruption on completing completions
* Raghavendra K T <raghavendra.kt at linux.vnet.ibm.com> [2015-02-15 11:25:44]:
Resending the V5 with the smp_mb__after_atomic() change, without bumping up
the revision
---8<---
From 0b9ecde30e3bf5b5b24009fd2ac5fc7ac4b81158 Mon Sep 17 00:00:00 2001
From: Raghavendra K T <raghavendra.kt at linux.vnet.ibm.com>
Date: Fri, 6 Feb 2015 16:44:11 +0530
Subject: [PATCH RESEND V5] x86 spinlock:
2015 Feb 08
0
[PATCH] x86 spinlock: Fix memory corruption on completing completions
On 02/06/2015 06:49 AM, Raghavendra K T wrote:
> Paravirt spinlock clears the slowpath flag after doing the unlock.
> As explained by Linus, currently it does:
> prev = *lock;
> add_smp(&lock->tickets.head, TICKET_LOCK_INC);
>
> /* add_smp() is a full mb() */
>
> if (unlikely(lock->tickets.tail &