Threads similar to: [RFC PATCH v2] Implement Batched (group) ticket lock

Displaying results from an estimated 500 matches similar to: "[RFC PATCH v2] Implement Batched (group) ticket lock"

2014 May 28
7
[RFC] Implement Batched (group) ticket lock
In a virtualized environment there are mainly three spinlock-related problems that affect performance: 1. lock holder preemption (LHP), 2. lock waiter preemption (LWP), 3. starvation/fairness. Ticket locks solve the fairness problem, but they worsen the LWP and LHP problems. pv-ticketlocks tried to address this, but we can improve further at the cost of relaxed fairness. In this patch, we form a batch ...
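For illustration, here is a minimal user-space sketch of the batching idea described in this thread. It is not the kernel patch: TICKET_BATCH, the field names, and the helpers are assumptions. Batches of tickets are served in FIFO order, while waiters inside the active batch compete unfairly via compare-and-swap.

    #include <stdatomic.h>
    #include <stdbool.h>

    #define TICKET_BATCH      4u              /* batch size, a power of two */
    #define TICKET_BATCH_MASK (~(TICKET_BATCH - 1u))

    struct batch_lock {
        atomic_uint head;                     /* completed critical sections */
        atomic_uint tail;                     /* next ticket to hand out */
        atomic_bool grabbed;                  /* ownership word within a batch */
    };

    static void batch_lock_acquire(struct batch_lock *l)
    {
        unsigned int me = atomic_fetch_add(&l->tail, 1u);

        for (;;) {
            /* FIFO between batches: wait until my batch is being served */
            while ((atomic_load(&l->head) & TICKET_BATCH_MASK) !=
                   (me & TICKET_BATCH_MASK))
                ;                             /* cpu_relax() in kernel code */

            /* unfair within the batch: first cmpxchg winner takes the lock */
            bool expected = false;
            if (atomic_compare_exchange_weak(&l->grabbed, &expected, true))
                return;
        }
    }

    static void batch_lock_release(struct batch_lock *l)
    {
        atomic_fetch_add(&l->head, 1u);       /* retire one ticket of the batch */
        atomic_store(&l->grabbed, false);     /* let the next waiter win */
    }

head only crosses a batch boundary after every ticket in the batch has completed, which is what bounds the unfairness.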
2014 May 28
0
[RFC] Implement Batched (group) ticket lock
On 05/28/2014 08:16 AM, Raghavendra K T wrote: This patch looks very promising. > TODO: > - we need an intelligent way to nullify the effect of batching for baremetal > (because extra cmpxchg is not required). On (larger?) NUMA systems, the unfairness may be a nice performance benefit, reducing cache line bouncing through the system, and it could well outweigh the extra cmpxchg at ...
2014 May 29
0
[RFC] Implement Batched (group) ticket lock
On 05/28/2014 08:16 AM, Raghavendra K T wrote: > > TODO: > - we need an intelligent way to nullify the effect of batching for baremetal > (because extra cmpxchg is not required). To do this, you will need to have 2 slightly different algorithms depending on the paravirt_ticketlocks_enabled jump label. > > - My kernbench/ebizzy test on baremetal (32 cpu +ht sandybridge) did ...
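A sketch of the two-algorithm idea mentioned here, assuming the existing paravirt_ticketlocks_enabled static key; the two helper names are hypothetical:

    /* bare metal keeps the classic ticket path and never pays the
     * batching cmpxchg; the key is only enabled under a hypervisor */
    static __always_inline void arch_spin_lock(arch_spinlock_t *lock)
    {
        if (static_key_false(&paravirt_ticketlocks_enabled))
            __batched_ticket_lock(lock);  /* hypothetical batched variant */
        else
            __plain_ticket_lock(lock);    /* hypothetical classic FIFO path */
    }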
2015 Feb 15
0
[PATCH V5] x86 spinlock: Fix memory corruption on completing completions
* Raghavendra K T <raghavendra.kt at linux.vnet.ibm.com> [2015-02-15 11:25:44]: Resending the V5 with the smp_mb__after_atomic() change without bumping up the revision ---8<--- From 0b9ecde30e3bf5b5b24009fd2ac5fc7ac4b81158 Mon Sep 17 00:00:00 2001 From: Raghavendra K T <raghavendra.kt at linux.vnet.ibm.com> Date: Fri, 6 Feb 2015 16:44:11 +0530 Subject: [PATCH RESEND V5] x86 spinlock: ...
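For context, smp_mb__after_atomic() upgrades a preceding non-value-returning atomic to a full barrier on architectures where it is not one already; the generic pattern (not the exact hunk from this V5 resend, and the flag word here is hypothetical) is:

    clear_bit(TICKET_SLOWPATH_BIT, &lock->flags); /* hypothetical flag word */
    smp_mb__after_atomic(); /* order the RMW above against later accesses */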
2015 Feb 13
3
[PATCH V4] x86 spinlock: Fix memory corruption on completing completions
Paravirt spinlock clears the slowpath flag after doing the unlock. As explained by Linus, currently it does: prev = *lock; add_smp(&lock->tickets.head, TICKET_LOCK_INC); /* add_smp() is a full mb() */ if (unlikely(lock->tickets.tail & TICKET_SLOWPATH_FLAG)) __ticket_unlock_slowpath(lock, prev); which is ...
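The danger in the sequence above is that add_smp() releases the lock, so by the time lock->tickets.tail is read, another CPU may have taken the lock, completed, and freed the memory holding it (e.g. a completion on the stack). A sketch of the xadd-based unlock these threads converge on, with field and helper names as in the ticket-lock code of that era (treat the exact shape as approximate):

    static __always_inline void arch_spin_unlock(arch_spinlock_t *lock)
    {
        if (TICKET_SLOWPATH_FLAG &&
            static_key_false(&paravirt_ticketlocks_enabled)) {
            __ticket_t head;

            /* release and fetch the old head in one atomic op, so no
             * later load touches a lock that may already be freed */
            head = xadd(&lock->tickets.head, TICKET_LOCK_INC);

            if (unlikely(head & TICKET_SLOWPATH_FLAG)) {
                head &= ~TICKET_SLOWPATH_FLAG;
                __ticket_unlock_kick(lock, head + TICKET_LOCK_INC);
            }
        } else
            __add(&lock->tickets.head, TICKET_LOCK_INC, UNLOCK_LOCK_PREFIX);
    }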
2015 Feb 24
2
[PATCH for stable] x86/spinlocks/paravirt: Fix memory corruption on unlock
Paravirt spinlock clears the slowpath flag after doing the unlock. As explained by Linus, currently it does: prev = *lock; add_smp(&lock->tickets.head, TICKET_LOCK_INC); /* add_smp() is a full mb() */ if (unlikely(lock->tickets.tail & TICKET_SLOWPATH_FLAG)) __ticket_unlock_slowpath(lock, prev); which is ...
2015 Feb 09
2
[PATCH V2] x86 spinlock: Fix memory corruption on completing completions
Paravirt spinlock clears the slowpath flag after doing the unlock. As explained by Linus, currently it does: prev = *lock; add_smp(&lock->tickets.head, TICKET_LOCK_INC); /* add_smp() is a full mb() */ if (unlikely(lock->tickets.tail & TICKET_SLOWPATH_FLAG)) __ticket_unlock_slowpath(lock, prev); which ...
2015 Apr 30
0
[PATCH 3/6] x86: introduce new pvops function clear_slowpath
To speed up paravirtualized spinlock handling when running on bare metal, introduce a new pvops function, "clear_slowpath". This is a nop when the kernel is running on bare metal. As the clear_slowpath function is common to all users, add a new initialization function to set the pvops function pointer, in order to avoid spreading the knowledge of which function to use. Signed-off-by: Juergen ...
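A sketch of the pvops shape this describes; beyond the clear_slowpath name, the member layout and the init helper's name are assumptions:

    /* pv_lock_ops grows a clear_slowpath hook; bare metal gets a nop,
     * which paravirt patching can then eliminate at runtime */
    struct pv_lock_ops {
        /* existing members elided */
        void (*clear_slowpath)(arch_spinlock_t *lock, __ticket_t head);
    };

    static void native_clear_slowpath(arch_spinlock_t *lock, __ticket_t head)
    {
        /* nop: the slowpath flag is never set on bare metal */
    }

    /* common init helper, so callers need not know which variant to pick */
    void __init init_clear_slowpath(void)
    {
        pv_lock_ops.clear_slowpath = native_clear_slowpath;
    }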
2015 Feb 08
0
[PATCH] x86 spinlock: Fix memory corruption on completing completions
On 02/06/2015 06:49 AM, Raghavendra K T wrote: > Paravirt spinlock clears slowpath flag after doing unlock. > As explained by Linus currently it does: > prev = *lock; > add_smp(&lock->tickets.head, TICKET_LOCK_INC); > > /* add_smp() is a full mb() */ > > if (unlikely(lock->tickets.tail & ...
2015 Feb 12
8
[PATCH V3] x86 spinlock: Fix memory corruption on completing completions
Paravirt spinlock clears the slowpath flag after doing the unlock. As explained by Linus, currently it does: prev = *lock; add_smp(&lock->tickets.head, TICKET_LOCK_INC); /* add_smp() is a full mb() */ if (unlikely(lock->tickets.tail & TICKET_SLOWPATH_FLAG)) __ticket_unlock_slowpath(lock, prev); which ...
2015 Apr 30
0
[PATCH 2/6] x86: move decision about clearing slowpath flag into arch_spin_lock()
The decision whether the slowpath flag is to be cleared for paravirtualized spinlocks is located in __ticket_check_and_clear_slowpath() today. Move that decision into arch_spin_lock() and add an unlikely() annotation, to avoid calling a function in case the compiler chooses not to inline __ticket_check_and_clear_slowpath() and the slowpath flag isn't set. Signed-off-by: Juergen Gross ...
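The shape being described, sketched with the spin loop elided:

    static __always_inline void arch_spin_lock(arch_spinlock_t *lock)
    {
        register struct __raw_tickets inc = { .tail = TICKET_LOCK_INC };

        inc = xadd(&lock->tickets, inc);
        /* ... spin until inc.head catches up with inc.tail ... */

        /* test the flag here under unlikely(), so the (possibly
         * out-of-line) clear function is only reached when it is set */
        if (unlikely(inc.head & TICKET_SLOWPATH_FLAG))
            __ticket_check_and_clear_slowpath(lock, inc.head);
    }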
2015 Feb 15
7
[PATCH V5] x86 spinlock: Fix memory corruption on completing completions
Paravirt spinlock clears the slowpath flag after doing the unlock. As explained by Linus, currently it does: prev = *lock; add_smp(&lock->tickets.head, TICKET_LOCK_INC); /* add_smp() is a full mb() */ if (unlikely(lock->tickets.tail & TICKET_SLOWPATH_FLAG)) __ticket_unlock_slowpath(lock, prev); which is ...