
Displaying 20 results from an estimated 63 matches for "ebizzi".

Did you mean: ebizzy
2014 Apr 07
2
[PATCH v8 00/10] qspinlock: a 4-byte queue spinlock with PV support
On 04/07/2014 02:14 AM, Raghavendra K T wrote: > > > I tested the v7,v8 of qspinlock with unfair config on kvm guest. > I was curious about unfair locks performance in undercommit cases. > (overcommit case is expected to perform well) > > But I am seeing hang in overcommit cases. Gdb showed that many vcpus > are halted and there was no progress. Suspecting the problem
2014 Apr 08
1
[PATCH v8 00/10] qspinlock: a 4-byte queue spinlock with PV support
On 04/07/2014 01:51 PM, Raghavendra K T wrote: > On 04/07/2014 10:08 PM, Waiman Long wrote: >> On 04/07/2014 02:14 AM, Raghavendra K T wrote: > [...] >>> But I am seeing hang in overcommit cases. Gdb showed that many vcpus >>> are halted and there was no progress. Suspecting the problem /race with >>> halting, I removed the halt() part of kvm_hibernate(). I
2014 May 07
0
[PATCH v10 18/19] pvqspinlock, x86: Enable PV qspinlock PV for KVM
This patch adds the necessary KVM specific code to allow KVM to support the CPU halting and kicking operations needed by the queue spinlock PV code. Two KVM guests of 20 CPU cores (2 nodes) were created for performance testing in one of the following three configurations: 1) Only 1 VM is active 2) Both VMs are active and they share the same 20 physical CPUs (200% overcommit) 3) Both VMs are
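The "halting and kicking operations" mentioned in this snippet come down to two small guest-side hooks: one that puts a waiting vCPU to sleep and one that asks the host to wake the vCPU that should take the lock next. As a rough sketch only, modelled on the style of arch/x86/kernel/kvm.c rather than taken from the patch itself (the names kvm_kick_cpu/kvm_wait and the exact hypercall arguments are assumptions here):

#include <linux/kvm_para.h>   /* kvm_hypercall2(), KVM_HC_KICK_CPU */
#include <linux/percpu.h>     /* per_cpu() */
#include <asm/smp.h>          /* x86_cpu_to_apicid */
#include <asm/irqflags.h>     /* local_irq_save(), safe_halt() */

/* Kick: ask the host to wake the halted vCPU that should take the lock. */
static void kvm_kick_cpu(int cpu)
{
    int apicid = per_cpu(x86_cpu_to_apicid, cpu);

    kvm_hypercall2(KVM_HC_KICK_CPU, 0, apicid);
}

/* Halt: put this vCPU to sleep until it is kicked or an interrupt fires. */
static void kvm_wait(u8 *ptr, u8 val)
{
    unsigned long flags;

    local_irq_save(flags);
    /* Re-check the wait condition with interrupts off so a kick that
     * races with us cannot be lost between the check and the halt. */
    if (READ_ONCE(*ptr) == val)
        safe_halt();          /* re-enables interrupts, then halts */
    local_irq_restore(flags);
}

In the actual series these hooks are registered through the paravirt lock ops, so the qspinlock slowpath only uses them when the kernel detects it is running as a KVM guest.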
2014 Apr 07
0
[PATCH v8 00/10] qspinlock: a 4-byte queue spinlock with PV support
On 04/07/2014 10:08 PM, Waiman Long wrote: > On 04/07/2014 02:14 AM, Raghavendra K T wrote: [...] >> But I am seeing hang in overcommit cases. Gdb showed that many vcpus >> are halted and there was no progress. Suspecting the problem /race with >> halting, I removed the halt() part of kvm_hibernate(). I am yet to >> take a closer look at the code on halt() related
2014 Oct 29
0
[PATCH v13 10/11] pvqspinlock, x86: Enable PV qspinlock for KVM
This patch adds the necessary KVM specific code to allow KVM to support the CPU halting and kicking operations needed by the queue spinlock PV code. Two KVM guests of 20 CPU cores (2 nodes) were created for performance testing in one of the following three configurations: 1) Only 1 VM is active 2) Both VMs are active and they share the same 20 physical CPUs (200% overcommit) The tests run
2014 May 07
1
[PATCH v10 18/19] pvqspinlock, x86: Enable PV qspinlock PV for KVM
> Raghavendra KT had done some performance testing on this patch with > the following results: > > Overall we are seeing good improvement for pv-unfair version. > > System: 32 cpu sandybridge with HT on (4 nodes with 32 GB each) > Guest : 8GB with 16 vcpu/VM. > Average was taken over 8-10 data points. > > Base = 3.15-rc2 with PARAVIRT_SPINLOCKS = y > > A =
2014 May 28
7
[RFC] Implement Batched (group) ticket lock
In a virtualized environment there are mainly three problems related to spinlocks that affect performance. 1. LHP (lock holder preemption) 2. Lock Waiter Preemption (LWP) 3. Starvation/fairness Though ticketlocks solve the fairness problem, they worsen the LWP and LHP problems. pv-ticketlocks tried to address this. But we can further improve at the cost of relaxed fairness. In this patch, we form a batch
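To make the batching idea concrete, here is a deliberately simplified user-space sketch (C11 atomics standing in for the kernel's ticket-lock primitives; BATCH and all names below are illustrative, not taken from the RFC). Waiters still take tickets, but every waiter whose ticket lies within BATCH of the current head races for an inner test-and-set word, so a preempted front-of-queue vCPU no longer blocks the whole line:

#include <stdatomic.h>
#include <sched.h>

#define BATCH 4         /* tickets within BATCH of the head may contend */

struct batch_ticket_lock {
    atomic_uint head;   /* advanced by one on every unlock */
    atomic_uint tail;   /* next ticket to hand out */
    atomic_flag owner;  /* inner lock raced for by the active batch */
};                      /* zero-initialise; owner starts clear */

static void btl_lock(struct batch_ticket_lock *l)
{
    unsigned int me = atomic_fetch_add(&l->tail, 1);

    for (;;) {
        /* Outside the batch: wait until our ticket is close enough to
         * the head (signed difference tolerates counter wrap-around). */
        while ((int)(me - atomic_load(&l->head)) >= BATCH)
            sched_yield();              /* stand-in for cpu_relax()/halt() */

        /* Inside the batch the lock is unfair: whoever wins this
         * test-and-set gets it, even if a batch-mate waited longer. */
        if (!atomic_flag_test_and_set(&l->owner))
            return;
        sched_yield();
    }
}

static void btl_unlock(struct batch_ticket_lock *l)
{
    atomic_fetch_add(&l->head, 1);      /* admit one more waiter to the batch */
    atomic_flag_clear(&l->owner);       /* let the batch race for the lock */
}

Fairness is relaxed only among the waiters currently admitted to the batch, which is the trade-off between strict FIFO order and preemption tolerance that the RFC describes.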
2014 Apr 07
0
[PATCH v8 00/10] qspinlock: a 4-byte queue spinlock with PV support
On 04/02/2014 06:57 PM, Waiman Long wrote: > N.B. Sorry for the duplicate. This patch series was resent as the > original one was rejected by the vger.kernel.org list server > due to a long header. There is no change in content. > > v7->v8: > - Remove one unneeded atomic operation from the slowpath, thus > improving performance. > - Simplify some of
2014 Apr 27
0
[PATCH v9 00/19] qspinlock: a 4-byte queue spinlock with PV support
On 04/17/2014 08:33 PM, Waiman Long wrote: > v8->v9: - Integrate PeterZ's version of the queue spinlock patch with some > modification: > http://lkml.kernel.org/r/20140310154236.038181843@infradead.org - Break the more complex patches into smaller ones to ease review effort. > - Fix a race condition in the PV qspinlock code. > > v7->v8:
2014 May 07
0
[PATCH v10 10/19] qspinlock, x86: Allow unfair spinlock in a virtual guest
Locking is always an issue in a virtualized environment because of 2 different types of problems: 1) Lock holder preemption 2) Lock waiter preemption One solution to the lock waiter preemption problem is to allow an unfair lock in a virtualized environment. In this case, a new lock acquirer can come and steal the lock if the next-in-line CPU to get the lock is scheduled out. A simple unfair lock
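A minimal sketch of the lock-stealing behaviour described here, again as hedged user-space C11 rather than the patch's actual code (the kernel version sits behind the qspinlock fast path and uses xchg()/cpu_relax(); the names below are illustrative):

#include <stdatomic.h>
#include <stdbool.h>

/* The whole lock is a single byte; there is no queue and no ordering. */
typedef _Atomic unsigned char unfair_lock_t;

/* Any newcomer may "steal" the lock with one atomic exchange, no matter
 * how long other vCPUs have already been spinning on it. */
static inline bool unfair_trylock(unfair_lock_t *l)
{
    return atomic_exchange(l, 1) == 0;
}

static inline void unfair_lock(unfair_lock_t *l)
{
    while (!unfair_trylock(l))
        while (atomic_load(l))
            ;                   /* spin on a plain read; cpu_relax() in-kernel */
}

static inline void unfair_unlock(unfair_lock_t *l)
{
    atomic_store(l, 0);
}

Because a running vCPU can take the lock even while the next-in-line vCPU is preempted, lock-waiter preemption stops stalling the whole guest; the price is possible starvation, which is why the series keeps this behaviour as a virtual-guest-only option.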
2014 May 29
0
[RFC] Implement Batched (group) ticket lock
On 05/28/2014 08:16 AM, Raghavendra K T wrote: > > TODO: > - we need an intelligent way to nullify the effect of batching for baremetal > (because extra cmpxchg is not required). To do this, you will need to have 2 slightly different algorithms depending on the paravirt_ticketlocks_enabled jump label. > > - My kernbench/ebizzy test on baremetal (32 cpu +ht sandybridge) did
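The suggestion in this reply (two slightly different algorithms selected by the paravirt_ticketlocks_enabled jump label) could be sketched roughly as follows; the sketch is purely illustrative, and __batched_ticket_spin_lock/__ticket_spin_lock are placeholder names, not code from the RFC:

#include <linux/jump_label.h>
#include <asm/spinlock_types.h>

extern struct static_key paravirt_ticketlocks_enabled;

static __always_inline void arch_spin_lock(arch_spinlock_t *lock)
{
    /* On bare metal the jump label stays off, so this compiles down to
     * the classic FIFO ticket lock with no extra cmpxchg on the hot path. */
    if (static_key_false(&paravirt_ticketlocks_enabled))
        __batched_ticket_spin_lock(lock);   /* batched path for PV guests */
    else
        __ticket_spin_lock(lock);           /* plain ticket lock */
}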
2014 May 28
0
[RFC] Implement Batched (group) ticket lock
On 05/28/2014 08:16 AM, Raghavendra K T wrote: This patch looks very promising. > TODO: > - we need an intelligent way to nullify the effect of batching for baremetal > (because extra cmpxchg is not required). On (larger?) NUMA systems, the unfairness may be a nice performance benefit, reducing cache line bouncing through the system, and it could well outweigh the extra cmpxchg at
2014 Oct 29
15
[PATCH v13 00/11] qspinlock: a 4-byte queue spinlock with PV support
v12->v13: - Change patch 9 to generate separate versions of the queue_spin_lock_slowpath functions for bare metal and PV guest. This reduces the performance impact of the PV code on bare metal systems. v11->v12: - Based on PeterZ's version of the qspinlock patch (https://lkml.org/lkml/2014/6/15/63). - Incorporated many of the review comments from Konrad Wilk and Paolo
2014 Apr 17
33
[PATCH v9 00/19] qspinlock: a 4-byte queue spinlock with PV support
v8->v9: - Integrate PeterZ's version of the queue spinlock patch with some modification: http://lkml.kernel.org/r/20140310154236.038181843@infradead.org - Break the more complex patches into smaller ones to ease review effort. - Fix a race condition in the PV qspinlock code. v7->v8: - Remove one unneeded atomic operation from the slowpath, thus improving