Konrad Rzeszutek Wilk
2014-Apr-03 17:23 UTC
[PATCH v8 00/10] qspinlock: a 4-byte queue spinlock with PV support
On Wed, Apr 02, 2014 at 10:10:17PM -0400, Waiman Long wrote:
> On 04/02/2014 04:35 PM, Waiman Long wrote:
> >On 04/02/2014 10:32 AM, Konrad Rzeszutek Wilk wrote:
> >>On Wed, Apr 02, 2014 at 09:27:29AM -0400, Waiman Long wrote:
> >>>N.B. Sorry for the duplicate. This patch series was resent as the
> >>>    original one was rejected by the vger.kernel.org list server
> >>>    due to a long header. There is no change in content.
> >>>
> >>>v7->v8:
> >>> - Remove one unneeded atomic operation from the slowpath, thus
> >>>   improving performance.
> >>> - Simplify some of the code and add more comments.
> >>> - Test for the X86_FEATURE_HYPERVISOR CPU feature bit to
> >>>   enable/disable the unfair lock.
> >>> - Reduce the unfair lock slowpath lock-stealing frequency
> >>>   depending on its distance from the queue head.
> >>> - Add performance data for the IvyBridge-EX CPU.
> >>FYI, your v7 patch with 32 VCPUs (on a 32-CPU socket machine) on an
> >>HVM guest under Xen stops working after a while. The workload is
> >>doing 'make -j32' on the Linux kernel.
> >>
> >>Completely unresponsive. Thoughts?
> >>
> >
> >Thanks for reporting that. I haven't done much testing on Xen.
> >My focus was on KVM. I will perform more tests on Xen to see if I
> >can reproduce the problem.
> >
>
> BTW, does the halting and sending IPI mechanism work in HVM? I saw

Yes.

> that in RHEL7, PV spinlock was explicitly disabled when in HVM mode.
> However, this piece of code isn't in the upstream code. So I wonder
> if there is a problem with that.

The PV ticketlock fixed it for HVM. It was disabled before because the
PV guests were using bytelocks while the HVM guests were using
ticketlocks, and you couldn't swap in PV bytelocks for ticketlocks
during startup.

>
> -Longman
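The X86_FEATURE_HYPERVISOR gating mentioned in the v7->v8 changelog
amounts to a boot-time CPU feature check: the bit is a synthetic flag
that is only set when the kernel boots as a guest. A minimal sketch of
the pattern, with a hypothetical queue_spinlock_init() helper standing
in for wherever the series actually hooks it up:

#include <asm/cpufeature.h>	/* boot_cpu_has(), X86_FEATURE_HYPERVISOR */

static bool use_unfair_lock;

static void __init queue_spinlock_init(void)
{
	/*
	 * X86_FEATURE_HYPERVISOR is set by the guest-detection code at
	 * boot and stays clear on bare metal, so the unfair variant can
	 * never kick in there.
	 */
	if (boot_cpu_has(X86_FEATURE_HYPERVISOR))
		use_unfair_lock = true;
}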
Waiman Long
2014-Apr-04 02:57 UTC
[PATCH v8 00/10] qspinlock: a 4-byte queue spinlock with PV support
On 04/03/2014 01:23 PM, Konrad Rzeszutek Wilk wrote:
> On Wed, Apr 02, 2014 at 10:10:17PM -0400, Waiman Long wrote:
>> On 04/02/2014 04:35 PM, Waiman Long wrote:
>>> On 04/02/2014 10:32 AM, Konrad Rzeszutek Wilk wrote:
>>>> On Wed, Apr 02, 2014 at 09:27:29AM -0400, Waiman Long wrote:
>>>>> N.B. Sorry for the duplicate. This patch series was resent as the
>>>>>     original one was rejected by the vger.kernel.org list server
>>>>>     due to a long header. There is no change in content.
>>>>>
>>>>> v7->v8:
>>>>>  - Remove one unneeded atomic operation from the slowpath, thus
>>>>>    improving performance.
>>>>>  - Simplify some of the code and add more comments.
>>>>>  - Test for the X86_FEATURE_HYPERVISOR CPU feature bit to
>>>>>    enable/disable the unfair lock.
>>>>>  - Reduce the unfair lock slowpath lock-stealing frequency
>>>>>    depending on its distance from the queue head.
>>>>>  - Add performance data for the IvyBridge-EX CPU.
>>>> FYI, your v7 patch with 32 VCPUs (on a 32-CPU socket machine) on an
>>>> HVM guest under Xen stops working after a while. The workload is
>>>> doing 'make -j32' on the Linux kernel.
>>>>
>>>> Completely unresponsive. Thoughts?
>>>>
>>> Thanks for reporting that. I haven't done much testing on Xen.
>>> My focus was on KVM. I will perform more tests on Xen to see if I
>>> can reproduce the problem.
>>>
>> BTW, does the halting and sending IPI mechanism work in HVM? I saw
> Yes.
>> that in RHEL7, PV spinlock was explicitly disabled when in HVM mode.
>> However, this piece of code isn't in the upstream code. So I wonder
>> if there is a problem with that.
> The PV ticketlock fixed it for HVM. It was disabled before because
> the PV guests were using bytelocks while the HVM guests were using
> ticketlocks, and you couldn't swap in PV bytelocks for ticketlocks
> during startup.

The RHEL7 code uses PV ticketlock already. RHEL7 ships a single kernel
for all configurations, so PV ticketlock as well as Xen and KVM support
is compiled in. I think booting the kernel on bare metal will cause the
Xen code to work in HVM mode, thus activating the PV spinlock code,
which has a negative impact on performance. That may be why it was
disabled, so that bare-metal performance would not be impacted.

BTW, could you send me more information about the configuration of the
machine, like the .config file that you used?

-Longman
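For reference, the upstream PV ticketlock Konrad refers to keeps its
bare-metal cost near zero by hiding the kick path behind a static key
that is flipped only when a hypervisor registers its lock ops at boot.
Lightly trimmed from the x86 ticketlock unlock path of that era
(arch/x86/include/asm/spinlock.h); paravirt_ticketlocks_enabled lives
in arch/x86/kernel/paravirt-spinlocks.c:

#include <linux/jump_label.h>

/* Stays false on bare metal; enabled only by Xen/KVM guest init. */
struct static_key paravirt_ticketlocks_enabled = STATIC_KEY_INIT_FALSE;

static __always_inline void arch_spin_unlock(arch_spinlock_t *lock)
{
	if (TICKET_SLOWPATH_FLAG &&
	    static_key_false(&paravirt_ticketlocks_enabled)) {
		arch_spinlock_t prev;

		prev = *lock;
		add_smp(&lock->tickets.head, TICKET_LOCK_INC);

		/* add_smp() is a full mb(), so the tail read is ordered. */

		if (unlikely(lock->tickets.tail & TICKET_SLOWPATH_FLAG))
			__ticket_unlock_slowpath(lock, prev);	/* kick waiter */
	} else {
		/* Bare metal: the patched-out branch leaves a plain ticket bump. */
		__add(&lock->tickets.head, TICKET_LOCK_INC, UNLOCK_LOCK_PREFIX);
	}
}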
Konrad Rzeszutek Wilk
2014-Apr-04 16:55 UTC
[PATCH v8 00/10] qspinlock: a 4-byte queue spinlock with PV support
On Thu, Apr 03, 2014 at 10:57:18PM -0400, Waiman Long wrote:
> On 04/03/2014 01:23 PM, Konrad Rzeszutek Wilk wrote:
> >On Wed, Apr 02, 2014 at 10:10:17PM -0400, Waiman Long wrote:
> >>On 04/02/2014 04:35 PM, Waiman Long wrote:
> >>>On 04/02/2014 10:32 AM, Konrad Rzeszutek Wilk wrote:
> >>>>On Wed, Apr 02, 2014 at 09:27:29AM -0400, Waiman Long wrote:
> >>>>>N.B. Sorry for the duplicate. This patch series was resent as the
> >>>>>    original one was rejected by the vger.kernel.org list server
> >>>>>    due to a long header. There is no change in content.
> >>>>>
> >>>>>v7->v8:
> >>>>> - Remove one unneeded atomic operation from the slowpath, thus
> >>>>>   improving performance.
> >>>>> - Simplify some of the code and add more comments.
> >>>>> - Test for the X86_FEATURE_HYPERVISOR CPU feature bit to
> >>>>>   enable/disable the unfair lock.
> >>>>> - Reduce the unfair lock slowpath lock-stealing frequency
> >>>>>   depending on its distance from the queue head.
> >>>>> - Add performance data for the IvyBridge-EX CPU.
> >>>>FYI, your v7 patch with 32 VCPUs (on a 32-CPU socket machine) on
> >>>>an HVM guest under Xen stops working after a while. The workload
> >>>>is doing 'make -j32' on the Linux kernel.
> >>>>
> >>>>Completely unresponsive. Thoughts?
> >>>>
> >>>Thanks for reporting that. I haven't done much testing on Xen.
> >>>My focus was on KVM. I will perform more tests on Xen to see if I
> >>>can reproduce the problem.
> >>>
> >>BTW, does the halting and sending IPI mechanism work in HVM? I saw
> >Yes.
> >>that in RHEL7, PV spinlock was explicitly disabled when in HVM mode.
> >>However, this piece of code isn't in the upstream code. So I wonder
> >>if there is a problem with that.
> >The PV ticketlock fixed it for HVM. It was disabled before because
> >the PV guests were using bytelocks while the HVM guests were using
> >ticketlocks, and you couldn't swap in PV bytelocks for ticketlocks
> >during startup.
>
> The RHEL7 code uses PV ticketlock already. RHEL7 ships a single
> kernel for all configurations, so PV ticketlock as well as Xen and
> KVM support is compiled in. I think booting the kernel on bare metal
> will cause the Xen code to work in HVM mode, thus activating the PV
> spinlock code, which has a negative impact on performance.

Huh? -EPARSE

> That may be why it was disabled, so that bare-metal performance
> would not be impacted.

I am not following you.

>
> BTW, could you send me more information about the configuration of
> the machine, like the .config file that you used?

Marcos, could you please send that information to Peter? Thanks!

>
> -Longman
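On the bare-metal point that failed to parse: in the upstream code of
that era, compiled-in Xen support does not by itself activate PV
spinlocks on native hardware, because the lock ops are only swapped in
from the Xen guest initialization path. A condensed (not verbatim)
sketch from arch/x86/xen/spinlock.c after the PV ticketlock rework:

/* Reached only when the kernel has detected it is a Xen guest;
 * on bare metal pv_lock_ops keeps its native defaults. */
void __init xen_init_spinlocks(void)
{
	if (!xen_pvspin) {
		/* Honors the "xen_nopvspin" command-line option. */
		printk(KERN_DEBUG "xen: PV spinlocks disabled\n");
		return;
	}

	pv_lock_ops.lock_spinning = PV_CALLEE_SAVE(xen_lock_spinning);
	pv_lock_ops.unlock_kick = xen_unlock_kick;
}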