Rik van Riel
2015-Apr-09 13:16 UTC
[PATCH v15 16/16] unfair qspinlock: a queue based unfair lock
On 04/09/2015 03:01 AM, Peter Zijlstra wrote:
> On Wed, Apr 08, 2015 at 02:32:19PM -0400, Waiman Long wrote:
>> For a virtual guest with the qspinlock patch, a simple unfair byte lock
>> will be used if PV spinlock is not configured in or the hypervisor
>> isn't either KVM or Xen. The byte lock works fine with small guests
>> of just a few vCPUs. On a much larger guest, however, the byte lock
>> can have serious performance problems.
>
> Who cares?

There are some people out there running guests with dozens
of vCPUs. If the code exists to make those setups run better,
is there a good reason not to use it?

Having said that, only KVM and Xen seem to support very
large guests, and PV spinlock is available there.

I believe both VMware and Hyper-V have a 32-vCPU limit, anyway.

-- 
All rights reversed
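The "simple unfair byte lock" under discussion is essentially a test-and-set lock on a single byte. A minimal sketch of the idea, assuming C11 atomics rather than the kernel's own primitives (the names here are illustrative, not the patch's actual code):

    #include <stdatomic.h>
    #include <stdbool.h>

    struct byte_lock {
        atomic_bool locked;
    };

    static void byte_lock_acquire(struct byte_lock *lock)
    {
        /* Every waiter spins on the same byte, so with dozens of vCPUs
         * the cache line bounces among all of them; there is no queueing,
         * so a waiter can also starve indefinitely. */
        while (atomic_exchange_explicit(&lock->locked, true,
                                        memory_order_acquire)) {
            while (atomic_load_explicit(&lock->locked,
                                        memory_order_relaxed))
                ;   /* test-and-test-and-set: spin read-only until free */
        }
    }

    static void byte_lock_release(struct byte_lock *lock)
    {
        atomic_store_explicit(&lock->locked, false, memory_order_release);
    }

The read-only inner loop keeps the cache line shared while the lock is held, but every release still triggers a stampede of exchange operations from all waiters at once, which is why this degrades on large guests.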
Peter Zijlstra
2015-Apr-09 14:13 UTC
[PATCH v15 16/16] unfair qspinlock: a queue based unfair lock
On Thu, Apr 09, 2015 at 09:16:24AM -0400, Rik van Riel wrote:
> On 04/09/2015 03:01 AM, Peter Zijlstra wrote:
> > On Wed, Apr 08, 2015 at 02:32:19PM -0400, Waiman Long wrote:
> >> For a virtual guest with the qspinlock patch, a simple unfair byte lock
> >> will be used if PV spinlock is not configured in or the hypervisor
> >> isn't either KVM or Xen. The byte lock works fine with small guests
> >> of just a few vCPUs. On a much larger guest, however, the byte lock
> >> can have serious performance problems.
> >
> > Who cares?
>
> There are some people out there running guests with dozens
> of vCPUs. If the code exists to make those setups run better,
> is there a good reason not to use it?

Well, use paravirt; !paravirt stuff sucks performance-wise anyhow.

The question really is: is the added complexity worth the maintenance
burden? And I'm just not convinced !paravirt virt is a performance
critical target.

> Having said that, only KVM and Xen seem to support very
> large guests, and PV spinlock is available there.
>
> I believe both VMware and Hyper-V have a 32-vCPU limit, anyway.

Don't we have Hyper-V paravirt drivers? They could add support for
paravirt spinlocks too.
Rik van Riel
2015-Apr-09 14:30 UTC
[PATCH v15 16/16] unfair qspinlock: a queue based unfair lock
On 04/09/2015 10:13 AM, Peter Zijlstra wrote:
> On Thu, Apr 09, 2015 at 09:16:24AM -0400, Rik van Riel wrote:
>> On 04/09/2015 03:01 AM, Peter Zijlstra wrote:
>>> On Wed, Apr 08, 2015 at 02:32:19PM -0400, Waiman Long wrote:
>>>> For a virtual guest with the qspinlock patch, a simple unfair byte lock
>>>> will be used if PV spinlock is not configured in or the hypervisor
>>>> isn't either KVM or Xen. The byte lock works fine with small guests
>>>> of just a few vCPUs. On a much larger guest, however, the byte lock
>>>> can have serious performance problems.
>>>
>>> Who cares?
>>
>> There are some people out there running guests with dozens
>> of vCPUs. If the code exists to make those setups run better,
>> is there a good reason not to use it?
>
> Well, use paravirt; !paravirt stuff sucks performance-wise anyhow.
>
> The question really is: is the added complexity worth the maintenance
> burden? And I'm just not convinced !paravirt virt is a performance
> critical target.

Fair enough.
Waiman Long
2015-Apr-09 21:52 UTC
[PATCH v15 16/16] unfair qspinlock: a queue based unfair lock
On 04/09/2015 10:13 AM, Peter Zijlstra wrote:
> On Thu, Apr 09, 2015 at 09:16:24AM -0400, Rik van Riel wrote:
>> On 04/09/2015 03:01 AM, Peter Zijlstra wrote:
>>> On Wed, Apr 08, 2015 at 02:32:19PM -0400, Waiman Long wrote:
>>>> For a virtual guest with the qspinlock patch, a simple unfair byte lock
>>>> will be used if PV spinlock is not configured in or the hypervisor
>>>> isn't either KVM or Xen. The byte lock works fine with small guests
>>>> of just a few vCPUs. On a much larger guest, however, the byte lock
>>>> can have serious performance problems.
>>>
>>> Who cares?
>>
>> There are some people out there running guests with dozens
>> of vCPUs. If the code exists to make those setups run better,
>> is there a good reason not to use it?
>
> Well, use paravirt; !paravirt stuff sucks performance-wise anyhow.
>
> The question really is: is the added complexity worth the maintenance
> burden? And I'm just not convinced !paravirt virt is a performance
> critical target.

I am just thinking that the unfair qspinlock performs better than
the simple byte lock. However, my current priority is to get the
native and PV qspinlock upstream. The unfair qspinlock can certainly
wait.

Cheers,
Longman
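For context on the trade-off Waiman describes: an unfair queued lock keeps the byte lock's lock-stealing fast path, but parks waiters in an MCS-style queue so that only the queue head contends on the lock word. A rough sketch of that idea, again assuming C11 atomics; the names (unfair_qlock, qnode) are illustrative and this is not the patch's actual code:

    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stddef.h>

    struct qnode {
        _Atomic(struct qnode *) next;
        atomic_bool wait;
    };

    struct unfair_qlock {
        atomic_bool locked;            /* the actual lock byte */
        _Atomic(struct qnode *) tail;  /* MCS-style waiter queue */
    };

    static bool uq_trylock(struct unfair_qlock *lock)
    {
        return !atomic_exchange_explicit(&lock->locked, true,
                                         memory_order_acquire);
    }

    static void uq_lock(struct unfair_qlock *lock, struct qnode *node)
    {
        struct qnode *prev, *next;

        if (uq_trylock(lock))          /* lock stealing: ignore the queue */
            return;

        atomic_store_explicit(&node->next, NULL, memory_order_relaxed);
        atomic_store_explicit(&node->wait, true, memory_order_relaxed);

        /* Join the queue; with a predecessor, spin on our own node
         * instead of on the shared lock byte. */
        prev = atomic_exchange_explicit(&lock->tail, node,
                                        memory_order_acq_rel);
        if (prev) {
            atomic_store_explicit(&prev->next, node, memory_order_release);
            while (atomic_load_explicit(&node->wait, memory_order_acquire))
                ;
        }

        /* We are the queue head: contend on the lock byte, racing only
         * against the owner and any stealing newcomers. */
        while (!uq_trylock(lock))
            ;

        /* Dequeue ourselves and promote our successor, if any. */
        next = atomic_load_explicit(&node->next, memory_order_acquire);
        if (!next) {
            struct qnode *expected = node;

            if (atomic_compare_exchange_strong_explicit(&lock->tail,
                    &expected, NULL,
                    memory_order_acq_rel, memory_order_acquire))
                return;                /* queue is empty again */
            /* A successor is enqueueing; wait for it to link itself. */
            while (!(next = atomic_load_explicit(&node->next,
                                                 memory_order_acquire)))
                ;
        }
        atomic_store_explicit(&next->wait, false, memory_order_release);
    }

    static void uq_unlock(struct unfair_qlock *lock)
    {
        atomic_store_explicit(&lock->locked, false, memory_order_release);
    }

Compared with the plain byte lock, waiters spin on their own queue node rather than all hammering the lock byte, so a release no longer causes a stampede, while the initial trylock preserves the unfairness that lets a running vCPU take the lock ahead of preempted waiters.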