search for: unfairness

Displaying 14 results from an estimated 514 matches for "unfairness".

2014 Mar 13
3
[PATCH v6 05/11] pvqspinlock, x86: Allow unfair spinlock in a PV guest
On 12/03/14 18:54, Waiman Long wrote: > Locking is always an issue in a virtualized environment, as the virtual > CPU that is waiting on a lock may get scheduled out and hence block > any progress in lock acquisition even when the lock has been freed. > > One solution to this problem is to allow unfair locks in a > para-virtualized environment. In this case, a new lock acquirer
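The failure mode Waiman describes is inherent to any fair (FIFO) lock. As a minimal illustration -- a userspace C11 sketch, not the kernel's implementation -- consider a ticket lock:

    #include <stdatomic.h>

    struct ticket_lock {
        atomic_uint next;   /* next ticket to hand out */
        atomic_uint owner;  /* ticket currently being served */
    };

    static void ticket_lock_acquire(struct ticket_lock *l)
    {
        unsigned int me = atomic_fetch_add(&l->next, 1);

        /* Strict FIFO: if the vCPU whose ticket equals 'owner' is
         * scheduled out, the lock stays logically free after a release,
         * yet nobody can enter and every later ticket waits behind it. */
        while (atomic_load_explicit(&l->owner, memory_order_acquire) != me)
            ;
    }

    static void ticket_lock_release(struct ticket_lock *l)
    {
        atomic_fetch_add_explicit(&l->owner, 1, memory_order_release);
    }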
2014 Jun 11
3
[PATCH v11 09/16] qspinlock, x86: Allow unfair spinlock in a virtual guest
On Fri, May 30, 2014 at 11:43:55AM -0400, Waiman Long wrote: > Enabling this configuration feature causes a slight decrease in the > performance of an uncontended lock-unlock operation by about 1-2%, > mainly due to the use of a static key. However, uncontended lock-unlock > operations are really just a tiny percentage of a real workload. So > there should be no noticeable change in
2014 Mar 13
0
[PATCH v6 05/11] pvqspinlock, x86: Allow unfair spinlock in a PV guest
...y likely to lose a race with > another running VCPU trying to take a lock (since it takes time for the > VCPU to be rescheduled). Actually, I think the unfair version should be automatically selected if running on a hypervisor. Per-hypervisor pvops can choose to enable the fair one. Lock unfairness may be particularly evident on a virtualized guest when the host is overcommitted, but problems with fair locks are even worse. In fact, RHEL/CentOS 6 already uses unfair locks if X86_FEATURE_HYPERVISOR is set. The patch was rejected upstream in favor of pv ticketlocks, but pv ticketlocks do n...
2014 May 07
0
[PATCH v10 10/19] qspinlock, x86: Allow unfair spinlock in a virtual guest
Locking is always an issue in a virtualized environment because of 2 different types of problems: 1) Lock holder preemption 2) Lock waiter preemption One solution to the lock waiter preemption problem is to allow unfair locks in a virtualized environment. In this case, a new lock acquirer can come and steal the lock if the next-in-line CPU to get the lock is scheduled out. A simple unfair lock
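The stealing behaviour described above is easiest to see in a plain test-and-set lock. A minimal userspace C11 sketch (illustrative only, not the patch itself):

    #include <stdatomic.h>
    #include <stdbool.h>

    typedef atomic_bool tas_lock_t;

    static void tas_lock(tas_lock_t *l)
    {
        for (;;) {
            /* Spin on a plain read, then race to set the flag. Whichever
             * CPU is actually running when the lock drops wins, so a
             * preempted waiter blocks nobody -- but nothing prevents
             * starvation either. */
            if (!atomic_load_explicit(l, memory_order_relaxed) &&
                !atomic_exchange_explicit(l, true, memory_order_acquire))
                return;
        }
    }

    static void tas_unlock(tas_lock_t *l)
    {
        atomic_store_explicit(l, false, memory_order_release);
    }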
2014 Jun 12
2
[PATCH v11 09/16] qspinlock, x86: Allow unfair spinlock in a virtual guest
...disable the > unfair lock code in the slowpath, but still allow the unfair version in the > fast path to get the best possible performance in a virtual guest. > > Yes, I could take that out to allow either unfair or paravirt spinlock, but > not both. I do think that a little bit of unfairness will help in the > virtual environment. When will you learn to like simplicity and stop this massive over-engineering effort? There's no sane reason to have the test-and-set virt and paravirt locks enabled at the same bloody time. There's 3 distinct cases: - native - virt - paravi...
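The three cases in this reply amount to a one-time decision at boot. A hedged sketch of that structure (every name below is illustrative; the kernel commits the choice with pv_ops and boot-time-patched static keys rather than a function-pointer table):

    #include <stdbool.h>

    struct lock_ops {
        void (*lock)(void *l);
        void (*unlock)(void *l);
    };

    static void native_lock(void *l)   { /* fair queued spinlock  */ }
    static void native_unlock(void *l) { }
    static void tas_lock(void *l)      { /* test-and-set spinlock */ }
    static void tas_unlock(void *l)    { }
    static void pv_lock(void *l)       { /* halt/kick pv spinlock */ }
    static void pv_unlock(void *l)     { }

    static struct lock_ops lock_ops;

    /* Run once during early boot: commit to exactly one implementation
     * so the hot path never has to test which world it is in. */
    static void pick_lock_flavor(bool on_hypervisor, bool has_pv_interface)
    {
        if (!on_hypervisor)
            lock_ops = (struct lock_ops){ native_lock, native_unlock };
        else if (has_pv_interface)
            lock_ops = (struct lock_ops){ pv_lock, pv_unlock };
        else
            lock_ops = (struct lock_ops){ tas_lock, tas_unlock };
    }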
2014 Mar 13
2
[PATCH v6 05/11] pvqspinlock, x86: Allow unfair spinlock in a PV guest
On Wed, Mar 12, 2014 at 02:54:52PM -0400, Waiman Long wrote: > +static inline void arch_spin_lock(struct qspinlock *lock) > +{ > + if (static_key_false(&paravirt_unfairlocks_enabled)) > + queue_spin_lock_unfair(lock); > + else > + queue_spin_lock(lock); > +} So I would have expected something like: if (static_key_false(&paravirt_spinlock)) { while
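The reviewer's counter-proposal is cut off mid-snippet. From the fragment, the shape appears to be a static-key-gated trylock loop; the following kernel-style reconstruction is an assumption (only the 'paravirt_spinlock' guard and the opening 'while' appear in the excerpt):

    static inline void arch_spin_lock(struct qspinlock *lock)
    {
        if (static_key_false(&paravirt_spinlock)) {
            /* Paravirt guest: degrade to test-and-set behaviour and
             * never queue, so a preempted waiter cannot stall others. */
            while (!queue_spin_trylock(lock))
                cpu_relax();
            return;
        }
        queue_spin_lock(lock);  /* native: fair queued lock */
    }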
2014 Mar 17
2
[PATCH v6 05/11] pvqspinlock, x86: Allow unfair spinlock in a PV guest
...>another running VCPU trying to take a lock (since it takes time for the > >VCPU to be rescheduled). > > Actually, I think the unfair version should be automatically > selected if running on a hypervisor. Per-hypervisor pvops can > choose to enable the fair one. > > Lock unfairness may be particularly evident on a virtualized guest > when the host is overcommitted, but problems with fair locks are > even worse. > > In fact, RHEL/CentOS 6 already uses unfair locks if > X86_FEATURE_HYPERVISOR is set. The patch was rejected upstream in > favor of pv ticketloc...
2014 Jun 12
0
[PATCH v11 09/16] qspinlock, x86: Allow unfair spinlock in a virtual guest
...rt spinlock code will disable the unfair lock code in the slowpath, but still allow the unfair version in the fast path to get the best possible performance in a virtual guest. Yes, I could take that out to allow either unfair or paravirt spinlock, but not both. I do think that a little bit of unfairness will help in the virtual environment. >> +/* >> + * Redefine arch_spin_lock and arch_spin_trylock as inline functions that will >> + * jump to the unfair versions if the static key virt_unfairlocks_enabled >> + * is true. >> + */ >> +#undef arch_spin_lock >&...
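The mechanism quoted at the end of this message -- #undef the generic helper and re-define it behind a static key -- would look roughly like this kernel-style sketch (identifiers follow the excerpt; the bodies are assumptions):

    /* Jump to the unfair version when the static key
     * virt_unfairlocks_enabled is true. */
    #undef arch_spin_lock
    static __always_inline void arch_spin_lock(struct qspinlock *lock)
    {
        if (static_key_false(&virt_unfairlocks_enabled))
            queue_spin_lock_unfair(lock);  /* fast path may steal */
        else
            queue_spin_lock(lock);         /* normal fair path */
    }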
2014 Mar 14
4
[PATCH v6 05/11] pvqspinlock, x86: Allow unfair spinlock in a PV guest
...air > version. It is not as unfair as the other unfair locking schemes that spin > on the lock repetitively. So lock starvation should be less of a problem. > > On the other hand, it may not perform as well as the other unfair locking > schemes. It is a compromise to provide some lock unfairness without > sacrificing the good cacheline behavior of the queue spinlock. But but but... any kind of queueing gets you into a world of hurt with virt. The simple test-and-set lock (as per the above) still sucks due to lock holder preemption, but at least the suckage doesn't queue. Because w...
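The compromise Waiman describes -- some unfairness without giving up the queue's cacheline behaviour -- can be pictured as a single steal attempt in front of a fair slowpath. A minimal userspace C11 sketch of that shape (assumed, not the actual patch):

    #include <stdatomic.h>
    #include <stdbool.h>

    struct qlock { atomic_int val; };   /* 0 = free, 1 = held */

    static bool qlock_trylock(struct qlock *l)
    {
        int free = 0;
        return atomic_compare_exchange_strong_explicit(
                &l->val, &free, 1,
                memory_order_acquire, memory_order_relaxed);
    }

    /* Stand-in for the fair queue: the real slowpath spins on a
     * per-CPU MCS node instead of hammering the lock word itself. */
    static void qlock_slowpath(struct qlock *l)
    {
        while (!qlock_trylock(l))
            ;
    }

    static void qlock_lock(struct qlock *l)
    {
        if (qlock_trylock(l))   /* unfair fast path: steal if free */
            return;
        qlock_slowpath(l);      /* fair, cache-friendly contended path */
    }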
2014 May 30
0
[PATCH v11 09/16] qspinlock, x86: Allow unfair spinlock in a virtual guest
Locking is always an issue in a virtualized environment because of 2 different types of problems: 1) Lock holder preemption 2) Lock waiter preemption One solution to the lock waiter preemption problem is to allow unfair locks in a virtualized environment. In this case, a new lock acquirer can come and steal the lock if the next-in-line CPU to get the lock is scheduled out. A simple unfair queue
2014 Mar 12
0
[PATCH v6 05/11] pvqspinlock, x86: Allow unfair spinlock in a PV guest
Locking is always an issue in a virtualized environment, as the virtual CPU that is waiting on a lock may get scheduled out and hence block any progress in lock acquisition even when the lock has been freed. One solution to this problem is to allow unfair locks in a para-virtualized environment. In this case, a new lock acquirer can come and steal the lock if the next-in-line CPU to get the lock is
2014 Jun 12
0
[PATCH v11 09/16] qspinlock, x86: Allow unfair spinlock in a virtual guest
...> unfair lock code in the slowpath, but still allow the unfair version in the >> fast path to get the best possible performance in a virtual guest. >> >> Yes, I could take that out to allow either unfair or paravirt spinlock, but >> not both. I do think that a little bit of unfairness will help in the >> virtual environment. > When will you learn to like simplicity and stop this massive over > engineering effort? > > There's no sane reason to have the test-and-set virt and paravirt locks > enabled at the same bloody time. > > There's 3 distinct...
2014 May 07
0
[PATCH v10 18/19] pvqspinlock, x86: Enable PV qspinlock for KVM
This patch adds the necessary KVM specific code to allow KVM to support the CPU halting and kicking operations needed by the queue spinlock PV code. Two KVM guests of 20 CPU cores (2 nodes) were created for performance testing in one of the following three configurations: 1) Only 1 VM is active 2) Both VMs are active and they share the same 20 physical CPUs (200% overcommit) 3) Both VMs are
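The halting and kicking operations mentioned here reduce to two hooks: a waiter halts its vCPU instead of spinning, and the unlocker kicks it awake through a hypercall. A simplified, partly assumed kernel-style sketch (KVM_HC_KICK_CPU is the real hypercall number; NMI and error handling are omitted):

    static void kvm_wait(u8 *ptr, u8 val)
    {
        unsigned long flags;

        local_irq_save(flags);
        /* Re-check with interrupts off so a kick arriving between the
         * check and the halt cannot be lost. */
        if (READ_ONCE(*ptr) == val)
            safe_halt();   /* sleep until the host delivers an interrupt */
        local_irq_restore(flags);
    }

    static void kvm_kick_cpu(int cpu)
    {
        /* Ask the host to wake the target vCPU out of its halt. */
        kvm_hypercall2(KVM_HC_KICK_CPU, 0,
                       per_cpu(x86_cpu_to_apicid, cpu));
    }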
2014 Feb 26
2
[PATCH RFC v5 4/8] pvqspinlock, x86: Allow unfair spinlock in a real PV environment
On Wed, Feb 26, 2014 at 10:14:24AM -0500, Waiman Long wrote: > Locking is always an issue in a virtualized environment, as the virtual > CPU that is waiting on a lock may get scheduled out and hence block > any progress in lock acquisition even when the lock has been freed. > > One solution to this problem is to allow unfair locks in a > para-virtualized environment. In this case,