search for: unfair

Displaying results from an estimated 514 matches for "unfair".

2014 Mar 13
3
[PATCH v6 05/11] pvqspinlock, x86: Allow unfair spinlock in a PV guest
...18:54, Waiman Long wrote: > Locking is always an issue in a virtualized environment as the virtual > CPU that is waiting on a lock may get scheduled out and hence block > any progress in lock acquisition even when the lock has been freed. > > One solution to this problem is to allow unfair lock in a > para-virtualized environment. In this case, a new lock acquirer can > come and steal the lock if the next-in-line CPU to get the lock is > scheduled out. Unfair lock in a native environment is generally not a > good idea as there is a possibility of lock starvation for a hea...
2014 Jun 11
3
[PATCH v11 09/16] qspinlock, x86: Allow unfair spinlock in a virtual guest
...nlock operation by about 1-2% > mainly due to the use of a static key. However, uncontended lock-unlock > operations are really just a tiny percentage of a real workload. So > there should be no noticeable change in application performance. No, entirely unacceptable. > +#ifdef CONFIG_VIRT_UNFAIR_LOCKS > +/** > + * queue_spin_trylock_unfair - try to acquire the queue spinlock unfairly > + * @lock : Pointer to queue spinlock structure > + * Return: 1 if lock acquired, 0 if failed > + */ > +static __always_inline int queue_spin_trylock_unfair(struct qspinlock *lock) > +{...
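For context, the hunk quoted above amounts to roughly the following sketch. The signature and kerneldoc come from the quote itself; the union cast, field name, and locked-value constant are assumptions added for illustration and may not match the actual patch:

#ifdef CONFIG_VIRT_UNFAIR_LOCKS
/*
 * Illustrative sketch only: grab the lock byte with a single
 * compare-and-swap, ignoring any CPUs already queued behind it,
 * which is what makes the acquisition "unfair".
 * Union, field, and constant names are assumed.
 */
static __always_inline int queue_spin_trylock_unfair(struct qspinlock *lock)
{
	union arch_qspinlock *qlock = (union arch_qspinlock *)lock;

	/* Read first so a busy lock does not cause a useless atomic write. */
	return !qlock->lock &&
	       cmpxchg(&qlock->lock, 0, _QSPINLOCK_LOCKED) == 0;
}
#endif /* CONFIG_VIRT_UNFAIR_LOCKS */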
2014 Mar 13
0
[PATCH v6 05/11] pvqspinlock, x86: Allow unfair spinlock in a PV guest
...wrote: >> Locking is always an issue in a virtualized environment as the virtual >> CPU that is waiting on a lock may get scheduled out and hence block >> any progress in lock acquisition even when the lock has been freed. >> >> One solution to this problem is to allow unfair lock in a >> para-virtualized environment. In this case, a new lock acquirer can >> come and steal the lock if the next-in-line CPU to get the lock is >> scheduled out. Unfair lock in a native environment is generally not a >> good idea as there is a possibility of lock star...
2014 May 07
0
[PATCH v10 10/19] qspinlock, x86: Allow unfair spinlock in a virtual guest
Locking is always an issue in a virtualized environment because of 2 different types of problems: 1) Lock holder preemption 2) Lock waiter preemption One solution to the lock waiter preemption problem is to allow unfair lock in a virtualized environment. In this case, a new lock acquirer can come and steal the lock if the next-in-line CPU to get the lock is scheduled out. A simple unfair lock is the test-and-set byte lock where a lock acquirer constantly spins on the lock word and attempts to grab it when the loc...
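To make the "test-and-set byte lock" concrete, here is a minimal self-contained illustration in plain C11 atomics; it shows the idea only, is not the kernel code, and all names in it are hypothetical:

#include <stdatomic.h>

/* One byte of lock state: 0 = free, 1 = held. */
typedef struct { atomic_uchar locked; } tas_lock_t;

static void tas_lock_init(tas_lock_t *l)
{
	atomic_init(&l->locked, 0);
}

/* Unfair acquire: every contender spins on the lock byte and races to
 * grab it the moment it reads as free; whoever wins the atomic exchange
 * owns the lock. There is no queue and no FIFO ordering. */
static void tas_lock(tas_lock_t *l)
{
	for (;;) {
		/* Test before the exchange to avoid write-contending the line. */
		if (atomic_load_explicit(&l->locked, memory_order_relaxed) == 0 &&
		    atomic_exchange_explicit(&l->locked, 1,
					     memory_order_acquire) == 0)
			return;
	}
}

static void tas_unlock(tas_lock_t *l)
{
	atomic_store_explicit(&l->locked, 0, memory_order_release);
}

Because there is no queue, a waiter that gets scheduled out delays nobody else; the price is that a heavily contended lock can starve an unlucky CPU, which is exactly the fairness concern argued over in these threads.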
2014 Jun 12
2
[PATCH v11 09/16] qspinlock, x86: Allow unfair spinlock in a virtual guest
...due to the use of a static key. However, uncontended lock-unlock > >>operations are really just a tiny percentage of a real workload. So > >>there should be no noticeable change in application performance. > >No, entirely unacceptable. > > > >>+#ifdef CONFIG_VIRT_UNFAIR_LOCKS > >>+/** > >>+ * queue_spin_trylock_unfair - try to acquire the queue spinlock unfairly > >>+ * @lock : Pointer to queue spinlock structure > >>+ * Return: 1 if lock acquired, 0 if failed > >>+ */ > >>+static __always_inline int queue_spin...
2014 Mar 13
2
[PATCH v6 05/11] pvqspinlock, x86: Allow unfair spinlock in a PV guest
On Wed, Mar 12, 2014 at 02:54:52PM -0400, Waiman Long wrote: > +static inline void arch_spin_lock(struct qspinlock *lock) > +{ > + if (static_key_false(&paravirt_unfairlocks_enabled)) > + queue_spin_lock_unfair(lock); > + else > + queue_spin_lock(lock); > +} So I would have expected something like: if (static_key_false(&paravirt_spinlock)) { while (!queue_spin_trylock(lock)) cpu_relax(); return; } At the top of queue_spin_lock_slowpat...
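Reconstructed for readability, the structure suggested in this reply appears to be roughly the sketch below. The snippet is cut off at "queue_spin_lock_slowpat...", so placing the trylock spin at the top of the slow path is an inference; the key and function names are taken from the quote, and the slow-path signature is abridged:

void queue_spin_lock_slowpath(struct qspinlock *lock /* , ... */)
{
	/*
	 * Sketch only, not the actual patch: in a preemptable (paravirt)
	 * guest, just spin on the ordinary trylock and never join the
	 * queue, so a preempted waiter cannot block the CPUs behind it.
	 */
	if (static_key_false(&paravirt_spinlock)) {
		while (!queue_spin_trylock(lock))
			cpu_relax();
		return;
	}

	/* ...normal queued (MCS-style) acquisition continues here... */
}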
2014 Mar 17
2
[PATCH v6 05/11] pvqspinlock, x86: Allow unfair spinlock in a PV guest
...ocking is always an issue in a virtualized environment as the virtual > >>CPU that is waiting on a lock may get scheduled out and hence block > >>any progress in lock acquisition even when the lock has been freed. > >> > >>One solution to this problem is to allow unfair lock in a > >>para-virtualized environment. In this case, a new lock acquirer can > >>come and steal the lock if the next-in-line CPU to get the lock is > >>scheduled out. Unfair lock in a native environment is generally not a > >>good idea as there is a possibil...
2014 Jun 12
0
[PATCH v11 09/16] qspinlock, x86: Allow unfair spinlock in a virtual guest
...1-2% >> mainly due to the use of a static key. However, uncontended lock-unlock >> operations are really just a tiny percentage of a real workload. So >> there should be no noticeable change in application performance. > No, entirely unacceptable. > >> +#ifdef CONFIG_VIRT_UNFAIR_LOCKS >> +/** >> + * queue_spin_trylock_unfair - try to acquire the queue spinlock unfairly >> + * @lock : Pointer to queue spinlock structure >> + * Return: 1 if lock acquired, 0 if failed >> + */ >> +static __always_inline int queue_spin_trylock_unfair(struct q...
2014 Mar 14
4
[PATCH v6 05/11] pvqspinlock, x86: Allow unfair spinlock in a PV guest
...at 04:05:19PM -0400, Waiman Long wrote: > On 03/13/2014 11:15 AM, Peter Zijlstra wrote: > >On Wed, Mar 12, 2014 at 02:54:52PM -0400, Waiman Long wrote: > >>+static inline void arch_spin_lock(struct qspinlock *lock) > >>+{ > >>+ if (static_key_false(&paravirt_unfairlocks_enabled)) > >>+ queue_spin_lock_unfair(lock); > >>+ else > >>+ queue_spin_lock(lock); > >>+} > >So I would have expected something like: > > > > if (static_key_false(&paravirt_spinlock)) { > > while (!queue_spin_trylock(lock))...
2014 May 30
0
[PATCH v11 09/16] qspinlock, x86: Allow unfair spinlock in a virtual guest
Locking is always an issue in a virtualized environment because of 2 different types of problems: 1) Lock holder preemption 2) Lock waiter preemption One solution to the lock waiter preemption problem is to allow unfair lock in a virtualized environment. In this case, a new lock acquirer can come and steal the lock if the next-in-line CPU to get the lock is scheduled out. A simple unfair queue spinlock can be implemented by allowing lock stealing in the fast path. The slowpath will also be modified to run a simpl...
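As a rough illustration of "lock stealing in the fast path", the lock entry point under such a configuration could look like the sketch below. It assumes the fallback is simply the normal queued path; the actual series also modifies the slow path itself, as the (truncated) description says, and queue_spin_trylock_unfair() here refers to the unconditional byte grab sketched earlier in these results:

static __always_inline void queue_spin_lock_unfair(struct qspinlock *lock)
{
	/* Fast path: steal the lock byte outright if it looks free. */
	if (likely(queue_spin_trylock_unfair(lock)))
		return;
	/* Otherwise fall back to the normal queued acquisition. */
	queue_spin_lock(lock);
}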
2014 Mar 12
0
[PATCH v6 05/11] pvqspinlock, x86: Allow unfair spinlock in a PV guest
Locking is always an issue in a virtualized environment as the virtual CPU that is waiting on a lock may get scheduled out and hence block any progress in lock acquisition even when the lock has been freed. One solution to this problem is to allow unfair lock in a para-virtualized environment. In this case, a new lock acquirer can come and steal the lock if the next-in-line CPU to get the lock is scheduled out. Unfair lock in a native environment is generally not a good idea as there is a possibility of lock starvation for a heavily contended lock....
2014 Jun 12
0
[PATCH v11 09/16] qspinlock, x86: Allow unfair spinlock in a virtual guest
...a static key. However, uncontended lock-unlock >>>> operations are really just a tiny percentage of a real workload. So >>>> there should be no noticeable change in application performance. >>> No, entirely unacceptable. >>> >>>> +#ifdef CONFIG_VIRT_UNFAIR_LOCKS >>>> +/** >>>> + * queue_spin_trylock_unfair - try to acquire the queue spinlock unfairly >>>> + * @lock : Pointer to queue spinlock structure >>>> + * Return: 1 if lock acquired, 0 if failed >>>> + */ >>>> +static __alwa...
2014 May 07
0
[PATCH v10 18/19] pvqspinlock, x86: Enable PV qspinlock PV for KVM
...d of the AIM7 benchmark on both ext4 and xfs RAM disks at 3000 users on a 3.15-rc1 based kernel. The "ebizzy -m" test was also run and its performance data were recorded. With two VMs running, the "idle=poll" kernel option was added to simulate a busy guest. The entry "unfair + PV qspinlock" below means that both the unfair lock and PV spinlock configuration options were turned on. AIM7 XFS Disk Test (no overcommit) kernel JPM Real Time Sys Time Usr Time ----- --- --------- -------- -------- PV...
2014 Feb 26
2
[PATCH RFC v5 4/8] pvqspinlock, x86: Allow unfair spinlock in a real PV environment
...-0500, Waiman Long wrote: > Locking is always an issue in a virtualized environment as the virtual > CPU that is waiting on a lock may get scheduled out and hence block > any progress in lock acquisition even when the lock has been freed. > > One solution to this problem is to allow unfair lock in a > para-virtualized environment. In this case, a new lock acquirer can > come and steal the lock if the next-in-line CPU to get the lock is > scheduled out. Unfair lock in a native environment is generally not a Hmm, how do you know if the 'next-in-line CPU' is scheduled...