2014 Feb 27
[PATCH RFC v5 7/8] pvqspinlock, x86: Add qspinlock para-virtualization support
		...&slock->wait, value >> 8);
		if (ACCESS_ONCE(slock->lock)) {
			/* Lock still held: register for a wakeup kick */
			... call lock_spinning hook ...
		}
	}

	/*
	 * Set the lock bit & clear the halted+waiting bits
	 */
	if (cmpxchg(&slock->lock_wait, value,
		    _QSPINLOCK_LOCKED) == value)
		return -1;	/* Got the lock */
	/* Lost the race: clear only the halted bit and resume spinning */
	__atomic_and(&slock->lock_wait, ~QSPINLOCK_HALTED);
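Note that __atomic_and() is pseudocode here rather than an existing kernel
helper. A minimal sketch of the operation it stands for, using the GCC
__atomic builtins (the u16 width of lock_wait is an assumption inferred
from the cmpxchg above):

	/* Hypothetical helper: atomically AND @mask into @p.  Relaxed
	 * ordering is shown for simplicity; the real slowpath may need
	 * stronger ordering around this clear.
	 */
	static inline void __atomic_and(u16 *p, u16 mask)
	{
		(void)__atomic_fetch_and(p, mask, __ATOMIC_RELAXED);
	}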
The lock_spinning/unlock_lock code can probably be much simpler, because
you do not need to keep a list of all spinning locks: unlock_lock can
just use the recorded CPU number to wake up the right CPU (a sketch
follows this message).
Paolo
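
A minimal sketch of that simpler unlock path, assuming the halting CPU
records its number in the lock word before sleeping. The waiter_cpu field
and the pv_kick_cpu() helper are illustrative names, not taken from
Waiman's patch:

	struct qspinlock {
		u8  locked;	/* lock byte, as in the snippet above */
		int waiter_cpu;	/* written by the halt path, -1 if none */
	};

	static void unlock_lock(struct qspinlock *lock)
	{
		int cpu = ACCESS_ONCE(lock->waiter_cpu);

		/* Release the lock; a real implementation must also close
		 * the race with a waiter that halts after the read above.
		 */
		smp_store_release(&lock->locked, 0);
		if (cpu >= 0)
			pv_kick_cpu(cpu);  /* hypercall to wake that VCPU */
	}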
2014 Feb 27
[PATCH RFC v5 7/8] pvqspinlock, x86: Add qspinlock para-virtualization support
On 02/27/2014 08:15 PM, Paolo Bonzini wrote:
[...]
>> But neither of the VCPUs being kicked here is halted -- they're either
>> running or runnable (descheduled by the hypervisor).
>
> /me actually looks at Waiman's code...
>
> Right, this is really different from pvticketlocks, where the *unlock*
> primitive wakes up a sleeping VCPU. It is more similar to PLE