2014 Jun 17
5
[PATCH 03/11] qspinlock: Add pending bit
...pxchg(&lock->val, 0, _Q_LOCKED_VAL);
	if (likely(val == 0))
		return;
+
+	/* One more attempt - but if we fail mark it as pending. */
+	if (val == _Q_LOCKED_VAL) {
+		new = _Q_LOCKED_VAL | _Q_PENDING_VAL;
+
+		old = atomic_cmpxchg(&lock->val, val, new);
+		if (old == _Q_LOCKED_VAL)	/* YEEY! */
+			return;
+		val = old;
+	}
	queue_spin_lock_slowpath(lock, val);
}
and then the slowpath preserves most of the old logic path
(with the pending bit stuff)?
>
> Signed-off-by: Peter Zijlstra <peterz at infradead.org>
> ---
> include/asm-generic/qspinlock_types.h |...
2014 Jun 17
3
[PATCH 03/11] qspinlock: Add pending bit
...> 		return;
> >+
> >+	/* One more attempt - but if we fail mark it as pending. */
> >+	if (val == _Q_LOCKED_VAL) {
> >+		new = _Q_LOCKED_VAL | _Q_PENDING_VAL;
> >+
> >+		old = atomic_cmpxchg(&lock->val, val, new);
> >+		if (old == _Q_LOCKED_VAL)	/* YEEY! */
> >+			return;
>
> No, it can't leave like that. The unlock path will not clear the pending bit.
Err, you are right. It needs to go back in the slowpath.
> We are trying to make the fastpath as simple as possible as it may be
> inlined. The complexity of the queue spinlock is...
2014 Jun 18
0
[PATCH 03/11] qspinlock: Add pending bit
...7/06/2014 22:36, Konrad Rzeszutek Wilk wrote:
> +	/* One more attempt - but if we fail mark it as pending. */
> +	if (val == _Q_LOCKED_VAL) {
> +		new = _Q_LOCKED_VAL | _Q_PENDING_VAL;
> +
> +		old = atomic_cmpxchg(&lock->val, val, new);
> +		if (old == _Q_LOCKED_VAL)	/* YEEY! */
> +			return;
> +		val = old;
> +	}
Note that Peter's code is in a for(;;) loop:
+	for (;;) {
+		/*
+		 * If we observe any contention; queue.
+		 */
+		if (val & ~_Q_LOCKED_MASK)
+			goto queue;
+
+		new = _Q_LOCKED_VAL;
+		if (val == new)
+			new |= _Q_PENDING_VAL;
+
+		ol...
2014 Jun 17
0
[PATCH 03/11] qspinlock: Add pending bit
...; > >+	/* One more attempt - but if we fail mark it as pending. */
> > >+	if (val == _Q_LOCKED_VAL) {
> > >+		new = _Q_LOCKED_VAL | _Q_PENDING_VAL;
> > >+
> > >+		old = atomic_cmpxchg(&lock->val, val, new);
> > >+		if (old == _Q_LOCKED_VAL)	/* YEEY! */
> > >+			return;
> >
> > No, it can't leave like that. The unlock path will not clear the pending bit.
>
> Err, you are right. It needs to go back in the slowpath.
What I should have written is:
	if (old == 0)	/* YEEY */
		return;
As that would be the same thing as this...
2014 Jun 17
1
[PATCH 03/11] qspinlock: Add pending bit
...we fail mark it as pending. */
> >>>> + if (val == _Q_LOCKED_VAL) {
> >>>> + new = Q_LOCKED_VAL |_Q_PENDING_VAL;
> >>>> +
> >>>> + old = atomic_cmpxchg(&lock->val, val, new);
> >>>> + if (old == _Q_LOCKED_VAL) /* YEEY! */
> >>>> + return;
> >>> No, it can leave like that. The unlock path will not clear the pending bit.
> >> Err, you are right. It needs to go back in the slowpath.
> > What I should have wrote is:
> >
> > if (old == 0) /* YEEY */
> &g...
2014 Jun 17
0
[PATCH 03/11] qspinlock: Add pending bit
...; if (likely(val == 0))
> 		return;
> +
> +	/* One more attempt - but if we fail mark it as pending. */
> +	if (val == _Q_LOCKED_VAL) {
> +		new = _Q_LOCKED_VAL | _Q_PENDING_VAL;
> +
> +		old = atomic_cmpxchg(&lock->val, val, new);
> +		if (old == _Q_LOCKED_VAL)	/* YEEY! */
> +			return;
No, it can't leave like that. The unlock path will not clear the pending
bit. We are trying to make the fastpath as simple as possible as it may
be inlined. The complexity of the queue spinlock is in the slowpath.
Moreover, a cmpxchg immediately followed by another...
2014 Jun 15
28
[PATCH 00/11] qspinlock with paravirt support
Since Waiman seems incapable of doing simple things; here's my take on the
paravirt crap.
The first few patches are taken from Waiman's latest series, but the virt
support is completely new. Its primary aim is to not mess up the native code.
I've not stress tested it, but the virt and paravirt (kvm) cases boot on simple
smp guests. I've not done Xen, but the patch should be
2013 Dec 02
3
no-amd-iommu-perdev-intremap + no-intremap = BOOM with Xen 4.4 (no-intremap by itself OK).
...on
(XEN) I/O virtualisation disabled
A bit of Googling and I add in "no-amd-iommu-perdev-intremap,no-intremap"
That combination looks to blow up the box. If I just do:
'iommu=verbose,debug,no-intremap' it shows that IOMMU is
enabled (yeey)
(XEN) AMD-Vi: IOMMU 0 Enabled.
(XEN) I/O virtualisation enabled
and it boots!
But if I also add "no-amd-iommu-perdev-intremap" (in addition to
no-intremap) it blows up:
(XEN) ----[ Xen-4.4-unstable x86_64 debug=y Not tainted ]----...