2015 Feb 15
[PATCH V4] x86 spinlock: Fix memory corruption on completing completions
.... It is nop on x86, just to make this code
> more understandable for those (for me ;) who can never remember even the
> x86 rules.
>
Hope you meant it for add_stat. Yes, smp_mb__after_atomic() would be a
harmless barrier() on x86. I did not add this in V5 as you thought, but it
made me look at the slowpath_enter code, and I added an explicit barrier()
there :).
2015 Feb 13
[PATCH V4] x86 spinlock: Fix memory corruption on completing completions
Paravirt spinlock clears slowpath flag after doing unlock.
As explained by Linus currently it does:
prev = *lock;
add_smp(&lock->tickets.head, TICKET_LOCK_INC);
/* add_smp() is a full mb() */
if (unlikely(lock->tickets.tail & TICKET_SLOWPATH_FLAG))
        __ticket_unlock_slowpath(lock, prev);
which is