Michael S. Tsirkin
2009-Aug-27 16:07 UTC
[PATCHv5 2/3] mm: reduce atomic use on use_mm fast path
When the mm being switched to matches the active mm, we don't need to
increment and then drop the mm count.  Making that conditional reduces
contention on that cache line on SMP systems.

Acked-by: Andrea Arcangeli <aarcange at redhat.com>
Signed-off-by: Michael S. Tsirkin <mst at redhat.com>
---
 mm/mmu_context.c |    9 ++++++---
 1 files changed, 6 insertions(+), 3 deletions(-)

diff --git a/mm/mmu_context.c b/mm/mmu_context.c
index 9989c2f..0777654 100644
--- a/mm/mmu_context.c
+++ b/mm/mmu_context.c
@@ -27,13 +27,16 @@ void use_mm(struct mm_struct *mm)
 	task_lock(tsk);
 	active_mm = tsk->active_mm;
-	atomic_inc(&mm->mm_count);
+	if (active_mm != mm) {
+		atomic_inc(&mm->mm_count);
+		tsk->active_mm = mm;
+	}
 	tsk->mm = mm;
-	tsk->active_mm = mm;
 	switch_mm(active_mm, mm, tsk);
 	task_unlock(tsk);
-	mmdrop(active_mm);
+	if (active_mm != mm)
+		mmdrop(active_mm);
 }
 EXPORT_SYMBOL_GPL(use_mm);
-- 
1.6.2.5
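[Editor's note: the pattern in this patch generalizes beyond the kernel: a
reference count only needs an atomic bump when ownership actually changes,
so the matching inc/dec pair can be skipped entirely on the fast path.
Below is a minimal user-space sketch of the same idea using C11 atomics;
the names object, current_obj, and switch_to are hypothetical stand-ins
for mm_struct, tsk->active_mm, and use_mm, not kernel code.]

#include <stdatomic.h>
#include <stddef.h>

/* Hypothetical stand-in for struct mm_struct and its mm_count. */
struct object {
	atomic_int refcount;
};

/* Analogue of mmdrop(): drop a reference (freeing at zero is omitted). */
static void object_release(struct object *obj)
{
	atomic_fetch_sub(&obj->refcount, 1);
}

/* Per-"task" pointer standing in for tsk->active_mm. */
static struct object *current_obj;

static void switch_to(struct object *next)
{
	struct object *prev = current_obj;

	/*
	 * Fast path: switching to the object we already hold.  Skipping
	 * the paired inc/dec avoids bouncing the refcount cache line
	 * between CPUs, which is the contention the patch targets.
	 */
	if (prev != next) {
		atomic_fetch_add(&next->refcount, 1);
		current_obj = next;
	}

	/* ... the actual switch work would happen here ... */

	if (prev != next)
		object_release(prev);
}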