Michael S. Tsirkin
2009-Sep-17 07:22 UTC
[PATCHv3 2/2] mm: reduce atomic use on use_mm fast path
When the mm being switched to matches the active mm, we don't need
to increment and then drop the mm count. In a simple benchmark this
happens in about 50% of the time. Making the increment conditional
reduces contention on that cache line on SMP systems.
Acked-by: Andrea Arcangeli <aarcange@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
---
mm/mmu_context.c | 9 ++++++---
1 files changed, 6 insertions(+), 3 deletions(-)
diff --git a/mm/mmu_context.c b/mm/mmu_context.c
index fd473b5..ded9081 100644
--- a/mm/mmu_context.c
+++ b/mm/mmu_context.c
@@ -26,13 +26,16 @@ void use_mm(struct mm_struct *mm)
 
 	task_lock(tsk);
 	active_mm = tsk->active_mm;
-	atomic_inc(&mm->mm_count);
+	if (active_mm != mm) {
+		atomic_inc(&mm->mm_count);
+		tsk->active_mm = mm;
+	}
 	tsk->mm = mm;
-	tsk->active_mm = mm;
 	switch_mm(active_mm, mm, tsk);
 	task_unlock(tsk);
 
-	mmdrop(active_mm);
+	if (active_mm != mm)
+		mmdrop(active_mm);
 }
 
 /*
--
1.6.2.5
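
For reference, the same idea as a self-contained userspace sketch (not
kernel code; struct ctx, ctx_get, ctx_put and switch_ctx are made-up
stand-ins for mm_count, atomic_inc, mmdrop and switch_mm):

/* Userspace illustration only; names are hypothetical, not kernel API. */
#include <stdatomic.h>

struct ctx {
	atomic_int count;	/* shared refcount: the contended cache line */
};

static struct ctx *current_ctx;

static void ctx_get(struct ctx *c) { atomic_fetch_add(&c->count, 1); }
static void ctx_put(struct ctx *c) { atomic_fetch_sub(&c->count, 1); }

/* Switch to @c, skipping the get/put pair when @c is already current. */
static void switch_ctx(struct ctx *c)
{
	struct ctx *old = current_ctx;

	if (old != c) {
		ctx_get(c);	/* take a reference for the new owner */
		current_ctx = c;
	}
	/* ... the actual context switch would happen here ... */
	if (old != c)
		ctx_put(old);	/* drop the old reference last */
}

As in the patch, the old reference is dropped only after the switch
itself, so the outgoing context cannot go away while it is still in use.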