Andrea Arcangeli
2012-Jun-07 21:00 UTC
Re: [ 08/82] mm: pmd_read_atomic: fix 32bit PAE pmd walk vs pmd_populate SMP race condition
Hi,

this should avoid the cmpxchg8b (to make Xen happy) but without reintroducing the race condition. It's actually going to be faster too, although it's conceptually more complicated: the pmd high/low halves may be inconsistent at times, but at those times we declare the pmd unstable and ignore it anyway, so it's ok.

NOTE: in theory I could also drop the high part when THP=y thanks to the barrier() in the caller (and the barrier is needed for the generic version anyway):

static inline pmd_t pmd_read_atomic(pmd_t *pmdp)
{
	pmdval_t ret;
	u32 *tmp = (u32 *)pmdp;

	ret = (pmdval_t) (*tmp);
+#ifndef CONFIG_TRANSPARENT_HUGEPAGE
	if (ret) {
		/*
		 * If the low part is null, we must not read the high part
		 * or we can end up with a partial pmd.
		 */
		smp_rmb();
		ret |= ((pmdval_t)*(tmp + 1)) << 32;
	}
+#endif

	return (pmd_t) { ret };
}

But it's not worth the extra complexity. It looks cleaner if we deal with "good" pmds when they're later found pointing to a pte (even if we discard them and force pte_offset to re-read the *pmd).

Andrea Arcangeli (1):
  thp: avoid atomic64_read in pmd_read_atomic for 32bit PAE

 arch/x86/include/asm/pgtable-3level.h |   30 +++++++++++++++++-------------
 include/asm-generic/pgtable.h         |   10 ++++++++++
 2 files changed, 27 insertions(+), 13 deletions(-)
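For reference, the generic caller the barrier() above refers to looks roughly like the sketch below, written in the style of pmd_none_or_trans_huge_or_clear_bad() in include/asm-generic/pgtable.h. This is only an approximation for illustration, not the exact hunk in the patch:

/*
 * Sketch only: returns 1 if the caller should skip this pmd (none,
 * trans huge, or bad), 0 if it is safe to walk the pte level.
 */
static inline int pmd_none_or_trans_huge_or_clear_bad(pmd_t *pmd)
{
	/*
	 * Snapshot the pmd. On 32bit PAE the snapshot may be "unstable"
	 * (low/high read non-atomically), but then it only has to be
	 * good enough for the none/trans_huge checks below.
	 */
	pmd_t pmdval = pmd_read_atomic(pmd);

	/*
	 * The barrier keeps the checks below working on the local
	 * snapshot in a register (or on the stack) instead of letting
	 * the compiler re-read *pmd, which can change under us when
	 * THP is enabled.
	 */
#ifdef CONFIG_TRANSPARENT_HUGEPAGE
	barrier();
#endif
	if (pmd_none(pmdval) || pmd_trans_huge(pmdval))
		return 1;
	if (unlikely(pmd_bad(pmdval))) {
		pmd_clear_bad(pmd);
		return 1;
	}
	return 0;
}

The point is that every check runs on the one snapshot taken by pmd_read_atomic(): an unstable snapshot can only make the caller skip the pmd, never walk a bogus pte table.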