Marcin Slusarz
2010-Jun-05 16:49 UTC
[Nouveau] [PATCH] kmmio/mmiotrace: fix double free of kmmio_fault_pages
After every iounmap, mmiotrace has to free kmmio_fault_pages, but
it can't do it directly, so it defers freeing via RCU.

It usually works, but when the mmiotraced code calls ioremap-iounmap
multiple times without sleeping in between (so RCU won't kick in and
start freeing), it can be given the same virtual address, so at
every iounmap mmiotrace will schedule the same pages for release.
Obviously it will explode on the second free.

Fix it by marking kmmio_fault_pages which are scheduled for
release and not adding them a second time.

Signed-off-by: Marcin Slusarz <marcin.slusarz at gmail.com>
Cc: Pekka Paalanen <pq at iki.fi>
Cc: Stuart Bennett <stuart at freedesktop.org>
---
 arch/x86/mm/kmmio.c |   16 +++++++++++++---
 1 files changed, 13 insertions(+), 3 deletions(-)

diff --git a/arch/x86/mm/kmmio.c b/arch/x86/mm/kmmio.c
index 5d0e67f..e5d5e2c 100644
--- a/arch/x86/mm/kmmio.c
+++ b/arch/x86/mm/kmmio.c
@@ -45,6 +45,8 @@ struct kmmio_fault_page {
 	 * Protected by kmmio_lock, when linked into kmmio_page_table.
 	 */
 	int count;
+
+	bool scheduled_for_release;
 };
 
 struct kmmio_delayed_release {
@@ -398,8 +400,11 @@ static void release_kmmio_fault_page(unsigned long page,
 	BUG_ON(f->count < 0);
 	if (!f->count) {
 		disarm_kmmio_fault_page(f);
-		f->release_next = *release_list;
-		*release_list = f;
+		if (!f->scheduled_for_release) {
+			f->release_next = *release_list;
+			*release_list = f;
+			f->scheduled_for_release = true;
+		}
 	}
 }
 
@@ -471,8 +476,10 @@ static void remove_kmmio_fault_pages(struct rcu_head *head)
 			prevp = &f->release_next;
 		} else {
 			*prevp = f->release_next;
+			f->release_next = NULL;
+			f->scheduled_for_release = false;
 		}
-		f = f->release_next;
+		f = *prevp;
 	}
 	spin_unlock_irqrestore(&kmmio_lock, flags);
 
@@ -510,6 +517,9 @@ void unregister_kmmio_probe(struct kmmio_probe *p)
 	kmmio_count--;
 	spin_unlock_irqrestore(&kmmio_lock, flags);
 
+	if (!release_list)
+		return;
+
 	drelease = kmalloc(sizeof(*drelease), GFP_ATOMIC);
 	if (!drelease) {
 		pr_crit("leaking kmmio_fault_page objects.\n");
-- 
1.7.1
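[To make the failure mode concrete, the following is a deliberately simplified, stand-alone user-space C sketch of the guard the patch adds; all names are hypothetical and this is not the kmmio implementation. Without the flag, the same node can end up queued for release twice and therefore freed twice.]

/*
 * Simplified sketch (hypothetical names, plain user-space C) of the
 * double-enqueue guard: pushing the same node onto a singly-linked
 * release list more than once would eventually free it more than once.
 * The "scheduled_for_release" flag turns the second push into a no-op.
 */
#include <stdbool.h>
#include <stdlib.h>

struct fault_page {
	struct fault_page *release_next;
	bool scheduled_for_release;
};

static void schedule_release(struct fault_page *f,
			     struct fault_page **release_list)
{
	if (f->scheduled_for_release)
		return;			/* already queued for deferred freeing */
	f->release_next = *release_list;
	*release_list = f;
	f->scheduled_for_release = true;
}

int main(void)
{
	struct fault_page *release_list = NULL;
	struct fault_page *f = calloc(1, sizeof(*f));

	if (!f)
		return 1;

	/* iounmap()/ioremap() handing back the same address lands here twice */
	schedule_release(f, &release_list);
	schedule_release(f, &release_list);	/* no-op thanks to the flag */

	/* deferred release pass: every queued node is freed exactly once */
	while (release_list) {
		struct fault_page *next = release_list->release_next;

		free(release_list);
		release_list = next;
	}
	return 0;
}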
Pekka Paalanen
2010-Jun-05 17:29 UTC
[Nouveau] [PATCH] kmmio/mmiotrace: fix double free of kmmio_fault_pages
On Sat, 5 Jun 2010 18:49:42 +0200
Marcin Slusarz <marcin.slusarz at gmail.com> wrote:

> After every iounmap, mmiotrace has to free kmmio_fault_pages, but
> it can't do it directly, so it defers freeing via RCU.
>
> It usually works, but when the mmiotraced code calls ioremap-iounmap
> multiple times without sleeping in between (so RCU won't kick in and
> start freeing), it can be given the same virtual address, so at
> every iounmap mmiotrace will schedule the same pages for release.
> Obviously it will explode on the second free.
>
> Fix it by marking kmmio_fault_pages which are scheduled for
> release and not adding them a second time.
>
> Signed-off-by: Marcin Slusarz <marcin.slusarz at gmail.com>
> Cc: Pekka Paalanen <pq at iki.fi>
> Cc: Stuart Bennett <stuart at freedesktop.org>

Excellent work! Unfortunately I cannot review this patch right now,
as I am sick. The description sounds good, though, and I have no
objections.

Thank you very much!

> ---
>  arch/x86/mm/kmmio.c |   16 +++++++++++++---
>  1 files changed, 13 insertions(+), 3 deletions(-)
>
> diff --git a/arch/x86/mm/kmmio.c b/arch/x86/mm/kmmio.c
> index 5d0e67f..e5d5e2c 100644
> --- a/arch/x86/mm/kmmio.c
> +++ b/arch/x86/mm/kmmio.c
> @@ -45,6 +45,8 @@ struct kmmio_fault_page {
>  	 * Protected by kmmio_lock, when linked into kmmio_page_table.
>  	 */
>  	int count;
> +
> +	bool scheduled_for_release;
>  };
> 
>  struct kmmio_delayed_release {
> @@ -398,8 +400,11 @@ static void release_kmmio_fault_page(unsigned long page,
>  	BUG_ON(f->count < 0);
>  	if (!f->count) {
>  		disarm_kmmio_fault_page(f);
> -		f->release_next = *release_list;
> -		*release_list = f;
> +		if (!f->scheduled_for_release) {
> +			f->release_next = *release_list;
> +			*release_list = f;
> +			f->scheduled_for_release = true;
> +		}
>  	}
>  }
> 
> @@ -471,8 +476,10 @@ static void remove_kmmio_fault_pages(struct rcu_head *head)
>  			prevp = &f->release_next;
>  		} else {
>  			*prevp = f->release_next;
> +			f->release_next = NULL;
> +			f->scheduled_for_release = false;
>  		}
> -		f = f->release_next;
> +		f = *prevp;
>  	}
>  	spin_unlock_irqrestore(&kmmio_lock, flags);
> 
> @@ -510,6 +517,9 @@ void unregister_kmmio_probe(struct kmmio_probe *p)
>  	kmmio_count--;
>  	spin_unlock_irqrestore(&kmmio_lock, flags);
> 
> +	if (!release_list)
> +		return;
> +
>  	drelease = kmalloc(sizeof(*drelease), GFP_ATOMIC);
>  	if (!drelease) {
>  		pr_crit("leaking kmmio_fault_page objects.\n");
> -- 
> 1.7.1
> 

-- 
Pekka Paalanen
http://www.iki.fi/pq/
Marcin Slusarz
2010-Jun-05 19:33 UTC
[Nouveau] [PATCH] kmmio/mmiotrace: fix double free of kmmio_fault_pages
On Sat, Jun 05, 2010 at 06:49:42PM +0200, Marcin Slusarz wrote:
> After every iounmap, mmiotrace has to free kmmio_fault_pages, but it
> can't do it directly, so it defers freeing via RCU.
> 
> It usually works, but when the mmiotraced code calls ioremap-iounmap
> multiple times without sleeping in between (so RCU won't kick in and
> start freeing), it can be given the same virtual address, so at
> every iounmap mmiotrace will schedule the same pages for release.
> Obviously it will explode on the second free.
> 
> Fix it by marking kmmio_fault_pages which are scheduled for release
> and not adding them a second time.
> 

The attached patch for the mmiotrace testing module makes it possible
to reproduce the bug reliably. It can be folded into the main patch.

---
diff --git a/arch/x86/mm/testmmiotrace.c b/arch/x86/mm/testmmiotrace.c
index 8565d94..5f0937b 100644
--- a/arch/x86/mm/testmmiotrace.c
+++ b/arch/x86/mm/testmmiotrace.c
@@ -90,6 +90,19 @@ static void do_test(unsigned long size)
 	iounmap(p);
 }
 
+static void do_test2(void)
+{
+	void __iomem *p;
+	int i;
+
+	for (i = 0; i < 10; ++i) {
+		p = ioremap_nocache(mmio_address, 4096);
+		if (p)
+			iounmap(p);
+	}
+	synchronize_rcu(); /* will freeing work? */
+}
+
 static int __init init(void)
 {
 	unsigned long size = (read_far) ? (8 << 20) : (16 << 10);
@@ -104,6 +117,7 @@ static int __init init(void)
 		   "and writing 16 kB of rubbish in there.\n",
 		   size >> 10, mmio_address);
 	do_test(size);
+	do_test2();
 	pr_info("All done.\n");
 	return 0;
 }
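[For background, the deferred freeing that do_test2() exercises follows the standard call_rcu() pattern. The sketch below is a made-up module with hypothetical names (demo_obj, demo_release), not the kmmio code; it illustrates that pattern and why the trailing synchronize_rcu() in the test only gives the queued callbacks a chance to run.]

/*
 * Minimal sketch of the call_rcu() deferred-free pattern that kmmio
 * relies on.  All identifiers here are made up for illustration.
 */
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/slab.h>
#include <linux/rcupdate.h>

struct demo_obj {
	int payload;
	struct rcu_head rcu;
};

/* Invoked by RCU once a grace period has elapsed. */
static void demo_release(struct rcu_head *head)
{
	struct demo_obj *obj = container_of(head, struct demo_obj, rcu);

	kfree(obj);
}

static int __init demo_init(void)
{
	struct demo_obj *obj = kzalloc(sizeof(*obj), GFP_KERNEL);

	if (!obj)
		return -ENOMEM;

	obj->payload = 42;
	/* Defer the free until pre-existing readers are guaranteed done. */
	call_rcu(&obj->rcu, demo_release);

	/*
	 * Wait for a grace period to elapse.  Like the synchronize_rcu()
	 * in do_test2(), this lets the queued callbacks become runnable;
	 * rcu_barrier() would be needed to actually wait for them.
	 */
	synchronize_rcu();
	return 0;
}

static void __exit demo_exit(void)
{
	/* Ensure our callback has finished before the module text goes away. */
	rcu_barrier();
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");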