search for: kmmio

Displaying 20 results from an estimated 34 matches for "kmmio".

2008 Mar 10
7
[Bug 14941] New: ioremap leak in DRM
http://bugs.freedesktop.org/show_bug.cgi?id=14941 Summary: ioremap leak in DRM Product: xorg Version: unspecified Platform: x86-64 (AMD64) OS/Version: Linux (All) Status: NEW Severity: minor Priority: medium Component: Driver/nouveau AssignedTo: nouveau at lists.freedesktop.org ReportedBy:
2020 Oct 20
1
[PATCH] x86/mm/kmmio: correctly handle kzalloc return
Replace the -1 return value with a proper error code. Signed-off-by: Mugilraj Dhavachelvan <dmugil2000 at gmail.com> --- arch/x86/mm/kmmio.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/arch/x86/mm/kmmio.c b/arch/x86/mm/kmmio.c index be020a7bc414..15430520c232 100644 --- a/arch/x86/mm/kmmio.c +++ b/arch/x86/mm/kmmio.c @@ -386,7 +386,7 @@ static int add_kmmio_fault_page(unsigned long addr) f = kzalloc(sizeof(...
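
For reference, a minimal sketch of the change this entry describes, assuming the surrounding add_kmmio_fault_page() code and a GFP_ATOMIC allocation (the flag is an assumption here; only the kzalloc call is visible in the excerpt):

    /* arch/x86/mm/kmmio.c, add_kmmio_fault_page() -- sketch of the fix */
    f = kzalloc(sizeof(*f), GFP_ATOMIC);
    if (!f)
            return -ENOMEM;  /* previously "return -1;", which callers cannot map to an errno */

Returning a real errno lets callers propagate a meaningful error instead of a bare -1.
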
2010 Jun 05
2
[PATCH] kmmio/mmiotrace: fix double free of kmmio_fault_pages
After every iounmap, mmiotrace has to free kmmio_fault_pages, but it can't do that directly, so it defers the freeing via RCU. It usually works, but when mmiotraced code calls ioremap-iounmap multiple times without sleeping in between (so RCU won't kick in and start freeing) it can be given the same virtual address, so at every iounmap mmiotrace w...
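
As a rough illustration of the deferred-release pattern described above (struct, field, and function names here are illustrative, not the actual kmmio ones):

    #include <linux/kernel.h>
    #include <linux/rcupdate.h>
    #include <linux/slab.h>

    struct fault_page {
            unsigned long addr;
            bool scheduled_for_release;     /* guards against queueing the same page twice */
            struct rcu_head rcu;
    };

    static void release_fault_page(struct rcu_head *head)
    {
            kfree(container_of(head, struct fault_page, rcu));
    }

    static void defer_release(struct fault_page *f)
    {
            if (f->scheduled_for_release)   /* already queued by an earlier iounmap */
                    return;
            f->scheduled_for_release = true;
            call_rcu(&f->rcu, release_fault_page);
    }

The double free occurs when the same fault page is queued for release a second time before the RCU grace period has freed it; marking pages already scheduled for release, as sketched, is one way to avoid that.
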
2017 Dec 08
0
[PATCH] x86/mm/kmmio: Fix returned errno code
add_kmmio_fault_page() uses -1 instead of the -ENOMEM macro to signal that the kmmio_fault_page allocation failed. Smatch tool warning: arch/x86/mm/kmmio.c:389 add_kmmio_fault_page() warn: returning -1 instead of -ENOMEM is sloppy Signed-off-by: Vasyl Gomonovych <gomonovych at gmail.com> --- arch/x86...
2016 Mar 03
1
RFC: [PATCH] x86/kmmio: fix mmiotrace for hugepages
Because Linux might use bigger pages than the 4K pages to handle those mmio ioremaps, the kmmio code shouldn't rely on the page id as it currently does. Using the memory address instead of the page id lets us look up how big the page is and what its base address is, so that we won't get a page fault within the same page twice anymore. I don't know if I got this right th...
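
A minimal sketch of the approach, assuming x86's lookup_address() and the page_level_* helpers; error handling omitted and not the literal patch:

    unsigned int level;
    pte_t *pte = lookup_address(addr, &level);

    if (pte) {
            /* derive the real page geometry instead of assuming 4K pages */
            unsigned long page_base = addr & page_level_mask(level);
            unsigned long page_size = page_level_size(level);
            /* arm/disarm and look up fault pages by page_base, sized page_size */
    }

With the base and size known, a second fault inside the same (possibly huge) page resolves to the same fault-page entry instead of being treated as a new 4K page.
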
2020 Feb 05
0
[PATCH] x86/mm/kmmio: Use this_cpu_ptr() instead get_cpu_var() for kmmio_ctx
Both call sites that access kmmio_ctx do so with interrupts disabled. There is no need to use get_cpu_var(), which additionally disables preemption. Use this_cpu_ptr() to access the kmmio_ctx variable of the current CPU. Signed-off-by: Sebastian Andrzej Siewior <bigeasy at linutronix.de> --- arch/x86/mm/kmmio.c...
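
The change boils down to the following before/after sketch (kmmio_ctx as named in the patch, struct kmmio_context as defined in kmmio.c; surrounding code elided):

    static DEFINE_PER_CPU(struct kmmio_context, kmmio_ctx);

    /* before: get_cpu_var() also disables preemption, which is redundant
     * here because both call sites already run with interrupts disabled */
    struct kmmio_context *ctx = &get_cpu_var(kmmio_ctx);
    /* ... use ctx ... */
    put_cpu_var(kmmio_ctx);

    /* after: plain pointer to this CPU's instance, no preempt toggling */
    struct kmmio_context *ctx = this_cpu_ptr(&kmmio_ctx);
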
2020 Oct 20
0
[PATCH] x86/mm/kmmio: correctly handle kzalloc return
On Tue, 20 Oct 2020 14:13:44 +0530 Mugilraj Dhavachelvan <dmugil2000 at gmail.com> wrote: > Replacing return value -1 to error code > > Signed-off-by: Mugilraj Dhavachelvan <dmugil2000 at gmail.com> > --- > arch/x86/mm/kmmio.c | 2 +- > 1 file changed, 1 insertion(+), 1 deletion(-) > > diff --git a/arch/x86/mm/kmmio.c b/arch/x86/mm/kmmio.c > index be020a7bc414..15430520c232 100644 > --- a/arch/x86/mm/kmmio.c > +++ b/arch/x86/mm/kmmio.c > @@ -386,7 +386,7 @@ static int add_kmmio_fault_page(unsigned...
2016 Jul 12
0
[added to the 4.1 stable tree] x86/mm/kmmio: Fix mmiotrace for hugepages
...;nouveau at karolherbst.de> This patch has been added to the 4.1 stable tree. If you have any objections, please let us know. =============== [ Upstream commit cfa52c0cfa4d727aa3e457bf29aeff296c528a08 ] Because Linux might use bigger pages than the 4K pages to handle those mmio ioremaps, the kmmio code shouldn't rely on the page id as it currently does. Using the memory address instead of the page id lets us look up how big the page is and what its base address is, so that we won't get a page fault within the same page twice anymore. Tested-by: Pierre Moreau <pierre.morrow at fr...
2016 Jul 12
0
[added to the 3.18 stable tree] x86/mm/kmmio: Fix mmiotrace for hugepages
...nouveau at karolherbst.de> This patch has been added to the 3.18 stable tree. If you have any objections, please let us know. =============== [ Upstream commit cfa52c0cfa4d727aa3e457bf29aeff296c528a08 ] Because Linux might use bigger pages than the 4K pages to handle those mmio ioremaps, the kmmio code shouldn't rely on the page id as it currently does. Using the memory address instead of the page id lets us look up how big the page is and what its base address is, so that we won't get a page fault within the same page twice anymore. Tested-by: Pierre Moreau <pierre.morrow at fr...
2017 Nov 27
0
[PATCH] x86/mm/kmmio: Fix mmiotrace for page unaligned addresses
If something calls ioremap with an address not aligned to PAGE_SIZE, the returned address might not be aligned either. This led to a probe registered on exactly the returned address, but the entire page was armed for mmiotracing. On calling iounmap, the address passed to unregister_kmmio_probe was PAGE_SIZE aligned by the caller, leading to a complete freeze of the machine. We should always page-align addresses while (un)registering mappings, because the mmiotracer works on top of pages, not mappings. We still keep track of the probes based on their real addresses and lengths thoug...
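
A rough sketch of the intended alignment, not the literal patch (p stands for the kmmio_probe being (un)registered):

    /* arm/disarm whole pages: round the probe's address down to a page
     * boundary and extend the length to cover the unaligned head */
    unsigned long addr = p->addr & PAGE_MASK;
    const unsigned long size_lim = p->len + (p->addr & ~PAGE_MASK);

    /* register_kmmio_probe() and unregister_kmmio_probe() then walk
     * [addr, addr + size_lim) page by page, while p->addr and p->len
     * keep the probe's real, possibly unaligned, values */
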
2016 May 03
0
[PATCH 4.5 160/200] x86/mm/kmmio: Fix mmiotrace for hugepages
4.5-stable review patch. If anyone has any objections, please let me know. ------------------ From: Karol Herbst <nouveau at karolherbst.de> commit cfa52c0cfa4d727aa3e457bf29aeff296c528a08 upstream. Because Linux might use bigger pages than the 4K pages to handle those mmio ioremaps, the kmmio code shouldn't rely on the page id as it currently does. Using the memory address instead of the page id lets us look up how big the page is and what its base address is, so that we won't get a page fault within the same page twice anymore. Tested-by: Pierre Moreau <pierre.morrow at fr...
2016 May 03
0
[PATCH 4.4 137/163] x86/mm/kmmio: Fix mmiotrace for hugepages
4.4-stable review patch. If anyone has any objections, please let me know. ------------------ From: Karol Herbst <nouveau at karolherbst.de> commit cfa52c0cfa4d727aa3e457bf29aeff296c528a08 upstream. Because Linux might use bigger pages than the 4K pages to handle those mmio ioremaps, the kmmio code shouldn't rely on the page id as it currently does. Using the memory address instead of the page id lets us look up how big the page is and what its base address is, so that we won't get a page fault within the same page twice anymore. Tested-by: Pierre Moreau <pierre.morrow at fr...
2018 Jan 28
0
[PATCH AUTOSEL for 4.14 095/100] x86/mm/kmmio: Fix mmiotrace for page unaligned addresses
...19acc687 ] If something calls ioremap() with an address not aligned to PAGE_SIZE, the returned address might not be aligned either. This led to a probe registered on exactly the returned address, but the entire page was armed for mmiotracing. On calling iounmap() the address passed to unregister_kmmio_probe() was PAGE_SIZE aligned by the caller, leading to a complete freeze of the machine. We should always page-align addresses while (un)registering mappings, because the mmiotracer works on top of pages, not mappings. We still keep track of the probes based on their real addresses and lengths tho...
2018 Jan 28
0
[PATCH AUTOSEL for 4.4 34/36] x86/mm/kmmio: Fix mmiotrace for page unaligned addresses
...19acc687 ] If something calls ioremap() with an address not aligned to PAGE_SIZE, the returned address might not be aligned either. This led to a probe registered on exactly the returned address, but the entire page was armed for mmiotracing. On calling iounmap() the address passed to unregister_kmmio_probe() was PAGE_SIZE aligned by the caller, leading to a complete freeze of the machine. We should always page-align addresses while (un)registering mappings, because the mmiotracer works on top of pages, not mappings. We still keep track of the probes based on their real addresses and lengths tho...
2018 Jan 28
0
[PATCH AUTOSEL for 4.9 46/49] x86/mm/kmmio: Fix mmiotrace for page unaligned addresses
...19acc687 ] If something calls ioremap() with an address not aligned to PAGE_SIZE, the returned address might not be aligned either. This led to a probe registered on exactly the returned address, but the entire page was armed for mmiotracing. On calling iounmap() the address passed to unregister_kmmio_probe() was PAGE_SIZE aligned by the caller, leading to a complete freeze of the machine. We should always page-align addresses while (un)registering mappings, because the mmiotracer works on top of pages, not mappings. We still keep track of the probes based on their real addresses and lengths tho...
2018 Jan 28
0
[PATCH AUTOSEL for 3.18 23/25] x86/mm/kmmio: Fix mmiotrace for page unaligned addresses
...19acc687 ] If something calls ioremap() with an address not aligned to PAGE_SIZE, the returned address might not be aligned either. This led to a probe registered on exactly the returned address, but the entire page was armed for mmiotracing. On calling iounmap() the address passed to unregister_kmmio_probe() was PAGE_SIZE aligned by the caller, leading to a complete freeze of the machine. We should always page-align addresses while (un)registering mappings, because the mmiotracer works on top of pages, not mappings. We still keep track of the probes based on their real addresses and lengths tho...
2010 Jun 13
1
[PATCHv2] kmmio/mmiotrace: fix double free of kmmio_fault_pages
After every iounmap, mmiotrace has to free kmmio_fault_pages, but it can't do that directly, so it defers the freeing via RCU. It usually works, but when mmiotraced code calls ioremap-iounmap multiple times without sleeping in between (so RCU won't kick in and start freeing) it can be given the same virtual address, so at every iounmap mmiotrace w...
2018 Feb 23
0
[PATCH 4.14 148/159] x86/mm/kmmio: Fix mmiotrace for page unaligned addresses
...19acc687 ] If something calls ioremap() with an address not aligned to PAGE_SIZE, the returned address might not be aligned either. This led to a probe registered on exactly the returned address, but the entire page was armed for mmiotracing. On calling iounmap() the address passed to unregister_kmmio_probe() was PAGE_SIZE aligned by the caller, leading to a complete freeze of the machine. We should always page-align addresses while (un)registering mappings, because the mmiotracer works on top of pages, not mappings. We still keep track of the probes based on their real addresses and lengths tho...
2018 Feb 23
0
[PATCH 3.18 54/58] x86/mm/kmmio: Fix mmiotrace for page unaligned addresses
...19acc687 ] If something calls ioremap() with an address not aligned to PAGE_SIZE, the returned address might not be aligned either. This led to a probe registered on exactly the returned address, but the entire page was armed for mmiotracing. On calling iounmap() the address passed to unregister_kmmio_probe() was PAGE_SIZE aligned by the caller, leading to a complete freeze of the machine. We should always page-align addresses while (un)registering mappings, because the mmiotracer works on top of pages, not mappings. We still keep track of the probes based on their real addresses and lengths tho...
2018 Feb 23
0
[PATCH 4.4 059/193] x86/mm/kmmio: Fix mmiotrace for page unaligned addresses
...19acc687 ] If something calls ioremap() with an address not aligned to PAGE_SIZE, the returned address might not be aligned either. This led to a probe registered on exactly the returned address, but the entire page was armed for mmiotracing. On calling iounmap() the address passed to unregister_kmmio_probe() was PAGE_SIZE aligned by the caller, leading to a complete freeze of the machine. We should always page-align addresses while (un)registering mappings, because the mmiotracer works on top of pages, not mappings. We still keep track of the probes based on their real addresses and lengths tho...