Thomas Gleixner
2020-Nov-03 09:27 UTC
[patch V3 00/37] mm/highmem: Preemptible variant of kmap_atomic & friends
Following up on the discussion in

  https://lore.kernel.org/r/20200914204209.256266093 at linutronix.de

and the second version of this

  https://lore.kernel.org/r/20201029221806.189523375 at linutronix.de

this series provides a preemptible variant of kmap_atomic & related
interfaces. This is achieved by:

 - Removing the RT dependency from migrate_disable/enable()

 - Consolidating all kmap atomic implementations in generic code,
   including a useful version of CONFIG_DEBUG_HIGHMEM which provides
   guard pages between the individual maps instead of just increasing
   the map size.

 - Switching from per-CPU storage of the kmap index to per-task storage

 - Adding a pteval array to the per-task storage which contains the
   ptevals of the currently active temporary kmaps

 - Adding context switch code which checks whether the outgoing or the
   incoming task has active temporary kmaps. If so, the outgoing task's
   kmaps are removed and the incoming task's kmaps are restored.

 - Adding new interfaces k[un]map_local*() which do not disable
   preemption and can be called from any context (except NMI). Contrary
   to kmap(), which provides preemptible and "persistent" mappings,
   these interfaces are meant to replace the temporary mappings
   provided by kmap_atomic*() today.

This allows getting rid of conditional mapping choices and allows having
preemptible short term mappings on 64bit, which today are enforced to be
non-preemptible due to the highmem constraints. It clearly puts overhead
on the highmem users, but highmem is slow anyway.

This is not a wholesale conversion which makes kmap_atomic magically
preemptible, because there might be usage sites which rely on the
implicit preempt disable. So this needs to be done on a case by case
basis and the call sites converted to kmap_local().

Note that this is only tested on X86 and completely untested on all
other architectures (at least it compiles, except on csky which does not
compile with the newest cross tools from kernel.org independent of this
change).

The lot is available from

  git://git.kernel.org/pub/scm/linux/kernel/git/tglx/devel.git highmem

It is based on Peter Zijlstra's migrate disable branch, which is close
to being merged into the tip tree but still not finalized:

  git://git.kernel.org/pub/scm/linux/kernel/git/peterz/queue.git sched/migrate-disable

The series has the following parts:

  Patches  1 - 22: Consolidation work which is independent of the
                   scheduler changes
                   79 files changed, 595 insertions(+), 1296 deletions(-)

  Patch       23:  Needs to be folded back into the sched/migrate-disable
                   branch

  Patches 24 - 26: The preemptible kmap_local() implementation
                   9 files changed, 283 insertions(+), 57 deletions(-)

  Patches 27 - 37: Cleanup of the less common kmap/io_map_atomic users
                   19 files changed, 114 insertions(+), 256 deletions(-)

Vs. merging this pile: if everyone agrees, I'd like to take the first
part (1-22) through tip so that the preemptible implementation can be
sorted in tip once the scheduler prerequisites are there. The initial
cleanups (27-37) might have to wait if there are conflicts vs. the
drm/gpu tree. We'll see.

From what I can tell, kmap_atomic() can be removed altogether and
completely replaced by kmap_local(). Most of the usage sites are trivial
and just do memcpy(), memset() or trivial operations on the temporarily
mapped page. The interesting ones are those which do either conditional
stuff or have copy_.*_user_inatomic() inside. As shown with the crash
and drm/gpu cleanups this allows to simplify the code quite a bit.
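To illustrate the conversion pattern the last paragraph talks about,
here is a minimal before/after sketch. It is not taken from the series;
kmap_local_page()/kunmap_local() are assumed as the concrete spelling of
the k[un]map_local*() interfaces introduced by the later patches:

/* Minimal sketch, assuming kmap_local_page()/kunmap_local() as the
 * concrete interface names; not code from this series. */
static void copy_one_page_atomic(struct page *dst, struct page *src)
{
	void *vdst = kmap_atomic(dst);	/* disables pagefaults and preemption */
	void *vsrc = kmap_atomic(src);

	copy_page(vdst, vsrc);
	kunmap_atomic(vsrc);		/* unmap in reverse (LIFO) order */
	kunmap_atomic(vdst);
}

static void copy_one_page_local(struct page *dst, struct page *src)
{
	void *vdst = kmap_local_page(dst);	/* mapping survives preemption */
	void *vsrc = kmap_local_page(src);

	copy_page(vdst, vsrc);
	kunmap_local(vsrc);
	kunmap_local(vdst);
}

On a 64bit kernel both variants boil down to page_address(), but the
local variant no longer forces the section to be non-preemptible.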
Changes vs. V2:

  - Remove the migrate disable from kmap_local and only issue that when
    there is an actual highmem mapping (Linus)

  - Reorder the series so the consolidation is upfront

  - Get rid of kmap_types.h and the associated cruft

  - Fix up documentation and add function documentation for kmap_*

  - Split out the internal implementation into a separate header

  - More cleanups - removal of unused functions

  - Replace a few of the less frequently used kmap_atomic and
    io_mapping_map_atomic variants and remove those interfaces

Thanks,

	tglx
---
 arch/alpha/include/asm/kmap_types.h | 15
 arch/arc/include/asm/kmap_types.h | 14
 arch/arm/include/asm/kmap_types.h | 10
 arch/arm/mm/highmem.c | 121 -------
 arch/ia64/include/asm/kmap_types.h | 13
 arch/microblaze/mm/highmem.c | 78 ----
 arch/mips/include/asm/kmap_types.h | 13
 arch/nds32/mm/highmem.c | 48 --
 arch/parisc/include/asm/kmap_types.h | 13
 arch/powerpc/include/asm/kmap_types.h | 13
 arch/powerpc/mm/highmem.c | 67 ----
 arch/sh/include/asm/kmap_types.h | 15
 arch/sparc/include/asm/kmap_types.h | 11
 arch/sparc/mm/highmem.c | 115 -------
 arch/um/include/asm/kmap_types.h | 13
 arch/x86/include/asm/kmap_types.h | 13
 b/Documentation/driver-api/io-mapping.rst | 92 ++---
 b/arch/arc/Kconfig | 1
 b/arch/arc/include/asm/highmem.h | 26 +
 b/arch/arc/mm/highmem.c | 54 ---
 b/arch/arm/Kconfig | 1
 b/arch/arm/include/asm/fixmap.h | 4
 b/arch/arm/include/asm/highmem.h | 33 +-
 b/arch/arm/mm/Makefile | 1
 b/arch/arm/mm/cache-feroceon-l2.c | 6
 b/arch/arm/mm/cache-xsc3l2.c | 4
 b/arch/csky/Kconfig | 1
 b/arch/csky/include/asm/fixmap.h | 4
 b/arch/csky/include/asm/highmem.h | 6
 b/arch/csky/mm/highmem.c | 75 ----
 b/arch/microblaze/Kconfig | 1
 b/arch/microblaze/include/asm/fixmap.h | 4
 b/arch/microblaze/include/asm/highmem.h | 6
 b/arch/microblaze/mm/Makefile | 1
 b/arch/microblaze/mm/init.c | 6
 b/arch/mips/Kconfig | 1
 b/arch/mips/include/asm/fixmap.h | 4
 b/arch/mips/include/asm/highmem.h | 6
 b/arch/mips/kernel/crash_dump.c | 42 --
 b/arch/mips/mm/highmem.c | 77 ----
 b/arch/mips/mm/init.c | 4
 b/arch/nds32/Kconfig.cpu | 1
 b/arch/nds32/include/asm/fixmap.h | 4
 b/arch/nds32/include/asm/highmem.h | 22 -
 b/arch/nds32/mm/Makefile | 1
 b/arch/openrisc/mm/init.c | 1
 b/arch/openrisc/mm/ioremap.c | 1
 b/arch/powerpc/Kconfig | 1
 b/arch/powerpc/include/asm/fixmap.h | 4
 b/arch/powerpc/include/asm/highmem.h | 7
 b/arch/powerpc/mm/Makefile | 1
 b/arch/powerpc/mm/mem.c | 7
 b/arch/sh/include/asm/fixmap.h | 8
 b/arch/sh/mm/init.c | 8
 b/arch/sparc/Kconfig | 1
 b/arch/sparc/include/asm/highmem.h | 8
 b/arch/sparc/include/asm/vaddrs.h | 4
 b/arch/sparc/mm/Makefile | 3
 b/arch/sparc/mm/srmmu.c | 2
 b/arch/um/include/asm/fixmap.h | 1
 b/arch/x86/Kconfig | 3
 b/arch/x86/include/asm/fixmap.h | 5
 b/arch/x86/include/asm/highmem.h | 13
 b/arch/x86/include/asm/iomap.h | 13
 b/arch/x86/include/asm/paravirt_types.h | 1
 b/arch/x86/kernel/crash_dump_32.c | 48 --
 b/arch/x86/mm/highmem_32.c | 59 ---
 b/arch/x86/mm/init_32.c | 15
 b/arch/x86/mm/iomap_32.c | 57 ---
 b/arch/xtensa/Kconfig | 1
 b/arch/xtensa/include/asm/fixmap.h | 4
 b/arch/xtensa/include/asm/highmem.h | 12
 b/arch/xtensa/mm/highmem.c | 46 --
 b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c | 7
 b/drivers/gpu/drm/i915/i915_gem.c | 40 --
 b/drivers/gpu/drm/i915/selftests/i915_gem.c | 4
 b/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c | 8
 b/drivers/gpu/drm/nouveau/nvkm/subdev/devinit/fbmem.h | 8
 b/drivers/gpu/drm/qxl/qxl_image.c | 18 -
 b/drivers/gpu/drm/qxl/qxl_ioctl.c | 27 -
 b/drivers/gpu/drm/qxl/qxl_object.c | 12
 b/drivers/gpu/drm/qxl/qxl_object.h | 4
 b/drivers/gpu/drm/qxl/qxl_release.c | 4
 b/drivers/gpu/drm/ttm/ttm_bo_util.c | 20 -
 b/drivers/gpu/drm/vmwgfx/vmwgfx_blit.c | 30 -
 b/fs/aio.c | 1
 b/fs/btrfs/ctree.h | 1
 b/include/asm-generic/Kbuild | 2
 b/include/asm-generic/kmap_size.h | 12
 b/include/linux/highmem-internal.h | 210 ++++++++++++
 b/include/linux/highmem.h | 294 ++++++------------
 b/include/linux/io-mapping.h | 28 -
 b/include/linux/kernel.h | 21 -
 b/include/linux/preempt.h | 38 --
 b/include/linux/sched.h | 11
 b/kernel/entry/common.c | 2
 b/kernel/fork.c | 1
 b/kernel/sched/core.c | 63 +++
 b/kernel/sched/sched.h | 4
 b/lib/smp_processor_id.c | 2
 b/mm/Kconfig | 3
 b/mm/highmem.c | 255 ++++++++++++++-
 include/asm-generic/kmap_types.h | 11
 103 files changed, 959 insertions(+), 1576 deletions(-)
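As a rough illustration of the per-task handling described in the cover
letter above (all names here are hypothetical; the real code is in the
kmap_local patches 24-26, which are not quoted on this page):

/* Hypothetical sketch of the context switch handling; all names made up. */
static unsigned long slot_vaddr(int i);	/* placeholder: fixmap address of slot i */
static pte_t *slot_ptep(int i);		/* placeholder: pte pointer of slot i */

struct kmap_ctrl_sketch {
	int	idx;			/* number of live temporary kmaps  */
	pte_t	pteval[KM_MAX_IDX];	/* their pte values, in push order */
};

static void kmap_local_switch_sketch(struct kmap_ctrl_sketch *prev,
				     struct kmap_ctrl_sketch *next)
{
	int i;

	/* Tear down the outgoing task's temporary kmaps ... */
	for (i = 0; i < prev->idx; i++)
		pte_clear(&init_mm, slot_vaddr(i), slot_ptep(i));

	/* ... and reinstall the incoming task's ones from its pteval
	 * array, so pointers it obtained from kmap_local*() stay valid. */
	for (i = 0; i < next->idx; i++)
		set_pte_at(&init_mm, slot_vaddr(i), slot_ptep(i),
			   next->pteval[i]);
}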
Thomas Gleixner
2020-Nov-03 09:27 UTC
[patch V3 01/37] mm/highmem: Un-EXPORT __kmap_atomic_idx()
Nothing in modules can use that.

Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
Reviewed-by: Christoph Hellwig <hch at lst.de>
Cc: Andrew Morton <akpm at linux-foundation.org>
Cc: linux-mm at kvack.org
---
 mm/highmem.c | 2 --
 1 file changed, 2 deletions(-)

--- a/mm/highmem.c
+++ b/mm/highmem.c
@@ -108,8 +108,6 @@ static inline wait_queue_head_t *get_pkm
 atomic_long_t _totalhigh_pages __read_mostly;
 EXPORT_SYMBOL(_totalhigh_pages);
 
-EXPORT_PER_CPU_SYMBOL(__kmap_atomic_idx);
-
 unsigned int nr_free_highpages (void)
 {
 	struct zone *zone;
Nothing uses totalhigh_pages_dec() and totalhigh_pages_set().

Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
---
V3: New patch
---
 include/linux/highmem.h | 10 ----------
 1 file changed, 10 deletions(-)

--- a/include/linux/highmem.h
+++ b/include/linux/highmem.h
@@ -104,21 +104,11 @@ static inline void totalhigh_pages_inc(v
 	atomic_long_inc(&_totalhigh_pages);
 }
 
-static inline void totalhigh_pages_dec(void)
-{
-	atomic_long_dec(&_totalhigh_pages);
-}
-
 static inline void totalhigh_pages_add(long count)
 {
 	atomic_long_add(count, &_totalhigh_pages);
 }
 
-static inline void totalhigh_pages_set(long val)
-{
-	atomic_long_set(&_totalhigh_pages, val);
-}
-
 void kmap_flush_unused(void);
 
 struct page *kmap_to_page(void *addr);
Thomas Gleixner
2020-Nov-03 09:27 UTC
[patch V3 03/37] fs: Remove asm/kmap_types.h includes
Historical leftovers from the time where kmap() had fixed slots.

Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
Cc: Alexander Viro <viro at zeniv.linux.org.uk>
Cc: Benjamin LaHaise <bcrl at kvack.org>
Cc: linux-fsdevel at vger.kernel.org
Cc: linux-aio at kvack.org
Cc: Chris Mason <clm at fb.com>
Cc: Josef Bacik <josef at toxicpanda.com>
Cc: David Sterba <dsterba at suse.com>
Cc: linux-btrfs at vger.kernel.org
---
 fs/aio.c         | 1 -
 fs/btrfs/ctree.h | 1 -
 2 files changed, 2 deletions(-)

--- a/fs/aio.c
+++ b/fs/aio.c
@@ -43,7 +43,6 @@
 #include <linux/mount.h>
 #include <linux/pseudo_fs.h>
 
-#include <asm/kmap_types.h>
 #include <linux/uaccess.h>
 #include <linux/nospec.h>
 
--- a/fs/btrfs/ctree.h
+++ b/fs/btrfs/ctree.h
@@ -17,7 +17,6 @@
 #include <linux/wait.h>
 #include <linux/slab.h>
 #include <trace/events/btrfs.h>
-#include <asm/kmap_types.h>
 #include <asm/unaligned.h>
 #include <linux/pagemap.h>
 #include <linux/btrfs.h>
Thomas Gleixner
2020-Nov-03 09:27 UTC
[patch V3 04/37] sh/highmem: Remove all traces of unused cruft
For whatever reasons SH has highmem bits all over the place but does not enable it via Kconfig. Remove the bitrot. Signed-off-by: Thomas Gleixner <tglx at linutronix.de> --- arch/sh/include/asm/fixmap.h | 8 -------- arch/sh/include/asm/kmap_types.h | 15 --------------- arch/sh/mm/init.c | 8 -------- 3 files changed, 31 deletions(-) --- a/arch/sh/include/asm/fixmap.h +++ b/arch/sh/include/asm/fixmap.h @@ -13,9 +13,6 @@ #include <linux/kernel.h> #include <linux/threads.h> #include <asm/page.h> -#ifdef CONFIG_HIGHMEM -#include <asm/kmap_types.h> -#endif /* * Here we define all the compile-time 'special' virtual @@ -53,11 +50,6 @@ enum fixed_addresses { FIX_CMAP_BEGIN, FIX_CMAP_END = FIX_CMAP_BEGIN + (FIX_N_COLOURS * NR_CPUS) - 1, -#ifdef CONFIG_HIGHMEM - FIX_KMAP_BEGIN, /* reserved pte's for temporary kernel mappings */ - FIX_KMAP_END = FIX_KMAP_BEGIN + (KM_TYPE_NR * NR_CPUS) - 1, -#endif - #ifdef CONFIG_IOREMAP_FIXED /* * FIX_IOREMAP entries are useful for mapping physical address --- a/arch/sh/include/asm/kmap_types.h +++ /dev/null @@ -1,15 +0,0 @@ -/* SPDX-License-Identifier: GPL-2.0 */ -#ifndef __SH_KMAP_TYPES_H -#define __SH_KMAP_TYPES_H - -/* Dummy header just to define km_type. */ - -#ifdef CONFIG_DEBUG_HIGHMEM -#define __WITH_KM_FENCE -#endif - -#include <asm-generic/kmap_types.h> - -#undef __WITH_KM_FENCE - -#endif --- a/arch/sh/mm/init.c +++ b/arch/sh/mm/init.c @@ -362,9 +362,6 @@ void __init mem_init(void) mem_init_print_info(NULL); pr_info("virtual kernel memory layout:\n" " fixmap : 0x%08lx - 0x%08lx (%4ld kB)\n" -#ifdef CONFIG_HIGHMEM - " pkmap : 0x%08lx - 0x%08lx (%4ld kB)\n" -#endif " vmalloc : 0x%08lx - 0x%08lx (%4ld MB)\n" " lowmem : 0x%08lx - 0x%08lx (%4ld MB) (cached)\n" #ifdef CONFIG_UNCACHED_MAPPING @@ -376,11 +373,6 @@ void __init mem_init(void) FIXADDR_START, FIXADDR_TOP, (FIXADDR_TOP - FIXADDR_START) >> 10, -#ifdef CONFIG_HIGHMEM - PKMAP_BASE, PKMAP_BASE+LAST_PKMAP*PAGE_SIZE, - (LAST_PKMAP*PAGE_SIZE) >> 10, -#endif - (unsigned long)VMALLOC_START, VMALLOC_END, (VMALLOC_END - VMALLOC_START) >> 20,
kmap_types.h is a misnomer because the old atomic MAP based array does
not exist anymore and the whole indirection of architectures including
kmap_types.h is inconsistent and does not allow providing guard page
debugging for this misfeature.

Add a common header file which defines the mapping stack size for all
architectures. Will be used when converting architectures over to a
generic kmap_local/atomic implementation.

The array size is chosen with the following constraints in mind:

  - The deepest nest level in one context is 3 according to code
    inspection.

  - The worst case nesting for the upcoming preemptible version would be:

      2 maps in task context and a fault inside
      2 maps in the fault handler
      3 maps in softirq
      2 maps in interrupt

So a total of 16 is sufficient and probably overestimated.

Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
---
V3: New patch
---
 include/asm-generic/Kbuild      |  1 +
 include/asm-generic/kmap_size.h | 12 ++++++++++++
 2 files changed, 13 insertions(+)

--- a/include/asm-generic/Kbuild
+++ b/include/asm-generic/Kbuild
@@ -31,6 +31,7 @@ mandatory-y += irq_regs.h
 mandatory-y += irq_work.h
 mandatory-y += kdebug.h
 mandatory-y += kmap_types.h
+mandatory-y += kmap_size.h
 mandatory-y += kprobes.h
 mandatory-y += linkage.h
 mandatory-y += local.h
--- /dev/null
+++ b/include/asm-generic/kmap_size.h
@@ -0,0 +1,12 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _ASM_GENERIC_KMAP_SIZE_H
+#define _ASM_GENERIC_KMAP_SIZE_H
+
+/* For debug this provides guard pages between the maps */
+#ifdef CONFIG_DEBUG_HIGHMEM
+# define KM_MAX_IDX	33
+#else
+# define KM_MAX_IDX	16
+#endif
+
+#endif
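To make the sizing concrete, here is an illustrative worst-case budget;
the slot numbering is hypothetical and only mirrors the list in the
changelog above:

/*
 * Worst case nesting sketch (illustrative only):
 *
 *   task context:   2 temporary maps   -> slots 0-1
 *   fault handler:  2 temporary maps   -> slots 2-3
 *   softirq:        3 temporary maps   -> slots 4-6
 *   hardirq:        2 temporary maps   -> slots 7-8
 *
 * That is 9 live maps in the worst case, so KM_MAX_IDX = 16 leaves
 * headroom, and 33 covers the DEBUG_HIGHMEM case where every second
 * slot is left unmapped as a guard (see the DEBUG_HIGHMEM patch later
 * in the series).
 */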
Thomas Gleixner
2020-Nov-03 09:27 UTC
[patch V3 06/37] highmem: Provide generic variant of kmap_atomic*
The kmap_atomic* interfaces in all architectures are pretty much the same except for post map operations (flush) and pre- and post unmap operations. Provide a generic variant for that. Signed-off-by: Thomas Gleixner <tglx at linutronix.de> Cc: Andrew Morton <akpm at linux-foundation.org> Cc: linux-mm at kvack.org --- V3: Do not reuse the kmap_atomic_idx pile and use kmap_size.h right away V2: Address review comments from Christoph (style and EXPORT variant) --- include/linux/highmem.h | 82 ++++++++++++++++++++++----- mm/Kconfig | 3 + mm/highmem.c | 144 +++++++++++++++++++++++++++++++++++++++++++++++- 3 files changed, 211 insertions(+), 18 deletions(-) --- a/include/linux/highmem.h +++ b/include/linux/highmem.h @@ -31,9 +31,16 @@ static inline void invalidate_kernel_vma #include <asm/kmap_types.h> +/* + * Outside of CONFIG_HIGHMEM to support X86 32bit iomap_atomic() cruft. + */ +#ifdef CONFIG_KMAP_LOCAL +void *__kmap_local_pfn_prot(unsigned long pfn, pgprot_t prot); +void *__kmap_local_page_prot(struct page *page, pgprot_t prot); +void kunmap_local_indexed(void *vaddr); +#endif + #ifdef CONFIG_HIGHMEM -extern void *kmap_atomic_high_prot(struct page *page, pgprot_t prot); -extern void kunmap_atomic_high(void *kvaddr); #include <asm/highmem.h> #ifndef ARCH_HAS_KMAP_FLUSH_TLB @@ -81,6 +88,11 @@ static inline void kunmap(struct page *p * be used in IRQ contexts, so in some (very limited) cases we need * it. */ + +#ifndef CONFIG_KMAP_LOCAL +void *kmap_atomic_high_prot(struct page *page, pgprot_t prot); +void kunmap_atomic_high(void *kvaddr); + static inline void *kmap_atomic_prot(struct page *page, pgprot_t prot) { preempt_disable(); @@ -89,7 +101,38 @@ static inline void *kmap_atomic_prot(str return page_address(page); return kmap_atomic_high_prot(page, prot); } -#define kmap_atomic(page) kmap_atomic_prot(page, kmap_prot) + +static inline void __kunmap_atomic(void *vaddr) +{ + kunmap_atomic_high(vaddr); +} +#else /* !CONFIG_KMAP_LOCAL */ + +static inline void *kmap_atomic_prot(struct page *page, pgprot_t prot) +{ + preempt_disable(); + pagefault_disable(); + return __kmap_local_page_prot(page, prot); +} + +static inline void *kmap_atomic_pfn(unsigned long pfn) +{ + preempt_disable(); + pagefault_disable(); + return __kmap_local_pfn_prot(pfn, kmap_prot); +} + +static inline void __kunmap_atomic(void *addr) +{ + kunmap_local_indexed(addr); +} + +#endif /* CONFIG_KMAP_LOCAL */ + +static inline void *kmap_atomic(struct page *page) +{ + return kmap_atomic_prot(page, kmap_prot); +} /* declarations for linux/mm/highmem.c */ unsigned int nr_free_highpages(void); @@ -147,25 +190,33 @@ static inline void *kmap_atomic(struct p pagefault_disable(); return page_address(page); } -#define kmap_atomic_prot(page, prot) kmap_atomic(page) -static inline void kunmap_atomic_high(void *addr) +static inline void *kmap_atomic_prot(struct page *page, pgprot_t prot) +{ + return kmap_atomic(page); +} + +static inline void *kmap_atomic_pfn(unsigned long pfn) +{ + return kmap_atomic(pfn_to_page(pfn)); +} + +static inline void __kunmap_atomic(void *addr) { /* * Mostly nothing to do in the CONFIG_HIGHMEM=n case as kunmap_atomic() - * handles re-enabling faults + preemption + * handles re-enabling faults and preemption */ #ifdef ARCH_HAS_FLUSH_ON_KUNMAP kunmap_flush_on_unmap(addr); #endif } -#define kmap_atomic_pfn(pfn) kmap_atomic(pfn_to_page(pfn)) - #define kmap_flush_unused() do {} while(0) #endif /* CONFIG_HIGHMEM */ +#if !defined(CONFIG_KMAP_LOCAL) #if defined(CONFIG_HIGHMEM) || defined(CONFIG_X86_32) 
DECLARE_PER_CPU(int, __kmap_atomic_idx); @@ -196,22 +247,21 @@ static inline void kmap_atomic_idx_pop(v __this_cpu_dec(__kmap_atomic_idx); #endif } - +#endif #endif /* * Prevent people trying to call kunmap_atomic() as if it were kunmap() * kunmap_atomic() should get the return value of kmap_atomic, not the page. */ -#define kunmap_atomic(addr) \ -do { \ - BUILD_BUG_ON(__same_type((addr), struct page *)); \ - kunmap_atomic_high(addr); \ - pagefault_enable(); \ - preempt_enable(); \ +#define kunmap_atomic(__addr) \ +do { \ + BUILD_BUG_ON(__same_type((__addr), struct page *)); \ + __kunmap_atomic(__addr); \ + pagefault_enable(); \ + preempt_enable(); \ } while (0) - /* when CONFIG_HIGHMEM is not set these will be plain clear/copy_page */ #ifndef clear_user_highpage static inline void clear_user_highpage(struct page *page, unsigned long vaddr) --- a/mm/Kconfig +++ b/mm/Kconfig @@ -872,4 +872,7 @@ config ARCH_HAS_HUGEPD config MAPPING_DIRTY_HELPERS bool +config KMAP_LOCAL + bool + endmenu --- a/mm/highmem.c +++ b/mm/highmem.c @@ -31,9 +31,11 @@ #include <asm/tlbflush.h> #include <linux/vmalloc.h> +#ifndef CONFIG_KMAP_LOCAL #if defined(CONFIG_HIGHMEM) || defined(CONFIG_X86_32) DEFINE_PER_CPU(int, __kmap_atomic_idx); #endif +#endif /* * Virtual_count is not a pure "count". @@ -365,9 +367,147 @@ void kunmap_high(struct page *page) if (need_wakeup) wake_up(pkmap_map_wait); } - EXPORT_SYMBOL(kunmap_high); -#endif /* CONFIG_HIGHMEM */ +#endif /* CONFIG_HIGHMEM */ + +#ifdef CONFIG_KMAP_LOCAL + +#include <asm/kmap_size.h> + +static DEFINE_PER_CPU(int, __kmap_local_idx); + +static inline int kmap_local_idx_push(void) +{ + int idx = __this_cpu_inc_return(__kmap_local_idx) - 1; + + WARN_ON_ONCE(in_irq() && !irqs_disabled()); + BUG_ON(idx >= KM_MAX_IDX); + return idx; +} + +static inline int kmap_local_idx(void) +{ + return __this_cpu_read(__kmap_local_idx) - 1; +} + +static inline void kmap_local_idx_pop(void) +{ + int idx = __this_cpu_dec_return(__kmap_local_idx); + + BUG_ON(idx < 0); +} + +#ifndef arch_kmap_local_post_map +# define arch_kmap_local_post_map(vaddr, pteval) do { } while (0) +#endif +#ifndef arch_kmap_local_pre_unmap +# define arch_kmap_local_pre_unmap(vaddr) do { } while (0) +#endif + +#ifndef arch_kmap_local_post_unmap +# define arch_kmap_local_post_unmap(vaddr) do { } while (0) +#endif + +#ifndef arch_kmap_local_map_idx +#define arch_kmap_local_map_idx(idx, pfn) kmap_local_calc_idx(idx) +#endif + +#ifndef arch_kmap_local_unmap_idx +#define arch_kmap_local_unmap_idx(idx, vaddr) kmap_local_calc_idx(idx) +#endif + +#ifndef arch_kmap_local_high_get +static inline void *arch_kmap_local_high_get(struct page *page) +{ + return NULL; +} +#endif + +/* Unmap a local mapping which was obtained by kmap_high_get() */ +static inline void kmap_high_unmap_local(unsigned long vaddr) +{ +#ifdef ARCH_NEEDS_KMAP_HIGH_GET + if (vaddr >= PKMAP_ADDR(0) && vaddr < PKMAP_ADDR(LAST_PKMAP)) + kunmap_high(pte_page(pkmap_page_table[PKMAP_NR(vaddr)])); +#endif +} + +static inline int kmap_local_calc_idx(int idx) +{ + return idx + KM_MAX_IDX * smp_processor_id(); +} + +static pte_t *__kmap_pte; + +static pte_t *kmap_get_pte(void) +{ + if (!__kmap_pte) + __kmap_pte = virt_to_kpte(__fix_to_virt(FIX_KMAP_BEGIN)); + return __kmap_pte; +} + +void *__kmap_local_pfn_prot(unsigned long pfn, pgprot_t prot) +{ + pte_t pteval, *kmap_pte = kmap_get_pte(); + unsigned long vaddr; + int idx; + + preempt_disable(); + idx = arch_kmap_local_map_idx(kmap_local_idx_push(), pfn); + vaddr = __fix_to_virt(FIX_KMAP_BEGIN + idx); + 
BUG_ON(!pte_none(*(kmap_pte - idx))); + pteval = pfn_pte(pfn, prot); + set_pte_at(&init_mm, vaddr, kmap_pte - idx, pteval); + arch_kmap_local_post_map(vaddr, pteval); + preempt_enable(); + + return (void *)vaddr; +} +EXPORT_SYMBOL_GPL(__kmap_local_pfn_prot); + +void *__kmap_local_page_prot(struct page *page, pgprot_t prot) +{ + void *kmap; + + if (!PageHighMem(page)) + return page_address(page); + + /* Try kmap_high_get() if architecture has it enabled */ + kmap = arch_kmap_local_high_get(page); + if (kmap) + return kmap; + + return __kmap_local_pfn_prot(page_to_pfn(page), prot); +} +EXPORT_SYMBOL(__kmap_local_page_prot); + +void kunmap_local_indexed(void *vaddr) +{ + unsigned long addr = (unsigned long) vaddr & PAGE_MASK; + pte_t *kmap_pte = kmap_get_pte(); + int idx; + + if (addr < __fix_to_virt(FIX_KMAP_END) || + addr > __fix_to_virt(FIX_KMAP_BEGIN)) { + WARN_ON_ONCE(addr < PAGE_OFFSET); + + /* Handle mappings which were obtained by kmap_high_get() */ + kmap_high_unmap_local(addr); + return; + } + + preempt_disable(); + idx = arch_kmap_local_unmap_idx(kmap_local_idx(), addr); + WARN_ON_ONCE(addr != __fix_to_virt(FIX_KMAP_BEGIN + idx)); + + arch_kmap_local_pre_unmap(addr); + pte_clear(&init_mm, addr, kmap_pte - idx); + arch_kmap_local_post_unmap(addr); + kmap_local_idx_pop(); + preempt_enable(); +} +EXPORT_SYMBOL(kunmap_local_indexed); +#endif #if defined(HASHED_PAGE_VIRTUAL)
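As the #ifndef defaults above show, an architecture converted in the
later patches only has to select KMAP_LOCAL, provide FIX_KMAP_BEGIN /
FIX_KMAP_END fixmap slots sized by KM_MAX_IDX, and override whichever
arch_kmap_local_* hooks it needs. A hypothetical fragment, purely as a
sketch of the pattern (the TLB helper name is a placeholder, not a real
function):

/* Hypothetical <asm/highmem.h> fragment for a converted architecture;
 * my_arch_flush_tlb_kernel_page() is a placeholder, not a real helper. */
#define arch_kmap_local_post_map(vaddr, pteval)	\
	my_arch_flush_tlb_kernel_page(vaddr)

#define arch_kmap_local_post_unmap(vaddr)	\
	my_arch_flush_tlb_kernel_page(vaddr)

/* Hooks which are not overridden (pre_unmap, map_idx, high_get, ...)
 * fall back to the no-op/default versions provided by the generic code. */

The corresponding Kconfig change is a one-liner: select KMAP_LOCAL under
the architecture's HIGHMEM entry, as the architecture patches later in
the series do.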
Thomas Gleixner
2020-Nov-03 09:27 UTC
[patch V3 07/37] highmem: Make DEBUG_HIGHMEM functional
For some obscure reason, when CONFIG_DEBUG_HIGHMEM is enabled the stack
depth is increased from 20 to 41. But the only thing DEBUG_HIGHMEM does
is to enable a few BUG_ON()'s in the mapping code.

That's a leftover from the historical mapping code which had fixed
entries for various purposes. DEBUG_HIGHMEM inserted guard mappings
between the map types. But that all got ditched when kmap_atomic()
switched to a stack based map management. The WITH_KM_FENCE magic
survived without being functional; all it does today is increase the
stack depth.

Add a working implementation to the generic kmap_local* implementation.

Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
---
V3: New patch
---
 mm/highmem.c | 14 ++++++++++++--
 1 file changed, 12 insertions(+), 2 deletions(-)

--- a/mm/highmem.c
+++ b/mm/highmem.c
@@ -374,9 +374,19 @@ EXPORT_SYMBOL(kunmap_high);
 
 static DEFINE_PER_CPU(int, __kmap_local_idx);
 
+/*
+ * With DEBUG_HIGHMEM the stack depth is doubled and every second
+ * slot is unused which acts as a guard page
+ */
+#ifdef CONFIG_DEBUG_HIGHMEM
+# define KM_INCR	2
+#else
+# define KM_INCR	1
+#endif
+
 static inline int kmap_local_idx_push(void)
 {
-	int idx = __this_cpu_inc_return(__kmap_local_idx) - 1;
+	int idx = __this_cpu_add_return(__kmap_local_idx, KM_INCR) - 1;
 
 	WARN_ON_ONCE(in_irq() && !irqs_disabled());
 	BUG_ON(idx >= KM_MAX_IDX);
@@ -390,7 +400,7 @@ static inline int kmap_local_idx(void)
 
 static inline void kmap_local_idx_pop(void)
 {
-	int idx = __this_cpu_dec_return(__kmap_local_idx);
+	int idx = __this_cpu_sub_return(__kmap_local_idx, KM_INCR);
 
 	BUG_ON(idx < 0);
 }
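The effect of KM_INCR on the per-CPU index stack, as a sketch (slot
numbers are illustrative, derived from the arithmetic in the patch):

/*
 * Illustrative slot usage with the guard scheme above:
 *
 *   CONFIG_DEBUG_HIGHMEM=n (KM_INCR = 1):  push, push, push -> slots 0, 1, 2
 *   CONFIG_DEBUG_HIGHMEM=y (KM_INCR = 2):  push, push, push -> slots 1, 3, 5
 *
 * In the debug case the even slots are never mapped, so an access that
 * overruns a mapped page lands in an unmapped guard slot and faults
 * instead of silently scribbling over a neighbouring map.
 */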
Thomas Gleixner
2020-Nov-03 09:27 UTC
[patch V3 08/37] x86/mm/highmem: Use generic kmap atomic implementation
Convert X86 to the generic kmap atomic implementation and make the iomap_atomic() naming convention consistent while at it. Signed-off-by: Thomas Gleixner <tglx at linutronix.de> Cc: x86 at kernel.org --- V3: Remove the kmap_types cruft --- arch/x86/Kconfig | 3 + arch/x86/include/asm/fixmap.h | 5 +- arch/x86/include/asm/highmem.h | 13 +++++-- arch/x86/include/asm/iomap.h | 18 +++++----- arch/x86/include/asm/kmap_types.h | 13 ------- arch/x86/include/asm/paravirt_types.h | 1 arch/x86/mm/highmem_32.c | 59 ---------------------------------- arch/x86/mm/init_32.c | 15 -------- arch/x86/mm/iomap_32.c | 59 ++-------------------------------- include/linux/highmem.h | 2 - include/linux/io-mapping.h | 2 - mm/highmem.c | 2 - 12 files changed, 31 insertions(+), 161 deletions(-) --- a/arch/x86/Kconfig +++ b/arch/x86/Kconfig @@ -14,10 +14,11 @@ config X86_32 select ARCH_WANT_IPC_PARSE_VERSION select CLKSRC_I8253 select CLONE_BACKWARDS + select GENERIC_VDSO_32 select HAVE_DEBUG_STACKOVERFLOW + select KMAP_LOCAL select MODULES_USE_ELF_REL select OLD_SIGACTION - select GENERIC_VDSO_32 config X86_64 def_bool y --- a/arch/x86/include/asm/fixmap.h +++ b/arch/x86/include/asm/fixmap.h @@ -31,7 +31,7 @@ #include <asm/pgtable_types.h> #ifdef CONFIG_X86_32 #include <linux/threads.h> -#include <asm/kmap_types.h> +#include <asm/kmap_size.h> #else #include <uapi/asm/vsyscall.h> #endif @@ -94,7 +94,7 @@ enum fixed_addresses { #endif #ifdef CONFIG_X86_32 FIX_KMAP_BEGIN, /* reserved pte's for temporary kernel mappings */ - FIX_KMAP_END = FIX_KMAP_BEGIN+(KM_TYPE_NR*NR_CPUS)-1, + FIX_KMAP_END = FIX_KMAP_BEGIN + (KM_MAX_IDX * NR_CPUS) - 1, #ifdef CONFIG_PCI_MMCONFIG FIX_PCIE_MCFG, #endif @@ -151,7 +151,6 @@ extern void reserve_top_address(unsigned extern int fixmaps_set; -extern pte_t *kmap_pte; extern pte_t *pkmap_page_table; void __native_set_fixmap(enum fixed_addresses idx, pte_t pte); --- a/arch/x86/include/asm/highmem.h +++ b/arch/x86/include/asm/highmem.h @@ -23,7 +23,6 @@ #include <linux/interrupt.h> #include <linux/threads.h> -#include <asm/kmap_types.h> #include <asm/tlbflush.h> #include <asm/paravirt.h> #include <asm/fixmap.h> @@ -58,11 +57,17 @@ extern unsigned long highstart_pfn, high #define PKMAP_NR(virt) ((virt-PKMAP_BASE) >> PAGE_SHIFT) #define PKMAP_ADDR(nr) (PKMAP_BASE + ((nr) << PAGE_SHIFT)) -void *kmap_atomic_pfn(unsigned long pfn); -void *kmap_atomic_prot_pfn(unsigned long pfn, pgprot_t prot); - #define flush_cache_kmaps() do { } while (0) +#define arch_kmap_local_post_map(vaddr, pteval) \ + arch_flush_lazy_mmu_mode() + +#define arch_kmap_local_post_unmap(vaddr) \ + do { \ + flush_tlb_one_kernel((vaddr)); \ + arch_flush_lazy_mmu_mode(); \ + } while (0) + extern void add_highpages_with_active_regions(int nid, unsigned long start_pfn, unsigned long end_pfn); --- a/arch/x86/include/asm/iomap.h +++ b/arch/x86/include/asm/iomap.h @@ -9,19 +9,21 @@ #include <linux/fs.h> #include <linux/mm.h> #include <linux/uaccess.h> +#include <linux/highmem.h> #include <asm/cacheflush.h> #include <asm/tlbflush.h> -void __iomem * -iomap_atomic_prot_pfn(unsigned long pfn, pgprot_t prot); +void __iomem *iomap_atomic_pfn_prot(unsigned long pfn, pgprot_t prot); -void -iounmap_atomic(void __iomem *kvaddr); +static inline void iounmap_atomic(void __iomem *vaddr) +{ + kunmap_local_indexed((void __force *)vaddr); + pagefault_enable(); + preempt_enable(); +} -int -iomap_create_wc(resource_size_t base, unsigned long size, pgprot_t *prot); +int iomap_create_wc(resource_size_t base, unsigned long size, pgprot_t *prot); -void 
-iomap_free(resource_size_t base, unsigned long size); +void iomap_free(resource_size_t base, unsigned long size); #endif /* _ASM_X86_IOMAP_H */ --- a/arch/x86/include/asm/kmap_types.h +++ /dev/null @@ -1,13 +0,0 @@ -/* SPDX-License-Identifier: GPL-2.0 */ -#ifndef _ASM_X86_KMAP_TYPES_H -#define _ASM_X86_KMAP_TYPES_H - -#if defined(CONFIG_X86_32) && defined(CONFIG_DEBUG_HIGHMEM) -#define __WITH_KM_FENCE -#endif - -#include <asm-generic/kmap_types.h> - -#undef __WITH_KM_FENCE - -#endif /* _ASM_X86_KMAP_TYPES_H */ --- a/arch/x86/include/asm/paravirt_types.h +++ b/arch/x86/include/asm/paravirt_types.h @@ -41,7 +41,6 @@ #ifndef __ASSEMBLY__ #include <asm/desc_defs.h> -#include <asm/kmap_types.h> #include <asm/pgtable_types.h> #include <asm/nospec-branch.h> --- a/arch/x86/mm/highmem_32.c +++ b/arch/x86/mm/highmem_32.c @@ -4,65 +4,6 @@ #include <linux/swap.h> /* for totalram_pages */ #include <linux/memblock.h> -void *kmap_atomic_high_prot(struct page *page, pgprot_t prot) -{ - unsigned long vaddr; - int idx, type; - - type = kmap_atomic_idx_push(); - idx = type + KM_TYPE_NR*smp_processor_id(); - vaddr = __fix_to_virt(FIX_KMAP_BEGIN + idx); - BUG_ON(!pte_none(*(kmap_pte-idx))); - set_pte(kmap_pte-idx, mk_pte(page, prot)); - arch_flush_lazy_mmu_mode(); - - return (void *)vaddr; -} -EXPORT_SYMBOL(kmap_atomic_high_prot); - -/* - * This is the same as kmap_atomic() but can map memory that doesn't - * have a struct page associated with it. - */ -void *kmap_atomic_pfn(unsigned long pfn) -{ - return kmap_atomic_prot_pfn(pfn, kmap_prot); -} -EXPORT_SYMBOL_GPL(kmap_atomic_pfn); - -void kunmap_atomic_high(void *kvaddr) -{ - unsigned long vaddr = (unsigned long) kvaddr & PAGE_MASK; - - if (vaddr >= __fix_to_virt(FIX_KMAP_END) && - vaddr <= __fix_to_virt(FIX_KMAP_BEGIN)) { - int idx, type; - - type = kmap_atomic_idx(); - idx = type + KM_TYPE_NR * smp_processor_id(); - -#ifdef CONFIG_DEBUG_HIGHMEM - WARN_ON_ONCE(vaddr != __fix_to_virt(FIX_KMAP_BEGIN + idx)); -#endif - /* - * Force other mappings to Oops if they'll try to access this - * pte without first remap it. Keeping stale mappings around - * is a bad idea also, in case the page changes cacheability - * attributes or becomes a protected page in a hypervisor. - */ - kpte_clear_flush(kmap_pte-idx, vaddr); - kmap_atomic_idx_pop(); - arch_flush_lazy_mmu_mode(); - } -#ifdef CONFIG_DEBUG_HIGHMEM - else { - BUG_ON(vaddr < PAGE_OFFSET); - BUG_ON(vaddr >= (unsigned long)high_memory); - } -#endif -} -EXPORT_SYMBOL(kunmap_atomic_high); - void __init set_highmem_pages_init(void) { struct zone *zone; --- a/arch/x86/mm/init_32.c +++ b/arch/x86/mm/init_32.c @@ -394,19 +394,6 @@ kernel_physical_mapping_init(unsigned lo return last_map_addr; } -pte_t *kmap_pte; - -static void __init kmap_init(void) -{ - unsigned long kmap_vstart; - - /* - * Cache the first kmap pte: - */ - kmap_vstart = __fix_to_virt(FIX_KMAP_BEGIN); - kmap_pte = virt_to_kpte(kmap_vstart); -} - #ifdef CONFIG_HIGHMEM static void __init permanent_kmaps_init(pgd_t *pgd_base) { @@ -712,8 +699,6 @@ void __init paging_init(void) __flush_tlb_all(); - kmap_init(); - /* * NOTE: at this point the bootmem allocator is fully available. 
*/ --- a/arch/x86/mm/iomap_32.c +++ b/arch/x86/mm/iomap_32.c @@ -44,28 +44,7 @@ void iomap_free(resource_size_t base, un } EXPORT_SYMBOL_GPL(iomap_free); -void *kmap_atomic_prot_pfn(unsigned long pfn, pgprot_t prot) -{ - unsigned long vaddr; - int idx, type; - - preempt_disable(); - pagefault_disable(); - - type = kmap_atomic_idx_push(); - idx = type + KM_TYPE_NR * smp_processor_id(); - vaddr = __fix_to_virt(FIX_KMAP_BEGIN + idx); - set_pte(kmap_pte - idx, pfn_pte(pfn, prot)); - arch_flush_lazy_mmu_mode(); - - return (void *)vaddr; -} - -/* - * Map 'pfn' using protections 'prot' - */ -void __iomem * -iomap_atomic_prot_pfn(unsigned long pfn, pgprot_t prot) +void __iomem *iomap_atomic_pfn_prot(unsigned long pfn, pgprot_t prot) { /* * For non-PAT systems, translate non-WB request to UC- just in @@ -81,36 +60,8 @@ iomap_atomic_prot_pfn(unsigned long pfn, /* Filter out unsupported __PAGE_KERNEL* bits: */ pgprot_val(prot) &= __default_kernel_pte_mask; - return (void __force __iomem *) kmap_atomic_prot_pfn(pfn, prot); -} -EXPORT_SYMBOL_GPL(iomap_atomic_prot_pfn); - -void -iounmap_atomic(void __iomem *kvaddr) -{ - unsigned long vaddr = (unsigned long) kvaddr & PAGE_MASK; - - if (vaddr >= __fix_to_virt(FIX_KMAP_END) && - vaddr <= __fix_to_virt(FIX_KMAP_BEGIN)) { - int idx, type; - - type = kmap_atomic_idx(); - idx = type + KM_TYPE_NR * smp_processor_id(); - -#ifdef CONFIG_DEBUG_HIGHMEM - WARN_ON_ONCE(vaddr != __fix_to_virt(FIX_KMAP_BEGIN + idx)); -#endif - /* - * Force other mappings to Oops if they'll try to access this - * pte without first remap it. Keeping stale mappings around - * is a bad idea also, in case the page changes cacheability - * attributes or becomes a protected page in a hypervisor. - */ - kpte_clear_flush(kmap_pte-idx, vaddr); - kmap_atomic_idx_pop(); - } - - pagefault_enable(); - preempt_enable(); + preempt_disable(); + pagefault_disable(); + return (void __force __iomem *)__kmap_local_pfn_prot(pfn, prot); } -EXPORT_SYMBOL_GPL(iounmap_atomic); +EXPORT_SYMBOL_GPL(iomap_atomic_pfn_prot); --- a/include/linux/highmem.h +++ b/include/linux/highmem.h @@ -217,7 +217,7 @@ static inline void __kunmap_atomic(void #endif /* CONFIG_HIGHMEM */ #if !defined(CONFIG_KMAP_LOCAL) -#if defined(CONFIG_HIGHMEM) || defined(CONFIG_X86_32) +#if defined(CONFIG_HIGHMEM) DECLARE_PER_CPU(int, __kmap_atomic_idx); --- a/include/linux/io-mapping.h +++ b/include/linux/io-mapping.h @@ -69,7 +69,7 @@ io_mapping_map_atomic_wc(struct io_mappi BUG_ON(offset >= mapping->size); phys_addr = mapping->base + offset; - return iomap_atomic_prot_pfn(PHYS_PFN(phys_addr), mapping->prot); + return iomap_atomic_pfn_prot(PHYS_PFN(phys_addr), mapping->prot); } static inline void --- a/mm/highmem.c +++ b/mm/highmem.c @@ -32,7 +32,7 @@ #include <linux/vmalloc.h> #ifndef CONFIG_KMAP_LOCAL -#if defined(CONFIG_HIGHMEM) || defined(CONFIG_X86_32) +#ifdef CONFIG_HIGHMEM DEFINE_PER_CPU(int, __kmap_atomic_idx); #endif #endif
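For io_mapping users nothing changes at the call sites; the rename is
internal. A minimal usage sketch of the unchanged caller-facing API (the
struct io_mapping setup is assumed to exist elsewhere):

/* Sketch of an io_mapping user; 'iomap' setup is assumed elsewhere.
 * On X86 32bit, io_mapping_map_atomic_wc() now ends up in
 * iomap_atomic_pfn_prot() and the generic __kmap_local_pfn_prot(). */
static void write_dword_sketch(struct io_mapping *iomap,
			       unsigned long offset, u32 val)
{
	void __iomem *vaddr;

	vaddr = io_mapping_map_atomic_wc(iomap, offset);
	writel(val, vaddr);
	io_mapping_unmap_atomic(vaddr);
}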
Thomas Gleixner
2020-Nov-03 09:27 UTC
[patch V3 09/37] arc/mm/highmem: Use generic kmap atomic implementation
Adopt the map ordering to match the other architectures and the generic code. Also make the maximum entries limited and not dependend on the number of CPUs. With the original implementation did the following calculation: nr_slots = mapsize >> PAGE_SHIFT; The results in either 512 or 1024 total slots depending on configuration. The total slots have to be divided by the number of CPUs to get the number of slots per CPU (former KM_TYPE_NR). ARC supports up to 4k CPUs, so this just falls apart in random ways depending on the number of CPUs and the actual kmap (atomic) nesting. The comment in highmem.c: * - fixmap anyhow needs a limited number of mappings. So 2M kvaddr == 256 PTE * slots across NR_CPUS would be more than sufficient (generic code defines * KM_TYPE_NR as 20). is just wrong. KM_TYPE_NR (now KM_MAX_IDX) is the number of slots per CPU because kmap_local/atomic() needs to support nested mappings (thread, softirq, interrupt). While KM_MAX_IDX might be overestimated, the above reasoning is just wrong and clearly the highmem code was never tested with any system with more than a few CPUs. Use the default number of slots and fail the build when it does not fit. Randomly failing at runtime is not a really good option. Signed-off-by: Thomas Gleixner <tglx at linutronix.de> Cc: Vineet Gupta <vgupta at synopsys.com> Cc: linux-snps-arc at lists.infradead.org --- V3: Make it actually more correct. --- arch/arc/Kconfig | 1 arch/arc/include/asm/highmem.h | 26 ++++++++++++++---- arch/arc/include/asm/kmap_types.h | 14 --------- arch/arc/mm/highmem.c | 54 +++----------------------------------- 4 files changed, 26 insertions(+), 69 deletions(-) --- a/arch/arc/Kconfig +++ b/arch/arc/Kconfig @@ -507,6 +507,7 @@ config LINUX_RAM_BASE config HIGHMEM bool "High Memory Support" select ARCH_DISCONTIGMEM_ENABLE + select KMAP_LOCAL help With ARC 2G:2G address split, only upper 2G is directly addressable by kernel. Enable this to potentially allow access to rest of 2G and PAE --- a/arch/arc/include/asm/highmem.h +++ b/arch/arc/include/asm/highmem.h @@ -9,17 +9,29 @@ #ifdef CONFIG_HIGHMEM #include <uapi/asm/page.h> -#include <asm/kmap_types.h> +#include <asm/kmap_size.h> + +#define FIXMAP_SIZE PGDIR_SIZE +#define PKMAP_SIZE PGDIR_SIZE /* start after vmalloc area */ #define FIXMAP_BASE (PAGE_OFFSET - FIXMAP_SIZE - PKMAP_SIZE) -#define FIXMAP_SIZE PGDIR_SIZE /* only 1 PGD worth */ -#define KM_TYPE_NR ((FIXMAP_SIZE >> PAGE_SHIFT)/NR_CPUS) -#define FIXMAP_ADDR(nr) (FIXMAP_BASE + ((nr) << PAGE_SHIFT)) + +#define FIX_KMAP_SLOTS (KM_MAX_IDX * NR_CPUS) +#define FIX_KMAP_BEGIN (0UL) +#define FIX_KMAP_END ((FIX_KMAP_BEGIN + FIX_KMAP_SLOTS) - 1) + +#define FIXADDR_TOP (FIXMAP_BASE + (FIX_KMAP_END << PAGE_SHIFT)) + +/* + * This should be converted to the asm-generic version, but of course this + * is needlessly different from all other architectures. 
Sigh - tglx + */ +#define __fix_to_virt(x) (FIXADDR_TOP - ((x) << PAGE_SHIFT)) +#define __virt_to_fix(x) (((FIXADDR_TOP - ((x) & PAGE_MASK))) >> PAGE_SHIFT) /* start after fixmap area */ #define PKMAP_BASE (FIXMAP_BASE + FIXMAP_SIZE) -#define PKMAP_SIZE PGDIR_SIZE #define LAST_PKMAP (PKMAP_SIZE >> PAGE_SHIFT) #define LAST_PKMAP_MASK (LAST_PKMAP - 1) #define PKMAP_ADDR(nr) (PKMAP_BASE + ((nr) << PAGE_SHIFT)) @@ -29,11 +41,13 @@ extern void kmap_init(void); +#define arch_kmap_local_post_unmap(vaddr) \ + local_flush_tlb_kernel_range(vaddr, vaddr + PAGE_SIZE) + static inline void flush_cache_kmaps(void) { flush_cache_all(); } - #endif #endif --- a/arch/arc/include/asm/kmap_types.h +++ /dev/null @@ -1,14 +0,0 @@ -/* SPDX-License-Identifier: GPL-2.0-only */ -/* - * Copyright (C) 2015 Synopsys, Inc. (www.synopsys.com) - */ - -#ifndef _ASM_KMAP_TYPES_H -#define _ASM_KMAP_TYPES_H - -/* - * We primarily need to define KM_TYPE_NR here but that in turn - * is a function of PGDIR_SIZE etc. - * To avoid circular deps issue, put everything in asm/highmem.h - */ -#endif --- a/arch/arc/mm/highmem.c +++ b/arch/arc/mm/highmem.c @@ -36,9 +36,8 @@ * This means each only has 1 PGDIR_SIZE worth of kvaddr mappings, which means * 2M of kvaddr space for typical config (8K page and 11:8:13 traversal split) * - * - fixmap anyhow needs a limited number of mappings. So 2M kvaddr == 256 PTE - * slots across NR_CPUS would be more than sufficient (generic code defines - * KM_TYPE_NR as 20). + * - The fixed KMAP slots for kmap_local/atomic() require KM_MAX_IDX slots per + * CPU. So the number of CPUs sharing a single PTE page is limited. * * - pkmap being preemptible, in theory could do with more than 256 concurrent * mappings. However, generic pkmap code: map_new_virtual(), doesn't traverse @@ -47,48 +46,6 @@ */ extern pte_t * pkmap_page_table; -static pte_t * fixmap_page_table; - -void *kmap_atomic_high_prot(struct page *page, pgprot_t prot) -{ - int idx, cpu_idx; - unsigned long vaddr; - - cpu_idx = kmap_atomic_idx_push(); - idx = cpu_idx + KM_TYPE_NR * smp_processor_id(); - vaddr = FIXMAP_ADDR(idx); - - set_pte_at(&init_mm, vaddr, fixmap_page_table + idx, - mk_pte(page, prot)); - - return (void *)vaddr; -} -EXPORT_SYMBOL(kmap_atomic_high_prot); - -void kunmap_atomic_high(void *kv) -{ - unsigned long kvaddr = (unsigned long)kv; - - if (kvaddr >= FIXMAP_BASE && kvaddr < (FIXMAP_BASE + FIXMAP_SIZE)) { - - /* - * Because preemption is disabled, this vaddr can be associated - * with the current allocated index. - * But in case of multiple live kmap_atomic(), it still relies on - * callers to unmap in right order. - */ - int cpu_idx = kmap_atomic_idx(); - int idx = cpu_idx + KM_TYPE_NR * smp_processor_id(); - - WARN_ON(kvaddr != FIXMAP_ADDR(idx)); - - pte_clear(&init_mm, kvaddr, fixmap_page_table + idx); - local_flush_tlb_kernel_range(kvaddr, kvaddr + PAGE_SIZE); - - kmap_atomic_idx_pop(); - } -} -EXPORT_SYMBOL(kunmap_atomic_high); static noinline pte_t * __init alloc_kmap_pgtable(unsigned long kvaddr) { @@ -108,10 +65,9 @@ void __init kmap_init(void) { /* Due to recursive include hell, we can't do this in processor.h */ BUILD_BUG_ON(PAGE_OFFSET < (VMALLOC_END + FIXMAP_SIZE + PKMAP_SIZE)); + BUILD_BUG_ON(LAST_PKMAP > PTRS_PER_PTE); + BUILD_BUG_ON(FIX_KMAP_SLOTS > PTRS_PER_PTE); - BUILD_BUG_ON(KM_TYPE_NR > PTRS_PER_PTE); pkmap_page_table = alloc_kmap_pgtable(PKMAP_BASE); - - BUILD_BUG_ON(LAST_PKMAP > PTRS_PER_PTE); - fixmap_page_table = alloc_kmap_pgtable(FIXMAP_BASE); + alloc_kmap_pgtable(FIXMAP_BASE); }
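The arithmetic behind the changelog above, spelled out (numbers are taken
from the changelog; the NR_CPUS value is simply the maximum ARC supports):

/*
 * Old scheme:   nr_slots = mapsize >> PAGE_SHIFT   -> 512 or 1024 slots total
 *               slots per CPU = nr_slots / NR_CPUS  -> 0 for NR_CPUS = 4096
 *
 * New scheme:   slots needed = KM_MAX_IDX * NR_CPUS
 *               BUILD_BUG_ON(FIX_KMAP_SLOTS > PTRS_PER_PTE) turns an
 *               oversized configuration into a build failure instead of
 *               random runtime corruption.
 */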
Thomas Gleixner
2020-Nov-03 09:27 UTC
[patch V3 10/37] ARM: highmem: Switch to generic kmap atomic
No reason having the same code in every architecture. Signed-off-by: Thomas Gleixner <tglx at linutronix.de> Cc: Russell King <linux at armlinux.org.uk> Cc: Arnd Bergmann <arnd at arndb.de> Cc: linux-arm-kernel at lists.infradead.org --- V3: Remove the kmap types cruft --- arch/arm/Kconfig | 1 arch/arm/include/asm/fixmap.h | 4 - arch/arm/include/asm/highmem.h | 33 +++++++--- arch/arm/include/asm/kmap_types.h | 10 --- arch/arm/mm/Makefile | 1 arch/arm/mm/highmem.c | 121 -------------------------------------- 6 files changed, 26 insertions(+), 144 deletions(-) --- a/arch/arm/Kconfig +++ b/arch/arm/Kconfig @@ -1498,6 +1498,7 @@ config HAVE_ARCH_PFN_VALID config HIGHMEM bool "High Memory Support" depends on MMU + select KMAP_LOCAL help The address space of ARM processors is only 4 Gigabytes large and it has to accommodate user address space, kernel address --- a/arch/arm/include/asm/fixmap.h +++ b/arch/arm/include/asm/fixmap.h @@ -7,14 +7,14 @@ #define FIXADDR_TOP (FIXADDR_END - PAGE_SIZE) #include <linux/pgtable.h> -#include <asm/kmap_types.h> +#include <asm/kmap_size.h> enum fixed_addresses { FIX_EARLYCON_MEM_BASE, __end_of_permanent_fixed_addresses, FIX_KMAP_BEGIN = __end_of_permanent_fixed_addresses, - FIX_KMAP_END = FIX_KMAP_BEGIN + (KM_TYPE_NR * NR_CPUS) - 1, + FIX_KMAP_END = FIX_KMAP_BEGIN + (KM_MAX_IDX * NR_CPUS) - 1, /* Support writing RO kernel text via kprobes, jump labels, etc. */ FIX_TEXT_POKE0, --- a/arch/arm/include/asm/highmem.h +++ b/arch/arm/include/asm/highmem.h @@ -2,7 +2,7 @@ #ifndef _ASM_HIGHMEM_H #define _ASM_HIGHMEM_H -#include <asm/kmap_types.h> +#include <asm/kmap_size.h> #define PKMAP_BASE (PAGE_OFFSET - PMD_SIZE) #define LAST_PKMAP PTRS_PER_PTE @@ -46,19 +46,32 @@ extern pte_t *pkmap_page_table; #ifdef ARCH_NEEDS_KMAP_HIGH_GET extern void *kmap_high_get(struct page *page); -#else + +static inline void *arch_kmap_local_high_get(struct page *page) +{ + if (IS_ENABLED(CONFIG_DEBUG_HIGHMEM) && !cache_is_vivt()) + return NULL; + return kmap_high_get(page); +} +#define arch_kmap_local_high_get arch_kmap_local_high_get + +#else /* ARCH_NEEDS_KMAP_HIGH_GET */ static inline void *kmap_high_get(struct page *page) { return NULL; } -#endif +#endif /* !ARCH_NEEDS_KMAP_HIGH_GET */ -/* - * The following functions are already defined by <linux/highmem.h> - * when CONFIG_HIGHMEM is not set. - */ -#ifdef CONFIG_HIGHMEM -extern void *kmap_atomic_pfn(unsigned long pfn); -#endif +#define arch_kmap_local_post_map(vaddr, pteval) \ + local_flush_tlb_kernel_page(vaddr) + +#define arch_kmap_local_pre_unmap(vaddr) \ +do { \ + if (cache_is_vivt()) \ + __cpuc_flush_dcache_area((void *)vaddr, PAGE_SIZE); \ +} while (0) + +#define arch_kmap_local_post_unmap(vaddr) \ + local_flush_tlb_kernel_page(vaddr) #endif --- a/arch/arm/include/asm/kmap_types.h +++ /dev/null @@ -1,10 +0,0 @@ -/* SPDX-License-Identifier: GPL-2.0 */ -#ifndef __ARM_KMAP_TYPES_H -#define __ARM_KMAP_TYPES_H - -/* - * This is the "bare minimum". AIO seems to require this. 
- */ -#define KM_TYPE_NR 16 - -#endif --- a/arch/arm/mm/Makefile +++ b/arch/arm/mm/Makefile @@ -19,7 +19,6 @@ obj-$(CONFIG_MODULES) += proc-syms.o obj-$(CONFIG_DEBUG_VIRTUAL) += physaddr.o obj-$(CONFIG_ALIGNMENT_TRAP) += alignment.o -obj-$(CONFIG_HIGHMEM) += highmem.o obj-$(CONFIG_HUGETLB_PAGE) += hugetlbpage.o obj-$(CONFIG_ARM_PV_FIXUP) += pv-fixup-asm.o --- a/arch/arm/mm/highmem.c +++ /dev/null @@ -1,121 +0,0 @@ -// SPDX-License-Identifier: GPL-2.0-only -/* - * arch/arm/mm/highmem.c -- ARM highmem support - * - * Author: Nicolas Pitre - * Created: september 8, 2008 - * Copyright: Marvell Semiconductors Inc. - */ - -#include <linux/module.h> -#include <linux/highmem.h> -#include <linux/interrupt.h> -#include <asm/fixmap.h> -#include <asm/cacheflush.h> -#include <asm/tlbflush.h> -#include "mm.h" - -static inline void set_fixmap_pte(int idx, pte_t pte) -{ - unsigned long vaddr = __fix_to_virt(idx); - pte_t *ptep = virt_to_kpte(vaddr); - - set_pte_ext(ptep, pte, 0); - local_flush_tlb_kernel_page(vaddr); -} - -static inline pte_t get_fixmap_pte(unsigned long vaddr) -{ - pte_t *ptep = virt_to_kpte(vaddr); - - return *ptep; -} - -void *kmap_atomic_high_prot(struct page *page, pgprot_t prot) -{ - unsigned int idx; - unsigned long vaddr; - void *kmap; - int type; - -#ifdef CONFIG_DEBUG_HIGHMEM - /* - * There is no cache coherency issue when non VIVT, so force the - * dedicated kmap usage for better debugging purposes in that case. - */ - if (!cache_is_vivt()) - kmap = NULL; - else -#endif - kmap = kmap_high_get(page); - if (kmap) - return kmap; - - type = kmap_atomic_idx_push(); - - idx = FIX_KMAP_BEGIN + type + KM_TYPE_NR * smp_processor_id(); - vaddr = __fix_to_virt(idx); -#ifdef CONFIG_DEBUG_HIGHMEM - /* - * With debugging enabled, kunmap_atomic forces that entry to 0. - * Make sure it was indeed properly unmapped. - */ - BUG_ON(!pte_none(get_fixmap_pte(vaddr))); -#endif - /* - * When debugging is off, kunmap_atomic leaves the previous mapping - * in place, so the contained TLB flush ensures the TLB is updated - * with the new mapping. - */ - set_fixmap_pte(idx, mk_pte(page, prot)); - - return (void *)vaddr; -} -EXPORT_SYMBOL(kmap_atomic_high_prot); - -void kunmap_atomic_high(void *kvaddr) -{ - unsigned long vaddr = (unsigned long) kvaddr & PAGE_MASK; - int idx, type; - - if (kvaddr >= (void *)FIXADDR_START) { - type = kmap_atomic_idx(); - idx = FIX_KMAP_BEGIN + type + KM_TYPE_NR * smp_processor_id(); - - if (cache_is_vivt()) - __cpuc_flush_dcache_area((void *)vaddr, PAGE_SIZE); -#ifdef CONFIG_DEBUG_HIGHMEM - BUG_ON(vaddr != __fix_to_virt(idx)); - set_fixmap_pte(idx, __pte(0)); -#else - (void) idx; /* to kill a warning */ -#endif - kmap_atomic_idx_pop(); - } else if (vaddr >= PKMAP_ADDR(0) && vaddr < PKMAP_ADDR(LAST_PKMAP)) { - /* this address was obtained through kmap_high_get() */ - kunmap_high(pte_page(pkmap_page_table[PKMAP_NR(vaddr)])); - } -} -EXPORT_SYMBOL(kunmap_atomic_high); - -void *kmap_atomic_pfn(unsigned long pfn) -{ - unsigned long vaddr; - int idx, type; - struct page *page = pfn_to_page(pfn); - - preempt_disable(); - pagefault_disable(); - if (!PageHighMem(page)) - return page_address(page); - - type = kmap_atomic_idx_push(); - idx = FIX_KMAP_BEGIN + type + KM_TYPE_NR * smp_processor_id(); - vaddr = __fix_to_virt(idx); -#ifdef CONFIG_DEBUG_HIGHMEM - BUG_ON(!pte_none(get_fixmap_pte(vaddr))); -#endif - set_fixmap_pte(idx, pfn_pte(pfn, kmap_prot)); - - return (void *)vaddr; -}
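The net effect of the ARM hooks above on the generic unmap path, as a
rough sketch (not verbatim kernel code; the generic parts come from the
earlier "Provide generic variant of kmap_atomic*" patch):

/* Rough unmap sequence on ARM with the hooks defined above (sketch). */
static void arm_kunmap_local_sketch(unsigned long vaddr, pte_t *ptep)
{
	/* arch_kmap_local_pre_unmap(): VIVT caches are virtually indexed,
	 * so the dcache must be flushed while the mapping still exists. */
	if (cache_is_vivt())
		__cpuc_flush_dcache_area((void *)vaddr, PAGE_SIZE);

	/* The generic code clears the fixmap pte ... */
	pte_clear(&init_mm, vaddr, ptep);

	/* ... and arch_kmap_local_post_unmap() drops the stale TLB entry. */
	local_flush_tlb_kernel_page(vaddr);
}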
Thomas Gleixner
2020-Nov-03 09:27 UTC
[patch V3 11/37] csky/mm/highmem: Switch to generic kmap atomic
No reason having the same code in every architecture. Signed-off-by: Thomas Gleixner <tglx at linutronix.de> Cc: linux-csky at vger.kernel.org --- V3: Does not compile with gcc 10 --- arch/csky/Kconfig | 1 arch/csky/include/asm/fixmap.h | 4 +- arch/csky/include/asm/highmem.h | 6 ++- arch/csky/mm/highmem.c | 75 ---------------------------------------- 4 files changed, 8 insertions(+), 78 deletions(-) --- a/arch/csky/Kconfig +++ b/arch/csky/Kconfig @@ -286,6 +286,7 @@ config NR_CPUS config HIGHMEM bool "High Memory Support" depends on !CPU_CK610 + select KMAP_LOCAL default y config FORCE_MAX_ZONEORDER --- a/arch/csky/include/asm/fixmap.h +++ b/arch/csky/include/asm/fixmap.h @@ -8,7 +8,7 @@ #include <asm/memory.h> #ifdef CONFIG_HIGHMEM #include <linux/threads.h> -#include <asm/kmap_types.h> +#include <asm/kmap_size.h> #endif enum fixed_addresses { @@ -17,7 +17,7 @@ enum fixed_addresses { #endif #ifdef CONFIG_HIGHMEM FIX_KMAP_BEGIN, - FIX_KMAP_END = FIX_KMAP_BEGIN + (KM_TYPE_NR * NR_CPUS) - 1, + FIX_KMAP_END = FIX_KMAP_BEGIN + (KM_MAX_IDX * NR_CPUS) - 1, #endif __end_of_fixed_addresses }; --- a/arch/csky/include/asm/highmem.h +++ b/arch/csky/include/asm/highmem.h @@ -9,7 +9,7 @@ #include <linux/init.h> #include <linux/interrupt.h> #include <linux/uaccess.h> -#include <asm/kmap_types.h> +#include <asm/kmap_size.h> #include <asm/cache.h> /* undef for production */ @@ -32,10 +32,12 @@ extern pte_t *pkmap_page_table; #define ARCH_HAS_KMAP_FLUSH_TLB extern void kmap_flush_tlb(unsigned long addr); -extern void *kmap_atomic_pfn(unsigned long pfn); #define flush_cache_kmaps() do {} while (0) +#define arch_kmap_local_post_map(vaddr, pteval) kmap_flush_tlb(vaddr) +#define arch_kmap_local_post_unmap(vaddr) kmap_flush_tlb(vaddr) + extern void kmap_init(void); #endif /* __KERNEL__ */ --- a/arch/csky/mm/highmem.c +++ b/arch/csky/mm/highmem.c @@ -9,8 +9,6 @@ #include <asm/tlbflush.h> #include <asm/cacheflush.h> -static pte_t *kmap_pte; - unsigned long highstart_pfn, highend_pfn; void kmap_flush_tlb(unsigned long addr) @@ -19,67 +17,7 @@ void kmap_flush_tlb(unsigned long addr) } EXPORT_SYMBOL(kmap_flush_tlb); -void *kmap_atomic_high_prot(struct page *page, pgprot_t prot) -{ - unsigned long vaddr; - int idx, type; - - type = kmap_atomic_idx_push(); - idx = type + KM_TYPE_NR*smp_processor_id(); - vaddr = __fix_to_virt(FIX_KMAP_BEGIN + idx); -#ifdef CONFIG_DEBUG_HIGHMEM - BUG_ON(!pte_none(*(kmap_pte - idx))); -#endif - set_pte(kmap_pte-idx, mk_pte(page, prot)); - flush_tlb_one((unsigned long)vaddr); - - return (void *)vaddr; -} -EXPORT_SYMBOL(kmap_atomic_high_prot); - -void kunmap_atomic_high(void *kvaddr) -{ - unsigned long vaddr = (unsigned long) kvaddr & PAGE_MASK; - int idx; - - if (vaddr < FIXADDR_START) - return; - -#ifdef CONFIG_DEBUG_HIGHMEM - idx = KM_TYPE_NR*smp_processor_id() + kmap_atomic_idx(); - - BUG_ON(vaddr != __fix_to_virt(FIX_KMAP_BEGIN + idx)); - - pte_clear(&init_mm, vaddr, kmap_pte - idx); - flush_tlb_one(vaddr); -#else - (void) idx; /* to kill a warning */ -#endif - kmap_atomic_idx_pop(); -} -EXPORT_SYMBOL(kunmap_atomic_high); - -/* - * This is the same as kmap_atomic() but can map memory that doesn't - * have a struct page associated with it. 
- */ -void *kmap_atomic_pfn(unsigned long pfn) -{ - unsigned long vaddr; - int idx, type; - - pagefault_disable(); - - type = kmap_atomic_idx_push(); - idx = type + KM_TYPE_NR*smp_processor_id(); - vaddr = __fix_to_virt(FIX_KMAP_BEGIN + idx); - set_pte(kmap_pte-idx, pfn_pte(pfn, PAGE_KERNEL)); - flush_tlb_one(vaddr); - - return (void *) vaddr; -} - -static void __init kmap_pages_init(void) +void __init kmap_init(void) { unsigned long vaddr; pgd_t *pgd; @@ -96,14 +34,3 @@ static void __init kmap_pages_init(void) pte = pte_offset_kernel(pmd, vaddr); pkmap_page_table = pte; } - -void __init kmap_init(void) -{ - unsigned long vaddr; - - kmap_pages_init(); - - vaddr = __fix_to_virt(FIX_KMAP_BEGIN); - - kmap_pte = pte_offset_kernel((pmd_t *)pgd_offset_k(vaddr), vaddr); -}
Thomas Gleixner
2020-Nov-03 09:27 UTC
[patch V3 12/37] microblaze/mm/highmem: Switch to generic kmap atomic
No reason having the same code in every architecture. Signed-off-by: Thomas Gleixner <tglx at linutronix.de> Cc: Michal Simek <monstr at monstr.eu> --- V3: Remove the kmap types cruft --- arch/microblaze/Kconfig | 1 arch/microblaze/include/asm/fixmap.h | 4 - arch/microblaze/include/asm/highmem.h | 6 ++ arch/microblaze/mm/Makefile | 1 arch/microblaze/mm/highmem.c | 78 ---------------------------------- arch/microblaze/mm/init.c | 6 -- 6 files changed, 8 insertions(+), 88 deletions(-) --- a/arch/microblaze/Kconfig +++ b/arch/microblaze/Kconfig @@ -155,6 +155,7 @@ config XILINX_UNCACHED_SHADOW config HIGHMEM bool "High memory support" depends on MMU + select KMAP_LOCAL help The address space of Microblaze processors is only 4 Gigabytes large and it has to accommodate user address space, kernel address --- a/arch/microblaze/include/asm/fixmap.h +++ b/arch/microblaze/include/asm/fixmap.h @@ -20,7 +20,7 @@ #include <asm/page.h> #ifdef CONFIG_HIGHMEM #include <linux/threads.h> -#include <asm/kmap_types.h> +#include <asm/kmap_size.h> #endif #define FIXADDR_TOP ((unsigned long)(-PAGE_SIZE)) @@ -47,7 +47,7 @@ enum fixed_addresses { FIX_HOLE, #ifdef CONFIG_HIGHMEM FIX_KMAP_BEGIN, /* reserved pte's for temporary kernel mappings */ - FIX_KMAP_END = FIX_KMAP_BEGIN + (KM_TYPE_NR * num_possible_cpus()) - 1, + FIX_KMAP_END = FIX_KMAP_BEGIN + (KM_MAX_IDX * num_possible_cpus()) - 1, #endif __end_of_fixed_addresses }; --- a/arch/microblaze/include/asm/highmem.h +++ b/arch/microblaze/include/asm/highmem.h @@ -25,7 +25,6 @@ #include <linux/uaccess.h> #include <asm/fixmap.h> -extern pte_t *kmap_pte; extern pte_t *pkmap_page_table; /* @@ -52,6 +51,11 @@ extern pte_t *pkmap_page_table; #define flush_cache_kmaps() { flush_icache(); flush_dcache(); } +#define arch_kmap_local_post_map(vaddr, pteval) \ + local_flush_tlb_page(NULL, vaddr); +#define arch_kmap_local_post_unmap(vaddr) \ + local_flush_tlb_page(NULL, vaddr); + #endif /* __KERNEL__ */ #endif /* _ASM_HIGHMEM_H */ --- a/arch/microblaze/mm/Makefile +++ b/arch/microblaze/mm/Makefile @@ -6,4 +6,3 @@ obj-y := consistent.o init.o obj-$(CONFIG_MMU) += pgtable.o mmu_context.o fault.o -obj-$(CONFIG_HIGHMEM) += highmem.o --- a/arch/microblaze/mm/highmem.c +++ /dev/null @@ -1,78 +0,0 @@ -// SPDX-License-Identifier: GPL-2.0 -/* - * highmem.c: virtual kernel memory mappings for high memory - * - * PowerPC version, stolen from the i386 version. - * - * Used in CONFIG_HIGHMEM systems for memory pages which - * are not addressable by direct kernel virtual addresses. - * - * Copyright (C) 1999 Gerhard Wichert, Siemens AG - * Gerhard.Wichert at pdb.siemens.de - * - * - * Redesigned the x86 32-bit VM architecture to deal with - * up to 16 Terrabyte physical memory. With current x86 CPUs - * we now support up to 64 Gigabytes physical RAM. - * - * Copyright (C) 1999 Ingo Molnar <mingo at redhat.com> - * - * Reworked for PowerPC by various contributors. Moved from - * highmem.h by Benjamin Herrenschmidt (c) 2009 IBM Corp. - */ - -#include <linux/export.h> -#include <linux/highmem.h> - -/* - * The use of kmap_atomic/kunmap_atomic is discouraged - kmap/kunmap - * gives a more generic (and caching) interface. But kmap_atomic can - * be used in IRQ contexts, so in some (very limited) cases we need - * it. 
- */ -#include <asm/tlbflush.h> - -void *kmap_atomic_high_prot(struct page *page, pgprot_t prot) -{ - - unsigned long vaddr; - int idx, type; - - type = kmap_atomic_idx_push(); - idx = type + KM_TYPE_NR*smp_processor_id(); - vaddr = __fix_to_virt(FIX_KMAP_BEGIN + idx); -#ifdef CONFIG_DEBUG_HIGHMEM - BUG_ON(!pte_none(*(kmap_pte-idx))); -#endif - set_pte_at(&init_mm, vaddr, kmap_pte-idx, mk_pte(page, prot)); - local_flush_tlb_page(NULL, vaddr); - - return (void *) vaddr; -} -EXPORT_SYMBOL(kmap_atomic_high_prot); - -void kunmap_atomic_high(void *kvaddr) -{ - unsigned long vaddr = (unsigned long) kvaddr & PAGE_MASK; - int type; - unsigned int idx; - - if (vaddr < __fix_to_virt(FIX_KMAP_END)) - return; - - type = kmap_atomic_idx(); - - idx = type + KM_TYPE_NR * smp_processor_id(); -#ifdef CONFIG_DEBUG_HIGHMEM - BUG_ON(vaddr != __fix_to_virt(FIX_KMAP_BEGIN + idx)); -#endif - /* - * force other mappings to Oops if they'll try to access - * this pte without first remap it - */ - pte_clear(&init_mm, vaddr, kmap_pte-idx); - local_flush_tlb_page(NULL, vaddr); - - kmap_atomic_idx_pop(); -} -EXPORT_SYMBOL(kunmap_atomic_high); --- a/arch/microblaze/mm/init.c +++ b/arch/microblaze/mm/init.c @@ -49,17 +49,11 @@ unsigned long lowmem_size; EXPORT_SYMBOL(min_low_pfn); EXPORT_SYMBOL(max_low_pfn); -#ifdef CONFIG_HIGHMEM -pte_t *kmap_pte; -EXPORT_SYMBOL(kmap_pte); - static void __init highmem_init(void) { pr_debug("%x\n", (u32)PKMAP_BASE); map_page(PKMAP_BASE, 0, 0); /* XXX gross */ pkmap_page_table = virt_to_kpte(PKMAP_BASE); - - kmap_pte = virt_to_kpte(__fix_to_virt(FIX_KMAP_BEGIN)); } static void highmem_setup(void)
Thomas Gleixner
2020-Nov-03 09:27 UTC
[patch V3 13/37] mips/mm/highmem: Switch to generic kmap atomic
No reason having the same code in every architecture Signed-off-by: Thomas Gleixner <tglx at linutronix.de> Cc: Thomas Bogendoerfer <tsbogend at alpha.franken.de> Cc: linux-mips at vger.kernel.org --- V3: Remove the kmap types cruft --- arch/mips/Kconfig | 1 arch/mips/include/asm/fixmap.h | 4 - arch/mips/include/asm/highmem.h | 6 +- arch/mips/include/asm/kmap_types.h | 13 ------ arch/mips/mm/highmem.c | 77 ------------------------------------- arch/mips/mm/init.c | 4 - 6 files changed, 6 insertions(+), 99 deletions(-) --- a/arch/mips/Kconfig +++ b/arch/mips/Kconfig @@ -2719,6 +2719,7 @@ config WAR_MIPS34K_MISSED_ITLB config HIGHMEM bool "High Memory Support" depends on 32BIT && CPU_SUPPORTS_HIGHMEM && SYS_SUPPORTS_HIGHMEM && !CPU_MIPS32_3_5_EVA + select KMAP_LOCAL config CPU_SUPPORTS_HIGHMEM bool --- a/arch/mips/include/asm/fixmap.h +++ b/arch/mips/include/asm/fixmap.h @@ -17,7 +17,7 @@ #include <spaces.h> #ifdef CONFIG_HIGHMEM #include <linux/threads.h> -#include <asm/kmap_types.h> +#include <asm/kmap_size.h> #endif /* @@ -52,7 +52,7 @@ enum fixed_addresses { #ifdef CONFIG_HIGHMEM /* reserved pte's for temporary kernel mappings */ FIX_KMAP_BEGIN = FIX_CMAP_END + 1, - FIX_KMAP_END = FIX_KMAP_BEGIN+(KM_TYPE_NR*NR_CPUS)-1, + FIX_KMAP_END = FIX_KMAP_BEGIN + (KM_MAX_IDX * NR_CPUS) - 1, #endif __end_of_fixed_addresses }; --- a/arch/mips/include/asm/highmem.h +++ b/arch/mips/include/asm/highmem.h @@ -24,7 +24,7 @@ #include <linux/interrupt.h> #include <linux/uaccess.h> #include <asm/cpu-features.h> -#include <asm/kmap_types.h> +#include <asm/kmap_size.h> /* declarations for highmem.c */ extern unsigned long highstart_pfn, highend_pfn; @@ -48,11 +48,11 @@ extern pte_t *pkmap_page_table; #define ARCH_HAS_KMAP_FLUSH_TLB extern void kmap_flush_tlb(unsigned long addr); -extern void *kmap_atomic_pfn(unsigned long pfn); #define flush_cache_kmaps() BUG_ON(cpu_has_dc_aliases) -extern void kmap_init(void); +#define arch_kmap_local_post_map(vaddr, pteval) local_flush_tlb_one(vaddr) +#define arch_kmap_local_post_unmap(vaddr) local_flush_tlb_one(vaddr) #endif /* __KERNEL__ */ --- a/arch/mips/include/asm/kmap_types.h +++ /dev/null @@ -1,13 +0,0 @@ -/* SPDX-License-Identifier: GPL-2.0 */ -#ifndef _ASM_KMAP_TYPES_H -#define _ASM_KMAP_TYPES_H - -#ifdef CONFIG_DEBUG_HIGHMEM -#define __WITH_KM_FENCE -#endif - -#include <asm-generic/kmap_types.h> - -#undef __WITH_KM_FENCE - -#endif --- a/arch/mips/mm/highmem.c +++ b/arch/mips/mm/highmem.c @@ -8,8 +8,6 @@ #include <asm/fixmap.h> #include <asm/tlbflush.h> -static pte_t *kmap_pte; - unsigned long highstart_pfn, highend_pfn; void kmap_flush_tlb(unsigned long addr) @@ -17,78 +15,3 @@ void kmap_flush_tlb(unsigned long addr) flush_tlb_one(addr); } EXPORT_SYMBOL(kmap_flush_tlb); - -void *kmap_atomic_high_prot(struct page *page, pgprot_t prot) -{ - unsigned long vaddr; - int idx, type; - - type = kmap_atomic_idx_push(); - idx = type + KM_TYPE_NR*smp_processor_id(); - vaddr = __fix_to_virt(FIX_KMAP_BEGIN + idx); -#ifdef CONFIG_DEBUG_HIGHMEM - BUG_ON(!pte_none(*(kmap_pte - idx))); -#endif - set_pte(kmap_pte-idx, mk_pte(page, prot)); - local_flush_tlb_one((unsigned long)vaddr); - - return (void*) vaddr; -} -EXPORT_SYMBOL(kmap_atomic_high_prot); - -void kunmap_atomic_high(void *kvaddr) -{ - unsigned long vaddr = (unsigned long) kvaddr & PAGE_MASK; - int type __maybe_unused; - - if (vaddr < FIXADDR_START) - return; - - type = kmap_atomic_idx(); -#ifdef CONFIG_DEBUG_HIGHMEM - { - int idx = type + KM_TYPE_NR * smp_processor_id(); - - BUG_ON(vaddr != __fix_to_virt(FIX_KMAP_BEGIN + 
idx)); - - /* - * force other mappings to Oops if they'll try to access - * this pte without first remap it - */ - pte_clear(&init_mm, vaddr, kmap_pte-idx); - local_flush_tlb_one(vaddr); - } -#endif - kmap_atomic_idx_pop(); -} -EXPORT_SYMBOL(kunmap_atomic_high); - -/* - * This is the same as kmap_atomic() but can map memory that doesn't - * have a struct page associated with it. - */ -void *kmap_atomic_pfn(unsigned long pfn) -{ - unsigned long vaddr; - int idx, type; - - preempt_disable(); - pagefault_disable(); - - type = kmap_atomic_idx_push(); - idx = type + KM_TYPE_NR*smp_processor_id(); - vaddr = __fix_to_virt(FIX_KMAP_BEGIN + idx); - set_pte(kmap_pte-idx, pfn_pte(pfn, PAGE_KERNEL)); - flush_tlb_one(vaddr); - - return (void*) vaddr; -} - -void __init kmap_init(void) -{ - unsigned long kmap_vstart; - - /* cache the first kmap pte */ - kmap_vstart = __fix_to_virt(FIX_KMAP_BEGIN); - kmap_pte = virt_to_kpte(kmap_vstart); -} --- a/arch/mips/mm/init.c +++ b/arch/mips/mm/init.c @@ -36,7 +36,6 @@ #include <asm/cachectl.h> #include <asm/cpu.h> #include <asm/dma.h> -#include <asm/kmap_types.h> #include <asm/maar.h> #include <asm/mmu_context.h> #include <asm/sections.h> @@ -402,9 +401,6 @@ void __init paging_init(void) pagetable_init(); -#ifdef CONFIG_HIGHMEM - kmap_init(); -#endif #ifdef CONFIG_ZONE_DMA max_zone_pfns[ZONE_DMA] = MAX_DMA_PFN; #endif
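For illustration only, this is roughly the shape of the generic map/unmap path which ends up invoking the arch_kmap_local_*() hooks added above (simplified; index management, guard pages and preemption control are left out, and the sketch_* names are made up):

static void *sketch_kmap_local(struct page *page, pgprot_t prot, int idx)
{
	unsigned long vaddr = __fix_to_virt(FIX_KMAP_BEGIN + idx);
	pte_t pteval = mk_pte(page, prot);

	set_pte_at(&init_mm, vaddr, kmap_get_pte() - idx, pteval);
	arch_kmap_local_post_map(vaddr, pteval);	/* local_flush_tlb_one() on MIPS */
	return (void *)vaddr;
}

static void sketch_kunmap_local(unsigned long vaddr, int idx)
{
	arch_kmap_local_pre_unmap(vaddr);		/* generic no-op unless overridden */
	pte_clear(&init_mm, vaddr, kmap_get_pte() - idx);
	arch_kmap_local_post_unmap(vaddr);		/* local_flush_tlb_one() on MIPS */
}

So the architecture only supplies the TLB maintenance; everything else is shared.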
Thomas Gleixner
2020-Nov-03 09:27 UTC
[patch V3 14/37] nds32/mm/highmem: Switch to generic kmap atomic
The mapping code is odd and looks broken. See FIXME in the comment. Also fix the harmless off by one in the FIX_KMAP_END define. Signed-off-by: Thomas Gleixner <tglx at linutronix.de> Cc: Nick Hu <nickhu at andestech.com> Cc: Greentime Hu <green.hu at gmail.com> Cc: Vincent Chen <deanbo422 at gmail.com> --- V3: Remove the kmap types cruft --- arch/nds32/Kconfig.cpu | 1 arch/nds32/include/asm/fixmap.h | 4 +-- arch/nds32/include/asm/highmem.h | 22 +++++++++++++---- arch/nds32/mm/Makefile | 1 arch/nds32/mm/highmem.c | 48 --------------------------------------- 5 files changed, 19 insertions(+), 57 deletions(-) --- a/arch/nds32/Kconfig.cpu +++ b/arch/nds32/Kconfig.cpu @@ -157,6 +157,7 @@ config HW_SUPPORT_UNALIGNMENT_ACCESS config HIGHMEM bool "High Memory Support" depends on MMU && !CPU_CACHE_ALIASING + select KMAP_LOCAL help The address space of Andes processors is only 4 Gigabytes large and it has to accommodate user address space, kernel address --- a/arch/nds32/include/asm/fixmap.h +++ b/arch/nds32/include/asm/fixmap.h @@ -6,7 +6,7 @@ #ifdef CONFIG_HIGHMEM #include <linux/threads.h> -#include <asm/kmap_types.h> +#include <asm/kmap_size.h> #endif enum fixed_addresses { @@ -14,7 +14,7 @@ enum fixed_addresses { FIX_KMAP_RESERVED, FIX_KMAP_BEGIN, #ifdef CONFIG_HIGHMEM - FIX_KMAP_END = FIX_KMAP_BEGIN + (KM_TYPE_NR * NR_CPUS), + FIX_KMAP_END = FIX_KMAP_BEGIN + (KM_MAX_IDX * NR_CPUS) - 1, #endif FIX_EARLYCON_MEM_BASE, __end_of_fixed_addresses --- a/arch/nds32/include/asm/highmem.h +++ b/arch/nds32/include/asm/highmem.h @@ -5,7 +5,6 @@ #define _ASM_HIGHMEM_H #include <asm/proc-fns.h> -#include <asm/kmap_types.h> #include <asm/fixmap.h> /* @@ -45,11 +44,22 @@ extern pte_t *pkmap_page_table; extern void kmap_init(void); /* - * The following functions are already defined by <linux/highmem.h> - * when CONFIG_HIGHMEM is not set. + * FIXME: The below looks broken vs. a kmap_atomic() in task context which + * is interupted and another kmap_atomic() happens in interrupt context. + * But what do I know about nds32. 
-- tglx */ -#ifdef CONFIG_HIGHMEM -extern void *kmap_atomic_pfn(unsigned long pfn); -#endif +#define arch_kmap_local_post_map(vaddr, pteval) \ + do { \ + __nds32__tlbop_inv(vaddr); \ + __nds32__mtsr_dsb(vaddr, NDS32_SR_TLB_VPN); \ + __nds32__tlbop_rwr(pteval); \ + __nds32__isb(); \ + } while (0) + +#define arch_kmap_local_pre_unmap(vaddr) \ + do { \ + __nds32__tlbop_inv(vaddr); \ + __nds32__isb(); \ + } while (0) #endif --- a/arch/nds32/mm/Makefile +++ b/arch/nds32/mm/Makefile @@ -3,7 +3,6 @@ obj-y := extable.o tlb.o fault.o init mm-nds32.o cacheflush.o proc.o obj-$(CONFIG_ALIGNMENT_TRAP) += alignment.o -obj-$(CONFIG_HIGHMEM) += highmem.o ifdef CONFIG_FUNCTION_TRACER CFLAGS_REMOVE_proc.o = $(CC_FLAGS_FTRACE) --- a/arch/nds32/mm/highmem.c +++ /dev/null @@ -1,48 +0,0 @@ -// SPDX-License-Identifier: GPL-2.0 -// Copyright (C) 2005-2017 Andes Technology Corporation - -#include <linux/export.h> -#include <linux/highmem.h> -#include <linux/sched.h> -#include <linux/smp.h> -#include <linux/interrupt.h> -#include <linux/memblock.h> -#include <asm/fixmap.h> -#include <asm/tlbflush.h> - -void *kmap_atomic_high_prot(struct page *page, pgprot_t prot) -{ - unsigned int idx; - unsigned long vaddr, pte; - int type; - pte_t *ptep; - - type = kmap_atomic_idx_push(); - - idx = type + KM_TYPE_NR * smp_processor_id(); - vaddr = __fix_to_virt(FIX_KMAP_BEGIN + idx); - pte = (page_to_pfn(page) << PAGE_SHIFT) | prot; - ptep = pte_offset_kernel(pmd_off_k(vaddr), vaddr); - set_pte(ptep, pte); - - __nds32__tlbop_inv(vaddr); - __nds32__mtsr_dsb(vaddr, NDS32_SR_TLB_VPN); - __nds32__tlbop_rwr(pte); - __nds32__isb(); - return (void *)vaddr; -} -EXPORT_SYMBOL(kmap_atomic_high_prot); - -void kunmap_atomic_high(void *kvaddr) -{ - if (kvaddr >= (void *)FIXADDR_START) { - unsigned long vaddr = (unsigned long)kvaddr; - pte_t *ptep; - kmap_atomic_idx_pop(); - __nds32__tlbop_inv(vaddr); - __nds32__isb(); - ptep = pte_offset_kernel(pmd_off_k(vaddr), vaddr); - set_pte(ptep, 0); - } -} -EXPORT_SYMBOL(kunmap_atomic_high);
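For reference, the nesting scenario the FIXME worries about, as a sketch (assumes <linux/highmem.h>; the function names and page1/page2/buf are placeholders): a temporary map taken from interrupt context must land on a fresh slot and must not invalidate the mapping the interrupted task still holds.

/* Scenario sketch only. */
static void sketch_task_side(struct page *page1, void *data, size_t len)
{
	void *addr1 = kmap_atomic(page1);	/* takes map slot idx */

	/* <-- an interrupt may fire here and run sketch_irq_side() */
	memcpy(addr1, data, len);		/* page1 mapping must still be valid */
	kunmap_atomic(addr1);
}

static void sketch_irq_side(struct page *page2, void *buf, size_t len)
{
	void *addr2 = kmap_atomic(page2);	/* must nest on slot idx + 1 */

	memcpy(buf, addr2, len);
	kunmap_atomic(addr2);			/* pops back to slot idx */
}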
Thomas Gleixner
2020-Nov-03 09:27 UTC
[patch V3 15/37] powerpc/mm/highmem: Switch to generic kmap atomic
No reason having the same code in every architecture Signed-off-by: Thomas Gleixner <tglx at linutronix.de> Cc: Michael Ellerman <mpe at ellerman.id.au> Cc: Benjamin Herrenschmidt <benh at kernel.crashing.org> Cc: Paul Mackerras <paulus at samba.org> Cc: linuxppc-dev at lists.ozlabs.org --- V3: Remove the kmap types cruft --- arch/powerpc/Kconfig | 1 arch/powerpc/include/asm/fixmap.h | 4 +- arch/powerpc/include/asm/highmem.h | 7 ++- arch/powerpc/include/asm/kmap_types.h | 13 ------ arch/powerpc/mm/Makefile | 1 arch/powerpc/mm/highmem.c | 67 ---------------------------------- arch/powerpc/mm/mem.c | 7 --- 7 files changed, 8 insertions(+), 92 deletions(-) --- a/arch/powerpc/Kconfig +++ b/arch/powerpc/Kconfig @@ -409,6 +409,7 @@ menu "Kernel options" config HIGHMEM bool "High memory support" depends on PPC32 + select KMAP_LOCAL source "kernel/Kconfig.hz" --- a/arch/powerpc/include/asm/fixmap.h +++ b/arch/powerpc/include/asm/fixmap.h @@ -20,7 +20,7 @@ #include <asm/page.h> #ifdef CONFIG_HIGHMEM #include <linux/threads.h> -#include <asm/kmap_types.h> +#include <asm/kmap_size.h> #endif #ifdef CONFIG_KASAN @@ -55,7 +55,7 @@ enum fixed_addresses { FIX_EARLY_DEBUG_BASE = FIX_EARLY_DEBUG_TOP+(ALIGN(SZ_128K, PAGE_SIZE)/PAGE_SIZE)-1, #ifdef CONFIG_HIGHMEM FIX_KMAP_BEGIN, /* reserved pte's for temporary kernel mappings */ - FIX_KMAP_END = FIX_KMAP_BEGIN+(KM_TYPE_NR*NR_CPUS)-1, + FIX_KMAP_END = FIX_KMAP_BEGIN + (KM_MAX_IDX * NR_CPUS) - 1, #endif #ifdef CONFIG_PPC_8xx /* For IMMR we need an aligned 512K area */ --- a/arch/powerpc/include/asm/highmem.h +++ b/arch/powerpc/include/asm/highmem.h @@ -24,12 +24,10 @@ #ifdef __KERNEL__ #include <linux/interrupt.h> -#include <asm/kmap_types.h> #include <asm/cacheflush.h> #include <asm/page.h> #include <asm/fixmap.h> -extern pte_t *kmap_pte; extern pte_t *pkmap_page_table; /* @@ -60,6 +58,11 @@ extern pte_t *pkmap_page_table; #define flush_cache_kmaps() flush_cache_all() +#define arch_kmap_local_post_map(vaddr, pteval) \ + local_flush_tlb_page(NULL, vaddr) +#define arch_kmap_local_post_unmap(vaddr) \ + local_flush_tlb_page(NULL, vaddr) + #endif /* __KERNEL__ */ #endif /* _ASM_HIGHMEM_H */ --- a/arch/powerpc/include/asm/kmap_types.h +++ /dev/null @@ -1,13 +0,0 @@ -/* SPDX-License-Identifier: GPL-2.0-or-later */ -#ifndef _ASM_POWERPC_KMAP_TYPES_H -#define _ASM_POWERPC_KMAP_TYPES_H - -#ifdef __KERNEL__ - -/* - */ - -#define KM_TYPE_NR 16 - -#endif /* __KERNEL__ */ -#endif /* _ASM_POWERPC_KMAP_TYPES_H */ --- a/arch/powerpc/mm/Makefile +++ b/arch/powerpc/mm/Makefile @@ -16,7 +16,6 @@ obj-$(CONFIG_NEED_MULTIPLE_NODES) += num obj-$(CONFIG_PPC_MM_SLICES) += slice.o obj-$(CONFIG_HUGETLB_PAGE) += hugetlbpage.o obj-$(CONFIG_NOT_COHERENT_CACHE) += dma-noncoherent.o -obj-$(CONFIG_HIGHMEM) += highmem.o obj-$(CONFIG_PPC_COPRO_BASE) += copro_fault.o obj-$(CONFIG_PPC_PTDUMP) += ptdump/ obj-$(CONFIG_KASAN) += kasan/ --- a/arch/powerpc/mm/highmem.c +++ /dev/null @@ -1,67 +0,0 @@ -// SPDX-License-Identifier: GPL-2.0 -/* - * highmem.c: virtual kernel memory mappings for high memory - * - * PowerPC version, stolen from the i386 version. - * - * Used in CONFIG_HIGHMEM systems for memory pages which - * are not addressable by direct kernel virtual addresses. - * - * Copyright (C) 1999 Gerhard Wichert, Siemens AG - * Gerhard.Wichert at pdb.siemens.de - * - * - * Redesigned the x86 32-bit VM architecture to deal with - * up to 16 Terrabyte physical memory. With current x86 CPUs - * we now support up to 64 Gigabytes physical RAM. 
- * - * Copyright (C) 1999 Ingo Molnar <mingo at redhat.com> - * - * Reworked for PowerPC by various contributors. Moved from - * highmem.h by Benjamin Herrenschmidt (c) 2009 IBM Corp. - */ - -#include <linux/highmem.h> -#include <linux/module.h> - -void *kmap_atomic_high_prot(struct page *page, pgprot_t prot) -{ - unsigned long vaddr; - int idx, type; - - type = kmap_atomic_idx_push(); - idx = type + KM_TYPE_NR*smp_processor_id(); - vaddr = __fix_to_virt(FIX_KMAP_BEGIN + idx); - WARN_ON(IS_ENABLED(CONFIG_DEBUG_HIGHMEM) && !pte_none(*(kmap_pte - idx))); - __set_pte_at(&init_mm, vaddr, kmap_pte-idx, mk_pte(page, prot), 1); - local_flush_tlb_page(NULL, vaddr); - - return (void*) vaddr; -} -EXPORT_SYMBOL(kmap_atomic_high_prot); - -void kunmap_atomic_high(void *kvaddr) -{ - unsigned long vaddr = (unsigned long) kvaddr & PAGE_MASK; - - if (vaddr < __fix_to_virt(FIX_KMAP_END)) - return; - - if (IS_ENABLED(CONFIG_DEBUG_HIGHMEM)) { - int type = kmap_atomic_idx(); - unsigned int idx; - - idx = type + KM_TYPE_NR * smp_processor_id(); - WARN_ON(vaddr != __fix_to_virt(FIX_KMAP_BEGIN + idx)); - - /* - * force other mappings to Oops if they'll try to access - * this pte without first remap it - */ - pte_clear(&init_mm, vaddr, kmap_pte-idx); - local_flush_tlb_page(NULL, vaddr); - } - - kmap_atomic_idx_pop(); -} -EXPORT_SYMBOL(kunmap_atomic_high); --- a/arch/powerpc/mm/mem.c +++ b/arch/powerpc/mm/mem.c @@ -61,11 +61,6 @@ unsigned long long memory_limit; bool init_mem_is_free; -#ifdef CONFIG_HIGHMEM -pte_t *kmap_pte; -EXPORT_SYMBOL(kmap_pte); -#endif - pgprot_t phys_mem_access_prot(struct file *file, unsigned long pfn, unsigned long size, pgprot_t vma_prot) { @@ -235,8 +230,6 @@ void __init paging_init(void) map_kernel_page(PKMAP_BASE, 0, __pgprot(0)); /* XXX gross */ pkmap_page_table = virt_to_kpte(PKMAP_BASE); - - kmap_pte = virt_to_kpte(__fix_to_virt(FIX_KMAP_BEGIN)); #endif /* CONFIG_HIGHMEM */ printk(KERN_DEBUG "Top of RAM: 0x%llx, Total RAM: 0x%llx\n",
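For reference, the FIX_KMAP_END arithmetic used by all of these conversions reserves KM_MAX_IDX fixmap slots per CPU; the virtual address of a given slot is then a plain fixmap conversion. A minimal sketch (the sketch_ name is made up; the real index calculation lives in the generic highmem code):

/* Each CPU owns KM_MAX_IDX consecutive slots starting at FIX_KMAP_BEGIN. */
static inline unsigned long sketch_kmap_slot_addr(int cpu, int idx)
{
	unsigned int slot = cpu * KM_MAX_IDX + idx;

	return __fix_to_virt(FIX_KMAP_BEGIN + slot);
}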
Thomas Gleixner
2020-Nov-03 09:27 UTC
[patch V3 16/37] sparc/mm/highmem: Switch to generic kmap atomic
No reason having the same code in every architecture Signed-off-by: Thomas Gleixner <tglx at linutronix.de> Cc: "David S. Miller" <davem at davemloft.net> Cc: sparclinux at vger.kernel.org --- V3: Remove the kmap types cruft --- arch/sparc/Kconfig | 1 arch/sparc/include/asm/highmem.h | 8 +- arch/sparc/include/asm/kmap_types.h | 11 --- arch/sparc/include/asm/vaddrs.h | 4 - arch/sparc/mm/Makefile | 3 arch/sparc/mm/highmem.c | 115 ------------------------------------ arch/sparc/mm/srmmu.c | 2 7 files changed, 8 insertions(+), 136 deletions(-) --- a/arch/sparc/Kconfig +++ b/arch/sparc/Kconfig @@ -139,6 +139,7 @@ config MMU config HIGHMEM bool default y if SPARC32 + select KMAP_LOCAL config ZONE_DMA bool --- a/arch/sparc/include/asm/highmem.h +++ b/arch/sparc/include/asm/highmem.h @@ -24,7 +24,6 @@ #include <linux/interrupt.h> #include <linux/pgtable.h> #include <asm/vaddrs.h> -#include <asm/kmap_types.h> #include <asm/pgtsrmmu.h> /* declarations for highmem.c */ @@ -33,8 +32,6 @@ extern unsigned long highstart_pfn, high #define kmap_prot __pgprot(SRMMU_ET_PTE | SRMMU_PRIV | SRMMU_CACHE) extern pte_t *pkmap_page_table; -void kmap_init(void) __init; - /* * Right now we initialize only a single pte table. It can be extended * easily, subsequent pte tables have to be allocated in one physical @@ -53,6 +50,11 @@ void kmap_init(void) __init; #define flush_cache_kmaps() flush_cache_all() +/* FIXME: Use __flush_tlb_one(vaddr) instead of flush_cache_all() -- Anton */ +#define arch_kmap_local_post_map(vaddr, pteval) flush_cache_all() +#define arch_kmap_local_post_unmap(vaddr) flush_cache_all() + + #endif /* __KERNEL__ */ #endif /* _ASM_HIGHMEM_H */ --- a/arch/sparc/include/asm/kmap_types.h +++ /dev/null @@ -1,11 +0,0 @@ -/* SPDX-License-Identifier: GPL-2.0 */ -#ifndef _ASM_KMAP_TYPES_H -#define _ASM_KMAP_TYPES_H - -/* Dummy header just to define km_type. None of this - * is actually used on sparc. -DaveM - */ - -#include <asm-generic/kmap_types.h> - -#endif --- a/arch/sparc/include/asm/vaddrs.h +++ b/arch/sparc/include/asm/vaddrs.h @@ -32,13 +32,13 @@ #define SRMMU_NOCACHE_ALCRATIO 64 /* 256 pages per 64MB of system RAM */ #ifndef __ASSEMBLY__ -#include <asm/kmap_types.h> +#include <asm/kmap_size.h> enum fixed_addresses { FIX_HOLE, #ifdef CONFIG_HIGHMEM FIX_KMAP_BEGIN, - FIX_KMAP_END = (KM_TYPE_NR * NR_CPUS), + FIX_KMAP_END = (KM_MAX_IDX * NR_CPUS), #endif __end_of_fixed_addresses }; --- a/arch/sparc/mm/Makefile +++ b/arch/sparc/mm/Makefile @@ -15,6 +15,3 @@ obj-$(CONFIG_SPARC32) += leon_mm.o # Only used by sparc64 obj-$(CONFIG_HUGETLB_PAGE) += hugetlbpage.o - -# Only used by sparc32 -obj-$(CONFIG_HIGHMEM) += highmem.o --- a/arch/sparc/mm/highmem.c +++ /dev/null @@ -1,115 +0,0 @@ -// SPDX-License-Identifier: GPL-2.0 -/* - * highmem.c: virtual kernel memory mappings for high memory - * - * Provides kernel-static versions of atomic kmap functions originally - * found as inlines in include/asm-sparc/highmem.h. These became - * needed as kmap_atomic() and kunmap_atomic() started getting - * called from within modules. - * -- Tomas Szepe <szepe at pinerecords.com>, September 2002 - * - * But kmap_atomic() and kunmap_atomic() cannot be inlined in - * modules because they are loaded with btfixup-ped functions. - */ - -/* - * The use of kmap_atomic/kunmap_atomic is discouraged - kmap/kunmap - * gives a more generic (and caching) interface. But kmap_atomic can - * be used in IRQ contexts, so in some (very limited) cases we need it. - * - * XXX This is an old text. 
Actually, it's good to use atomic kmaps, - * provided you remember that they are atomic and not try to sleep - * with a kmap taken, much like a spinlock. Non-atomic kmaps are - * shared by CPUs, and so precious, and establishing them requires IPI. - * Atomic kmaps are lightweight and we may have NCPUS more of them. - */ -#include <linux/highmem.h> -#include <linux/export.h> -#include <linux/mm.h> - -#include <asm/cacheflush.h> -#include <asm/tlbflush.h> -#include <asm/vaddrs.h> - -static pte_t *kmap_pte; - -void __init kmap_init(void) -{ - unsigned long address = __fix_to_virt(FIX_KMAP_BEGIN); - - /* cache the first kmap pte */ - kmap_pte = virt_to_kpte(address); -} - -void *kmap_atomic_high_prot(struct page *page, pgprot_t prot) -{ - unsigned long vaddr; - long idx, type; - - type = kmap_atomic_idx_push(); - idx = type + KM_TYPE_NR*smp_processor_id(); - vaddr = __fix_to_virt(FIX_KMAP_BEGIN + idx); - -/* XXX Fix - Anton */ -#if 0 - __flush_cache_one(vaddr); -#else - flush_cache_all(); -#endif - -#ifdef CONFIG_DEBUG_HIGHMEM - BUG_ON(!pte_none(*(kmap_pte-idx))); -#endif - set_pte(kmap_pte-idx, mk_pte(page, prot)); -/* XXX Fix - Anton */ -#if 0 - __flush_tlb_one(vaddr); -#else - flush_tlb_all(); -#endif - - return (void*) vaddr; -} -EXPORT_SYMBOL(kmap_atomic_high_prot); - -void kunmap_atomic_high(void *kvaddr) -{ - unsigned long vaddr = (unsigned long) kvaddr & PAGE_MASK; - int type; - - if (vaddr < FIXADDR_START) - return; - - type = kmap_atomic_idx(); - -#ifdef CONFIG_DEBUG_HIGHMEM - { - unsigned long idx; - - idx = type + KM_TYPE_NR * smp_processor_id(); - BUG_ON(vaddr != __fix_to_virt(FIX_KMAP_BEGIN+idx)); - - /* XXX Fix - Anton */ -#if 0 - __flush_cache_one(vaddr); -#else - flush_cache_all(); -#endif - - /* - * force other mappings to Oops if they'll try to access - * this pte without first remap it - */ - pte_clear(&init_mm, vaddr, kmap_pte-idx); - /* XXX Fix - Anton */ -#if 0 - __flush_tlb_one(vaddr); -#else - flush_tlb_all(); -#endif - } -#endif - - kmap_atomic_idx_pop(); -} -EXPORT_SYMBOL(kunmap_atomic_high); --- a/arch/sparc/mm/srmmu.c +++ b/arch/sparc/mm/srmmu.c @@ -971,8 +971,6 @@ void __init srmmu_paging_init(void) sparc_context_init(num_contexts); - kmap_init(); - { unsigned long max_zone_pfn[MAX_NR_ZONES] = { 0 };
Thomas Gleixner
2020-Nov-03 09:27 UTC
[patch V3 17/37] xtensa/mm/highmem: Switch to generic kmap atomic
No reason having the same code in every architecture Signed-off-by: Thomas Gleixner <tglx at linutronix.de> Cc: Chris Zankel <chris at zankel.net> Cc: Max Filippov <jcmvbkbc at gmail.com> Cc: linux-xtensa at linux-xtensa.org --- V3: Remove the kmap types cruft --- arch/xtensa/Kconfig | 1 arch/xtensa/include/asm/fixmap.h | 4 +-- arch/xtensa/include/asm/highmem.h | 12 ++++++++- arch/xtensa/mm/highmem.c | 46 ++++---------------------------------- 4 files changed, 18 insertions(+), 45 deletions(-) --- a/arch/xtensa/Kconfig +++ b/arch/xtensa/Kconfig @@ -666,6 +666,7 @@ endchoice config HIGHMEM bool "High Memory Support" depends on MMU + select KMAP_LOCAL help Linux can use the full amount of RAM in the system by default. However, the default MMUv2 setup only maps the --- a/arch/xtensa/include/asm/fixmap.h +++ b/arch/xtensa/include/asm/fixmap.h @@ -16,7 +16,7 @@ #ifdef CONFIG_HIGHMEM #include <linux/threads.h> #include <linux/pgtable.h> -#include <asm/kmap_types.h> +#include <asm/kmap_size.h> #endif /* @@ -39,7 +39,7 @@ enum fixed_addresses { /* reserved pte's for temporary kernel mappings */ FIX_KMAP_BEGIN, FIX_KMAP_END = FIX_KMAP_BEGIN + - (KM_TYPE_NR * NR_CPUS * DCACHE_N_COLORS) - 1, + (KM_MAX_IDX * NR_CPUS * DCACHE_N_COLORS) - 1, #endif __end_of_fixed_addresses }; --- a/arch/xtensa/include/asm/highmem.h +++ b/arch/xtensa/include/asm/highmem.h @@ -16,9 +16,8 @@ #include <linux/pgtable.h> #include <asm/cacheflush.h> #include <asm/fixmap.h> -#include <asm/kmap_types.h> -#define PKMAP_BASE ((FIXADDR_START - \ +#define PKMAP_BASE ((FIXADDR_START - \ (LAST_PKMAP + 1) * PAGE_SIZE) & PMD_MASK) #define LAST_PKMAP (PTRS_PER_PTE * DCACHE_N_COLORS) #define LAST_PKMAP_MASK (LAST_PKMAP - 1) @@ -68,6 +67,15 @@ static inline void flush_cache_kmaps(voi flush_cache_all(); } +enum fixed_addresses kmap_local_map_idx(int type, unsigned long pfn); +#define arch_kmap_local_map_idx kmap_local_map_idx + +enum fixed_addresses kmap_local_unmap_idx(int type, unsigned long addr); +#define arch_kmap_local_unmap_idx kmap_local_unmap_idx + +#define arch_kmap_local_post_unmap(vaddr) \ + local_flush_tlb_kernel_range(vaddr, vaddr + PAGE_SIZE) + void kmap_init(void); #endif --- a/arch/xtensa/mm/highmem.c +++ b/arch/xtensa/mm/highmem.c @@ -12,8 +12,6 @@ #include <linux/highmem.h> #include <asm/tlbflush.h> -static pte_t *kmap_pte; - #if DCACHE_WAY_SIZE > PAGE_SIZE unsigned int last_pkmap_nr_arr[DCACHE_N_COLORS]; wait_queue_head_t pkmap_map_wait_arr[DCACHE_N_COLORS]; @@ -33,59 +31,25 @@ static inline void kmap_waitqueues_init( static inline enum fixed_addresses kmap_idx(int type, unsigned long color) { - return (type + KM_TYPE_NR * smp_processor_id()) * DCACHE_N_COLORS + + return (type + KM_MAX_IDX * smp_processor_id()) * DCACHE_N_COLORS + color; } -void *kmap_atomic_high_prot(struct page *page, pgprot_t prot) +enum fixed_addresses kmap_local_map_idx(int type, unsigned long pfn) { - enum fixed_addresses idx; - unsigned long vaddr; - - idx = kmap_idx(kmap_atomic_idx_push(), - DCACHE_ALIAS(page_to_phys(page))); - vaddr = __fix_to_virt(FIX_KMAP_BEGIN + idx); -#ifdef CONFIG_DEBUG_HIGHMEM - BUG_ON(!pte_none(*(kmap_pte + idx))); -#endif - set_pte(kmap_pte + idx, mk_pte(page, prot)); - - return (void *)vaddr; + return kmap_idx(type, DCACHE_ALIAS(pfn << PAGE_SHIFT)); } -EXPORT_SYMBOL(kmap_atomic_high_prot); -void kunmap_atomic_high(void *kvaddr) +enum fixed_addresses kmap_local_unmap_idx(int type, unsigned long addr) { - if (kvaddr >= (void *)FIXADDR_START && - kvaddr < (void *)FIXADDR_TOP) { - int idx = kmap_idx(kmap_atomic_idx(), - 
DCACHE_ALIAS((unsigned long)kvaddr)); - - /* - * Force other mappings to Oops if they'll try to access this - * pte without first remap it. Keeping stale mappings around - * is a bad idea also, in case the page changes cacheability - * attributes or becomes a protected page in a hypervisor. - */ - pte_clear(&init_mm, kvaddr, kmap_pte + idx); - local_flush_tlb_kernel_range((unsigned long)kvaddr, - (unsigned long)kvaddr + PAGE_SIZE); - - kmap_atomic_idx_pop(); - } + return kmap_idx(type, DCACHE_ALIAS(addr)); } -EXPORT_SYMBOL(kunmap_atomic_high); void __init kmap_init(void) { - unsigned long kmap_vstart; - /* Check if this memory layout is broken because PKMAP overlaps * page table. */ BUILD_BUG_ON(PKMAP_BASE < TLBTEMP_BASE_1 + TLBTEMP_SIZE); - /* cache the first kmap pte */ - kmap_vstart = __fix_to_virt(FIX_KMAP_BEGIN); - kmap_pte = virt_to_kpte(kmap_vstart); kmap_waitqueues_init(); }
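The xtensa override exists because of the aliasing D-cache: the map index is spread over DCACHE_N_COLORS slots so that the temporary virtual address gets the same cache colour as the physical page. A sketch of what that buys (the sketch_ name is made up; kmap_local_map_idx() is the function added above):

static inline unsigned long sketch_colored_vaddr(int type, unsigned long pfn)
{
	enum fixed_addresses idx = kmap_local_map_idx(type, pfn);

	/* vaddr and (pfn << PAGE_SHIFT) now share the same cache colour */
	return __fix_to_virt(FIX_KMAP_BEGIN + idx);
}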
The header is not longer used and on alpha, ia64, openrisc, parisc and um it was completely unused anyway as these architectures have no highmem support. Signed-off-by: Thomas Gleixner <tglx at linutronix.de> --- V3: New patch --- arch/alpha/include/asm/kmap_types.h | 15 --------------- arch/ia64/include/asm/kmap_types.h | 13 ------------- arch/openrisc/mm/init.c | 1 - arch/openrisc/mm/ioremap.c | 1 - arch/parisc/include/asm/kmap_types.h | 13 ------------- arch/um/include/asm/fixmap.h | 1 - arch/um/include/asm/kmap_types.h | 13 ------------- include/asm-generic/Kbuild | 1 - include/asm-generic/kmap_types.h | 11 ----------- include/linux/highmem.h | 2 -- 10 files changed, 71 deletions(-) --- a/arch/alpha/include/asm/kmap_types.h +++ /dev/null @@ -1,15 +0,0 @@ -/* SPDX-License-Identifier: GPL-2.0 */ -#ifndef _ASM_KMAP_TYPES_H -#define _ASM_KMAP_TYPES_H - -/* Dummy header just to define km_type. */ - -#ifdef CONFIG_DEBUG_HIGHMEM -#define __WITH_KM_FENCE -#endif - -#include <asm-generic/kmap_types.h> - -#undef __WITH_KM_FENCE - -#endif --- a/arch/ia64/include/asm/kmap_types.h +++ /dev/null @@ -1,13 +0,0 @@ -/* SPDX-License-Identifier: GPL-2.0 */ -#ifndef _ASM_IA64_KMAP_TYPES_H -#define _ASM_IA64_KMAP_TYPES_H - -#ifdef CONFIG_DEBUG_HIGHMEM -#define __WITH_KM_FENCE -#endif - -#include <asm-generic/kmap_types.h> - -#undef __WITH_KM_FENCE - -#endif /* _ASM_IA64_KMAP_TYPES_H */ --- a/arch/openrisc/mm/init.c +++ b/arch/openrisc/mm/init.c @@ -33,7 +33,6 @@ #include <asm/io.h> #include <asm/tlb.h> #include <asm/mmu_context.h> -#include <asm/kmap_types.h> #include <asm/fixmap.h> #include <asm/tlbflush.h> #include <asm/sections.h> --- a/arch/openrisc/mm/ioremap.c +++ b/arch/openrisc/mm/ioremap.c @@ -15,7 +15,6 @@ #include <linux/io.h> #include <linux/pgtable.h> #include <asm/pgalloc.h> -#include <asm/kmap_types.h> #include <asm/fixmap.h> #include <asm/bug.h> #include <linux/sched.h> --- a/arch/parisc/include/asm/kmap_types.h +++ /dev/null @@ -1,13 +0,0 @@ -/* SPDX-License-Identifier: GPL-2.0 */ -#ifndef _ASM_KMAP_TYPES_H -#define _ASM_KMAP_TYPES_H - -#ifdef CONFIG_DEBUG_HIGHMEM -#define __WITH_KM_FENCE -#endif - -#include <asm-generic/kmap_types.h> - -#undef __WITH_KM_FENCE - -#endif --- a/arch/um/include/asm/fixmap.h +++ b/arch/um/include/asm/fixmap.h @@ -3,7 +3,6 @@ #define __UM_FIXMAP_H #include <asm/processor.h> -#include <asm/kmap_types.h> #include <asm/archparam.h> #include <asm/page.h> #include <linux/threads.h> --- a/arch/um/include/asm/kmap_types.h +++ /dev/null @@ -1,13 +0,0 @@ -/* SPDX-License-Identifier: GPL-2.0 */ -/* - * Copyright (C) 2002 Jeff Dike (jdike at karaya.com) - */ - -#ifndef __UM_KMAP_TYPES_H -#define __UM_KMAP_TYPES_H - -/* No more #include "asm/arch/kmap_types.h" ! 
*/ - -#define KM_TYPE_NR 14 - -#endif --- a/include/asm-generic/Kbuild +++ b/include/asm-generic/Kbuild @@ -30,7 +30,6 @@ mandatory-y += irq.h mandatory-y += irq_regs.h mandatory-y += irq_work.h mandatory-y += kdebug.h -mandatory-y += kmap_types.h mandatory-y += kmap_size.h mandatory-y += kprobes.h mandatory-y += linkage.h --- a/include/asm-generic/kmap_types.h +++ /dev/null @@ -1,11 +0,0 @@ -/* SPDX-License-Identifier: GPL-2.0 */ -#ifndef _ASM_GENERIC_KMAP_TYPES_H -#define _ASM_GENERIC_KMAP_TYPES_H - -#ifdef __WITH_KM_FENCE -# define KM_TYPE_NR 41 -#else -# define KM_TYPE_NR 20 -#endif - -#endif --- a/include/linux/highmem.h +++ b/include/linux/highmem.h @@ -29,8 +29,6 @@ static inline void invalidate_kernel_vma } #endif -#include <asm/kmap_types.h> - /* * Outside of CONFIG_HIGHMEM to support X86 32bit iomap_atomic() cruft. */
Thomas Gleixner
2020-Nov-03 09:27 UTC
[patch V3 19/37] mm/highmem: Remove the old kmap_atomic cruft
All users gone. Signed-off-by: Thomas Gleixner <tglx at linutronix.de> --- include/linux/highmem.h | 63 +++--------------------------------------------- mm/highmem.c | 7 ----- 2 files changed, 5 insertions(+), 65 deletions(-) --- a/include/linux/highmem.h +++ b/include/linux/highmem.h @@ -86,31 +86,16 @@ static inline void kunmap(struct page *p * be used in IRQ contexts, so in some (very limited) cases we need * it. */ - -#ifndef CONFIG_KMAP_LOCAL -void *kmap_atomic_high_prot(struct page *page, pgprot_t prot); -void kunmap_atomic_high(void *kvaddr); - static inline void *kmap_atomic_prot(struct page *page, pgprot_t prot) { preempt_disable(); pagefault_disable(); - if (!PageHighMem(page)) - return page_address(page); - return kmap_atomic_high_prot(page, prot); -} - -static inline void __kunmap_atomic(void *vaddr) -{ - kunmap_atomic_high(vaddr); + return __kmap_local_page_prot(page, prot); } -#else /* !CONFIG_KMAP_LOCAL */ -static inline void *kmap_atomic_prot(struct page *page, pgprot_t prot) +static inline void *kmap_atomic(struct page *page) { - preempt_disable(); - pagefault_disable(); - return __kmap_local_page_prot(page, prot); + return kmap_atomic_prot(page, kmap_prot); } static inline void *kmap_atomic_pfn(unsigned long pfn) @@ -125,13 +110,6 @@ static inline void __kunmap_atomic(void kunmap_local_indexed(addr); } -#endif /* CONFIG_KMAP_LOCAL */ - -static inline void *kmap_atomic(struct page *page) -{ - return kmap_atomic_prot(page, kmap_prot); -} - /* declarations for linux/mm/highmem.c */ unsigned int nr_free_highpages(void); extern atomic_long_t _totalhigh_pages; @@ -212,41 +190,8 @@ static inline void __kunmap_atomic(void #define kmap_flush_unused() do {} while(0) -#endif /* CONFIG_HIGHMEM */ - -#if !defined(CONFIG_KMAP_LOCAL) -#if defined(CONFIG_HIGHMEM) - -DECLARE_PER_CPU(int, __kmap_atomic_idx); - -static inline int kmap_atomic_idx_push(void) -{ - int idx = __this_cpu_inc_return(__kmap_atomic_idx) - 1; - -#ifdef CONFIG_DEBUG_HIGHMEM - WARN_ON_ONCE(in_irq() && !irqs_disabled()); - BUG_ON(idx >= KM_TYPE_NR); -#endif - return idx; -} - -static inline int kmap_atomic_idx(void) -{ - return __this_cpu_read(__kmap_atomic_idx) - 1; -} -static inline void kmap_atomic_idx_pop(void) -{ -#ifdef CONFIG_DEBUG_HIGHMEM - int idx = __this_cpu_dec_return(__kmap_atomic_idx); - - BUG_ON(idx < 0); -#else - __this_cpu_dec(__kmap_atomic_idx); -#endif -} -#endif -#endif +#endif /* CONFIG_HIGHMEM */ /* * Prevent people trying to call kunmap_atomic() as if it were kunmap() --- a/mm/highmem.c +++ b/mm/highmem.c @@ -31,12 +31,6 @@ #include <asm/tlbflush.h> #include <linux/vmalloc.h> -#ifndef CONFIG_KMAP_LOCAL -#ifdef CONFIG_HIGHMEM -DEFINE_PER_CPU(int, __kmap_atomic_idx); -#endif -#endif - /* * Virtual_count is not a pure "count". * 0 means that it is not mapped, and has not been mapped @@ -410,6 +404,7 @@ static inline void kmap_local_idx_pop(vo #ifndef arch_kmap_local_post_map # define arch_kmap_local_post_map(vaddr, pteval) do { } while (0) #endif + #ifndef arch_kmap_local_pre_unmap # define arch_kmap_local_pre_unmap(vaddr) do { } while (0) #endif
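With the old per-CPU index machinery gone, kmap_atomic()/kunmap_atomic() keep their calling convention. A minimal usage sketch (the function name is made up), including a note on the misuse which the BUILD_BUG_ON() in the kunmap_atomic() wrapper still catches:

static void sketch_clear_page_atomic(struct page *page)
{
	void *addr = kmap_atomic(page);

	memset(addr, 0, PAGE_SIZE);
	/*
	 * Must pass the returned address, not the page; handing the page
	 * to kunmap_atomic() is rejected at build time by its
	 * BUILD_BUG_ON(__same_type(...)) check.
	 */
	kunmap_atomic(addr);
}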
Switch the atomic iomap implementation over to kmap_local and stick the preempt/pagefault mechanics into the generic code similar to the kmap_atomic variants. Rename the x86 map function in preparation for a non-atomic variant. Signed-off-by: Thomas Gleixner <tglx at linutronix.de> --- V2: New patch to make review easier --- arch/x86/include/asm/iomap.h | 9 +-------- arch/x86/mm/iomap_32.c | 6 ++---- include/linux/io-mapping.h | 8 ++++++-- 3 files changed, 9 insertions(+), 14 deletions(-) --- a/arch/x86/include/asm/iomap.h +++ b/arch/x86/include/asm/iomap.h @@ -13,14 +13,7 @@ #include <asm/cacheflush.h> #include <asm/tlbflush.h> -void __iomem *iomap_atomic_pfn_prot(unsigned long pfn, pgprot_t prot); - -static inline void iounmap_atomic(void __iomem *vaddr) -{ - kunmap_local_indexed((void __force *)vaddr); - pagefault_enable(); - preempt_enable(); -} +void __iomem *__iomap_local_pfn_prot(unsigned long pfn, pgprot_t prot); int iomap_create_wc(resource_size_t base, unsigned long size, pgprot_t *prot); --- a/arch/x86/mm/iomap_32.c +++ b/arch/x86/mm/iomap_32.c @@ -44,7 +44,7 @@ void iomap_free(resource_size_t base, un } EXPORT_SYMBOL_GPL(iomap_free); -void __iomem *iomap_atomic_pfn_prot(unsigned long pfn, pgprot_t prot) +void __iomem *__iomap_local_pfn_prot(unsigned long pfn, pgprot_t prot) { /* * For non-PAT systems, translate non-WB request to UC- just in @@ -60,8 +60,6 @@ void __iomem *iomap_atomic_pfn_prot(unsi /* Filter out unsupported __PAGE_KERNEL* bits: */ pgprot_val(prot) &= __default_kernel_pte_mask; - preempt_disable(); - pagefault_disable(); return (void __force __iomem *)__kmap_local_pfn_prot(pfn, prot); } -EXPORT_SYMBOL_GPL(iomap_atomic_pfn_prot); +EXPORT_SYMBOL_GPL(__iomap_local_pfn_prot); --- a/include/linux/io-mapping.h +++ b/include/linux/io-mapping.h @@ -69,13 +69,17 @@ io_mapping_map_atomic_wc(struct io_mappi BUG_ON(offset >= mapping->size); phys_addr = mapping->base + offset; - return iomap_atomic_pfn_prot(PHYS_PFN(phys_addr), mapping->prot); + preempt_disable(); + pagefault_disable(); + return __iomap_local_pfn_prot(PHYS_PFN(phys_addr), mapping->prot); } static inline void io_mapping_unmap_atomic(void __iomem *vaddr) { - iounmap_atomic(vaddr); + kunmap_local_indexed((void __force *)vaddr); + pagefault_enable(); + preempt_enable(); } static inline void __iomem *
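For context, typical driver usage of the io-mapping interfaces touched here looks roughly like the sketch below (the function name and parameters are made up; only the create/map_atomic/unmap/free calls are the real API, assuming <linux/io-mapping.h>):

static int sketch_poke_aperture(resource_size_t base, unsigned long size,
				unsigned long offset, u32 value)
{
	struct io_mapping *map = io_mapping_create_wc(base, size);
	void __iomem *vaddr;

	if (!map)
		return -ENOMEM;

	/* short, non-sleeping access to one page of the WC aperture */
	vaddr = io_mapping_map_atomic_wc(map, offset & PAGE_MASK);
	writel(value, vaddr + (offset & ~PAGE_MASK));
	io_mapping_unmap_atomic(vaddr);

	io_mapping_free(map);
	return 0;
}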
Thomas Gleixner
2020-Nov-03 09:27 UTC
[patch V3 21/37] Documentation/io-mapping: Remove outdated blurb
The implementation details in the documentation are outdated and not really helpful. Remove them. Signed-off-by: Thomas Gleixner <tglx at linutronix.de> --- V3: New patch --- Documentation/driver-api/io-mapping.rst | 22 ---------------------- 1 file changed, 22 deletions(-) --- a/Documentation/driver-api/io-mapping.rst +++ b/Documentation/driver-api/io-mapping.rst @@ -73,25 +73,3 @@ for pages mapped with io_mapping_map_wc. At driver close time, the io_mapping object must be freed:: void io_mapping_free(struct io_mapping *mapping) - -Current Implementation -=====================- -The initial implementation of these functions uses existing mapping -mechanisms and so provides only an abstraction layer and no new -functionality. - -On 64-bit processors, io_mapping_create_wc calls ioremap_wc for the whole -range, creating a permanent kernel-visible mapping to the resource. The -map_atomic and map functions add the requested offset to the base of the -virtual address returned by ioremap_wc. - -On 32-bit processors with HIGHMEM defined, io_mapping_map_atomic_wc uses -kmap_atomic_pfn to map the specified page in an atomic fashion; -kmap_atomic_pfn isn't really supposed to be used with device pages, but it -provides an efficient mapping for this usage. - -On 32-bit processors without HIGHMEM defined, io_mapping_map_atomic_wc and -io_mapping_map_wc both use ioremap_wc, a terribly inefficient function which -performs an IPI to inform all processors about the new mapping. This results -in a significant performance penalty.
Thomas Gleixner
2020-Nov-03 09:27 UTC
[patch V3 22/37] highmem: Hide implementation details and document API
Move the gory details of kmap & al into a private header and only document the interfaces which are usable by drivers. Signed-off-by: Thomas Gleixner <tglx at linutronix.de> --- V3: New patch --- include/linux/highmem-internal.h | 174 +++++++++++++++++++++++++ include/linux/highmem.h | 270 ++++++++++++++------------------------- mm/highmem.c | 11 - 3 files changed, 276 insertions(+), 179 deletions(-) --- /dev/null +++ b/include/linux/highmem-internal.h @@ -0,0 +1,174 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +#ifndef _LINUX_HIGHMEM_INTERNAL_H +#define _LINUX_HIGHMEM_INTERNAL_H + +/* + * Outside of CONFIG_HIGHMEM to support X86 32bit iomap_atomic() cruft. + */ +#ifdef CONFIG_KMAP_LOCAL +void *__kmap_local_pfn_prot(unsigned long pfn, pgprot_t prot); +void *__kmap_local_page_prot(struct page *page, pgprot_t prot); +void kunmap_local_indexed(void *vaddr); +#endif + +#ifdef CONFIG_HIGHMEM +#include <asm/highmem.h> + +#ifndef ARCH_HAS_KMAP_FLUSH_TLB +static inline void kmap_flush_tlb(unsigned long addr) { } +#endif + +#ifndef kmap_prot +#define kmap_prot PAGE_KERNEL +#endif + +void *kmap_high(struct page *page); +void kunmap_high(struct page *page); +void __kmap_flush_unused(void); +struct page *__kmap_to_page(void *addr); + +static inline void *kmap(struct page *page) +{ + void *addr; + + might_sleep(); + if (!PageHighMem(page)) + addr = page_address(page); + else + addr = kmap_high(page); + kmap_flush_tlb((unsigned long)addr); + return addr; +} + +static inline void kunmap(struct page *page) +{ + might_sleep(); + if (!PageHighMem(page)) + return; + kunmap_high(page); +} + +static inline struct page *kmap_to_page(void *addr) +{ + return __kmap_to_page(addr); +} + +static inline void kmap_flush_unused(void) +{ + __kmap_flush_unused(); +} + +static inline void *kmap_atomic_prot(struct page *page, pgprot_t prot) +{ + preempt_disable(); + pagefault_disable(); + return __kmap_local_page_prot(page, prot); +} + +static inline void *kmap_atomic(struct page *page) +{ + return kmap_atomic_prot(page, kmap_prot); +} + +static inline void *kmap_atomic_pfn(unsigned long pfn) +{ + preempt_disable(); + pagefault_disable(); + return __kmap_local_pfn_prot(pfn, kmap_prot); +} + +static inline void __kunmap_atomic(void *addr) +{ + kunmap_local_indexed(addr); + pagefault_enable(); + preempt_enable(); +} + +unsigned int __nr_free_highpages(void); +extern atomic_long_t _totalhigh_pages; + +static inline unsigned int nr_free_highpages(void) +{ + return __nr_free_highpages(); +} + +static inline unsigned long totalhigh_pages(void) +{ + return (unsigned long)atomic_long_read(&_totalhigh_pages); +} + +static inline void totalhigh_pages_inc(void) +{ + atomic_long_inc(&_totalhigh_pages); +} + +static inline void totalhigh_pages_add(long count) +{ + atomic_long_add(count, &_totalhigh_pages); +} + +#else /* CONFIG_HIGHMEM */ + +static inline struct page *kmap_to_page(void *addr) +{ + return virt_to_page(addr); +} + +static inline void *kmap(struct page *page) +{ + might_sleep(); + return page_address(page); +} + +static inline void kunmap_high(struct page *page) { } +static inline void kmap_flush_unused(void) { } + +static inline void kunmap(struct page *page) +{ +#ifdef ARCH_HAS_FLUSH_ON_KUNMAP + kunmap_flush_on_unmap(page_address(page)); +#endif +} + +static inline void *kmap_atomic(struct page *page) +{ + preempt_disable(); + pagefault_disable(); + return page_address(page); +} + +static inline void *kmap_atomic_prot(struct page *page, pgprot_t prot) +{ + return kmap_atomic(page); +} + +static inline void 
*kmap_atomic_pfn(unsigned long pfn) +{ + return kmap_atomic(pfn_to_page(pfn)); +} + +static inline void __kunmap_atomic(void *addr) +{ +#ifdef ARCH_HAS_FLUSH_ON_KUNMAP + kunmap_flush_on_unmap(addr); +#endif + pagefault_enable(); + preempt_enable(); +} + +static inline unsigned int nr_free_highpages(void) { return 0; } +static inline unsigned long totalhigh_pages(void) { return 0UL; } + +#endif /* CONFIG_HIGHMEM */ + +/* + * Prevent people trying to call kunmap_atomic() as if it were kunmap() + * kunmap_atomic() should get the return value of kmap_atomic, not the page. + */ +#define kunmap_atomic(__addr) \ +do { \ + BUILD_BUG_ON(__same_type((__addr), struct page *)); \ + __kunmap_atomic(__addr); \ +} while (0) + +#endif --- a/include/linux/highmem.h +++ b/include/linux/highmem.h @@ -11,199 +11,125 @@ #include <asm/cacheflush.h> -#ifndef ARCH_HAS_FLUSH_ANON_PAGE -static inline void flush_anon_page(struct vm_area_struct *vma, struct page *page, unsigned long vmaddr) -{ -} -#endif +#include "highmem-internal.h" -#ifndef ARCH_HAS_FLUSH_KERNEL_DCACHE_PAGE -static inline void flush_kernel_dcache_page(struct page *page) -{ -} -static inline void flush_kernel_vmap_range(void *vaddr, int size) -{ -} -static inline void invalidate_kernel_vmap_range(void *vaddr, int size) -{ -} -#endif - -/* - * Outside of CONFIG_HIGHMEM to support X86 32bit iomap_atomic() cruft. +/** + * kmap - Map a page for long term usage + * @page: Pointer to the page to be mapped + * + * Returns: The virtual address of the mapping + * + * Can only be invoked from preemptible task context because on 32bit + * systems with CONFIG_HIGHMEM enabled this function might sleep. + * + * For systems with CONFIG_HIGHMEM=n and for pages in the low memory area + * this returns the virtual address of the direct kernel mapping. + * + * The returned virtual address is globally visible and valid up to the + * point where it is unmapped via kunmap(). The pointer can be handed to + * other contexts. + * + * For highmem pages on 32bit systems this can be slow as the mapping space + * is limited and protected by a global lock. In case that there is no + * mapping slot available the function blocks until a slot is released via + * kunmap(). */ -#ifdef CONFIG_KMAP_LOCAL -void *__kmap_local_pfn_prot(unsigned long pfn, pgprot_t prot); -void *__kmap_local_page_prot(struct page *page, pgprot_t prot); -void kunmap_local_indexed(void *vaddr); -#endif - -#ifdef CONFIG_HIGHMEM -#include <asm/highmem.h> +static inline void *kmap(struct page *page); -#ifndef ARCH_HAS_KMAP_FLUSH_TLB -static inline void kmap_flush_tlb(unsigned long addr) { } -#endif - -#ifndef kmap_prot -#define kmap_prot PAGE_KERNEL -#endif - -void *kmap_high(struct page *page); -static inline void *kmap(struct page *page) -{ - void *addr; - - might_sleep(); - if (!PageHighMem(page)) - addr = page_address(page); - else - addr = kmap_high(page); - kmap_flush_tlb((unsigned long)addr); - return addr; -} - -void kunmap_high(struct page *page); - -static inline void kunmap(struct page *page) -{ - might_sleep(); - if (!PageHighMem(page)) - return; - kunmap_high(page); -} - -/* - * kmap_atomic/kunmap_atomic is significantly faster than kmap/kunmap because - * no global lock is needed and because the kmap code must perform a global TLB - * invalidation when the kmap pool wraps. - * - * However when holding an atomic kmap it is not legal to sleep, so atomic - * kmaps are appropriate for short, tight code paths only. 
- * - * The use of kmap_atomic/kunmap_atomic is discouraged - kmap/kunmap - * gives a more generic (and caching) interface. But kmap_atomic can - * be used in IRQ contexts, so in some (very limited) cases we need - * it. +/** + * kunmap - Unmap the virtual address mapped by kmap() + * @addr: Virtual address to be unmapped + * + * Counterpart to kmap(). A NOOP for CONFIG_HIGHMEM=n and for mappings of + * pages in the low memory area. */ -static inline void *kmap_atomic_prot(struct page *page, pgprot_t prot) -{ - preempt_disable(); - pagefault_disable(); - return __kmap_local_page_prot(page, prot); -} - -static inline void *kmap_atomic(struct page *page) -{ - return kmap_atomic_prot(page, kmap_prot); -} - -static inline void *kmap_atomic_pfn(unsigned long pfn) -{ - preempt_disable(); - pagefault_disable(); - return __kmap_local_pfn_prot(pfn, kmap_prot); -} - -static inline void __kunmap_atomic(void *addr) -{ - kunmap_local_indexed(addr); -} - -/* declarations for linux/mm/highmem.c */ -unsigned int nr_free_highpages(void); -extern atomic_long_t _totalhigh_pages; -static inline unsigned long totalhigh_pages(void) -{ - return (unsigned long)atomic_long_read(&_totalhigh_pages); -} +static inline void kunmap(struct page *page); -static inline void totalhigh_pages_inc(void) -{ - atomic_long_inc(&_totalhigh_pages); -} - -static inline void totalhigh_pages_add(long count) -{ - atomic_long_add(count, &_totalhigh_pages); -} - -void kmap_flush_unused(void); - -struct page *kmap_to_page(void *addr); - -#else /* CONFIG_HIGHMEM */ +/** + * kmap_to_page - Get the page for a kmap'ed address + * @addr: The address to look up + * + * Returns: The page which is mapped to @addr. + */ +static inline struct page *kmap_to_page(void *addr); -static inline unsigned int nr_free_highpages(void) { return 0; } +/** + * kmap_flush_unused - Flush all unused kmap mappings in order to + * remove stray mappings + */ +static inline void kmap_flush_unused(void); -static inline struct page *kmap_to_page(void *addr) -{ - return virt_to_page(addr); -} +/** + * kmap_atomic - Atomically map a page for temporary usage + * @page: Pointer to the page to be mapped + * + * Returns: The virtual address of the mapping + * + * Side effect: On return pagefaults and preemption are disabled. + * + * Can be invoked from any context. + * + * Requires careful handling when nesting multiple mappings because the map + * management is stack based. The unmap has to be in the reverse order of + * the map operation: + * + * addr1 = kmap_atomic(page1); + * addr2 = kmap_atomic(page2); + * ... + * kunmap_atomic(addr2); + * kunmap_atomic(addr1); + * + * Unmapping addr1 before addr2 is invalid and causes malfunction. + * + * Contrary to kmap() mappings the mapping is only valid in the context of + * the caller and cannot be handed to other contexts. + * + * On CONFIG_HIGHMEM=n kernels and for low memory pages this returns the + * virtual address of the direct mapping. Only real highmem pages are + * temporarily mapped. + * + * While it is significantly faster than kmap() it comes with restrictions + * about the pointer validity and the side effects of disabling page faults + * and preemption. Use it only when absolutely necessary, e.g. from non + * preemptible contexts. + */ +static inline void *kmap_atomic(struct page *page); -static inline unsigned long totalhigh_pages(void) { return 0UL; } +/** + * kunmap_atomic - Unmap the virtual address mapped by kmap_atomic() + * @addr: Virtual address to be unmapped + * + * Counterpart to kmap_atomic(). 
+ * + * Undoes the side effects of kmap_atomic(), i.e. reenabling pagefaults and + * preemption. + * + * Other than that a NOOP for CONFIG_HIGHMEM=n and for mappings of pages + * in the low memory area. For real highmen pages the mapping which was + * established with kmap_atomic() is destroyed. + */ -static inline void *kmap(struct page *page) -{ - might_sleep(); - return page_address(page); -} +/* Highmem related interfaces for management code */ +static inline unsigned int nr_free_highpages(void); +static inline unsigned long totalhigh_pages(void); -static inline void kunmap_high(struct page *page) +#ifndef ARCH_HAS_FLUSH_ANON_PAGE +static inline void flush_anon_page(struct vm_area_struct *vma, struct page *page, unsigned long vmaddr) { } - -static inline void kunmap(struct page *page) -{ -#ifdef ARCH_HAS_FLUSH_ON_KUNMAP - kunmap_flush_on_unmap(page_address(page)); #endif -} -static inline void *kmap_atomic(struct page *page) +#ifndef ARCH_HAS_FLUSH_KERNEL_DCACHE_PAGE +static inline void flush_kernel_dcache_page(struct page *page) { - preempt_disable(); - pagefault_disable(); - return page_address(page); } - -static inline void *kmap_atomic_prot(struct page *page, pgprot_t prot) +static inline void flush_kernel_vmap_range(void *vaddr, int size) { - return kmap_atomic(page); } - -static inline void *kmap_atomic_pfn(unsigned long pfn) +static inline void invalidate_kernel_vmap_range(void *vaddr, int size) { - return kmap_atomic(pfn_to_page(pfn)); } - -static inline void __kunmap_atomic(void *addr) -{ - /* - * Mostly nothing to do in the CONFIG_HIGHMEM=n case as kunmap_atomic() - * handles re-enabling faults and preemption - */ -#ifdef ARCH_HAS_FLUSH_ON_KUNMAP - kunmap_flush_on_unmap(addr); #endif -} - -#define kmap_flush_unused() do {} while(0) - - -#endif /* CONFIG_HIGHMEM */ - -/* - * Prevent people trying to call kunmap_atomic() as if it were kunmap() - * kunmap_atomic() should get the return value of kmap_atomic, not the page. - */ -#define kunmap_atomic(__addr) \ -do { \ - BUILD_BUG_ON(__same_type((__addr), struct page *)); \ - __kunmap_atomic(__addr); \ - pagefault_enable(); \ - preempt_enable(); \ -} while (0) /* when CONFIG_HIGHMEM is not set these will be plain clear/copy_page */ #ifndef clear_user_highpage --- a/mm/highmem.c +++ b/mm/highmem.c @@ -104,7 +104,7 @@ static inline wait_queue_head_t *get_pkm atomic_long_t _totalhigh_pages __read_mostly; EXPORT_SYMBOL(_totalhigh_pages); -unsigned int nr_free_highpages (void) +unsigned int __nr_free_highpages (void) { struct zone *zone; unsigned int pages = 0; @@ -141,7 +141,7 @@ pte_t * pkmap_page_table; do { spin_unlock(&kmap_lock); (void)(flags); } while (0) #endif -struct page *kmap_to_page(void *vaddr) +struct page *__kmap_to_page(void *vaddr) { unsigned long addr = (unsigned long)vaddr; @@ -152,7 +152,7 @@ struct page *kmap_to_page(void *vaddr) return virt_to_page(addr); } -EXPORT_SYMBOL(kmap_to_page); +EXPORT_SYMBOL(__kmap_to_page); static void flush_all_zero_pkmaps(void) { @@ -194,10 +194,7 @@ static void flush_all_zero_pkmaps(void) flush_tlb_kernel_range(PKMAP_ADDR(0), PKMAP_ADDR(LAST_PKMAP)); } -/** - * kmap_flush_unused - flush all unused kmap mappings in order to remove stray mappings - */ -void kmap_flush_unused(void) +void __kmap_flush_unused(void) { lock_kmap(); flush_all_zero_pkmaps();
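As a usage contrast to the kmap_atomic() example in the kernel-doc above, here is a sketch of a case where the sleeping kmap() variant is the right tool because the copy can fault (the function name is made up; assumes <linux/highmem.h> and <linux/uaccess.h>, and len <= PAGE_SIZE):

static int sketch_fill_page_from_user(struct page *page,
				      const void __user *src, size_t len)
{
	void *addr = kmap(page);	/* may sleep, valid until kunmap() */
	int ret = 0;

	if (copy_from_user(addr, src, len))
		ret = -EFAULT;

	kunmap(page);			/* kunmap() takes the page, not the address */
	return ret;
}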
Thomas Gleixner
2020-Nov-03 09:27 UTC
[patch V3 23/37] sched: Make migrate_disable/enable() independent of RT
Now that the scheduler can deal with migrate disable properly, there is no real compelling reason to make it only available for RT. There are quite some code pathes which needlessly disable preemption in order to prevent migration and some constructs like kmap_atomic() enforce it implicitly. Making it available independent of RT allows to provide a preemptible variant of kmap_atomic() and makes the code more consistent in general. FIXME: Rework the comment in preempt.h Signed-off-by: Thomas Gleixner <tglx at linutronix.de> Cc: Peter Zijlstra <peterz at infradead.org> Cc: Ingo Molnar <mingo at kernel.org> Cc: Juri Lelli <juri.lelli at redhat.com> Cc: Vincent Guittot <vincent.guittot at linaro.org> Cc: Dietmar Eggemann <dietmar.eggemann at arm.com> Cc: Steven Rostedt <rostedt at goodmis.org> Cc: Ben Segall <bsegall at google.com> Cc: Mel Gorman <mgorman at suse.de> Cc: Daniel Bristot de Oliveira <bristot at redhat.com> --- include/linux/kernel.h | 21 ++++++++++++++------- include/linux/preempt.h | 38 +++----------------------------------- include/linux/sched.h | 2 +- kernel/sched/core.c | 45 +++++++++++++++++++++++++++++++++++---------- kernel/sched/sched.h | 4 ++-- lib/smp_processor_id.c | 2 +- 6 files changed, 56 insertions(+), 56 deletions(-) --- a/include/linux/kernel.h +++ b/include/linux/kernel.h @@ -204,6 +204,7 @@ extern int _cond_resched(void); extern void ___might_sleep(const char *file, int line, int preempt_offset); extern void __might_sleep(const char *file, int line, int preempt_offset); extern void __cant_sleep(const char *file, int line, int preempt_offset); +extern void __cant_migrate(const char *file, int line); /** * might_sleep - annotation for functions that can sleep @@ -227,6 +228,18 @@ extern void __cant_sleep(const char *fil # define cant_sleep() \ do { __cant_sleep(__FILE__, __LINE__, 0); } while (0) # define sched_annotate_sleep() (current->task_state_change = 0) + +/** + * cant_migrate - annotation for functions that cannot migrate + * + * Will print a stack trace if executed in code which is migratable + */ +# define cant_migrate() \ + do { \ + if (IS_ENABLED(CONFIG_SMP)) \ + __cant_migrate(__FILE__, __LINE__); \ + } while (0) + /** * non_block_start - annotate the start of section where sleeping is prohibited * @@ -251,6 +264,7 @@ extern void __cant_sleep(const char *fil int preempt_offset) { } # define might_sleep() do { might_resched(); } while (0) # define cant_sleep() do { } while (0) +# define cant_migrate() do { } while (0) # define sched_annotate_sleep() do { } while (0) # define non_block_start() do { } while (0) # define non_block_end() do { } while (0) @@ -258,13 +272,6 @@ extern void __cant_sleep(const char *fil #define might_sleep_if(cond) do { if (cond) might_sleep(); } while (0) -#ifndef CONFIG_PREEMPT_RT -# define cant_migrate() cant_sleep() -#else - /* Placeholder for now */ -# define cant_migrate() do { } while (0) -#endif - /** * abs - return absolute value of an argument * @x: the value. If it is unsigned type, it is converted to signed type first. --- a/include/linux/preempt.h +++ b/include/linux/preempt.h @@ -322,7 +322,7 @@ static inline void preempt_notifier_init #endif -#if defined(CONFIG_SMP) && defined(CONFIG_PREEMPT_RT) +#ifdef CONFIG_SMP /* * Migrate-Disable and why it is undesired. 
@@ -382,43 +382,11 @@ static inline void preempt_notifier_init extern void migrate_disable(void); extern void migrate_enable(void); -#elif defined(CONFIG_PREEMPT_RT) +#else static inline void migrate_disable(void) { } static inline void migrate_enable(void) { } -#else /* !CONFIG_PREEMPT_RT */ - -/** - * migrate_disable - Prevent migration of the current task - * - * Maps to preempt_disable() which also disables preemption. Use - * migrate_disable() to annotate that the intent is to prevent migration, - * but not necessarily preemption. - * - * Can be invoked nested like preempt_disable() and needs the corresponding - * number of migrate_enable() invocations. - */ -static __always_inline void migrate_disable(void) -{ - preempt_disable(); -} - -/** - * migrate_enable - Allow migration of the current task - * - * Counterpart to migrate_disable(). - * - * As migrate_disable() can be invoked nested, only the outermost invocation - * reenables migration. - * - * Currently mapped to preempt_enable(). - */ -static __always_inline void migrate_enable(void) -{ - preempt_enable(); -} - -#endif /* CONFIG_SMP && CONFIG_PREEMPT_RT */ +#endif /* CONFIG_SMP */ #endif /* __LINUX_PREEMPT_H */ --- a/include/linux/sched.h +++ b/include/linux/sched.h @@ -715,7 +715,7 @@ struct task_struct { const cpumask_t *cpus_ptr; cpumask_t cpus_mask; void *migration_pending; -#if defined(CONFIG_SMP) && defined(CONFIG_PREEMPT_RT) +#ifdef CONFIG_SMP unsigned short migration_disabled; #endif unsigned short migration_flags; --- a/kernel/sched/core.c +++ b/kernel/sched/core.c @@ -1696,8 +1696,6 @@ void check_preempt_curr(struct rq *rq, s #ifdef CONFIG_SMP -#ifdef CONFIG_PREEMPT_RT - static void __do_set_cpus_allowed(struct task_struct *p, const struct cpumask *new_mask, u32 flags); @@ -1772,8 +1770,6 @@ static inline bool rq_has_pinned_tasks(s return rq->nr_pinned; } -#endif - /* * Per-CPU kthreads are allowed to run on !active && online CPUs, see * __set_cpus_allowed_ptr() and select_fallback_rq(). 
@@ -2841,7 +2837,7 @@ void sched_set_stop_task(int cpu, struct } } -#else +#else /* CONFIG_SMP */ static inline int __set_cpus_allowed_ptr(struct task_struct *p, const struct cpumask *new_mask, @@ -2850,10 +2846,6 @@ static inline int __set_cpus_allowed_ptr return set_cpus_allowed_ptr(p, new_mask); } -#endif /* CONFIG_SMP */ - -#if !defined(CONFIG_SMP) || !defined(CONFIG_PREEMPT_RT) - static inline void migrate_disable_switch(struct rq *rq, struct task_struct *p) { } static inline bool rq_has_pinned_tasks(struct rq *rq) @@ -2861,7 +2853,7 @@ static inline bool rq_has_pinned_tasks(s return false; } -#endif +#endif /* !CONFIG_SMP */ static void ttwu_stat(struct task_struct *p, int cpu, int wake_flags) @@ -7883,6 +7875,39 @@ void __cant_sleep(const char *file, int add_taint(TAINT_WARN, LOCKDEP_STILL_OK); } EXPORT_SYMBOL_GPL(__cant_sleep); + +#ifdef CONFIG_SMP +void __cant_migrate(const char *file, int line) +{ + static unsigned long prev_jiffy; + + if (irqs_disabled()) + return; + + if (is_migration_disabled(current)) + return; + + if (!IS_ENABLED(CONFIG_PREEMPT_COUNT)) + return; + + if (preempt_count() > 0) + return; + + if (time_before(jiffies, prev_jiffy + HZ) && prev_jiffy) + return; + prev_jiffy = jiffies; + + pr_err("BUG: assuming non migratable context at %s:%d\n", file, line); + pr_err("in_atomic(): %d, irqs_disabled(): %d, migration_disabled() %u pid: %d, name: %s\n", + in_atomic(), irqs_disabled(), is_migration_disabled(current), + current->pid, current->comm); + + debug_show_held_locks(current); + dump_stack(); + add_taint(TAINT_WARN, LOCKDEP_STILL_OK); +} +EXPORT_SYMBOL_GPL(__cant_migrate); +#endif #endif #ifdef CONFIG_MAGIC_SYSRQ --- a/kernel/sched/sched.h +++ b/kernel/sched/sched.h @@ -1056,7 +1056,7 @@ struct rq { struct cpuidle_state *idle_state; #endif -#if defined(CONFIG_PREEMPT_RT) && defined(CONFIG_SMP) +#ifdef CONFIG_SMP unsigned int nr_pinned; #endif unsigned int push_busy; @@ -1092,7 +1092,7 @@ static inline int cpu_of(struct rq *rq) static inline bool is_migration_disabled(struct task_struct *p) { -#if defined(CONFIG_SMP) && defined(CONFIG_PREEMPT_RT) +#ifdef CONFIG_SMP return p->migration_disabled; #else return false; --- a/lib/smp_processor_id.c +++ b/lib/smp_processor_id.c @@ -26,7 +26,7 @@ unsigned int check_preemption_disabled(c if (current->nr_cpus_allowed == 1) goto out; -#if defined(CONFIG_SMP) && defined(CONFIG_PREEMPT_RT) +#ifdef CONFIG_SMP if (current->migration_disabled) goto out; #endif
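A sketch of what this buys outside of RT: a section which has to stay on its CPU but may still be preempted, plus the new cant_migrate() annotation for helpers which rely on the caller having done that (function names are made up):

static void sketch_pinned_section(void)
{
	int cpu;

	migrate_disable();		/* stays on this CPU, preemption remains enabled */
	cpu = smp_processor_id();	/* no debug splat: migration is disabled */
	/*
	 * May be preempted or even sleep here, but resumes on @cpu, so
	 * per-CPU state indexed by @cpu stays valid. Serialization against
	 * other tasks running on this CPU is still the caller's problem.
	 */
	pr_info("pinned work on CPU%d\n", cpu);
	migrate_enable();
}

static void sketch_percpu_helper(void)
{
	cant_migrate();			/* warns if the caller can still migrate */
	/* ... work on per-CPU state ... */
}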
Thomas Gleixner
2020-Nov-03 09:27 UTC
[patch V3 24/37] sched: highmem: Store local kmaps in task struct
Instead of storing the map per CPU provide and use per task storage. That prepares for local kmaps which are preemptible. The context switch code is preparatory and not yet in use because kmap_atomic() runs with preemption disabled. Will be made usable in the next step. The context switch logic is safe even when an interrupt happens after clearing or before restoring the kmaps. The kmap index in task struct is not modified so any nesting kmap in an interrupt will use unused indices and on return the counter is the same as before. Also add an assert into the return to user space code. Going back to user space with an active kmap local is a nono. Signed-off-by: Thomas Gleixner <tglx at linutronix.de> --- V3: Handle the debug case correctly --- include/linux/highmem-internal.h | 10 +++ include/linux/sched.h | 9 +++ kernel/entry/common.c | 2 kernel/fork.c | 1 kernel/sched/core.c | 18 +++++++ mm/highmem.c | 99 +++++++++++++++++++++++++++++++++++---- 6 files changed, 129 insertions(+), 10 deletions(-) --- a/include/linux/highmem-internal.h +++ b/include/linux/highmem-internal.h @@ -9,6 +9,16 @@ void *__kmap_local_pfn_prot(unsigned long pfn, pgprot_t prot); void *__kmap_local_page_prot(struct page *page, pgprot_t prot); void kunmap_local_indexed(void *vaddr); +void kmap_local_fork(struct task_struct *tsk); +void __kmap_local_sched_out(void); +void __kmap_local_sched_in(void); +static inline void kmap_assert_nomap(void) +{ + DEBUG_LOCKS_WARN_ON(current->kmap_ctrl.idx); +} +#else +static inline void kmap_local_fork(struct task_struct *tsk) { } +static inline void kmap_assert_nomap(void) { } #endif #ifdef CONFIG_HIGHMEM --- a/include/linux/sched.h +++ b/include/linux/sched.h @@ -34,6 +34,7 @@ #include <linux/rseq.h> #include <linux/seqlock.h> #include <linux/kcsan.h> +#include <asm/kmap_size.h> /* task_struct member predeclarations (sorted alphabetically): */ struct audit_context; @@ -629,6 +630,13 @@ struct wake_q_node { struct wake_q_node *next; }; +struct kmap_ctrl { +#ifdef CONFIG_KMAP_LOCAL + int idx; + pte_t pteval[KM_TYPE_NR]; +#endif +}; + struct task_struct { #ifdef CONFIG_THREAD_INFO_IN_TASK /* @@ -1294,6 +1302,7 @@ struct task_struct { unsigned int sequential_io; unsigned int sequential_io_avg; #endif + struct kmap_ctrl kmap_ctrl; #ifdef CONFIG_DEBUG_ATOMIC_SLEEP unsigned long task_state_change; #endif --- a/kernel/entry/common.c +++ b/kernel/entry/common.c @@ -2,6 +2,7 @@ #include <linux/context_tracking.h> #include <linux/entry-common.h> +#include <linux/highmem.h> #include <linux/livepatch.h> #include <linux/audit.h> @@ -194,6 +195,7 @@ static void exit_to_user_mode_prepare(st /* Ensure that the address limit is intact and no locks are held */ addr_limit_user_check(); + kmap_assert_nomap(); lockdep_assert_irqs_disabled(); lockdep_sys_exit(); } --- a/kernel/fork.c +++ b/kernel/fork.c @@ -930,6 +930,7 @@ static struct task_struct *dup_task_stru account_kernel_stack(tsk, 1); kcov_task_init(tsk); + kmap_local_fork(tsk); #ifdef CONFIG_FAULT_INJECTION tsk->fail_nth = 0; --- a/kernel/sched/core.c +++ b/kernel/sched/core.c @@ -4053,6 +4053,22 @@ static inline void finish_lock_switch(st # define finish_arch_post_lock_switch() do { } while (0) #endif +static inline void kmap_local_sched_out(void) +{ +#ifdef CONFIG_KMAP_LOCAL + if (unlikely(current->kmap_ctrl.idx)) + __kmap_local_sched_out(); +#endif +} + +static inline void kmap_local_sched_in(void) +{ +#ifdef CONFIG_KMAP_LOCAL + if (unlikely(current->kmap_ctrl.idx)) + __kmap_local_sched_in(); +#endif +} + /** * prepare_task_switch - prepare to 
switch tasks * @rq: the runqueue preparing to switch @@ -4075,6 +4091,7 @@ prepare_task_switch(struct rq *rq, struc perf_event_task_sched_out(prev, next); rseq_preempt(prev); fire_sched_out_preempt_notifiers(prev, next); + kmap_local_sched_out(); prepare_task(next); prepare_arch_switch(next); } @@ -4141,6 +4158,7 @@ static struct rq *finish_task_switch(str finish_lock_switch(rq); finish_arch_post_lock_switch(); kcov_finish_switch(current); + kmap_local_sched_in(); fire_sched_in_preempt_notifiers(current); /* --- a/mm/highmem.c +++ b/mm/highmem.c @@ -365,8 +365,6 @@ EXPORT_SYMBOL(kunmap_high); #include <asm/kmap_size.h> -static DEFINE_PER_CPU(int, __kmap_local_idx); - /* * With DEBUG_HIGHMEM the stack depth is doubled and every second * slot is unused which acts as a guard page @@ -379,23 +377,21 @@ static DEFINE_PER_CPU(int, __kmap_local_ static inline int kmap_local_idx_push(void) { - int idx = __this_cpu_add_return(__kmap_local_idx, KM_INCR) - 1; - WARN_ON_ONCE(in_irq() && !irqs_disabled()); - BUG_ON(idx >= KM_MAX_IDX); - return idx; + current->kmap_ctrl.idx += KM_INCR; + BUG_ON(current->kmap_ctrl.idx >= KM_TYPE_NR); + return current->kmap_ctrl.idx - 1; } static inline int kmap_local_idx(void) { - return __this_cpu_read(__kmap_local_idx) - 1; + return current->kmap_ctrl.idx - 1; } static inline void kmap_local_idx_pop(void) { - int idx = __this_cpu_sub_return(__kmap_local_idx, KM_INCR); - - BUG_ON(idx < 0); + current->kmap_ctrl.idx -= KM_INCR; + BUG_ON(current->kmap_ctrl.idx < 0); } #ifndef arch_kmap_local_post_map @@ -461,6 +457,7 @@ void *__kmap_local_pfn_prot(unsigned lon pteval = pfn_pte(pfn, prot); set_pte_at(&init_mm, vaddr, kmap_pte - idx, pteval); arch_kmap_local_post_map(vaddr, pteval); + current->kmap_ctrl.pteval[kmap_local_idx()] = pteval; preempt_enable(); return (void *)vaddr; @@ -505,10 +502,92 @@ void kunmap_local_indexed(void *vaddr) arch_kmap_local_pre_unmap(addr); pte_clear(&init_mm, addr, kmap_pte - idx); arch_kmap_local_post_unmap(addr); + current->kmap_ctrl.pteval[kmap_local_idx()] = __pte(0); kmap_local_idx_pop(); preempt_enable(); } EXPORT_SYMBOL(kunmap_local_indexed); + +/* + * Invoked before switch_to(). This is safe even when during or after + * clearing the maps an interrupt which needs a kmap_local happens because + * the task::kmap_ctrl.idx is not modified by the unmapping code so a + * nested kmap_local will use the next unused index and restore the index + * on unmap. The already cleared kmaps of the outgoing task are irrelevant + * because the interrupt context does not know about them. The same applies + * when scheduling back in for an interrupt which happens before the + * restore is complete. + */ +void __kmap_local_sched_out(void) +{ + struct task_struct *tsk = current; + pte_t *kmap_pte = kmap_get_pte(); + int i; + + /* Clear kmaps */ + for (i = 0; i < tsk->kmap_ctrl.idx; i++) { + pte_t pteval = tsk->kmap_ctrl.pteval[i]; + unsigned long addr; + int idx; + + /* With debug all even slots are unmapped and act as guard */ + if (IS_ENABLED(CONFIG_DEBUG_HIGHMEM) && !(i & 0x01)) { + WARN_ON_ONCE(!pte_none(pteval)); + continue; + } + if (WARN_ON_ONCE(pte_none(pteval))) + continue; + + /* + * This is a horrible hack for XTENSA to calculate the + * coloured PTE index. Uses the PFN encoded into the pteval + * and the map index calculation because the actual mapped + * virtual address is not stored in task::kmap_ctrl. + * For any sane architecture this is optimized out. 
+ */ + idx = arch_kmap_local_map_idx(i, pte_pfn(pteval)); + + addr = __fix_to_virt(FIX_KMAP_BEGIN + idx); + arch_kmap_local_pre_unmap(addr); + pte_clear(&init_mm, addr, kmap_pte - idx); + arch_kmap_local_post_unmap(addr); + } +} + +void __kmap_local_sched_in(void) +{ + struct task_struct *tsk = current; + pte_t *kmap_pte = kmap_get_pte(); + int i; + + /* Restore kmaps */ + for (i = 0; i < tsk->kmap_ctrl.idx; i++) { + pte_t pteval = tsk->kmap_ctrl.pteval[i]; + unsigned long addr; + int idx; + + /* With debug all even slots are unmapped and act as guard */ + if (IS_ENABLED(CONFIG_DEBUG_HIGHMEM) && !(i & 0x01)) { + WARN_ON_ONCE(!pte_none(pteval)); + continue; + } + if (WARN_ON_ONCE(pte_none(pteval))) + continue; + + /* See comment in __kmap_local_sched_out() */ + idx = arch_kmap_local_map_idx(i, pte_pfn(pteval)); + addr = __fix_to_virt(FIX_KMAP_BEGIN + idx); + set_pte_at(&init_mm, addr, kmap_pte - idx, pteval); + arch_kmap_local_post_map(addr, pteval); + } +} + +void kmap_local_fork(struct task_struct *tsk) +{ + if (WARN_ON_ONCE(tsk->kmap_ctrl.idx)) + memset(&tsk->kmap_ctrl, 0, sizeof(tsk->kmap_ctrl)); +} + #endif #if defined(HASHED_PAGE_VIRTUAL)
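To make the new assert concrete, consider a hypothetical buggy caller (not from this series; some_page() and do_stuff() are placeholders) using the kmap_local_page() interface that the next patch introduces. Leaking the temporary map across the return to user space is exactly what kmap_assert_nomap() flags in exit_to_user_mode_prepare():

	/* Hypothetical bug, for illustration only */
	static long example_ioctl(struct file *file, unsigned int cmd,
				  unsigned long arg)
	{
		void *p = kmap_local_page(some_page(file));	/* pushes a slot */

		do_stuff(p, cmd, arg);
		return 0;	/* kunmap_local(p) is missing, so
				 * current->kmap_ctrl.idx is still non-zero and
				 * kmap_assert_nomap() warns on the way back to
				 * user space */
	}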
Now that the kmap atomic index is stored in task struct provide a preemptible variant. On context switch the maps of an outgoing task are removed and the map of the incoming task are restored. That's obviously slow, but highmem is slow anyway. The kmap_local.*() functions can be invoked from both preemptible and atomic context. kmap local sections disable migration to keep the resulting virtual mapping address correct, but disable neither pagefaults nor preemption. A wholesale conversion of kmap_atomic to be fully preemptible is not possible because some of the usage sites might rely on the preemption disable for serialization or on the implicit pagefault disable. Needs to be done on a case by case basis. Signed-off-by: Thomas Gleixner <tglx at linutronix.de> --- V3: Move migrate disable into the actual highmem mapping code so it only affects real highmem mappings. V2: Make it more consistent and add commentry --- include/linux/highmem-internal.h | 48 +++++++++++++++++++++++++++++++++++++++ include/linux/highmem.h | 43 +++++++++++++++++++++------------- mm/highmem.c | 6 ++++ 3 files changed, 81 insertions(+), 16 deletions(-) --- a/include/linux/highmem-internal.h +++ b/include/linux/highmem-internal.h @@ -69,6 +69,26 @@ static inline void kmap_flush_unused(voi __kmap_flush_unused(); } +static inline void *kmap_local_page(struct page *page) +{ + return __kmap_local_page_prot(page, kmap_prot); +} + +static inline void *kmap_local_page_prot(struct page *page, pgprot_t prot) +{ + return __kmap_local_page_prot(page, prot); +} + +static inline void *kmap_local_pfn(unsigned long pfn) +{ + return __kmap_local_pfn_prot(pfn, kmap_prot); +} + +static inline void __kunmap_local(void *vaddr) +{ + kunmap_local_indexed(vaddr); +} + static inline void *kmap_atomic_prot(struct page *page, pgprot_t prot) { preempt_disable(); @@ -141,6 +161,28 @@ static inline void kunmap(struct page *p #endif } +static inline void *kmap_local_page(struct page *page) +{ + return page_address(page); +} + +static inline void *kmap_local_page_prot(struct page *page, pgprot_t prot) +{ + return kmap_local_page(page); +} + +static inline void *kmap_local_pfn(unsigned long pfn) +{ + return kmap_local_page(pfn_to_page(pfn)); +} + +static inline void __kunmap_local(void *addr) +{ +#ifdef ARCH_HAS_FLUSH_ON_KUNMAP + kunmap_flush_on_unmap(addr); +#endif +} + static inline void *kmap_atomic(struct page *page) { preempt_disable(); @@ -182,4 +224,10 @@ do { \ __kunmap_atomic(__addr); \ } while (0) +#define kunmap_local(__addr) \ +do { \ + BUILD_BUG_ON(__same_type((__addr), struct page *)); \ + __kunmap_local(__addr); \ +} while (0) + #endif --- a/include/linux/highmem.h +++ b/include/linux/highmem.h @@ -60,24 +60,22 @@ static inline struct page *kmap_to_page( static inline void kmap_flush_unused(void); /** - * kmap_atomic - Atomically map a page for temporary usage + * kmap_local_page - Map a page for temporary usage * @page: Pointer to the page to be mapped * * Returns: The virtual address of the mapping * - * Side effect: On return pagefaults and preemption are disabled. - * * Can be invoked from any context. * * Requires careful handling when nesting multiple mappings because the map * management is stack based. The unmap has to be in the reverse order of * the map operation: * - * addr1 = kmap_atomic(page1); - * addr2 = kmap_atomic(page2); + * addr1 = kmap_local_page(page1); + * addr2 = kmap_local_page(page2); * ... 
- * kunmap_atomic(addr2); - * kunmap_atomic(addr1); + * kunmap_local(addr2); + * kunmap_local(addr1); * * Unmapping addr1 before addr2 is invalid and causes malfunction. * @@ -88,10 +86,26 @@ static inline void kmap_flush_unused(voi * virtual address of the direct mapping. Only real highmem pages are * temporarily mapped. * - * While it is significantly faster than kmap() it comes with restrictions - * about the pointer validity and the side effects of disabling page faults - * and preemption. Use it only when absolutely necessary, e.g. from non - * preemptible contexts. + * While it is significantly faster than kmap() for the higmem case it + * comes with restrictions about the pointer validity. Only use when really + * necessary. + * + * On HIGHMEM enabled systems mapping a highmem page has the side effect of + * disabling migration in order to keep the virtual address stable across + * preemption. No caller of kmap_local_page() can rely on this side effect. + */ +static inline void *kmap_local_page(struct page *page); + +/** + * kmap_atomic - Atomically map a page for temporary usage - Deprecated! + * @page: Pointer to the page to be mapped + * + * Returns: The virtual address of the mapping + * + * Effectively a wrapper around kmap_local_page() which disables pagefaults + * and preemption. + * + * Do not use in new code. Use kmap_local_page() instead. */ static inline void *kmap_atomic(struct page *page); @@ -101,12 +115,9 @@ static inline void *kmap_atomic(struct p * * Counterpart to kmap_atomic(). * - * Undoes the side effects of kmap_atomic(), i.e. reenabling pagefaults and + * Effectively a wrapper around kunmap_local() which additionally undoes + * the side effects of kmap_atomic(), i.e. reenabling pagefaults and * preemption. - * - * Other than that a NOOP for CONFIG_HIGHMEM=n and for mappings of pages - * in the low memory area. For real highmen pages the mapping which was - * established with kmap_atomic() is destroyed. */ /* Highmem related interfaces for management code */ --- a/mm/highmem.c +++ b/mm/highmem.c @@ -450,6 +450,11 @@ void *__kmap_local_pfn_prot(unsigned lon unsigned long vaddr; int idx; + /* + * Disable migration so resulting virtual address is stable + * accross preemption. + */ + migrate_disable(); preempt_disable(); idx = arch_kmap_local_map_idx(kmap_local_idx_push(), pfn); vaddr = __fix_to_virt(FIX_KMAP_BEGIN + idx); @@ -505,6 +510,7 @@ void kunmap_local_indexed(void *vaddr) current->kmap_ctrl.pteval[kmap_local_idx()] = __pte(0); kmap_local_idx_pop(); preempt_enable(); + migrate_enable(); } EXPORT_SYMBOL(kunmap_local_indexed);
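As a usage sketch of the new interface (a hypothetical helper, not from the series; the kernel already provides copy_highpage() for this job), copying between two possibly-highmem pages now works from any preemptible context with page faults left enabled, as long as the unmaps are done in reverse order of the maps:

	#include <linux/highmem.h>

	static void example_copy_page(struct page *dst, struct page *src)
	{
		void *s = kmap_local_page(src);
		void *d = kmap_local_page(dst);

		memcpy(d, s, PAGE_SIZE);

		/* Stack based map management: unmap in reverse order */
		kunmap_local(d);
		kunmap_local(s);
	}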
Thomas Gleixner
2020-Nov-03 09:27 UTC
[patch V3 26/37] io-mapping: Provide iomap_local variant
Similar to kmap local provide a iomap local variant which only disables migration, but neither disables pagefaults nor preemption. Signed-off-by: Thomas Gleixner <tglx at linutronix.de> --- V3: Restrict migrate disable to the 32bit mapping case and update documentation. V2: Split out from the large combo patch and add the !IOMAP_ATOMIC variants --- Documentation/driver-api/io-mapping.rst | 76 +++++++++++++++++++------------- include/linux/io-mapping.h | 30 +++++++++++- 2 files changed, 74 insertions(+), 32 deletions(-) --- a/Documentation/driver-api/io-mapping.rst +++ b/Documentation/driver-api/io-mapping.rst @@ -20,55 +20,71 @@ as it would consume too much of the kern mappable, while 'size' indicates how large a mapping region to enable. Both are in bytes. -This _wc variant provides a mapping which may only be used -with the io_mapping_map_atomic_wc or io_mapping_map_wc. +This _wc variant provides a mapping which may only be used with +io_mapping_map_atomic_wc(), io_mapping_map_local_wc() or +io_mapping_map_wc(). + +With this mapping object, individual pages can be mapped either temporarily +or long term, depending on the requirements. Of course, temporary maps are +more efficient. They come in two flavours:: -With this mapping object, individual pages can be mapped either atomically -or not, depending on the necessary scheduling environment. Of course, atomic -maps are more efficient:: + void *io_mapping_map_local_wc(struct io_mapping *mapping, + unsigned long offset) void *io_mapping_map_atomic_wc(struct io_mapping *mapping, unsigned long offset) -'offset' is the offset within the defined mapping region. -Accessing addresses beyond the region specified in the -creation function yields undefined results. Using an offset -which is not page aligned yields an undefined result. The -return value points to a single page in CPU address space. - -This _wc variant returns a write-combining map to the -page and may only be used with mappings created by -io_mapping_create_wc +'offset' is the offset within the defined mapping region. Accessing +addresses beyond the region specified in the creation function yields +undefined results. Using an offset which is not page aligned yields an +undefined result. The return value points to a single page in CPU address +space. -Note that the task may not sleep while holding this page -mapped. +This _wc variant returns a write-combining map to the page and may only be +used with mappings created by io_mapping_create_wc() -:: +Temporary mappings are only valid in the context of the caller. The mapping +is not guaranteed to be globaly visible. - void io_mapping_unmap_atomic(void *vaddr) +io_mapping_map_local_wc() has a side effect on X86 32bit as it disables +migration to make the mapping code work. No caller can rely on this side +effect. + +io_mapping_map_atomic_wc() has the side effect of disabling preemption and +pagefaults. Don't use in new code. Use io_mapping_map_local_wc() instead. -'vaddr' must be the value returned by the last -io_mapping_map_atomic_wc call. This unmaps the specified -page and allows the task to sleep once again. +Nested mappings need to be undone in reverse order because the mapping +code uses a stack for keeping track of them:: -If you need to sleep while holding the lock, you can use the non-atomic -variant, although they may be significantly slower. + addr1 = io_mapping_map_local_wc(map1, offset1); + addr2 = io_mapping_map_local_wc(map2, offset2); + ... 
+ io_mapping_unmap_local(addr2); + io_mapping_unmap_local(addr1); -:: +The mappings are released with:: + + void io_mapping_unmap_local(void *vaddr) + void io_mapping_unmap_atomic(void *vaddr) + +'vaddr' must be the value returned by the last io_mapping_map_local_wc() or +io_mapping_map_atomic_wc() call. This unmaps the specified mapping and +undoes the side effects of the mapping functions. + +If you need to sleep while holding a mapping, you can use the regular +variant, although this may be significantly slower:: void *io_mapping_map_wc(struct io_mapping *mapping, unsigned long offset) -This works like io_mapping_map_atomic_wc except it allows -the task to sleep while holding the page mapped. - +This works like io_mapping_map_atomic/local_wc() except it has no side +effects and the pointer is globaly visible. -:: +The mappings are released with:: void io_mapping_unmap(void *vaddr) -This works like io_mapping_unmap_atomic, except it is used -for pages mapped with io_mapping_map_wc. +Use for pages mapped with io_mapping_map_wc(). At driver close time, the io_mapping object must be freed:: --- a/include/linux/io-mapping.h +++ b/include/linux/io-mapping.h @@ -83,6 +83,21 @@ io_mapping_unmap_atomic(void __iomem *va } static inline void __iomem * +io_mapping_map_local_wc(struct io_mapping *mapping, unsigned long offset) +{ + resource_size_t phys_addr; + + BUG_ON(offset >= mapping->size); + phys_addr = mapping->base + offset; + return __iomap_local_pfn_prot(PHYS_PFN(phys_addr), mapping->prot); +} + +static inline void io_mapping_unmap_local(void __iomem *vaddr) +{ + kunmap_local_indexed((void __force *)vaddr); +} + +static inline void __iomem * io_mapping_map_wc(struct io_mapping *mapping, unsigned long offset, unsigned long size) @@ -101,7 +116,7 @@ io_mapping_unmap(void __iomem *vaddr) iounmap(vaddr); } -#else +#else /* HAVE_ATOMIC_IOMAP */ #include <linux/uaccess.h> @@ -166,7 +181,18 @@ io_mapping_unmap_atomic(void __iomem *va preempt_enable(); } -#endif /* HAVE_ATOMIC_IOMAP */ +static inline void __iomem * +io_mapping_map_local_wc(struct io_mapping *mapping, unsigned long offset) +{ + return io_mapping_map_wc(mapping, offset, PAGE_SIZE); +} + +static inline void io_mapping_unmap_local(void __iomem *vaddr) +{ + io_mapping_unmap(vaddr); +} + +#endif /* !HAVE_ATOMIC_IOMAP */ static inline struct io_mapping * io_mapping_create_wc(resource_size_t base,
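As a usage sketch (hypothetical driver code; 'mapping' is assumed to come from io_mapping_create_wc()), a read-modify-write of a word in a write-combined framebuffer BAR with the new local interface looks like this and may be done from preemptible context:

	static void example_set_bits(struct io_mapping *mapping,
				     unsigned long offset, u32 bits)
	{
		u8 __iomem *p = io_mapping_map_local_wc(mapping, offset & PAGE_MASK);
		u32 val = ioread32(p + (offset & ~PAGE_MASK));

		iowrite32(val | bits, p + (offset & ~PAGE_MASK));
		io_mapping_unmap_local(p);
	}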
Thomas Gleixner
2020-Nov-03 09:27 UTC
[patch V3 27/37] x86/crashdump/32: Simplify copy_oldmem_page()
Replace kmap_atomic_pfn() with kmap_local_pfn() which is preemptible and can take page faults. Remove the indirection of the dump page and the related cruft which is not longer required. Signed-off-by: Thomas Gleixner <tglx at linutronix.de> --- V3: New patch --- arch/x86/kernel/crash_dump_32.c | 48 ++++++++-------------------------------- 1 file changed, 10 insertions(+), 38 deletions(-) --- a/arch/x86/kernel/crash_dump_32.c +++ b/arch/x86/kernel/crash_dump_32.c @@ -13,8 +13,6 @@ #include <linux/uaccess.h> -static void *kdump_buf_page; - static inline bool is_crashed_pfn_valid(unsigned long pfn) { #ifndef CONFIG_X86_PAE @@ -41,15 +39,11 @@ static inline bool is_crashed_pfn_valid( * @userbuf: if set, @buf is in user address space, use copy_to_user(), * otherwise @buf is in kernel address space, use memcpy(). * - * Copy a page from "oldmem". For this page, there is no pte mapped - * in the current kernel. We stitch up a pte, similar to kmap_atomic. - * - * Calling copy_to_user() in atomic context is not desirable. Hence first - * copying the data to a pre-allocated kernel page and then copying to user - * space in non-atomic context. + * Copy a page from "oldmem". For this page, there might be no pte mapped + * in the current kernel. */ -ssize_t copy_oldmem_page(unsigned long pfn, char *buf, - size_t csize, unsigned long offset, int userbuf) +ssize_t copy_oldmem_page(unsigned long pfn, char *buf, size_t csize, + unsigned long offset, int userbuf) { void *vaddr; @@ -59,38 +53,16 @@ ssize_t copy_oldmem_page(unsigned long p if (!is_crashed_pfn_valid(pfn)) return -EFAULT; - vaddr = kmap_atomic_pfn(pfn); + vaddr = kmap_local_pfn(pfn); if (!userbuf) { - memcpy(buf, (vaddr + offset), csize); - kunmap_atomic(vaddr); + memcpy(buf, vaddr + offset, csize); } else { - if (!kdump_buf_page) { - printk(KERN_WARNING "Kdump: Kdump buffer page not" - " allocated\n"); - kunmap_atomic(vaddr); - return -EFAULT; - } - copy_page(kdump_buf_page, vaddr); - kunmap_atomic(vaddr); - if (copy_to_user(buf, (kdump_buf_page + offset), csize)) - return -EFAULT; + if (copy_to_user(buf, vaddr + offset, csize)) + csize = -EFAULT; } - return csize; -} + kunmap_local(vaddr); -static int __init kdump_buf_page_init(void) -{ - int ret = 0; - - kdump_buf_page = kmalloc(PAGE_SIZE, GFP_KERNEL); - if (!kdump_buf_page) { - printk(KERN_WARNING "Kdump: Failed to allocate kdump buffer" - " page\n"); - ret = -ENOMEM; - } - - return ret; + return csize; } -arch_initcall(kdump_buf_page_init);
Thomas Gleixner
2020-Nov-03 09:27 UTC
[patch V3 28/37] mips/crashdump: Simplify copy_oldmem_page()
Replace kmap_atomic_pfn() with kmap_local_pfn() which is preemptible and can take page faults. Remove the indirection of the dump page and the related cruft which is not longer required. Signed-off-by: Thomas Gleixner <tglx at linutronix.de> Cc: Thomas Bogendoerfer <tsbogend at alpha.franken.de> Cc: linux-mips at vger.kernel.org --- V3: New patch --- arch/mips/kernel/crash_dump.c | 42 +++++++----------------------------------- 1 file changed, 7 insertions(+), 35 deletions(-) --- a/arch/mips/kernel/crash_dump.c +++ b/arch/mips/kernel/crash_dump.c @@ -5,8 +5,6 @@ #include <linux/uaccess.h> #include <linux/slab.h> -static void *kdump_buf_page; - /** * copy_oldmem_page - copy one page from "oldmem" * @pfn: page frame number to be copied @@ -17,51 +15,25 @@ static void *kdump_buf_page; * @userbuf: if set, @buf is in user address space, use copy_to_user(), * otherwise @buf is in kernel address space, use memcpy(). * - * Copy a page from "oldmem". For this page, there is no pte mapped + * Copy a page from "oldmem". For this page, there might be no pte mapped * in the current kernel. - * - * Calling copy_to_user() in atomic context is not desirable. Hence first - * copying the data to a pre-allocated kernel page and then copying to user - * space in non-atomic context. */ -ssize_t copy_oldmem_page(unsigned long pfn, char *buf, - size_t csize, unsigned long offset, int userbuf) +ssize_t copy_oldmem_page(unsigned long pfn, char *buf, size_t csize, + unsigned long offset, int userbuf) { void *vaddr; if (!csize) return 0; - vaddr = kmap_atomic_pfn(pfn); + vaddr = kmap_local_pfn(pfn); if (!userbuf) { - memcpy(buf, (vaddr + offset), csize); - kunmap_atomic(vaddr); + memcpy(buf, vaddr + offset, csize); } else { - if (!kdump_buf_page) { - pr_warn("Kdump: Kdump buffer page not allocated\n"); - - return -EFAULT; - } - copy_page(kdump_buf_page, vaddr); - kunmap_atomic(vaddr); - if (copy_to_user(buf, (kdump_buf_page + offset), csize)) - return -EFAULT; + if (copy_to_user(buf, vaddr + offset, csize)) + csize = -EFAULT; } return csize; } - -static int __init kdump_buf_page_init(void) -{ - int ret = 0; - - kdump_buf_page = kmalloc(PAGE_SIZE, GFP_KERNEL); - if (!kdump_buf_page) { - pr_warn("Kdump: Failed to allocate kdump buffer page\n"); - ret = -ENOMEM; - } - - return ret; -} -arch_initcall(kdump_buf_page_init);
There is no requirement to disable pagefaults and preemption for these cache management mappings. Replace kmap_atomic_pfn() with kmap_local_pfn(). This allows to remove kmap_atomic_pfn() in the next step. Signed-off-by: Thomas Gleixner <tglx at linutronix.de> Cc: Russell King <linux at armlinux.org.uk> Cc: linux-arm-kernel at lists.infradead.org --- V3: New patch --- arch/arm/mm/cache-feroceon-l2.c | 6 +++--- arch/arm/mm/cache-xsc3l2.c | 4 ++-- 2 files changed, 5 insertions(+), 5 deletions(-) --- a/arch/arm/mm/cache-feroceon-l2.c +++ b/arch/arm/mm/cache-feroceon-l2.c @@ -49,9 +49,9 @@ static inline unsigned long l2_get_va(un * we simply install a virtual mapping for it only for the * TLB lookup to occur, hence no need to flush the untouched * memory mapping afterwards (note: a cache flush may happen - * in some circumstances depending on the path taken in kunmap_atomic). + * in some circumstances depending on the path taken in kunmap_local). */ - void *vaddr = kmap_atomic_pfn(paddr >> PAGE_SHIFT); + void *vaddr = kmap_local_pfn(paddr >> PAGE_SHIFT); return (unsigned long)vaddr + (paddr & ~PAGE_MASK); #else return __phys_to_virt(paddr); @@ -61,7 +61,7 @@ static inline unsigned long l2_get_va(un static inline void l2_put_va(unsigned long vaddr) { #ifdef CONFIG_HIGHMEM - kunmap_atomic((void *)vaddr); + kunmap_local((void *)vaddr); #endif } --- a/arch/arm/mm/cache-xsc3l2.c +++ b/arch/arm/mm/cache-xsc3l2.c @@ -59,7 +59,7 @@ static inline void l2_unmap_va(unsigned { #ifdef CONFIG_HIGHMEM if (va != -1) - kunmap_atomic((void *)va); + kunmap_local((void *)va); #endif } @@ -75,7 +75,7 @@ static inline unsigned long l2_map_va(un * in place for it. */ l2_unmap_va(prev_va); - va = (unsigned long)kmap_atomic_pfn(pa >> PAGE_SHIFT); + va = (unsigned long)kmap_local_pfn(pa >> PAGE_SHIFT); } return va + (pa_offset >> (32 - PAGE_SHIFT)); #else
No more users. Signed-off-by: Thomas Gleixner <tglx at linutronix.de> --- V3: New patch --- include/linux/highmem-internal.h | 12 ------------ 1 file changed, 12 deletions(-) --- a/include/linux/highmem-internal.h +++ b/include/linux/highmem-internal.h @@ -99,13 +99,6 @@ static inline void *kmap_atomic(struct p return kmap_atomic_prot(page, kmap_prot); } -static inline void *kmap_atomic_pfn(unsigned long pfn) -{ - preempt_disable(); - pagefault_disable(); - return __kmap_local_pfn_prot(pfn, kmap_prot); -} - static inline void __kunmap_atomic(void *addr) { kunmap_local_indexed(addr); @@ -193,11 +186,6 @@ static inline void *kmap_atomic_prot(str return kmap_atomic(page); } -static inline void *kmap_atomic_pfn(unsigned long pfn) -{ - return kmap_atomic(pfn_to_page(pfn)); -} - static inline void __kunmap_atomic(void *addr) { #ifdef ARCH_HAS_FLUSH_ON_KUNMAP
Thomas Gleixner
2020-Nov-03 09:27 UTC
[patch V3 31/37] drm/ttm: Replace kmap_atomic() usage
There is no reason to disable pagefaults and preemption as a side effect of kmap_atomic_prot(). Use kmap_local_page_prot() instead and document the reasoning for the mapping usage with the given pgprot. Remove the NULL pointer check for the map. These functions return a valid address for valid pages and the return was bogus anyway as it would have left preemption and pagefaults disabled. Signed-off-by: Thomas Gleixner <tglx at linutronix.de> Cc: Christian Koenig <christian.koenig at amd.com> Cc: Huang Rui <ray.huang at amd.com> Cc: David Airlie <airlied at linux.ie> Cc: Daniel Vetter <daniel at ffwll.ch> Cc: dri-devel at lists.freedesktop.org --- V3: New patch --- drivers/gpu/drm/ttm/ttm_bo_util.c | 20 ++++++++++++-------- 1 file changed, 12 insertions(+), 8 deletions(-) --- a/drivers/gpu/drm/ttm/ttm_bo_util.c +++ b/drivers/gpu/drm/ttm/ttm_bo_util.c @@ -181,13 +181,15 @@ static int ttm_copy_io_ttm_page(struct t return -ENOMEM; src = (void *)((unsigned long)src + (page << PAGE_SHIFT)); - dst = kmap_atomic_prot(d, prot); - if (!dst) - return -ENOMEM; + /* + * Ensure that a highmem page is mapped with the correct + * pgprot. For non highmem the mapping is already there. + */ + dst = kmap_local_page_prot(d, prot); memcpy_fromio(dst, src, PAGE_SIZE); - kunmap_atomic(dst); + kunmap_local(dst); return 0; } @@ -203,13 +205,15 @@ static int ttm_copy_ttm_io_page(struct t return -ENOMEM; dst = (void *)((unsigned long)dst + (page << PAGE_SHIFT)); - src = kmap_atomic_prot(s, prot); - if (!src) - return -ENOMEM; + /* + * Ensure that a highmem page is mapped with the correct + * pgprot. For non highmem the mapping is already there. + */ + src = kmap_local_page_prot(s, prot); memcpy_toio(dst, src, PAGE_SIZE); - kunmap_atomic(src); + kunmap_local(src); return 0; }
There is no reason to disable pagefaults and preemption as a side effect of kmap_atomic_prot(). Use kmap_local_page_prot() instead and document the reasoning for the mapping usage with the given pgprot. Remove the NULL pointer check for the map. These functions return a valid address for valid pages and the return was bogus anyway as it would have left preemption and pagefaults disabled. Signed-off-by: Thomas Gleixner <tglx at linutronix.de> Cc: VMware Graphics <linux-graphics-maintainer at vmware.com> Cc: Roland Scheidegger <sroland at vmware.com> Cc: David Airlie <airlied at linux.ie> Cc: Daniel Vetter <daniel at ffwll.ch> Cc: dri-devel at lists.freedesktop.org --- V3: New patch --- drivers/gpu/drm/vmwgfx/vmwgfx_blit.c | 30 ++++++++++++------------------ 1 file changed, 12 insertions(+), 18 deletions(-) --- a/drivers/gpu/drm/vmwgfx/vmwgfx_blit.c +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_blit.c @@ -375,12 +375,12 @@ static int vmw_bo_cpu_blit_line(struct v copy_size = min_t(u32, copy_size, PAGE_SIZE - src_page_offset); if (unmap_src) { - kunmap_atomic(d->src_addr); + kunmap_local(d->src_addr); d->src_addr = NULL; } if (unmap_dst) { - kunmap_atomic(d->dst_addr); + kunmap_local(d->dst_addr); d->dst_addr = NULL; } @@ -388,12 +388,8 @@ static int vmw_bo_cpu_blit_line(struct v if (WARN_ON_ONCE(dst_page >= d->dst_num_pages)) return -EINVAL; - d->dst_addr - kmap_atomic_prot(d->dst_pages[dst_page], - d->dst_prot); - if (!d->dst_addr) - return -ENOMEM; - + d->dst_addr = kmap_local_page_prot(d->dst_pages[dst_page], + d->dst_prot); d->mapped_dst = dst_page; } @@ -401,12 +397,8 @@ static int vmw_bo_cpu_blit_line(struct v if (WARN_ON_ONCE(src_page >= d->src_num_pages)) return -EINVAL; - d->src_addr - kmap_atomic_prot(d->src_pages[src_page], - d->src_prot); - if (!d->src_addr) - return -ENOMEM; - + d->src_addr = kmap_local_page_prot(d->src_pages[src_page], + d->src_prot); d->mapped_src = src_page; } diff->do_cpy(diff, d->dst_addr + dst_page_offset, @@ -436,8 +428,10 @@ static int vmw_bo_cpu_blit_line(struct v * * Performs a CPU blit from one buffer object to another avoiding a full * bo vmap which may exhaust- or fragment vmalloc space. - * On supported architectures (x86), we're using kmap_atomic which avoids - * cross-processor TLB- and cache flushes and may, on non-HIGHMEM systems + * + * On supported architectures (x86), we're using kmap_local_prot() which + * avoids cross-processor TLB- and cache flushes. kmap_local_prot() will + * either map a highmem page with the proper pgprot on HIGHMEM=y systems or * reference already set-up mappings. * * Neither of the buffer objects may be placed in PCI memory @@ -500,9 +494,9 @@ int vmw_bo_cpu_blit(struct ttm_buffer_ob } out: if (d.src_addr) - kunmap_atomic(d.src_addr); + kunmap_local(d.src_addr); if (d.dst_addr) - kunmap_atomic(d.dst_addr); + kunmap_local(d.dst_addr); return ret; }
No more users. Signed-off-by: Thomas Gleixner <tglx at linutronix.de> --- V3: New patch --- include/linux/highmem-internal.h | 14 ++------------ 1 file changed, 2 insertions(+), 12 deletions(-) --- a/include/linux/highmem-internal.h +++ b/include/linux/highmem-internal.h @@ -87,16 +87,11 @@ static inline void __kunmap_local(void * kunmap_local_indexed(vaddr); } -static inline void *kmap_atomic_prot(struct page *page, pgprot_t prot) +static inline void *kmap_atomic(struct page *page) { preempt_disable(); pagefault_disable(); - return __kmap_local_page_prot(page, prot); -} - -static inline void *kmap_atomic(struct page *page) -{ - return kmap_atomic_prot(page, kmap_prot); + return __kmap_local_page_prot(page, kmap_prot); } static inline void __kunmap_atomic(void *addr) @@ -181,11 +176,6 @@ static inline void *kmap_atomic(struct p return page_address(page); } -static inline void *kmap_atomic_prot(struct page *page, pgprot_t prot) -{ - return kmap_atomic(page); -} - static inline void __kunmap_atomic(void *addr) { #ifdef ARCH_HAS_FLUSH_ON_KUNMAP
Thomas Gleixner
2020-Nov-03 09:27 UTC
[patch V3 34/37] drm/qxl: Replace io_mapping_map_atomic_wc()
None of these mapping requires the side effect of disabling pagefaults and preemption. Use io_mapping_map_local_wc() instead, rename the related functions accordingly and clean up qxl_process_single_command() to use a plain copy_from_user() as the local maps are not disabling pagefaults. Signed-off-by: Thomas Gleixner <tglx at linutronix.de> Cc: Dave Airlie <airlied at redhat.com> Cc: Gerd Hoffmann <kraxel at redhat.com> Cc: David Airlie <airlied at linux.ie> Cc: Daniel Vetter <daniel at ffwll.ch> Cc: virtualization at lists.linux-foundation.org Cc: spice-devel at lists.freedesktop.org --- V3: New patch --- drivers/gpu/drm/qxl/qxl_image.c | 18 +++++++++--------- drivers/gpu/drm/qxl/qxl_ioctl.c | 27 +++++++++++++-------------- drivers/gpu/drm/qxl/qxl_object.c | 12 ++++++------ drivers/gpu/drm/qxl/qxl_object.h | 4 ++-- drivers/gpu/drm/qxl/qxl_release.c | 4 ++-- 5 files changed, 32 insertions(+), 33 deletions(-) --- a/drivers/gpu/drm/qxl/qxl_image.c +++ b/drivers/gpu/drm/qxl/qxl_image.c @@ -124,12 +124,12 @@ qxl_image_init_helper(struct qxl_device wrong (check the bitmaps are sent correctly first) */ - ptr = qxl_bo_kmap_atomic_page(qdev, chunk_bo, 0); + ptr = qxl_bo_kmap_local_page(qdev, chunk_bo, 0); chunk = ptr; chunk->data_size = height * chunk_stride; chunk->prev_chunk = 0; chunk->next_chunk = 0; - qxl_bo_kunmap_atomic_page(qdev, chunk_bo, ptr); + qxl_bo_kunmap_local_page(qdev, chunk_bo, ptr); { void *k_data, *i_data; @@ -143,7 +143,7 @@ qxl_image_init_helper(struct qxl_device i_data = (void *)data; while (remain > 0) { - ptr = qxl_bo_kmap_atomic_page(qdev, chunk_bo, page << PAGE_SHIFT); + ptr = qxl_bo_kmap_local_page(qdev, chunk_bo, page << PAGE_SHIFT); if (page == 0) { chunk = ptr; @@ -157,7 +157,7 @@ qxl_image_init_helper(struct qxl_device memcpy(k_data, i_data, size); - qxl_bo_kunmap_atomic_page(qdev, chunk_bo, ptr); + qxl_bo_kunmap_local_page(qdev, chunk_bo, ptr); i_data += size; remain -= size; page++; @@ -175,10 +175,10 @@ qxl_image_init_helper(struct qxl_device page_offset = offset_in_page(out_offset); size = min((int)(PAGE_SIZE - page_offset), remain); - ptr = qxl_bo_kmap_atomic_page(qdev, chunk_bo, page_base); + ptr = qxl_bo_kmap_local_page(qdev, chunk_bo, page_base); k_data = ptr + page_offset; memcpy(k_data, i_data, size); - qxl_bo_kunmap_atomic_page(qdev, chunk_bo, ptr); + qxl_bo_kunmap_local_page(qdev, chunk_bo, ptr); remain -= size; i_data += size; out_offset += size; @@ -189,7 +189,7 @@ qxl_image_init_helper(struct qxl_device qxl_bo_kunmap(chunk_bo); image_bo = dimage->bo; - ptr = qxl_bo_kmap_atomic_page(qdev, image_bo, 0); + ptr = qxl_bo_kmap_local_page(qdev, image_bo, 0); image = ptr; image->descriptor.id = 0; @@ -212,7 +212,7 @@ qxl_image_init_helper(struct qxl_device break; default: DRM_ERROR("unsupported image bit depth\n"); - qxl_bo_kunmap_atomic_page(qdev, image_bo, ptr); + qxl_bo_kunmap_local_page(qdev, image_bo, ptr); return -EINVAL; } image->u.bitmap.flags = QXL_BITMAP_TOP_DOWN; @@ -222,7 +222,7 @@ qxl_image_init_helper(struct qxl_device image->u.bitmap.palette = 0; image->u.bitmap.data = qxl_bo_physical_address(qdev, chunk_bo, 0); - qxl_bo_kunmap_atomic_page(qdev, image_bo, ptr); + qxl_bo_kunmap_local_page(qdev, image_bo, ptr); return 0; } --- a/drivers/gpu/drm/qxl/qxl_ioctl.c +++ b/drivers/gpu/drm/qxl/qxl_ioctl.c @@ -89,11 +89,11 @@ apply_reloc(struct qxl_device *qdev, str { void *reloc_page; - reloc_page = qxl_bo_kmap_atomic_page(qdev, info->dst_bo, info->dst_offset & PAGE_MASK); + reloc_page = qxl_bo_kmap_local_page(qdev, info->dst_bo, info->dst_offset & 
PAGE_MASK); *(uint64_t *)(reloc_page + (info->dst_offset & ~PAGE_MASK)) = qxl_bo_physical_address(qdev, info->src_bo, info->src_offset); - qxl_bo_kunmap_atomic_page(qdev, info->dst_bo, reloc_page); + qxl_bo_kunmap_local_page(qdev, info->dst_bo, reloc_page); } static void @@ -105,9 +105,9 @@ apply_surf_reloc(struct qxl_device *qdev if (info->src_bo && !info->src_bo->is_primary) id = info->src_bo->surface_id; - reloc_page = qxl_bo_kmap_atomic_page(qdev, info->dst_bo, info->dst_offset & PAGE_MASK); + reloc_page = qxl_bo_kmap_local_page(qdev, info->dst_bo, info->dst_offset & PAGE_MASK); *(uint32_t *)(reloc_page + (info->dst_offset & ~PAGE_MASK)) = id; - qxl_bo_kunmap_atomic_page(qdev, info->dst_bo, reloc_page); + qxl_bo_kunmap_local_page(qdev, info->dst_bo, reloc_page); } /* return holding the reference to this object */ @@ -149,7 +149,6 @@ static int qxl_process_single_command(st struct qxl_bo *cmd_bo; void *fb_cmd; int i, ret, num_relocs; - int unwritten; switch (cmd->type) { case QXL_CMD_DRAW: @@ -185,21 +184,21 @@ static int qxl_process_single_command(st goto out_free_reloc; /* TODO copy slow path code from i915 */ - fb_cmd = qxl_bo_kmap_atomic_page(qdev, cmd_bo, (release->release_offset & PAGE_MASK)); - unwritten = __copy_from_user_inatomic_nocache - (fb_cmd + sizeof(union qxl_release_info) + (release->release_offset & ~PAGE_MASK), - u64_to_user_ptr(cmd->command), cmd->command_size); + fb_cmd = qxl_bo_kmap_local_page(qdev, cmd_bo, (release->release_offset & PAGE_MASK)); - { + if (copy_from_user(fb_cmd + sizeof(union qxl_release_info) + + (release->release_offset & ~PAGE_MASK), + u64_to_user_ptr(cmd->command), cmd->command_size)) { + ret = -EFAULT; + } else { struct qxl_drawable *draw = fb_cmd; draw->mm_time = qdev->rom->mm_clock; } - qxl_bo_kunmap_atomic_page(qdev, cmd_bo, fb_cmd); - if (unwritten) { - DRM_ERROR("got unwritten %d\n", unwritten); - ret = -EFAULT; + qxl_bo_kunmap_local_page(qdev, cmd_bo, fb_cmd); + if (ret) { + DRM_ERROR("copy from user failed %d\n", ret); goto out_free_release; } --- a/drivers/gpu/drm/qxl/qxl_object.c +++ b/drivers/gpu/drm/qxl/qxl_object.c @@ -172,8 +172,8 @@ int qxl_bo_kmap(struct qxl_bo *bo, void return 0; } -void *qxl_bo_kmap_atomic_page(struct qxl_device *qdev, - struct qxl_bo *bo, int page_offset) +void *qxl_bo_kmap_local_page(struct qxl_device *qdev, + struct qxl_bo *bo, int page_offset) { unsigned long offset; void *rptr; @@ -188,7 +188,7 @@ void *qxl_bo_kmap_atomic_page(struct qxl goto fallback; offset = bo->tbo.mem.start << PAGE_SHIFT; - return io_mapping_map_atomic_wc(map, offset + page_offset); + return io_mapping_map_local_wc(map, offset + page_offset); fallback: if (bo->kptr) { rptr = bo->kptr + (page_offset * PAGE_SIZE); @@ -214,14 +214,14 @@ void qxl_bo_kunmap(struct qxl_bo *bo) ttm_bo_kunmap(&bo->kmap); } -void qxl_bo_kunmap_atomic_page(struct qxl_device *qdev, - struct qxl_bo *bo, void *pmap) +void qxl_bo_kunmap_local_page(struct qxl_device *qdev, + struct qxl_bo *bo, void *pmap) { if ((bo->tbo.mem.mem_type != TTM_PL_VRAM) && (bo->tbo.mem.mem_type != TTM_PL_PRIV)) goto fallback; - io_mapping_unmap_atomic(pmap); + io_mapping_unmap_local(pmap); return; fallback: qxl_bo_kunmap(bo); --- a/drivers/gpu/drm/qxl/qxl_object.h +++ b/drivers/gpu/drm/qxl/qxl_object.h @@ -88,8 +88,8 @@ extern int qxl_bo_create(struct qxl_devi struct qxl_bo **bo_ptr); extern int qxl_bo_kmap(struct qxl_bo *bo, void **ptr); extern void qxl_bo_kunmap(struct qxl_bo *bo); -void *qxl_bo_kmap_atomic_page(struct qxl_device *qdev, struct qxl_bo *bo, int page_offset); -void 
qxl_bo_kunmap_atomic_page(struct qxl_device *qdev, struct qxl_bo *bo, void *map); +void *qxl_bo_kmap_local_page(struct qxl_device *qdev, struct qxl_bo *bo, int page_offset); +void qxl_bo_kunmap_local_page(struct qxl_device *qdev, struct qxl_bo *bo, void *map); extern struct qxl_bo *qxl_bo_ref(struct qxl_bo *bo); extern void qxl_bo_unref(struct qxl_bo **bo); extern int qxl_bo_pin(struct qxl_bo *bo); --- a/drivers/gpu/drm/qxl/qxl_release.c +++ b/drivers/gpu/drm/qxl/qxl_release.c @@ -408,7 +408,7 @@ union qxl_release_info *qxl_release_map( union qxl_release_info *info; struct qxl_bo *bo = release->release_bo; - ptr = qxl_bo_kmap_atomic_page(qdev, bo, release->release_offset & PAGE_MASK); + ptr = qxl_bo_kmap_local_page(qdev, bo, release->release_offset & PAGE_MASK); if (!ptr) return NULL; info = ptr + (release->release_offset & ~PAGE_MASK); @@ -423,7 +423,7 @@ void qxl_release_unmap(struct qxl_device void *ptr; ptr = ((void *)info) - (release->release_offset & ~PAGE_MASK); - qxl_bo_kunmap_atomic_page(qdev, bo, ptr); + qxl_bo_kunmap_local_page(qdev, bo, ptr); } void qxl_release_fence_buffer_objects(struct qxl_release *release)
Thomas Gleixner
2020-Nov-03 09:27 UTC
[patch V3 35/37] drm/nouveau/device: Replace io_mapping_map_atomic_wc()
Neither fbmem_peek() nor fbmem_poke() require to disable pagefaults and preemption as a side effect of io_mapping_map_atomic_wc(). Use io_mapping_map_local_wc() instead. Signed-off-by: Thomas Gleixner <tglx at linutronix.de> Cc: Ben Skeggs <bskeggs at redhat.com> Cc: David Airlie <airlied at linux.ie> Cc: Daniel Vetter <daniel at ffwll.ch> Cc: dri-devel at lists.freedesktop.org Cc: nouveau at lists.freedesktop.org --- V3: New patch --- drivers/gpu/drm/nouveau/nvkm/subdev/devinit/fbmem.h | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) --- a/drivers/gpu/drm/nouveau/nvkm/subdev/devinit/fbmem.h +++ b/drivers/gpu/drm/nouveau/nvkm/subdev/devinit/fbmem.h @@ -60,19 +60,19 @@ fbmem_fini(struct io_mapping *fb) static inline u32 fbmem_peek(struct io_mapping *fb, u32 off) { - u8 __iomem *p = io_mapping_map_atomic_wc(fb, off & PAGE_MASK); + u8 __iomem *p = io_mapping_map_local_wc(fb, off & PAGE_MASK); u32 val = ioread32(p + (off & ~PAGE_MASK)); - io_mapping_unmap_atomic(p); + io_mapping_unmap_local(p); return val; } static inline void fbmem_poke(struct io_mapping *fb, u32 off, u32 val) { - u8 __iomem *p = io_mapping_map_atomic_wc(fb, off & PAGE_MASK); + u8 __iomem *p = io_mapping_map_local_wc(fb, off & PAGE_MASK); iowrite32(val, p + (off & ~PAGE_MASK)); wmb(); - io_mapping_unmap_atomic(p); + io_mapping_unmap_local(p); } static inline bool
Thomas Gleixner
2020-Nov-03 09:27 UTC
[patch V3 36/37] drm/i915: Replace io_mapping_map_atomic_wc()
None of these mapping requires the side effect of disabling pagefaults and preemption. Use io_mapping_map_local_wc() instead, and clean up gtt_user_read() and gtt_user_write() to use a plain copy_from_user() as the local maps are not disabling pagefaults. Signed-off-by: Thomas Gleixner <tglx at linutronix.de> Cc: Jani Nikula <jani.nikula at linux.intel.com> Cc: Joonas Lahtinen <joonas.lahtinen at linux.intel.com> Cc: Rodrigo Vivi <rodrigo.vivi at intel.com> Cc: David Airlie <airlied at linux.ie> Cc: Daniel Vetter <daniel at ffwll.ch> Cc: intel-gfx at lists.freedesktop.org Cc: dri-devel at lists.freedesktop.org --- V3: New patch --- drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c | 7 +--- drivers/gpu/drm/i915/i915_gem.c | 40 ++++++++----------------- drivers/gpu/drm/i915/selftests/i915_gem.c | 4 +- drivers/gpu/drm/i915/selftests/i915_gem_gtt.c | 8 ++--- 4 files changed, 22 insertions(+), 37 deletions(-) --- a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c @@ -1081,7 +1081,7 @@ static void reloc_cache_reset(struct rel struct i915_ggtt *ggtt = cache_to_ggtt(cache); intel_gt_flush_ggtt_writes(ggtt->vm.gt); - io_mapping_unmap_atomic((void __iomem *)vaddr); + io_mapping_unmap_local((void __iomem *)vaddr); if (drm_mm_node_allocated(&cache->node)) { ggtt->vm.clear_range(&ggtt->vm, @@ -1147,7 +1147,7 @@ static void *reloc_iomap(struct drm_i915 if (cache->vaddr) { intel_gt_flush_ggtt_writes(ggtt->vm.gt); - io_mapping_unmap_atomic((void __force __iomem *) unmask_page(cache->vaddr)); + io_mapping_unmap_local((void __force __iomem *) unmask_page(cache->vaddr)); } else { struct i915_vma *vma; int err; @@ -1195,8 +1195,7 @@ static void *reloc_iomap(struct drm_i915 offset += page << PAGE_SHIFT; } - vaddr = (void __force *)io_mapping_map_atomic_wc(&ggtt->iomap, - offset); + vaddr = (void __force *)io_mapping_map_local_wc(&ggtt->iomap, offset); cache->page = page; cache->vaddr = (unsigned long)vaddr; --- a/drivers/gpu/drm/i915/i915_gem.c +++ b/drivers/gpu/drm/i915/i915_gem.c @@ -379,22 +379,15 @@ gtt_user_read(struct io_mapping *mapping char __user *user_data, int length) { void __iomem *vaddr; - unsigned long unwritten; + bool fail = false; /* We can use the cpu mem copy function because this is X86. */ - vaddr = io_mapping_map_atomic_wc(mapping, base); - unwritten = __copy_to_user_inatomic(user_data, - (void __force *)vaddr + offset, - length); - io_mapping_unmap_atomic(vaddr); - if (unwritten) { - vaddr = io_mapping_map_wc(mapping, base, PAGE_SIZE); - unwritten = copy_to_user(user_data, - (void __force *)vaddr + offset, - length); - io_mapping_unmap(vaddr); - } - return unwritten; + vaddr = io_mapping_map_local_wc(mapping, base); + if (copy_to_user(user_data, (void __force *)vaddr + offset, length)) + fail = true; + io_mapping_unmap_local(vaddr); + + return fail; } static int @@ -557,21 +550,14 @@ ggtt_write(struct io_mapping *mapping, char __user *user_data, int length) { void __iomem *vaddr; - unsigned long unwritten; + bool fail = false; /* We can use the cpu mem copy function because this is X86. 
*/ - vaddr = io_mapping_map_atomic_wc(mapping, base); - unwritten = __copy_from_user_inatomic_nocache((void __force *)vaddr + offset, - user_data, length); - io_mapping_unmap_atomic(vaddr); - if (unwritten) { - vaddr = io_mapping_map_wc(mapping, base, PAGE_SIZE); - unwritten = copy_from_user((void __force *)vaddr + offset, - user_data, length); - io_mapping_unmap(vaddr); - } - - return unwritten; + vaddr = io_mapping_map_local_wc(mapping, base); + if (copy_from_user((void __force *)vaddr + offset, user_data, length)) + fail = true; + io_mapping_unmap_local(vaddr); + return fail; } /** --- a/drivers/gpu/drm/i915/selftests/i915_gem.c +++ b/drivers/gpu/drm/i915/selftests/i915_gem.c @@ -57,12 +57,12 @@ static void trash_stolen(struct drm_i915 ggtt->vm.insert_page(&ggtt->vm, dma, slot, I915_CACHE_NONE, 0); - s = io_mapping_map_atomic_wc(&ggtt->iomap, slot); + s = io_mapping_map_local_wc(&ggtt->iomap, slot); for (x = 0; x < PAGE_SIZE / sizeof(u32); x++) { prng = next_pseudo_random32(prng); iowrite32(prng, &s[x]); } - io_mapping_unmap_atomic(s); + io_mapping_unmap_local(s); } ggtt->vm.clear_range(&ggtt->vm, slot, PAGE_SIZE); --- a/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c +++ b/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c @@ -1200,9 +1200,9 @@ static int igt_ggtt_page(void *arg) u64 offset = tmp.start + order[n] * PAGE_SIZE; u32 __iomem *vaddr; - vaddr = io_mapping_map_atomic_wc(&ggtt->iomap, offset); + vaddr = io_mapping_map_local_wc(&ggtt->iomap, offset); iowrite32(n, vaddr + n); - io_mapping_unmap_atomic(vaddr); + io_mapping_unmap_local(vaddr); } intel_gt_flush_ggtt_writes(ggtt->vm.gt); @@ -1212,9 +1212,9 @@ static int igt_ggtt_page(void *arg) u32 __iomem *vaddr; u32 val; - vaddr = io_mapping_map_atomic_wc(&ggtt->iomap, offset); + vaddr = io_mapping_map_local_wc(&ggtt->iomap, offset); val = ioread32(vaddr + n); - io_mapping_unmap_atomic(vaddr); + io_mapping_unmap_local(vaddr); if (val != n) { pr_err("insert page failed: found %d, expected %d\n",
Thomas Gleixner
2020-Nov-03 09:27 UTC
[patch V3 37/37] io-mapping: Remove io_mapping_map_atomic_wc()
No more users. Get rid of it and remove the traces in documentation. Signed-off-by: Thomas Gleixner <tglx at linutronix.de> --- V3: New patch --- Documentation/driver-api/io-mapping.rst | 22 +++++----------- include/linux/io-mapping.h | 42 +------------------------------- 2 files changed, 9 insertions(+), 55 deletions(-) --- a/Documentation/driver-api/io-mapping.rst +++ b/Documentation/driver-api/io-mapping.rst @@ -21,19 +21,15 @@ mappable, while 'size' indicates how lar enable. Both are in bytes. This _wc variant provides a mapping which may only be used with -io_mapping_map_atomic_wc(), io_mapping_map_local_wc() or -io_mapping_map_wc(). +io_mapping_map_local_wc() or io_mapping_map_wc(). With this mapping object, individual pages can be mapped either temporarily or long term, depending on the requirements. Of course, temporary maps are -more efficient. They come in two flavours:: +more efficient. void *io_mapping_map_local_wc(struct io_mapping *mapping, unsigned long offset) - void *io_mapping_map_atomic_wc(struct io_mapping *mapping, - unsigned long offset) - 'offset' is the offset within the defined mapping region. Accessing addresses beyond the region specified in the creation function yields undefined results. Using an offset which is not page aligned yields an @@ -50,9 +46,6 @@ io_mapping_map_local_wc() has a side eff migration to make the mapping code work. No caller can rely on this side effect. -io_mapping_map_atomic_wc() has the side effect of disabling preemption and -pagefaults. Don't use in new code. Use io_mapping_map_local_wc() instead. - Nested mappings need to be undone in reverse order because the mapping code uses a stack for keeping track of them:: @@ -65,11 +58,10 @@ Nested mappings need to be undone in rev The mappings are released with:: void io_mapping_unmap_local(void *vaddr) - void io_mapping_unmap_atomic(void *vaddr) -'vaddr' must be the value returned by the last io_mapping_map_local_wc() or -io_mapping_map_atomic_wc() call. This unmaps the specified mapping and -undoes the side effects of the mapping functions. +'vaddr' must be the value returned by the last io_mapping_map_local_wc() +call. This unmaps the specified mapping and undoes eventual side effects of +the mapping function. If you need to sleep while holding a mapping, you can use the regular variant, although this may be significantly slower:: @@ -77,8 +69,8 @@ If you need to sleep while holding a map void *io_mapping_map_wc(struct io_mapping *mapping, unsigned long offset) -This works like io_mapping_map_atomic/local_wc() except it has no side -effects and the pointer is globaly visible. +This works like io_mapping_map_local_wc() except it has no side effects and +the pointer is globaly visible. 
The mappings are released with:: --- a/include/linux/io-mapping.h +++ b/include/linux/io-mapping.h @@ -60,28 +60,7 @@ io_mapping_fini(struct io_mapping *mappi iomap_free(mapping->base, mapping->size); } -/* Atomic map/unmap */ -static inline void __iomem * -io_mapping_map_atomic_wc(struct io_mapping *mapping, - unsigned long offset) -{ - resource_size_t phys_addr; - - BUG_ON(offset >= mapping->size); - phys_addr = mapping->base + offset; - preempt_disable(); - pagefault_disable(); - return __iomap_local_pfn_prot(PHYS_PFN(phys_addr), mapping->prot); -} - -static inline void -io_mapping_unmap_atomic(void __iomem *vaddr) -{ - kunmap_local_indexed((void __force *)vaddr); - pagefault_enable(); - preempt_enable(); -} - +/* Temporary mappings which are only valid in the current context */ static inline void __iomem * io_mapping_map_local_wc(struct io_mapping *mapping, unsigned long offset) { @@ -163,24 +142,7 @@ io_mapping_unmap(void __iomem *vaddr) { } -/* Atomic map/unmap */ -static inline void __iomem * -io_mapping_map_atomic_wc(struct io_mapping *mapping, - unsigned long offset) -{ - preempt_disable(); - pagefault_disable(); - return io_mapping_map_wc(mapping, offset, PAGE_SIZE); -} - -static inline void -io_mapping_unmap_atomic(void __iomem *vaddr) -{ - io_mapping_unmap(vaddr); - pagefault_enable(); - preempt_enable(); -} - +/* Temporary mappings which are only valid in the current context */ static inline void __iomem * io_mapping_map_local_wc(struct io_mapping *mapping, unsigned long offset) {
David Sterba
2020-Nov-03 11:12 UTC
[Nouveau] [patch V3 03/37] fs: Remove asm/kmap_types.h includes
On Tue, Nov 03, 2020 at 10:27:15AM +0100, Thomas Gleixner wrote:
> Historical leftovers from the time where kmap() had fixed slots.
>
> Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
> Cc: Alexander Viro <viro at zeniv.linux.org.uk>
> Cc: Benjamin LaHaise <bcrl at kvack.org>
> Cc: linux-fsdevel at vger.kernel.org
> Cc: linux-aio at kvack.org
> Cc: Chris Mason <clm at fb.com>
> Cc: Josef Bacik <josef at toxicpanda.com>
> Cc: David Sterba <dsterba at suse.com>

Acked-by: David Sterba <dsterba at suse.com>

For the btrfs bits

> fs/btrfs/ctree.h | 1 -

> --- a/fs/btrfs/ctree.h
> +++ b/fs/btrfs/ctree.h
> @@ -17,7 +17,6 @@
>  #include <linux/wait.h>
>  #include <linux/slab.h>
>  #include <trace/events/btrfs.h>
> -#include <asm/kmap_types.h>
>  #include <asm/unaligned.h>
>  #include <linux/pagemap.h>
>  #include <linux/btrfs.h>
Arnd Bergmann
2020-Nov-03 12:25 UTC
[Nouveau] [patch V3 05/37] asm-generic: Provide kmap_size.h
On Tue, Nov 3, 2020 at 10:27 AM Thomas Gleixner <tglx at linutronix.de> wrote:
>
> kmap_types.h is a misnomer because the old atomic MAP based array does not
> exist anymore and the whole indirection of architectures including
> kmap_types.h is inconsistent and does not allow to provide guard page
> debugging for this misfeature.
>
> Add a common header file which defines the mapping stack size for all
> architectures. Will be used when converting architectures over to a
> generic kmap_local/atomic implementation.
>
> The array size is chosen with the following constraints in mind:
>
> - The deepest nest level in one context is 3 according to code
>   inspection.
>
> - The worst case nesting for the upcoming preemptible version would be:
>
>   2 maps in task context and a fault inside
>   2 maps in the fault handler
>   3 maps in softirq
>   2 maps in interrupt
>
> So a total of 16 is sufficient and probably overestimated.
>
> Signed-off-by: Thomas Gleixner <tglx at linutronix.de>

Acked-by: Arnd Bergmann <arnd at arndb.de>
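Worked out, that worst case is 2 + 2 + 3 + 2 = 9 nested maps in one context, so the stack size of 16 leaves ample headroom; the debug configuration then doubles the stack depth so that every second slot can stay unmapped and act as a guard page.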
Thomas Gleixner
2020-Nov-03 13:49 UTC
[Nouveau] [patch V3 24/37] sched: highmem: Store local kmaps in task struct
On Tue, Nov 03 2020 at 10:27, Thomas Gleixner wrote:
> +struct kmap_ctrl {
> +#ifdef CONFIG_KMAP_LOCAL
> +	int idx;
> +	pte_t pteval[KM_TYPE_NR];

I'm a moron. Fixed it on the test machine ...
Thomas Gleixner
2020-Nov-03 13:51 UTC
[Nouveau] [patch V4 24/37] sched: highmem: Store local kmaps in task struct
Instead of storing the map per CPU provide and use per task storage. That
prepares for local kmaps which are preemptible.

The context switch code is preparatory and not yet in use because
kmap_atomic() runs with preemption disabled. Will be made usable in the
next step.

The context switch logic is safe even when an interrupt happens after
clearing or before restoring the kmaps. The kmap index in task struct is
not modified so any nesting kmap in an interrupt will use unused indices
and on return the counter is the same as before.

Also add an assert into the return to user space code. Going back to user
space with an active kmap local is a nono.

Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
---
V4: Use the version which actually compiles and works
V3: Handle the debug case correctly
---
 include/linux/highmem-internal.h |   10 +++
 include/linux/sched.h            |    9 +++
 kernel/entry/common.c            |    2 
 kernel/fork.c                    |    1 
 kernel/sched/core.c              |   18 +++++++
 mm/highmem.c                     |   99 +++++++++++++++++++++++++++++++++++----
 6 files changed, 129 insertions(+), 10 deletions(-)

--- a/include/linux/highmem-internal.h
+++ b/include/linux/highmem-internal.h
@@ -9,6 +9,16 @@
 void *__kmap_local_pfn_prot(unsigned long pfn, pgprot_t prot);
 void *__kmap_local_page_prot(struct page *page, pgprot_t prot);
 void kunmap_local_indexed(void *vaddr);
+void kmap_local_fork(struct task_struct *tsk);
+void __kmap_local_sched_out(void);
+void __kmap_local_sched_in(void);
+static inline void kmap_assert_nomap(void)
+{
+	DEBUG_LOCKS_WARN_ON(current->kmap_ctrl.idx);
+}
+#else
+static inline void kmap_local_fork(struct task_struct *tsk) { }
+static inline void kmap_assert_nomap(void) { }
 #endif
 
 #ifdef CONFIG_HIGHMEM
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -34,6 +34,7 @@
 #include <linux/rseq.h>
 #include <linux/seqlock.h>
 #include <linux/kcsan.h>
+#include <asm/kmap_size.h>
 
 /* task_struct member predeclarations (sorted alphabetically): */
 struct audit_context;
@@ -629,6 +630,13 @@ struct wake_q_node {
 	struct wake_q_node *next;
 };
 
+struct kmap_ctrl {
+#ifdef CONFIG_KMAP_LOCAL
+	int				idx;
+	pte_t				pteval[KM_MAX_IDX];
+#endif
+};
+
 struct task_struct {
 #ifdef CONFIG_THREAD_INFO_IN_TASK
 	/*
@@ -1294,6 +1302,7 @@ struct task_struct {
 	unsigned int			sequential_io;
 	unsigned int			sequential_io_avg;
 #endif
+	struct kmap_ctrl		kmap_ctrl;
 #ifdef CONFIG_DEBUG_ATOMIC_SLEEP
 	unsigned long			task_state_change;
 #endif
--- a/kernel/entry/common.c
+++ b/kernel/entry/common.c
@@ -2,6 +2,7 @@
 
 #include <linux/context_tracking.h>
 #include <linux/entry-common.h>
+#include <linux/highmem.h>
 #include <linux/livepatch.h>
 #include <linux/audit.h>
 
@@ -194,6 +195,7 @@ static void exit_to_user_mode_prepare(st
 
 	/* Ensure that the address limit is intact and no locks are held */
 	addr_limit_user_check();
+	kmap_assert_nomap();
 	lockdep_assert_irqs_disabled();
 	lockdep_sys_exit();
 }
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -930,6 +930,7 @@ static struct task_struct *dup_task_stru
 	account_kernel_stack(tsk, 1);
 
 	kcov_task_init(tsk);
+	kmap_local_fork(tsk);
 
 #ifdef CONFIG_FAULT_INJECTION
 	tsk->fail_nth = 0;
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -4053,6 +4053,22 @@ static inline void finish_lock_switch(st
 # define finish_arch_post_lock_switch()	do { } while (0)
 #endif
 
+static inline void kmap_local_sched_out(void)
+{
+#ifdef CONFIG_KMAP_LOCAL
+	if (unlikely(current->kmap_ctrl.idx))
+		__kmap_local_sched_out();
+#endif
+}
+
+static inline void kmap_local_sched_in(void)
+{
+#ifdef CONFIG_KMAP_LOCAL
+	if (unlikely(current->kmap_ctrl.idx))
+		__kmap_local_sched_in();
+#endif
+}
+
 /**
  * prepare_task_switch - prepare to switch tasks
  * @rq: the runqueue preparing to switch
@@ -4075,6 +4091,7 @@ prepare_task_switch(struct rq *rq, struc
 	perf_event_task_sched_out(prev, next);
 	rseq_preempt(prev);
 	fire_sched_out_preempt_notifiers(prev, next);
+	kmap_local_sched_out();
 	prepare_task(next);
 	prepare_arch_switch(next);
 }
@@ -4141,6 +4158,7 @@ static struct rq *finish_task_switch(str
 	finish_lock_switch(rq);
 	finish_arch_post_lock_switch();
 	kcov_finish_switch(current);
+	kmap_local_sched_in();
 
 	fire_sched_in_preempt_notifiers(current);
 
 	/*
--- a/mm/highmem.c
+++ b/mm/highmem.c
@@ -365,8 +365,6 @@ EXPORT_SYMBOL(kunmap_high);
 
 #include <asm/kmap_size.h>
 
-static DEFINE_PER_CPU(int, __kmap_local_idx);
-
 /*
  * With DEBUG_HIGHMEM the stack depth is doubled and every second
  * slot is unused which acts as a guard page
@@ -379,23 +377,21 @@ static DEFINE_PER_CPU(int, __kmap_local_
 
 static inline int kmap_local_idx_push(void)
 {
-	int idx = __this_cpu_add_return(__kmap_local_idx, KM_INCR) - 1;
-
 	WARN_ON_ONCE(in_irq() && !irqs_disabled());
-	BUG_ON(idx >= KM_MAX_IDX);
-	return idx;
+	current->kmap_ctrl.idx += KM_INCR;
+	BUG_ON(current->kmap_ctrl.idx >= KM_MAX_IDX);
+	return current->kmap_ctrl.idx - 1;
 }
 
 static inline int kmap_local_idx(void)
 {
-	return __this_cpu_read(__kmap_local_idx) - 1;
+	return current->kmap_ctrl.idx - 1;
 }
 
 static inline void kmap_local_idx_pop(void)
 {
-	int idx = __this_cpu_sub_return(__kmap_local_idx, KM_INCR);
-
-	BUG_ON(idx < 0);
+	current->kmap_ctrl.idx -= KM_INCR;
+	BUG_ON(current->kmap_ctrl.idx < 0);
 }
 
 #ifndef arch_kmap_local_post_map
@@ -461,6 +457,7 @@ void *__kmap_local_pfn_prot(unsigned lon
 	pteval = pfn_pte(pfn, prot);
 	set_pte_at(&init_mm, vaddr, kmap_pte - idx, pteval);
 	arch_kmap_local_post_map(vaddr, pteval);
+	current->kmap_ctrl.pteval[kmap_local_idx()] = pteval;
 	preempt_enable();
 
 	return (void *)vaddr;
@@ -505,10 +502,92 @@ void kunmap_local_indexed(void *vaddr)
 	arch_kmap_local_pre_unmap(addr);
 	pte_clear(&init_mm, addr, kmap_pte - idx);
 	arch_kmap_local_post_unmap(addr);
+	current->kmap_ctrl.pteval[kmap_local_idx()] = __pte(0);
 	kmap_local_idx_pop();
 	preempt_enable();
 }
 EXPORT_SYMBOL(kunmap_local_indexed);
+
+/*
+ * Invoked before switch_to(). This is safe even when during or after
+ * clearing the maps an interrupt which needs a kmap_local happens because
+ * the task::kmap_ctrl.idx is not modified by the unmapping code so a
+ * nested kmap_local will use the next unused index and restore the index
+ * on unmap. The already cleared kmaps of the outgoing task are irrelevant
+ * because the interrupt context does not know about them. The same applies
+ * when scheduling back in for an interrupt which happens before the
+ * restore is complete.
+ */
+void __kmap_local_sched_out(void)
+{
+	struct task_struct *tsk = current;
+	pte_t *kmap_pte = kmap_get_pte();
+	int i;
+
+	/* Clear kmaps */
+	for (i = 0; i < tsk->kmap_ctrl.idx; i++) {
+		pte_t pteval = tsk->kmap_ctrl.pteval[i];
+		unsigned long addr;
+		int idx;
+
+		/* With debug all even slots are unmapped and act as guard */
+		if (IS_ENABLED(CONFIG_DEBUG_HIGHMEM) && !(i & 0x01)) {
+			WARN_ON_ONCE(!pte_none(pteval));
+			continue;
+		}
+		if (WARN_ON_ONCE(pte_none(pteval)))
+			continue;
+
+		/*
+		 * This is a horrible hack for XTENSA to calculate the
+		 * coloured PTE index. Uses the PFN encoded into the pteval
+		 * and the map index calculation because the actual mapped
+		 * virtual address is not stored in task::kmap_ctrl.
+		 * For any sane architecture this is optimized out.
+		 */
+		idx = arch_kmap_local_map_idx(i, pte_pfn(pteval));
+
+		addr = __fix_to_virt(FIX_KMAP_BEGIN + idx);
+		arch_kmap_local_pre_unmap(addr);
+		pte_clear(&init_mm, addr, kmap_pte - idx);
+		arch_kmap_local_post_unmap(addr);
+	}
+}
+
+void __kmap_local_sched_in(void)
+{
+	struct task_struct *tsk = current;
+	pte_t *kmap_pte = kmap_get_pte();
+	int i;
+
+	/* Restore kmaps */
+	for (i = 0; i < tsk->kmap_ctrl.idx; i++) {
+		pte_t pteval = tsk->kmap_ctrl.pteval[i];
+		unsigned long addr;
+		int idx;
+
+		/* With debug all even slots are unmapped and act as guard */
+		if (IS_ENABLED(CONFIG_DEBUG_HIGHMEM) && !(i & 0x01)) {
+			WARN_ON_ONCE(!pte_none(pteval));
+			continue;
+		}
+		if (WARN_ON_ONCE(pte_none(pteval)))
+			continue;
+
+		/* See comment in __kmap_local_sched_out() */
+		idx = arch_kmap_local_map_idx(i, pte_pfn(pteval));
+		addr = __fix_to_virt(FIX_KMAP_BEGIN + idx);
+		set_pte_at(&init_mm, addr, kmap_pte - idx, pteval);
+		arch_kmap_local_post_map(addr, pteval);
+	}
+}
+
+void kmap_local_fork(struct task_struct *tsk)
+{
+	if (WARN_ON_ONCE(tsk->kmap_ctrl.idx))
+		memset(&tsk->kmap_ctrl, 0, sizeof(tsk->kmap_ctrl));
+}
+
 #endif
 
 #if defined(HASHED_PAGE_VIRTUAL)
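For illustration, a minimal caller-side sketch (not part of the patch) of
what the per task storage enables. It assumes the kmap_local_page() /
kunmap_local() interfaces added later in this series; the function name and
buffer handling are made up for this example.

#include <linux/highmem.h>
#include <linux/string.h>

/* Copy a highmem page without a preempt disabled region around the map. */
static void copy_page_example(struct page *page, void *dst)
{
	void *src = kmap_local_page(page);

	/*
	 * The task may be preempted here. On the context switch the
	 * outgoing task's temporary mapping is torn down and restored
	 * from task::kmap_ctrl.pteval[] when it is scheduled back in,
	 * so 'src' stays valid for this task.
	 */
	memcpy(dst, src, PAGE_SIZE);
	kunmap_local(src);
}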
Linus Torvalds
2020-Nov-03 17:48 UTC
[Nouveau] [patch V3 22/37] highmem: High implementation details and document API
On Tue, Nov 3, 2020 at 2:33 AM Thomas Gleixner <tglx at linutronix.de> wrote:
>
> +static inline void *kmap(struct page *page)
> +{
> +	void *addr;
> +
> +	might_sleep();
> +	if (!PageHighMem(page))
> +		addr = page_address(page);
> +	else
> +		addr = kmap_high(page);
> +	kmap_flush_tlb((unsigned long)addr);
> +	return addr;
> +}
> +
> +static inline void kunmap(struct page *page)
> +{
> +	might_sleep();
> +	if (!PageHighMem(page))
> +		return;
> +	kunmap_high(page);
> +}

I have no complaints about the patch, but it strikes me that if people
want to actually have much better debug coverage, this is where it should
be (I like the "every other address" thing too, don't get me wrong).

In particular, instead of these PageHighMem(page) tests, I think something
like this would be better:

    #ifdef CONFIG_DEBUG_HIGHMEM
    #define page_use_kmap(page) ((page),1)
    #else
    #define page_use_kmap(page) PageHighMem(page)
    #endif

and then replace those "if (!PageHighMem(page))" tests with
"if (!page_use_kmap())" instead.

IOW, in debug mode, it would _always_ remap the page, whether it's highmem
or not. That would really stress the highmem code and find any
fragilities.

No?

Anyway, this is all separate from the series, which still looks fine to
me. Just a reaction to seeing the patch, and Thomas' earlier mention that
the highmem debugging doesn't actually do much.

               Linus
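For illustration, here is a sketch of the page_use_kmap() idea above folded
into the quoted helpers. This is hypothetical and not part of the posted
series; the (void) cast is only added to avoid an unused-value warning for
the otherwise ignored macro argument.

/* Hypothetical debug aid as suggested above, not part of the series. */
#ifdef CONFIG_DEBUG_HIGHMEM
/* Always take the kmap_high() path so the highmem machinery is exercised. */
#define page_use_kmap(page)	((void)(page), 1)
#else
#define page_use_kmap(page)	PageHighMem(page)
#endif

static inline void *kmap(struct page *page)
{
	void *addr;

	might_sleep();
	if (!page_use_kmap(page))
		addr = page_address(page);
	else
		addr = kmap_high(page);
	kmap_flush_tlb((unsigned long)addr);
	return addr;
}

static inline void kunmap(struct page *page)
{
	might_sleep();
	if (!page_use_kmap(page))
		return;
	kunmap_high(page);
}

For this to work as-is, kmap_high()/kunmap_high() would probably need
auditing for !PageHighMem pages first, since their pkmap accounting assumes
a highmem mapping, which is likely part of why this is floated as a debug
thought rather than folded into the series.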
Marek Szyprowski
2020-Nov-12 08:10 UTC
[Nouveau] [patch V3 10/37] ARM: highmem: Switch to generic kmap atomic
Hi Thomas,

On 03.11.2020 10:27, Thomas Gleixner wrote:
> No reason having the same code in every architecture.
>
> Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
> Cc: Russell King <linux at armlinux.org.uk>
> Cc: Arnd Bergmann <arnd at arndb.de>
> Cc: linux-arm-kernel at lists.infradead.org

This patch landed in linux-next 20201109 as commit 2a15ba82fa6c ("ARM:
highmem: Switch to generic kmap atomic"). However it causes the following
warning on my test boards (Samsung Exynos SoC based):

Run /sbin/init as init process
INIT: version 2.88 booting
------------[ cut here ]------------
WARNING: CPU: 3 PID: 120 at mm/highmem.c:502 kunmap_local_indexed+0x194/0x1d0
Modules linked in:
CPU: 3 PID: 120 Comm: init Not tainted 5.10.0-rc2-00010-g2a15ba82fa6c #1924
Hardware name: Samsung Exynos (Flattened Device Tree)
[<c0111514>] (unwind_backtrace) from [<c010ceb8>] (show_stack+0x10/0x14)
[<c010ceb8>] (show_stack) from [<c0b1b408>] (dump_stack+0xb4/0xd4)
[<c0b1b408>] (dump_stack) from [<c0126988>] (__warn+0x98/0x104)
[<c0126988>] (__warn) from [<c0126aa4>] (warn_slowpath_fmt+0xb0/0xb8)
[<c0126aa4>] (warn_slowpath_fmt) from [<c028e22c>] (kunmap_local_indexed+0x194/0x1d0)
[<c028e22c>] (kunmap_local_indexed) from [<c02d37f4>] (remove_arg_zero+0xa0/0x158)
[<c02d37f4>] (remove_arg_zero) from [<c034cfc8>] (load_script+0x250/0x318)
[<c034cfc8>] (load_script) from [<c02d2f7c>] (bprm_execve+0x3d0/0x930)
[<c02d2f7c>] (bprm_execve) from [<c02d3dc8>] (do_execveat_common+0x174/0x184)
[<c02d3dc8>] (do_execveat_common) from [<c02d4cec>] (sys_execve+0x30/0x38)
[<c02d4cec>] (sys_execve) from [<c0100060>] (ret_fast_syscall+0x0/0x28)
Exception stack(0xc4561fa8 to 0xc4561ff0)
1fa0:                   b6f2bab8 bef7dac4 bef7dac4 bef7d8fc 004b9b58 bef7dac8
1fc0: b6f2bab8 bef7dac4 bef7d8fc 0000000b 004b8000 004bac44 bef7da3c bef7d8dc
1fe0: 0000002f bef7d89c b6d6dc74 b6d6d65c
irq event stamp: 1283
hardirqs last  enabled at (1293): [<c019f564>] console_unlock+0x430/0x6b0
hardirqs last disabled at (1302): [<c019f55c>] console_unlock+0x428/0x6b0
softirqs last  enabled at (1282): [<c0101768>] __do_softirq+0x528/0x674
softirqs last disabled at (1269): [<c012fed4>] irq_exit+0x1dc/0x1e8
---[ end trace 6f32a2fb4294655f ]---

I can do more tests to help fixing this issue. Just let me know what to do.

...

Best regards
-- 
Marek Szyprowski, PhD
Samsung R&D Institute Poland
Thomas Gleixner
2020-Nov-12 11:03 UTC
[Nouveau] [patch V3 10/37] ARM: highmem: Switch to generic kmap atomic
Marek,

On Thu, Nov 12 2020 at 09:10, Marek Szyprowski wrote:
> On 03.11.2020 10:27, Thomas Gleixner wrote:
> I can do more tests to help fixing this issue. Just let me know what to do.

Just sent out the fix before I saw your report.

https://lore.kernel.org/r/87y2j6n8mj.fsf at nanos.tec.linutronix.de

Thanks,

        tglx
Sebastian Andrzej Siewior
2020-Nov-12 11:07 UTC
[Nouveau] [patch V3 10/37] ARM: highmem: Switch to generic kmap atomic
On 2020-11-12 09:10:34 [+0100], Marek Szyprowski wrote:
> I can do more tests to help fixing this issue. Just let me know what to do.

-> https://lkml.kernel.org/r/87y2j6n8mj.fsf at nanos.tec.linutronix.de

Sebastian