Hi Dan, Jérôme and Jason,

below is a series that cleans up the dev_pagemap interface so that it is
more easily usable, which removes the need to wrap it in hmm and thus
allows us to kill a lot of code.

Note: this series is on top of Linux 5.2-rc6 and has some minor conflicts
with the hmm tree that are easy to resolve.

Diffstat summary:

 34 files changed, 379 insertions(+), 1016 deletions(-)

Git tree:

    git://git.infradead.org/users/hch/misc.git hmm-devmem-cleanup.4

Gitweb:

    http://git.infradead.org/users/hch/misc.git/shortlog/refs/heads/hmm-devmem-cleanup.4

Changes since v3:
 - pull in "mm/swap: Fix release_pages() when releasing devmap pages"
   and rebase the other patches on top of that
 - fold the hmm_devmem_add_resource into the DEVICE_PUBLIC memory
   removal patch
 - remove _vm_normal_page as it isn't needed without DEVICE_PUBLIC
   memory
 - pick up various ACKs

Changes since v2:
 - fix nvdimm kunit build
 - add a new memory type for device dax
 - fix a few issues in intermediate patches that didn't show up in the
   end result
 - incorporate feedback from Michal Hocko, including killing of the
   DEVICE_PUBLIC memory type entirely

Changes since v1:
 - rebase
 - also switch p2pdma to the internal refcount
 - add type checking for pgmap->type
 - rename the migrate method to migrate_to_ram
 - cleanup the altmap_valid flag
 - various tidbits from the reviews
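For readers following the series, the end state is that a driver fills in
a dev_pagemap directly instead of going through the hmm_devmem wrappers.
A minimal, illustrative sketch of that usage, assuming the post-series
interface (the pgmap->type checking, internal refcount, and migrate_to_ram
method named in the changelog above; my_page_free() and
my_migrate_to_ram() are hypothetical driver callbacks):

	#include <linux/memremap.h>

	/* Called when the last reference to a device page is dropped. */
	static void my_page_free(struct page *page)
	{
		/* hand the backing device memory back to the driver allocator */
	}

	/* Called on CPU access to a device-private page: migrate it back. */
	static vm_fault_t my_migrate_to_ram(struct vm_fault *vmf)
	{
		/* copy the device page backing vmf->page to system memory */
		return 0;
	}

	static const struct dev_pagemap_ops my_pagemap_ops = {
		.page_free	= my_page_free,
		.migrate_to_ram	= my_migrate_to_ram,
	};

	pgmap->type = MEMORY_DEVICE_PRIVATE;
	pgmap->ops = &my_pagemap_ops;
	addr = devm_memremap_pages(dev, pgmap);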
Christoph Hellwig
2019-Jul-01 06:19 UTC
[Nouveau] [PATCH 01/22] mm/hmm.c: suppress compilation warnings when CONFIG_HUGETLB_PAGE is not set
From: Jason Gunthorpe <jgg at mellanox.com>

gcc reports that several variables are defined but not used.

For the first hunk, CONFIG_HUGETLB_PAGE, the entire if block is already
protected by pud_huge(), which is forced to 0. None of the stuff under the
ifdef causes compilation problems as it is already stubbed out in the
header files.

For the second hunk the dummy huge_page_shift macro doesn't touch the
argument, so just inline the argument.

Link: http://lkml.kernel.org/r/20190522195151.GA23955 at ziepe.ca
Signed-off-by: Jason Gunthorpe <jgg at mellanox.com>
Reviewed-by: Andrew Morton <akpm at linux-foundation.org>
Reviewed-by: Ira Weiny <ira.weiny at intel.com>
Signed-off-by: Andrew Morton <akpm at linux-foundation.org>
---
 mm/hmm.c | 9 ++-------
 1 file changed, 2 insertions(+), 7 deletions(-)

diff --git a/mm/hmm.c b/mm/hmm.c
index c5d840e34b28..c62ae414a3a2 100644
--- a/mm/hmm.c
+++ b/mm/hmm.c
@@ -788,7 +788,6 @@ static int hmm_vma_walk_pud(pud_t *pudp,
 			return hmm_vma_walk_hole_(addr, end, fault,
 						  write_fault, walk);
 
-#ifdef CONFIG_HUGETLB_PAGE
 		pfn = pud_pfn(pud) + ((addr & ~PUD_MASK) >> PAGE_SHIFT);
 		for (i = 0; i < npages; ++i, ++pfn) {
 			hmm_vma_walk->pgmap = get_dev_pagemap(pfn,
@@ -804,9 +803,6 @@ static int hmm_vma_walk_pud(pud_t *pudp,
 		}
 		hmm_vma_walk->last = end;
 		return 0;
-#else
-		return -EINVAL;
-#endif
 	}
 
 	split_huge_pud(walk->vma, pudp, addr);
@@ -1015,9 +1011,8 @@ long hmm_range_snapshot(struct hmm_range *range)
 			return -EFAULT;
 
 		if (is_vm_hugetlb_page(vma)) {
-			struct hstate *h = hstate_vma(vma);
-
-			if (huge_page_shift(h) != range->page_shift &&
+			if (huge_page_shift(hstate_vma(vma)) !=
+			    range->page_shift &&
 			    range->page_shift != PAGE_SHIFT)
 				return -EINVAL;
 		} else {
-- 
2.20.1
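For context, the !CONFIG_HUGETLB_PAGE stubs the first paragraph relies on
live in include/linux/hugetlb.h and look roughly like this (quoted from
memory, so treat the exact spelling as approximate): pud_huge() is a
constant 0, which lets the compiler discard the whole block, and the
huge_page_shift() dummy never looks at its hstate argument:

	/* include/linux/hugetlb.h, !CONFIG_HUGETLB_PAGE (approximate) */
	#define pud_huge(x)		0
	#define huge_page_shift(h)	PAGE_SHIFT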
Christoph Hellwig
2019-Jul-01 06:20 UTC
[Nouveau] [PATCH 02/22] mm/hmm: update HMM documentation
From: Ralph Campbell <rcampbell at nvidia.com> Update the HMM documentation to reflect the latest API and make a few minor wording changes. Cc: John Hubbard <jhubbard at nvidia.com> Cc: Ira Weiny <ira.weiny at intel.com> Cc: Dan Williams <dan.j.williams at intel.com> Cc: Arnd Bergmann <arnd at arndb.de> Cc: Balbir Singh <bsingharora at gmail.com> Cc: Dan Carpenter <dan.carpenter at oracle.com> Cc: Matthew Wilcox <willy at infradead.org> Cc: Souptick Joarder <jrdr.linux at gmail.com> Cc: Andrew Morton <akpm at linux-foundation.org> Signed-off-by: Ralph Campbell <rcampbell at nvidia.com> Reviewed-by: Jérôme Glisse <jglisse at redhat.com> Signed-off-by: Jason Gunthorpe <jgg at mellanox.com> --- Documentation/vm/hmm.rst | 141 ++++++++++++++++++++------------------- include/linux/hmm.h | 7 +- 2 files changed, 78 insertions(+), 70 deletions(-) diff --git a/Documentation/vm/hmm.rst b/Documentation/vm/hmm.rst index 7cdf7282e022..7b6eeda5a7c0 100644 --- a/Documentation/vm/hmm.rst +++ b/Documentation/vm/hmm.rst @@ -10,7 +10,7 @@ of this being specialized struct page for such memory (see sections 5 to 7 of this document). HMM also provides optional helpers for SVM (Share Virtual Memory), i.e., -allowing a device to transparently access program address coherently with +allowing a device to transparently access program addresses coherently with the CPU meaning that any valid pointer on the CPU is also a valid pointer for the device. This is becoming mandatory to simplify the use of advanced heterogeneous computing where GPU, DSP, or FPGA are used to perform various @@ -22,8 +22,8 @@ expose the hardware limitations that are inherent to many platforms. The third section gives an overview of the HMM design. The fourth section explains how CPU page-table mirroring works and the purpose of HMM in this context. The fifth section deals with how device memory is represented inside the kernel. -Finally, the last section presents a new migration helper that allows lever- -aging the device DMA engine. +Finally, the last section presents a new migration helper that allows +leveraging the device DMA engine. .. contents:: :local: @@ -39,20 +39,20 @@ address space. I use shared address space to refer to the opposite situation: i.e., one in which any application memory region can be used by a device transparently. -Split address space happens because device can only access memory allocated -through device specific API. This implies that all memory objects in a program +Split address space happens because devices can only access memory allocated +through a device specific API. This implies that all memory objects in a program are not equal from the device point of view which complicates large programs that rely on a wide set of libraries. -Concretely this means that code that wants to leverage devices like GPUs needs -to copy object between generically allocated memory (malloc, mmap private, mmap +Concretely, this means that code that wants to leverage devices like GPUs needs +to copy objects between generically allocated memory (malloc, mmap private, mmap share) and memory allocated through the device driver API (this still ends up with an mmap but of the device file). For flat data sets (array, grid, image, ...) this isn't too hard to achieve but -complex data sets (list, tree, ...) are hard to get right. Duplicating a +for complex data sets (list, tree, ...) it's hard to get right. Duplicating a complex data set needs to re-map all the pointer relations between each of its -elements. 
This is error prone and program gets harder to debug because of the +elements. This is error prone and programs get harder to debug because of the duplicate data set and addresses. Split address space also means that libraries cannot transparently use data @@ -77,12 +77,12 @@ I/O bus, device memory characteristics I/O buses cripple shared address spaces due to a few limitations. Most I/O buses only allow basic memory access from device to main memory; even cache -coherency is often optional. Access to device memory from CPU is even more +coherency is often optional. Access to device memory from a CPU is even more limited. More often than not, it is not cache coherent. If we only consider the PCIE bus, then a device can access main memory (often through an IOMMU) and be cache coherent with the CPUs. However, it only allows -a limited set of atomic operations from device on main memory. This is worse +a limited set of atomic operations from the device on main memory. This is worse in the other direction: the CPU can only access a limited range of the device memory and cannot perform atomic operations on it. Thus device memory cannot be considered the same as regular memory from the kernel point of view. @@ -93,20 +93,20 @@ The final limitation is latency. Access to main memory from the device has an order of magnitude higher latency than when the device accesses its own memory. Some platforms are developing new I/O buses or additions/modifications to PCIE -to address some of these limitations (OpenCAPI, CCIX). They mainly allow two- -way cache coherency between CPU and device and allow all atomic operations the +to address some of these limitations (OpenCAPI, CCIX). They mainly allow +two-way cache coherency between CPU and device and allow all atomic operations the architecture supports. Sadly, not all platforms are following this trend and some major architectures are left without hardware solutions to these problems. So for shared address space to make sense, not only must we allow devices to access any memory but we must also permit any memory to be migrated to device -memory while device is using it (blocking CPU access while it happens). +memory while the device is using it (blocking CPU access while it happens). Shared address space and migration ================================= -HMM intends to provide two main features. First one is to share the address +HMM intends to provide two main features. The first one is to share the address space by duplicating the CPU page table in the device page table so the same address points to the same physical memory for any valid main memory address in the process address space. @@ -121,14 +121,14 @@ why HMM provides helpers to factor out everything that can be while leaving the hardware specific details to the device driver. The second mechanism HMM provides is a new kind of ZONE_DEVICE memory that -allows allocating a struct page for each page of the device memory. Those pages +allows allocating a struct page for each page of device memory. Those pages are special because the CPU cannot map them. However, they allow migrating main memory to device memory using existing migration mechanisms and everything -looks like a page is swapped out to disk from the CPU point of view. Using a -struct page gives the easiest and cleanest integration with existing mm mech- -anisms. Here again, HMM only provides helpers, first to hotplug new ZONE_DEVICE +looks like a page that is swapped out to disk from the CPU point of view. 
Using a +struct page gives the easiest and cleanest integration with existing mm +mechanisms. Here again, HMM only provides helpers, first to hotplug new ZONE_DEVICE memory for the device memory and second to perform migration. Policy decisions -of what and when to migrate things is left to the device driver. +of what and when to migrate is left to the device driver. Note that any CPU access to a device page triggers a page fault and a migration back to main memory. For example, when a page backing a given CPU address A is @@ -136,8 +136,8 @@ migrated from a main memory page to a device page, then any CPU access to address A triggers a page fault and initiates a migration back to main memory. With these two features, HMM not only allows a device to mirror process address -space and keeping both CPU and device page table synchronized, but also lever- -ages device memory by migrating the part of the data set that is actively being +space and keeps both CPU and device page tables synchronized, but also +leverages device memory by migrating the part of the data set that is actively being used by the device. @@ -151,21 +151,28 @@ registration of an hmm_mirror struct:: int hmm_mirror_register(struct hmm_mirror *mirror, struct mm_struct *mm); - int hmm_mirror_register_locked(struct hmm_mirror *mirror, - struct mm_struct *mm); - -The locked variant is to be used when the driver is already holding mmap_sem -of the mm in write mode. The mirror struct has a set of callbacks that are used +The mirror struct has a set of callbacks that are used to propagate CPU page tables:: struct hmm_mirror_ops { + /* release() - release hmm_mirror + * + * @mirror: pointer to struct hmm_mirror + * + * This is called when the mm_struct is being released. The callback + * must ensure that all access to any pages obtained from this mirror + * is halted before the callback returns. All future access should + * fault. + */ + void (*release)(struct hmm_mirror *mirror); + /* sync_cpu_device_pagetables() - synchronize page tables * * @mirror: pointer to struct hmm_mirror - * @update_type: type of update that occurred to the CPU page table - * @start: virtual start address of the range to update - * @end: virtual end address of the range to update + * @update: update information (see struct mmu_notifier_range) + * Return: -EAGAIN if update.blockable false and callback need to + * block, 0 otherwise. * * This callback ultimately originates from mmu_notifiers when the CPU * page table is updated. The device driver must update its page table @@ -176,14 +183,12 @@ to propagate CPU page tables:: * page tables are completely updated (TLBs flushed, etc); this is a * synchronous call. */ - void (*update)(struct hmm_mirror *mirror, - enum hmm_update action, - unsigned long start, - unsigned long end); + int (*sync_cpu_device_pagetables)(struct hmm_mirror *mirror, + const struct hmm_update *update); }; The device driver must perform the update action to the range (mark range -read only, or fully unmap, ...). The device must be done with the update before +read only, or fully unmap, etc.). The device must complete the update before the driver callback returns. When the device driver wants to populate a range of virtual addresses, it can @@ -194,17 +199,18 @@ use either:: The first one (hmm_range_snapshot()) will only fetch present CPU page table entries and will not trigger a page fault on missing or non-present entries. -The second one does trigger a page fault on missing or read-only entry if the -write parameter is true. 
Page faults use the generic mm page fault code path -just like a CPU page fault. +The second one does trigger a page fault on missing or read-only entries if +write access is requested (see below). Page faults use the generic mm page +fault code path just like a CPU page fault. Both functions copy CPU page table entries into their pfns array argument. Each entry in that array corresponds to an address in the virtual range. HMM provides a set of flags to help the driver identify special CPU page table entries. -Locking with the update() callback is the most important aspect the driver must -respect in order to keep things properly synchronized. The usage pattern is:: +Locking within the sync_cpu_device_pagetables() callback is the most important +aspect the driver must respect in order to keep things properly synchronized. +The usage pattern is:: int driver_populate_range(...) { @@ -239,11 +245,11 @@ respect in order to keep things properly synchronized. The usage pattern is:: hmm_range_wait_until_valid(&range, TIMEOUT_IN_MSEC); goto again; } - hmm_mirror_unregister(&range); + hmm_range_unregister(&range); return ret; } take_lock(driver->update); - if (!range.valid) { + if (!hmm_range_valid(&range)) { release_lock(driver->update); up_read(&mm->mmap_sem); goto again; @@ -251,15 +257,15 @@ respect in order to keep things properly synchronized. The usage pattern is:: // Use pfns array content to update device page table - hmm_mirror_unregister(&range); + hmm_range_unregister(&range); release_lock(driver->update); up_read(&mm->mmap_sem); return 0; } The driver->update lock is the same lock that the driver takes inside its -update() callback. That lock must be held before checking the range.valid -field to avoid any race with a concurrent CPU page table update. +sync_cpu_device_pagetables() callback. That lock must be held before calling +hmm_range_valid() to avoid any race with a concurrent CPU page table update. HMM implements all this on top of the mmu_notifier API because we wanted a simpler API and also to be able to perform optimizations latter on like doing @@ -279,46 +285,47 @@ concurrently). Leverage default_flags and pfn_flags_mask ======================================== -The hmm_range struct has 2 fields default_flags and pfn_flags_mask that allows -to set fault or snapshot policy for a whole range instead of having to set them -for each entries in the range. +The hmm_range struct has 2 fields, default_flags and pfn_flags_mask, that specify +fault or snapshot policy for the whole range instead of having to set them +for each entry in the pfns array. + +For instance, if the device flags for range.flags are:: -For instance if the device flags for device entries are: - VALID (1 << 63) - WRITE (1 << 62) + range.flags[HMM_PFN_VALID] = (1 << 63); + range.flags[HMM_PFN_WRITE] = (1 << 62); -Now let say that device driver wants to fault with at least read a range then -it does set:: +and the device driver wants pages for a range with at least read permission, +it sets:: range->default_flags = (1 << 63); range->pfn_flags_mask = 0; -and calls hmm_range_fault() as described above. This will fill fault all page +and calls hmm_range_fault() as described above. This will fill fault all pages in the range with at least read permission. -Now let say driver wants to do the same except for one page in the range for -which its want to have write. Now driver set:: +Now let's say the driver wants to do the same except for one page in the range for +which it wants to have write permission. 
Now driver set:: range->default_flags = (1 << 63); range->pfn_flags_mask = (1 << 62); range->pfns[index_of_write] = (1 << 62); -With this HMM will fault in all page with at least read (ie valid) and for the +With this, HMM will fault in all pages with at least read (i.e., valid) and for the address == range->start + (index_of_write << PAGE_SHIFT) it will fault with -write permission ie if the CPU pte does not have write permission set then HMM +write permission i.e., if the CPU pte does not have write permission set then HMM will call handle_mm_fault(). -Note that HMM will populate the pfns array with write permission for any entry -that have write permission within the CPU pte no matter what are the values set +Note that HMM will populate the pfns array with write permission for any page +that is mapped with CPU write permission no matter what values are set in default_flags or pfn_flags_mask. Represent and manage device memory from core kernel point of view ================================================================ -Several different designs were tried to support device memory. First one used -a device specific data structure to keep information about migrated memory and -HMM hooked itself in various places of mm code to handle any access to +Several different designs were tried to support device memory. The first one +used a device specific data structure to keep information about migrated memory +and HMM hooked itself in various places of mm code to handle any access to addresses that were backed by device memory. It turns out that this ended up replicating most of the fields of struct page and also needed many kernel code paths to be updated to understand this new kind of memory. @@ -341,7 +348,7 @@ The hmm_devmem_ops is where most of the important things are:: struct hmm_devmem_ops { void (*free)(struct hmm_devmem *devmem, struct page *page); - int (*fault)(struct hmm_devmem *devmem, + vm_fault_t (*fault)(struct hmm_devmem *devmem, struct vm_area_struct *vma, unsigned long addr, struct page *page, @@ -417,9 +424,9 @@ willing to pay to keep all the code simpler. Memory cgroup (memcg) and rss accounting ======================================= -For now device memory is accounted as any regular page in rss counters (either +For now, device memory is accounted as any regular page in rss counters (either anonymous if device page is used for anonymous, file if device page is used for -file backed page or shmem if device page is used for shared memory). This is a +file backed page, or shmem if device page is used for shared memory). This is a deliberate choice to keep existing applications, that might start using device memory without knowing about it, running unimpacted. @@ -439,6 +446,6 @@ get more experience in how device memory is used and its impact on memory resource control. -Note that device memory can never be pinned by device driver nor through GUP +Note that device memory can never be pinned by a device driver nor through GUP and thus such memory is always free upon process exit. Or when last reference is dropped in case of shared memory or file backed memory. diff --git a/include/linux/hmm.h b/include/linux/hmm.h index 044a36d7c3f8..740bb00853f5 100644 --- a/include/linux/hmm.h +++ b/include/linux/hmm.h @@ -418,9 +418,10 @@ struct hmm_mirror_ops { * * @mirror: pointer to struct hmm_mirror * - * This is called when the mm_struct is being released. - * The callback should make sure no references to the mirror occur - * after the callback returns. 
+ * This is called when the mm_struct is being released. The callback + * must ensure that all access to any pages obtained from this mirror + * is halted before the callback returns. All future access should + * fault. */ void (*release)(struct hmm_mirror *mirror); -- 2.20.1
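To make the callback contract above concrete, a hypothetical driver-side
implementation of the two hmm_mirror_ops documented in this patch could
look like the following sketch (my_dev, my_invalidate_device_ptes() and
my_stop_device_access() are made-up driver helpers):

	static int my_sync_cpu_device_pagetables(struct hmm_mirror *mirror,
						 const struct hmm_update *update)
	{
		struct my_dev *dev = container_of(mirror, struct my_dev, mirror);

		/* Honor the documented non-blocking contract. */
		if (!update->blockable)
			return -EAGAIN;
		my_invalidate_device_ptes(dev, update->start, update->end);
		return 0;
	}

	static void my_release(struct hmm_mirror *mirror)
	{
		struct my_dev *dev = container_of(mirror, struct my_dev, mirror);

		/* All access through this mirror must be halted on return. */
		my_stop_device_access(dev);
	}

	static const struct hmm_mirror_ops my_mirror_ops = {
		.sync_cpu_device_pagetables	= my_sync_cpu_device_pagetables,
		.release			= my_release,
	};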
Christoph Hellwig
2019-Jul-01 06:20 UTC
[Nouveau] [PATCH 03/22] mm/hmm: clean up some coding style and comments
From: Ralph Campbell <rcampbell at nvidia.com> There are no functional changes, just some coding style clean ups and minor comment changes. Cc: John Hubbard <jhubbard at nvidia.com> Cc: Ira Weiny <ira.weiny at intel.com> Cc: Dan Williams <dan.j.williams at intel.com> Cc: Arnd Bergmann <arnd at arndb.de> Cc: Balbir Singh <bsingharora at gmail.com> Cc: Dan Carpenter <dan.carpenter at oracle.com> Cc: Matthew Wilcox <willy at infradead.org> Cc: Souptick Joarder <jrdr.linux at gmail.com> Cc: Andrew Morton <akpm at linux-foundation.org> Signed-off-by: Ralph Campbell <rcampbell at nvidia.com> Reviewed-by: Jérôme Glisse <jglisse at redhat.com> Signed-off-by: Jason Gunthorpe <jgg at mellanox.com> --- include/linux/hmm.h | 71 +++++++++++++++++++++++---------------------- mm/hmm.c | 62 ++++++++++++++++++++------------------- 2 files changed, 68 insertions(+), 65 deletions(-) diff --git a/include/linux/hmm.h b/include/linux/hmm.h index 740bb00853f5..7007123842ba 100644 --- a/include/linux/hmm.h +++ b/include/linux/hmm.h @@ -21,8 +21,8 @@ * * HMM address space mirroring API: * - * Use HMM address space mirroring if you want to mirror range of the CPU page - * table of a process into a device page table. Here, "mirror" means "keep + * Use HMM address space mirroring if you want to mirror a range of the CPU + * page tables of a process into a device page table. Here, "mirror" means "keep * synchronized". Prerequisites: the device must provide the ability to write- * protect its page tables (at PAGE_SIZE granularity), and must be able to * recover from the resulting potential page faults. @@ -105,10 +105,11 @@ struct hmm { * HMM_PFN_WRITE: CPU page table has write permission set * HMM_PFN_DEVICE_PRIVATE: private device memory (ZONE_DEVICE) * - * The driver provide a flags array, if driver valid bit for an entry is bit - * 3 ie (entry & (1 << 3)) is true if entry is valid then driver must provide + * The driver provides a flags array for mapping page protections to device + * PTE bits. If the driver valid bit for an entry is bit 3, + * i.e., (entry & (1 << 3)), then the driver must provide * an array in hmm_range.flags with hmm_range.flags[HMM_PFN_VALID] == 1 << 3. - * Same logic apply to all flags. This is same idea as vm_page_prot in vma + * Same logic apply to all flags. This is the same idea as vm_page_prot in vma * except that this is per device driver rather than per architecture. */ enum hmm_pfn_flag_e { @@ -129,13 +130,13 @@ enum hmm_pfn_flag_e { * be mirrored by a device, because the entry will never have HMM_PFN_VALID * set and the pfn value is undefined. * - * Driver provide entry value for none entry, error entry and special entry, - * driver can alias (ie use same value for error and special for instance). It - * should not alias none and error or special. + * Driver provides values for none entry, error entry, and special entry. + * Driver can alias (i.e., use same value) error and special, but + * it should not alias none with error or special. * * HMM pfn value returned by hmm_vma_get_pfns() or hmm_vma_fault() will be: * hmm_range.values[HMM_PFN_ERROR] if CPU page table entry is poisonous, - * hmm_range.values[HMM_PFN_NONE] if there is no CPU page table + * hmm_range.values[HMM_PFN_NONE] if there is no CPU page table entry, * hmm_range.values[HMM_PFN_SPECIAL] if CPU page table entry is a special one */ enum hmm_pfn_value_e { @@ -158,6 +159,7 @@ enum hmm_pfn_value_e { * @values: pfn value for some special case (none, special, error, ...) 
* @default_flags: default flags for the range (write, read, ... see hmm doc) * @pfn_flags_mask: allows to mask pfn flags so that only default_flags matter + * @page_shift: device virtual address shift value (should be >= PAGE_SHIFT) * @pfn_shifts: pfn shift value (should be <= PAGE_SHIFT) * @valid: pfns array did not change since it has been fill by an HMM function */ @@ -180,7 +182,7 @@ struct hmm_range { /* * hmm_range_page_shift() - return the page shift for the range * @range: range being queried - * Returns: page shift (page size = 1 << page shift) for the range + * Return: page shift (page size = 1 << page shift) for the range */ static inline unsigned hmm_range_page_shift(const struct hmm_range *range) { @@ -190,7 +192,7 @@ static inline unsigned hmm_range_page_shift(const struct hmm_range *range) /* * hmm_range_page_size() - return the page size for the range * @range: range being queried - * Returns: page size for the range in bytes + * Return: page size for the range in bytes */ static inline unsigned long hmm_range_page_size(const struct hmm_range *range) { @@ -201,7 +203,7 @@ static inline unsigned long hmm_range_page_size(const struct hmm_range *range) * hmm_range_wait_until_valid() - wait for range to be valid * @range: range affected by invalidation to wait on * @timeout: time out for wait in ms (ie abort wait after that period of time) - * Returns: true if the range is valid, false otherwise. + * Return: true if the range is valid, false otherwise. */ static inline bool hmm_range_wait_until_valid(struct hmm_range *range, unsigned long timeout) @@ -222,7 +224,7 @@ static inline bool hmm_range_wait_until_valid(struct hmm_range *range, /* * hmm_range_valid() - test if a range is valid or not * @range: range - * Returns: true if the range is valid, false otherwise. + * Return: true if the range is valid, false otherwise. */ static inline bool hmm_range_valid(struct hmm_range *range) { @@ -233,7 +235,7 @@ static inline bool hmm_range_valid(struct hmm_range *range) * hmm_device_entry_to_page() - return struct page pointed to by a device entry * @range: range use to decode device entry value * @entry: device entry value to get corresponding struct page from - * Returns: struct page pointer if entry is a valid, NULL otherwise + * Return: struct page pointer if entry is a valid, NULL otherwise * * If the device entry is valid (ie valid flag set) then return the struct page * matching the entry value. Otherwise return NULL. 
@@ -256,7 +258,7 @@ static inline struct page *hmm_device_entry_to_page(const struct hmm_range *rang * hmm_device_entry_to_pfn() - return pfn value store in a device entry * @range: range use to decode device entry value * @entry: device entry to extract pfn from - * Returns: pfn value if device entry is valid, -1UL otherwise + * Return: pfn value if device entry is valid, -1UL otherwise */ static inline unsigned long hmm_device_entry_to_pfn(const struct hmm_range *range, uint64_t pfn) @@ -276,7 +278,7 @@ hmm_device_entry_to_pfn(const struct hmm_range *range, uint64_t pfn) * hmm_device_entry_from_page() - create a valid device entry for a page * @range: range use to encode HMM pfn value * @page: page for which to create the device entry - * Returns: valid device entry for the page + * Return: valid device entry for the page */ static inline uint64_t hmm_device_entry_from_page(const struct hmm_range *range, struct page *page) @@ -289,7 +291,7 @@ static inline uint64_t hmm_device_entry_from_page(const struct hmm_range *range, * hmm_device_entry_from_pfn() - create a valid device entry value from pfn * @range: range use to encode HMM pfn value * @pfn: pfn value for which to create the device entry - * Returns: valid device entry for the pfn + * Return: valid device entry for the pfn */ static inline uint64_t hmm_device_entry_from_pfn(const struct hmm_range *range, unsigned long pfn) @@ -394,7 +396,7 @@ enum hmm_update_event { }; /* - * struct hmm_update - HMM update informations for callback + * struct hmm_update - HMM update information for callback * * @start: virtual start address of the range to update * @end: virtual end address of the range to update @@ -428,8 +430,8 @@ struct hmm_mirror_ops { /* sync_cpu_device_pagetables() - synchronize page tables * * @mirror: pointer to struct hmm_mirror - * @update: update informations (see struct hmm_update) - * Returns: -EAGAIN if update.blockable false and callback need to + * @update: update information (see struct hmm_update) + * Return: -EAGAIN if update.blockable false and callback need to * block, 0 otherwise. * * This callback ultimately originates from mmu_notifiers when the CPU @@ -468,13 +470,13 @@ void hmm_mirror_unregister(struct hmm_mirror *mirror); /* * hmm_mirror_mm_is_alive() - test if mm is still alive * @mirror: the HMM mm mirror for which we want to lock the mmap_sem - * Returns: false if the mm is dead, true otherwise + * Return: false if the mm is dead, true otherwise * - * This is an optimization it will not accurately always return -EINVAL if the - * mm is dead ie there can be false negative (process is being kill but HMM is - * not yet inform of that). It is only intented to be use to optimize out case - * where driver is about to do something time consuming and it would be better - * to skip it if the mm is dead. + * This is an optimization, it will not always accurately return false if the + * mm is dead; i.e., there can be false negatives (process is being killed but + * HMM is not yet informed of that). It is only intended to be used to optimize + * out cases where the driver is about to do something time consuming and it + * would be better to skip it if the mm is dead. */ static inline bool hmm_mirror_mm_is_alive(struct hmm_mirror *mirror) { @@ -489,7 +491,6 @@ static inline bool hmm_mirror_mm_is_alive(struct hmm_mirror *mirror) return true; } - /* * Please see Documentation/vm/hmm.rst for how to use the range API. 
*/ @@ -562,7 +563,7 @@ static inline int hmm_vma_fault(struct hmm_range *range, bool block) ret = hmm_range_fault(range, block); if (ret <= 0) { if (ret == -EBUSY || !ret) { - /* Same as above drop mmap_sem to match old API. */ + /* Same as above, drop mmap_sem to match old API. */ up_read(&range->vma->vm_mm->mmap_sem); ret = -EBUSY; } else if (ret == -EAGAIN) @@ -629,7 +630,7 @@ struct hmm_devmem_ops { * @page: pointer to struct page backing virtual address (unreliable) * @flags: FAULT_FLAG_* (see include/linux/mm.h) * @pmdp: page middle directory - * Returns: VM_FAULT_MINOR/MAJOR on success or one of VM_FAULT_ERROR + * Return: VM_FAULT_MINOR/MAJOR on success or one of VM_FAULT_ERROR * on error * * The callback occurs whenever there is a CPU page fault or GUP on a @@ -637,14 +638,14 @@ struct hmm_devmem_ops { * page back to regular memory (CPU accessible). * * The device driver is free to migrate more than one page from the - * fault() callback as an optimization. However if device decide to - * migrate more than one page it must always priotirize the faulting + * fault() callback as an optimization. However if the device decides + * to migrate more than one page it must always priotirize the faulting * address over the others. * - * The struct page pointer is only given as an hint to allow quick + * The struct page pointer is only given as a hint to allow quick * lookup of internal device driver data. A concurrent migration - * might have already free that page and the virtual address might - * not longer be back by it. So it should not be modified by the + * might have already freed that page and the virtual address might + * no longer be backed by it. So it should not be modified by the * callback. * * Note that mmap semaphore is held in read mode at least when this @@ -671,7 +672,7 @@ struct hmm_devmem_ops { * @ref: per CPU refcount * @page_fault: callback when CPU fault on an unaddressable device page * - * This an helper structure for device drivers that do not wish to implement + * This is a helper structure for device drivers that do not wish to implement * the gory details related to hotplugging new memoy and allocating struct * pages. * diff --git a/mm/hmm.c b/mm/hmm.c index c62ae414a3a2..4db5dcf110ba 100644 --- a/mm/hmm.c +++ b/mm/hmm.c @@ -153,9 +153,8 @@ static void hmm_release(struct mmu_notifier *mn, struct mm_struct *mm) /* Wake-up everyone waiting on any range. */ mutex_lock(&hmm->lock); - list_for_each_entry(range, &hmm->ranges, list) { + list_for_each_entry(range, &hmm->ranges, list) range->valid = false; - } wake_up_all(&hmm->wq); mutex_unlock(&hmm->lock); @@ -166,9 +165,10 @@ static void hmm_release(struct mmu_notifier *mn, struct mm_struct *mm) list_del_init(&mirror->list); if (mirror->ops->release) { /* - * Drop mirrors_sem so callback can wait on any pending - * work that might itself trigger mmu_notifier callback - * and thus would deadlock with us. + * Drop mirrors_sem so the release callback can wait + * on any pending work that might itself trigger a + * mmu_notifier callback and thus would deadlock with + * us. 
*/ up_write(&hmm->mirrors_sem); mirror->ops->release(mirror); @@ -223,11 +223,8 @@ static int hmm_invalidate_range_start(struct mmu_notifier *mn, int ret; ret = mirror->ops->sync_cpu_device_pagetables(mirror, &update); - if (!update.blockable && ret == -EAGAIN) { - up_read(&hmm->mirrors_sem); - ret = -EAGAIN; - goto out; - } + if (!update.blockable && ret == -EAGAIN) + break; } up_read(&hmm->mirrors_sem); @@ -271,6 +268,7 @@ static const struct mmu_notifier_ops hmm_mmu_notifier_ops = { * * @mirror: new mirror struct to register * @mm: mm to register against + * Return: 0 on success, -ENOMEM if no memory, -EINVAL if invalid arguments * * To start mirroring a process address space, the device driver must register * an HMM mirror struct. @@ -298,7 +296,7 @@ EXPORT_SYMBOL(hmm_mirror_register); /* * hmm_mirror_unregister() - unregister a mirror * - * @mirror: new mirror struct to register + * @mirror: mirror struct to unregister * * Stop mirroring a process address space, and cleanup. */ @@ -372,7 +370,7 @@ static int hmm_pfns_bad(unsigned long addr, * @fault: should we fault or not ? * @write_fault: write fault ? * @walk: mm_walk structure - * Returns: 0 on success, -EBUSY after page fault, or page fault error + * Return: 0 on success, -EBUSY after page fault, or page fault error * * This function will be called whenever pmd_none() or pte_none() returns true, * or whenever there is no page directory covering the virtual address range. @@ -911,6 +909,7 @@ int hmm_range_register(struct hmm_range *range, unsigned page_shift) { unsigned long mask = ((1UL << page_shift) - 1UL); + struct hmm *hmm; range->valid = false; range->hmm = NULL; @@ -924,28 +923,29 @@ int hmm_range_register(struct hmm_range *range, range->start = start; range->end = end; - range->hmm = hmm_get_or_create(mm); - if (!range->hmm) + hmm = hmm_get_or_create(mm); + if (!hmm) return -EFAULT; /* Check if hmm_mm_destroy() was call. */ - if (range->hmm->mm == NULL || range->hmm->dead) { - hmm_put(range->hmm); + if (hmm->mm == NULL || hmm->dead) { + hmm_put(hmm); return -EFAULT; } - /* Initialize range to track CPU page table update */ - mutex_lock(&range->hmm->lock); + /* Initialize range to track CPU page table updates. */ + mutex_lock(&hmm->lock); - list_add_rcu(&range->list, &range->hmm->ranges); + range->hmm = hmm; + list_add_rcu(&range->list, &hmm->ranges); /* * If there are any concurrent notifiers we have to wait for them for * the range to be valid (see hmm_range_wait_until_valid()). */ - if (!range->hmm->notifiers) + if (!hmm->notifiers) range->valid = true; - mutex_unlock(&range->hmm->lock); + mutex_unlock(&hmm->lock); return 0; } @@ -960,17 +960,19 @@ EXPORT_SYMBOL(hmm_range_register); */ void hmm_range_unregister(struct hmm_range *range) { + struct hmm *hmm = range->hmm; + /* Sanity check this really should not happen. 
*/ - if (range->hmm == NULL || range->end <= range->start) + if (hmm == NULL || range->end <= range->start) return; - mutex_lock(&range->hmm->lock); + mutex_lock(&hmm->lock); list_del_rcu(&range->list); - mutex_unlock(&range->hmm->lock); + mutex_unlock(&hmm->lock); /* Drop reference taken by hmm_range_register() */ range->valid = false; - hmm_put(range->hmm); + hmm_put(hmm); range->hmm = NULL; } EXPORT_SYMBOL(hmm_range_unregister); @@ -978,7 +980,7 @@ EXPORT_SYMBOL(hmm_range_unregister); /* * hmm_range_snapshot() - snapshot CPU page table for a range * @range: range - * Returns: -EINVAL if invalid argument, -ENOMEM out of memory, -EPERM invalid + * Return: -EINVAL if invalid argument, -ENOMEM out of memory, -EPERM invalid * permission (for instance asking for write and range is read only), * -EAGAIN if you need to retry, -EFAULT invalid (ie either no valid * vma or it is illegal to access that range), number of valid pages @@ -1061,7 +1063,7 @@ EXPORT_SYMBOL(hmm_range_snapshot); * hmm_range_fault() - try to fault some address in a virtual address range * @range: range being faulted * @block: allow blocking on fault (if true it sleeps and do not drop mmap_sem) - * Returns: number of valid pages in range->pfns[] (from range start + * Return: number of valid pages in range->pfns[] (from range start * address). This may be zero. If the return value is negative, * then one of the following values may be returned: * @@ -1179,7 +1181,7 @@ EXPORT_SYMBOL(hmm_range_fault); * @device: device against to dma map page to * @daddrs: dma address of mapped pages * @block: allow blocking on fault (if true it sleeps and do not drop mmap_sem) - * Returns: number of pages mapped on success, -EAGAIN if mmap_sem have been + * Return: number of pages mapped on success, -EAGAIN if mmap_sem have been * drop and you need to try again, some other error value otherwise * * Note same usage pattern as hmm_range_fault(). @@ -1267,7 +1269,7 @@ EXPORT_SYMBOL(hmm_range_dma_map); * @device: device against which dma map was done * @daddrs: dma address of mapped pages * @dirty: dirty page if it had the write flag set - * Returns: number of page unmapped on success, -EINVAL otherwise + * Return: number of page unmapped on success, -EINVAL otherwise * * Note that caller MUST abide by mmu notifier or use HMM mirror and abide * to the sync_cpu_device_pagetables() callback so that it is safe here to @@ -1390,7 +1392,7 @@ static void hmm_devmem_free(struct page *page, void *data) * @ops: memory event device driver callback (see struct hmm_devmem_ops) * @device: device struct to bind the resource too * @size: size in bytes of the device memory to add - * Returns: pointer to new hmm_devmem struct ERR_PTR otherwise + * Return: pointer to new hmm_devmem struct ERR_PTR otherwise * * This function first finds an empty range of physical address big enough to * contain the new resource, and then hotplugs it as ZONE_DEVICE memory, which -- 2.20.1
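The flag and value tables described in the comments above are easiest to
see in a concrete setup. A hypothetical driver whose device PTEs keep
valid/write/device-private in the top bits (the bit positions here are
arbitrary, as in the documentation's examples) would initialize its range
like this:

	static const uint64_t my_pfn_flags[HMM_PFN_FLAG_MAX] = {
		[HMM_PFN_VALID]		 = 1UL << 63,
		[HMM_PFN_WRITE]		 = 1UL << 62,
		[HMM_PFN_DEVICE_PRIVATE] = 1UL << 61,
	};

	static const uint64_t my_pfn_values[HMM_PFN_VALUE_MAX] = {
		[HMM_PFN_ERROR]	  = 1UL << 0,
		[HMM_PFN_NONE]	  = 0,
		[HMM_PFN_SPECIAL] = 1UL << 1,
	};

	range.flags = my_pfn_flags;
	range.values = my_pfn_values;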
Christoph Hellwig
2019-Jul-01 06:20 UTC
[Nouveau] [PATCH 04/22] mm/hmm: support automatic NUMA balancing
From: Philip Yang <Philip.Yang at amd.com>

While a page is being migrated by NUMA balancing, HMM fails to detect this
condition and still returns the old page. The application will use the new
page after the migration, but the driver passes the old page's physical
address to the GPU, which crashes the application later.

Use pte_protnone(pte) to detect this condition, so that hmm_vma_do_fault()
will fault in a new page instead.

Signed-off-by: Philip Yang <Philip.Yang at amd.com>
Signed-off-by: Felix Kuehling <Felix.Kuehling at amd.com>
Reviewed-by: Jérôme Glisse <jglisse at redhat.com>
Signed-off-by: Jason Gunthorpe <jgg at mellanox.com>
---
 mm/hmm.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/hmm.c b/mm/hmm.c
index 4db5dcf110ba..dce4e70e648a 100644
--- a/mm/hmm.c
+++ b/mm/hmm.c
@@ -548,7 +548,7 @@ static int hmm_vma_handle_pmd(struct mm_walk *walk,
 
 static inline uint64_t pte_to_hmm_pfn_flags(struct hmm_range *range, pte_t pte)
 {
-	if (pte_none(pte) || !pte_present(pte))
+	if (pte_none(pte) || !pte_present(pte) || pte_protnone(pte))
 		return 0;
 	return pte_write(pte) ? range->flags[HMM_PFN_VALID] |
 				range->flags[HMM_PFN_WRITE] :
-- 
2.20.1
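A simplified sketch of the consumer, hmm_vma_handle_pte(), shows why
returning 0 here is enough (abbreviated and quoted from memory, so treat
the exact form as approximate):

	/* A zero flags word from pte_to_hmm_pfn_flags() means "no
	 * permissions", so HMM requests a fault instead of handing out
	 * the stale pfn of a page NUMA balancing is migrating away. */
	cpu_flags = pte_to_hmm_pfn_flags(range, pte);
	hmm_pte_need_fault(hmm_vma_walk, orig_pfn, cpu_flags,
			   &fault, &write_fault);
	if (fault || write_fault)
		return hmm_vma_walk_hole_(addr, end, fault,
					  write_fault, walk);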
Christoph Hellwig
2019-Jul-01 06:20 UTC
[Nouveau] [PATCH 05/22] mm/hmm: Only set FAULT_FLAG_ALLOW_RETRY for non-blocking
From: "Kuehling, Felix" <Felix.Kuehling at amd.com>

Don't set this flag by default in hmm_vma_do_fault. It is set
conditionally just a few lines below. Setting it unconditionally can lead
to handle_mm_fault doing a non-blocking fault, returning -EBUSY and
unlocking mmap_sem unexpectedly.

Signed-off-by: Felix Kuehling <Felix.Kuehling at amd.com>
Reviewed-by: Jérôme Glisse <jglisse at redhat.com>
Signed-off-by: Jason Gunthorpe <jgg at mellanox.com>
---
 mm/hmm.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/hmm.c b/mm/hmm.c
index dce4e70e648a..826816ab2377 100644
--- a/mm/hmm.c
+++ b/mm/hmm.c
@@ -328,7 +328,7 @@ struct hmm_vma_walk {
 static int hmm_vma_do_fault(struct mm_walk *walk, unsigned long addr,
 			    bool write_fault, uint64_t *pfn)
 {
-	unsigned int flags = FAULT_FLAG_ALLOW_RETRY | FAULT_FLAG_REMOTE;
+	unsigned int flags = FAULT_FLAG_REMOTE;
 	struct hmm_vma_walk *hmm_vma_walk = walk->private;
 	struct hmm_range *range = hmm_vma_walk->range;
 	struct vm_area_struct *vma = walk->vma;
-- 
2.20.1
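For reference, the conditional setting the commit message refers to sits a
few lines further down in the same function; quoting from memory, so the
exact form is approximate:

	/* ALLOW_RETRY lets handle_mm_fault() drop mmap_sem, so it is
	 * only set for non-blocking walks that can tolerate that. */
	flags |= hmm_vma_walk->block ? 0 : FAULT_FLAG_ALLOW_RETRY;
	flags |= write_fault ? FAULT_FLAG_WRITE : 0;
	ret = handle_mm_fault(vma, addr, flags);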
Christoph Hellwig
2019-Jul-01 06:20 UTC
[Nouveau] [PATCH 06/22] mm/hmm: fix use after free with struct hmm in the mmu notifiers
From: Jason Gunthorpe <jgg at mellanox.com>

mmu_notifier_unregister_no_release() is not a fence and the mmu_notifier
system will continue to reference hmm->mn until the srcu grace period
expires. This results in use-after-free races like this:

         CPU0                                   CPU1
                                        __mmu_notifier_invalidate_range_start()
                                          srcu_read_lock
                                          hlist_for_each ()
                                            // mn == hmm->mn
 hmm_mirror_unregister()
   hmm_put()
     hmm_free()
       mmu_notifier_unregister_no_release()
         hlist_del_init_rcu(hmm-mn->list)
                                          mn->ops->invalidate_range_start(mn, range);
                                            mm_get_hmm()
       mm->hmm = NULL;
       kfree(hmm)
                                              mutex_lock(&hmm->lock);

Use SRCU to kfree the hmm memory so that the notifiers can rely on hmm
existing. Get the now-safe hmm struct through container_of and directly
check kref_get_unless_zero to lock it against free.

Signed-off-by: Jason Gunthorpe <jgg at mellanox.com>
Reviewed-by: Ira Weiny <ira.weiny at intel.com>
Reviewed-by: John Hubbard <jhubbard at nvidia.com>
Reviewed-by: Ralph Campbell <rcampbell at nvidia.com>
Reviewed-by: Christoph Hellwig <hch at lst.de>
Tested-by: Philip Yang <Philip.Yang at amd.com>
---
 include/linux/hmm.h |  1 +
 mm/hmm.c            | 23 +++++++++++++++++------
 2 files changed, 18 insertions(+), 6 deletions(-)

diff --git a/include/linux/hmm.h b/include/linux/hmm.h
index 7007123842ba..cb01cf1fa3c0 100644
--- a/include/linux/hmm.h
+++ b/include/linux/hmm.h
@@ -93,6 +93,7 @@ struct hmm {
 	struct mmu_notifier	mmu_notifier;
 	struct rw_semaphore	mirrors_sem;
 	wait_queue_head_t	wq;
+	struct rcu_head		rcu;
 	long			notifiers;
 	bool			dead;
 };
diff --git a/mm/hmm.c b/mm/hmm.c
index 826816ab2377..f6956d78e3cb 100644
--- a/mm/hmm.c
+++ b/mm/hmm.c
@@ -104,6 +104,11 @@ static struct hmm *hmm_get_or_create(struct mm_struct *mm)
 	return NULL;
 }
 
+static void hmm_free_rcu(struct rcu_head *rcu)
+{
+	kfree(container_of(rcu, struct hmm, rcu));
+}
+
 static void hmm_free(struct kref *kref)
 {
 	struct hmm *hmm = container_of(kref, struct hmm, kref);
@@ -116,7 +121,7 @@ static void hmm_free(struct kref *kref)
 		mm->hmm = NULL;
 	spin_unlock(&mm->page_table_lock);
 
-	kfree(hmm);
+	mmu_notifier_call_srcu(&hmm->rcu, hmm_free_rcu);
 }
 
 static inline void hmm_put(struct hmm *hmm)
@@ -144,10 +149,14 @@ void hmm_mm_destroy(struct mm_struct *mm)
 
 static void hmm_release(struct mmu_notifier *mn, struct mm_struct *mm)
 {
-	struct hmm *hmm = mm_get_hmm(mm);
+	struct hmm *hmm = container_of(mn, struct hmm, mmu_notifier);
 	struct hmm_mirror *mirror;
 	struct hmm_range *range;
 
+	/* Bail out if hmm is in the process of being freed */
+	if (!kref_get_unless_zero(&hmm->kref))
+		return;
+
 	/* Report this HMM as dying. */
 	hmm->dead = true;
 
@@ -185,13 +194,14 @@
 static int hmm_invalidate_range_start(struct mmu_notifier *mn,
 			const struct mmu_notifier_range *nrange)
 {
-	struct hmm *hmm = mm_get_hmm(nrange->mm);
+	struct hmm *hmm = container_of(mn, struct hmm, mmu_notifier);
 	struct hmm_mirror *mirror;
 	struct hmm_update update;
 	struct hmm_range *range;
 	int ret = 0;
 
-	VM_BUG_ON(!hmm);
+	if (!kref_get_unless_zero(&hmm->kref))
+		return 0;
 
 	update.start = nrange->start;
 	update.end = nrange->end;
@@ -236,9 +246,10 @@
 static void hmm_invalidate_range_end(struct mmu_notifier *mn,
 			const struct mmu_notifier_range *nrange)
 {
-	struct hmm *hmm = mm_get_hmm(nrange->mm);
+	struct hmm *hmm = container_of(mn, struct hmm, mmu_notifier);
 
-	VM_BUG_ON(!hmm);
+	if (!kref_get_unless_zero(&hmm->kref))
+		return;
 
 	mutex_lock(&hmm->lock);
 	hmm->notifiers--;
-- 
2.20.1
Christoph Hellwig
2019-Jul-01 06:20 UTC
[Nouveau] [PATCH 07/22] mm/hmm: Use hmm_mirror not mm as an argument for hmm_range_register
From: Jason Gunthorpe <jgg at mellanox.com> Ralph observes that hmm_range_register() can only be called by a driver while a mirror is registered. Make this clear in the API by passing in the mirror structure as a parameter. This also simplifies understanding the lifetime model for struct hmm, as the hmm pointer must be valid as part of a registered mirror so all we need in hmm_register_range() is a simple kref_get. Suggested-by: Ralph Campbell <rcampbell at nvidia.com> Signed-off-by: Jason Gunthorpe <jgg at mellanox.com> Reviewed-by: John Hubbard <jhubbard at nvidia.com> Reviewed-by: Ralph Campbell <rcampbell at nvidia.com> Reviewed-by: Ira Weiny <ira.weiny at intel.com> Reviewed-by: Christoph Hellwig <hch at lst.de> Tested-by: Philip Yang <Philip.Yang at amd.com> --- drivers/gpu/drm/nouveau/nouveau_svm.c | 2 +- include/linux/hmm.h | 7 ++++--- mm/hmm.c | 13 ++++--------- 3 files changed, 9 insertions(+), 13 deletions(-) diff --git a/drivers/gpu/drm/nouveau/nouveau_svm.c b/drivers/gpu/drm/nouveau/nouveau_svm.c index 93ed43c413f0..8c92374afcf2 100644 --- a/drivers/gpu/drm/nouveau/nouveau_svm.c +++ b/drivers/gpu/drm/nouveau/nouveau_svm.c @@ -649,7 +649,7 @@ nouveau_svm_fault(struct nvif_notify *notify) range.values = nouveau_svm_pfn_values; range.pfn_shift = NVIF_VMM_PFNMAP_V0_ADDR_SHIFT; again: - ret = hmm_vma_fault(&range, true); + ret = hmm_vma_fault(&svmm->mirror, &range, true); if (ret == 0) { mutex_lock(&svmm->mutex); if (!hmm_vma_range_done(&range)) { diff --git a/include/linux/hmm.h b/include/linux/hmm.h index cb01cf1fa3c0..1fba6979adf4 100644 --- a/include/linux/hmm.h +++ b/include/linux/hmm.h @@ -496,7 +496,7 @@ static inline bool hmm_mirror_mm_is_alive(struct hmm_mirror *mirror) * Please see Documentation/vm/hmm.rst for how to use the range API. */ int hmm_range_register(struct hmm_range *range, - struct mm_struct *mm, + struct hmm_mirror *mirror, unsigned long start, unsigned long end, unsigned page_shift); @@ -532,7 +532,8 @@ static inline bool hmm_vma_range_done(struct hmm_range *range) } /* This is a temporary helper to avoid merge conflict between trees. */ -static inline int hmm_vma_fault(struct hmm_range *range, bool block) +static inline int hmm_vma_fault(struct hmm_mirror *mirror, + struct hmm_range *range, bool block) { long ret; @@ -545,7 +546,7 @@ static inline int hmm_vma_fault(struct hmm_range *range, bool block) range->default_flags = 0; range->pfn_flags_mask = -1UL; - ret = hmm_range_register(range, range->vma->vm_mm, + ret = hmm_range_register(range, mirror, range->start, range->end, PAGE_SHIFT); if (ret) diff --git a/mm/hmm.c b/mm/hmm.c index f6956d78e3cb..22a97ada108b 100644 --- a/mm/hmm.c +++ b/mm/hmm.c @@ -914,13 +914,13 @@ static void hmm_pfns_clear(struct hmm_range *range, * Track updates to the CPU page table see include/linux/hmm.h */ int hmm_range_register(struct hmm_range *range, - struct mm_struct *mm, + struct hmm_mirror *mirror, unsigned long start, unsigned long end, unsigned page_shift) { unsigned long mask = ((1UL << page_shift) - 1UL); - struct hmm *hmm; + struct hmm *hmm = mirror->hmm; range->valid = false; range->hmm = NULL; @@ -934,20 +934,15 @@ int hmm_range_register(struct hmm_range *range, range->start = start; range->end = end; - hmm = hmm_get_or_create(mm); - if (!hmm) - return -EFAULT; - /* Check if hmm_mm_destroy() was call. */ - if (hmm->mm == NULL || hmm->dead) { - hmm_put(hmm); + if (hmm->mm == NULL || hmm->dead) return -EFAULT; - } /* Initialize range to track CPU page table updates. 
*/ mutex_lock(&hmm->lock); range->hmm = hmm; + kref_get(&hmm->kref); list_add_rcu(&range->list, &hmm->ranges); /* -- 2.20.1
Christoph Hellwig
2019-Jul-01 06:20 UTC
[Nouveau] [PATCH 08/22] mm/hmm: Hold a mmgrab from hmm to mm
From: Jason Gunthorpe <jgg at mellanox.com>

So long as a struct hmm pointer exists, so should the struct mm it is
linked to. Hold the mmgrab() as soon as a hmm is created, and mmdrop() it
once the hmm refcount goes to zero.

Since mmdrop() (i.e., a 0 kref on struct mm) is now impossible with a
!NULL mm->hmm, delete hmm_mm_destroy().

Signed-off-by: Jason Gunthorpe <jgg at mellanox.com>
Reviewed-by: Jérôme Glisse <jglisse at redhat.com>
Reviewed-by: John Hubbard <jhubbard at nvidia.com>
Reviewed-by: Ralph Campbell <rcampbell at nvidia.com>
Reviewed-by: Ira Weiny <ira.weiny at intel.com>
Reviewed-by: Christoph Hellwig <hch at lst.de>
Tested-by: Philip Yang <Philip.Yang at amd.com>
---
 include/linux/hmm.h |  3 ---
 kernel/fork.c       |  1 -
 mm/hmm.c            | 22 ++++------------------
 3 files changed, 4 insertions(+), 22 deletions(-)

diff --git a/include/linux/hmm.h b/include/linux/hmm.h
index 1fba6979adf4..1d97b6d62c5b 100644
--- a/include/linux/hmm.h
+++ b/include/linux/hmm.h
@@ -577,14 +577,11 @@ static inline int hmm_vma_fault(struct hmm_mirror *mirror,
 }
 
 /* Below are for HMM internal use only! Not to be used by device driver! */
-void hmm_mm_destroy(struct mm_struct *mm);
-
 static inline void hmm_mm_init(struct mm_struct *mm)
 {
 	mm->hmm = NULL;
 }
 #else /* IS_ENABLED(CONFIG_HMM_MIRROR) */
-static inline void hmm_mm_destroy(struct mm_struct *mm) {}
 static inline void hmm_mm_init(struct mm_struct *mm) {}
 #endif /* IS_ENABLED(CONFIG_HMM_MIRROR) */
 
diff --git a/kernel/fork.c b/kernel/fork.c
index 75675b9bf6df..c704c3cedee7 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -673,7 +673,6 @@ void __mmdrop(struct mm_struct *mm)
 	WARN_ON_ONCE(mm == current->active_mm);
 	mm_free_pgd(mm);
 	destroy_context(mm);
-	hmm_mm_destroy(mm);
 	mmu_notifier_mm_destroy(mm);
 	check_mm(mm);
 	put_user_ns(mm->user_ns);
diff --git a/mm/hmm.c b/mm/hmm.c
index 22a97ada108b..080b17a2e87e 100644
--- a/mm/hmm.c
+++ b/mm/hmm.c
@@ -20,6 +20,7 @@
 #include <linux/swapops.h>
 #include <linux/hugetlb.h>
 #include <linux/memremap.h>
+#include <linux/sched/mm.h>
 #include <linux/jump_label.h>
 #include <linux/dma-mapping.h>
 #include <linux/mmu_notifier.h>
@@ -73,6 +74,7 @@ static struct hmm *hmm_get_or_create(struct mm_struct *mm)
 	hmm->notifiers = 0;
 	hmm->dead = false;
 	hmm->mm = mm;
+	mmgrab(hmm->mm);
 
 	spin_lock(&mm->page_table_lock);
 	if (!mm->hmm)
@@ -100,6 +102,7 @@ static struct hmm *hmm_get_or_create(struct mm_struct *mm)
 		mm->hmm = NULL;
 	spin_unlock(&mm->page_table_lock);
 error:
+	mmdrop(hmm->mm);
 	kfree(hmm);
 	return NULL;
 }
@@ -121,6 +124,7 @@ static void hmm_free(struct kref *kref)
 		mm->hmm = NULL;
 	spin_unlock(&mm->page_table_lock);
 
+	mmdrop(hmm->mm);
 	mmu_notifier_call_srcu(&hmm->rcu, hmm_free_rcu);
 }
 
@@ -129,24 +133,6 @@ static inline void hmm_put(struct hmm *hmm)
 	kref_put(&hmm->kref, hmm_free);
 }
 
-void hmm_mm_destroy(struct mm_struct *mm)
-{
-	struct hmm *hmm;
-
-	spin_lock(&mm->page_table_lock);
-	hmm = mm_get_hmm(mm);
-	mm->hmm = NULL;
-	if (hmm) {
-		hmm->mm = NULL;
-		hmm->dead = true;
-		spin_unlock(&mm->page_table_lock);
-		hmm_put(hmm);
-		return;
-	}
-
-	spin_unlock(&mm->page_table_lock);
-}
-
 static void hmm_release(struct mmu_notifier *mn, struct mm_struct *mm)
 {
 	struct hmm *hmm = container_of(mn, struct hmm, mmu_notifier);
-- 
2.20.1
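A note on the two mm reference counts involved here: mmgrab()/mmdrop() pin
mm_count, which only keeps the struct mm_struct allocation itself alive,
while mmget()/mmput() pin mm_users, which also keeps the address space
(page tables, vmas) alive. The pairing this patch introduces is therefore
just:

	mmgrab(hmm->mm);	/* at hmm creation: hmm->mm can never dangle */
	...
	mmdrop(hmm->mm);	/* in hmm_free() once the hmm refcount dies */

so a struct hmm can outlive exit_mmap() without pinning the whole address
space; patch 12 below adds the mmget()/mmput() pairing for the time a
range is actually registered.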
Christoph Hellwig
2019-Jul-01 06:20 UTC
[Nouveau] [PATCH 09/22] mm/hmm: Simplify hmm_get_or_create and make it reliable
From: Jason Gunthorpe <jgg at mellanox.com> As coded this function can false-fail in various racy situations. Make it reliable and simpler by running under the write side of the mmap_sem and avoiding the false-failing compare/exchange pattern. Due to the mmap_sem this no longer has to avoid racing with a 2nd parallel hmm_get_or_create(). Unfortunately this still has to use the page_table_lock as the non-sleeping lock protecting mm->hmm, since the contexts where we free the hmm are incompatible with mmap_sem. Signed-off-by: Jason Gunthorpe <jgg at mellanox.com> Reviewed-by: John Hubbard <jhubbard at nvidia.com> Reviewed-by: Ralph Campbell <rcampbell at nvidia.com> Reviewed-by: Ira Weiny <ira.weiny at intel.com> Reviewed-by: Christoph Hellwig <hch at lst.de> Tested-by: Philip Yang <Philip.Yang at amd.com> --- mm/hmm.c | 77 ++++++++++++++++++++++---------------------------------- 1 file changed, 30 insertions(+), 47 deletions(-) diff --git a/mm/hmm.c b/mm/hmm.c index 080b17a2e87e..0423f4ca3a7e 100644 --- a/mm/hmm.c +++ b/mm/hmm.c @@ -31,16 +31,6 @@ #if IS_ENABLED(CONFIG_HMM_MIRROR) static const struct mmu_notifier_ops hmm_mmu_notifier_ops; -static inline struct hmm *mm_get_hmm(struct mm_struct *mm) -{ - struct hmm *hmm = READ_ONCE(mm->hmm); - - if (hmm && kref_get_unless_zero(&hmm->kref)) - return hmm; - - return NULL; -} - /** * hmm_get_or_create - register HMM against an mm (HMM internal) * @@ -55,11 +45,16 @@ static inline struct hmm *mm_get_hmm(struct mm_struct *mm) */ static struct hmm *hmm_get_or_create(struct mm_struct *mm) { - struct hmm *hmm = mm_get_hmm(mm); - bool cleanup = false; + struct hmm *hmm; + + lockdep_assert_held_exclusive(&mm->mmap_sem); - if (hmm) - return hmm; + /* Abuse the page_table_lock to also protect mm->hmm. */ + spin_lock(&mm->page_table_lock); + hmm = mm->hmm; + if (mm->hmm && kref_get_unless_zero(&mm->hmm->kref)) + goto out_unlock; + spin_unlock(&mm->page_table_lock); hmm = kmalloc(sizeof(*hmm), GFP_KERNEL); if (!hmm) @@ -74,57 +69,45 @@ static struct hmm *hmm_get_or_create(struct mm_struct *mm) hmm->notifiers = 0; hmm->dead = false; hmm->mm = mm; - mmgrab(hmm->mm); - spin_lock(&mm->page_table_lock); - if (!mm->hmm) - mm->hmm = hmm; - else - cleanup = true; - spin_unlock(&mm->page_table_lock); + hmm->mmu_notifier.ops = &hmm_mmu_notifier_ops; + if (__mmu_notifier_register(&hmm->mmu_notifier, mm)) { + kfree(hmm); + return NULL; + } - if (cleanup) - goto error; + mmgrab(hmm->mm); /* - * We should only get here if hold the mmap_sem in write mode ie on - * registration of first mirror through hmm_mirror_register() + * We hold the exclusive mmap_sem here so we know that mm->hmm is + * still NULL or 0 kref, and is safe to update. 
*/ - hmm->mmu_notifier.ops = &hmm_mmu_notifier_ops; - if (__mmu_notifier_register(&hmm->mmu_notifier, mm)) - goto error_mm; - - return hmm; - -error_mm: spin_lock(&mm->page_table_lock); - if (mm->hmm == hmm) - mm->hmm = NULL; + mm->hmm = hmm; + +out_unlock: spin_unlock(&mm->page_table_lock); -error: - mmdrop(hmm->mm); - kfree(hmm); - return NULL; + return hmm; } static void hmm_free_rcu(struct rcu_head *rcu) { - kfree(container_of(rcu, struct hmm, rcu)); + struct hmm *hmm = container_of(rcu, struct hmm, rcu); + + mmdrop(hmm->mm); + kfree(hmm); } static void hmm_free(struct kref *kref) { struct hmm *hmm = container_of(kref, struct hmm, kref); - struct mm_struct *mm = hmm->mm; - mmu_notifier_unregister_no_release(&hmm->mmu_notifier, mm); + spin_lock(&hmm->mm->page_table_lock); + if (hmm->mm->hmm == hmm) + hmm->mm->hmm = NULL; + spin_unlock(&hmm->mm->page_table_lock); - spin_lock(&mm->page_table_lock); - if (mm->hmm == hmm) - mm->hmm = NULL; - spin_unlock(&mm->page_table_lock); - - mmdrop(hmm->mm); + mmu_notifier_unregister_no_release(&hmm->mmu_notifier, hmm->mm); mmu_notifier_call_srcu(&hmm->rcu, hmm_free_rcu); } -- 2.20.1
Christoph Hellwig
2019-Jul-01 06:20 UTC
[Nouveau] [PATCH 10/22] mm/hmm: Remove duplicate condition test before wait_event_timeout
From: Jason Gunthorpe <jgg at mellanox.com>

The wait_event_timeout macro already tests the condition as its first
action, so there is no reason to open code another version of this, all
that does is skip the might_sleep() debugging in common cases, which is
not helpful.

Further, based on prior patches, we can now simplify the required
condition test:
 - If range is valid memory then so is range->hmm
 - If hmm_release() has run then range->valid is set to false at the
   same time as dead, so no reason to check both.
 - A valid hmm has a valid hmm->mm.

Allowing the return value of wait_event_timeout() (along with its
internal barriers) to compute the result of the function.

Signed-off-by: Jason Gunthorpe <jgg at mellanox.com>
Reviewed-by: Ralph Campbell <rcampbell at nvidia.com>
Reviewed-by: John Hubbard <jhubbard at nvidia.com>
Reviewed-by: Ira Weiny <ira.weiny at intel.com>
Reviewed-by: Christoph Hellwig <hch at lst.de>
Tested-by: Philip Yang <Philip.Yang at amd.com>
---
 include/linux/hmm.h | 13 ++-----------
 1 file changed, 2 insertions(+), 11 deletions(-)

diff --git a/include/linux/hmm.h b/include/linux/hmm.h
index 1d97b6d62c5b..26e7c477490c 100644
--- a/include/linux/hmm.h
+++ b/include/linux/hmm.h
@@ -209,17 +209,8 @@ static inline unsigned long hmm_range_page_size(const struct hmm_range *range)
 static inline bool hmm_range_wait_until_valid(struct hmm_range *range,
 					      unsigned long timeout)
 {
-	/* Check if mm is dead ? */
-	if (range->hmm == NULL || range->hmm->dead || range->hmm->mm == NULL) {
-		range->valid = false;
-		return false;
-	}
-	if (range->valid)
-		return true;
-	wait_event_timeout(range->hmm->wq, range->valid || range->hmm->dead,
-			   msecs_to_jiffies(timeout));
-	/* Return current valid status just in case we get lucky */
-	return range->valid;
+	return wait_event_timeout(range->hmm->wq, range->valid,
+				  msecs_to_jiffies(timeout)) != 0;
 }
 
 /*
-- 
2.20.1
Christoph Hellwig
2019-Jul-01 06:20 UTC
[Nouveau] [PATCH 11/22] mm/hmm: Do not use list*_rcu() for hmm->ranges
From: Jason Gunthorpe <jgg at mellanox.com> This list is always read and written while holding hmm->lock so there is no need for the confusing _rcu annotations. Signed-off-by: Jason Gunthorpe <jgg at mellanox.com> Reviewed-by: Jérôme Glisse <jglisse at redhat.com> Reviewed-by: John Hubbard <jhubbard at nvidia.com> Acked-by: Souptick Joarder <jrdr.linux at gmail.com> Reviewed-by: Ralph Campbell <rcampbell at nvidia.com> Reviewed-by: Ira Weiny <iweiny at intel.com> Reviewed-by: Christoph Hellwig <hch at lst.de> Tested-by: Philip Yang <Philip.Yang at amd.com> --- mm/hmm.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/mm/hmm.c b/mm/hmm.c index 0423f4ca3a7e..73c8af4827fe 100644 --- a/mm/hmm.c +++ b/mm/hmm.c @@ -912,7 +912,7 @@ int hmm_range_register(struct hmm_range *range, range->hmm = hmm; kref_get(&hmm->kref); - list_add_rcu(&range->list, &hmm->ranges); + list_add(&range->list, &hmm->ranges); /* * If there are any concurrent notifiers we have to wait for them for @@ -942,7 +942,7 @@ void hmm_range_unregister(struct hmm_range *range) return; mutex_lock(&hmm->lock); - list_del_rcu(&range->list); + list_del(&range->list); mutex_unlock(&hmm->lock); /* Drop reference taken by hmm_range_register() */ -- 2.20.1
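[The rule of thumb being applied here: the _rcu list variants only pay off when there are lockless readers; if every reader and writer holds the same lock, the plain primitives are correct and clearer. A rough sketch of the two regimes, with a hypothetical item list, illustration only:]

#include <linux/list.h>
#include <linux/mutex.h>
#include <linux/rculist.h>

struct item {
	struct list_head list;
};

static LIST_HEAD(head);
static DEFINE_MUTEX(lock);

/*
 * Fully locked list: every reader and writer takes the mutex, so the
 * plain primitives are correct and the _rcu variants would only
 * mislead, exactly the situation in hmm->ranges.
 */
static void add_item_locked(struct item *item)
{
	mutex_lock(&lock);
	list_add(&item->list, &head);
	mutex_unlock(&lock);
}

/*
 * The _rcu variants are for this shape instead: lockless readers under
 * rcu_read_lock() running concurrently with a locked writer that uses
 * list_add_rcu()/list_del_rcu().
 */
static void walk_items_rcu(void)
{
	struct item *item;

	rcu_read_lock();
	list_for_each_entry_rcu(item, &head, list)
		;	/* read-only access to *item goes here */
	rcu_read_unlock();
}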
Christoph Hellwig
2019-Jul-01 06:20 UTC
[Nouveau] [PATCH 12/22] mm/hmm: Hold on to the mmget for the lifetime of the range
From: Jason Gunthorpe <jgg at mellanox.com>

Range functions like hmm_range_snapshot() and hmm_range_fault() call
find_vma, which requires holding the mmget() and the mmap_sem for the
mm.

Make this simpler for the callers by holding the mmget() inside the
range for the lifetime of the range. Other functions that accept a
range should only be called if the range is registered.

This has the side effect of directly preventing hmm_release() from
happening while a range is registered. That means hmm->dead can never
become true during the lifetime of the range, so remove dead and
hmm_mirror_mm_is_alive() entirely.

Signed-off-by: Jason Gunthorpe <jgg at mellanox.com>
Reviewed-by: John Hubbard <jhubbard at nvidia.com>
Reviewed-by: Ralph Campbell <rcampbell at nvidia.com>
Reviewed-by: Christoph Hellwig <hch at lst.de>
Tested-by: Philip Yang <Philip.Yang at amd.com>
---
 include/linux/hmm.h | 26 --------------------------
 mm/hmm.c            | 32 +++++++++++---------------------
 2 files changed, 11 insertions(+), 47 deletions(-)

diff --git a/include/linux/hmm.h b/include/linux/hmm.h
index 26e7c477490c..bf013e965257 100644
--- a/include/linux/hmm.h
+++ b/include/linux/hmm.h
@@ -82,7 +82,6 @@
  * @mirrors_sem: read/write semaphore protecting the mirrors list
  * @wq: wait queue for user waiting on a range invalidation
  * @notifiers: count of active mmu notifiers
- * @dead: is the mm dead ?
  */
 struct hmm {
 	struct mm_struct *mm;
@@ -95,7 +94,6 @@ struct hmm {
 	wait_queue_head_t wq;
 	struct rcu_head rcu;
 	long notifiers;
-	bool dead;
 };
 
 /*
@@ -459,30 +457,6 @@ struct hmm_mirror {
 int hmm_mirror_register(struct hmm_mirror *mirror, struct mm_struct *mm);
 void hmm_mirror_unregister(struct hmm_mirror *mirror);
 
-/*
- * hmm_mirror_mm_is_alive() - test if mm is still alive
- * @mirror: the HMM mm mirror for which we want to lock the mmap_sem
- * Return: false if the mm is dead, true otherwise
- *
- * This is an optimization, it will not always accurately return false if the
- * mm is dead; i.e., there can be false negatives (process is being killed but
- * HMM is not yet informed of that). It is only intended to be used to optimize
- * out cases where the driver is about to do something time consuming and it
- * would be better to skip it if the mm is dead.
- */
-static inline bool hmm_mirror_mm_is_alive(struct hmm_mirror *mirror)
-{
-	struct mm_struct *mm;
-
-	if (!mirror || !mirror->hmm)
-		return false;
-	mm = READ_ONCE(mirror->hmm->mm);
-	if (mirror->hmm->dead || !mm)
-		return false;
-
-	return true;
-}
-
 /*
  * Please see Documentation/vm/hmm.rst for how to use the range API.
  */
diff --git a/mm/hmm.c b/mm/hmm.c
index 73c8af4827fe..1eddda45cefa 100644
--- a/mm/hmm.c
+++ b/mm/hmm.c
@@ -67,7 +67,6 @@ static struct hmm *hmm_get_or_create(struct mm_struct *mm)
 	mutex_init(&hmm->lock);
 	kref_init(&hmm->kref);
 	hmm->notifiers = 0;
-	hmm->dead = false;
 	hmm->mm = mm;
 
 	hmm->mmu_notifier.ops = &hmm_mmu_notifier_ops;
@@ -120,21 +119,16 @@ static void hmm_release(struct mmu_notifier *mn, struct mm_struct *mm)
 {
 	struct hmm *hmm = container_of(mn, struct hmm, mmu_notifier);
 	struct hmm_mirror *mirror;
-	struct hmm_range *range;
 
 	/* Bail out if hmm is in the process of being freed */
 	if (!kref_get_unless_zero(&hmm->kref))
 		return;
 
-	/* Report this HMM as dying. */
-	hmm->dead = true;
-
-	/* Wake-up everyone waiting on any range. 
*/ - mutex_lock(&hmm->lock); - list_for_each_entry(range, &hmm->ranges, list) - range->valid = false; - wake_up_all(&hmm->wq); - mutex_unlock(&hmm->lock); + /* + * Since hmm_range_register() holds the mmget() lock hmm_release() is + * prevented as long as a range exists. + */ + WARN_ON(!list_empty_careful(&hmm->ranges)); down_write(&hmm->mirrors_sem); mirror = list_first_entry_or_null(&hmm->mirrors, struct hmm_mirror, @@ -903,8 +897,8 @@ int hmm_range_register(struct hmm_range *range, range->start = start; range->end = end; - /* Check if hmm_mm_destroy() was call. */ - if (hmm->mm == NULL || hmm->dead) + /* Prevent hmm_release() from running while the range is valid */ + if (!mmget_not_zero(hmm->mm)) return -EFAULT; /* Initialize range to track CPU page table updates. */ @@ -942,11 +936,12 @@ void hmm_range_unregister(struct hmm_range *range) return; mutex_lock(&hmm->lock); - list_del(&range->list); + list_del_init(&range->list); mutex_unlock(&hmm->lock); /* Drop reference taken by hmm_range_register() */ range->valid = false; + mmput(hmm->mm); hmm_put(hmm); range->hmm = NULL; } @@ -974,10 +969,7 @@ long hmm_range_snapshot(struct hmm_range *range) struct vm_area_struct *vma; struct mm_walk mm_walk; - /* Check if hmm_mm_destroy() was call. */ - if (hmm->mm == NULL || hmm->dead) - return -EFAULT; - + lockdep_assert_held(&hmm->mm->mmap_sem); do { /* If range is no longer valid force retry. */ if (!range->valid) @@ -1072,9 +1064,7 @@ long hmm_range_fault(struct hmm_range *range, bool block) struct mm_walk mm_walk; int ret; - /* Check if hmm_mm_destroy() was call. */ - if (hmm->mm == NULL || hmm->dead) - return -EFAULT; + lockdep_assert_held(&hmm->mm->mmap_sem); do { /* If range is no longer valid force retry. */ -- 2.20.1
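[The mmget_not_zero()/mmput() pairing used here is the standard way to pin an address space across an operation. A minimal sketch of the pattern outside of hmm; work_on_mm is a hypothetical function, illustration only:]

#include <linux/mm.h>
#include <linux/sched/mm.h>

static int work_on_mm(struct mm_struct *mm)
{
	/* Fails once the last user has called mmput(): the mm is defunct. */
	if (!mmget_not_zero(mm))
		return -EFAULT;

	down_read(&mm->mmap_sem);
	/* ... find_vma() and page table walks are safe to call here ... */
	up_read(&mm->mmap_sem);

	/* Drop the pin; exit_mmap() may now run. */
	mmput(mm);
	return 0;
}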
Christoph Hellwig
2019-Jul-01 06:20 UTC
[Nouveau] [PATCH 13/22] mm/hmm: Use lockdep instead of comments
From: Jason Gunthorpe <jgg at mellanox.com> So we can check locking at runtime. Signed-off-by: Jason Gunthorpe <jgg at mellanox.com> Reviewed-by: Jérôme Glisse <jglisse at redhat.com> Reviewed-by: John Hubbard <jhubbard at nvidia.com> Reviewed-by: Ralph Campbell <rcampbell at nvidia.com> Acked-by: Souptick Joarder <jrdr.linux at gmail.com> Reviewed-by: Christoph Hellwig <hch at lst.de> Tested-by: Philip Yang <Philip.Yang at amd.com> --- mm/hmm.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/mm/hmm.c b/mm/hmm.c index 1eddda45cefa..6f5dc6d568fe 100644 --- a/mm/hmm.c +++ b/mm/hmm.c @@ -246,11 +246,11 @@ static const struct mmu_notifier_ops hmm_mmu_notifier_ops = { * * To start mirroring a process address space, the device driver must register * an HMM mirror struct. - * - * THE mm->mmap_sem MUST BE HELD IN WRITE MODE ! */ int hmm_mirror_register(struct hmm_mirror *mirror, struct mm_struct *mm) { + lockdep_assert_held_exclusive(&mm->mmap_sem); + /* Sanity check */ if (!mm || !mirror || !mirror->ops) return -EINVAL; -- 2.20.1
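[A locking comment can go stale silently; lockdep_assert_held() and friends turn the same rule into a runtime check on CONFIG_LOCKDEP kernels and compile away otherwise. A sketch with a hypothetical function:]

#include <linux/lockdep.h>
#include <linux/mm_types.h>

static void frob_address_space(struct mm_struct *mm)
{
	/*
	 * Same contract as the old "MUST BE HELD IN WRITE MODE" comment,
	 * but verified at runtime whenever lockdep is enabled.
	 */
	lockdep_assert_held_exclusive(&mm->mmap_sem);

	/* ... modify state protected by mmap_sem ... */
}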
Christoph Hellwig
2019-Jul-01 06:20 UTC
[Nouveau] [PATCH 14/22] mm/hmm: Remove racy protection against double-unregistration
From: Jason Gunthorpe <jgg at mellanox.com> No other register/unregister kernel API attempts to provide this kind of protection as it is inherently racy, so just drop it. Callers should provide their own protection, and it appears nouveau already does. Signed-off-by: Jason Gunthorpe <jgg at mellanox.com> Reviewed-by: Jérôme Glisse <jglisse at redhat.com> Reviewed-by: John Hubbard <jhubbard at nvidia.com> Reviewed-by: Ralph Campbell <rcampbell at nvidia.com> Reviewed-by: Christoph Hellwig <hch at lst.de> Tested-by: Philip Yang <Philip.Yang at amd.com> --- mm/hmm.c | 8 +------- 1 file changed, 1 insertion(+), 7 deletions(-) diff --git a/mm/hmm.c b/mm/hmm.c index 6f5dc6d568fe..2ef14b2b5505 100644 --- a/mm/hmm.c +++ b/mm/hmm.c @@ -276,17 +276,11 @@ EXPORT_SYMBOL(hmm_mirror_register); */ void hmm_mirror_unregister(struct hmm_mirror *mirror) { - struct hmm *hmm = READ_ONCE(mirror->hmm); - - if (hmm == NULL) - return; + struct hmm *hmm = mirror->hmm; down_write(&hmm->mirrors_sem); list_del_init(&mirror->list); - /* To protect us against double unregister ... */ - mirror->hmm = NULL; up_write(&hmm->mirrors_sem); - hmm_put(hmm); } EXPORT_SYMBOL(hmm_mirror_unregister); -- 2.20.1
Christoph Hellwig
2019-Jul-01 06:20 UTC
[Nouveau] [PATCH 15/22] mm/hmm: Poison hmm_range during unregister
From: Jason Gunthorpe <jgg at mellanox.com> Trying to misuse a range outside its lifetime is a kernel bug. Use poison bytes to help detect this condition. Double unregister will reliably crash. Signed-off-by: Jason Gunthorpe <jgg at mellanox.com> Reviewed-by: Jérôme Glisse <jglisse at redhat.com> Reviewed-by: John Hubbard <jhubbard at nvidia.com> Acked-by: Souptick Joarder <jrdr.linux at gmail.com> Reviewed-by: Ralph Campbell <rcampbell at nvidia.com> Reviewed-by: Ira Weiny <ira.weiny at intel.com> Tested-by: Philip Yang <Philip.Yang at amd.com> --- mm/hmm.c | 14 ++++++++------ 1 file changed, 8 insertions(+), 6 deletions(-) diff --git a/mm/hmm.c b/mm/hmm.c index 2ef14b2b5505..c30aa9403dbe 100644 --- a/mm/hmm.c +++ b/mm/hmm.c @@ -925,19 +925,21 @@ void hmm_range_unregister(struct hmm_range *range) { struct hmm *hmm = range->hmm; - /* Sanity check this really should not happen. */ - if (hmm == NULL || range->end <= range->start) - return; - mutex_lock(&hmm->lock); list_del_init(&range->list); mutex_unlock(&hmm->lock); /* Drop reference taken by hmm_range_register() */ - range->valid = false; mmput(hmm->mm); hmm_put(hmm); - range->hmm = NULL; + + /* + * The range is now invalid and the ref on the hmm is dropped, so + * poison the pointer. Leave other fields in place, for the caller's + * use. + */ + range->valid = false; + memset(&range->hmm, POISON_INUSE, sizeof(range->hmm)); } EXPORT_SYMBOL(hmm_range_unregister); -- 2.20.1
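[Pointer poisoning on teardown converts silent use-after-lifetime bugs into immediate, recognizable crashes. A sketch of the idiom with a hypothetical tracker object, illustration only, not the hmm code:]

#include <linux/poison.h>
#include <linux/string.h>

struct owner;			/* hypothetical owning object */

struct tracker {
	struct owner *owner;	/* hypothetical back-pointer */
	bool valid;
};

static void tracker_unregister(struct tracker *t)
{
	/* ... drop references, unlink from lists ... */

	t->valid = false;
	/*
	 * A later dereference of t->owner now hits POISON_INUSE bytes
	 * and oopses reliably, instead of quietly following a stale
	 * pointer; the other fields stay usable by the caller.
	 */
	memset(&t->owner, POISON_INUSE, sizeof(t->owner));
}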
Christoph Hellwig
2019-Jul-01 06:20 UTC
[Nouveau] [PATCH 16/22] mm/hmm: Remove confusing comment and logic from hmm_release
From: Jason Gunthorpe <jgg at mellanox.com>

hmm_release() is called exactly once per hmm. ops->release() cannot
accidentally trigger any action that would recurse back onto
hmm->mirrors_sem.

This fixes a use-after-free race of the form:

       CPU0                                   CPU1
                                           hmm_release()
                                             up_write(&hmm->mirrors_sem);
  hmm_mirror_unregister(mirror)
   down_write(&hmm->mirrors_sem);
   up_write(&hmm->mirrors_sem);
   kfree(mirror)
                                             mirror->ops->release(mirror)

The only user we have today for ops->release is an empty function, so
this is unambiguously safe.

As a consequence of plugging this race drivers are not allowed to
register/unregister mirrors from within a release op.

Signed-off-by: Jason Gunthorpe <jgg at mellanox.com>
Reviewed-by: Christoph Hellwig <hch at lst.de>
Tested-by: Philip Yang <Philip.Yang at amd.com>
---
 mm/hmm.c | 28 +++++++++-------------------
 1 file changed, 9 insertions(+), 19 deletions(-)

diff --git a/mm/hmm.c b/mm/hmm.c
index c30aa9403dbe..b224ea635a77 100644
--- a/mm/hmm.c
+++ b/mm/hmm.c
@@ -130,26 +130,16 @@ static void hmm_release(struct mmu_notifier *mn, struct mm_struct *mm)
 	 */
 	WARN_ON(!list_empty_careful(&hmm->ranges));
 
-	down_write(&hmm->mirrors_sem);
-	mirror = list_first_entry_or_null(&hmm->mirrors, struct hmm_mirror,
-					  list);
-	while (mirror) {
-		list_del_init(&mirror->list);
-		if (mirror->ops->release) {
-			/*
-			 * Drop mirrors_sem so the release callback can wait
-			 * on any pending work that might itself trigger a
-			 * mmu_notifier callback and thus would deadlock with
-			 * us.
-			 */
-			up_write(&hmm->mirrors_sem);
+	down_read(&hmm->mirrors_sem);
+	list_for_each_entry(mirror, &hmm->mirrors, list) {
+		/*
+		 * Note: The driver is not allowed to trigger
+		 * hmm_mirror_unregister() from this thread.
+		 */
+		if (mirror->ops->release)
 			mirror->ops->release(mirror);
-			down_write(&hmm->mirrors_sem);
-		}
-		mirror = list_first_entry_or_null(&hmm->mirrors,
-						  struct hmm_mirror, list);
 	}
-	up_write(&hmm->mirrors_sem);
+	up_read(&hmm->mirrors_sem);
 
 	hmm_put(hmm);
 }
@@ -279,7 +269,7 @@ void hmm_mirror_unregister(struct hmm_mirror *mirror)
 	struct hmm *hmm = mirror->hmm;
 
 	down_write(&hmm->mirrors_sem);
-	list_del_init(&mirror->list);
+	list_del(&mirror->list);
 	up_write(&hmm->mirrors_sem);
 	hmm_put(hmm);
 }
-- 
2.20.1
Christoph Hellwig
2019-Jul-01 06:20 UTC
[Nouveau] [PATCH 17/22] mm/hmm: Fix error flows in hmm_invalidate_range_start
From: Jason Gunthorpe <jgg at mellanox.com> If the trylock on the hmm->mirrors_sem fails the function will return without decrementing the notifiers that were previously incremented. Since the caller will not call invalidate_range_end() on EAGAIN this will result in notifiers becoming permanently incremented and deadlock. If the sync_cpu_device_pagetables() required blocking the function will not return EAGAIN even though the device continues to touch the pages. This is a violation of the mmu notifier contract. Switch, and rename, the ranges_lock to a spin lock so we can reliably obtain it without blocking during error unwind. The error unwind is necessary since the notifiers count must be held incremented across the call to sync_cpu_device_pagetables() as we cannot allow the range to become marked valid by a parallel invalidate_start/end() pair while doing sync_cpu_device_pagetables(). Signed-off-by: Jason Gunthorpe <jgg at mellanox.com> Reviewed-by: Ralph Campbell <rcampbell at nvidia.com> Reviewed-by: Christoph Hellwig <hch at lst.de> Tested-by: Philip Yang <Philip.Yang at amd.com> --- include/linux/hmm.h | 2 +- mm/hmm.c | 69 ++++++++++++++++++++++++++------------------- 2 files changed, 41 insertions(+), 30 deletions(-) diff --git a/include/linux/hmm.h b/include/linux/hmm.h index bf013e965257..0fa8ea34ccef 100644 --- a/include/linux/hmm.h +++ b/include/linux/hmm.h @@ -86,7 +86,7 @@ struct hmm { struct mm_struct *mm; struct kref kref; - struct mutex lock; + spinlock_t ranges_lock; struct list_head ranges; struct list_head mirrors; struct mmu_notifier mmu_notifier; diff --git a/mm/hmm.c b/mm/hmm.c index b224ea635a77..de35289df20d 100644 --- a/mm/hmm.c +++ b/mm/hmm.c @@ -64,7 +64,7 @@ static struct hmm *hmm_get_or_create(struct mm_struct *mm) init_rwsem(&hmm->mirrors_sem); hmm->mmu_notifier.ops = NULL; INIT_LIST_HEAD(&hmm->ranges); - mutex_init(&hmm->lock); + spin_lock_init(&hmm->ranges_lock); kref_init(&hmm->kref); hmm->notifiers = 0; hmm->mm = mm; @@ -144,6 +144,25 @@ static void hmm_release(struct mmu_notifier *mn, struct mm_struct *mm) hmm_put(hmm); } +static void notifiers_decrement(struct hmm *hmm) +{ + unsigned long flags; + + spin_lock_irqsave(&hmm->ranges_lock, flags); + hmm->notifiers--; + if (!hmm->notifiers) { + struct hmm_range *range; + + list_for_each_entry(range, &hmm->ranges, list) { + if (range->valid) + continue; + range->valid = true; + } + wake_up_all(&hmm->wq); + } + spin_unlock_irqrestore(&hmm->ranges_lock, flags); +} + static int hmm_invalidate_range_start(struct mmu_notifier *mn, const struct mmu_notifier_range *nrange) { @@ -151,6 +170,7 @@ static int hmm_invalidate_range_start(struct mmu_notifier *mn, struct hmm_mirror *mirror; struct hmm_update update; struct hmm_range *range; + unsigned long flags; int ret = 0; if (!kref_get_unless_zero(&hmm->kref)) @@ -161,12 +181,7 @@ static int hmm_invalidate_range_start(struct mmu_notifier *mn, update.event = HMM_UPDATE_INVALIDATE; update.blockable = mmu_notifier_range_blockable(nrange); - if (mmu_notifier_range_blockable(nrange)) - mutex_lock(&hmm->lock); - else if (!mutex_trylock(&hmm->lock)) { - ret = -EAGAIN; - goto out; - } + spin_lock_irqsave(&hmm->ranges_lock, flags); hmm->notifiers++; list_for_each_entry(range, &hmm->ranges, list) { if (update.end < range->start || update.start >= range->end) @@ -174,7 +189,7 @@ static int hmm_invalidate_range_start(struct mmu_notifier *mn, range->valid = false; } - mutex_unlock(&hmm->lock); + spin_unlock_irqrestore(&hmm->ranges_lock, flags); if 
(mmu_notifier_range_blockable(nrange)) down_read(&hmm->mirrors_sem); @@ -182,16 +197,23 @@ static int hmm_invalidate_range_start(struct mmu_notifier *mn, ret = -EAGAIN; goto out; } + list_for_each_entry(mirror, &hmm->mirrors, list) { - int ret; + int rc; - ret = mirror->ops->sync_cpu_device_pagetables(mirror, &update); - if (!update.blockable && ret == -EAGAIN) + rc = mirror->ops->sync_cpu_device_pagetables(mirror, &update); + if (rc) { + if (WARN_ON(update.blockable || rc != -EAGAIN)) + continue; + ret = -EAGAIN; break; + } } up_read(&hmm->mirrors_sem); out: + if (ret) + notifiers_decrement(hmm); hmm_put(hmm); return ret; } @@ -204,20 +226,7 @@ static void hmm_invalidate_range_end(struct mmu_notifier *mn, if (!kref_get_unless_zero(&hmm->kref)) return; - mutex_lock(&hmm->lock); - hmm->notifiers--; - if (!hmm->notifiers) { - struct hmm_range *range; - - list_for_each_entry(range, &hmm->ranges, list) { - if (range->valid) - continue; - range->valid = true; - } - wake_up_all(&hmm->wq); - } - mutex_unlock(&hmm->lock); - + notifiers_decrement(hmm); hmm_put(hmm); } @@ -868,6 +877,7 @@ int hmm_range_register(struct hmm_range *range, { unsigned long mask = ((1UL << page_shift) - 1UL); struct hmm *hmm = mirror->hmm; + unsigned long flags; range->valid = false; range->hmm = NULL; @@ -886,7 +896,7 @@ int hmm_range_register(struct hmm_range *range, return -EFAULT; /* Initialize range to track CPU page table updates. */ - mutex_lock(&hmm->lock); + spin_lock_irqsave(&hmm->ranges_lock, flags); range->hmm = hmm; kref_get(&hmm->kref); @@ -898,7 +908,7 @@ int hmm_range_register(struct hmm_range *range, */ if (!hmm->notifiers) range->valid = true; - mutex_unlock(&hmm->lock); + spin_unlock_irqrestore(&hmm->ranges_lock, flags); return 0; } @@ -914,10 +924,11 @@ EXPORT_SYMBOL(hmm_range_register); void hmm_range_unregister(struct hmm_range *range) { struct hmm *hmm = range->hmm; + unsigned long flags; - mutex_lock(&hmm->lock); + spin_lock_irqsave(&hmm->ranges_lock, flags); list_del_init(&range->list); - mutex_unlock(&hmm->lock); + spin_unlock_irqrestore(&hmm->ranges_lock, flags); /* Drop reference taken by hmm_range_register() */ mmput(hmm->mm); -- 2.20.1
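[The start/end accounting this patch centralizes can be sketched independently of hmm. The essential properties: the counter is adjusted under an IRQ-safe spinlock, and waiters are only woken when the last outstanding invalidation ends. All names below are hypothetical, illustration only:]

#include <linux/spinlock.h>
#include <linux/wait.h>

struct inv_ctx {
	spinlock_t lock;
	long active;			/* outstanding invalidations */
	wait_queue_head_t wq;
};

static void inv_start(struct inv_ctx *ctx)
{
	unsigned long flags;

	spin_lock_irqsave(&ctx->lock, flags);
	ctx->active++;
	/* ... mark overlapping ranges invalid here ... */
	spin_unlock_irqrestore(&ctx->lock, flags);
}

static void inv_end(struct inv_ctx *ctx)
{
	unsigned long flags;

	spin_lock_irqsave(&ctx->lock, flags);
	/* Only the final end re-validates and wakes the waiters. */
	if (!--ctx->active) {
		/* ... mark ranges valid again here ... */
		wake_up_all(&ctx->wq);
	}
	spin_unlock_irqrestore(&ctx->lock, flags);
}

[Because a spinlock can always be acquired without sleeping, the same decrement helper is safe to call from the non-blockable error-unwind path, which is exactly what the mutex-based version could not guarantee.]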
Christoph Hellwig
2019-Jul-01 06:20 UTC
[Nouveau] [PATCH 18/22] mm: return valid info from hmm_range_unregister
Checking range->valid is trivial and has no meaningful cost, but nicely simplifies the fastpath in typical callers. Also remove the hmm_vma_range_done function, which now is a trivial wrapper around hmm_range_unregister. Signed-off-by: Christoph Hellwig <hch at lst.de> --- drivers/gpu/drm/nouveau/nouveau_svm.c | 2 +- include/linux/hmm.h | 11 +---------- mm/hmm.c | 6 +++++- 3 files changed, 7 insertions(+), 12 deletions(-) diff --git a/drivers/gpu/drm/nouveau/nouveau_svm.c b/drivers/gpu/drm/nouveau/nouveau_svm.c index 8c92374afcf2..9d40114d7949 100644 --- a/drivers/gpu/drm/nouveau/nouveau_svm.c +++ b/drivers/gpu/drm/nouveau/nouveau_svm.c @@ -652,7 +652,7 @@ nouveau_svm_fault(struct nvif_notify *notify) ret = hmm_vma_fault(&svmm->mirror, &range, true); if (ret == 0) { mutex_lock(&svmm->mutex); - if (!hmm_vma_range_done(&range)) { + if (!hmm_range_unregister(&range)) { mutex_unlock(&svmm->mutex); goto again; } diff --git a/include/linux/hmm.h b/include/linux/hmm.h index 0fa8ea34ccef..4b185d286c3b 100644 --- a/include/linux/hmm.h +++ b/include/linux/hmm.h @@ -465,7 +465,7 @@ int hmm_range_register(struct hmm_range *range, unsigned long start, unsigned long end, unsigned page_shift); -void hmm_range_unregister(struct hmm_range *range); +bool hmm_range_unregister(struct hmm_range *range); long hmm_range_snapshot(struct hmm_range *range); long hmm_range_fault(struct hmm_range *range, bool block); long hmm_range_dma_map(struct hmm_range *range, @@ -487,15 +487,6 @@ long hmm_range_dma_unmap(struct hmm_range *range, */ #define HMM_RANGE_DEFAULT_TIMEOUT 1000 -/* This is a temporary helper to avoid merge conflict between trees. */ -static inline bool hmm_vma_range_done(struct hmm_range *range) -{ - bool ret = hmm_range_valid(range); - - hmm_range_unregister(range); - return ret; -} - /* This is a temporary helper to avoid merge conflict between trees. */ static inline int hmm_vma_fault(struct hmm_mirror *mirror, struct hmm_range *range, bool block) diff --git a/mm/hmm.c b/mm/hmm.c index de35289df20d..c85ed7d4e2ce 100644 --- a/mm/hmm.c +++ b/mm/hmm.c @@ -920,11 +920,14 @@ EXPORT_SYMBOL(hmm_range_register); * * Range struct is used to track updates to the CPU page table after a call to * hmm_range_register(). See include/linux/hmm.h for how to use it. + * + * Returns if the range was still valid at the time of unregistering. */ -void hmm_range_unregister(struct hmm_range *range) +bool hmm_range_unregister(struct hmm_range *range) { struct hmm *hmm = range->hmm; unsigned long flags; + bool ret = range->valid; spin_lock_irqsave(&hmm->ranges_lock, flags); list_del_init(&range->list); @@ -941,6 +944,7 @@ void hmm_range_unregister(struct hmm_range *range) */ range->valid = false; memset(&range->hmm, POISON_INUSE, sizeof(range->hmm)); + return ret; } EXPORT_SYMBOL(hmm_range_unregister); -- 2.20.1
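[On the caller side, the boolean return folds the validity check and the unregister into one step. A sketch of the nouveau-style retry loop built on it; struct drv and drv_fault_range are hypothetical, only hmm_range_unregister comes from the series:]

#include <linux/hmm.h>
#include <linux/mutex.h>

struct drv {
	struct mutex mutex;	/* also taken by the driver's invalidation path */
};

int drv_fault_range(struct drv *drv, struct hmm_range *range);	/* hypothetical */

static int fault_and_commit(struct drv *drv, struct hmm_range *range)
{
	int ret;

again:
	ret = drv_fault_range(drv, range);
	if (ret)
		return ret;

	mutex_lock(&drv->mutex);
	if (!hmm_range_unregister(range)) {
		/* Invalidated while we were faulting: start over. */
		mutex_unlock(&drv->mutex);
		goto again;
	}
	/* The snapshot was valid; commit it to the device page tables. */
	mutex_unlock(&drv->mutex);
	return 0;
}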
Christoph Hellwig
2019-Jul-01 06:20 UTC
[Nouveau] [PATCH 19/22] mm: always return EBUSY for invalid ranges in hmm_range_{fault, snapshot}
We should not have two different error codes for the same condition. In
addition, this really complicates the code because of the special
handling of EAGAIN, which drops the mmap_sem as part of the
FAULT_FLAG_ALLOW_RETRY logic in the core vm.

Signed-off-by: Christoph Hellwig <hch at lst.de>
---
 mm/hmm.c | 8 +++-----
 1 file changed, 3 insertions(+), 5 deletions(-)

diff --git a/mm/hmm.c b/mm/hmm.c
index c85ed7d4e2ce..d125df698e2b 100644
--- a/mm/hmm.c
+++ b/mm/hmm.c
@@ -974,7 +974,7 @@ long hmm_range_snapshot(struct hmm_range *range)
 	do {
 		/* If range is no longer valid force retry. */
 		if (!range->valid)
-			return -EAGAIN;
+			return -EBUSY;
 
 		vma = find_vma(hmm->mm, start);
 		if (vma == NULL || (vma->vm_flags & device_vma))
@@ -1069,10 +1069,8 @@ long hmm_range_fault(struct hmm_range *range, bool block)
 
 	do {
 		/* If range is no longer valid force retry. */
-		if (!range->valid) {
-			up_read(&hmm->mm->mmap_sem);
-			return -EAGAIN;
-		}
+		if (!range->valid)
+			return -EBUSY;
 
 		vma = find_vma(hmm->mm, start);
 		if (vma == NULL || (vma->vm_flags & device_vma))
-- 
2.20.1
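[With a single error code, a caller's retry logic collapses to one check. A sketch of what this might look like under the post-series convention, where the caller holds mmap_sem across the retry. The caller is hypothetical, and a real one would also wait for the range to become valid again before retrying rather than spinning:]

#include <linux/hmm.h>
#include <linux/mm.h>

static long fault_with_retry(struct mm_struct *mm, struct hmm_range *range)
{
	long ret;

	down_read(&mm->mmap_sem);
	do {
		ret = hmm_range_fault(range, true);
		/*
		 * -EBUSY now uniformly means "the range was invalidated,
		 * try again", and unlike the old -EAGAIN path from
		 * hmm_range_fault() the mmap_sem is still held here.
		 */
	} while (ret == -EBUSY);
	up_read(&mm->mmap_sem);
	return ret;
}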
Christoph Hellwig
2019-Jul-01 06:20 UTC
[Nouveau] [PATCH 20/22] mm: move hmm_vma_fault to nouveau
hmm_vma_fault is marked as a legacy API to get rid of, but quite suites the current nouvea flow. Move it to the only user in preparation for fixing a locking bug involving caller and callee. Signed-off-by: Christoph Hellwig <hch at lst.de> --- drivers/gpu/drm/nouveau/nouveau_svm.c | 54 ++++++++++++++++++++++++++- include/linux/hmm.h | 54 --------------------------- 2 files changed, 53 insertions(+), 55 deletions(-) diff --git a/drivers/gpu/drm/nouveau/nouveau_svm.c b/drivers/gpu/drm/nouveau/nouveau_svm.c index 9d40114d7949..e831f4184a17 100644 --- a/drivers/gpu/drm/nouveau/nouveau_svm.c +++ b/drivers/gpu/drm/nouveau/nouveau_svm.c @@ -36,6 +36,13 @@ #include <linux/sort.h> #include <linux/hmm.h> +/* + * When waiting for mmu notifiers we need some kind of time out otherwise we + * could potentialy wait for ever, 1000ms ie 1s sounds like a long time to + * wait already. + */ +#define NOUVEAU_RANGE_FAULT_TIMEOUT 1000 + struct nouveau_svm { struct nouveau_drm *drm; struct mutex mutex; @@ -475,6 +482,51 @@ nouveau_svm_fault_cache(struct nouveau_svm *svm, fault->inst, fault->addr, fault->access); } +static int +nouveau_range_fault(struct hmm_mirror *mirror, struct hmm_range *range, + bool block) +{ + long ret; + + /* + * With the old API the driver must set each individual entries with + * the requested flags (valid, write, ...). So here we set the mask to + * keep intact the entries provided by the driver and zero out the + * default_flags. + */ + range->default_flags = 0; + range->pfn_flags_mask = -1UL; + + ret = hmm_range_register(range, mirror, + range->start, range->end, + PAGE_SHIFT); + if (ret) + return (int)ret; + + if (!hmm_range_wait_until_valid(range, NOUVEAU_RANGE_FAULT_TIMEOUT)) { + /* + * The mmap_sem was taken by driver we release it here and + * returns -EAGAIN which correspond to mmap_sem have been + * drop in the old API. + */ + up_read(&range->vma->vm_mm->mmap_sem); + return -EAGAIN; + } + + ret = hmm_range_fault(range, block); + if (ret <= 0) { + if (ret == -EBUSY || !ret) { + /* Same as above, drop mmap_sem to match old API. */ + up_read(&range->vma->vm_mm->mmap_sem); + ret = -EBUSY; + } else if (ret == -EAGAIN) + ret = -EBUSY; + hmm_range_unregister(range); + return ret; + } + return 0; +} + static int nouveau_svm_fault(struct nvif_notify *notify) { @@ -649,7 +701,7 @@ nouveau_svm_fault(struct nvif_notify *notify) range.values = nouveau_svm_pfn_values; range.pfn_shift = NVIF_VMM_PFNMAP_V0_ADDR_SHIFT; again: - ret = hmm_vma_fault(&svmm->mirror, &range, true); + ret = nouveau_range_fault(&svmm->mirror, &range, true); if (ret == 0) { mutex_lock(&svmm->mutex); if (!hmm_range_unregister(&range)) { diff --git a/include/linux/hmm.h b/include/linux/hmm.h index 4b185d286c3b..3457cf9182e5 100644 --- a/include/linux/hmm.h +++ b/include/linux/hmm.h @@ -478,60 +478,6 @@ long hmm_range_dma_unmap(struct hmm_range *range, dma_addr_t *daddrs, bool dirty); -/* - * HMM_RANGE_DEFAULT_TIMEOUT - default timeout (ms) when waiting for a range - * - * When waiting for mmu notifiers we need some kind of time out otherwise we - * could potentialy wait for ever, 1000ms ie 1s sounds like a long time to - * wait already. - */ -#define HMM_RANGE_DEFAULT_TIMEOUT 1000 - -/* This is a temporary helper to avoid merge conflict between trees. */ -static inline int hmm_vma_fault(struct hmm_mirror *mirror, - struct hmm_range *range, bool block) -{ - long ret; - - /* - * With the old API the driver must set each individual entries with - * the requested flags (valid, write, ...). 
So here we set the mask to - * keep intact the entries provided by the driver and zero out the - * default_flags. - */ - range->default_flags = 0; - range->pfn_flags_mask = -1UL; - - ret = hmm_range_register(range, mirror, - range->start, range->end, - PAGE_SHIFT); - if (ret) - return (int)ret; - - if (!hmm_range_wait_until_valid(range, HMM_RANGE_DEFAULT_TIMEOUT)) { - /* - * The mmap_sem was taken by driver we release it here and - * returns -EAGAIN which correspond to mmap_sem have been - * drop in the old API. - */ - up_read(&range->vma->vm_mm->mmap_sem); - return -EAGAIN; - } - - ret = hmm_range_fault(range, block); - if (ret <= 0) { - if (ret == -EBUSY || !ret) { - /* Same as above, drop mmap_sem to match old API. */ - up_read(&range->vma->vm_mm->mmap_sem); - ret = -EBUSY; - } else if (ret == -EAGAIN) - ret = -EBUSY; - hmm_range_unregister(range); - return ret; - } - return 0; -} - /* Below are for HMM internal use only! Not to be used by device driver! */ static inline void hmm_mm_init(struct mm_struct *mm) { -- 2.20.1
Christoph Hellwig
2019-Jul-01 06:20 UTC
[Nouveau] [PATCH 21/22] nouveau: unlock mmap_sem on all errors from nouveau_range_fault
Currently nouveau_svm_fault expects nouveau_range_fault to never unlock mmap_sem, but the latter unlocks it for a random selection of error codes. Fix this up by always unlocking mmap_sem for non-zero return values in nouveau_range_fault, and only unlocking it in the caller for successful returns. Signed-off-by: Christoph Hellwig <hch at lst.de> --- drivers/gpu/drm/nouveau/nouveau_svm.c | 15 ++++++++------- 1 file changed, 8 insertions(+), 7 deletions(-) diff --git a/drivers/gpu/drm/nouveau/nouveau_svm.c b/drivers/gpu/drm/nouveau/nouveau_svm.c index e831f4184a17..c0cf7aeaefb3 100644 --- a/drivers/gpu/drm/nouveau/nouveau_svm.c +++ b/drivers/gpu/drm/nouveau/nouveau_svm.c @@ -500,8 +500,10 @@ nouveau_range_fault(struct hmm_mirror *mirror, struct hmm_range *range, ret = hmm_range_register(range, mirror, range->start, range->end, PAGE_SHIFT); - if (ret) + if (ret) { + up_read(&range->vma->vm_mm->mmap_sem); return (int)ret; + } if (!hmm_range_wait_until_valid(range, NOUVEAU_RANGE_FAULT_TIMEOUT)) { /* @@ -515,15 +517,14 @@ nouveau_range_fault(struct hmm_mirror *mirror, struct hmm_range *range, ret = hmm_range_fault(range, block); if (ret <= 0) { - if (ret == -EBUSY || !ret) { - /* Same as above, drop mmap_sem to match old API. */ - up_read(&range->vma->vm_mm->mmap_sem); - ret = -EBUSY; - } else if (ret == -EAGAIN) + if (ret == 0) ret = -EBUSY; + if (ret != -EAGAIN) + up_read(&range->vma->vm_mm->mmap_sem); hmm_range_unregister(range); return ret; } + return 0; } @@ -718,8 +719,8 @@ nouveau_svm_fault(struct nvif_notify *notify) NULL); svmm->vmm->vmm.object.client->super = false; mutex_unlock(&svmm->mutex); + up_read(&svmm->mm->mmap_sem); } - up_read(&svmm->mm->mmap_sem); /* Cancel any faults in the window whose pages didn't manage * to keep their valid bit, or stay writeable when required. -- 2.20.1
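[The convention the fix establishes is worth spelling out as a pattern: a helper either succeeds with the lock still held, or fails having already released it, so exactly one party unlocks on every path. A schematic with hypothetical helpers, illustration only:]

#include <linux/mm.h>

static int locked_helper(struct mm_struct *mm)
{
	int ret = 0;

	/* ... work under mmap_sem that can fail and set ret ... */

	if (ret) {
		/* Failure: the helper releases the lock itself. */
		up_read(&mm->mmap_sem);
		return ret;
	}
	/* Success: return with mmap_sem still held. */
	return 0;
}

static void caller(struct mm_struct *mm)
{
	down_read(&mm->mmap_sem);
	if (locked_helper(mm))
		return;		/* already unlocked on error */

	/* ... use the results ... */
	up_read(&mm->mmap_sem);
}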
Christoph Hellwig
2019-Jul-01 06:20 UTC
[Nouveau] [PATCH 22/22] mm: remove the legacy hmm_pfn_* APIs
Switch the one remaining user in nouveau over to its replacement, and remove all the wrappers. Signed-off-by: Christoph Hellwig <hch at lst.de> --- drivers/gpu/drm/nouveau/nouveau_dmem.c | 2 +- include/linux/hmm.h | 36 -------------------------- 2 files changed, 1 insertion(+), 37 deletions(-) diff --git a/drivers/gpu/drm/nouveau/nouveau_dmem.c b/drivers/gpu/drm/nouveau/nouveau_dmem.c index 40c47d6a7d78..534069ffe20a 100644 --- a/drivers/gpu/drm/nouveau/nouveau_dmem.c +++ b/drivers/gpu/drm/nouveau/nouveau_dmem.c @@ -853,7 +853,7 @@ nouveau_dmem_convert_pfn(struct nouveau_drm *drm, struct page *page; uint64_t addr; - page = hmm_pfn_to_page(range, range->pfns[i]); + page = hmm_device_entry_to_page(range, range->pfns[i]); if (page == NULL) continue; diff --git a/include/linux/hmm.h b/include/linux/hmm.h index 3457cf9182e5..9799fde71f2e 100644 --- a/include/linux/hmm.h +++ b/include/linux/hmm.h @@ -290,42 +290,6 @@ static inline uint64_t hmm_device_entry_from_pfn(const struct hmm_range *range, range->flags[HMM_PFN_VALID]; } -/* - * Old API: - * hmm_pfn_to_page() - * hmm_pfn_to_pfn() - * hmm_pfn_from_page() - * hmm_pfn_from_pfn() - * - * This are the OLD API please use new API, it is here to avoid cross-tree - * merge painfullness ie we convert things to new API in stages. - */ -static inline struct page *hmm_pfn_to_page(const struct hmm_range *range, - uint64_t pfn) -{ - return hmm_device_entry_to_page(range, pfn); -} - -static inline unsigned long hmm_pfn_to_pfn(const struct hmm_range *range, - uint64_t pfn) -{ - return hmm_device_entry_to_pfn(range, pfn); -} - -static inline uint64_t hmm_pfn_from_page(const struct hmm_range *range, - struct page *page) -{ - return hmm_device_entry_from_page(range, page); -} - -static inline uint64_t hmm_pfn_from_pfn(const struct hmm_range *range, - unsigned long pfn) -{ - return hmm_device_entry_from_pfn(range, pfn); -} - - - #if IS_ENABLED(CONFIG_HMM_MIRROR) /* * Mirroring: how to synchronize device page table with CPU page table. -- 2.20.1
And I've demonstrated that I can't send patch series.. While this has all the right patches, it also has the extra patches already in the hmm tree, and four extra patches I wanted to send once this series is merged. I'll give up for now, please use the git url for anything serious, as it contains the right thing.
On Mon, Jul 01, 2019 at 10:25:17AM +0200, Christoph Hellwig wrote:
> And I've demonstrated that I can't send patch series.. While this
> has all the right patches, it also has the extra patches already
> in the hmm tree, and four extra patches I wanted to send once
> this series is merged. I'll give up for now, please use the git
> url for anything serious, as it contains the right thing.

Okay, I sorted it all out and temporarily put it here:

https://github.com/jgunthorpe/linux/commits/hmm

Bit involved job:

- Took Ira's v4 patch into hmm.git and confirmed it matches what
  Andrew has in linux-next after all the fixups
- Checked your github v4 and the v3 that hit the mailing list were
  substantially similar (I never did get a clean v4) and largely went
  with the github version
- Based CH's v4 series on -rc7 and put back the removal hunk in swap.c
  so it compiles
- Merge'd CH's series to hmm.git and fixed all the conflicts with Ira
  and Ralph's patches (such that swap.c remains unchanged)
- Added Dan's ack's and tested-by's

I think this fairly closely follows what was posted to the mailing
list. As it was more than a simple 'git am', I'll let it sit on github
until I hear OK's then I'll move it to kernel.org's hmm.git and it
will hit linux-next. 0-day should also run on this whole thing from my
github.

What I know is outstanding:

- The conflicting ARM patches, I understand Andrew will handle these
  post-linux-next
- The conflict with AMD GPU in -next, I am waiting to hear from AMD

Otherwise I think we are done with hmm.git for this cycle.
Unfortunately this is still not enough to progress rdma's ODP, so we
will need to do this again next cycle :(

I'll be working on patches once I get all the merge window prep I
have to do done.

Regards,
Jason
Kuehling, Felix
2019-Jul-02 21:43 UTC
[Nouveau] [PATCH 19/22] mm: always return EBUSY for invalid ranges in hmm_range_{fault, snapshot}
On 2019-07-01 2:20 a.m., Christoph Hellwig wrote:
> We should not have two different error codes for the same condition. In
> addition, this really complicates the code because of the special
> handling of EAGAIN, which drops the mmap_sem as part of the
> FAULT_FLAG_ALLOW_RETRY logic in the core vm.

I think the comment above hmm_range_snapshot needs an update. Also
Documentation/vm/hmm.rst shows some example code using
hmm_range_snapshot that retries on -EAGAIN. That would need to be
updated to use -EBUSY or remove the retry logic altogether.

Other than that, this patch is
Reviewed-by: Felix Kuehling <Felix.Kuehling at amd.com>

Philip, this means we should remove our retry logic again in
amdgpu_ttm_tt_get_user_pages. According to the comment above
hmm_range_fault, it can only return -EAGAIN if the block parameter is
false. I think this statement is now actually true. We set block=true,
so we can't get -EAGAIN. On -EBUSY we can let
amdgpu_amdkfd_restore_userptr_worker schedule the retry (which it does
already anyway).

Regards,
  Felix

> 
> Signed-off-by: Christoph Hellwig <hch at lst.de>
> ---
>  mm/hmm.c | 8 +++-----
>  1 file changed, 3 insertions(+), 5 deletions(-)
> 
> diff --git a/mm/hmm.c b/mm/hmm.c
> index c85ed7d4e2ce..d125df698e2b 100644
> --- a/mm/hmm.c
> +++ b/mm/hmm.c
> @@ -974,7 +974,7 @@ long hmm_range_snapshot(struct hmm_range *range)
>  	do {
>  		/* If range is no longer valid force retry. */
>  		if (!range->valid)
> -			return -EAGAIN;
> +			return -EBUSY;
>  
>  		vma = find_vma(hmm->mm, start);
>  		if (vma == NULL || (vma->vm_flags & device_vma))
> @@ -1069,10 +1069,8 @@ long hmm_range_fault(struct hmm_range *range, bool block)
>  
>  	do {
>  		/* If range is no longer valid force retry. */
> -		if (!range->valid) {
> -			up_read(&hmm->mm->mmap_sem);
> -			return -EAGAIN;
> -		}
> +		if (!range->valid)
> +			return -EBUSY;
>  
>  		vma = find_vma(hmm->mm, start);
>  		if (vma == NULL || (vma->vm_flags & device_vma))
Ralph Campbell
2019-Jul-03 17:22 UTC
[Nouveau] [PATCH 18/22] mm: return valid info from hmm_range_unregister
On 6/30/19 11:20 PM, Christoph Hellwig wrote:
> Checking range->valid is trivial and has no meaningful cost, but
> nicely simplifies the fastpath in typical callers. Also remove the
> hmm_vma_range_done function, which now is a trivial wrapper around
> hmm_range_unregister.
> 
> Signed-off-by: Christoph Hellwig <hch at lst.de>

Reviewed-by: Ralph Campbell <rcampbell at nvidia.com>

> ---
>  drivers/gpu/drm/nouveau/nouveau_svm.c |  2 +-
>  include/linux/hmm.h                   | 11 +----------
>  mm/hmm.c                              |  6 +++++-
>  3 files changed, 7 insertions(+), 12 deletions(-)
> 
> diff --git a/drivers/gpu/drm/nouveau/nouveau_svm.c b/drivers/gpu/drm/nouveau/nouveau_svm.c
> index 8c92374afcf2..9d40114d7949 100644
> --- a/drivers/gpu/drm/nouveau/nouveau_svm.c
> +++ b/drivers/gpu/drm/nouveau/nouveau_svm.c
> @@ -652,7 +652,7 @@ nouveau_svm_fault(struct nvif_notify *notify)
>  	ret = hmm_vma_fault(&svmm->mirror, &range, true);
>  	if (ret == 0) {
>  		mutex_lock(&svmm->mutex);
> -		if (!hmm_vma_range_done(&range)) {
> +		if (!hmm_range_unregister(&range)) {
>  			mutex_unlock(&svmm->mutex);
>  			goto again;
>  		}
> diff --git a/include/linux/hmm.h b/include/linux/hmm.h
> index 0fa8ea34ccef..4b185d286c3b 100644
> --- a/include/linux/hmm.h
> +++ b/include/linux/hmm.h
> @@ -465,7 +465,7 @@ int hmm_range_register(struct hmm_range *range,
>  			unsigned long start,
>  			unsigned long end,
>  			unsigned page_shift);
> -void hmm_range_unregister(struct hmm_range *range);
> +bool hmm_range_unregister(struct hmm_range *range);
>  long hmm_range_snapshot(struct hmm_range *range);
>  long hmm_range_fault(struct hmm_range *range, bool block);
>  long hmm_range_dma_map(struct hmm_range *range,
> @@ -487,15 +487,6 @@ long hmm_range_dma_unmap(struct hmm_range *range,
>   */
>  #define HMM_RANGE_DEFAULT_TIMEOUT 1000
>  
> -/* This is a temporary helper to avoid merge conflict between trees. */
> -static inline bool hmm_vma_range_done(struct hmm_range *range)
> -{
> -	bool ret = hmm_range_valid(range);
> -
> -	hmm_range_unregister(range);
> -	return ret;
> -}
> -
>  /* This is a temporary helper to avoid merge conflict between trees. */
>  static inline int hmm_vma_fault(struct hmm_mirror *mirror,
>  				struct hmm_range *range, bool block)
> diff --git a/mm/hmm.c b/mm/hmm.c
> index de35289df20d..c85ed7d4e2ce 100644
> --- a/mm/hmm.c
> +++ b/mm/hmm.c
> @@ -920,11 +920,14 @@ EXPORT_SYMBOL(hmm_range_register);
>   *
>   * Range struct is used to track updates to the CPU page table after a call to
>   * hmm_range_register(). See include/linux/hmm.h for how to use it.
> + *
> + * Returns if the range was still valid at the time of unregistering.

Since this is an exported function, we should have kernel-doc comments.
That is probably a separate patch but at least this line could be:

Return: True if the range was still valid at the time of unregistering.

>   */
> -void hmm_range_unregister(struct hmm_range *range)
> +bool hmm_range_unregister(struct hmm_range *range)
>  {
>  	struct hmm *hmm = range->hmm;
>  	unsigned long flags;
> +	bool ret = range->valid;
>  
>  	spin_lock_irqsave(&hmm->ranges_lock, flags);
>  	list_del_init(&range->list);
> @@ -941,6 +944,7 @@ void hmm_range_unregister(struct hmm_range *range)
>  	 */
>  	range->valid = false;
>  	memset(&range->hmm, POISON_INUSE, sizeof(range->hmm));
> +	return ret;
>  }
>  EXPORT_SYMBOL(hmm_range_unregister);
> 
>
Ralph Campbell
2019-Jul-03 17:32 UTC
[Nouveau] [PATCH 19/22] mm: always return EBUSY for invalid ranges in hmm_range_{fault, snapshot}
On 6/30/19 11:20 PM, Christoph Hellwig wrote:
> We should not have two different error codes for the same condition. In
> addition, this really complicates the code because of the special
> handling of EAGAIN, which drops the mmap_sem as part of the
> FAULT_FLAG_ALLOW_RETRY logic in the core vm.
> 
> Signed-off-by: Christoph Hellwig <hch at lst.de>

Reviewed-by: Ralph Campbell <rcampbell at nvidia.com>

Probably should update the "Return:" comment above hmm_range_snapshot()
too.

> ---
>  mm/hmm.c | 8 +++-----
>  1 file changed, 3 insertions(+), 5 deletions(-)
> 
> diff --git a/mm/hmm.c b/mm/hmm.c
> index c85ed7d4e2ce..d125df698e2b 100644
> --- a/mm/hmm.c
> +++ b/mm/hmm.c
> @@ -974,7 +974,7 @@ long hmm_range_snapshot(struct hmm_range *range)
>  	do {
>  		/* If range is no longer valid force retry. */
>  		if (!range->valid)
> -			return -EAGAIN;
> +			return -EBUSY;
>  
>  		vma = find_vma(hmm->mm, start);
>  		if (vma == NULL || (vma->vm_flags & device_vma))
> @@ -1069,10 +1069,8 @@ long hmm_range_fault(struct hmm_range *range, bool block)
>  
>  	do {
>  		/* If range is no longer valid force retry. */
> -		if (!range->valid) {
> -			up_read(&hmm->mm->mmap_sem);
> -			return -EAGAIN;
> -		}
> +		if (!range->valid)
> +			return -EBUSY;
>  
>  		vma = find_vma(hmm->mm, start);
>  		if (vma == NULL || (vma->vm_flags & device_vma))
>
Ralph Campbell
2019-Jul-03 17:48 UTC
[Nouveau] [PATCH 20/22] mm: move hmm_vma_fault to nouveau
On 6/30/19 11:20 PM, Christoph Hellwig wrote:> hmm_vma_fault is marked as a legacy API to get rid of, but quite suites > the current nouvea flow. Move it to the only user in preparation forI didn't quite parse the phrase "quite suites the current nouvea flow." s/nouvea/nouveau/> fixing a locking bug involving caller and callee. > > Signed-off-by: Christoph Hellwig <hch at lst.de>I see where you are going with this and it looks like straightforward code movement so, Reviewed-by: Ralph Campbell <rcampbell at nvidia.com>> --- > drivers/gpu/drm/nouveau/nouveau_svm.c | 54 ++++++++++++++++++++++++++- > include/linux/hmm.h | 54 --------------------------- > 2 files changed, 53 insertions(+), 55 deletions(-) > > diff --git a/drivers/gpu/drm/nouveau/nouveau_svm.c b/drivers/gpu/drm/nouveau/nouveau_svm.c > index 9d40114d7949..e831f4184a17 100644 > --- a/drivers/gpu/drm/nouveau/nouveau_svm.c > +++ b/drivers/gpu/drm/nouveau/nouveau_svm.c > @@ -36,6 +36,13 @@ > #include <linux/sort.h> > #include <linux/hmm.h> > > +/* > + * When waiting for mmu notifiers we need some kind of time out otherwise we > + * could potentialy wait for ever, 1000ms ie 1s sounds like a long time to > + * wait already. > + */ > +#define NOUVEAU_RANGE_FAULT_TIMEOUT 1000 > + > struct nouveau_svm { > struct nouveau_drm *drm; > struct mutex mutex; > @@ -475,6 +482,51 @@ nouveau_svm_fault_cache(struct nouveau_svm *svm, > fault->inst, fault->addr, fault->access); > } > > +static int > +nouveau_range_fault(struct hmm_mirror *mirror, struct hmm_range *range, > + bool block) > +{ > + long ret; > + > + /* > + * With the old API the driver must set each individual entries with > + * the requested flags (valid, write, ...). So here we set the mask to > + * keep intact the entries provided by the driver and zero out the > + * default_flags. > + */ > + range->default_flags = 0; > + range->pfn_flags_mask = -1UL; > + > + ret = hmm_range_register(range, mirror, > + range->start, range->end, > + PAGE_SHIFT); > + if (ret) > + return (int)ret; > + > + if (!hmm_range_wait_until_valid(range, NOUVEAU_RANGE_FAULT_TIMEOUT)) { > + /* > + * The mmap_sem was taken by driver we release it here and > + * returns -EAGAIN which correspond to mmap_sem have been > + * drop in the old API. > + */ > + up_read(&range->vma->vm_mm->mmap_sem); > + return -EAGAIN; > + } > + > + ret = hmm_range_fault(range, block); > + if (ret <= 0) { > + if (ret == -EBUSY || !ret) { > + /* Same as above, drop mmap_sem to match old API. 
*/ > + up_read(&range->vma->vm_mm->mmap_sem); > + ret = -EBUSY; > + } else if (ret == -EAGAIN) > + ret = -EBUSY; > + hmm_range_unregister(range); > + return ret; > + } > + return 0; > +} > + > static int > nouveau_svm_fault(struct nvif_notify *notify) > { > @@ -649,7 +701,7 @@ nouveau_svm_fault(struct nvif_notify *notify) > range.values = nouveau_svm_pfn_values; > range.pfn_shift = NVIF_VMM_PFNMAP_V0_ADDR_SHIFT; > again: > - ret = hmm_vma_fault(&svmm->mirror, &range, true); > + ret = nouveau_range_fault(&svmm->mirror, &range, true); > if (ret == 0) { > mutex_lock(&svmm->mutex); > if (!hmm_range_unregister(&range)) { > diff --git a/include/linux/hmm.h b/include/linux/hmm.h > index 4b185d286c3b..3457cf9182e5 100644 > --- a/include/linux/hmm.h > +++ b/include/linux/hmm.h > @@ -478,60 +478,6 @@ long hmm_range_dma_unmap(struct hmm_range *range, > dma_addr_t *daddrs, > bool dirty); > > -/* > - * HMM_RANGE_DEFAULT_TIMEOUT - default timeout (ms) when waiting for a range > - * > - * When waiting for mmu notifiers we need some kind of time out otherwise we > - * could potentialy wait for ever, 1000ms ie 1s sounds like a long time to > - * wait already. > - */ > -#define HMM_RANGE_DEFAULT_TIMEOUT 1000 > - > -/* This is a temporary helper to avoid merge conflict between trees. */ > -static inline int hmm_vma_fault(struct hmm_mirror *mirror, > - struct hmm_range *range, bool block) > -{ > - long ret; > - > - /* > - * With the old API the driver must set each individual entries with > - * the requested flags (valid, write, ...). So here we set the mask to > - * keep intact the entries provided by the driver and zero out the > - * default_flags. > - */ > - range->default_flags = 0; > - range->pfn_flags_mask = -1UL; > - > - ret = hmm_range_register(range, mirror, > - range->start, range->end, > - PAGE_SHIFT); > - if (ret) > - return (int)ret; > - > - if (!hmm_range_wait_until_valid(range, HMM_RANGE_DEFAULT_TIMEOUT)) { > - /* > - * The mmap_sem was taken by driver we release it here and > - * returns -EAGAIN which correspond to mmap_sem have been > - * drop in the old API. > - */ > - up_read(&range->vma->vm_mm->mmap_sem); > - return -EAGAIN; > - } > - > - ret = hmm_range_fault(range, block); > - if (ret <= 0) { > - if (ret == -EBUSY || !ret) { > - /* Same as above, drop mmap_sem to match old API. */ > - up_read(&range->vma->vm_mm->mmap_sem); > - ret = -EBUSY; > - } else if (ret == -EAGAIN) > - ret = -EBUSY; > - hmm_range_unregister(range); > - return ret; > - } > - return 0; > -} > - > /* Below are for HMM internal use only! Not to be used by device driver! */ > static inline void hmm_mm_init(struct mm_struct *mm) > { >
Jason Gunthorpe
2019-Jul-03 18:01 UTC
[Nouveau] [PATCH 22/22] mm: remove the legacy hmm_pfn_* APIs
On Mon, Jul 01, 2019 at 08:20:20AM +0200, Christoph Hellwig wrote:
> Switch the one remaining user in nouveau over to its replacement,
> and remove all the wrappers.
> 
> Signed-off-by: Christoph Hellwig <hch at lst.de>
>  drivers/gpu/drm/nouveau/nouveau_dmem.c |  2 +-
>  include/linux/hmm.h                    | 36 --------------------------
>  2 files changed, 1 insertion(+), 37 deletions(-)

Christoph, I guess you didn't mean to send this branch to the mailing
list? In any event some of these, like this one, look obvious and I
could still grab a few for hmm.git. Let me know what you'd like,
please.

Reviewed-by: Jason Gunthorpe <jgg at mellanox.com>

Thanks,
Jason
Jason Gunthorpe
2019-Jul-03 18:03 UTC
[Nouveau] [PATCH 20/22] mm: move hmm_vma_fault to nouveau
On Mon, Jul 01, 2019 at 08:20:18AM +0200, Christoph Hellwig wrote:
> hmm_vma_fault is marked as a legacy API to get rid of, but quite suites
> the current nouvea flow. Move it to the only user in preparation for
> fixing a locking bug involving caller and callee.
> 
> Signed-off-by: Christoph Hellwig <hch at lst.de>
>  drivers/gpu/drm/nouveau/nouveau_svm.c | 54 ++++++++++++++++++++++++++-
>  include/linux/hmm.h                   | 54 ---------------------------
>  2 files changed, 53 insertions(+), 55 deletions(-)

I was thinking about doing exactly this too, but amdgpu started using
this already obsolete API in their latest driver :(

So, we now need to get both drivers to move to the modern API.

Jason