Andrew Morton
2021-May-24 22:11 UTC
[Nouveau] [PATCH v9 07/10] mm: Device exclusive memory access
On Mon, 24 May 2021 23:27:22 +1000 Alistair Popple <apopple at nvidia.com> wrote:

> Some devices require exclusive write access to shared virtual
> memory (SVM) ranges to perform atomic operations on that memory. This
> requires CPU page tables to be updated to deny access whilst atomic
> operations are occurring.
>
> In order to do this, introduce a new swap entry
> type (SWP_DEVICE_EXCLUSIVE). When an SVM range needs to be marked for
> exclusive access by a device, all page table mappings for the
> particular range are replaced with device exclusive swap entries. This
> causes any CPU access to the page to result in a fault.
>
> Faults are resolved by replacing the faulting entry with the original
> mapping. This results in MMU notifiers being called, which a driver
> uses to update access permissions, such as revoking atomic access.
> After the notifiers have been called the device will no longer have
> exclusive access to the region.
>
> Walking of the page tables to find the target pages is handled by
> get_user_pages() rather than a direct page table walk. A direct page
> table walk similar to what migrate_vma_collect()/unmap() does could
> also have been utilised. However, this resulted in more code similar
> in functionality to what get_user_pages() provides, as page faulting
> is required to make the PTEs present and to break COW.
>
> ...
>
>  Documentation/vm/hmm.rst     |  17 ++++
>  include/linux/mmu_notifier.h |   6 ++
>  include/linux/rmap.h         |   4 +
>  include/linux/swap.h         |   7 +-
>  include/linux/swapops.h      |  44 ++++++++-
>  mm/hmm.c                     |   5 +
>  mm/memory.c                  | 128 +++++++++++++++++++++++-
>  mm/mprotect.c                |   8 ++
>  mm/page_vma_mapped.c         |   9 +-
>  mm/rmap.c                    | 186 +++++++++++++++++++++++++++++++++++
>  10 files changed, 405 insertions(+), 9 deletions(-)
>

This is quite a lot of code added to core MM for a single driver.

Is there any expectation that other drivers will use this code?

Is there a way of reducing the impact (code size, at least) for systems
which don't need this code?

How beneficial is this code to nouveau users? I see that it permits a
part of OpenCL to be implemented, but how useful/important is this in
the real world?

Thanks.
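To make the described flow concrete, here is a rough sketch (not taken
from the patch itself) of how a driver might grab exclusive access to a
single page before issuing a device atomic. make_device_exclusive_range()
is the entry point the series adds as I read it, so treat the exact
signature as an assumption; program_device_pte() is a purely
hypothetical driver helper.

#include <linux/mm.h>
#include <linux/rmap.h>		/* make_device_exclusive_range() */

static int driver_start_atomic(struct mm_struct *mm, unsigned long addr,
			       void *owner)
{
	struct page *page;
	int npages;

	mmap_read_lock(mm);
	/*
	 * Replace the CPU PTE for [addr, addr + PAGE_SIZE) with a
	 * device exclusive swap entry; any CPU access from here on
	 * faults until the original mapping is restored.
	 */
	npages = make_device_exclusive_range(mm, addr, addr + PAGE_SIZE,
					     &page, owner);
	mmap_read_unlock(mm);
	if (npages != 1)
		return npages < 0 ? npages : -EBUSY;

	/*
	 * Map the page into the device with atomic access enabled. A
	 * later CPU fault restores the original mapping and fires an
	 * MMU notifier, at which point the driver must tear this
	 * device mapping down again.
	 */
	program_device_pte(owner, addr, page);	/* hypothetical */

	put_page(page);
	return 0;
}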
John Hubbard
2021-May-25 01:31 UTC
[Nouveau] [PATCH v9 07/10] mm: Device exclusive memory access
On 5/24/21 3:11 PM, Andrew Morton wrote:
>> ...
>>
>>  Documentation/vm/hmm.rst     |  17 ++++
>>  include/linux/mmu_notifier.h |   6 ++
>>  include/linux/rmap.h         |   4 +
>>  include/linux/swap.h         |   7 +-
>>  include/linux/swapops.h      |  44 ++++++++-
>>  mm/hmm.c                     |   5 +
>>  mm/memory.c                  | 128 +++++++++++++++++++++++-
>>  mm/mprotect.c                |   8 ++
>>  mm/page_vma_mapped.c         |   9 +-
>>  mm/rmap.c                    | 186 +++++++++++++++++++++++++++++++++++
>>  10 files changed, 405 insertions(+), 9 deletions(-)
>>
>
> This is quite a lot of code added to core MM for a single driver.
>
> Is there any expectation that other drivers will use this code?

Yes! This should work for GPUs (and potentially, other devices) that
support OpenCL SVM atomic accesses on the device. I haven't looked into
how amdgpu works in any detail, but that's certainly at the top of the
list of likely additional callers.

>
> Is there a way of reducing the impact (code size, at least) for systems
> which don't need this code?

I'll leave this question to others for the moment, in order to answer
the "do we need it at all" points.

>
> How beneficial is this code to nouveau users? I see that it permits a
> part of OpenCL to be implemented, but how useful/important is this in
> the real world?
>

So this is interesting. Right now, OpenCL support in Nouveau is rather
new, so the impact is probably not huge yet. However, we've built up
enough experience with CUDA and OpenCL to learn that atomic operations,
as part of the user-space programming model, are a super big deal.
Atomic operations are so useful and important that I'd expect many
OpenCL SVM users to be uninterested in programming models that lack
atomic operations for GPU compute programs.

Again, this doesn't rule out future, non-GPU accelerator devices that
may come along. Atomic ops are just a really important piece of
high-end multi-threaded programming, it turns out.

So this is the beginning of support for an important building block for
general-purpose programming on devices that have GPU-like memory models.

thanks,
--
John Hubbard
NVIDIA
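The point about atomics being central to the programming model is easy
to demonstrate with a minimal user-space sketch like the one below. The
second thread merely stands in for a GPU compute kernel performing the
same atomic adds over an SVM allocation; that device-side traffic is
exactly what needs the exclusive-access machinery from this series on
hardware whose atomics are not coherent with the CPU's.

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

#define ITERS 1000000

static _Atomic long counter;

/* On real hardware this loop would run as a GPU compute kernel over
 * an SVM allocation; a plain thread stands in for it here. */
static void *worker(void *arg)
{
	for (int i = 0; i < ITERS; i++)
		atomic_fetch_add(&counter, 1);
	return NULL;
}

int main(void)
{
	pthread_t cpu, gpu;

	pthread_create(&cpu, NULL, worker, NULL);
	pthread_create(&gpu, NULL, worker, NULL);
	pthread_join(cpu, NULL);
	pthread_join(gpu, NULL);

	/* With working atomics, no increments are lost. */
	long v = atomic_load(&counter);
	printf("counter = %ld (expected %d)\n", v, 2 * ITERS);
	return v == 2 * ITERS ? 0 : 1;
}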
Balbir Singh
2021-May-25 11:51 UTC
[Nouveau] [PATCH v9 07/10] mm: Device exclusive memory access
On Mon, May 24, 2021 at 03:11:57PM -0700, Andrew Morton wrote:
> On Mon, 24 May 2021 23:27:22 +1000 Alistair Popple <apopple at nvidia.com> wrote:
>
> > Some devices require exclusive write access to shared virtual
> > memory (SVM) ranges to perform atomic operations on that memory. This
> > requires CPU page tables to be updated to deny access whilst atomic
> > operations are occurring.
> >
> > In order to do this, introduce a new swap entry
> > type (SWP_DEVICE_EXCLUSIVE). When an SVM range needs to be marked for
> > exclusive access by a device, all page table mappings for the
> > particular range are replaced with device exclusive swap entries. This
> > causes any CPU access to the page to result in a fault.
> >
> > Faults are resolved by replacing the faulting entry with the original
> > mapping. This results in MMU notifiers being called, which a driver
> > uses to update access permissions, such as revoking atomic access.
> > After the notifiers have been called the device will no longer have
> > exclusive access to the region.
> >
> > Walking of the page tables to find the target pages is handled by
> > get_user_pages() rather than a direct page table walk. A direct page
> > table walk similar to what migrate_vma_collect()/unmap() does could
> > also have been utilised. However, this resulted in more code similar
> > in functionality to what get_user_pages() provides, as page faulting
> > is required to make the PTEs present and to break COW.
> >
> > ...
> >
> >  Documentation/vm/hmm.rst     |  17 ++++
> >  include/linux/mmu_notifier.h |   6 ++
> >  include/linux/rmap.h         |   4 +
> >  include/linux/swap.h         |   7 +-
> >  include/linux/swapops.h      |  44 ++++++++-
> >  mm/hmm.c                     |   5 +
> >  mm/memory.c                  | 128 +++++++++++++++++++++++-
> >  mm/mprotect.c                |   8 ++
> >  mm/page_vma_mapped.c         |   9 +-
> >  mm/rmap.c                    | 186 +++++++++++++++++++++++++++++++++++
> >  10 files changed, 405 insertions(+), 9 deletions(-)
> >
>
> This is quite a lot of code added to core MM for a single driver.
>
> Is there any expectation that other drivers will use this code?
>
> Is there a way of reducing the impact (code size, at least) for systems
> which don't need this code?
>
> How beneficial is this code to nouveau users? I see that it permits a
> part of OpenCL to be implemented, but how useful/important is this in
> the real world?

That is a very good question! I've not reviewed the code, but a sample
program with the described use case would make things easy to parse. I
suspect that is not easy to build at the moment?

I wonder how we co-ordinate all the work the mm is doing (page
migration, reclaim) with device exclusive access? Do we have any
numbers for the worst-case page fault latency when something is marked
for exclusive access?

I presume for now this is anonymous memory only? SWP_DEVICE_EXCLUSIVE
would only impact the address space of programs using the GPU.

Should the exclusively marked range live on the unevictable list and be
recycled back to active/inactive to account for the fact that

1. it is not reclaimable, and reclaim will only hurt via page faults?
2. it ages the page correctly, or at least allows for that possibility
   when the page is used by the GPU?

Balbir Singh.
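On the co-ordination question, the series appears to lean on the
existing MMU notifier machinery: any invalidation of the range (a CPU
fault restoring the original PTE, but equally migration or reclaim)
reaches the driver's invalidate callback, which must tear down the
device's atomic mapping before core MM proceeds. A sketch is below; the
mmu_interval_notifier API is existing kernel infrastructure, while
struct driver_bo and driver_revoke_atomic_mapping() are hypothetical.

#include <linux/mmu_notifier.h>

struct driver_bo {			/* hypothetical driver state */
	struct mmu_interval_notifier notifier;
	/* device page table handles, etc. */
};

static bool driver_invalidate(struct mmu_interval_notifier *mni,
			      const struct mmu_notifier_range *range,
			      unsigned long cur_seq)
{
	struct driver_bo *bo = container_of(mni, struct driver_bo,
					    notifier);

	if (!mmu_notifier_range_blockable(range))
		return false;

	mmu_interval_set_seq(mni, cur_seq);

	/*
	 * Drop the device's exclusive/atomic mapping for the range.
	 * After this returns, the CPU regains access and the device
	 * no longer has exclusive rights; this is also where
	 * migration and reclaim of the range get co-ordinated with
	 * the device.
	 */
	driver_revoke_atomic_mapping(bo, range->start, range->end);
	return true;
}

static const struct mmu_interval_notifier_ops driver_mni_ops = {
	.invalidate = driver_invalidate,
};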