Balbir Singh
2021-Jun-02 08:50 UTC
[Nouveau] [PATCH v9 07/10] mm: Device exclusive memory access
On Wed, May 26, 2021 at 12:17:18AM -0700, John Hubbard wrote:
> On 5/25/21 4:51 AM, Balbir Singh wrote:
> ...
> > > How beneficial is this code to nouveau users? I see that it permits a
> > > part of OpenCL to be implemented, but how useful/important is this in
> > > the real world?
> >
> > That is a very good question! I've not reviewed the code, but a sample
> > program with the described use case would make things easy to parse.
> > I suspect that is not easy to build at the moment?
> >
>
> The cover letter says this:
>
> This has been tested with upstream Mesa 21.1.0 and a simple OpenCL program
> which checks that GPU atomic accesses to system memory are atomic. Without
> this series the test fails as there is no way of write-protecting the page
> mapping which results in the device clobbering CPU writes. For reference
> the test is available at https://ozlabs.org/~apopple/opencl_svm_atomics/
>
> Further testing has been performed by adding support for testing exclusive
> access to the hmm-tests kselftests.
>
> ...so that seems to cover the "sample program" request, at least.

Thanks, I'll take a look.

> > I wonder how we co-ordinate all the work the mm is doing, page migration,
> > reclaim with device exclusive access? Do we have any numbers for the worst
> > case page fault latency when something is marked away for exclusive access?
>
> CPU page fault latency is approximately "terrible", if a page is resident on
> the GPU. We have to spin up a DMA engine on the GPU and have it copy the page
> over the PCIe bus, after all.
>
> > I presume for now this is anonymous memory only? SWP_DEVICE_EXCLUSIVE would
>
> Yes, for now.
>
> > only impact the address space of programs using the GPU. Should the exclusively
> > marked range live in the unreclaimable list and recycled back to active/inactive
> > to account for the fact that
> >
> > 1. It is not reclaimable and reclaim will only hurt via page faults?
> > 2. It ages the page correctly or at least allows for that possibility when the
> >    page is used by the GPU.
>
> I'm not sure that that is *necessarily* something we can conclude. It depends upon
> access patterns of each program. For example, a "reduction" parallel program sends
> over lots of data to the GPU, and only a tiny bit of (reduced!) data comes back
> to the CPU. In that case, freeing the physical page on the CPU is actually the
> best decision for the OS to make (if the OS is sufficiently prescient).
>

With a shared device or a device exclusive range, it would be good to get the
device usage pattern and update the mm with that knowledge, so that the LRU can
be better maintained. With your comment you seem to suggest that a page used by
the GPU might be a good candidate for reclaim, because the CPU's view of the
page's age does not account for use by the device (are GPU workloads
access-once-and-discard?).

Balbir Singh.
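P.S. For anyone reading along without following the link, I'd guess the test
is along these lines (my own untested sketch, not the actual program at the
URL above):

	/*
	 * Hypothetical OpenCL 2.0 fine-grained SVM atomics test: the GPU
	 * and the CPU increment the same counter in system memory.  If
	 * the kernel cannot write-protect the CPU's page mapping while
	 * the device performs its atomic accesses, increments are lost
	 * and the final count comes up short.
	 */
	__kernel void gpu_inc(__global volatile atomic_int *counter, int iters)
	{
		for (int i = 0; i < iters; i++)
			atomic_fetch_add_explicit(counter, 1,
						  memory_order_relaxed,
						  memory_scope_all_svm_devices);
	}

with the host doing the same atomic increments on the shared SVM buffer from
a CPU thread while the kernel runs, then checking that the final value equals
the total number of increments issued from both sides.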
Peter Xu
2021-Jun-02 14:37 UTC
[Nouveau] [PATCH v9 07/10] mm: Device exclusive memory access
On Wed, Jun 02, 2021 at 06:50:37PM +1000, Balbir Singh wrote:
> On Wed, May 26, 2021 at 12:17:18AM -0700, John Hubbard wrote:
> > On 5/25/21 4:51 AM, Balbir Singh wrote:
> > ...
> > > > How beneficial is this code to nouveau users? I see that it permits a
> > > > part of OpenCL to be implemented, but how useful/important is this in
> > > > the real world?
> > >
> > > That is a very good question! I've not reviewed the code, but a sample
> > > program with the described use case would make things easy to parse.
> > > I suspect that is not easy to build at the moment?
> > >
> >
> > The cover letter says this:
> >
> > This has been tested with upstream Mesa 21.1.0 and a simple OpenCL program
> > which checks that GPU atomic accesses to system memory are atomic. Without
> > this series the test fails as there is no way of write-protecting the page
> > mapping which results in the device clobbering CPU writes. For reference
> > the test is available at https://ozlabs.org/~apopple/opencl_svm_atomics/
> >
> > Further testing has been performed by adding support for testing exclusive
> > access to the hmm-tests kselftests.
> >
> > ...so that seems to cover the "sample program" request, at least.
>
> Thanks, I'll take a look.
>
> > > I wonder how we co-ordinate all the work the mm is doing, page migration,
> > > reclaim with device exclusive access? Do we have any numbers for the worst
> > > case page fault latency when something is marked away for exclusive access?
> >
> > CPU page fault latency is approximately "terrible", if a page is resident on
> > the GPU. We have to spin up a DMA engine on the GPU and have it copy the page
> > over the PCIe bus, after all.
> >
> > > I presume for now this is anonymous memory only? SWP_DEVICE_EXCLUSIVE would
> >
> > Yes, for now.
> >
> > > only impact the address space of programs using the GPU. Should the exclusively
> > > marked range live in the unreclaimable list and recycled back to active/inactive
> > > to account for the fact that
> > >
> > > 1. It is not reclaimable and reclaim will only hurt via page faults?
> > > 2. It ages the page correctly or at least allows for that possibility when the
> > >    page is used by the GPU.
> >
> > I'm not sure that that is *necessarily* something we can conclude. It depends upon
> > access patterns of each program. For example, a "reduction" parallel program sends
> > over lots of data to the GPU, and only a tiny bit of (reduced!) data comes back
> > to the CPU. In that case, freeing the physical page on the CPU is actually the
> > best decision for the OS to make (if the OS is sufficiently prescient).
> >
>
> With a shared device or a device exclusive range, it would be good to get the
> device usage pattern and update the mm with that knowledge, so that the LRU can
> be better maintained. With your comment you seem to suggest that a page used by
> the GPU might be a good candidate for reclaim, because the CPU's view of the
> page's age does not account for use by the device (are GPU workloads
> access-once-and-discard?).

Hmm, besides the aging info, this reminded me: do we need to isolate the page
from the LRU too when marking it for device exclusive access?

Afaict the current patch doesn't do that, so I think the page is still
reclaimable. If we still had the rmap we'd get an mmu notifier CLEAR event
when reclaim unmaps that special pte, so the device driver would be able to
drop its ownership; however we dropped the rmap when marking the page
exclusive. Now I don't know whether, and how, it'll work if page reclaim
runs while the page is exclusively owned but not isolated from the LRU..

-- 
Peter Xu
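P.S. To make the isolation idea concrete, I'm thinking of roughly this in the
path that installs the exclusive entries (untested sketch; exact call site
aside, isolate_lru_page()/putback_lru_page() are the existing helpers I have
in mind):

	/*
	 * Page is locked and its ptes have just been replaced with
	 * device exclusive entries.  Take it off the LRU so that
	 * reclaim won't try to unmap it while the device owns it.
	 */
	if (!isolate_lru_page(page)) {
		/*
		 * Success: page is no longer on any LRU list.  The
		 * matching putback_lru_page() would go wherever the
		 * exclusive entry is torn down again (CPU fault or
		 * unmap), making the page reclaimable once more.
		 */
	}

Whether the extra LRU churn is worth it is of course the question.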
John Hubbard
2021-Jun-03 08:37 UTC
[Nouveau] [PATCH v9 07/10] mm: Device exclusive memory access
On 6/2/21 1:50 AM, Balbir Singh wrote:
...
>>> only impact the address space of programs using the GPU. Should the exclusively
>>> marked range live in the unreclaimable list and recycled back to active/inactive
>>> to account for the fact that
>>>
>>> 1. It is not reclaimable and reclaim will only hurt via page faults?
>>> 2. It ages the page correctly or at least allows for that possibility when the
>>>    page is used by the GPU.
>>
>> I'm not sure that that is *necessarily* something we can conclude. It depends upon
>> access patterns of each program. For example, a "reduction" parallel program sends
>> over lots of data to the GPU, and only a tiny bit of (reduced!) data comes back
>> to the CPU. In that case, freeing the physical page on the CPU is actually the
>> best decision for the OS to make (if the OS is sufficiently prescient).
>>
>
> With a shared device or a device exclusive range, it would be good to get the
> device usage pattern and update the mm with that knowledge, so that the LRU can
> be better

Integrating a GPU (or "device") processor and its mm behavior with the Linux
kernel is always an interesting concept. Certainly worth exploring, although
it's probably not a small project by any means.

> maintained. With your comment you seem to suggest that a page used by the GPU
> might be a good candidate for reclaim, because the CPU's view of the page's
> age does not account for use by the device (are GPU workloads
> access-once-and-discard?).
>

Well, that's a little too narrow an interpretation. The GPU is a fairly
general purpose processor, and so it has all kinds of workloads. I'm trying
to discourage any hopes that one can know, in advance, precisely how the
GPU's pages need to be managed. It's similar to the CPU in that regard. My
example was just one out of a vast pool of possible behaviors.

thanks,
-- 
John Hubbard
NVIDIA