Yonatan Maman
2024-Dec-01 10:36 UTC
[RFC 0/5] GPU Direct RDMA (P2P DMA) for Device Private Pages
From: Yonatan Maman <Ymaman at Nvidia.com>

Based on: Provide a new two step DMA mapping API patchset
https://lore.kernel.org/kvm/20241114170247.GA5813 at lst.de/T/#t

This patch series aims to enable Peer-to-Peer (P2P) DMA access in
GPU-centric applications that utilize RDMA and device private pages. This
enhancement reduces data transfer overhead by allowing the GPU to expose
device private page data directly to devices such as NICs, eliminating the
need to traverse system RAM, which is the native method for exposing
device private page data.

To fully support Peer-to-Peer for device private pages, the following
changes are proposed:

`Memory Management (MM)`
 * Leverage struct dev_pagemap_ops to support P2P page operations: this
   modification ensures that the GPU can directly map device private pages
   for P2P DMA.
 * Utilize hmm_range_fault to support P2P connections for device private
   pages (instead of a page fault).

`IB Drivers`
Add a TRY_P2P_REQ flag for the hmm_range_fault call: this flag indicates
the need for P2P mapping, enabling IB drivers to handle P2P DMA requests
efficiently.

`Nouveau driver`
Add support for the Nouveau p2p_page callback function: this update
integrates P2P DMA support into the Nouveau driver, allowing it to handle
P2P page operations seamlessly.

`MLX5 Driver`
Utilize the NIC Address Translation Service (ATS) for ODP memory, to
optimize P2P DMA for device private pages. Also, when P2P DMA mapping
fails due to an inaccessible host bridge, the system falls back to
standard DMA, which uses host memory, for the affected PFNs.

Yonatan Maman (5):
  mm/hmm: HMM API to enable P2P DMA for device private pages
  nouveau/dmem: HMM P2P DMA for private dev pages
  IB/core: P2P DMA for device private pages
  RDMA/mlx5: Add fallback for P2P DMA errors
  RDMA/mlx5: Enabling ATS for ODP memory

 drivers/gpu/drm/nouveau/nouveau_dmem.c | 110 +++++++++++++++++++++++++
 drivers/infiniband/core/umem_odp.c     |   4 +
 drivers/infiniband/hw/mlx5/mlx5_ib.h   |   6 +-
 drivers/infiniband/hw/mlx5/odp.c       |  24 +++++-
 include/linux/hmm.h                    |   3 +-
 include/linux/memremap.h               |   8 ++
 mm/hmm.c                               |  57 ++++++++++---
 7 files changed, 195 insertions(+), 17 deletions(-)

--
2.34.1
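For orientation, the intended caller-side flow of the series can be sketched
as follows. This is a minimal, hypothetical example rather than code from the
patches; it uses the HMM_PFN_ALLOW_P2P flag introduced in patch 1, and
my_map_range() is a made-up helper.

/*
 * Illustrative caller-side sketch (not part of this series): fault a
 * virtual range and allow device private pages to be returned as P2P
 * PFNs instead of being migrated to system RAM.
 */
#include <linux/hmm.h>
#include <linux/mm.h>
#include <linux/mmu_notifier.h>

static int my_map_range(struct mmu_interval_notifier *notifier,
			unsigned long start, unsigned long end,
			unsigned long *pfns)
{
	int ret;
	struct hmm_range range = {
		.notifier = notifier,
		.start = start,
		.end = end,
		.hmm_pfns = pfns,
		/* Fault pages in; try P2P for device private pages. */
		.default_flags = HMM_PFN_REQ_FAULT | HMM_PFN_ALLOW_P2P,
		.pfn_flags_mask = HMM_PFN_ALLOW_P2P,
	};

	range.notifier_seq = mmu_interval_read_begin(notifier);
	mmap_read_lock(notifier->mm);
	ret = hmm_range_fault(&range);	/* caller must retry on -EBUSY */
	mmap_read_unlock(notifier->mm);
	return ret;
}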
Yonatan Maman
2024-Dec-01 10:36 UTC
[RFC 1/5] mm/hmm: HMM API to enable P2P DMA for device private pages
From: Yonatan Maman <Ymaman at Nvidia.com>

hmm_range_fault() by default triggers a page fault on device private pages
when the HMM_PFN_REQ_FAULT flag is set, migrating them to RAM. In some
cases, such as with RDMA devices, the migration overhead between the
device (e.g., GPU) and the CPU, and vice versa, significantly degrades
performance. Thus, enabling Peer-to-Peer (P2P) DMA access for device
private pages can be crucial for minimizing data transfer overhead.

Introduce an API to support P2P DMA for device private pages, which
includes:

 - Leveraging struct dev_pagemap_ops for a P2P page callback. The callback
   maps the page for P2P DMA and returns the corresponding PCI_P2P page.
 - Utilizing hmm_range_fault for initializing P2P DMA.

The API also adds the HMM_PFN_REQ_TRY_P2P flag option for the
hmm_range_fault caller to initialize P2P. If set, hmm_range_fault first
attempts to initialize the P2P connection, if the owner device supports
P2P, using p2p_page. On failure or lack of support, hmm_range_fault
continues with the regular flow of migrating the page to RAM.

This change does not affect previous use-cases of hmm_range_fault, because
both the caller and the page owner must explicitly request and support P2P
in order to initialize a P2P connection.

Signed-off-by: Yonatan Maman <Ymaman at Nvidia.com>
Signed-off-by: Gal Shalom <GalShalom at Nvidia.com>
---
 include/linux/hmm.h      |  3 ++-
 include/linux/memremap.h |  8 ++++++
 mm/hmm.c                 | 57 +++++++++++++++++++++++++++++++++-------
 3 files changed, 57 insertions(+), 11 deletions(-)

diff --git a/include/linux/hmm.h b/include/linux/hmm.h
index 62980ca8f3c5..017f22cef893 100644
--- a/include/linux/hmm.h
+++ b/include/linux/hmm.h
@@ -26,6 +26,7 @@ struct mmu_interval_notifier;
  * HMM_PFN_DMA_MAPPED - Flag preserved on input-to-output transformation
  *	to mark that page is already DMA mapped
+ * HMM_PFN_ALLOW_P2P - Allow returning PCI P2PDMA page
  *
  * On input:
  * 0 - Return the current state of the page, do not fault it.
@@ -41,7 +42,7 @@ enum hmm_pfn_flags {
 	HMM_PFN_ERROR = 1UL << (BITS_PER_LONG - 3),
 
 	/* Sticky flag, carried from Input to Output */
+	HMM_PFN_ALLOW_P2P = 1UL << (BITS_PER_LONG - 6),
 	HMM_PFN_DMA_MAPPED = 1UL << (BITS_PER_LONG - 7),
 
 	HMM_PFN_ORDER_SHIFT = (BITS_PER_LONG - 8),
diff --git a/include/linux/memremap.h b/include/linux/memremap.h
index 3f7143ade32c..cdf5189be5e9 100644
--- a/include/linux/memremap.h
+++ b/include/linux/memremap.h
@@ -89,6 +89,14 @@ struct dev_pagemap_ops {
 	 */
 	vm_fault_t (*migrate_to_ram)(struct vm_fault *vmf);
 
+	/*
+	 * Used for private (un-addressable) device memory only. Return a
+	 * corresponding PFN for a page that can be mapped to device
+	 * (e.g using dma_map_page)
+	 */
+	int (*get_dma_pfn_for_device)(struct page *private_page,
+				      unsigned long *dma_pfn);
+
 	/*
 	 * Handle the memory failure happens on a range of pfns. Notify the
 	 * processes who are using these pfns, and try to recover the data on
diff --git a/mm/hmm.c b/mm/hmm.c
index a852d8337c73..1c080bc00ee8 100644
--- a/mm/hmm.c
+++ b/mm/hmm.c
@@ -226,6 +226,51 @@ static inline unsigned long pte_to_hmm_pfn_flags(struct hmm_range *range,
 	return pte_write(pte) ? (HMM_PFN_VALID | HMM_PFN_WRITE) : HMM_PFN_VALID;
 }
 
+static bool hmm_handle_device_private(struct hmm_range *range,
+				      unsigned long pfn_req_flags,
+				      swp_entry_t entry,
+				      unsigned long *hmm_pfn)
+{
+	struct page *page = pfn_swap_entry_to_page(entry);
+	struct dev_pagemap *pgmap = page->pgmap;
+	int ret;
+
+	pfn_req_flags &= range->pfn_flags_mask;
+	pfn_req_flags |= range->default_flags;
+
+	/*
+	 * Don't fault in device private pages owned by the caller,
+	 * just report the PFN.
+	 */
+	if (pgmap->owner == range->dev_private_owner) {
+		*hmm_pfn = swp_offset_pfn(entry);
+		goto found;
+	}
+
+	/*
+	 * P2P for supported pages, and according to caller request
+	 * translate the private page to the match P2P page if it fails
+	 * continue with the regular flow
+	 */
+	if (pfn_req_flags & HMM_PFN_ALLOW_P2P &&
+	    pgmap->ops->get_dma_pfn_for_device) {
+		ret = pgmap->ops->get_dma_pfn_for_device(page, hmm_pfn);
+		if (!ret) {
+			*hmm_pfn |= HMM_PFN_ALLOW_P2P;
+			goto found;
+		}
+	}
+
+	return false;
+
+found:
+	*hmm_pfn |= HMM_PFN_VALID;
+	if (is_writable_device_private_entry(entry))
+		*hmm_pfn |= HMM_PFN_WRITE;
+	return true;
+}
+
 static int hmm_vma_handle_pte(struct mm_walk *walk, unsigned long addr,
 			      unsigned long end, pmd_t *pmdp, pte_t *ptep,
 			      unsigned long *hmm_pfn)
@@ -249,17 +294,9 @@ static int hmm_vma_handle_pte(struct mm_walk *walk, unsigned long addr,
 	if (!pte_present(pte)) {
 		swp_entry_t entry = pte_to_swp_entry(pte);
 
-		/*
-		 * Don't fault in device private pages owned by the caller,
-		 * just report the PFN.
-		 */
 		if (is_device_private_entry(entry) &&
-		    pfn_swap_entry_to_page(entry)->pgmap->owner ==
-		    range->dev_private_owner) {
-			cpu_flags = HMM_PFN_VALID;
-			if (is_writable_device_private_entry(entry))
-				cpu_flags |= HMM_PFN_WRITE;
-			*hmm_pfn = (*hmm_pfn & HMM_PFN_DMA_MAPPED) | swp_offset_pfn(entry) | cpu_flags;
+		    hmm_handle_device_private(range, pfn_req_flags, entry, hmm_pfn)) {
+			*hmm_pfn = *hmm_pfn & HMM_PFN_DMA_MAPPED;
 			return 0;
 		}
--
2.34.1
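For illustration, a device driver would plug into the proposed API roughly as
follows. This is a hypothetical sketch, not code from the patch; all my_*()
names are placeholders for driver-specific logic.

/*
 * Illustrative provider-side sketch: wire the proposed callback into a
 * driver's dev_pagemap_ops.
 */
#include <linux/memremap.h>
#include <linux/mm_types.h>

/* Driver-specific translation from a device private page to the PFN of
 * the matching PCI P2PDMA (BAR) page; stubbed out here. */
static int my_private_page_to_p2p_pfn(struct page *page, unsigned long *pfn)
{
	return -EOPNOTSUPP;
}

static int my_get_dma_pfn_for_device(struct page *private_page,
				     unsigned long *dma_pfn)
{
	/* On success, *dma_pfn refers to memory a peer can DMA map; any
	 * error makes hmm_range_fault() fall back to migrating to RAM. */
	return my_private_page_to_p2p_pfn(private_page, dma_pfn);
}

static vm_fault_t my_migrate_to_ram(struct vm_fault *vmf)
{
	return VM_FAULT_SIGBUS;		/* real driver: migrate data to RAM */
}

static void my_page_free(struct page *page)
{
}

static const struct dev_pagemap_ops my_pagemap_ops = {
	.page_free		= my_page_free,
	.migrate_to_ram		= my_migrate_to_ram,
	.get_dma_pfn_for_device	= my_get_dma_pfn_for_device,
};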
Yonatan Maman
2024-Dec-01 10:36 UTC
[RFC 2/5] nouveau/dmem: HMM P2P DMA for private dev pages
From: Yonatan Maman <Ymaman at Nvidia.com>

Enabling Peer-to-Peer DMA (P2P DMA) access in GPU-centric applications is
crucial for minimizing data transfer overhead (e.g., for the RDMA
use-case). This change enables that capability for Nouveau over HMM device
private pages. P2P DMA for private device pages allows the GPU to directly
exchange data with other devices (e.g., NICs) without needing to traverse
system RAM.

To fully support Peer-to-Peer for device private pages, the following
changes are made:

- Introduce struct nouveau_dmem_hmm_p2p within struct nouveau_dmem to
  manage BAR1 PCI P2P memory. p2p_start_addr holds the virtual address
  allocated with pci_alloc_p2pmem(), and p2p_size represents the allocated
  size of the PCI P2P memory.

- nouveau_dmem_init - Ensure BAR1 accessibility and assign struct pages
  (PCI_P2P_PAGE) for all BAR1 pages. Introduce
  nouveau_alloc_bar1_pci_p2p_mem in nouveau_dmem to expose BAR1 for use as
  P2P memory via pci_p2pdma_add_resource, and implement static allocation
  and assignment of struct pages using pci_alloc_p2pmem. This function is
  called from nouveau_dmem_init; failure triggers a warning message
  instead of a driver failure.

- nouveau_dmem_fini - Ensure BAR1 PCI P2P memory is properly destroyed
  during driver cleanup. Introduce nouveau_destroy_bar1_pci_p2p_mem to
  handle freeing of the PCI P2P memory associated with Nouveau BAR1, and
  modify nouveau_dmem_fini to call it.

- Implement the Nouveau `p2p_page` callback function - Implement BAR1
  mapping for the chunk using `io_mem_reserve` if no mapping exists.
  Retrieve the pre-allocated P2P virtual address and size from `hmm_p2p`,
  calculate the page offset within BAR1, and return the corresponding P2P
  page.

Signed-off-by: Yonatan Maman <Ymaman at Nvidia.com>
Reviewed-by: Gal Shalom <GalShalom at Nvidia.com>
---
 drivers/gpu/drm/nouveau/nouveau_dmem.c | 110 +++++++++++++++++++++++++
 1 file changed, 110 insertions(+)

diff --git a/drivers/gpu/drm/nouveau/nouveau_dmem.c b/drivers/gpu/drm/nouveau/nouveau_dmem.c
index 1a072568cef6..003e74895ff4 100644
--- a/drivers/gpu/drm/nouveau/nouveau_dmem.c
+++ b/drivers/gpu/drm/nouveau/nouveau_dmem.c
@@ -40,6 +40,9 @@
 #include <linux/hmm.h>
 #include <linux/memremap.h>
 #include <linux/migrate.h>
+#include <linux/pci-p2pdma.h>
+#include <nvkm/core/pci.h>
 
 /*
  * FIXME: this is ugly right now we are using TTM to allocate vram and we pin
@@ -77,9 +80,15 @@ struct nouveau_dmem_migrate {
 	struct nouveau_channel *chan;
 };
 
+struct nouveau_dmem_hmm_p2p {
+	size_t p2p_size;
+	void *p2p_start_addr;
+};
+
 struct nouveau_dmem {
 	struct nouveau_drm *drm;
 	struct nouveau_dmem_migrate migrate;
+	struct nouveau_dmem_hmm_p2p hmm_p2p;
 	struct list_head chunks;
 	struct mutex mutex;
 	struct page *free_pages;
@@ -158,6 +167,60 @@ static int nouveau_dmem_copy_one(struct nouveau_drm *drm, struct page *spage,
 	return 0;
 }
 
+static int nouveau_dmem_bar1_mapping(struct nouveau_bo *nvbo,
+				     unsigned long long *bus_addr)
+{
+	int ret;
+	struct ttm_resource *mem = nvbo->bo.resource;
+
+	if (mem->bus.offset) {
+		*bus_addr = mem->bus.offset;
+		return 0;
+	}
+
+	if (PFN_UP(nvbo->bo.base.size) > PFN_UP(nvbo->bo.resource->size))
+		return -EINVAL;
+
+	ret = ttm_bo_reserve(&nvbo->bo, false, false, NULL);
+	if (ret)
+		return ret;
+
+	ret = nvbo->bo.bdev->funcs->io_mem_reserve(nvbo->bo.bdev, mem);
+	*bus_addr = mem->bus.offset;
+
+	ttm_bo_unreserve(&nvbo->bo);
+	return ret;
+}
+
+static int nouveau_dmem_get_dma_pfn(struct page *private_page,
+				    unsigned long *dma_pfn)
+{
+	int ret;
+	unsigned long long offset_in_chunk;
+	unsigned long long chunk_bus_addr;
+	unsigned long long bar1_base_addr;
+	struct nouveau_drm *drm = page_to_drm(private_page);
+	struct nouveau_bo *nvbo = nouveau_page_to_chunk(private_page)->bo;
+	struct nvkm_device *nv_device = nvxx_device(drm);
+	size_t p2p_size = drm->dmem->hmm_p2p.p2p_size;
+
+	bar1_base_addr = nv_device->func->resource_addr(nv_device, 1);
+	offset_in_chunk = (page_to_pfn(private_page) << PAGE_SHIFT) -
+			  nouveau_page_to_chunk(private_page)->pagemap.range.start;
+
+	ret = nouveau_dmem_bar1_mapping(nvbo, &chunk_bus_addr);
+	if (ret)
+		return ret;
+
+	*dma_pfn = chunk_bus_addr + offset_in_chunk;
+	if (!p2p_size || *dma_pfn > bar1_base_addr + p2p_size ||
+	    *dma_pfn < bar1_base_addr)
+		return -ENOMEM;
+
+	return 0;
+}
+
 static vm_fault_t nouveau_dmem_migrate_to_ram(struct vm_fault *vmf)
 {
 	struct nouveau_drm *drm = page_to_drm(vmf->page);
@@ -221,6 +284,7 @@ static vm_fault_t nouveau_dmem_migrate_to_ram(struct vm_fault *vmf)
 static const struct dev_pagemap_ops nouveau_dmem_pagemap_ops = {
 	.page_free		= nouveau_dmem_page_free,
 	.migrate_to_ram		= nouveau_dmem_migrate_to_ram,
+	.get_dma_pfn_for_device	= nouveau_dmem_get_dma_pfn,
 };
 
 static int
@@ -413,14 +477,31 @@ nouveau_dmem_evict_chunk(struct nouveau_dmem_chunk *chunk)
 	kvfree(dma_addrs);
 }
 
+static void nouveau_destroy_bar1_pci_p2p_mem(struct nouveau_drm *drm,
+					     struct pci_dev *pdev,
+					     void *p2p_start_addr,
+					     size_t p2p_size)
+{
+	if (p2p_size)
+		pci_free_p2pmem(pdev, p2p_start_addr, p2p_size);
+
+	NV_INFO(drm, "PCI P2P memory freed(%p)\n", p2p_start_addr);
+}
+
 void
 nouveau_dmem_fini(struct nouveau_drm *drm)
 {
 	struct nouveau_dmem_chunk *chunk, *tmp;
+	struct nvkm_device *nv_device = nvxx_device(drm);
 
 	if (drm->dmem == NULL)
 		return;
 
+	nouveau_destroy_bar1_pci_p2p_mem(drm,
+					 nv_device->func->pci(nv_device)->pdev,
+					 drm->dmem->hmm_p2p.p2p_start_addr,
+					 drm->dmem->hmm_p2p.p2p_size);
+
 	mutex_lock(&drm->dmem->mutex);
 
 	list_for_each_entry_safe(chunk, tmp, &drm->dmem->chunks, list) {
@@ -586,10 +667,28 @@ nouveau_dmem_migrate_init(struct nouveau_drm *drm)
 	return -ENODEV;
 }
 
+static int nouveau_alloc_bar1_pci_p2p_mem(struct nouveau_drm *drm,
+					  struct pci_dev *pdev, size_t size,
+					  void **pp2p_start_addr)
+{
+	int ret;
+
+	ret = pci_p2pdma_add_resource(pdev, 1, size, 0);
+	if (ret)
+		return ret;
+
+	*pp2p_start_addr = pci_alloc_p2pmem(pdev, size);
+
+	NV_INFO(drm, "PCI P2P memory allocated(%p)\n", *pp2p_start_addr);
+	return 0;
+}
+
 void
 nouveau_dmem_init(struct nouveau_drm *drm)
 {
 	int ret;
+	struct nvkm_device *nv_device = nvxx_device(drm);
+	size_t bar1_size;
 
 	/* This only make sense on PASCAL or newer */
 	if (drm->client.device.info.family < NV_DEVICE_INFO_V0_PASCAL)
@@ -610,6 +709,17 @@ nouveau_dmem_init(struct nouveau_drm *drm)
 		kfree(drm->dmem);
 		drm->dmem = NULL;
 	}
+
+	/* Expose BAR1 for HMM P2P Memory */
+	bar1_size = nv_device->func->resource_size(nv_device, 1);
+	ret = nouveau_alloc_bar1_pci_p2p_mem(drm,
+					     nv_device->func->pci(nv_device)->pdev,
+					     bar1_size,
+					     &drm->dmem->hmm_p2p.p2p_start_addr);
+	drm->dmem->hmm_p2p.p2p_size = (ret) ? 0 : bar1_size;
+	if (ret)
+		NV_WARN(drm,
+			"PCI P2P memory allocation failed, HMM P2P won't be supported\n");
 }
 
 static unsigned long nouveau_dmem_migrate_copy_one(struct nouveau_drm *drm,
--
2.34.1
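The BAR-exposure pattern used above can be summarized with a short sketch.
The helper name is made up and error handling is trimmed; it relies only on
the existing pci_p2pdma_add_resource() and pci_alloc_p2pmem() APIs.

/*
 * Illustrative sketch: publish BAR 1 as P2PDMA memory and take one
 * allocation covering it, so every BAR page gets a struct page that can
 * be used for peer-to-peer DMA.
 */
#include <linux/pci.h>
#include <linux/pci-p2pdma.h>

static void *my_expose_bar1_p2p(struct pci_dev *pdev, size_t *p2p_size)
{
	void *vaddr;
	size_t size = pci_resource_len(pdev, 1);

	*p2p_size = 0;
	if (pci_p2pdma_add_resource(pdev, 1, size, 0))
		return NULL;

	vaddr = pci_alloc_p2pmem(pdev, size);	/* release with pci_free_p2pmem() */
	if (vaddr)
		*p2p_size = size;
	return vaddr;
}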
Yonatan Maman
2024-Dec-01 10:36 UTC
[RFC 3/5] IB/core: P2P DMA for device private pages
From: Yonatan Maman <Ymaman at Nvidia.com>

Add a Peer-to-Peer (P2P) DMA request to the hmm_range_fault call,
utilizing the capabilities introduced in mm/hmm. By setting
range.default_flags to HMM_PFN_REQ_FAULT | HMM_PFN_REQ_TRY_P2P, HMM
attempts to initiate P2P DMA connections for device private pages
(instead of page fault handling).

This enhancement utilizes P2P DMA to reduce the performance overhead of
data migration between devices (e.g., GPU) and system memory, providing
performance benefits for GPU-centric applications that utilize RDMA and
device private pages.

Signed-off-by: Yonatan Maman <Ymaman at Nvidia.com>
Signed-off-by: Gal Shalom <GalShalom at Nvidia.com>
---
 drivers/infiniband/core/umem_odp.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/drivers/infiniband/core/umem_odp.c b/drivers/infiniband/core/umem_odp.c
index 51d518989914..4c2465b9bdda 100644
--- a/drivers/infiniband/core/umem_odp.c
+++ b/drivers/infiniband/core/umem_odp.c
@@ -332,6 +332,10 @@ int ib_umem_odp_map_dma_and_lock(struct ib_umem_odp *umem_odp, u64 user_virt,
 			range.default_flags |= HMM_PFN_REQ_WRITE;
 	}
 
+	if (access_mask & HMM_PFN_ALLOW_P2P)
+		range.default_flags |= HMM_PFN_ALLOW_P2P;
+
+	range.pfn_flags_mask = HMM_PFN_ALLOW_P2P;
 	range.hmm_pfns = &(umem_odp->map.pfn_list[pfn_start_idx]);
 	timeout = jiffies + msecs_to_jiffies(HMM_RANGE_DEFAULT_TIMEOUT);
 
--
2.34.1
Yonatan Maman
2024-Dec-01 10:36 UTC
[RFC 4/5] RDMA/mlx5: Add fallback for P2P DMA errors
From: Yonatan Maman <Ymaman at Nvidia.com>

Handle P2P DMA mapping errors when the transaction requires traversing an
inaccessible host bridge that is not in the allowlist:

 - In `populate_mtt`, if a P2P mapping fails, the `HMM_PFN_ALLOW_P2P` flag
   is cleared only for the PFNs that returned a mapping error.
 - In `pagefault_real_mr`, if a P2P mapping error occurs, the mapping is
   retried with the `HMM_PFN_ALLOW_P2P` flag only for the PFNs that didn't
   fail, ensuring a fallback to standard DMA (host memory) for the rest,
   if possible.

Signed-off-by: Yonatan Maman <Ymaman at Nvidia.com>
Signed-off-by: Gal Shalom <GalShalom at Nvidia.com>
---
 drivers/infiniband/hw/mlx5/odp.c | 24 +++++++++++++++++++++---
 1 file changed, 21 insertions(+), 3 deletions(-)

diff --git a/drivers/infiniband/hw/mlx5/odp.c b/drivers/infiniband/hw/mlx5/odp.c
index fbb2a5670c32..f7a1291ec7d1 100644
--- a/drivers/infiniband/hw/mlx5/odp.c
+++ b/drivers/infiniband/hw/mlx5/odp.c
@@ -169,6 +169,7 @@ static int populate_mtt(__be64 *pas, size_t start, size_t nentries,
 	struct pci_p2pdma_map_state p2pdma_state = {};
 	struct ib_device *dev = odp->umem.ibdev;
 	size_t i;
+	int ret = 0;
 
 	if (flags & MLX5_IB_UPD_XLT_ZAP)
 		return 0;
@@ -184,8 +185,11 @@ static int populate_mtt(__be64 *pas, size_t start, size_t nentries,
 		dma_addr = hmm_dma_map_pfn(dev->dma_device, &odp->map,
 					   start + i, &p2pdma_state);
-		if (ib_dma_mapping_error(dev, dma_addr))
-			return -EFAULT;
+		if (ib_dma_mapping_error(dev, dma_addr)) {
+			odp->map.pfn_list[start + i] &= ~(HMM_PFN_ALLOW_P2P);
+			ret = -EFAULT;
+			continue;
+		}
 
 		dma_addr |= MLX5_IB_MTT_READ;
 		if ((pfn & HMM_PFN_WRITE) && !downgrade)
@@ -194,7 +198,7 @@ static int populate_mtt(__be64 *pas, size_t start, size_t nentries,
 		pas[i] = cpu_to_be64(dma_addr);
 		odp->npages++;
 	}
-	return 0;
+	return ret;
 }
 
 int mlx5_odp_populate_xlt(void *xlt, size_t idx, size_t nentries,
@@ -696,6 +700,10 @@ static int pagefault_real_mr(struct mlx5_ib_mr *mr, struct ib_umem_odp *odp,
 	if (odp->umem.writable && !downgrade)
 		access_mask |= HMM_PFN_WRITE;
 
+	/*
+	 * try fault with HMM_PFN_ALLOW_P2P flag
+	 */
+	access_mask |= HMM_PFN_ALLOW_P2P;
 	np = ib_umem_odp_map_dma_and_lock(odp, user_va, bcnt, access_mask, fault);
 	if (np < 0)
 		return np;
@@ -705,6 +713,16 @@ static int pagefault_real_mr(struct mlx5_ib_mr *mr, struct ib_umem_odp *odp,
 	 * ib_umem_odp_map_dma_and_lock already checks this.
 	 */
 	ret = mlx5r_umr_update_xlt(mr, start_idx, np, page_shift, xlt_flags);
+	if (ret == -EFAULT) {
+		/*
+		 * Indicate P2P Mapping Error, retry with no HMM_PFN_ALLOW_P2P
+		 */
+		access_mask &= ~HMM_PFN_ALLOW_P2P;
+		np = ib_umem_odp_map_dma_and_lock(odp, user_va, bcnt, access_mask, fault);
+		if (np < 0)
+			return np;
+		ret = mlx5r_umr_update_xlt(mr, start_idx, np, page_shift, xlt_flags);
+	}
 	mutex_unlock(&odp->umem_mutex);
 
 	if (ret < 0) {
--
2.34.1
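The retry logic amounts to the following generic pattern. This is a reading
aid only, not the driver code; my_update_xlt() stands in for the XLT update
step and the umem_mutex handling is omitted.

/*
 * Illustrative sketch of the try-P2P-then-fall-back flow: first attempt
 * the fault with P2P allowed, and on a mapping error retry without P2P
 * so the affected pages are served from host memory instead.
 */
#include <linux/hmm.h>
#include <rdma/ib_umem_odp.h>

static int my_update_xlt(struct ib_umem_odp *odp, int npages);

static int my_fault_with_p2p_fallback(struct ib_umem_odp *odp, u64 va,
				      u64 bcnt, u64 access_mask, bool fault)
{
	int np, ret;

	/* First attempt: allow P2P mappings for device private pages. */
	np = ib_umem_odp_map_dma_and_lock(odp, va, bcnt,
					  access_mask | HMM_PFN_ALLOW_P2P, fault);
	if (np < 0)
		return np;

	ret = my_update_xlt(odp, np);
	if (ret == -EFAULT) {
		/* P2P mapping failed; retry without P2P (host memory). */
		np = ib_umem_odp_map_dma_and_lock(odp, va, bcnt, access_mask,
						  fault);
		if (np < 0)
			return np;
		ret = my_update_xlt(odp, np);
	}
	return ret;
}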
Yonatan Maman
2024-Dec-01 10:36 UTC
[RFC 5/5] RDMA/mlx5: Enabling ATS for ODP memory
From: Yonatan Maman <Ymaman at Nvidia.com>

ATS (Address Translation Services) is mainly utilized to optimize PCI
Peer-to-Peer transfers and prevent bus failures. This change enables ATS
for ODP memory, to optimize P2P DMA for ODP memory (e.g., P2P DMA for
device private pages exposed through ODP).

Signed-off-by: Yonatan Maman <Ymaman at Nvidia.com>
Signed-off-by: Gal Shalom <GalShalom at Nvidia.com>
---
 drivers/infiniband/hw/mlx5/mlx5_ib.h | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/infiniband/hw/mlx5/mlx5_ib.h b/drivers/infiniband/hw/mlx5/mlx5_ib.h
index 1bae5595c729..702d155f5048 100644
--- a/drivers/infiniband/hw/mlx5/mlx5_ib.h
+++ b/drivers/infiniband/hw/mlx5/mlx5_ib.h
@@ -1705,9 +1705,9 @@ static inline bool rt_supported(int ts_cap)
 static inline bool mlx5_umem_needs_ats(struct mlx5_ib_dev *dev,
 				       struct ib_umem *umem, int access_flags)
 {
-	if (!MLX5_CAP_GEN(dev->mdev, ats) || !umem->is_dmabuf)
-		return false;
-	return access_flags & IB_ACCESS_RELAXED_ORDERING;
+	if (MLX5_CAP_GEN(dev->mdev, ats) && (umem->is_dmabuf || umem->is_odp))
+		return access_flags & IB_ACCESS_RELAXED_ORDERING;
+	return false;
 }
 
 int set_roce_addr(struct mlx5_ib_dev *dev, u32 port_num,
--
2.34.1