search for: nr_dma

Displaying 20 results from an estimated 22 matches for "nr_dma".

2019 Aug 08
2
[PATCH] nouveau/hmm: map pages after migration
...static void nouveau_dmem_migrate_chunk(struct migrate_vma *args,
>> -		struct nouveau_drm *drm, dma_addr_t *dma_addrs)
>> +		struct nouveau_drm *drm, dma_addr_t *dma_addrs, u64 *pfns)
>> {
>> 	struct nouveau_fence *fence;
>> 	unsigned long addr = args->start, nr_dma = 0, i;
>>
>> 	for (i = 0; addr < args->end; i++) {
>> 		args->dst[i] = nouveau_dmem_migrate_copy_one(drm, args->vma,
>> -			addr, args->src[i], &dma_addrs[nr_dma]);
>> +			args->src[i], &dma_addrs[nr_dma], &pfns[i]);
>
> N...
2019 Aug 07
4
[PATCH] nouveau/hmm: map pages after migration
...pfn = NVIF_VMM_PFNMAP_V0_NONE;
 	return 0;
 }
 
 static void nouveau_dmem_migrate_chunk(struct migrate_vma *args,
-		struct nouveau_drm *drm, dma_addr_t *dma_addrs)
+		struct nouveau_drm *drm, dma_addr_t *dma_addrs, u64 *pfns)
 {
 	struct nouveau_fence *fence;
 	unsigned long addr = args->start, nr_dma = 0, i;
 
 	for (i = 0; addr < args->end; i++) {
 		args->dst[i] = nouveau_dmem_migrate_copy_one(drm, args->vma,
-				addr, args->src[i], &dma_addrs[nr_dma]);
+				args->src[i], &dma_addrs[nr_dma], &pfns[i]);
 		if (args->dst[i])
 			nr_dma++;
 		addr += PAGE_SIZE;...
2019 Aug 10
0
[PATCH] nouveau/hmm: map pages after migration
On Thu, Aug 08, 2019 at 02:29:34PM -0700, Ralph Campbell wrote:
>>> {
>>> 	struct nouveau_fence *fence;
>>> 	unsigned long addr = args->start, nr_dma = 0, i;
>>> 	for (i = 0; addr < args->end; i++) {
>>> 		args->dst[i] = nouveau_dmem_migrate_copy_one(drm, args->vma,
>>> -			addr, args->src[i], &dma_addrs[nr_dma]);
>>> +			args->src[i], &dma_addrs[nr_dma], &pfns[i]);
>...
2020 Jul 23
0
[PATCH v4 4/6] nouveau/svm: use the new migration invalidation
...ta = svmm;
 	*pfn = NVIF_VMM_PFNMAP_V0_V | NVIF_VMM_PFNMAP_V0_VRAM |
 		((paddr >> PAGE_SHIFT) << NVIF_VMM_PFNMAP_V0_ADDR_SHIFT);
 	if (src & MIGRATE_PFN_WRITE)
@@ -584,8 +592,8 @@ static void nouveau_dmem_migrate_chunk(struct nouveau_drm *drm,
 	unsigned long addr = args->start, nr_dma = 0, i;
 
 	for (i = 0; addr < args->end; i++) {
-		args->dst[i] = nouveau_dmem_migrate_copy_one(drm, args->src[i],
-				dma_addrs + nr_dma, pfns + i);
+		args->dst[i] = nouveau_dmem_migrate_copy_one(drm, svmm,
+				args->src[i], dma_addrs + nr_dma, pfns + i);
 		if (!dma_mapping_...
2019 Aug 08
0
[PATCH] nouveau/hmm: map pages after migration
...opy_one, thanks,

> static void nouveau_dmem_migrate_chunk(struct migrate_vma *args,
> -		struct nouveau_drm *drm, dma_addr_t *dma_addrs)
> +		struct nouveau_drm *drm, dma_addr_t *dma_addrs, u64 *pfns)
> {
> 	struct nouveau_fence *fence;
> 	unsigned long addr = args->start, nr_dma = 0, i;
>
> 	for (i = 0; addr < args->end; i++) {
> 		args->dst[i] = nouveau_dmem_migrate_copy_one(drm, args->vma,
> -			addr, args->src[i], &dma_addrs[nr_dma]);
> +			args->src[i], &dma_addrs[nr_dma], &pfns[i]);

Nit: I find the &pfns[i] way t...
2019 Jul 29
0
[PATCH 6/9] nouveau: simplify nouveau_dmem_migrate_vma
...u_dmem_migrate_finalize_and_map(struct nouveau_migrate *migrate)
+static void nouveau_dmem_migrate_chunk(struct migrate_vma *args,
+		struct nouveau_drm *drm, dma_addr_t *dma_addrs)
 {
-	struct nouveau_drm *drm = migrate->drm;
+	struct nouveau_fence *fence;
+	unsigned long addr = args->start, nr_dma = 0, i;
+
+	for (i = 0; addr < args->end; i++) {
+		args->dst[i] = nouveau_dmem_migrate_copy_one(drm, args->vma,
+				addr, args->src[i], &dma_addrs[nr_dma]);
+		if (args->dst[i])
+			nr_dma++;
+		addr += PAGE_SIZE;
+	}
-	nouveau_dmem_fence_done(&migrate->fence);
+	no...
2020 Mar 03
2
[PATCH v2] nouveau/hmm: map pages after migration
...fn = NVIF_VMM_PFNMAP_V0_NONE;
 	return 0;
 }
 
 static void nouveau_dmem_migrate_chunk(struct nouveau_drm *drm,
-		struct migrate_vma *args, dma_addr_t *dma_addrs)
+		struct migrate_vma *args, dma_addr_t *dma_addrs, u64 *pfns)
 {
 	struct nouveau_fence *fence;
 	unsigned long addr = args->start, nr_dma = 0, i;
 
 	for (i = 0; addr < args->end; i++) {
 		args->dst[i] = nouveau_dmem_migrate_copy_one(drm, args->src[i],
-				dma_addrs + nr_dma);
+				dma_addrs + nr_dma, pfns + i);
 		if (args->dst[i])
 			nr_dma++;
 		addr += PAGE_SIZE;
@@ -607,15 +615,12 @@ static void nouveau_dmem_mi...
2019 Aug 13
0
[PATCH] nouveau/hmm: map pages after migration
...0;
> }
>
> static void nouveau_dmem_migrate_chunk(struct migrate_vma *args,
> -		struct nouveau_drm *drm, dma_addr_t *dma_addrs)
> +		struct nouveau_drm *drm, dma_addr_t *dma_addrs, u64 *pfns)
> {
> 	struct nouveau_fence *fence;
> 	unsigned long addr = args->start, nr_dma = 0, i;
>
> 	for (i = 0; addr < args->end; i++) {
> 		args->dst[i] = nouveau_dmem_migrate_copy_one(drm, args->vma,
> -			addr, args->src[i], &dma_addrs[nr_dma]);
> +			args->src[i], &dma_addrs[nr_dma], &pfns[i]);
> 		if (args->dst[i])
>...
2020 Mar 04
5
[PATCH v3 0/4] nouveau/hmm: map pages after migration
Originally patch 4 was targeted for Jason's rdma tree since other HMM-related changes were queued there. Now that those have been merged, these patches just contain changes to nouveau, so they could go through any tree. I guess Ben Skeggs' tree would be appropriate. Changes since v2: Added patches 1-3 to fix some minor issues. Eliminated nouveau_find_svmm() since it is easily found.
2020 May 20
2
[PATCH] nouveau/hmm: fix migrate zero page to GPU
..._VRAM |
 		((paddr >> PAGE_SHIFT) << NVIF_VMM_PFNMAP_V0_ADDR_SHIFT);
@@ -528,7 +585,7 @@ static void nouveau_dmem_migrate_chunk(struct nouveau_drm *drm,
 	for (i = 0; addr < args->end; i++) {
 		args->dst[i] = nouveau_dmem_migrate_copy_one(drm, args->src[i],
 				dma_addrs + nr_dma, pfns + i);
-		if (args->dst[i])
+		if (!dma_mapping_error(drm->dev->dev, dma_addrs[nr_dma]))
 			nr_dma++;
 		addr += PAGE_SIZE;
 	}
--
2.20.1
2020 Jul 06
8
[PATCH 0/5] mm/migrate: avoid device private invalidations
The goal for this series is to avoid device private memory TLB invalidations when migrating a range of addresses from system memory to device private memory and some of those pages have already been migrated. The approach taken is to introduce a new mmu notifier invalidation event type and use that in the device driver to skip invalidation callbacks from migrate_vma_setup(). The device driver is
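
A minimal sketch of the approach this cover letter describes, assuming the event and field names the series introduces (MMU_NOTIFY_MIGRATE and a migrate_pgmap_owner cookie in struct mmu_notifier_range; later kernels renamed the field to owner). struct my_svmm, my_svmm_invalidate_range_start() and owner_cookie are hypothetical driver-side names, not code from the series:

#include <linux/mmu_notifier.h>

/* Hypothetical driver context; only the fields the sketch needs. */
struct my_svmm {
	struct mmu_notifier notifier;
	struct mutex mutex;
	void *owner_cookie;	/* same cookie the driver passes to migrate_vma_setup() */
};

static int my_svmm_invalidate_range_start(struct mmu_notifier *mn,
					  const struct mmu_notifier_range *update)
{
	struct my_svmm *svmm = container_of(mn, struct my_svmm, notifier);

	if (!mmu_notifier_range_blockable(update))
		return -EAGAIN;

	mutex_lock(&svmm->mutex);

	/*
	 * Skip invalidations that this driver itself raised from
	 * migrate_vma_setup(): the pages are moving to the device, so the
	 * device TLB shoot-down is already handled by the migration path.
	 */
	if (update->event == MMU_NOTIFY_MIGRATE &&
	    update->migrate_pgmap_owner == svmm->owner_cookie)
		goto out;

	/* ... invalidate device page tables for update->start..update->end ... */
out:
	mutex_unlock(&svmm->mutex);
	return 0;
}
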
2020 Jul 21
6
[PATCH v3 0/5] mm/migrate: avoid device private invalidations
The goal for this series is to avoid device private memory TLB invalidations when migrating a range of addresses from system memory to device private memory and some of those pages have already been migrated. The approach taken is to introduce a new mmu notifier invalidation event type and use that in the device driver to skip invalidation callbacks from migrate_vma_setup(). The device driver is
2020 Nov 06
4
[PATCH 0/3] drm/nouveau: extend the lifetime of nouveau_drm
Hi folks, Currently, when the device is removed (or the driver is unbound) the nouveau_drm structure is de-allocated. However, it's still accessible from and used by some DRM layer callbacks. For example, file handles can be closed after the device has been removed (physically or otherwise). This series converts the Nouveau device structure to be allocated and de-allocated with the
2020 Jul 13
9
[PATCH v2 0/5] mm/migrate: avoid device private invalidations
The goal for this series is to avoid device private memory TLB invalidations when migrating a range of addresses from system memory to device private memory and some of those pages have already been migrated. The approach taken is to introduce a new mmu notifier invalidation event type and use that in the device driver to skip invalidation callbacks from migrate_vma_setup(). The device driver is
2020 Jul 23
9
[PATCH v4 0/6] mm/migrate: avoid device private invalidations
The goal for this series is to avoid device private memory TLB invalidations when migrating a range of addresses from system memory to device private memory and some of those pages have already been migrated. The approach taken is to introduce a new mmu notifier invalidation event type and use that in the device driver to skip invalidation callbacks from migrate_vma_setup(). The device driver is
2020 Nov 06
12
[PATCH v3 0/6] mm/hmm/nouveau: add THP migration to migrate_vma_*
This series adds support for transparent huge page migration to migrate_vma_*() and adds nouveau SVM and HMM selftests as consumers. Earlier versions were posted previously [1] and [2]. The patches apply cleanly to the linux-mm 5.10.0-rc2 tree. There are a lot of other THP patches being posted. I don't think there are any semantic conflicts but there may be some merge conflicts depending on
2020 Sep 02
10
[PATCH v2 0/7] mm/hmm/nouveau: add THP migration to migrate_vma_*
This series adds support for transparent huge page migration to migrate_vma_*() and adds nouveau SVM and HMM selftests as consumers. An earlier version was posted previously [1]. This version now supports splitting a THP midway in the migration process which led to a number of changes. The patches apply cleanly to the current linux-mm tree. Since there are a couple of patches in linux-mm from Dan
2020 May 08
11
[PATCH 0/6] nouveau/hmm: add support for mapping large pages
hmm_range_fault() returns an array of page frame numbers and flags for how the pages are mapped in the requested process's page tables. The PFN can be used to get the struct page with hmm_pfn_to_page(), and the page size order can be determined with compound_order(page), but if the page is larger than order 0 (PAGE_SIZE), there is no indication that the page is mapped using a larger page size. To
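
A minimal sketch of the lookup pattern described above, assuming the hmm_pfns/HMM_PFN_VALID field names used by current kernels; my_map_one() and my_map_range() are hypothetical placeholders for the driver's page-table mapping code:

#include <linux/hmm.h>
#include <linux/mm.h>

/* Hypothetical: map one page at the given order into the device page tables. */
static void my_map_one(struct page *page, unsigned int order);

static void my_map_range(struct hmm_range *range)
{
	unsigned long npages = (range->end - range->start) >> PAGE_SHIFT;
	unsigned long i;

	for (i = 0; i < npages; i++) {
		unsigned long hmm_pfn = range->hmm_pfns[i];
		struct page *page;

		if (!(hmm_pfn & HMM_PFN_VALID))
			continue;

		page = hmm_pfn_to_page(hmm_pfn);
		/*
		 * compound_order() of the head page gives the allocation size,
		 * but, as noted above, it does not say whether the CPU mapping
		 * is PTE- or PMD-sized -- that is what this series sets out
		 * to expose.
		 */
		my_map_one(page, compound_order(compound_head(page)));
	}
}
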
2019 Aug 08
10
turn hmm migrate_vma upside down v2
Hi Jérôme, Ben and Jason, below is a series against the hmm tree which starts revamping the migrate_vma functionality. The prime idea is to export three slightly lower level functions and thus avoid the need for migrate_vma_ops callbacks. Diffstat: 5 files changed, 281 insertions(+), 607 deletions(-) A git tree is also available at: git://git.infradead.org/users/hch/misc.git
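
A minimal sketch of the resulting call flow once the three lower-level functions are exported (migrate_vma_setup(), migrate_vma_pages() and migrate_vma_finalize()) and the migrate_vma_ops callbacks go away; my_migrate_range() and my_alloc_and_copy() are hypothetical driver-side names, and the fixed 64-page buffers are just for illustration:

#include <linux/migrate.h>

/* Hypothetical: allocate device pages, copy the data, fill args->dst[]. */
static void my_alloc_and_copy(struct migrate_vma *args);

/* Caller holds the mmap lock for read; range assumed to be at most 64 pages. */
static int my_migrate_range(struct vm_area_struct *vma,
			    unsigned long start, unsigned long end)
{
	unsigned long src_pfns[64] = { 0 }, dst_pfns[64] = { 0 };
	struct migrate_vma args = {
		.vma	= vma,
		.start	= start,
		.end	= end,
		.src	= src_pfns,
		.dst	= dst_pfns,
	};
	int ret;

	ret = migrate_vma_setup(&args);		/* collect and isolate the source pages */
	if (ret)
		return ret;

	if (args.cpages)
		my_alloc_and_copy(&args);	/* what the old alloc_and_copy callback did */

	migrate_vma_pages(&args);		/* switch the CPU page table entries */
	migrate_vma_finalize(&args);		/* drop references; restore on failure */
	return 0;
}

In the nouveau excerpts above, nouveau_dmem_migrate_chunk() plays roughly the role of that per-range copy step.
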
2020 Jun 19
22
[PATCH 00/16] mm/hmm/nouveau: THP mapping and migration
These patches apply to linux-5.8.0-rc1. Patches 1-3 should probably go into 5.8, the others can be queued for 5.9. Patches 4-6 improve the HMM self tests. Patch 7-8 prepare nouveau for the meat of this series which adds support and testing for compound page mapping of system memory (patches 9-11) and compound page migration to device private memory (patches 12-16). Since these changes are split