Displaying 15 results from an estimated 15 matches for "nouveau_pfns_alloc".
2019 Aug 07
4
[PATCH] nouveau/hmm: map pages after migration
...src = kcalloc(max, sizeof(args.src), GFP_KERNEL);
@@ -649,19 +654,25 @@ nouveau_dmem_migrate_vma(struct nouveau_drm *drm,
if (!dma_addrs)
goto out_free_dst;
- for (i = 0; i < npages; i += c) {
- c = min(SG_MAX_SINGLE_ALLOC, npages);
- args.end = start + (c << PAGE_SHIFT);
+ pfns = nouveau_pfns_alloc(max);
+ if (!pfns)
+ goto out_free_dma;
+
+ for (i = 0; i < npages; i += max) {
+ args.end = start + (max << PAGE_SHIFT);
ret = migrate_vma_setup(&args);
if (ret)
- goto out_free_dma;
+ goto out_free_pfns;
if (args.cpages)
- nouveau_dmem_migrate_chunk(&args, drm,...
2019 Aug 13
0
[PATCH] nouveau/hmm: map pages after migration
..._KERNEL);
> @@ -649,19 +654,25 @@ nouveau_dmem_migrate_vma(struct nouveau_drm *drm,
> if (!dma_addrs)
> goto out_free_dst;
>
> - for (i = 0; i < npages; i += c) {
> - c = min(SG_MAX_SINGLE_ALLOC, npages);
> - args.end = start + (c << PAGE_SHIFT);
> + pfns = nouveau_pfns_alloc(max);
> + if (!pfns)
> + goto out_free_dma;
> +
> + for (i = 0; i < npages; i += max) {
> + args.end = start + (max << PAGE_SHIFT);
> ret = migrate_vma_setup(&args);
> if (ret)
> - goto out_free_dma;
> + goto out_free_pfns;
>
> if (args...
2019 Aug 08
0
[PATCH] nouveau/hmm: map pages after migration
...ouveau_dmem_migrate_copy_one(drm, args->vma,
> - addr, args->src[i], &dma_addrs[nr_dma]);
> + args->src[i], &dma_addrs[nr_dma], &pfns[i]);
Nit: I find the &pfns[i] way to pass the argument a little weird to read.
Why not "pfns + i"?
> +u64 *
> +nouveau_pfns_alloc(unsigned long npages)
> +{
> + struct nouveau_pfnmap_args *args;
> +
> + args = kzalloc(sizeof(*args) + npages * sizeof(args->p.phys[0]),
Can we use struct_size here?
> + int ret;
> +
> + if (!svm)
> + return;
> +
> + mutex_lock(&svm->mutex);
> + svmm =...
2019 Aug 08
2
[PATCH] nouveau/hmm: map pages after migration
...);
>> + args->src[i], &dma_addrs[nr_dma], &pfns[i]);
>
> Nit: I find the &pfns[i] way to pass the argument a little weird to read.
> Why not "pfns + i"?
OK, will do in v2.
Should I convert to "dma_addrs + nr_dma" too?
>> +u64 *
>> +nouveau_pfns_alloc(unsigned long npages)
>> +{
>> + struct nouveau_pfnmap_args *args;
>> +
>> + args = kzalloc(sizeof(*args) + npages * sizeof(args->p.phys[0]),
>
> Can we use struct_size here?
Yes, good suggestion.
>
>> + int ret;
>> +
>> + if (!svm)
>>...
2020 Mar 03
2
[PATCH v2] nouveau/hmm: map pages after migration
...rc = kcalloc(max, sizeof(*args.src), GFP_KERNEL);
@@ -646,19 +652,25 @@ nouveau_dmem_migrate_vma(struct nouveau_drm *drm,
if (!dma_addrs)
goto out_free_dst;
- for (i = 0; i < npages; i += c) {
- c = min(SG_MAX_SINGLE_ALLOC, npages);
- args.end = start + (c << PAGE_SHIFT);
+ pfns = nouveau_pfns_alloc(max);
+ if (!pfns)
+ goto out_free_dma;
+
+ for (i = 0; i < npages; i += max) {
+ args.end = start + (max << PAGE_SHIFT);
ret = migrate_vma_setup(&args);
if (ret)
- goto out_free_dma;
+ goto out_free_pfns;
if (args.cpages)
- nouveau_dmem_migrate_chunk(drm, &args,...
2020 Mar 04
5
[PATCH v3 0/4] nouveau/hmm: map pages after migration
Originally patch 4 was targeted for Jason's rdma tree since other HMM
related changes were queued there. Now that those have been merged,
these patches just contain changes to nouveau so they could go through
any tree. I guess Ben Skeggs' tree would be appropriate.
Changes since v2:
Added patches 1-3 to fix some minor issues.
Eliminated nouveau_find_svmm() since it is easily found.
2020 Jul 23
0
[PATCH v4 4/6] nouveau/svm: use the new migration invalidation
...drm *);
@@ -19,6 +29,7 @@ int nouveau_svmm_join(struct nouveau_svmm *, u64 inst);
void nouveau_svmm_part(struct nouveau_svmm *, u64 inst);
int nouveau_svmm_bind(struct drm_device *, void *, struct drm_file *);
+void nouveau_svmm_invalidate(struct nouveau_svmm *svmm, u64 start, u64 limit);
u64 *nouveau_pfns_alloc(unsigned long npages);
void nouveau_pfns_free(u64 *pfns);
void nouveau_pfns_map(struct nouveau_svmm *svmm, struct mm_struct *mm,
--
2.20.1
2020 May 08
11
[PATCH 0/6] nouveau/hmm: add support for mapping large pages
hmm_range_fault() returns an array of page frame numbers and flags for
how the pages are mapped in the requested process' page tables. The PFN
can be used to get the struct page with hmm_pfn_to_page(), and the page
size order can be determined with compound_order(page), but if the page
is larger than order 0 (PAGE_SIZE), there is no indication that the page
is mapped using a larger page size. To
2020 Jul 06
8
[PATCH 0/5] mm/migrate: avoid device private invalidations
The goal for this series is to avoid device private memory TLB
invalidations when migrating a range of addresses from system
memory to device private memory and some of those pages have already
been migrated. The approach taken is to introduce a new mmu notifier
invalidation event type and use that in the device driver to skip
invalidation callbacks from migrate_vma_setup(). The device driver is
2020 Jul 21
6
[PATCH v3 0/5] mm/migrate: avoid device private invalidations
The goal for this series is to avoid device private memory TLB
invalidations when migrating a range of addresses from system
memory to device private memory and some of those pages have already
been migrated. The approach taken is to introduce a new mmu notifier
invalidation event type and use that in the device driver to skip
invalidation callbacks from migrate_vma_setup(). The device driver is
2020 Jul 13
9
[PATCH v2 0/5] mm/migrate: avoid device private invalidations
The goal for this series is to avoid device private memory TLB
invalidations when migrating a range of addresses from system
memory to device private memory and some of those pages have already
been migrated. The approach taken is to introduce a new mmu notifier
invalidation event type and use that in the device driver to skip
invalidation callbacks from migrate_vma_setup(). The device driver is
2020 Jul 23
9
[PATCH v4 0/6] mm/migrate: avoid device private invalidations
The goal for this series is to avoid device private memory TLB
invalidations when migrating a range of addresses from system
memory to device private memory and some of those pages have already
been migrated. The approach taken is to introduce a new mmu notifier
invalidation event type and use that in the device driver to skip
invalidation callbacks from migrate_vma_setup(). The device driver is
2020 Nov 06
12
[PATCH v3 0/6] mm/hmm/nouveau: add THP migration to migrate_vma_*
This series adds support for transparent huge page migration to
migrate_vma_*() and adds nouveau SVM and HMM selftests as consumers.
Earlier versions were posted previously [1] and [2].
The patches apply cleanly to the linux-mm 5.10.0-rc2 tree. There are a
lot of other THP patches being posted. I don't think there are any
semantic conflicts but there may be some merge conflicts depending on
2020 Sep 02
10
[PATCH v2 0/7] mm/hmm/nouveau: add THP migration to migrate_vma_*
This series adds support for transparent huge page migration to
migrate_vma_*() and adds nouveau SVM and HMM selftests as consumers.
An earlier version was posted previously [1]. This version now
supports splitting a THP midway in the migration process which
led to a number of changes.
The patches apply cleanly to the current linux-mm tree. Since there
are a couple of patches in linux-mm from Dan
2020 Jun 19
22
[PATCH 00/16] mm/hmm/nouveau: THP mapping and migration
These patches apply to linux-5.8.0-rc1. Patches 1-3 should probably go
into 5.8, the others can be queued for 5.9. Patches 4-6 improve the HMM
self tests. Patch 7-8 prepare nouveau for the meat of this series which
adds support and testing for compound page mapping of system memory
(patches 9-11) and compound page migration to device private memory
(patches 12-16). Since these changes are split