Displaying 20 results from an estimated 32 matches for "memremap_pages".
2020 Sep 16
0
[PATCH] mm: remove extra ZONE_DEVICE struct page refcount
...(range->end),
@@ -181,7 +152,6 @@ void memunmap_pages(struct dev_pagemap *pgmap)
pageunmap_range(pgmap, i);
WARN_ONCE(pgmap->altmap.alloc, "failed to free all reserved pages\n");
- devmap_managed_enable_put();
}
EXPORT_SYMBOL_GPL(memunmap_pages);
@@ -319,7 +289,6 @@ void *memremap_pages(struct dev_pagemap *pgmap, int nid)
.pgprot = PAGE_KERNEL,
};
const int nr_range = pgmap->nr_range;
- bool need_devmap_managed = true;
int error, i;
if (WARN_ONCE(!nr_range, "nr_range must be specified\n"))
@@ -331,8 +300,9 @@ void *memremap_pages(struct dev_pagemap *pgmap...
2020 Sep 14
5
[PATCH] mm: remove extra ZONE_DEVICE struct page refcount
...that ZONE_DEVICE
struct page reference counting is ugly/broken. This is my attempt to
fix it, and it works for the HMM migration selftests.
I'm only sending this out as an RFC since I'm not that familiar with the
DAX, PMEM, XEN, and other uses of ZONE_DEVICE struct pages allocated
with devm_memremap_pages() or memremap_pages() but my best reading of
the code looks like it might be OK. I could use help testing these
configurations.
I have a modified THP migration patch series that applies on top of
this one and is cleaner since I don't have to add code to handle the
+1 reference count. The link...
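(For readers less familiar with the API under discussion, here is a minimal
sketch of a MEMORY_DEVICE_PRIVATE registration via memremap_pages(); the my_*
identifiers are hypothetical stand-ins for a real driver's device structure
and callbacks, and the range/nr_range fields match the API visible in the
diffs above.)

static const struct dev_pagemap_ops my_pagemap_ops = {
	.page_free	= my_page_free,		/* hypothetical callback */
	.migrate_to_ram	= my_migrate_to_ram,	/* hypothetical callback */
};

static int my_register_dmem(struct my_device *mydev, struct resource *res)
{
	void *addr;

	mydev->pgmap.type = MEMORY_DEVICE_PRIVATE;
	mydev->pgmap.range.start = res->start;
	mydev->pgmap.range.end = res->end;
	mydev->pgmap.nr_range = 1;
	mydev->pgmap.ops = &my_pagemap_ops;
	/* owner is an opaque cookie used to match pages back to this driver */
	mydev->pgmap.owner = mydev;

	addr = memremap_pages(&mydev->pgmap, NUMA_NO_NODE);
	return IS_ERR(addr) ? PTR_ERR(addr) : 0;
}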
2020 Sep 17
0
[PATCH] mm: remove extra ZONE_DEVICE struct page refcount
...nmap_pages(struct dev_pagemap *pgmap)
> pageunmap_range(pgmap, i);
>
> WARN_ONCE(pgmap->altmap.alloc, "failed to free all reserved pages\n");
> - devmap_managed_enable_put();
> }
> EXPORT_SYMBOL_GPL(memunmap_pages);
>
> @@ -319,7 +289,6 @@ void *memremap_pages(struct dev_pagemap *pgmap, int nid)
> .pgprot = PAGE_KERNEL,
> };
> const int nr_range = pgmap->nr_range;
> - bool need_devmap_managed = true;
> int error, i;
>
> if (WARN_ONCE(!nr_range, "nr_range must be specified\n"))
> @@ -331,8 +300,9 @@...
2020 Sep 25
0
[PATCH 2/2] mm: remove extra ZONE_DEVICE struct page refcount
...ct dev_pagemap *pgmap)
untrack_pfn(NULL, PHYS_PFN(res->start), resource_size(res));
pgmap_array_delete(res);
WARN_ONCE(pgmap->altmap.alloc, "failed to free all reserved pages\n");
- devmap_managed_enable_put();
}
EXPORT_SYMBOL_GPL(memunmap_pages);
@@ -192,7 +147,6 @@ void *memremap_pages(struct dev_pagemap *pgmap, int nid)
.pgprot = PAGE_KERNEL,
};
int error, is_ram;
- bool need_devmap_managed = true;
switch (pgmap->type) {
case MEMORY_DEVICE_PRIVATE:
@@ -217,11 +171,9 @@ void *memremap_pages(struct dev_pagemap *pgmap, int nid)
}
break;
case MEMORY_DEVICE_GE...
2020 Sep 25
6
[RFC PATCH v2 0/2] mm: remove extra ZONE_DEVICE struct page refcount
...is isolated or idle. This is my attempt to make
ZONE_DEVICE pages free when the reference count drops to zero and to
remove the special cases.
I'm only sending this out as an RFC since I'm not that familiar with the
DAX, PMEM, XEN, and other uses of ZONE_DEVICE struct pages allocated
with devm_memremap_pages() or memremap_pages() but my best reading of
the code looks like it might be OK. I could use help testing these
configurations.
I have been able to successfully run xfstests on ext4 with the memmap
kernel boot option to simulate pmem.
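(For reference, the memmap boot option reserves a physical range as emulated
persistent memory, e.g. memmap=4G!8G uses the 4 GiB starting at the 8 GiB
physical offset; those values are illustrative and depend on the machine's
memory map.)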
One of the big changes in v2 is that devm_memremap_pages() and...
2020 Oct 01
8
[RFC PATCH v3 0/2] mm: remove extra ZONE_DEVICE struct page refcount
...is isolated or idle. This is my attempt to make
ZONE_DEVICE pages free when the reference count drops to zero and to
remove the special cases.
I'm only sending this out as an RFC since I'm not that familiar with the
DAX, PMEM, XEN, and other uses of ZONE_DEVICE struct pages allocated
with devm_memremap_pages() or memremap_pages() but my best reading of
the code looks like it might be OK. I could use help testing these
configurations.
I have been able to successfully run xfstests on ext4 with the memmap
kernel boot option to simulate pmem.
Changes in v3:
Rebased to linux-mm 5.9.0-rc7-mm1.
Added a check...
2020 Mar 16
0
[PATCH 1/4] memremap: add an owner field to struct dev_pagemap
...vm/book3s_hv_uvmem.c
@@ -779,6 +779,8 @@ int kvmppc_uvmem_init(void)
kvmppc_uvmem_pgmap.type = MEMORY_DEVICE_PRIVATE;
kvmppc_uvmem_pgmap.res = *res;
kvmppc_uvmem_pgmap.ops = &kvmppc_uvmem_ops;
+ /* just one global instance: */
+ kvmppc_uvmem_pgmap.owner = &kvmppc_uvmem_pgmap;
addr = memremap_pages(&kvmppc_uvmem_pgmap, NUMA_NO_NODE);
if (IS_ERR(addr)) {
ret = PTR_ERR(addr);
diff --git a/drivers/gpu/drm/nouveau/nouveau_dmem.c b/drivers/gpu/drm/nouveau/nouveau_dmem.c
index 0ad5d87b5a8e..a4682272586e 100644
--- a/drivers/gpu/drm/nouveau/nouveau_dmem.c
+++ b/drivers/gpu/drm/nouveau/nouve...
2020 Oct 01
0
[RFC PATCH v3 2/2] mm: remove extra ZONE_DEVICE struct page refcount
...i));
dev_pagemap_cleanup(pgmap);
for (i = 0; i < pgmap->nr_range; i++)
pageunmap_range(pgmap, i);
WARN_ONCE(pgmap->altmap.alloc, "failed to free all reserved pages\n");
- devmap_managed_enable_put();
}
EXPORT_SYMBOL_GPL(memunmap_pages);
@@ -327,7 +286,6 @@ void *memremap_pages(struct dev_pagemap *pgmap, int nid)
.pgprot = PAGE_KERNEL,
};
const int nr_range = pgmap->nr_range;
- bool need_devmap_managed = true;
int error, i;
if (WARN_ONCE(!nr_range, "nr_range must be specified\n"))
@@ -343,6 +301,10 @@ void *memremap_pages(struct dev_pagemap *pgma...
2020 Sep 17
1
[PATCH] mm: remove extra ZONE_DEVICE struct page refcount
....
Dan Williams thought that having the ZONE_DEVICE struct pages
be on a free list with a refcount of one was a bit strange and
that the driver should handle the zero-to-one transition.
But that would mean a somewhat more invasive change to the 3 drivers
to set the reference count to zero after calling memremap_pages()
and to set the reference count to one when allocating a struct
page. What you are suggesting is what I also proposed in v1.
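(A minimal sketch of that zero-to-one transition in a driver allocation path;
the my_* names are hypothetical:)

static struct page *my_dmem_alloc_page(struct my_device *mydev)
{
	/* hypothetical per-device free list of device private pages */
	struct page *page = my_dmem_pop_free(mydev);

	if (!page)
		return NULL;
	/*
	 * A free page sits at refcount zero under this scheme, and
	 * get_page() would VM_BUG_ON a zero count, so the driver does
	 * the zero-to-one transition itself.
	 */
	init_page_count(page);
	lock_page(page);
	return page;
}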
2020 Sep 26
1
[PATCH 2/2] mm: remove extra ZONE_DEVICE struct page refcount
...eturn dpage;
>
Doesn't test_hmm also need to reinitialize the refcount before freeing
the page in hmm_dmirror_exit?
> int error, is_ram;
> - bool need_devmap_managed = true;
>
> switch (pgmap->type) {
> case MEMORY_DEVICE_PRIVATE:
> @@ -217,11 +171,9 @@ void *memremap_pages(struct dev_pagemap *pgmap, int nid)
> }
> break;
> case MEMORY_DEVICE_GENERIC:
The MEMORY_DEVICE_PRIVATE case loses the sanity check that the
page_free method is set.
Otherwise this looks good.
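(A paraphrased sketch of the sanity check being referred to, as it was done
in the devmap_managed setup path; my_check_pgmap is a hypothetical name, not
the upstream function:)

static int my_check_pgmap(struct dev_pagemap *pgmap)
{
	/*
	 * Device private pages are only useful if the driver gets them
	 * back via the page_free callback when the refcount drops.
	 */
	if (pgmap->type == MEMORY_DEVICE_PRIVATE &&
	    (!pgmap->ops || !pgmap->ops->page_free)) {
		WARN(1, "Missing page_free method\n");
		return -EINVAL;
	}
	return 0;
}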
2020 Oct 12
2
[PATCH v2] mm/hmm: make device private reference counts zero based
...i++)
pageunmap_range(pgmap, i);
WARN_ONCE(pgmap->altmap.alloc, "failed to free all reserved pages\n");
- devmap_managed_enable_put();
+ if (pgmap->type == MEMORY_DEVICE_FS_DAX)
+ devmap_managed_enable_put();
}
EXPORT_SYMBOL_GPL(memunmap_pages);
@@ -328,7 +312,7 @@ void *memremap_pages(struct dev_pagemap *pgmap, int nid)
.pgprot = PAGE_KERNEL,
};
const int nr_range = pgmap->nr_range;
- bool need_devmap_managed = true;
+ bool need_devmap_managed = false;
int error, i;
if (WARN_ONCE(!nr_range, "nr_range must be specified\n"))
@@ -344,6 +328,10 @@ void *mem...
2020 Oct 08
2
[PATCH] mm: make device private reference counts zero based
...i++)
pageunmap_range(pgmap, i);
WARN_ONCE(pgmap->altmap.alloc, "failed to free all reserved pages\n");
- devmap_managed_enable_put();
+ if (pgmap->type == MEMORY_DEVICE_FS_DAX)
+ devmap_managed_enable_put();
}
EXPORT_SYMBOL_GPL(memunmap_pages);
@@ -328,7 +312,7 @@ void *memremap_pages(struct dev_pagemap *pgmap, int nid)
.pgprot = PAGE_KERNEL,
};
const int nr_range = pgmap->nr_range;
- bool need_devmap_managed = true;
+ bool need_devmap_managed = false;
int error, i;
if (WARN_ONCE(!nr_range, "nr_range must be specified\n"))
@@ -344,6 +328,10 @@ void *mem...
2020 Jun 22
2
[PATCH 13/16] mm: support THP migration to device private memory
...E_DEVICE private memory.
>>> A new flag (MIGRATE_PFN_COMPOUND) is added to the input PFN array to
>>> indicate the huge page was fully mapped by the CPU.
>>> Export prep_compound_page() so that device drivers can create huge
>>> device private pages after calling memremap_pages().
>>>
>>> Signed-off-by: Ralph Campbell <rcampbell at nvidia.com>
>>> ---
>>> include/linux/migrate.h | 1 +
>>> include/linux/mm.h | 1 +
>>> mm/huge_memory.c | 30 ++++--
>>> mm/internal.h | 1...
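(A sketch of what exporting prep_compound_page() enables; the my_* helpers
are hypothetical and the PMD-sized order is an assumption:)

static struct page *my_dmem_alloc_huge(struct my_device *mydev)
{
	/* hypothetical: pop HPAGE_PMD_NR contiguous device private pages */
	struct page *head = my_dmem_pop_free_chunk(mydev, HPAGE_PMD_NR);

	if (!head)
		return NULL;
	/* turn the chunk into one compound page using the exported helper */
	prep_compound_page(head, HPAGE_PMD_ORDER);
	return head;
}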
2020 Jun 22
2
[PATCH 13/16] mm: support THP migration to device private memory
...;>>> A new flag (MIGRATE_PFN_COMPOUND) is added to the input PFN array to
>>>>> indicate the huge page was fully mapped by the CPU.
>>>>> Export prep_compound_page() so that device drivers can create huge
>>>>> device private pages after calling memremap_pages().
>>>>>
>>>>> Signed-off-by: Ralph Campbell <rcampbell at nvidia.com>
>>>>> ---
>>>>> include/linux/migrate.h | 1 +
>>>>> include/linux/mm.h | 1 +
>>>>> mm/huge_memory.c | 30 +...
2020 Nov 06
1
[PATCH v3 1/6] mm/thp: add prep_transhuge_device_private_page()
On Thu, Nov 05, 2020 at 04:51:42PM -0800, Ralph Campbell wrote:
> +extern void prep_transhuge_device_private_page(struct page *page);
No need for the extern.
> +static inline void prep_transhuge_device_private_page(struct page *page)
> +{
> +}
Is the code to call this even reachable if THP support is configured
out? If not, just declaring it unconditionally and letting dead code
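(Sketched out, the pattern being suggested looks like this: no extern on the
declaration, with a static inline stub when THP is configured out:)

#ifdef CONFIG_TRANSPARENT_HUGEPAGE
void prep_transhuge_device_private_page(struct page *page);
#else
static inline void prep_transhuge_device_private_page(struct page *page)
{
}
#endif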
2020 Jun 21
2
[PATCH 13/16] mm: support THP migration to device private memory
...arent huge page migration to ZONE_DEVICE private memory.
> A new flag (MIGRATE_PFN_COMPOUND) is added to the input PFN array to
> indicate the huge page was fully mapped by the CPU.
> Export prep_compound_page() so that device drivers can create huge
> device private pages after calling memremap_pages().
>
> Signed-off-by: Ralph Campbell <rcampbell at nvidia.com>
> ---
> include/linux/migrate.h | 1 +
> include/linux/mm.h | 1 +
> mm/huge_memory.c | 30 ++++--
> mm/internal.h | 1 -
> mm/memory.c | 10 +-
> mm/memremap.c...
2020 Jun 22
0
[PATCH 13/16] mm: support THP migration to device private memory
...migration to ZONE_DEVICE private memory.
>> A new flag (MIGRATE_PFN_COMPOUND) is added to the input PFN array to
>> indicate the huge page was fully mapped by the CPU.
>> Export prep_compound_page() so that device drivers can create huge
>> device private pages after calling memremap_pages().
>>
>> Signed-off-by: Ralph Campbell <rcampbell at nvidia.com>
>> ---
>> include/linux/migrate.h | 1 +
>> include/linux/mm.h | 1 +
>> mm/huge_memory.c | 30 ++++--
>> mm/internal.h | 1 -
>> mm/memory.c...
2020 Sep 15
2
[PATCH] mm: remove extra ZONE_DEVICE struct page refcount
On 9/15/20 9:29 AM, Christoph Hellwig wrote:
> On Mon, Sep 14, 2020 at 04:53:25PM -0700, Ralph Campbell wrote:
>> Since set_page_refcounted() is defined in mm_internal.h I would have to
>> move the definition to someplace like page_ref.h or have the drivers
>> call init_page_count() or set_page_count() since get_page() calls
>> VM_BUG_ON_PAGE() if refcount == 0.
>>
2020 Mar 16
14
ensure device private pages have an owner v2
When acting on device private mappings, a driver needs to know if the
device (or other entity, in the case of kvmppc) actually owns this private
mapping. This series adds an owner field and converts the migrate_vma
code over to check it. I looked into doing the same for
hmm_range_fault, but as far as I can tell that code has never been
wired up to actually work for device private memory, so instead of
2020 Mar 16
0
[PATCH 1/2] mm: handle multiple owners of device private pages in migrate_vma
...ut, nothing to do */
@@ -779,6 +781,8 @@ int kvmppc_uvmem_init(void)
kvmppc_uvmem_pgmap.type = MEMORY_DEVICE_PRIVATE;
kvmppc_uvmem_pgmap.res = *res;
kvmppc_uvmem_pgmap.ops = &kvmppc_uvmem_ops;
+ /* just one global instance: */
+ kvmppc_uvmem_pgmap.owner = &kvmppc_uvmem_pgmap;
addr = memremap_pages(&kvmppc_uvmem_pgmap, NUMA_NO_NODE);
if (IS_ERR(addr)) {
ret = PTR_ERR(addr);
diff --git a/drivers/gpu/drm/nouveau/nouveau_dmem.c b/drivers/gpu/drm/nouveau/nouveau_dmem.c
index 0ad5d87b5a8e..7605c4c48985 100644
--- a/drivers/gpu/drm/nouveau/nouveau_dmem.c
+++ b/drivers/gpu/drm/nouveau/nouve...
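(The diffs above show the producer side, where each dev_pagemap sets an owner
cookie; below is a sketch of the consumer side, assuming the migrate_vma
field added by this series is named src_owner, with my_* names hypothetical:)

static int my_migrate_range(struct my_device *mydev,
			    struct vm_area_struct *vma,
			    unsigned long start, unsigned long end,
			    unsigned long *src_pfns, unsigned long *dst_pfns)
{
	struct migrate_vma args = {
		.vma		= vma,
		.start		= start,
		.end		= end,
		.src		= src_pfns,
		.dst		= dst_pfns,
		/* only pages whose pgmap->owner matches are selected */
		.src_owner	= mydev,
	};

	return migrate_vma_setup(&args);
}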