Displaying 9 results from an estimated 9 matches for "do_huge_pmd_anonymous_pag".
Did you mean: do_huge_pmd_anonymous_page
2020 Jun 22 · 1 · [PATCH 14/16] mm/thp: add THP allocation helper
...free_pages(gfp_t gfp_mask, unsigned int order);
> extern unsigned long get_zeroed_page(gfp_t gfp_mask);
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index 25d95f7b1e98..f749633ed350 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -775,6 +775,22 @@ vm_fault_t do_huge_pmd_anonymous_page(struct vm_fault *vmf)
> return __do_huge_pmd_anonymous_page(vmf, page, gfp);
> }
>
> +#ifdef CONFIG_ARCH_ENABLE_THP_MIGRATION
> +struct page *alloc_transhugepage(struct vm_area_struct *vma,
> + unsigned long haddr)
> +{
> + gfp_t gfp;
> + struct page *page;
> +...
2020 Jun 19 · 0 · [PATCH 14/16] mm/thp: add THP allocation helper
...extern unsigned long __get_free_pages(gfp_t gfp_mask, unsigned int order);
extern unsigned long get_zeroed_page(gfp_t gfp_mask);
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 25d95f7b1e98..f749633ed350 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -775,6 +775,22 @@ vm_fault_t do_huge_pmd_anonymous_page(struct vm_fault *vmf)
return __do_huge_pmd_anonymous_page(vmf, page, gfp);
}
+#ifdef CONFIG_ARCH_ENABLE_THP_MIGRATION
+struct page *alloc_transhugepage(struct vm_area_struct *vma,
+ unsigned long haddr)
+{
+ gfp_t gfp;
+ struct page *page;
+
+ gfp = alloc_hugepage_direct_gfpmask(vma);
+ p...
2020 Nov 06 · 0 · [PATCH v3 4/6] mm/thp: add THP allocation helper
...extern unsigned long __get_free_pages(gfp_t gfp_mask, unsigned int order);
extern unsigned long get_zeroed_page(gfp_t gfp_mask);
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index a073e66d0ee2..c2c1d3e7c35f 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -765,6 +765,20 @@ vm_fault_t do_huge_pmd_anonymous_page(struct vm_fault *vmf)
return __do_huge_pmd_anonymous_page(vmf, page, gfp);
}
+struct page *alloc_transhugepage(struct vm_area_struct *vma,
+ unsigned long haddr)
+{
+ gfp_t gfp;
+ struct page *page;
+
+ gfp = alloc_hugepage_direct_gfpmask(vma);
+ page = alloc_hugepage_vma(gfp, vma, haddr,...
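The last line of the helper is truncated in the excerpt. A hedged completion for orientation, assuming the remaining argument is HPAGE_PMD_ORDER and that the compound page is prepared with prep_transhuge_page() as in the anonymous THP fault path; this is an illustration, not the patch text:

struct page *alloc_transhugepage(struct vm_area_struct *vma,
                                 unsigned long haddr)
{
        gfp_t gfp;
        struct page *page;

        /* Same gfp policy as the anonymous THP fault path uses. */
        gfp = alloc_hugepage_direct_gfpmask(vma);
        /* Assumed tail of the truncated call: a PMD-order allocation. */
        page = alloc_hugepage_vma(gfp, vma, haddr, HPAGE_PMD_ORDER);
        if (page)
                prep_transhuge_page(page);
        return page;
}

A caller such as the device private migration path would map the returned page with a huge PMD, or fall back to base pages if the allocation fails.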
2020 Jun 19 · 0 · [PATCH 13/16] mm: support THP migration to device private memory
...n false;
-
/* Page from ZONE_DEVICE have one extra reference */
if (is_zone_device_page(page)) {
/*
@@ -2726,13 +2759,115 @@ int migrate_vma_setup(struct migrate_vma *args)
}
EXPORT_SYMBOL(migrate_vma_setup);
+#ifdef CONFIG_ARCH_ENABLE_THP_MIGRATION
+/*
+ * This code closely follows:
+ * do_huge_pmd_anonymous_page()
+ * __do_huge_pmd_anonymous_page()
+ * except that the page being inserted is likely to be a device private page
+ * instead of an allocated or zero page.
+ */
+static int insert_huge_pmd_anonymous_page(struct vm_area_struct *vma,
+ unsigned long haddr,
+ struct page *page,
+...
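The body of the function is cut off above. For orientation only, a minimal sketch of the core PMD-insert step the comment describes, closely following __do_huge_pmd_anonymous_page(); memcg charging, LRU handling, and error unwinding are elided, the function name and parameters are illustrative rather than the patch's, and the mm-internal helpers (mk_huge_pmd(), maybe_pmd_mkwrite()) are only visible inside mm/. The key difference for a device private page is that the PMD is written as a non-present swap entry.

static int sketch_insert_huge_pmd(struct vm_area_struct *vma,
                                  unsigned long haddr, pmd_t *pmdp,
                                  struct page *page, pgtable_t pgtable)
{
        struct mm_struct *mm = vma->vm_mm;
        bool write = vma->vm_flags & VM_WRITE;
        spinlock_t *ptl;
        pmd_t entry;

        __SetPageUptodate(page);

        if (is_device_private_page(page)) {
                /* Device private memory: encode the page in a swap entry. */
                swp_entry_t swp = make_device_private_entry(page, write);

                entry = swp_entry_to_pmd(swp);
        } else {
                /* System memory: a normal present huge mapping. */
                entry = mk_huge_pmd(page, vma->vm_page_prot);
                entry = maybe_pmd_mkwrite(pmd_mkdirty(entry), vma);
        }

        ptl = pmd_lock(mm, pmdp);
        if (!pmd_none(*pmdp)) {
                /* Raced with another fault; caller drops the new page. */
                spin_unlock(ptl);
                return -EAGAIN;
        }
        page_add_new_anon_rmap(page, vma, haddr, true);
        pgtable_trans_huge_deposit(mm, pmdp, pgtable);
        set_pmd_at(mm, haddr, pmdp, entry);
        add_mm_counter(mm, MM_ANONPAGES, HPAGE_PMD_NR);
        mm_inc_nr_ptes(mm);
        spin_unlock(ptl);
        return 0;
}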
2020 Nov 06 · 12 · [PATCH v3 0/6] mm/hmm/nouveau: add THP migration to migrate_vma_*
This series adds support for transparent huge page migration to
migrate_vma_*() and adds nouveau SVM and HMM selftests as consumers.
Earlier versions were posted as [1] and [2].
The patches apply cleanly to the linux-mm 5.10.0-rc2 tree. There are a
lot of other THP patches being posted. I don't think there are any
semantic conflicts, but there may be some merge conflicts depending on
2020 Jun 21 · 2 · [PATCH 13/16] mm: support THP migration to device private memory
...xtra reference */
> if (is_zone_device_page(page)) {
> /*
> @@ -2726,13 +2759,115 @@ int migrate_vma_setup(struct migrate_vma *args)
> }
> EXPORT_SYMBOL(migrate_vma_setup);
>
> +#ifdef CONFIG_ARCH_ENABLE_THP_MIGRATION
> +/*
> + * This code closely follows:
> + * do_huge_pmd_anonymous_page()
> + * __do_huge_pmd_anonymous_page()
> + * except that the page being inserted is likely to be a device private page
> + * instead of an allocated or zero page.
> + */
> +static int insert_huge_pmd_anonymous_page(struct vm_area_struct *vma,
> + unsigned long haddr,
>...
2020 Jun 19 · 22 · [PATCH 00/16] mm/hmm/nouveau: THP mapping and migration
These patches apply to linux-5.8.0-rc1. Patches 1-3 should probably go
into 5.8; the others can be queued for 5.9. Patches 4-6 improve the HMM
self tests. Patches 7-8 prepare nouveau for the meat of this series, which
adds support and testing for compound page mapping of system memory
(patches 9-11) and compound page migration to device private memory
(patches 12-16). Since these changes are split
2020 Nov 06 · 0 · [PATCH v3 3/6] mm: support THP migration to device private memory
...+ pud_t *pudp;
+
+ pgdp = pgd_offset(mm, addr);
+ p4dp = p4d_alloc(mm, pgdp, addr);
+ if (!p4dp)
+ return NULL;
+ pudp = pud_alloc(mm, p4dp, addr);
+ if (!pudp)
+ return NULL;
+ return pmd_alloc(mm, pudp, addr);
+}
+
+#ifdef CONFIG_ARCH_ENABLE_THP_MIGRATION
+/*
+ * This code closely follows:
+ * do_huge_pmd_anonymous_page()
+ * __do_huge_pmd_anonymous_page()
+ * except that the page being inserted is likely to be a device private page
+ * instead of an allocated or zero page.
+ */
+static int insert_huge_pmd_anonymous_page(struct vm_area_struct *vma,
+ unsigned long haddr,
+ struct page *page,
+...
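Only the body of the page-table walk survives the truncation above; its name and signature are cut off. A self-contained rendering of the same walk, with an assumed helper name and parameters, would be:

static pmd_t *assumed_alloc_pmd(struct mm_struct *mm, unsigned long addr)
{
        pgd_t *pgdp;
        p4d_t *p4dp;
        pud_t *pudp;

        /* Walk, allocating any missing levels, down to the PMD table. */
        pgdp = pgd_offset(mm, addr);
        p4dp = p4d_alloc(mm, pgdp, addr);
        if (!p4dp)
                return NULL;
        pudp = pud_alloc(mm, p4dp, addr);
        if (!pudp)
                return NULL;
        return pmd_alloc(mm, pudp, addr);
}

A caller such as the THP migration insert path would use the returned pmd_t * as the slot to fill with the huge entry for the faulting address.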
2020 Sep 02 · 10 · [PATCH v2 0/7] mm/hmm/nouveau: add THP migration to migrate_vma_*
This series adds support for transparent huge page migration to
migrate_vma_*() and adds nouveau SVM and HMM selftests as consumers.
An earlier version was posted [1]. This version now supports
splitting a THP midway through the migration process, which led to a
number of changes.
The patches apply cleanly to the current linux-mm tree. Since there
are a couple of patches in linux-mm from Dan