search for: pmd_lock

Displaying 18 results from an estimated 18 matches for "pmd_lock".

2019 Jul 26
0
[PATCH v2 2/7] mm/hmm: a few more C style and comment clean ups
...walk->last = addr; - pmd_migration_entry_wait(vma->vm_mm, pmdp); + pmd_migration_entry_wait(walk->mm, pmdp); return -EBUSY; } return 0; @@ -657,11 +653,11 @@ static int hmm_vma_walk_pmd(pmd_t *pmdp, if (pmd_devmap(pmd) || pmd_trans_huge(pmd)) { /* - * No need to take pmd_lock here, even if some other threads + * No need to take pmd_lock here, even if some other thread * is splitting the huge pmd we will get that event through * mmu_notifier callback. * - * So just read pmd value and check again its a transparent + * So just read pmd value and check aga...
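The comment in this hunk states the locking rule for hmm_vma_walk_pmd(): the pmd can be read without taking pmd_lock because a concurrent THP split is reported through the mmu_notifier callback, after which the walk is retried. A minimal sketch of that lockless read-and-dispatch pattern follows; the handle_*() helpers are hypothetical placeholders, not the real mm/hmm.c code.

#include <linux/mm.h>

/* Sketch only: read the pmd once without taking pmd_lock.  If another
 * thread splits the huge pmd concurrently, the mmu_notifier
 * invalidation forces the walker to retry, so a stale read is safe. */
static int sketch_walk_pmd(pmd_t *pmdp, unsigned long start,
			   unsigned long end)
{
	pmd_t pmd = READ_ONCE(*pmdp);

	if (pmd_none(pmd))
		return handle_hole(start, end);			/* hypothetical */

	if (pmd_devmap(pmd) || pmd_trans_huge(pmd))
		return handle_huge_pmd(pmd, start, end);	/* hypothetical */

	return handle_pte_range(pmdp, start, end);		/* hypothetical */
}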
2020 Nov 06
0
[PATCH v3 3/6] mm: support THP migration to device private memory
...+{ + spinlock_t *ptl; + + VM_BUG_ON_PAGE(is_huge_zero_page(head), head); + VM_BUG_ON_PAGE(!PageLocked(head), head); + VM_BUG_ON_PAGE(!PageHead(head), head); + VM_BUG_ON_PAGE(PageWriteback(head), head); + VM_BUG_ON_PAGE(PageLRU(head), head); + VM_BUG_ON_PAGE(compound_mapcount(head), head); + + ptl = pmd_lock(vma->vm_mm, pmd); + __split_huge_pmd_locked(vma, pmd, address, false); + spin_unlock(ptl); + + return __split_huge_page_to_list(head, NULL, false); +} + void free_transhuge_page(struct page *page) { struct deferred_split *ds_queue = get_deferred_split_queue(page); @@ -2766,9 +2836,11 @@ void...
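The new helper in this hunk follows the usual shape for modifying a huge pmd: pmd_lock() takes and returns the spinlock protecting that pmd, the pmd is changed while the lock is held, and spin_unlock() releases it. A stripped-down sketch of that pattern, with a placeholder body instead of the patch's __split_huge_pmd_locked() call:

#include <linux/mm.h>
#include <linux/spinlock.h>

/* Sketch only: serialize against other users of this pmd by holding the
 * lock pmd_lock() hands back for the duration of the modification. */
static void modify_pmd_under_lock(struct vm_area_struct *vma, pmd_t *pmd)
{
	spinlock_t *ptl = pmd_lock(vma->vm_mm, pmd);

	/* ... modify *pmd here, e.g. split or clear it ... */

	spin_unlock(ptl);
}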
2016 Oct 21
0
[RESEND PATCH v3 kernel 3/7] mm: add a function to get the max pfn
...redhat.com> --- include/linux/mm.h | 1 + mm/page_alloc.c | 10 ++++++++++ 2 files changed, 11 insertions(+) diff --git a/include/linux/mm.h b/include/linux/mm.h index ffbd729..2a89da0e 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -1776,6 +1776,7 @@ static inline spinlock_t *pmd_lock(struct mm_struct *mm, pmd_t *pmd) extern void free_area_init_node(int nid, unsigned long * zones_size, unsigned long zone_start_pfn, unsigned long *zholes_size); extern void free_initmem(void); +extern unsigned long get_max_pfn(void); /* * Free reserved pages within range [PAGE_ALIGN(star...
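The context line in this hunk is the pmd_lock() helper itself; in mainline include/linux/mm.h it is defined roughly as below (which lock pmd_lockptr() picks depends on whether split pmd ptlocks are configured). The patch only adds the get_max_pfn() declaration next to it.

/* Roughly the include/linux/mm.h helper shown as context above: look up
 * the spinlock guarding this pmd, take it, and return it so the caller
 * can spin_unlock() it later. */
static inline spinlock_t *pmd_lock(struct mm_struct *mm, pmd_t *pmd)
{
	spinlock_t *ptl = pmd_lockptr(mm, pmd);

	spin_lock(ptl);
	return ptl;
}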
2020 Jun 19
0
[PATCH 13/16] mm: support THP migration to device private memory
...pmd; again: - if (pmd_none(*pmdp)) + pmd = READ_ONCE(*pmdp); + if (pmd_none(pmd)) return migrate_vma_collect_hole(start, end, -1, walk); - if (pmd_trans_huge(*pmdp)) { + if (pmd_trans_huge(pmd) || !pmd_present(pmd)) { struct page *page; + unsigned long write = 0; + int ret; ptl = pmd_lock(mm, pmdp); - if (unlikely(!pmd_trans_huge(*pmdp))) { - spin_unlock(ptl); - goto again; - } + if (pmd_trans_huge(*pmdp)) { + page = pmd_page(*pmdp); + if (is_huge_zero_page(page)) { + spin_unlock(ptl); + return migrate_vma_collect_hole(start, end, -1, + walk); + } + if (p...
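The diff replaces repeated dereferences of *pmdp with a single READ_ONCE() and then revalidates the pmd after taking pmd_lock(), since the pmd can be split between the lockless read and the locked section. A condensed sketch of that read/lock/recheck idiom, with placeholder return values rather than the migrate_vma logic:

#include <linux/mm.h>

/* Sketch only: sample the pmd locklessly, then re-test it under
 * pmd_lock() because it may have changed (e.g. been split) in between. */
static int collect_pmd_sketch(struct mm_struct *mm, pmd_t *pmdp)
{
	pmd_t pmd = READ_ONCE(*pmdp);
	spinlock_t *ptl;

	if (pmd_none(pmd))
		return 0;				/* treat as a hole */

	if (pmd_trans_huge(pmd) || !pmd_present(pmd)) {
		ptl = pmd_lock(mm, pmdp);
		if (!pmd_trans_huge(*pmdp) && pmd_present(*pmdp)) {
			/* Raced with a split; caller retries at PTE level. */
			spin_unlock(ptl);
			return -EAGAIN;
		}
		/* ... handle the huge or migrating pmd under ptl ... */
		spin_unlock(ptl);
	}
	return 0;
}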
2020 Jun 21
2
[PATCH 13/16] mm: support THP migration to device private memory
...READ_ONCE(*pmdp); > + if (pmd_none(pmd)) > return migrate_vma_collect_hole(start, end, -1, walk); > > - if (pmd_trans_huge(*pmdp)) { > + if (pmd_trans_huge(pmd) || !pmd_present(pmd)) { > struct page *page; > + unsigned long write = 0; > + int ret; > > ptl = pmd_lock(mm, pmdp); > - if (unlikely(!pmd_trans_huge(*pmdp))) { > - spin_unlock(ptl); > - goto again; > - } > + if (pmd_trans_huge(*pmdp)) { > + page = pmd_page(*pmdp); > + if (is_huge_zero_page(page)) { > + spin_unlock(ptl); > + return migrate_vma_collect_hole(st...
2016 Nov 30
0
[PATCH kernel v5 5/5] virtio-balloon: tell host vm's unused page info
...IO_BALLOON_F_PAGE_BITMAP, + VIRTIO_BALLOON_F_HOST_REQ_VQ, }; static struct virtio_driver virtio_balloon_driver = { diff --git a/include/linux/mm.h b/include/linux/mm.h index a92c8d7..e05ca86 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -1772,7 +1772,8 @@ static inline spinlock_t *pmd_lock(struct mm_struct *mm, pmd_t *pmd) extern void free_area_init_node(int nid, unsigned long * zones_size, unsigned long zone_start_pfn, unsigned long *zholes_size); extern void free_initmem(void); - +extern int get_unused_pages(unsigned long *unused_pages, unsigned long size, + int order, unsi...
2020 Nov 06
12
[PATCH v3 0/6] mm/hmm/nouveau: add THP migration to migrate_vma_*
...ches apply cleanly to the linux-mm 5.10.0-rc2 tree. There are a lot of other THP patches being posted. I don't think there are any semantic conflicts but there may be some merge conflicts depending on the order Andrew applies these. Changes in v3: Sent the patch ("mm/thp: fix __split_huge_pmd_locked() for migration PMD") as a separate patch from this series. Rebased to linux-mm 5.10.0-rc2. Changes in v2: Added splitting a THP midway in the migration process: i.e., in migrate_vma_pages(). [1] https://lore.kernel.org/linux-mm/20200619215649.32297-1-rcampbell at nvidia.com [2] https://l...
2020 Sep 02
10
[PATCH v2 0/7] mm/hmm/nouveau: add THP migration to migrate_vma_*
...pu/drm/nouveau/nouveau_dmem.c, it might be easiest if Andrew could take these through the linux-mm tree assuming that's OK with other maintainers like Ben Skeggs. [1] https://lore.kernel.org/linux-mm/20200619215649.32297-1-rcampbell at nvidia.com Ralph Campbell (7): mm/thp: fix __split_huge_pmd_locked() for migration PMD mm/migrate: move migrate_vma_collect_skip() mm: support THP migration to device private memory mm/thp: add prep_transhuge_device_private_page() mm/thp: add THP allocation helper mm/hmm/test: add self tests for THP migration nouveau: support THP migration to private...
2019 Jul 26
13
[PATCH v2 0/7] mm/hmm: more HMM clean up
Here are seven more patches for things I found to clean up. This was based on top of Christoph's seven patches: "hmm_range_fault related fixes and legacy API removal v3". I assume this will go into Jason's tree since there will likely be more HMM changes in this cycle. Changes from v1 to v2: Added AMD GPU to hmm_update removal. Added 2 patches from Christoph. Added 2 patches as
2020 Jun 19
22
[PATCH 00/16] mm/hmm/nouveau: THP mapping and migration
These patches apply to linux-5.8.0-rc1. Patches 1-3 should probably go into 5.8; the others can be queued for 5.9. Patches 4-6 improve the HMM self tests. Patches 7-8 prepare nouveau for the meat of this series, which adds support and testing for compound page mapping of system memory (patches 9-11) and compound page migration to device private memory (patches 12-16). Since these changes are split
2016 Nov 02
8
[PATCH kernel v4 0/7] Extend virtio-balloon for fast (de)inflating & fast live migration
This patch set contains two parts of changes to the virtio-balloon. One is the change for speeding up the inflating & deflating process; the main idea of this optimization is to use a bitmap to send the page information to the host instead of the PFNs, to reduce the overhead of virtio data transmission, address translation and madvise(). This can help to improve the performance by about 85%.
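The optimization described here is purely about the volume of data crossing the virtio ring: one bit per page in a covered range instead of one PFN per page. A small illustrative calculation, assuming 4 KiB pages and 64-bit PFNs (the figures are examples, not taken from the patches):

#include <stdio.h>

/* Illustrative only: compare the bytes needed to describe N free pages
 * as an explicit PFN list versus as a one-bit-per-page bitmap. */
int main(void)
{
	unsigned long pages = 1UL << 20;	/* 4 GiB worth of 4 KiB pages */
	unsigned long pfn_list_bytes = pages * sizeof(unsigned long);
	unsigned long bitmap_bytes = pages / 8;

	printf("PFN list: %lu KiB, bitmap: %lu KiB\n",
	       pfn_list_bytes >> 10, bitmap_bytes >> 10);
	return 0;
}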
2016 Nov 02
8
[PATCH kernel v4 0/7] Extend virtio-balloon for fast (de)inflating & fast live migration
This patch set contains two parts of changes to the virtio-balloon. One is the change for speeding up the inflating & deflating process; the main idea of this optimization is to use a bitmap to send the page information to the host instead of the PFNs, to reduce the overhead of virtio data transmission, address translation and madvise(). This can help to improve the performance by about 85%.
2016 Nov 30
8
[PATCH kernel v5 0/5] Extend virtio-balloon for fast (de)inflating & fast live migration
This patch set contains two parts of changes to the virtio-balloon. One is the change for speeding up the inflating & deflating process; the main idea of this optimization is to use a bitmap to send the page information to the host instead of the PFNs, to reduce the overhead of virtio data transmission, address translation and madvise(). This can help to improve the performance by about 85%.
2016 Nov 30
8
[PATCH kernel v5 0/5] Extend virtio-balloon for fast (de)inflating & fast live migration
This patch set contains two parts of changes to the virtio-balloon. One is the change for speeding up the inflating & deflating process; the main idea of this optimization is to use a bitmap to send the page information to the host instead of the PFNs, to reduce the overhead of virtio data transmission, address translation and madvise(). This can help to improve the performance by about 85%.
2016 Dec 21
12
[PATCH v6 kernel 0/5] Extend virtio-balloon for fast (de)inflating & fast live migration
This patch set contains two parts of changes to the virtio-balloon. One is the change for speeding up the inflating & deflating process; the main idea of this optimization is to use {pfn|length} pairs to represent the page information instead of the PFNs, to reduce the overhead of virtio data transmission, address translation and madvise(). This can help to improve the performance by about 85%.
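v6 drops the bitmap in favour of {pfn|length} pairs, i.e. run-length encoding of contiguous free ranges, which stays compact even when free pages are sparse. A small illustrative sketch of producing such pairs from a sorted list of free PFNs (not the patch's actual code):

#include <stdio.h>
#include <stddef.h>

/* Illustrative only: run-length encode sorted free PFNs into
 * {start_pfn, length} pairs, the representation v6 describes. */
static void emit_runs(const unsigned long *pfns, size_t n)
{
	size_t i = 0;

	while (i < n) {
		unsigned long start = pfns[i];
		unsigned long len = 1;

		while (i + len < n && pfns[i + len] == start + len)
			len++;
		printf("{pfn=%lu, len=%lu}\n", start, len);
		i += len;
	}
}

int main(void)
{
	unsigned long free_pfns[] = { 10, 11, 12, 40, 41, 100 };

	emit_runs(free_pfns, sizeof(free_pfns) / sizeof(free_pfns[0]));
	return 0;
}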
2016 Dec 21
12
[PATCH v6 kernel 0/5] Extend virtio-balloon for fast (de)inflating & fast live migration
This patch set contains two parts of changes to the virtio-balloon. One is the change for speeding up the inflating & deflating process; the main idea of this optimization is to use {pfn|length} pairs to represent the page information instead of the PFNs, to reduce the overhead of virtio data transmission, address translation and madvise(). This can help to improve the performance by about 85%.
2016 Oct 21
16
[RESEND PATCH v3 kernel 0/7] Extend virtio-balloon for fast (de)inflating & fast live migration
This patch set contains two parts of changes to the virtio-balloon. One is the change for speeding up the inflating & deflating process; the main idea of this optimization is to use a bitmap to send the page information to the host instead of the PFNs, to reduce the overhead of virtio data transmission, address translation and madvise(). This can help to improve the performance by about 85%.
2016 Oct 21
16
[RESEND PATCH v3 kernel 0/7] Extend virtio-balloon for fast (de)inflating & fast live migration
This patch set contains two parts of changes to the virtio-balloon. One is the change for speeding up the inflating & deflating process; the main idea of this optimization is to use a bitmap to send the page information to the host instead of the PFNs, to reduce the overhead of virtio data transmission, address translation and madvise(). This can help to improve the performance by about 85%.