search for: locked_pgdat

Displaying 11 results from an estimated 11 matches for "locked_pgdat".

2019 Jun 26
2
[PATCH 04/25] mm: remove MEMORY_DEVICE_PUBLIC support
...@ void release_pages(struct page **pages, int nr)
>  		if (is_huge_zero_page(page))
>  			continue;
> 
> -		/* Device public page can not be huge page */
> -		if (is_device_public_page(page)) {
> -			if (locked_pgdat) {
> -				spin_unlock_irqrestore(&locked_pgdat->lru_lock,
> -						       flags);
> -				locked_pgdat = NULL;
> -			}
> -			put_devmap_managed...
2019 Jun 26
0
[PATCH 04/25] mm: remove MEMORY_DEVICE_PUBLIC support
...ge **pages, int nr)
> >  		if (is_huge_zero_page(page))
> >  			continue;
> > 
> > -		/* Device public page can not be huge page */
> > -		if (is_device_public_page(page)) {
> > -			if (locked_pgdat) {
> > -				spin_unlock_irqrestore(&locked_pgdat->lru_lock,
> > -						       flags);
> > -				locked_pgdat = NULL;
> > -			}
> > -...
2020 Sep 25
0
[PATCH 2/2] mm: remove extra ZONE_DEVICE struct page refcount
..._zone_device_page(page)) {
-		put_dev_pagemap(page->pgmap);
-
 		/*
 		 * The page belongs to the device that created pgmap. Do
 		 * not return it to page allocator.
 		 */
+		free_zone_device_page(page);
 		return;
 	}
@@ -848,30 +847,22 @@ void release_pages(struct page **pages, int nr)
 			locked_pgdat = NULL;
 		}
 
+		page = compound_head(page);
 		if (is_huge_zero_page(page))
 			continue;
+		if (!put_page_testzero(page))
+			continue;
+
 		if (is_zone_device_page(page)) {
 			if (locked_pgdat) {
 				spin_unlock_irqrestore(&locked_pgdat->lru_lock,
						       flags);
 				locked_pg...
2020 Oct 01
0
[RFC PATCH v3 2/2] mm: remove extra ZONE_DEVICE struct page refcount
...not return it to page allocator.
 		 */
+		free_zone_device_page(page);
 		return;
 	}
@@ -891,26 +890,18 @@ void release_pages(struct page **pages, int nr)
 		if (is_huge_zero_page(page))
 			continue;
+		if (!put_page_testzero(page))
+			continue;
+
 		if (is_zone_device_page(page)) {
 			if (locked_pgdat) {
 				spin_unlock_irqrestore(&locked_pgdat->lru_lock,
						       flags);
 				locked_pgdat = NULL;
 			}
-			/*
-			 * ZONE_DEVICE pages that return 'false' from
-			 * page_is_devmap_managed() do not require special
-			 * processing, and instead, expect a call to
-			 * put_pa...
2019 Jun 26
0
[PATCH 04/25] mm: remove MEMORY_DEVICE_PUBLIC support
...p.c b/mm/swap.c
index 7ede3eddc12a..83107410d29f 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -740,17 +740,6 @@ void release_pages(struct page **pages, int nr)
 		if (is_huge_zero_page(page))
 			continue;
 
-		/* Device public page can not be huge page */
-		if (is_device_public_page(page)) {
-			if (locked_pgdat) {
-				spin_unlock_irqrestore(&locked_pgdat->lru_lock,
-						       flags);
-				locked_pgdat = NULL;
-			}
-			put_devmap_managed_page(page);
-			continue;
-		}
-
 		page = compound_head(page);
 		if (!put_page_testzero(page))
 			continue;
-- 
2.20.1
2020 Sep 14
5
[PATCH] mm: remove extra ZONE_DEVICE struct page refcount
...* The page belongs to the device that created pgmap. Do
@@ -851,27 +872,19 @@ void release_pages(struct page **pages, int nr)
 		if (is_huge_zero_page(page))
 			continue;
+		page = compound_head(page);
+		if (!put_page_testzero(page))
+			continue;
+
 		if (is_zone_device_page(page)) {
 			if (locked_pgdat) {
 				spin_unlock_irqrestore(&locked_pgdat->lru_lock,
						       flags);
 				locked_pgdat = NULL;
 			}
-			/*
-			 * ZONE_DEVICE pages that return 'false' from
-			 * put_devmap_managed_page() do not require special
-			 * processing, and instead, expect a call to
-			 * put_p...
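To make the hunks above easier to follow, here is a condensed sketch of the per-page logic in release_pages() after this change. The helper name is hypothetical, locking and LRU batching are elided, and the body paraphrases the diff rather than reproducing the kernel code verbatim; the point is simply that ZONE_DEVICE pages now drop their reference through the ordinary put_page_testzero() path and are handed back to the device only once the count reaches zero.

/*
 * Hypothetical, condensed sketch of the per-page logic in release_pages()
 * after the patch above.  Locking and LRU batching are elided; this is a
 * paraphrase of the diff, not the verbatim kernel code.
 */
static void release_one_page_sketch(struct page *page)
{
	if (is_huge_zero_page(page))
		return;

	page = compound_head(page);

	/* Drop the reference; only the final put falls through. */
	if (!put_page_testzero(page))
		return;

	/* A truly free ZONE_DEVICE page goes back to its device, not the buddy. */
	if (is_zone_device_page(page)) {
		free_zone_device_page(page);
		return;
	}

	/* ... ordinary pages continue on to LRU removal and a batched free ... */
}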
2020 Sep 25
6
[RFC PATCH v2 0/2] mm: remove extra ZONE_DEVICE struct page refcount
Matthew Wilcox, Ira Weiny, and others have complained that ZONE_DEVICE struct page reference counting is ugly because such pages are "free" when the reference count is one instead of zero. This leads to explicit checks for ZONE_DEVICE pages in places like put_page(), GUP, THP splitting, and page migration, which have to adjust the expected reference count when determining if the page is
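The special casing being complained about can be pictured with a small, hypothetical helper (not actual kernel code): under the old scheme the refcount value that means "free" differs for ZONE_DEVICE pages, so generic code must first ask what kind of page it is holding before it can interpret the count.

/*
 * Hypothetical illustration of the complaint above; not actual kernel code.
 * With the extra ZONE_DEVICE reference, "free" means refcount == 1 for
 * device pages but refcount == 0 for everything else.
 */
static inline bool page_is_effectively_free(struct page *page)
{
	int free_count = is_zone_device_page(page) ? 1 : 0;

	return page_ref_count(page) == free_count;
}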
2020 Oct 01
8
[RFC PATCH v3 0/2] mm: remove extra ZONE_DEVICE struct page refcount
This is still an RFC because, after looking at the pmem/dax code some more, I realized that the ZONE_DEVICE struct pages are being inserted into the process's page tables with vmf_insert_mixed() and a zero refcount on the ZONE_DEVICE struct page. This is sort of OK because insert_pfn() increments the reference count on the pgmap, which is what prevents memunmap_pages() from freeing the struct
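A minimal sketch of the mapping path described in this cover letter, using a hypothetical handler name: the fault handler maps a ZONE_DEVICE pfn while the corresponding struct page still has a zero refcount, and (per the text above) it is the pgmap reference taken via insert_pfn() that keeps memunmap_pages() at bay.

/*
 * Hypothetical sketch of a dax-style fault path as described above; not the
 * actual fs/dax.c code.  The ZONE_DEVICE struct page behind @pfn may have a
 * zero refcount here; per the cover letter, the pgmap reference taken through
 * insert_pfn() is what prevents memunmap_pages() from tearing the pages down.
 */
static vm_fault_t example_dev_dax_fault(struct vm_fault *vmf, pfn_t pfn)
{
	return vmf_insert_mixed(vmf->vma, vmf->address, pfn);
}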
2020 Sep 16
0
[PATCH] mm: remove extra ZONE_DEVICE struct page refcount
...e)) {
-		__put_devmap_managed_page(page);
-
 		/*
 		 * The page belongs to the device that created pgmap. Do
 		 * not return it to page allocator.
 		 */
+		free_zone_device_page(page);
 		return;
 	}
@@ -923,7 +901,7 @@ void release_pages(struct page **pages, int nr)
						       flags);
 				locked_pgdat = NULL;
 			}
-			__put_devmap_managed_page(page);
+			free_zone_device_page(page);
 			return;
 		}
2020 Sep 17
0
[PATCH] mm: remove extra ZONE_DEVICE struct page refcount
...* The page belongs to the device that created pgmap. Do
>  		 * not return it to page allocator.
>  		 */
> +		free_zone_device_page(page);
>  		return;
>  	}
> 
> @@ -923,7 +901,7 @@ void release_pages(struct page **pages, int nr)
> 						flags);
>  				locked_pgdat = NULL;
>  			}
> -			__put_devmap_managed_page(page);
> +			free_zone_device_page(page);
>  			return;
>  		}
> 

Thanks for the review! I will apply the above in v2. I found a couple more reference count checks in fs/dax.c, so I need to run fstests with dax before s...
2019 Jun 26
41
dev_pagemap related cleanups v3
Hi Dan, Jérôme and Jason, below is a series that cleans up the dev_pagemap interface so that it is more easily usable, which removes the need to wrap it in hmm and thus allows a lot of code to be killed. Note: this series is on top of Linux 5.2-rc5 and has some minor conflicts with the hmm tree that are easy to resolve. Diffstat summary: 32 files changed, 361 insertions(+), 1012 deletions(-) Git