search for: xa_state

Displaying 7 results from an estimated 7 matches for "xa_state".

2020 Sep 25
1
[PATCH 1/2] ext4/xfs: add page refcount helper
...@ static struct page *dax_busy_page(void *entry)
 	for_each_mapped_pfn(entry, pfn) {
 		struct page *page = pfn_to_page(pfn);
-		if (page_ref_count(page) > 1)
+		if (!dax_layout_is_idle_page(page))
 			return page;
 	}
 	return NULL;
@@ -560,11 +560,11 @@ static void *grab_mapping_entry(struct xa_state *xas,
 /**
  * dax_layout_busy_page - find first pinned page in @mapping
- * @mapping: address space to scan for a page with ref count > 1
+ * @mapping: address space to scan for a page with ref count > 0
  *
  * DAX requires ZONE_DEVICE mapped pages. These pages are never
  * 'onlined'...
2020 Sep 25
0
[PATCH 1/2] ext4/xfs: add page refcount helper
...uct page *page = pfn_to_page(pfn);
>
> -		if (page_ref_count(page) > 1)
> +		if (!dax_layout_is_idle_page(page))
> 			return page;
> 	}
> 	return NULL;
> @@ -560,11 +560,11 @@ static void *grab_mapping_entry(struct xa_state *xas,
>
> /**
>  * dax_layout_busy_page - find first pinned page in @mapping
> - * @mapping: address space to scan for a page with ref count > 1
> + * @mapping: address space to scan for a page with ref count > 0
>  *
>  * DAX requires ZONE_DEVICE mapped pages. These...
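Both hits above quote the same hunk: the open-coded page_ref_count(page) > 1 test is replaced by a named helper. The helper's body is not shown in either excerpt; a minimal sketch, inferred only from the substitution visible above (idle is simply the negation of the old busy test), would be:

/*
 * Sketch inferred from the hunk above, not copied from the patch:
 * dax_layout_is_idle_page() replaces the open-coded busy check, so a
 * page counts as idle when it fails the old "busy" test.
 */
static inline bool dax_layout_is_idle_page(struct page *page)
{
	return page_ref_count(page) <= 1;	/* !(page_ref_count(page) > 1) */
}

Once the second patch of the series drops the extra ZONE_DEVICE reference, the idle threshold presumably drops to zero, as the "> 1" to "> 0" comment change in the hunks suggests.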
2022 Jul 08
0
[PATCH v2 07/19] mm/migrate: Convert expected_page_refs() to folio_expected_refs()
...rn expected_count;
> > +	refs += folio_nr_pages(folio);
> > +	if (folio_get_private(folio))
> > +		refs++;
> > +
> > +	return refs;
> > }
> >
> > /*
> > @@ -359,7 +364,7 @@ int folio_migrate_mapping(struct address_space *mapping,
> > 	XA_STATE(xas, &mapping->i_pages, folio_index(folio));
> > 	struct zone *oldzone, *newzone;
> > 	int dirty;
> > -	int expected_count = expected_page_refs(mapping, &folio->page) + extra_count;
> > +	int expected_count = folio_expected_refs(mapping, folio) + extra_count;...
2022 Jul 08
0
[PATCH v2 07/19] mm/migrate: Convert expected_page_refs() to folio_expected_refs()
...rn expected_count;
> > +	refs += folio_nr_pages(folio);
> > +	if (folio_get_private(folio))
> > +		refs++;
> > +
> > +	return refs;
> > }
> >
> > /*
> > @@ -359,7 +364,7 @@ int folio_migrate_mapping(struct address_space *mapping,
> > 	XA_STATE(xas, &mapping->i_pages, folio_index(folio));
> > 	struct zone *oldzone, *newzone;
> > 	int dirty;
> > -	int expected_count = expected_page_refs(mapping, &folio->page) + extra_count;
> > +	int expected_count = folio_expected_refs(mapping, folio) + extra_count;...
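The two identical hits above show the body of the new folio_expected_refs() but cut off its head. A hedged reconstruction, keeping the names visible in the excerpt and assuming the usual one base reference plus an early return when there is no mapping:

static int folio_expected_refs(struct address_space *mapping,
		struct folio *folio)
{
	int refs = 1;		/* assumed: the reference held by the caller */

	if (!mapping)		/* assumed: no mapping, nothing more to count */
		return refs;

	/* From the excerpt: one reference per constituent page of the
	 * folio, plus one more if the folio carries private data. */
	refs += folio_nr_pages(folio);
	if (folio_get_private(folio))
		refs++;

	return refs;
}

folio_migrate_mapping() then compares this expected value against the folio's actual refcount before replacing the mapping entry, as the second hunk in the excerpt shows.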
2020 Sep 25
6
[RFC PATCH v2 0/2] mm: remove extra ZONE_DEVICE struct page refcount
Matthew Wilcox, Ira Weiny, and others have complained that ZONE_DEVICE struct page reference counting is ugly because the pages are "free" when the reference count is one instead of zero. This leads to explicit checks for ZONE_DEVICE pages in places like put_page(), GUP, THP splitting, and page migration which have to adjust the expected reference count when determining if the page is...
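For context, the special casing this cover letter objects to has roughly the shape of the put_page() of that kernel era; the following is a simplified sketch for illustration, not text from the series:

/* Simplified sketch of the ~v5.9 put_page() shape, illustration only. */
static inline void put_page_sketch(struct page *page)
{
	page = compound_head(page);

	if (page_is_devmap_managed(page)) {
		/* ZONE_DEVICE special case: the "final" put happens when the
		 * refcount drops from 2 to 1, and the driver is notified. */
		put_devmap_managed_page(page);
		return;
	}

	/* every other page is freed when its refcount reaches 0 */
	if (put_page_testzero(page))
		__put_page(page);
}

Removing the extra reference would let device pages go through the ordinary dec-and-test path like any other page.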
2020 Oct 01
0
[RFC PATCH v3 2/2] mm: remove extra ZONE_DEVICE struct page refcount
..._locked(struct nouveau_drm *drm)
 		return NULL;
 	}
-	get_page(page);
+	init_page_count(page);
 	lock_page(page);
 	return page;
 }
diff --git a/fs/dax.c b/fs/dax.c
index 85c63f735909..4804348f62e6 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -560,14 +560,14 @@ static void *grab_mapping_entry(struct xa_state *xas,
 /**
  * dax_layout_busy_page_range - find first pinned page in @mapping
- * @mapping: address space to scan for a page with ref count > 1
+ * @mapping: address space to scan for a page with ref count > 0
  * @start: Starting offset. Page containing 'start' is included.
  * @e...
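The nouveau hunk shows the driver-side consequence: once free ZONE_DEVICE pages idle at refcount 0, an allocator must (re)initialize the count for the new user instead of taking an extra reference. A driver-agnostic sketch of that pattern (the free-list handling here is invented for illustration, not nouveau's actual allocator):

static struct page *devmem_page_alloc_sketch(struct list_head *free_pages)
{
	struct page *page;

	if (list_empty(free_pages))
		return NULL;
	page = list_first_entry(free_pages, struct page, lru);
	list_del(&page->lru);

	/* was get_page(page) while free device pages still held a
	 * reference; now the count starts at 1 for the new user */
	init_page_count(page);
	lock_page(page);
	return page;
}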
2020 Oct 01
8
[RFC PATCH v3 0/2] mm: remove extra ZONE_DEVICE struct page refcount
This is still an RFC because after looking at the pmem/dax code some more, I realized that the ZONE_DEVICE struct pages are being inserted into the process' page tables with vmf_insert_mixed() and a zero refcount on the ZONE_DEVICE struct page. This is sort of OK because insert_pfn() increments the reference count on the pgmap which is what prevents memunmap_pages() from freeing the struct...
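A heavily simplified sketch of the fault-path shape this cover letter worries about (not the actual fs/dax.c code): the pte is built straight from the pfn, so nothing takes a page reference, and the mapped ZONE_DEVICE page can sit at refcount 0 while only the dev_pagemap reference keeps the backing memory alive.

static vm_fault_t dax_insert_sketch(struct vm_fault *vmf, pfn_t pfn)
{
	/* no get_page()/page_ref manipulation anywhere on this path */
	return vmf_insert_mixed(vmf->vma, vmf->address, pfn);
}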