search for: page_ref_count

Displaying 19 results from an estimated 19 matches for "page_ref_count".

2020 Nov 09
3
[PATCH v3 3/6] mm: support THP migration to device private memory
On Fri, Nov 06, 2020 at 01:26:50PM -0800, Ralph Campbell wrote: > > On 11/6/20 12:03 AM, Christoph Hellwig wrote: >> I hate the extra pin count magic here. IMHO we really need to finish >> off the series to get rid of the extra references on the ZONE_DEVICE >> pages first. > > First, thanks for the review comments. > > I don't like the extra refcount
2020 Nov 11
0
[PATCH v3 3/6] mm: support THP migration to device private memory
...interesting case, any specific > help that you need with that? There are 4 types of ZONE_DEVICE struct pages: MEMORY_DEVICE_PRIVATE, MEMORY_DEVICE_FS_DAX, MEMORY_DEVICE_GENERIC, and MEMORY_DEVICE_PCI_P2PDMA. Currently, memremap_pages() allocates struct pages for a physical address range with a page_ref_count(page) of one and increments the pgmap->ref per CPU reference count by the number of pages created since each ZONE_DEVICE struct page has a pointer to the pgmap. The struct pages are not freed until memunmap_pages() is called which calls put_page() which calls put_dev_pagemap() which releases a...
2020 Sep 25
1
[PATCH 1/2] ext4/xfs: add page refcount helper
...s/dax.c b/fs/dax.c index 994ab66a9907..8eddbcc0e149 100644 --- a/fs/dax.c +++ b/fs/dax.c @@ -358,7 +358,7 @@ static void dax_disassociate_entry(void *entry, struct address_space *mapping, for_each_mapped_pfn(entry, pfn) { struct page *page = pfn_to_page(pfn); - WARN_ON_ONCE(trunc && page_ref_count(page) > 1); + WARN_ON_ONCE(trunc && !dax_layout_is_idle_page(page)); WARN_ON_ONCE(page->mapping && page->mapping != mapping); page->mapping = NULL; page->index = 0; @@ -372,7 +372,7 @@ static struct page *dax_busy_page(void *entry) for_each_mapped_pfn(entr...
2020 Oct 01
0
[RFC PATCH v3 1/2] ext4/xfs: add page refcount helper
...s/dax.c b/fs/dax.c index 5b47834f2e1b..85c63f735909 100644 --- a/fs/dax.c +++ b/fs/dax.c @@ -358,7 +358,7 @@ static void dax_disassociate_entry(void *entry, struct address_space *mapping, for_each_mapped_pfn(entry, pfn) { struct page *page = pfn_to_page(pfn); - WARN_ON_ONCE(trunc && page_ref_count(page) > 1); + WARN_ON_ONCE(trunc && !dax_layout_is_idle_page(page)); WARN_ON_ONCE(page->mapping && page->mapping != mapping); page->mapping = NULL; page->index = 0; @@ -372,7 +372,7 @@ static struct page *dax_busy_page(void *entry) for_each_mapped_pfn(entr...
2020 Sep 25
0
[PATCH 1/2] ext4/xfs: add page refcount helper
...-- a/fs/dax.c > +++ b/fs/dax.c > @@ -358,7 +358,7 @@ static void dax_disassociate_entry(void *entry, struct address_space *mapping, > for_each_mapped_pfn(entry, pfn) { > struct page *page = pfn_to_page(pfn); > > - WARN_ON_ONCE(trunc && page_ref_count(page) > 1); > + WARN_ON_ONCE(trunc && !dax_layout_is_idle_page(page)); > WARN_ON_ONCE(page->mapping && page->mapping != mapping); > page->mapping = NULL; > page->index = 0; > @@ -372,7 +372,7...
2020 Sep 25
6
[RFC PATCH v2 0/2] mm: remove extra ZONE_DEVICE struct page refcount
Matthew Wilcox, Ira Weiny, and others have complained that ZONE_DEVICE struct page reference counting is ugly because they are "free" when the reference count is one instead of zero. This leads to explicit checks for ZONE_DEVICE pages in places like put_page(), GUP, THP splitting, and page migration which have to adjust the expected reference count when determining if the page is
2020 Sep 26
1
[PATCH 2/2] mm: remove extra ZONE_DEVICE struct page refcount
...ux/dax.h > index 3f78ed78d1d6..8d29f38645aa 100644 > --- a/include/linux/dax.h > +++ b/include/linux/dax.h > @@ -240,7 +240,7 @@ static inline bool dax_mapping(struct address_space *mapping) > > static inline bool dax_layout_is_idle_page(struct page *page) > { > - return page_ref_count(page) <= 1; > + return page_ref_count(page) == 0; > } > > #endif > diff --git a/include/linux/memremap.h b/include/linux/memremap.h > index e5862746751b..f9224f88e4cd 100644 > --- a/include/linux/memremap.h > +++ b/include/linux/memremap.h > @@ -65,9 +65,10 @@ enum...
2020 Oct 01
8
[RFC PATCH v3 0/2] mm: remove extra ZONE_DEVICE struct page refcount
This is still an RFC because after looking at the pmem/dax code some more, I realized that the ZONE_DEVICE struct pages are being inserted into the process' page tables with vmf_insert_mixed() and a zero refcount on the ZONE_DEVICE struct page. This is sort of OK because insert_pfn() increments the reference count on the pgmap which is what prevents memunmap_pages() from freeing the struct
2019 Oct 16
4
[PATCH RFC v3 6/9] mm: Allow to offline PageOffline() pages with a reference count of 0
...he physical memory remove. [...] > diff --git a/mm/page_alloc.c b/mm/page_alloc.c > index d5d7944954b3..fef74720d8b4 100644 > --- a/mm/page_alloc.c > +++ b/mm/page_alloc.c > @@ -8221,6 +8221,15 @@ bool has_unmovable_pages(struct zone *zone, struct page *page, int count, > if (!page_ref_count(page)) { > if (PageBuddy(page)) > iter += (1 << page_order(page)) - 1; > + /* > + * Memory devices allow to offline a page if it is > + * marked PG_offline and has a reference count of 0. > + * However, the pages are not movable as it would be > + * req...
2020 Sep 25
0
[PATCH 2/2] mm: remove extra ZONE_DEVICE struct page refcount
...-git a/include/linux/dax.h b/include/linux/dax.h index 3f78ed78d1d6..8d29f38645aa 100644 --- a/include/linux/dax.h +++ b/include/linux/dax.h @@ -240,7 +240,7 @@ static inline bool dax_mapping(struct address_space *mapping) static inline bool dax_layout_is_idle_page(struct page *page) { - return page_ref_count(page) <= 1; + return page_ref_count(page) == 0; } #endif diff --git a/include/linux/memremap.h b/include/linux/memremap.h index e5862746751b..f9224f88e4cd 100644 --- a/include/linux/memremap.h +++ b/include/linux/memremap.h @@ -65,9 +65,10 @@ enum memory_type { struct dev_pagemap_ops {...
2020 Oct 01
0
[RFC PATCH v3 2/2] mm: remove extra ZONE_DEVICE struct page refcount
...-git a/include/linux/dax.h b/include/linux/dax.h index f906cf4db1cc..e4920ea6abd3 100644 --- a/include/linux/dax.h +++ b/include/linux/dax.h @@ -245,7 +245,7 @@ static inline bool dax_mapping(struct address_space *mapping) static inline bool dax_layout_is_idle_page(struct page *page) { - return page_ref_count(page) == 1; + return page_ref_count(page) == 0; } #define dax_wait_page(_inode, _page, _wait_cb) \ diff --git a/include/linux/memremap.h b/include/linux/memremap.h index 86c6c368ce9b..2f63747caf56 100644 --- a/include/linux/memremap.h +++ b/include/linux/memremap.h @@ -66,9 +66,10 @@ enum me...
2022 Jul 08
0
[PATCH v2 07/19] mm/migrate: Convert expected_page_refs() to folio_expected_refs()
...che, and probably for some others. > > I say "dangerously" because it tells page migration a swapcache page > is safe for migration when it certainly is not. > > The fun that typically ensues is kernel BUG at include/linux/mm.h:750! > put_page_testzero() VM_BUG_ON_PAGE(page_ref_count(page) == 0, page), > if CONFIG_DEBUG_VM=y (bisecting for that is what brought me to this). > But I guess you might get silent data corruption too. > > I assumed at first that you'd changed the rules, and were now expecting > any subsystem that puts a non-zero value into folio-&g...
2019 Oct 16
0
[PATCH RFC v3 6/9] mm: Allow to offline PageOffline() pages with a reference count of 0
...t; [...] >> diff --git a/mm/page_alloc.c b/mm/page_alloc.c >> index d5d7944954b3..fef74720d8b4 100644 >> --- a/mm/page_alloc.c >> +++ b/mm/page_alloc.c >> @@ -8221,6 +8221,15 @@ bool has_unmovable_pages(struct zone *zone, struct page *page, int count, >> if (!page_ref_count(page)) { >> if (PageBuddy(page)) >> iter += (1 << page_order(page)) - 1; >> + /* >> + * Memory devices allow to offline a page if it is >> + * marked PG_offline and has a reference count of 0. >> + * However, the pages are not movable as...
2019 Sep 19
0
[PATCH RFC v3 6/9] mm: Allow to offline PageOffline() pages with a reference count of 0
...= "failure to isolate range"; goto failed_removal; diff --git a/mm/page_alloc.c b/mm/page_alloc.c index d5d7944954b3..fef74720d8b4 100644 --- a/mm/page_alloc.c +++ b/mm/page_alloc.c @@ -8221,6 +8221,15 @@ bool has_unmovable_pages(struct zone *zone, struct page *page, int count, if (!page_ref_count(page)) { if (PageBuddy(page)) iter += (1 << page_order(page)) - 1; + /* + * Memory devices allow to offline a page if it is + * marked PG_offline and has a reference count of 0. + * However, the pages are not movable as it would be + * required e.g., for alloc_contig_range(...
2019 Oct 16
3
[PATCH RFC v3 6/9] mm: Allow to offline PageOffline() pages with a reference count of 0
.../mm/page_alloc.c b/mm/page_alloc.c > > > index d5d7944954b3..fef74720d8b4 100644 > > > --- a/mm/page_alloc.c > > > +++ b/mm/page_alloc.c > > > @@ -8221,6 +8221,15 @@ bool has_unmovable_pages(struct zone *zone, struct page *page, int count, > > > if (!page_ref_count(page)) { > > > if (PageBuddy(page)) > > > iter += (1 << page_order(page)) - 1; > > > + /* > > > + * Memory devices allow to offline a page if it is > > > + * marked PG_offline and has a reference count of 0. > > > + * Ho...
2019 Sep 19
14
[PATCH RFC v3 0/9] virtio-mem: paravirtualized memory
Long time no RFC! I finally had time to get the next version of the Linux driver side of virtio-mem into shape, incorporating ideas and feedback from previous discussions. This RFC is based on the series currently on the mm list: - [PATCH 0/3] Remove __online_page_set_limits() - [PATCH v1 0/3] mm/memory_hotplug: Export generic_online_page() - [PATCH v4 0/8] mm/memory_hotplug: Shrink zones before