search for: expected_count

Displaying 19 results from an estimated 19 matches for "expected_count".

2022 Jul 08
0
[PATCH v2 07/19] mm/migrate: Convert expected_page_refs() to folio_expected_refs()
...ct mm_struct *mm, pmd_t *pmd)
> > }
> > #endif
> >
> > -static int expected_page_refs(struct address_space *mapping, struct page *page)
> > +static int folio_expected_refs(struct address_space *mapping,
> > +                struct folio *folio)
> > {
> > -        int expected_count = 1;
> > +        int refs = 1;
> > +        if (!mapping)
> > +                return refs;
> >
> > -        if (mapping)
> > -                expected_count += compound_nr(page) + page_has_private(page);
> > -        return expected_count;
> > +        refs += folio_nr_pages(folio);
> > +        if (folio_get_...
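For readability, a condensed sketch of the converted helper implied by the hunk above. The excerpt is truncated at "folio_get_", so the private-data check and the final return are assumed completions for illustration, not a verbatim quote of the posted patch.

/* Sketch only: everything after folio_nr_pages() is an assumed completion. */
static int folio_expected_refs(struct address_space *mapping,
                struct folio *folio)
{
        int refs = 1;

        if (!mapping)
                return refs;

        refs += folio_nr_pages(folio);
        if (folio_get_private(folio))   /* assumed continuation of "folio_get_..." */
                refs++;

        return refs;
}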
2022 Jul 08
0
[PATCH v2 07/19] mm/migrate: Convert expected_page_refs() to folio_expected_refs()
...ct mm_struct *mm, pmd_t *pmd)
> > }
> > #endif
> >
> > -static int expected_page_refs(struct address_space *mapping, struct page *page)
> > +static int folio_expected_refs(struct address_space *mapping,
> > +                struct folio *folio)
> > {
> > -        int expected_count = 1;
> > +        int refs = 1;
> > +        if (!mapping)
> > +                return refs;
> >
> > -        if (mapping)
> > -                expected_count += compound_nr(page) + page_has_private(page);
> > -        return expected_count;
> > +        refs += folio_nr_pages(folio);
> > +        if (folio_get_...
2019 Jun 26
0
[PATCH 04/25] mm: remove MEMORY_DEVICE_PUBLIC support
..._device_private_entry(new, pte_write(pte));
                pte = swp_entry_to_pte(entry);
-        } else if (is_device_public_page(new)) {
-                pte = pte_mkdevmap(pte);
        }
}

@@ -381,7 +379,6 @@ static int expected_page_refs(struct address_space *mapping, struct page *page)
         * ZONE_DEVICE pages.
         */
        expected_count += is_device_private_page(page);
-        expected_count += is_device_public_page(page);

        if (mapping)
                expected_count += hpage_nr_pages(page) + page_has_private(page);

@@ -994,10 +991,7 @@ static int move_to_new_page(struct page *newpage, struct page *page,
        if (!PageMappingFlags(page))
                page->...
2020 Oct 12
2
[PATCH v2] mm/hmm: make device private reference counts zero based
...ice_private_page(page))
                return;
-        }
        __ClearPageWaiters(page);

diff --git a/mm/migrate.c b/mm/migrate.c
index 5ca5842df5db..ee09334b46d8 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -380,11 +380,6 @@ static int expected_page_refs(struct address_space *mapping, struct page *page)
{
        int expected_count = 1;

-        /*
-         * Device private pages have an extra refcount as they are
-         * ZONE_DEVICE pages.
-         */
-        expected_count += is_device_private_page(page);

        if (mapping)
                expected_count += thp_nr_pages(page) + page_has_private(page);

diff --git a/mm/swap.c b/mm/swap.c
index 0eb057141a04..93d880c6...
2020 Sep 25
0
[PATCH 2/2] mm: remove extra ZONE_DEVICE struct page refcount
...urn;
+        default:
+                return;
+        }
+}
#endif /* CONFIG_DEV_PAGEMAP_OPS */

diff --git a/mm/migrate.c b/mm/migrate.c
index aecb1433cf3c..c51749c7cb9d 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -380,11 +380,6 @@ static int expected_page_refs(struct address_space *mapping, struct page *page)
{
        int expected_count = 1;

-        /*
-         * Device public or private pages have an extra refcount as they are
-         * ZONE_DEVICE pages.
-         */
-        expected_count += is_device_private_page(page);

        if (mapping)
                expected_count += thp_nr_pages(page) + page_has_private(page);

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index...
2020 Oct 01
0
[RFC PATCH v3 2/2] mm: remove extra ZONE_DEVICE struct page refcount
...urn;
+        default:
+                return;
+        }
+}
#endif /* CONFIG_DEV_PAGEMAP_OPS */

diff --git a/mm/migrate.c b/mm/migrate.c
index 5ca5842df5db..ee09334b46d8 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -380,11 +380,6 @@ static int expected_page_refs(struct address_space *mapping, struct page *page)
{
        int expected_count = 1;

-        /*
-         * Device private pages have an extra refcount as they are
-         * ZONE_DEVICE pages.
-         */
-        expected_count += is_device_private_page(page);

        if (mapping)
                expected_count += thp_nr_pages(page) + page_has_private(page);

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index eb0962976a...
2020 Oct 08
2
[PATCH] mm: make device private reference counts zero based
...ice_private_page(page))
                return;
-        }
        __ClearPageWaiters(page);

diff --git a/mm/migrate.c b/mm/migrate.c
index 5ca5842df5db..ee09334b46d8 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -380,11 +380,6 @@ static int expected_page_refs(struct address_space *mapping, struct page *page)
{
        int expected_count = 1;

-        /*
-         * Device private pages have an extra refcount as they are
-         * ZONE_DEVICE pages.
-         */
-        expected_count += is_device_private_page(page);

        if (mapping)
                expected_count += thp_nr_pages(page) + page_has_private(page);

diff --git a/mm/swap.c b/mm/swap.c
index 0eb057141a04..93d880c6...
2020 Sep 25
6
[RFC PATCH v2 0/2] mm: remove extra ZONE_DEVICE struct page refcount
Matthew Wilcox, Ira Weiny, and others have complained that ZONE_DEVICE struct page reference counting is ugly because the pages are "free" when the reference count is one instead of zero. This leads to explicit checks for ZONE_DEVICE pages in places like put_page(), GUP, THP splitting, and page migration, which have to adjust the expected reference count when determining if the page is
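The special case being complained about is visible in the migration hunks quoted in the other results on this page: before this series, expected_page_refs() open-coded the extra ZONE_DEVICE reference. A condensed sketch of the pre-patch helper, reconstructed from those hunks rather than copied verbatim from any one kernel tree:

static int expected_page_refs(struct address_space *mapping, struct page *page)
{
        int expected_count = 1;

        /*
         * Device private pages have an extra refcount as they are
         * ZONE_DEVICE pages.
         */
        expected_count += is_device_private_page(page);

        if (mapping)
                expected_count += thp_nr_pages(page) + page_has_private(page);

        return expected_count;
}

The patches in this thread drop the is_device_private_page() adjustment once device private pages move to a zero-based reference count, so migration can treat them like ordinary pages.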
2020 Oct 01
8
[RFC PATCH v3 0/2] mm: remove extra ZONE_DEVICE struct page refcount
This is still an RFC because after looking at the pmem/dax code some more, I realized that the ZONE_DEVICE struct pages are being inserted into the process' page tables with vmf_insert_mixed() and a zero refcount on the ZONE_DEVICE struct page. This is sort of OK because insert_pfn() increments the reference count on the pgmap which is what prevents memunmap_pages() from freeing the struct
2020 Sep 14
5
[PATCH] mm: remove extra ZONE_DEVICE struct page refcount
...field
 * may still contain a (stale) mapping value. For example, the

diff --git a/mm/migrate.c b/mm/migrate.c
index 4f89360d9e77..be1586582b52 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -380,11 +380,6 @@ static int expected_page_refs(struct address_space *mapping, struct page *page)
{
        int expected_count = 1;

-        /*
-         * Device private pages have an extra refcount as they are
-         * ZONE_DEVICE pages.
-         */
-        expected_count += is_device_private_page(page);

        if (mapping)
                expected_count += thp_nr_pages(page) + page_has_private(page);

diff --git a/mm/swap.c b/mm/swap.c
index feff680d3de9..8468e72e...
2019 Nov 12
0
[PATCH v3 06/14] RDMA/hfi1: Use mmu_interval_notifier_insert for user_exp_rcv
...}

 /*
@@ -139,7 +123,7 @@ int hfi1_user_exp_rcv_init(struct hfi1_filedata *fd,
         * init.
         */
        spin_lock(&fd->tid_lock);
-        if (uctxt->subctxt_cnt && fd->handler) {
+        if (uctxt->subctxt_cnt && fd->use_mn) {
                u16 remainder;

                fd->tid_limit = uctxt->expected_count / uctxt->subctxt_cnt;
@@ -158,18 +142,10 @@ void hfi1_user_exp_rcv_free(struct hfi1_filedata *fd)
{
        struct hfi1_ctxtdata *uctxt = fd->uctxt;

-        /*
-         * The notifier would have been removed when the process'es mm
-         * was freed.
-         */
-        if (fd->handler) {
-                hfi1_mmu_rb_unregister(f...
2019 Oct 28
1
[PATCH v2 06/15] RDMA/hfi1: Use mmu_range_notifier_inset for user_exp_rcv
...}

 /*
@@ -139,7 +123,7 @@ int hfi1_user_exp_rcv_init(struct hfi1_filedata *fd,
         * init.
         */
        spin_lock(&fd->tid_lock);
-        if (uctxt->subctxt_cnt && fd->handler) {
+        if (uctxt->subctxt_cnt && fd->use_mn) {
                u16 remainder;

                fd->tid_limit = uctxt->expected_count / uctxt->subctxt_cnt;
@@ -158,18 +142,10 @@ void hfi1_user_exp_rcv_free(struct hfi1_filedata *fd)
{
        struct hfi1_ctxtdata *uctxt = fd->uctxt;

-        /*
-         * The notifier would have been removed when the process'es mm
-         * was freed.
-         */
-        if (fd->handler) {
-                hfi1_mmu_rb_unregister(f...
2012 Nov 11
8
[PATCH v12 0/7] make balloon pages movable by compaction
Memory fragmentation introduced by ballooning might significantly reduce the number of 2MB contiguous memory blocks that can be used within a guest, thus imposing performance penalties associated with the reduced number of transparent huge pages that could be used by the guest workload. This patch-set follows the main idea discussed at the 2012 LSFMMS session: "Ballooning for transparent huge
2012 Nov 11
8
[PATCH v12 0/7] make balloon pages movable by compaction
Memory fragmentation introduced by ballooning might significantly reduce the number of 2MB contiguous memory blocks that can be used within a guest, thus imposing performance penalties associated with the reduced number of transparent huge pages that could be used by the guest workload. This patch-set follows the main idea discussed at the 2012 LSFMMS session: "Ballooning for transparent huge
2019 Jun 26
41
dev_pagemap related cleanups v3
Hi Dan, Jérôme and Jason, below is a series that cleans up the dev_pagemap interface so that it is more easily usable, which removes the need to wrap it in hmm and thus allows us to kill a lot of code. Note: this series is on top of Linux 5.2-rc5 and has some minor conflicts with the hmm tree that are easy to resolve. Diffstat summary: 32 files changed, 361 insertions(+), 1012 deletions(-). Git
2012 Nov 07
8
[PATCH v11 0/7] make balloon pages movable by compaction
Memory fragmentation introduced by ballooning might significantly reduce the number of 2MB contiguous memory blocks that can be used within a guest, thus imposing performance penalties associated with the reduced number of transparent huge pages that could be used by the guest workload. This patch-set follows the main idea discussed at the 2012 LSFMMS session: "Ballooning for transparent huge
2012 Nov 07
8
[PATCH v11 0/7] make balloon pages movable by compaction
Memory fragmentation introduced by ballooning might significantly reduce the number of 2MB contiguous memory blocks that can be used within a guest, thus imposing performance penalties associated with the reduced number of transparent huge pages that could be used by the guest workload. This patch-set follows the main idea discussed at the 2012 LSFMMS session: "Ballooning for transparent huge
2019 Nov 12
20
[PATCH hmm v3 00/14] Consolidate the mmu notifier interval_tree and locking
From: Jason Gunthorpe <jgg at mellanox.com> 8 of the mmu_notifier-using drivers (i915_gem, radeon_mn, umem_odp, hfi1, scif_dma, vhost, gntdev, hmm) are using a common pattern where they only use invalidate_range_start/end and immediately check the invalidating range against some driver data structure to tell if the driver is interested. Half of them use an interval_tree, the others
2019 Oct 28
32
[PATCH v2 00/15] Consolidate the mmu notifier interval_tree and locking
From: Jason Gunthorpe <jgg at mellanox.com> 8 of the mmu_notifier-using drivers (i915_gem, radeon_mn, umem_odp, hfi1, scif_dma, vhost, gntdev, hmm) are using a common pattern where they only use invalidate_range_start/end and immediately check the invalidating range against some driver data structure to tell if the driver is interested. Half of them use an interval_tree, the others
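To make the described pattern concrete, here is a minimal sketch of a driver that only implements invalidate_range_start() and checks the invalidated range against a region it tracks itself. Every driver-side name (my_data, my_invalidate_range_start, my_notifier_ops) is hypothetical and not taken from any of the drivers listed in the cover letter; only the mmu_notifier callback shape comes from the kernel API this series builds on.

/*
 * Sketch of the open-coded range check the cover letter describes.
 * Only invalidate_range_start is shown; real drivers pair it with
 * invalidate_range_end and their own sequence counting.
 */
#include <linux/mmu_notifier.h>
#include <linux/spinlock.h>

struct my_data {
        struct mmu_notifier mn;
        unsigned long start, end;       /* region this driver cares about */
        spinlock_t lock;
        bool invalidated;
};

static int my_invalidate_range_start(struct mmu_notifier *mn,
                                     const struct mmu_notifier_range *range)
{
        struct my_data *d = container_of(mn, struct my_data, mn);

        /* Ignore invalidations that do not overlap the tracked region. */
        if (range->end <= d->start || range->start >= d->end)
                return 0;

        spin_lock(&d->lock);
        d->invalidated = true;          /* driver-specific reaction goes here */
        spin_unlock(&d->lock);
        return 0;
}

static const struct mmu_notifier_ops my_notifier_ops = {
        .invalidate_range_start = my_invalidate_range_start,
};

The series consolidates open-coded checks like this into a shared interval-tree based helper (mmu_interval_notifier_insert in v3, mmu_range_notifier in v2), so each driver no longer maintains its own range-tracking structure and locking.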