Displaying 20 results from an estimated 31 matches for "put_dev_pagemap".
2019 Aug 07
2
[PATCH 04/15] mm: remove the pgmap field from struct hmm_vma_walk
...t is the point of checking
that the pgmap exists for the page and then immediately releasing it?
This code has this pattern in several places.
It feels racy
> }
> pfns[i] = hmm_device_entry_from_pfn(range, pfn) | cpu_flags;
> }
> - if (hmm_vma_walk->pgmap) {
> - put_dev_pagemap(hmm_vma_walk->pgmap);
> - hmm_vma_walk->pgmap = NULL;
Putting the value in the hmm_vma_walk would have made some sense to me
if the pgmap was not set to NULL all over the place. Then most of the
xa_loads would be eliminated, as I would expect the pgmap tends to be
mostly uniform for these u...
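A minimal sketch of the caching pattern under discussion, condensed from
the hmm_vma_handle_pmd() hunk quoted in this thread; the device-pfn check
and error paths are simplified, and the function name is made up for
illustration:

static int scan_device_pfns_sketch(struct hmm_range *range,
				   unsigned long pfn, unsigned long npages,
				   uint64_t *pfns, uint64_t cpu_flags)
{
	/* Cache the pagemap in a local instead of hmm_vma_walk; since
	 * get_dev_pagemap() short-circuits when the pfn still falls in
	 * the cached pgmap, most xa_load() lookups are avoided. */
	struct dev_pagemap *pgmap = NULL;
	unsigned long i;

	for (i = 0; i < npages; i++, pfn++) {
		pgmap = get_dev_pagemap(pfn, pgmap);
		if (unlikely(!pgmap))
			return -EBUSY;
		pfns[i] = hmm_device_entry_from_pfn(range, pfn) | cpu_flags;
	}
	if (pgmap)
		put_dev_pagemap(pgmap);	/* one put at the end of the scan */
	return 0;
}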
2019 Aug 06
0
[PATCH 04/15] mm: remove the pgmap field from struct hmm_vma_walk
...p = get_dev_pagemap(pfn,
- hmm_vma_walk->pgmap);
- if (unlikely(!hmm_vma_walk->pgmap))
+ pgmap = get_dev_pagemap(pfn, pgmap);
+ if (unlikely(!pgmap))
return -EBUSY;
}
pfns[i] = hmm_device_entry_from_pfn(range, pfn) | cpu_flags;
}
- if (hmm_vma_walk->pgmap) {
- put_dev_pagemap(hmm_vma_walk->pgmap);
- hmm_vma_walk->pgmap = NULL;
- }
+ if (pgmap)
+ put_dev_pagemap(pgmap);
hmm_vma_walk->last = end;
return 0;
#else
@@ -520,7 +517,7 @@ static inline uint64_t pte_to_hmm_pfn_flags(struct hmm_range *range, pte_t pte)
static int hmm_vma_handle_pte(struct mm_wal...
2019 Aug 14
0
[PATCH 04/15] mm: remove the pgmap field from struct hmm_vma_walk
...> > That said, this seems to want a better mechanism to determine "pfn is
> > ZONE_DEVICE".
>
> So I guess this patch is fine for now, and once you provide a better
> mechanism we can switch over to it?
What about the version I sent to just get rid of all the strange
put_dev_pagemaps while scanning? Odds are good we will work with only
a single pagemap, so it makes some sense to cache it once we find it?
diff --git a/mm/hmm.c b/mm/hmm.c
index 9a908902e4cc38..4e30128c23a505 100644
--- a/mm/hmm.c
+++ b/mm/hmm.c
@@ -497,10 +497,6 @@ static int hmm_vma_handle_pmd(struct mm_walk *...
2019 Aug 14
2
[PATCH 04/15] mm: remove the pgmap field from struct hmm_vma_walk
On Tue, Aug 13, 2019 at 06:36:33PM -0700, Dan Williams wrote:
> Section alignment constraints somewhat save us here. The only example
> I can think of where a PMD does not contain a uniform pgmap association
> for each pte is the case when the pgmap overlaps normal dram, i.e. shares
> the same 'struct memory_section' for a given span. Otherwise, distinct
> pgmaps arrange to manage
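For concreteness, the alignment argument in numbers, assuming x86_64's
default SECTION_SIZE_BITS of 27 (other architectures differ):

/* x86_64 sparsemem: a mem_section spans 1UL << 27 = 128MB, while a
 * PMD maps 1UL << 21 = 2MB, so every PMD lies inside one section.
 * The per-pte pgmap association within a PMD can therefore only be
 * non-uniform when a pgmap shares its section with normal DRAM. */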
2019 Aug 07
0
[PATCH 04/15] mm: remove the pgmap field from struct hmm_vma_walk
...es.
>
> It feels racy
Agree, not sure what the intent is here. The only other reason to call
get_dev_pagemap() is to just check in general if the pfn is indeed
owned by some ZONE_DEVICE instance, but if the intent is to make sure
the device is still attached/enabled, that check is invalidated at
put_dev_pagemap().
If it's the former case, validating ZONE_DEVICE pfns, I imagine we can
do something cheaper with a helper that is on the order of the same
cost as pfn_valid(). I.e. replace PTE_DEVMAP with a mem_section flag
or something similar.
>
> > }
> > pfns[...
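A rough sketch of the cheaper check suggested above; pfn_is_zone_device()
and SECTION_IS_ZONE_DEVICE are hypothetical names, while pfn_valid() and
__pfn_to_section() are the existing sparsemem helpers:

/* Hypothetical helper: test a pfn for ZONE_DEVICE ownership at
 * roughly pfn_valid() cost, without taking a pgmap reference.
 * SECTION_IS_ZONE_DEVICE is an assumed new section_mem_map flag. */
static inline bool pfn_is_zone_device(unsigned long pfn)
{
	if (!pfn_valid(pfn))
		return false;
	return __pfn_to_section(pfn)->section_mem_map &
	       SECTION_IS_ZONE_DEVICE;
}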
2019 Aug 08
2
[PATCH 04/15] mm: remove the pgmap field from struct hmm_vma_walk
...>
> Agree, not sure what the intent is here. The only other reason to call
> get_dev_pagemap() is to just check in general if the pfn is indeed
> owned by some ZONE_DEVICE instance, but if the intent is to make sure
> the device is still attached/enabled, that check is invalidated at
> put_dev_pagemap().
>
> If it's the former case, validating ZONE_DEVICE pfns, I imagine we can
> do something cheaper with a helper that is on the order of the same
> cost as pfn_valid(). I.e. replace PTE_DEVMAP with a mem_section flag
> or something similar.
The hmm literally never dereference...
2019 Aug 06
24
hmm cleanups, v2
Hi Jérôme, Ben, Felix and Jason,
below is a series against the hmm tree which cleans up various minor
bits and allows HMM_MIRROR to be built on all architectures.
Diffstat:
11 files changed, 94 insertions(+), 210 deletions(-)
A git tree is also available at:
git://git.infradead.org/users/hch/misc.git hmm-cleanups.2
Gitweb:
2019 Aug 14
0
[PATCH 04/15] mm: remove the pgmap field from struct hmm_vma_walk
...sure what the intent is here. The only other reason to call
> > get_dev_pagemap() is to just check in general if the pfn is indeed
> > owned by some ZONE_DEVICE instance, but if the intent is to make sure
> > the device is still attached/enabled, that check is invalidated at
> > put_dev_pagemap().
> >
> > If it's the former case, validating ZONE_DEVICE pfns, I imagine we can
> > do something cheaper with a helper that is on the order of the same
> > cost as pfn_valid(). I.e. replace PTE_DEVMAP with a mem_section flag
> > or something similar.
>
> Th...
2019 Aug 14
3
[PATCH 04/15] mm: remove the pgmap field from struct hmm_vma_walk
...want a better mechanism to determine "pfn is
> > > ZONE_DEVICE".
> >
> > So I guess this patch is fine for now, and once you provide a better
> > mechanism we can switch over to it?
>
> What about the version I sent to just get rid of all the strange
> put_dev_pagemaps while scanning? Odds are good we will work with only
> a single pagemap, so it makes some sense to cache it once we find it?
Yes, if the scan is over a single pmd then caching it makes sense.
2019 Aug 15
2
[PATCH 04/15] mm: remove the pgmap field from struct hmm_vma_walk
...> ZONE_DEVICE".
> > > >
> > > > So I guess this patch is fine for now, and once you provide a better
> > > > mechanism we can switch over to it?
> > >
> > > What about the version I sent to just get rid of all the strange
> > > put_dev_pagemaps while scanning? Odds are good we will work with only
> > > a single pagemap, so it makes some sense to cache it once we find it?
> >
> > Yes, if the scan is over a single pmd then caching it makes sense.
>
> Quite frankly an easier and better solution is to remove the pag...
2019 Jul 30
29
hmm_range_fault related fixes and legacy API removal v3
Hi Jérôme, Ben, Felix and Jason,
below is a series against the hmm tree which cleans up various minor
bits and allows HMM_MIRROR to be built on all architectures.
Diffstat:
7 files changed, 81 insertions(+), 171 deletions(-)
A git tree is also available at:
git://git.infradead.org/users/hch/misc.git hmm-cleanups
Gitweb:
2020 Nov 11
0
[PATCH v3 3/6] mm: support THP migration to device private memory
...address range
with a page_ref_count(page) of one and increments the pgmap->ref per-CPU
reference count by the number of pages created, since each ZONE_DEVICE struct
page has a pointer to the pgmap.
The struct pages are not freed until memunmap_pages() is called, which
calls put_page(), which calls put_dev_pagemap(), which releases a reference
to pgmap->ref. memunmap_pages() blocks waiting for the pgmap->ref reference
count to reach zero. As far as I can tell, the put_page() in memunmap_pages()
has to
be the *last* put_page() (see MEMORY_DEVICE_PCI_P2PDMA).
My RFC [1] breaks this put_page() -> put_dev_pagem...
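Restating the lifecycle described above as a sketch of the call chain
(not the exact mm/memremap.c source):

/*
 * memremap_pages(pgmap, nid)
 *	-> every new struct page starts with page_ref_count(page) == 1
 *	-> percpu_ref_get_many(&pgmap->ref, nr_pages)
 *
 * put_page(page)			last reference to a page
 *	-> is_zone_device_page(page) is true
 *	-> put_dev_pagemap(page->pgmap)	drops one count on pgmap->ref
 *
 * memunmap_pages(pgmap)
 *	-> puts the remaining page references
 *	-> blocks until pgmap->ref drains to zero
 */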
2020 Oct 08
2
[PATCH] mm: make device private reference counts zero based
...nr_pages(page) + page_has_private(page);
diff --git a/mm/swap.c b/mm/swap.c
index 0eb057141a04..93d880c6f73c 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -116,12 +116,11 @@ static void __put_compound_page(struct page *page)
void __put_page(struct page *page)
{
if (is_zone_device_page(page)) {
- put_dev_pagemap(page->pgmap);
-
/*
* The page belongs to the device that created pgmap. Do
* not return it to page allocator.
*/
+ free_zone_device_page(page);
return;
}
@@ -907,6 +906,9 @@ void release_pages(struct page **pages, int nr)
put_devmap_managed_page(page);
continue;...
2019 Aug 15
3
[PATCH 04/15] mm: remove the pgmap field from struct hmm_vma_walk
...> > > > > So I guess this patch is fine for now, and once you provide a better
> > > > > > mechanism we can switch over to it?
> > > > >
> > > > > What about the version I sent to just get rid of all the strange
> > > > > put_dev_pagemaps while scanning? Odds are good we will work with only
> > > > > a single pagemap, so it makes some sense to cache it once we find it?
> > > >
> > > > Yes, if the scan is over a single pmd then caching it makes sense.
> > >
> > > Quite frankly...
2020 Nov 09
3
[PATCH v3 3/6] mm: support THP migration to device private memory
On Fri, Nov 06, 2020 at 01:26:50PM -0800, Ralph Campbell wrote:
>
> On 11/6/20 12:03 AM, Christoph Hellwig wrote:
>> I hate the extra pin count magic here. IMHO we really need to finish
>> off the series to get rid of the extra references on the ZONE_DEVICE
>> pages first.
>
> First, thanks for the review comments.
>
> I don't like the extra refcount
2019 Jul 30
0
[PATCH 11/13] mm: cleanup the hmm_vma_handle_pmd stub
...*walk, unsigned long addr,
+ unsigned long end, uint64_t *pfns, pmd_t pmd)
+{
struct hmm_vma_walk *hmm_vma_walk = walk->private;
struct hmm_range *range = hmm_vma_walk->range;
struct dev_pagemap *pgmap = NULL;
@@ -490,11 +487,14 @@ static int hmm_vma_handle_pmd(struct mm_walk *walk,
put_dev_pagemap(pgmap);
hmm_vma_walk->last = end;
return 0;
-#else
- /* If THP is not enabled then we should never reach this code ! */
+}
+#else /* CONFIG_TRANSPARENT_HUGEPAGE */
+static int hmm_vma_handle_pmd(struct mm_walk *walk, unsigned long addr,
+ unsigned long end, uint64_t *pfns, pmd_t pmd)
+{
r...
2019 Jul 30
1
[PATCH 11/13] mm: cleanup the hmm_vma_handle_pmd stub
...unsigned long end, uint64_t *pfns, pmd_t pmd)
> +{
> struct hmm_vma_walk *hmm_vma_walk = walk->private;
> struct hmm_range *range = hmm_vma_walk->range;
> struct dev_pagemap *pgmap = NULL;
> @@ -490,11 +487,14 @@ static int hmm_vma_handle_pmd(struct mm_walk *walk,
> put_dev_pagemap(pgmap);
> hmm_vma_walk->last = end;
> return 0;
> -#else
> - /* If THP is not enabled then we should never reach this
This old comment says we should never get here
> +}
> +#else /* CONFIG_TRANSPARENT_HUGEPAGE */
> +static int hmm_vma_handle_pmd(struct mm_walk *walk,...
2019 Aug 06
0
[PATCH 11/15] mm: cleanup the hmm_vma_handle_pmd stub
...*walk, unsigned long addr,
+ unsigned long end, uint64_t *pfns, pmd_t pmd)
+{
struct hmm_vma_walk *hmm_vma_walk = walk->private;
struct hmm_range *range = hmm_vma_walk->range;
struct dev_pagemap *pgmap = NULL;
@@ -490,11 +487,12 @@ static int hmm_vma_handle_pmd(struct mm_walk *walk,
put_dev_pagemap(pgmap);
hmm_vma_walk->last = end;
return 0;
-#else
- /* If THP is not enabled then we should never reach this code ! */
- return -EINVAL;
-#endif
}
+#else /* CONFIG_TRANSPARENT_HUGEPAGE */
+/* stub to allow the code below to compile */
+int hmm_vma_handle_pmd(struct mm_walk *walk, unsigned...
2019 Aug 15
0
[PATCH 04/15] mm: remove the pgmap field from struct hmm_vma_walk
...ne "pfn is
> > > > ZONE_DEVICE".
> > >
> > > So I guess this patch is fine for now, and once you provide a better
> > > mechanism we can switch over to it?
> >
> > What about the version I sent to just get rid of all the strange
> > put_dev_pagemaps while scanning? Odds are good we will work with only
> > a single pagemap, so it makes some sense to cache it once we find it?
>
> Yes, if the scan is over a single pmd then caching it makes sense.
Quite frankly an easier and better solution is to remove the pagemap
lookup as HMM user...
2019 Aug 15
0
[PATCH 04/15] mm: remove the pgmap field from struct hmm_vma_walk
...> > > >
> > > > > So I guess this patch is fine for now, and once you provide a better
> > > > > mechanism we can switch over to it?
> > > >
> > > > What about the version I sent to just get rid of all the strange
> > > > put_dev_pagemaps while scanning? Odds are good we will work with only
> > > > a single pagemap, so it makes some sense to cache it once we find it?
> > >
> > > Yes, if the scan is over a single pmd then caching it makes sense.
> >
> > Quite frankly an easier an better solu...