Christoph Hellwig
2019-Aug-08 06:59 UTC
[Nouveau] [PATCH 04/15] mm: remove the pgmap field from struct hmm_vma_walk
On Wed, Aug 07, 2019 at 11:47:22AM -0700, Dan Williams wrote:
> > Unrelated to this patch, but what is the point of getting and checking
> > that the pgmap exists for the page and then immediately releasing it?
> > This code has this pattern in several places.
> >
> > It feels racy
>
> Agree, not sure what the intent is here. The only other reason to call
> get_dev_pagemap() is to just check in general if the pfn is indeed
> owned by some ZONE_DEVICE instance, but if the intent is to make sure
> the device is still attached/enabled that check is invalidated at
> put_dev_pagemap().
>
> If it's the former case, validating ZONE_DEVICE pfns, I imagine we can
> do something cheaper with a helper that is on the order of the same
> cost as pfn_valid(). I.e. replace PTE_DEVMAP with a mem_section flag
> or something similar.

The hmm code literally never dereferences the pgmap, so validity checking
is the only explanation for it.

> > +        /*
> > +         * We do put_dev_pagemap() here so that we can leverage
> > +         * get_dev_pagemap() optimization which will not re-take a
> > +         * reference on a pgmap if we already have one.
> > +         */
> > +        if (hmm_vma_walk->pgmap)
> > +                put_dev_pagemap(hmm_vma_walk->pgmap);
> > +
>
> Seems ok, but only if the caller is guaranteeing that the range does
> not span outside of a single pagemap instance. If that guarantee is
> met, why not just have the caller pass in a pinned pagemap? If that
> guarantee is not met, then I think we're back to your race concern.

It iterates over multiple ptes in a non-huge pmd. Is there any kind of
limitation on different pgmap instances inside a pmd? I can't think
of one, so this might actually be a bug.
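For readers following along, here is a simplified sketch of the pattern
in question, assuming the stock get_dev_pagemap()/put_dev_pagemap() API.
This is an illustration only, not the actual hmm_vma_walk code; error
paths and the huge-pmd case are omitted, and the function name is made
up here:

#include <linux/mm.h>
#include <linux/memremap.h>

/*
 * Illustration only.  The walk caches the last pgmap it saw so that
 * get_dev_pagemap() can skip taking a fresh reference when consecutive
 * ptes land in the same pgmap.  The reference is only used as a "this
 * pfn really is ZONE_DEVICE" check and is dropped again at the end.
 */
static int walk_devmap_ptes(pte_t *ptep, unsigned long npages)
{
        struct dev_pagemap *pgmap = NULL;
        unsigned long i;

        for (i = 0; i < npages; i++, ptep++) {
                pte_t pte = *ptep;

                if (!pte_devmap(pte))
                        continue;

                /*
                 * Fast path if the pfn falls in the cached pgmap;
                 * get_dev_pagemap() drops a stale cached reference
                 * itself when it has to look up a new pgmap.
                 */
                pgmap = get_dev_pagemap(pte_pfn(pte), pgmap);
                if (!pgmap)
                        return -EBUSY;
                /* ... report the pfn to the caller ... */
        }

        /* drop the single cached reference, if any */
        if (pgmap)
                put_dev_pagemap(pgmap);
        return 0;
}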
Dan Williams
2019-Aug-14 01:36 UTC
[Nouveau] [PATCH 04/15] mm: remove the pgmap field from struct hmm_vma_walk
On Wed, Aug 7, 2019 at 11:59 PM Christoph Hellwig <hch at lst.de> wrote:
>
> On Wed, Aug 07, 2019 at 11:47:22AM -0700, Dan Williams wrote:
> > > Unrelated to this patch, but what is the point of getting and checking
> > > that the pgmap exists for the page and then immediately releasing it?
> > > This code has this pattern in several places.
> > >
> > > It feels racy
> >
> > Agree, not sure what the intent is here. The only other reason to call
> > get_dev_pagemap() is to just check in general if the pfn is indeed
> > owned by some ZONE_DEVICE instance, but if the intent is to make sure
> > the device is still attached/enabled that check is invalidated at
> > put_dev_pagemap().
> >
> > If it's the former case, validating ZONE_DEVICE pfns, I imagine we can
> > do something cheaper with a helper that is on the order of the same
> > cost as pfn_valid(). I.e. replace PTE_DEVMAP with a mem_section flag
> > or something similar.
>
> The hmm code literally never dereferences the pgmap, so validity checking
> is the only explanation for it.
>
> > > +        /*
> > > +         * We do put_dev_pagemap() here so that we can leverage
> > > +         * get_dev_pagemap() optimization which will not re-take a
> > > +         * reference on a pgmap if we already have one.
> > > +         */
> > > +        if (hmm_vma_walk->pgmap)
> > > +                put_dev_pagemap(hmm_vma_walk->pgmap);
> > > +
> >
> > Seems ok, but only if the caller is guaranteeing that the range does
> > not span outside of a single pagemap instance. If that guarantee is
> > met, why not just have the caller pass in a pinned pagemap? If that
> > guarantee is not met, then I think we're back to your race concern.
>
> It iterates over multiple ptes in a non-huge pmd. Is there any kind of
> limitation on different pgmap instances inside a pmd? I can't think
> of one, so this might actually be a bug.

Section alignment constraints somewhat save us here. The only example
I can think of where a PMD does not contain a uniform pgmap association
for each pte is the case when the pgmap overlaps normal dram, i.e.
shares the same 'struct mem_section' for a given span. Otherwise,
distinct pgmaps arrange to manage their own exclusive sections (and now
subsections as of v5.3); without that, the implementation could not
guarantee different mapping lifetimes.

That said, this seems to want a better mechanism to determine "pfn is
ZONE_DEVICE".
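Roughly, the shape of helper being talked about might look like the
hypothetical sketch below. The name pfn_is_zone_device() is invented
here, and this naive version still goes through the struct page rather
than a mem_section flag, so it is not itself the cheap variant, just an
illustration of the desired semantics:

#include <linux/mm.h>

/*
 * Hypothetical helper, for illustration only: answer "is this pfn
 * ZONE_DEVICE?" without taking or dropping a pgmap reference.  A real
 * version would want to key off a mem_section flag (or similar) so
 * that the cost stays on the order of pfn_valid().
 */
static inline bool pfn_is_zone_device(unsigned long pfn)
{
        if (!pfn_valid(pfn))
                return false;
        return is_zone_device_page(pfn_to_page(pfn));
}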
Christoph Hellwig
2019-Aug-14 07:38 UTC
[Nouveau] [PATCH 04/15] mm: remove the pgmap field from struct hmm_vma_walk
On Tue, Aug 13, 2019 at 06:36:33PM -0700, Dan Williams wrote:
> Section alignment constraints somewhat save us here. The only example
> I can think of where a PMD does not contain a uniform pgmap association
> for each pte is the case when the pgmap overlaps normal dram, i.e.
> shares the same 'struct mem_section' for a given span. Otherwise,
> distinct pgmaps arrange to manage their own exclusive sections (and now
> subsections as of v5.3); without that, the implementation could not
> guarantee different mapping lifetimes.
>
> That said, this seems to want a better mechanism to determine "pfn is
> ZONE_DEVICE".

So I guess this patch is fine for now, and once you provide a better
mechanism we can switch over to it?