search for: mmu_notifier_get

Displaying 7 results from an estimated 7 matches for "mmu_notifier_get".

2020 Mar 03
1
[PATCH v2] nouveau/hmm: map pages after migration
...m)
>> +{
>> +	struct nouveau_ivmm *ivmm;
>> +
>> +	list_for_each_entry(ivmm, &svm->inst, head) {
>> +		if (ivmm->svmm->notifier.mm == mm)
>> +			return ivmm->svmm;
>> +	}
>> +	return NULL;
>> +}
>
> Is this re-implementing mmu_notifier_get() ?
>
> Jason

Not quite. This is being called from an ioctl() call on the GPU device file, which calls nouveau_svmm_bind(), which locks mmap_sem for reading, walks the vmas for the address range given in the ioctl() data, and migrates the pages to GPU memory. mmu_notifier_get() would try...
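[Editor's note: the locking conflict Ralph is describing can be sketched as below. This is not code from the thread; svmm_bind_range() is a hypothetical name standing in for the nouveau_svmm_bind() path, and it assumes the 5.5-era mmu_notifier_get(), which takes mmap_sem for writing internally.]

	/* Hypothetical sketch of the ioctl path described above. */
	static int svmm_bind_range(struct nouveau_svm *svm, struct mm_struct *mm,
				   unsigned long start, unsigned long end)
	{
		struct nouveau_svmm *svmm;

		down_read(&mm->mmap_sem);	/* ioctl path holds mmap_sem for READ */

		/*
		 * mmu_notifier_get() cannot be called here: it does
		 * down_write(&mm->mmap_sem) to find-or-create a notifier,
		 * which would deadlock against the read lock already held.
		 * Hence the plain walk over already-registered svmm instances:
		 */
		svmm = nouveau_find_svmm(svm, mm);
		if (!svmm) {
			up_read(&mm->mmap_sem);
			return -ENOENT;
		}

		/* ... walk the vmas covering [start, end) and migrate to the GPU ... */

		up_read(&mm->mmap_sem);
		return 0;
	}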
2020 Mar 03
2
[PATCH v2] nouveau/hmm: map pages after migration
When memory is migrated to the GPU, it is likely to be accessed by GPU code soon afterwards. Instead of waiting for a GPU fault, map the migrated memory into the GPU page tables with the same access permissions as the source CPU page table entries. This preserves copy-on-write semantics.

Signed-off-by: Ralph Campbell <rcampbell at nvidia.com>
Cc: Christoph Hellwig <hch at lst.de>
Cc:
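[Editor's note: the permission rule in that description (mirror the CPU PTE permissions so copy-on-write still works) reduces to: never grant the GPU write access the CPU PTE does not have. A hedged sketch follows; gpu_pte_flags_from_cpu() and the gpu_flag_* parameters are hypothetical, since the real nouveau flag encoding is not shown in this excerpt.]

	/*
	 * Hypothetical helper: mirror CPU PTE permissions into GPU PTE flags.
	 * A copy-on-write page has a read-only CPU PTE, so the GPU mapping is
	 * read-only too; the first GPU write then faults and goes through the
	 * normal COW copy before write access is granted.
	 */
	static u64 gpu_pte_flags_from_cpu(pte_t pte, u64 gpu_flag_valid,
					  u64 gpu_flag_write)
	{
		u64 flags = gpu_flag_valid;

		if (pte_write(pte))		/* CPU PTE is writable... */
			flags |= gpu_flag_write;	/* ...so the GPU PTE may be too */

		return flags;
	}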
2020 Mar 03
0
[PATCH v2] nouveau/hmm: map pages after migration
...m(struct nouveau_svm *svm, struct mm_struct *mm)
> +{
> +	struct nouveau_ivmm *ivmm;
> +
> +	list_for_each_entry(ivmm, &svm->inst, head) {
> +		if (ivmm->svmm->notifier.mm == mm)
> +			return ivmm->svmm;
> +	}
> +	return NULL;
> +}

Is this re-implementing mmu_notifier_get() ?

Jason
2019 Oct 28
0
[PATCH v2 07/15] drm/radeon: use mmu_range_notifier_insert
...adeon_mn_ops = {
 */
int radeon_mn_register(struct radeon_bo *bo, unsigned long addr)
{
-	unsigned long end = addr + radeon_bo_size(bo) - 1;
-	struct mmu_notifier *mn;
-	struct radeon_mn *rmn;
-	struct radeon_mn_node *node = NULL;
-	struct list_head bos;
-	struct interval_tree_node *it;
-
-	mn = mmu_notifier_get(&radeon_mn_ops, current->mm);
-	if (IS_ERR(mn))
-		return PTR_ERR(mn);
-	rmn = container_of(mn, struct radeon_mn, mn);
-
-	INIT_LIST_HEAD(&bos);
-
-	mutex_lock(&rmn->lock);
-
-	while ((it = interval_tree_iter_first(&rmn->objects, addr, end))) {
-		kfree(node);
-		node = con...
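[Editor's note: for contrast with the removed code above, this is roughly the shape radeon_mn_register() takes after the conversion. It is a sketch based on the API as it was merged (the series renamed mmu_range_notifier_insert to mmu_interval_notifier_insert before landing); the bo->notifier field name is illustrative.]

	static bool radeon_mn_invalidate(struct mmu_interval_notifier *mn,
					 const struct mmu_notifier_range *range,
					 unsigned long cur_seq);

	static const struct mmu_interval_notifier_ops radeon_mn_ops = {
		.invalidate = radeon_mn_invalidate,	/* called only for overlapping ranges */
	};

	int radeon_mn_register(struct radeon_bo *bo, unsigned long addr)
	{
		/*
		 * One call replaces the get-notifier / lock / interval-tree
		 * walk removed above: the core now owns the interval tree
		 * and the locking.
		 */
		return mmu_interval_notifier_insert(&bo->notifier, current->mm,
						    addr, radeon_bo_size(bo),
						    &radeon_mn_ops);
	}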
2019 Oct 29
0
[PATCH v2 07/15] drm/radeon: use mmu_range_notifier_insert
...struct radeon_bo *bo, unsigned long addr)
> {
> -	unsigned long end = addr + radeon_bo_size(bo) - 1;
> -	struct mmu_notifier *mn;
> -	struct radeon_mn *rmn;
> -	struct radeon_mn_node *node = NULL;
> -	struct list_head bos;
> -	struct interval_tree_node *it;
> -
> -	mn = mmu_notifier_get(&radeon_mn_ops, current->mm);
> -	if (IS_ERR(mn))
> -		return PTR_ERR(mn);
> -	rmn = container_of(mn, struct radeon_mn, mn);
> -
> -	INIT_LIST_HEAD(&bos);
> -
> -	mutex_lock(&rmn->lock);
> -
> -	while ((it = interval_tree_iter_first(&rmn->objects,...
2019 Oct 28
32
[PATCH v2 00/15] Consolidate the mmu notifier interval_tree and locking
From: Jason Gunthorpe <jgg at mellanox.com>

8 of the mmu_notifier-using drivers (i915_gem, radeon_mn, umem_odp, hfi1, scif_dma, vhost, gntdev, hmm) are using a common pattern where they only use invalidate_range_start/end and immediately check the invalidating range against some driver data structure to tell if the driver is interested. Half of them use an interval_tree, the others
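[Editor's note: the "common pattern" the cover letter refers to looks roughly like the sketch below. The names are illustrative, not taken from any one of the eight drivers: each driver registered a full mmu_notifier, then filtered every invalidation against a driver-private interval tree under a driver lock.]

	struct driver_mn {
		struct mmu_notifier mn;		/* embedded notifier */
		struct mutex lock;
		struct rb_root_cached objects;	/* driver-private interval tree */
	};

	static int driver_invalidate_range_start(struct mmu_notifier *mn,
						 const struct mmu_notifier_range *range)
	{
		struct driver_mn *dmn = container_of(mn, struct driver_mn, mn);
		struct interval_tree_node *it;

		mutex_lock(&dmn->lock);
		/* Does the invalidated range overlap anything we track?
		 * (range->end is exclusive; the interval tree takes an
		 * inclusive last address, hence the -1.)
		 */
		for (it = interval_tree_iter_first(&dmn->objects, range->start,
						   range->end - 1);
		     it;
		     it = interval_tree_iter_next(it, range->start,
						  range->end - 1)) {
			/* ... stop DMA on / unmap the overlapping object ... */
		}
		mutex_unlock(&dmn->lock);
		return 0;
	}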
2019 Nov 12
20
[PATCH hmm v3 00/14] Consolidate the mmu notifier interval_tree and locking
From: Jason Gunthorpe <jgg at mellanox.com>

8 of the mmu_notifier-using drivers (i915_gem, radeon_mn, umem_odp, hfi1, scif_dma, vhost, gntdev, hmm) are using a common pattern where they only use invalidate_range_start/end and immediately check the invalidating range against some driver data structure to tell if the driver is interested. Half of them use an interval_tree, the others