Displaying 15 results from an estimated 15 matches for "nouveau_svmm_invalidate_range_start".
2020 Jan 09
1
[BUG] nouveau lockdep splat
...---
[ 98.403810] test-file-priva/6909 is trying to acquire lock:
[ 98.409442] ffff888753e39a80 (&vmm->mutex#3){+.+.}, at: nvkm_uvmm_mthd+0x4d5/0xbe0 [nouveau]
[ 98.418064]
[ 98.418064] but task is already holding lock:
[ 98.423915] ffff888759cdc9b0 (&svmm->mutex){+.+.}, at: nouveau_svmm_invalidate_range_start+0x71/0x110 [nouveau]
[ 98.434142]
[ 98.434142] which lock already depends on the new lock.
[ 98.434142]
[ 98.442408]
[ 98.442408] the existing dependency chain (in reverse order) is:
[ 98.449899]
[ 98.449899] -> #3 (&svmm->mutex){+.+.}:
[ 98.455249] __mutex_lock...
2020 Jun 25
2
[RESEND PATCH 2/3] nouveau: fix mixed normal and device private page migration
...t to invalidate those TLB
> entries.
>
> Any other suggestions?
After a night's sleep, I think this might work. What do others think?
1) Add a new MMU_NOTIFY_MIGRATE enum to mmu_notifier_event.
2) Change migrate_vma_collect() to use the new MMU_NOTIFY_MIGRATE event type.
3) Modify nouveau_svmm_invalidate_range_start() to simply return (no invalidations)
for MMU_NOTIFY_MIGRATE mmu notifier callbacks (a sketch follows this list).
4) Leave the src_owner check in migrate_vma_collect_pmd() for normal pages so if the
device driver is migrating normal pages to device private memory, the driver would
set src_owner = NULL and already migrated dev...
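A minimal sketch of what steps 1-3 would mean for the notifier callback, assuming the proposed event type arrives in update->event (this illustrates the proposal's intent, not merged code; the locking mirrors the svmm->mutex usage visible in the lockdep splat above):

static int
nouveau_svmm_invalidate_range_start(struct mmu_notifier *mn,
				    const struct mmu_notifier_range *update)
{
	struct nouveau_svmm *svmm =
		container_of(mn, struct nouveau_svmm, notifier);

	/* Step 3: the migration path already invalidates the range,
	 * so skip the extra device TLB flush for migrate callbacks. */
	if (update->event == MMU_NOTIFY_MIGRATE)
		return 0;

	if (!mmu_notifier_range_blockable(update))
		return -EAGAIN;

	mutex_lock(&svmm->mutex);
	nouveau_svmm_invalidate(svmm, update->start, update->end);
	mutex_unlock(&svmm->mutex);
	return 0;
}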
2019 Oct 15
0
[PATCH hmm 10/15] nouveau: use mmu_notifier directly for invalidate_range_start
...utex mutex;
- struct mm_struct *mm;
struct hmm_mirror mirror;
};
@@ -251,10 +251,11 @@ nouveau_svmm_invalidate(struct nouveau_svmm *svmm, u64 start, u64 limit)
}
static int
-nouveau_svmm_sync_cpu_device_pagetables(struct hmm_mirror *mirror,
- const struct mmu_notifier_range *update)
+nouveau_svmm_invalidate_range_start(struct mmu_notifier *mn,
+ const struct mmu_notifier_range *update)
{
- struct nouveau_svmm *svmm = container_of(mirror, typeof(*svmm), mirror);
+ struct nouveau_svmm *svmm =
+ container_of(mn, struct nouveau_svmm, notifier);
unsigned long start = update->start;
unsigned long limit...
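With the hmm_mirror layer gone, the renamed callback is hooked up through an ordinary mmu_notifier_ops table. Roughly (the free_notifier hook name here is a guess at the companion teardown callback, and the error label is illustrative):

static const struct mmu_notifier_ops nouveau_mn_ops = {
	.invalidate_range_start	= nouveau_svmm_invalidate_range_start,
	.free_notifier		= nouveau_svmm_free_notifier,
};

/* At SVMM setup, attach directly to the process address space. */
svmm->notifier.ops = &nouveau_mn_ops;
ret = mmu_notifier_register(&svmm->notifier, current->mm);
if (ret)
	goto out_free;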
2020 Jan 13
0
[PATCH v6 5/6] nouveau: use new mmu interval notifiers
...016llx-%016llx", start, limit);
+
if (limit > start) {
bool super = svmm->vmm->vmm.object.client->super;
svmm->vmm->vmm.object.client->super = true;
@@ -248,58 +257,25 @@ nouveau_svmm_invalidate(struct nouveau_svmm *svmm, u64 start, u64 limit)
}
}
-static int
-nouveau_svmm_invalidate_range_start(struct mmu_notifier *mn,
- const struct mmu_notifier_range *update)
-{
- struct nouveau_svmm *svmm =
- container_of(mn, struct nouveau_svmm, notifier);
- unsigned long start = update->start;
- unsigned long limit = update->end;
-
- if (!mmu_notifier_range_blockable(update))
- return...
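In its place, each mirrored range gets its own interval notifier, so only overlapping invalidations reach the driver. A rough sketch of the replacement pattern (the struct name sr and the callback names are illustrative, not nouveau's exact ones):

static bool nouveau_svm_range_invalidate(struct mmu_interval_notifier *mni,
					 const struct mmu_notifier_range *range,
					 unsigned long cur_seq)
{
	if (!mmu_notifier_range_blockable(range))
		return false;

	/* Bump the sequence so concurrent faulters retry, then drop
	 * the GPU mappings covered by this notifier. */
	mmu_interval_set_seq(mni, cur_seq);
	/* ... invalidate GPU page tables for [range->start, range->end) ... */
	return true;
}

static const struct mmu_interval_notifier_ops nouveau_svm_mni_ops = {
	.invalidate = nouveau_svm_range_invalidate,
};

/* Register a notifier covering only the range being mirrored. */
ret = mmu_interval_notifier_insert(&sr->notifier, current->mm,
				   start, limit - start,
				   &nouveau_svm_mni_ops);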
2020 Jun 25
0
[RESEND PATCH 2/3] nouveau: fix mixed normal and device private page migration
...> > Any other suggestions?
>
> After a night's sleep, I think this might work. What do others think?
>
> 1) Add a new MMU_NOTIFY_MIGRATE enum to mmu_notifier_event.
>
> 2) Change migrate_vma_collect() to use the new MMU_NOTIFY_MIGRATE event type.
>
> 3) Modify nouveau_svmm_invalidate_range_start() to simply return (no invalidations)
> for MMU_NOTIFY_MIGRATE mmu notifier callbacks.
Isn't it a bit of an assumption that migrate_vma_collect() is only
used by nouveau itself?
What if some other devices' device_private pages are being migrated?
Jason
2020 Jul 23
0
[PATCH v4 4/6] nouveau/svm: use the new migration invalidation
...\
@@ -246,7 +235,7 @@ nouveau_svmm_join(struct nouveau_svmm *svmm, u64 inst)
}
/* Invalidate SVMM address-range on GPU. */
-static void
+void
nouveau_svmm_invalidate(struct nouveau_svmm *svmm, u64 start, u64 limit)
{
if (limit > start) {
@@ -279,6 +268,14 @@ nouveau_svmm_invalidate_range_start(struct mmu_notifier *mn,
if (unlikely(!svmm->vmm))
goto out;
+ /*
+ * Ignore invalidation callbacks for device private pages since
+ * the invalidation is handled as part of the migration process.
+ */
+ if (update->event == MMU_NOTIFY_MIGRATE &&
+ update->migrate_pgm...
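The condition is truncated above; its intent is to skip only migrations that nouveau itself initiated, identified by the pgmap owner recorded at migrate_vma_setup() time. A hedged reconstruction (svmm_owner() is a placeholder for nouveau's actual owner expression, not a real helper):

	/*
	 * Only skip invalidations for migrations this driver started;
	 * the owner token was supplied via struct migrate_vma, so other
	 * drivers' migrations are still invalidated normally.
	 */
	if (update->event == MMU_NOTIFY_MIGRATE &&
	    update->migrate_pgmap_owner == svmm_owner(svmm))
		goto out;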
2020 Jun 24
2
[RESEND PATCH 2/3] nouveau: fix mixed normal and device private page migration
On Mon, Jun 22, 2020 at 04:38:53PM -0700, Ralph Campbell wrote:
> The OpenCL function clEnqueueSVMMigrateMem(), without any flags, will
> migrate memory in the given address range to device private memory. The
> source pages might already have been migrated to device private memory.
> In that case, the source struct page is not checked to see if it is
> a device private page and
2020 Jul 06
8
[PATCH 0/5] mm/migrate: avoid device private invalidations
The goal for this series is to avoid device private memory TLB
invalidations when migrating a range of addresses from system
memory to device private memory and some of those pages have already
been migrated. The approach taken is to introduce a new mmu notifier
invalidation event type and use that in the device driver to skip
invalidation callbacks from migrate_vma_setup(). The device driver is
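On the migration side, the driver identifies itself to migrate_vma_setup() so its notifier callback can recognize its own migrations. A minimal sketch, assuming the struct migrate_vma fields and select flags this series introduces (owner_token and the surrounding variables are placeholders):

struct migrate_vma args = {
	.vma		= vma,
	.start		= start,
	.end		= end,
	.src		= src_pfns,
	.dst		= dst_pfns,
	/* Recorded in the MMU_NOTIFY_MIGRATE range so the driver's
	 * notifier can skip invalidating its own migration. */
	.pgmap_owner	= owner_token,
	.flags		= MIGRATE_VMA_SELECT_SYSTEM,
};

ret = migrate_vma_setup(&args);
if (ret)
	return ret;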
2020 Jul 21
6
[PATCH v3 0/5] mm/migrate: avoid device private invalidations
The goal for this series is to avoid device private memory TLB
invalidations when migrating a range of addresses from system
memory to device private memory and some of those pages have already
been migrated. The approach taken is to introduce a new mmu notifier
invalidation event type and use that in the device driver to skip
invalidation callbacks from migrate_vma_setup(). The device driver is
2020 Jul 13
9
[PATCH v2 0/5] mm/migrate: avoid device private invalidations
The goal for this series is to avoid device private memory TLB
invalidations when migrating a range of addresses from system
memory to device private memory and some of those pages have already
been migrated. The approach taken is to introduce a new mmu notifier
invalidation event type and use that in the device driver to skip
invalidation callbacks from migrate_vma_setup(). The device driver is
2020 Jul 23
9
[PATCH v4 0/6] mm/migrate: avoid device private invalidations
The goal for this series is to avoid device private memory TLB
invalidations when migrating a range of addresses from system
memory to device private memory and some of those pages have already
been migrated. The approach taken is to introduce a new mmu notifier
invalidation event type and use that in the device driver to skip
invalidation callbacks from migrate_vma_setup(). The device driver is
2020 Jan 13
9
[PATCH v6 0/6] mm/hmm/test: add self tests for HMM
This series adds new functions to the mmu interval notifier API to
allow device drivers with MMUs to dynamically mirror a process' page
tables based on device faults and invalidation callbacks. The Nouveau
driver is updated to use the extended API and a set of stand alone self
tests is added to help validate and maintain correctness.
The patches are based on linux-5.5.0-rc6 and are for
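The fault side of the extended API pairs a sequence snapshot with a retry check, so a racing invalidation forces the driver to re-walk before committing device mappings. Roughly (field and helper names follow the current kernel; the 5.5-era names and locking calls differed slightly, and pfns/sr are placeholders):

struct hmm_range range = {
	.notifier	= &sr->notifier,	/* the interval notifier */
	.start		= start,
	.end		= end,
	.hmm_pfns	= pfns,
};
int ret;

do {
	/* Snapshot the notifier sequence before faulting pages. */
	range.notifier_seq = mmu_interval_read_begin(range.notifier);

	mmap_read_lock(range.notifier->mm);
	ret = hmm_range_fault(&range);
	mmap_read_unlock(range.notifier->mm);
	if (ret)
		return ret;

	/* Commit to device page tables under the driver lock; retry
	 * if an invalidation raced with the page-table walk. */
} while (mmu_interval_read_retry(range.notifier, range.notifier_seq));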
2020 Nov 06
4
[PATCH 0/3] drm/nouveau: extend the lifetime of nouveau_drm
Hi folks,
Currently, when the device is removed (or the driver is unbound) the
nouveau_drm structure is de-allocated. However, it's still accessible from
and used by some DRM layer callbacks. For example, file handles can be
closed after the device has been removed (physically or otherwise). This
series converts the Nouveau device structure to be allocated and
de-allocated with the
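The conventional fix for this class of bug is DRM's managed allocation, which ties the structure's lifetime to the last drm_device reference rather than to physical device removal. A generic sketch (the driver struct, member, and nouveau_driver names are placeholders, and this may not match the series' exact approach):

struct nouveau_drm *drm;

/* Allocate nouveau_drm with an embedded drm_device; the memory is
 * freed only when the final drm_device reference is dropped, so
 * file handles closed after hot-unplug still see a valid struct. */
drm = devm_drm_dev_alloc(&pdev->dev, &nouveau_driver,
			 struct nouveau_drm, dev);
if (IS_ERR(drm))
	return PTR_ERR(drm);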
2019 Nov 12
20
[PATCH hmm v3 00/14] Consolidate the mmu notifier interval_tree and locking
From: Jason Gunthorpe <jgg at mellanox.com>
8 of the mmu_notifier-using drivers (i915_gem, radeon_mn, umem_odp, hfi1,
scif_dma, vhost, gntdev, hmm) are using a common pattern where
they only use invalidate_range_start/end and immediately check the
invalidating range against some driver data structure to tell if the
driver is interested. Half of them use an interval_tree, the others
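The duplicated pattern being consolidated looks roughly the same in each of those drivers (a generic sketch; every name here is illustrative):

static int drv_invalidate_range_start(struct mmu_notifier *mn,
				      const struct mmu_notifier_range *range)
{
	struct drv_mirror *m = container_of(mn, struct drv_mirror, mn);
	struct drv_interval *it;

	/* Each driver re-implements this overlap test itself, with
	 * either an interval tree or a linked list of ranges. */
	spin_lock(&m->lock);
	for (it = drv_it_first(&m->tree, range->start, range->end - 1);
	     it;
	     it = drv_it_next(it, range->start, range->end - 1))
		drv_unmap_device_range(it);
	spin_unlock(&m->lock);

	return 0;
}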
2019 Oct 28
32
[PATCH v2 00/15] Consolidate the mmu notifier interval_tree and locking
From: Jason Gunthorpe <jgg at mellanox.com>
8 of the mmu_notifier using drivers (i915_gem, radeon_mn, umem_odp, hfi1,
scif_dma, vhost, gntdev, hmm) drivers are using a common pattern where
they only use invalidate_range_start/end and immediately check the
invalidating range against some driver data structure to tell if the
driver is interested. Half of them use an interval_tree, the others