Displaying 4 results from an estimated 4 matches for "e5c83b8ee82e".
2020 Jul 20  2  [PATCH v2 2/5] mm/migrate: add a direction parameter to migrate_vma
...p;
> + mig.dir = MIGRATE_VMA_FROM_DEVICE_PRIVATE;
>
> mutex_lock(&kvm->arch.uvmem_lock);
> /* The requested page is already paged-out, nothing to do */
> diff --git a/drivers/gpu/drm/nouveau/nouveau_dmem.c b/drivers/gpu/drm/nouveau/nouveau_dmem.c
> index e5c230d9ae24..e5c83b8ee82e 100644
> --- a/drivers/gpu/drm/nouveau/nouveau_dmem.c
> +++ b/drivers/gpu/drm/nouveau/nouveau_dmem.c
> @@ -183,6 +183,7 @@ static vm_fault_t nouveau_dmem_migrate_to_ram(struct vm_fault *vmf)
> .src = &src,
> .dst = &dst,
> .src_owner = drm->dev,
> + .dir = MIGRATE_VMA_FROM_DEVICE_PRIVATE,
> };
>...
2020 Jul 20  0  [PATCH v2 2/5] mm/migrate: add a direction parameter to migrate_vma
...RATE_VMA_FROM_DEVICE_PRIVATE;
>>
>> mutex_lock(&kvm->arch.uvmem_lock);
>> /* The requested page is already paged-out, nothing to do */
>> diff --git a/drivers/gpu/drm/nouveau/nouveau_dmem.c b/drivers/gpu/drm/nouveau/nouveau_dmem.c
>> index e5c230d9ae24..e5c83b8ee82e 100644
>> --- a/drivers/gpu/drm/nouveau/nouveau_dmem.c
>> +++ b/drivers/gpu/drm/nouveau/nouveau_dmem.c
>> @@ -183,6 +183,7 @@ static vm_fault_t nouveau_dmem_migrate_to_ram(struct vm_fault *vmf)
>> .src = &src,
>> .dst = &dst,
>> .src_owner = drm->dev,
>> + .dir = MIGRATE_VMA_FROM_DEV...
2020 Jul 13  0  [PATCH v2 2/5] mm/migrate: add a direction parameter to migrate_vma
...owner = &kvmppc_uvmem_pgmap;
+ mig.dir = MIGRATE_VMA_FROM_DEVICE_PRIVATE;
mutex_lock(&kvm->arch.uvmem_lock);
/* The requested page is already paged-out, nothing to do */
diff --git a/drivers/gpu/drm/nouveau/nouveau_dmem.c b/drivers/gpu/drm/nouveau/nouveau_dmem.c
index e5c230d9ae24..e5c83b8ee82e 100644
--- a/drivers/gpu/drm/nouveau/nouveau_dmem.c
+++ b/drivers/gpu/drm/nouveau/nouveau_dmem.c
@@ -183,6 +183,7 @@ static vm_fault_t nouveau_dmem_migrate_to_ram(struct vm_fault *vmf)
.src = &src,
.dst = &dst,
.src_owner = drm->dev,
+ .dir = MIGRATE_VMA_FROM_DEVICE_PRIVATE,...
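For context, the hunks above show both callers (the kvmppc uvmem paging path and nouveau's nouveau_dmem_migrate_to_ram()) tagging their migrate_vma requests as coming out of device private memory. Below is a minimal, hypothetical sketch of such a fault handler using the new field; only .dir, MIGRATE_VMA_FROM_DEVICE_PRIVATE, and the existing migrate_vma_setup()/migrate_vma_pages()/migrate_vma_finalize() calls come from the patch and the current API, while the my_* names, dev_owner, and the elided page allocation are placeholders.

#include <linux/migrate.h>
#include <linux/mm.h>

/* Hypothetical CPU-fault handler migrating one device-private page back to
 * system RAM. dev_owner identifies this driver's pages and matches the
 * pointer passed as .src_owner (drm->dev in the nouveau hunk above). */
static vm_fault_t my_migrate_to_ram(struct vm_fault *vmf, void *dev_owner)
{
	unsigned long src = 0, dst = 0;
	struct migrate_vma args = {
		.vma       = vmf->vma,
		.start     = vmf->address,
		.end       = vmf->address + PAGE_SIZE,
		.src       = &src,
		.dst       = &dst,
		.src_owner = dev_owner,
		/* New in this series: declare the migration direction; per
		 * the cover letter (last result below), this feeds a new
		 * mmu notifier invalidation event type that lets the owning
		 * driver skip redundant invalidation callbacks issued by
		 * migrate_vma_setup(). */
		.dir       = MIGRATE_VMA_FROM_DEVICE_PRIVATE,
	};

	if (migrate_vma_setup(&args) < 0)
		return VM_FAULT_SIGBUS;

	/* Driver-specific: allocate and fill a system page for dst here,
	 * then complete the migration. */
	migrate_vma_pages(&args);
	migrate_vma_finalize(&args);
	return 0;
}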
2020 Jul 13  9  [PATCH v2 0/5] mm/migrate: avoid device private invalidations
The goal for this series is to avoid device private memory TLB
invalidations when migrating a range of addresses from system
memory to device private memory where some of those pages have already
been migrated. The approach taken is to introduce a new mmu notifier
invalidation event type and use that in the device driver to skip
invalidation callbacks from migrate_vma_setup(). The device driver is
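A plausible reading of that approach: migrate_vma_setup() tags the invalidation it issues with a migration-specific event type (and, presumably, the src_owner pointer already visible in the diffs above), so a driver's invalidate_range_start callback can ignore invalidations caused by its own migrations. A hedged sketch of such a callback follows; the struct my_svm type, my_device_tlb_invalidate(), and the MMU_NOTIFY_MIGRATE / migrate_pgmap_owner names are illustrative assumptions about the series' interface, not confirmed by the excerpt.

#include <linux/mmu_notifier.h>

/* Hypothetical per-device SVM state; the notifier is assumed to have been
 * registered with mmu_notifier_register(), and dev is the same pointer the
 * driver passes as migrate_vma.src_owner. */
struct my_svm {
	struct mmu_notifier notifier;
	void *dev;
};

/* Placeholder for the driver's device TLB shootdown. */
static void my_device_tlb_invalidate(struct my_svm *svm,
				     unsigned long start, unsigned long end);

static int my_svm_invalidate_range_start(struct mmu_notifier *mn,
					 const struct mmu_notifier_range *range)
{
	struct my_svm *svm = container_of(mn, struct my_svm, notifier);

	/*
	 * If this invalidation was generated by a migration this driver
	 * started itself (the range's owner is our device), the device page
	 * tables for these addresses are already handled as part of the
	 * migration, so the extra device TLB invalidation can be skipped.
	 * MMU_NOTIFY_MIGRATE and migrate_pgmap_owner are assumed names.
	 */
	if (range->event == MMU_NOTIFY_MIGRATE &&
	    range->migrate_pgmap_owner == svm->dev)
		return 0;

	/* Otherwise invalidate the device's mappings for this range. */
	my_device_tlb_invalidate(svm, range->start, range->end);
	return 0;
}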