Displaying 18 results from an estimated 18 matches for "mmap_read_unlock".
2020 Jun 19 (0 replies) [PATCH 08/16] nouveau/hmm: fault one page at a time
...st + 1,
- .pfn_flags_mask = HMM_PFN_REQ_FAULT | HMM_PFN_REQ_WRITE,
+ .default_flags = hmm_flags,
.hmm_pfns = hmm_pfns,
};
struct mm_struct *mm = notifier->notifier.mm;
@@ -575,11 +571,6 @@ static int nouveau_range_fault(struct nouveau_svmm *svmm,
ret = hmm_range_fault(&range);
mmap_read_unlock(mm);
if (ret) {
- /*
- * FIXME: the input PFN_REQ flags are destroyed on
- * -EBUSY, we need to regenerate them, also for the
- * other continue below
- */
if (ret == -EBUSY)
continue;
return ret;
@@ -614,17 +605,12 @@ nouveau_svm_fault(struct nvif_notify *notify)
st...
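The deleted FIXME notes that hmm_range_fault() clobbers the per-PFN request flags on -EBUSY; switching to .default_flags sidesteps that, since the flags are reapplied on every retry. For context, a minimal sketch of the retry loop this excerpt reworks, following the v5.8-era HMM API (dummy_fault_one() and its locking are illustrative, not the nouveau code):

#include <linux/hmm.h>
#include <linux/mmu_notifier.h>

static int dummy_fault_one(struct mmu_interval_notifier *mni,
			   unsigned long addr, bool write)
{
	unsigned long hmm_pfn;
	struct hmm_range range = {
		.notifier = mni,
		.start = addr,
		.end = addr + PAGE_SIZE,
		.hmm_pfns = &hmm_pfn,
		/* Reapplied on every attempt, so an -EBUSY retry cannot
		 * see stale request flags. */
		.default_flags = HMM_PFN_REQ_FAULT |
				 (write ? HMM_PFN_REQ_WRITE : 0),
	};
	struct mm_struct *mm = mni->mm;
	int ret;

	while (true) {
		range.notifier_seq = mmu_interval_read_begin(mni);
		mmap_read_lock(mm);
		ret = hmm_range_fault(&range);
		mmap_read_unlock(mm);
		if (ret) {
			if (ret == -EBUSY)
				continue;	/* raced with an invalidation */
			return ret;
		}
		/* A driver takes its page-table lock here, then checks
		 * the sequence count before using the result. */
		if (!mmu_interval_read_retry(mni, range.notifier_seq))
			break;
	}
	/* hmm_pfn now describes the faulted page. */
	return 0;
}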
2020 Jul 01 (0 replies) [PATCH v3 1/5] nouveau/hmm: fault one page at a time
...st + 1,
- .pfn_flags_mask = HMM_PFN_REQ_FAULT | HMM_PFN_REQ_WRITE,
+ .default_flags = hmm_flags,
.hmm_pfns = hmm_pfns,
};
struct mm_struct *mm = notifier->notifier.mm;
@@ -575,11 +571,6 @@ static int nouveau_range_fault(struct nouveau_svmm *svmm,
ret = hmm_range_fault(&range);
mmap_read_unlock(mm);
if (ret) {
- /*
- * FIXME: the input PFN_REQ flags are destroyed on
- * -EBUSY, we need to regenerate them, also for the
- * other continue below
- */
if (ret == -EBUSY)
continue;
return ret;
@@ -614,17 +605,12 @@ nouveau_svm_fault(struct nvif_notify *notify)
st...
2020 Jul 31 (1 reply) [PATCH v4 6/6] mm/migrate: remove range invalidation in migrate_vma_pages()
...// clears migration flag for pages that can't be migrated.
>
> <driver updates device page tables for pages still migrating, rollback pages not migrating>
>
> migrate_vma_finalize()
> // replaces migration page table entry with destination page PFN.
>
> mmap_read_unlock()
>
> Since the address range is invalidated in the migrate_vma_setup() stage,
> and the page is isolated from the LRU cache, locked, unmapped, and the page table
> holds a migration entry (so the page can't be faulted and the CPU page table set
> valid again), and there are no...
2020 Jul 28 (2 replies) [PATCH v4 6/6] mm/migrate: remove range invalidation in migrate_vma_pages()
On Thu, Jul 23, 2020 at 03:30:04PM -0700, Ralph Campbell wrote:
> When migrating the special zero page, migrate_vma_pages() calls
> mmu_notifier_invalidate_range_start() before replacing the zero page
> PFN in the CPU page tables. This is unnecessary since the range was
> invalidated in migrate_vma_setup() and the page table entry is checked
> to be sure it hasn't changed
2020 Sep 10 (0 replies) [PATCH] vhost-vdpa: fix memory leak in error path
...> + if (!npages) {
> + ret = -EINVAL;
> + goto free_page;
> + }
>
> mmap_read_lock(dev->mm);
>
> @@ -666,6 +668,8 @@ static int vhost_vdpa_process_iotlb_update(struct vhost_vdpa *v,
> atomic64_sub(npages, &dev->mm->pinned_vm);
> }
> mmap_read_unlock(dev->mm);
> +
> +free_page:
> free_page((unsigned long)page_list);
> return ret;
> }
Cc: stable@vger.kernel.org
Acked-by: Jason Wang <jasowang@redhat.com>
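The fix is the standard kernel goto-unwind convention: once page_list is allocated, every early exit has to flow through a label that frees it. A stripped-down sketch of the shape the patch establishes (names and the elided middle are illustrative, not the vhost code):

#include <linux/gfp.h>
#include <linux/mm.h>

static int demo_iotlb_update(struct mm_struct *mm, unsigned long npages)
{
	struct page **page_list;
	int ret = 0;

	page_list = (struct page **)__get_free_page(GFP_KERNEL);
	if (!page_list)
		return -ENOMEM;

	/* The bug: an early "return -EINVAL" here leaked page_list.
	 * Jumping to the cleanup label instead plugs the leak. */
	if (!npages) {
		ret = -EINVAL;
		goto free_page;
	}

	mmap_read_lock(mm);
	/* ... pin user pages into page_list and map them, setting ret
	 * and undoing pinned_vm accounting on failure ... */
	mmap_read_unlock(mm);

free_page:
	free_page((unsigned long)page_list);
	return ret;
}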
2020 Jul 28 (0 replies) [PATCH v4 6/6] mm/migrate: remove range invalidation in migrate_vma_pages()
...to destination struct page info.
// clears migration flag for pages that can't be migrated.
<driver updates device page tables for pages still migrating, rollback pages not migrating>
migrate_vma_finalize()
// replaces migration page table entry with destination page PFN.
mmap_read_unlock()
Since the address range is invalidated in the migrate_vma_setup() stage,
and the page is isolated from the LRU cache, locked, unmapped, and the page table
holds a migration entry (so the page can't be faulted and the CPU page table set
valid again), and there are no extra page references (pi...
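The three-step sequence being quoted maps onto the migrate_vma_*() API roughly as below; a sketch assuming the v5.8-era struct migrate_vma and a caller that already holds mmap_read_lock(), with the device-side work left as comments (demo_migrate_range() is illustrative):

#include <linux/migrate.h>

#define DEMO_NPAGES 64

/* Caller holds mmap_read_lock(vma->vm_mm); end - start covers at most
 * DEMO_NPAGES pages. */
static int demo_migrate_range(struct vm_area_struct *vma,
			      unsigned long start, unsigned long end)
{
	unsigned long src_pfns[DEMO_NPAGES] = {};
	unsigned long dst_pfns[DEMO_NPAGES] = {};
	struct migrate_vma args = {
		.vma = vma,
		.start = start,
		.end = end,
		.src = src_pfns,
		.dst = dst_pfns,
	};
	int ret;

	/* Invalidates the range, isolates/locks/unmaps the pages and
	 * installs migration entries, so a CPU fault blocks until
	 * migrate_vma_finalize() runs. */
	ret = migrate_vma_setup(&args);
	if (ret)
		return ret;

	/* <driver allocates destination pages, fills dst_pfns with
	 *  migrate_pfn() values and copies the data> */

	/* Copies src struct page info to the destination pages; clears
	 * the migrate flag for pages that cannot be migrated. */
	migrate_vma_pages(&args);

	/* <driver updates device page tables for pages still migrating,
	 *  rolls back the rest> */

	/* Replaces each migration entry with the destination page PFN. */
	migrate_vma_finalize(&args);
	return 0;
}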
2020 Nov 03 (0 replies) [PATCH 1/2] Revert "vhost-vdpa: fix page pinning leakage in error path"
...(last_pfn - map_pfn + 1) << PAGE_SHIFT,
> + map_pfn << PAGE_SHIFT, msg->perm);
> out:
> - if (ret)
> + if (ret) {
> vhost_vdpa_unmap(v, msg->iova, msg->size);
> -unlock:
> + atomic64_sub(npages, &dev->mm->pinned_vm);
> + }
> mmap_read_unlock(dev->mm);
> -free:
> - kvfree(vmas);
> - kvfree(page_list);
> + free_page((unsigned long)page_list);
> return ret;
> }
>
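Both the original fix and this revert turn on the same invariant: dev->mm->pinned_vm must be adjusted by exactly the number of pages actually pinned, on success and on every error path. A minimal sketch of that accounting, assuming the pin_user_pages() API of this era (demo_* names are illustrative, not the vhost code):

#include <linux/mm.h>

/* Assumes we run in the context of mm's owner, as vhost does. */
static long demo_pin(struct mm_struct *mm, unsigned long uaddr,
		     unsigned long npages, struct page **pages)
{
	long pinned;

	mmap_read_lock(mm);
	pinned = pin_user_pages(uaddr, npages,
				FOLL_WRITE | FOLL_LONGTERM, pages, NULL);
	if (pinned > 0)
		/* Account only what was actually pinned. */
		atomic64_add(pinned, &mm->pinned_vm);
	mmap_read_unlock(mm);
	return pinned;
}

static void demo_unpin(struct mm_struct *mm, struct page **pages,
		       unsigned long npages)
{
	unpin_user_pages(pages, npages);
	/* Must mirror the add above, or the counter leaks. */
	atomic64_sub(npages, &mm->pinned_vm);
}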
2020 Oct 01 (0 replies) [PATCH] vhost-vdpa: fix page pinning leakage in error path
...st_vdpa_map(v, iova, (last_pfn - map_pfn + 1) << PAGE_SHIFT,
- map_pfn << PAGE_SHIFT, msg->perm);
+ WARN_ON(nmap != npages);
out:
- if (ret) {
+ if (ret)
vhost_vdpa_unmap(v, msg->iova, msg->size);
- atomic64_sub(npages, &dev->mm->pinned_vm);
- }
+unlock:
mmap_read_unlock(dev->mm);
- free_page((unsigned long)page_list);
+free:
+ kvfree(vmas);
+ kvfree(page_list);
return ret;
}
--
1.8.3.1
2020 Jun 30 (6 replies) [PATCH v2 0/5] mm/hmm/nouveau: add PMD system memory mapping
The goal for this series is to introduce the hmm_range_fault() output
array flags HMM_PFN_PMD and HMM_PFN_PUD. This allows a device driver to
know that a given 4K PFN is actually mapped by the CPU using either a
PMD-sized or a PUD-sized CPU page table entry, and therefore the device
driver can safely map system memory using larger device MMU PTEs.
The series is based on 5.8.0-rc3 and is intended for
2020 Oct 01 (0 replies) [PATCH v2] vhost-vdpa: fix page pinning leakage in error path
...st_vdpa_map(v, iova, (last_pfn - map_pfn + 1) << PAGE_SHIFT,
- map_pfn << PAGE_SHIFT, msg->perm);
+ WARN_ON(nmap != npages);
out:
- if (ret) {
+ if (ret)
vhost_vdpa_unmap(v, msg->iova, msg->size);
- atomic64_sub(npages, &dev->mm->pinned_vm);
- }
+unlock:
mmap_read_unlock(dev->mm);
- free_page((unsigned long)page_list);
+free:
+ kvfree(vmas);
+ kvfree(page_list);
return ret;
}
--
1.8.3.1
2020 Jun 19 (0 replies) [PATCH 15/16] mm/hmm/test: add self tests for THP migration
...R_PTE << PAGE_SHIFT));
if (next > vma->vm_end)
next = vma->vm_end;
@@ -725,6 +851,8 @@ static int dmirror_migrate(struct dmirror *dmirror,
dmirror_migrate_finalize_and_map(&args, dmirror);
migrate_vma_finalize(&args);
}
+ kfree(dst_pfns);
+ kfree(src_pfns);
mmap_read_unlock(mm);
mmput(mm);
@@ -746,6 +874,10 @@ static int dmirror_migrate(struct dmirror *dmirror,
out:
mmap_read_unlock(mm);
+ kfree(dst_pfns);
+out_free_src:
+ kfree(src_pfns);
+out_put:
mmput(mm);
return ret;
}
@@ -986,18 +1118,37 @@ static const struct file_operations dmirror_fops = {
st...
2020 Jul 01 (8 replies) [PATCH v3 0/5] mm/hmm/nouveau: add PMD system memory mapping
The goal for this series is to introduce the hmm_pfn_to_map_order()
function. This allows a device driver to know that a given 4K PFN is
actually mapped by the CPU using a larger-sized CPU page table entry, and
therefore the device driver can safely map system memory using larger
device MMU PTEs.
The series is based on 5.8.0-rc3 and is intended for Jason Gunthorpe's
hmm tree. These were
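hmm_pfn_to_map_order() reports how the CPU actually maps the page backing a returned PFN; a driver can then cover the whole naturally aligned region with one larger device PTE. A sketch of that use, with the device-side helper left hypothetical:

#include <linux/hmm.h>

/* Hypothetical device hook: programs one device MMU entry. */
static void demo_map_device_pte(unsigned long va, unsigned long hmm_pfn,
				unsigned long size);

static void demo_map_one(unsigned long va, unsigned long hmm_pfn)
{
	/* 0 for a 4K PTE, 9 for a 2M PMD, 18 for a 1G PUD on x86-64. */
	unsigned int order = hmm_pfn_to_map_order(hmm_pfn);
	unsigned long size = PAGE_SIZE << order;

	/* The CPU maps this region with a single entry, so the device
	 * can safely use one larger PTE over the aligned range. */
	demo_map_device_pte(va & ~(size - 1), hmm_pfn, size);
}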
2020 Jun 19 (22 replies) [PATCH 00/16] mm/hmm/nouveau: THP mapping and migration
These patches apply to linux-5.8.0-rc1. Patches 1-3 should probably go
into 5.8; the others can be queued for 5.9. Patches 4-6 improve the HMM
self tests. Patches 7-8 prepare nouveau for the meat of this series, which
adds support and testing for compound page mapping of system memory
(patches 9-11) and compound page migration to device private memory
(patches 12-16). Since these changes are split
2020 Nov 06 (12 replies) [PATCH v3 0/6] mm/hmm/nouveau: add THP migration to migrate_vma_*
This series adds support for transparent huge page migration to
migrate_vma_*() and adds nouveau SVM and HMM selftests as consumers.
Earlier versions were posted as [1] and [2].
The patches apply cleanly to the linux-mm 5.10.0-rc2 tree. There are a
lot of other THP patches being posted. I don't think there are any
semantic conflicts, but there may be some merge conflicts depending on
2020 Sep 02 (10 replies) [PATCH v2 0/7] mm/hmm/nouveau: add THP migration to migrate_vma_*
This series adds support for transparent huge page migration to
migrate_vma_*() and adds nouveau SVM and HMM selftests as consumers.
An earlier version was posted as [1]. This version now
supports splitting a THP midway through the migration process, which
led to a number of changes.
The patches apply cleanly to the current linux-mm tree. Since there
are a couple of patches in linux-mm from Dan
2020 Sep 24 (30 replies) [RFC PATCH 00/24] Control VQ support in vDPA
Hi All:
This series tries to add support for a control virtqueue in vDPA. A
control virtqueue is used by networking devices to accept various
commands from the driver. It is required in order to support
multiqueue and other configurations.
When used by the vhost-vDPA bus driver for a VM, the control virtqueue
should be shadowed via the userspace VMM (QEMU) instead of being
assigned directly to the guest. This is
2020 Jul 21 (87 replies) [PATCH v9 00/84] VM introspection
The KVM introspection subsystem provides a facility for applications
running on the host or in a separate VM to control the execution of
other VMs (pause, resume, shutdown), query the state of the vCPUs (GPRs,
MSRs, etc.), alter the page access bits in the shadow page tables (only
for the hardware-backed ones, e.g. Intel's EPT) and receive notifications
when events of interest have taken place