Displaying 16 results from an estimated 16 matches for "mmap_read_lock".
2020 Jul 31 · 1 · [PATCH v4 6/6] mm/migrate: remove range invalidation in migrate_vma_pages()
...where the zero page came from?
>
> The zero page comes from an anonymous private VMA that is read-only,
> and the user-level CPU process takes a read page fault on it (by
> reading the page data, or via any other read access).
>
> > Jason
> >
>
> The overall migration process is:
>
> mmap_read_lock()
>
> migrate_vma_setup()
> // invalidates range, locks/isolates pages, puts migration entry in page table
>
> <driver allocates destination pages and copies source to dest>
>
> migrate_vma_pages()
> // moves source struct page info to destination struct...
2020 Jul 28 · 2 · [PATCH v4 6/6] mm/migrate: remove range invalidation in migrate_vma_pages()
On Thu, Jul 23, 2020 at 03:30:04PM -0700, Ralph Campbell wrote:
> When migrating the special zero page, migrate_vma_pages() calls
> mmu_notifier_invalidate_range_start() before replacing the zero page
> PFN in the CPU page tables. This is unnecessary since the range was
> invalidated in migrate_vma_setup() and the page table entry is checked
> to be sure it hasn't changed
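For context, this is the shape of the invalidation bracket that the patch drops from migrate_vma_pages(); since migrate_vma_setup() already invalidated the range, the second bracket is redundant. A sketch only: mmu_notifier_range_init() arguments vary across kernel versions, and MMU_NOTIFY_CLEAR is used here purely for illustration.

/* Illustrative only: the redundant second invalidation that the patch
 * removes for the zero-page case. migrate_vma_setup() has already
 * invalidated this range and the PTE is rechecked under the lock. */
static void example_invalidate(struct vm_area_struct *vma, unsigned long addr)
{
	struct mmu_notifier_range range;

	mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, vma,
				vma->vm_mm, addr, addr + PAGE_SIZE);
	mmu_notifier_invalidate_range_start(&range);
	/* ... replace the zero-page PFN in the CPU page table ... */
	mmu_notifier_invalidate_range_end(&range);
}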
2020 Sep 10 · 0 · [PATCH] vhost-vdpa: fix memory leak in error path
...b_update(struct vhost_vdpa *v,
> gup_flags |= FOLL_WRITE;
>
> npages = PAGE_ALIGN(msg->size + (iova & ~PAGE_MASK)) >> PAGE_SHIFT;
> - if (!npages)
> - return -EINVAL;
> + if (!npages) {
> + ret = -EINVAL;
> + goto free_page;
> + }
>
> mmap_read_lock(dev->mm);
>
> @@ -666,6 +668,8 @@ static int vhost_vdpa_process_iotlb_update(struct vhost_vdpa *v,
> atomic64_sub(npages, &dev->mm->pinned_vm);
> }
> mmap_read_unlock(dev->mm);
> +
> +free_page:
> free_page((unsigned long)page_list);
> r...
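The fix is the standard kernel error-unwind idiom: once page_list is allocated, every failure path must jump to a label that frees it instead of returning directly. A condensed, illustrative sketch of the corrected shape (vhost-internal details elided, names follow the diff above):

/* Condensed sketch of the error-unwind idiom from the diff above;
 * parameters are simplified and the pinning work is elided. */
static int example_iotlb_update(struct mm_struct *mm, u64 iova, u64 size)
{
	struct page **page_list;
	unsigned long npages;
	int ret = 0;

	page_list = (struct page **)__get_free_page(GFP_KERNEL);
	if (!page_list)
		return -ENOMEM;

	npages = PAGE_ALIGN(size + (iova & ~PAGE_MASK)) >> PAGE_SHIFT;
	if (!npages) {
		ret = -EINVAL;
		goto free_page;		/* free page_list instead of leaking it */
	}

	mmap_read_lock(mm);
	/* ... pin user pages and update the IOTLB ... */
	mmap_read_unlock(mm);

free_page:
	free_page((unsigned long)page_list);
	return ret;
}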
2020 Jul 28 · 0 · [PATCH v4 6/6] mm/migrate: remove range invalidation in migrate_vma_pages()
...t that case?
>
> I'm also not sure I follow where the zero page came from?
The zero page comes from an anonymous private VMA that is read-only,
and the user-level CPU process takes a read page fault on it (by
reading the page data, or via any other read access).
> Jason
>
The overall migration process is:
mmap_read_lock()
migrate_vma_setup()
// invalidates range, locks/isolates pages, puts migration entry in page table
<driver allocates destination pages and copies source to dest>
migrate_vma_pages()
// moves source struct page info to destination struct page info.
// clears migration...
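As a rough sketch, the flow described above maps onto the migrate_vma_*() API like this (simplified; a real driver also sets fields such as flags/pgmap_owner where the kernel version requires them, fills the dst array, and handles partial migrations):

/* Minimal sketch of the migration sequence described above. */
static int example_migrate(struct vm_area_struct *vma,
			   unsigned long start, unsigned long end,
			   unsigned long *src_pfns, unsigned long *dst_pfns)
{
	struct migrate_vma args = {
		.vma	= vma,
		.start	= start,
		.end	= end,
		.src	= src_pfns,
		.dst	= dst_pfns,
	};
	int ret;

	mmap_read_lock(vma->vm_mm);

	/* Invalidates the range, locks/isolates the source pages and
	 * installs migration entries in the CPU page table. */
	ret = migrate_vma_setup(&args);
	if (ret)
		goto out_unlock;

	/* <driver allocates destination pages and copies source to dest,
	 *  filling args.dst with migrate_pfn()-encoded PFNs> */

	/* Moves source struct page info to destination struct page info. */
	migrate_vma_pages(&args);

	/* Clears the migration entries, pointing the CPU page table at
	 * the destination pages. */
	migrate_vma_finalize(&args);

out_unlock:
	mmap_read_unlock(vma->vm_mm);
	return ret;
}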
2020 Nov 03 · 0 · [PATCH 1/2] Revert "vhost-vdpa: fix page pinning leakage in error path"
...return -EINVAL;
>
> - page_list = kvmalloc_array(npages, sizeof(struct page *), GFP_KERNEL);
> - vmas = kvmalloc_array(npages, sizeof(struct vm_area_struct *),
> - GFP_KERNEL);
> - if (!page_list || !vmas) {
> - ret = -ENOMEM;
> - goto free;
> - }
> -
> mmap_read_lock(dev->mm);
>
> + locked = atomic64_add_return(npages, &dev->mm->pinned_vm);
> lock_limit = rlimit(RLIMIT_MEMLOCK) >> PAGE_SHIFT;
> - if (npages + atomic64_read(&dev->mm->pinned_vm) > lock_limit) {
> - ret = -ENOMEM;
> - goto unlock;
> - }...
2020 Oct 01 · 0 · [PATCH] vhost-vdpa: fix page pinning leakage in error path
...otlb_update(struct vhost_vdpa *v,
if (!npages)
return -EINVAL;
+ page_list = kvmalloc_array(npages, sizeof(struct page *), GFP_KERNEL);
+ vmas = kvmalloc_array(npages, sizeof(struct vm_area_struct *),
+ GFP_KERNEL);
+ if (!page_list || !vmas) {
+ ret = -ENOMEM;
+ goto free;
+ }
+
mmap_read_lock(dev->mm);
- locked = atomic64_add_return(npages, &dev->mm->pinned_vm);
lock_limit = rlimit(RLIMIT_MEMLOCK) >> PAGE_SHIFT;
-
- if (locked > lock_limit) {
+ if (npages + atomic64_read(&dev->mm->pinned_vm) > lock_limit) {
ret = -ENOMEM;
- goto out;
+ goto un...
2020 Oct 01 · 0 · [PATCH v2] vhost-vdpa: fix page pinning leakage in error path
...otlb_update(struct vhost_vdpa *v,
if (!npages)
return -EINVAL;
+ page_list = kvmalloc_array(npages, sizeof(struct page *), GFP_KERNEL);
+ vmas = kvmalloc_array(npages, sizeof(struct vm_area_struct *),
+ GFP_KERNEL);
+ if (!page_list || !vmas) {
+ ret = -ENOMEM;
+ goto free;
+ }
+
mmap_read_lock(dev->mm);
- locked = atomic64_add_return(npages, &dev->mm->pinned_vm);
lock_limit = rlimit(RLIMIT_MEMLOCK) >> PAGE_SHIFT;
-
- if (locked > lock_limit) {
+ if (npages + atomic64_read(&dev->mm->pinned_vm) > lock_limit) {
ret = -ENOMEM;
- goto out;
+ goto un...
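Both versions of the fix replace add-then-check accounting (atomic64_add_return() followed by a limit test, which leaves pinned_vm inflated when the check fails) with check-before-add against RLIMIT_MEMLOCK. A sketch of the corrected pattern, with names following the diffs above:

/* Check the RLIMIT_MEMLOCK budget before committing npages to the
 * pinned_vm counter, so a failed check leaves the counter untouched. */
static int example_account_pinned(struct mm_struct *mm, unsigned long npages)
{
	unsigned long lock_limit = rlimit(RLIMIT_MEMLOCK) >> PAGE_SHIFT;
	int ret = 0;

	mmap_read_lock(mm);
	if (npages + atomic64_read(&mm->pinned_vm) > lock_limit) {
		ret = -ENOMEM;
		goto unlock;
	}

	/* ... pin_user_pages() and map the IOVA range ... */

	atomic64_add(npages, &mm->pinned_vm);	/* commit only on success */
unlock:
	mmap_read_unlock(mm);
	return ret;
}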
2020 Jun 19 · 0 · [PATCH 08/16] nouveau/hmm: fault one page at a time
...);
- continue;
- }
-
- /* Intersect fault window with the CPU VMA, cancelling
- * the fault if the address is invalid.
+ /*
+ * Prepare the GPU-side update of all pages within the
+ * fault window, determining required pages and access
+ * permissions based on pending faults.
*/
- mmap_read_lock(mm);
- vma = find_vma_intersection(mm, start, limit);
- if (!vma) {
- SVMM_ERR(svmm, "wndw %016llx-%016llx", start, limit);
- mmap_read_unlock(mm);
- mmput(mm);
- nouveau_svm_fault_cancel_fault(svm, buffer->fault[fi]);
- continue;
+ args.i.p.addr = start;
+ args.i.p.page...
2020 Jul 01 · 0 · [PATCH v3 1/5] nouveau/hmm: fault one page at a time
...);
- continue;
- }
-
- /* Intersect fault window with the CPU VMA, cancelling
- * the fault if the address is invalid.
+ /*
+ * Prepare the GPU-side update of all pages within the
+ * fault window, determining required pages and access
+ * permissions based on pending faults.
*/
- mmap_read_lock(mm);
- vma = find_vma_intersection(mm, start, limit);
- if (!vma) {
- SVMM_ERR(svmm, "wndw %016llx-%016llx", start, limit);
- mmap_read_unlock(mm);
- mmput(mm);
- nouveau_svm_fault_cancel_fault(svm, buffer->fault[fi]);
- continue;
+ args.i.p.addr = start;
+ args.i.p.page...
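For reference, the per-fault window validation that both versions of this patch remove follows a common pattern: take the mmap read lock, intersect the fault window with a CPU VMA, and cancel the fault if nothing covers it. A condensed, illustrative sketch:

/* The pattern removed by the patch: validate the fault window against
 * the CPU address space before servicing each fault (condensed). */
static int example_check_fault_window(struct mm_struct *mm,
				      unsigned long start, unsigned long limit)
{
	struct vm_area_struct *vma;

	mmap_read_lock(mm);
	vma = find_vma_intersection(mm, start, limit);
	if (!vma) {
		/* No CPU mapping covers the window: cancel the GPU fault. */
		mmap_read_unlock(mm);
		return -ENOENT;		/* illustrative error code */
	}
	/* ... service the fault, clamped to [vma->vm_start, vma->vm_end) ... */
	mmap_read_unlock(mm);
	return 0;
}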
2020 Jun 19 · 0 · [PATCH 15/16] mm/hmm/test: add self tests for THP migration
...ro(mm))
return -EINVAL;
+ src_pfns = kmalloc_array(PTRS_PER_PTE, sizeof(*src_pfns), GFP_KERNEL);
+ if (!src_pfns) {
+ ret = -ENOMEM;
+ goto out_put;
+ }
+ dst_pfns = kmalloc_array(PTRS_PER_PTE, sizeof(*dst_pfns), GFP_KERNEL);
+ if (!dst_pfns) {
+ ret = -ENOMEM;
+ goto out_free_src;
+ }
+
mmap_read_lock(mm);
for (addr = start; addr < end; addr = next) {
vma = find_vma(mm, addr);
@@ -706,7 +832,7 @@ static int dmirror_migrate(struct dmirror *dmirror,
ret = -EINVAL;
goto out;
}
- next = min(end, addr + (ARRAY_SIZE(src_pfns) << PAGE_SHIFT));
+ next = min(end, addr + (PTRS_P...
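With the PFN arrays moved to the heap, the chunk size becomes PTRS_PER_PTE instead of the ARRAY_SIZE() of an on-stack array; the range walk keeps the same shape. A minimal sketch of that loop:

/* Walk [start, end) in PTRS_PER_PTE-page chunks, clamping the final
 * chunk to the end of the range (the shape of the selftest loop). */
static void example_walk(unsigned long start, unsigned long end)
{
	unsigned long addr, next;

	for (addr = start; addr < end; addr = next) {
		next = min(end, addr + (PTRS_PER_PTE << PAGE_SHIFT));
		/* ... migrate the pages in [addr, next) ... */
	}
}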
2020 Jun 30 · 6 · [PATCH v2 0/5] mm/hmm/nouveau: add PMD system memory mapping
The goal of this series is to introduce the hmm_range_fault() output
array flags HMM_PFN_PMD and HMM_PFN_PUD. These allow a device driver to
know that a given 4K PFN is actually mapped by the CPU using either a
PMD-sized or PUD-sized CPU page table entry, and therefore the device
driver can safely map system memory using larger device MMU PTEs.
The series is based on 5.8.0-rc3 and is intended for
2020 Jul 01 · 8 · [PATCH v3 0/5] mm/hmm/nouveau: add PMD system memory mapping
The goal of this series is to introduce the hmm_pfn_to_map_order()
function. This allows a device driver to know that a given 4K PFN is
actually mapped by the CPU using a larger CPU page table entry, and
therefore the device driver can safely map system memory using larger
device MMU PTEs.
The series is based on 5.8.0-rc3 and is intended for Jason Gunthorpe's
hmm tree. These were
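As a sketch of how a driver might consume the new API: after hmm_range_fault() fills range->hmm_pfns, the driver queries the CPU mapping order per entry and picks a matching device PTE size (map_device_huge()/map_device_4k() below are hypothetical helpers, not kernel API):

/* Hypothetical consumer of hmm_pfn_to_map_order(). */
static void example_map(struct hmm_range *range, unsigned long i,
			unsigned long addr)
{
	unsigned long hmm_pfn = range->hmm_pfns[i];
	unsigned int order = hmm_pfn_to_map_order(hmm_pfn);

	if (order >= PMD_SHIFT - PAGE_SHIFT)
		map_device_huge(addr, hmm_pfn, order);	/* PMD/PUD-sized CPU entry */
	else
		map_device_4k(addr, hmm_pfn);		/* base 4K CPU entry */
}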
2020 Jun 19 · 22 · [PATCH 00/16] mm/hmm/nouveau: THP mapping and migration
These patches apply to linux-5.8.0-rc1. Patches 1-3 should probably go
into 5.8; the others can be queued for 5.9. Patches 4-6 improve the HMM
self tests. Patches 7-8 prepare nouveau for the meat of this series, which
adds support and testing for compound page mapping of system memory
(patches 9-11) and compound page migration to device private memory
(patches 12-16). Since these changes are split
2020 Nov 06 · 12 · [PATCH v3 0/6] mm/hmm/nouveau: add THP migration to migrate_vma_*
This series adds support for transparent huge page migration to
migrate_vma_*() and adds nouveau SVM and HMM selftests as consumers.
Earlier versions were posted [1] and [2].
The patches apply cleanly to the linux-mm 5.10.0-rc2 tree. There are a
lot of other THP patches being posted; I don't think there are any
semantic conflicts, but there may be some merge conflicts depending on
2020 Sep 02 · 10 · [PATCH v2 0/7] mm/hmm/nouveau: add THP migration to migrate_vma_*
This series adds support for transparent huge page migration to
migrate_vma_*() and adds nouveau SVM and HMM selftests as consumers.
An earlier version was posted [1]. This version now supports
splitting a THP midway through the migration process, which led to a
number of changes.
The patches apply cleanly to the current linux-mm tree. Since there
are a couple of patches in linux-mm from Dan
2020 Jul 21 · 87 · [PATCH v9 00/84] VM introspection
The KVM introspection subsystem provides a facility for applications,
running on the host or in a separate VM, to control the execution of
other VMs (pause, resume, shutdown), query the state of the vCPUs (GPRs,
MSRs, etc.), alter the page access bits in the shadow page tables (only
for the hardware-backed ones, e.g. Intel's EPT) and receive notifications
when events of interest have taken place