Displaying 20 results from an estimated 37 matches for "vm_mixedmap".
2019 Oct 02
5
DANGER WILL ROBINSON, DANGER
On 02/10/19 19:04, Jerome Glisse wrote:
> On Wed, Oct 02, 2019 at 06:18:06PM +0200, Paolo Bonzini wrote:
>>>> If the mapping of the source VMA changes, mirroring can update the
>>>> target VMA via insert_pfn. But what ensures that KVM's MMU notifier
>>>> dismantles its own existing page tables (so that they can be recreated
>>>> with the new
2019 Oct 03
0
DANGER WILL ROBINSON, DANGER
...> special, which is the case either because insert_pfn() marks the pte as
> > > special, or the kvm device driver which controls the vm_operations
> > > struct sets a
> > > find_special_page() callback that always returns NULL, or the vma has
> > > either VM_PFNMAP or VM_MIXEDMAP set (which is the case with
> > insert_pfn).
> > >
> > > So you can keep the existing kvm code unmodified.
> >
> > Great, thanks. And KVM is already able to handle
> > VM_PFNMAP/VM_MIXEDMAP, so that should work.
>
> This means setting VM_PFNMAP/VM_...
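As a concrete illustration of the pattern the thread describes (a device mmap handler whose ptes KVM would treat as special), here is a minimal C sketch. The VM_PFNMAP flag and vmf_insert_pfn() come from the discussion above; mirror_mmap(), mirror_fault() and the use of vm_private_data for the base pfn are hypothetical glue, not anything from the thread.

#include <linux/fs.h>
#include <linux/mm.h>

static vm_fault_t mirror_fault(struct vm_fault *vmf)
{
	/* Hypothetical: the driver stashed the base pfn at mmap time. */
	unsigned long base_pfn = (unsigned long)vmf->vma->vm_private_data;

	/* Installs a special pte; no struct page refcount is taken. */
	return vmf_insert_pfn(vmf->vma, vmf->address, base_pfn + vmf->pgoff);
}

static const struct vm_operations_struct mirror_vm_ops = {
	.fault = mirror_fault,
};

static int mirror_mmap(struct file *file, struct vm_area_struct *vma)
{
	/* VM_PFNMAP is what makes vm_normal_page() skip these ptes. */
	vma->vm_flags |= VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP;
	vma->vm_ops = &mirror_vm_ops;
	return 0;
}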
2019 Sep 17
0
[PATCH 2/8] drm/shmem: switch shmem helper to &drm_gem_object_funcs.mmap
Hi,
> > - /* VM_PFNMAP was set by drm_gem_mmap() */
> > - vma->vm_flags &= ~VM_PFNMAP;
> > - vma->vm_flags |= VM_MIXEDMAP;
> > + vma->vm_flags |= (VM_MIXEDMAP|VM_DONTEXPAND);
>
> I'm finding this a bit hard to follow - but I think here we've lost
> VM_IO and VM_DONTDUMP which used to be set by drm_gem_mmap().
Yep. Intentional, but I think I'd better split that off into a separate
patch with a...
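For reference, the flag setup this review comment is pointing at, as a one-line sketch; it is the combination the later revisions of the patch (v2 and up, quoted further down) actually adopt:

	/* Keep VM_IO and VM_DONTDUMP (previously set by drm_gem_mmap())
	 * while switching the shmem helper to VM_MIXEDMAP. */
	vma->vm_flags |= VM_IO | VM_MIXEDMAP | VM_DONTEXPAND | VM_DONTDUMP;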
2019 Jul 26
0
[PATCH v2 5/7] mm/hmm: make full use of walk_page_range()
...* If range is no longer valid, force retry. */
+ if (!range->valid)
+ return -EBUSY;
+
+ /*
+ * Skip vma ranges that don't have struct page backing them or
+ * map I/O devices directly.
+ * TODO: handle peer-to-peer device mappings.
+ */
+ if (vma->vm_flags & (VM_IO | VM_PFNMAP | VM_MIXEDMAP))
+ return -EFAULT;
+
+ if (is_vm_hugetlb_page(vma)) {
+ if (huge_page_shift(hstate_vma(vma)) != range->page_shift &&
+ range->page_shift != PAGE_SHIFT)
+ return -EINVAL;
+ } else {
+ if (range->page_shift != PAGE_SHIFT)
+ return -EINVAL;
+ }
+
+ /*
+ * If vma does not...
2019 Sep 13
0
[PATCH 2/8] drm/shmem: switch shmem helper to &drm_gem_object_funcs.mmap
...= to_drm_gem_shmem_obj(obj);
ret = drm_gem_shmem_get_pages(shmem);
if (ret) {
@@ -545,9 +536,8 @@ int drm_gem_shmem_mmap(struct file *filp, struct vm_area_struct *vma)
return ret;
}
- /* VM_PFNMAP was set by drm_gem_mmap() */
- vma->vm_flags &= ~VM_PFNMAP;
- vma->vm_flags |= VM_MIXEDMAP;
+ vma->vm_flags |= (VM_MIXEDMAP|VM_DONTEXPAND);
+ vma->vm_ops = &drm_gem_shmem_vm_ops;
/* Remove the fake offset */
vma->vm_pgoff -= drm_vma_node_start(&shmem->base.vma_node);
diff --git a/drivers/gpu/drm/panfrost/panfrost_gem.c b/drivers/gpu/drm/panfrost/panfrost_gem.c
i...
2010 Mar 10
34
[Patch RFC] nouveau accelerated on Xen pv-ops kernel
...uveau-kernel.orig/drivers/gpu/drm/ttm/ttm_bo_vm.c 2010-01-27
10:19:28.000000000 +0530
+++ nouveau-kernel.new/drivers/gpu/drm/ttm/ttm_bo_vm.c 2010-03-10
17:28:59.000000000 +0530
@@ -271,7 +271,10 @@
*/
vma->vm_private_data = bo;
- vma->vm_flags |= VM_RESERVED | VM_IO | VM_MIXEDMAP | VM_DONTEXPAND;
+ vma->vm_flags |= VM_RESERVED | VM_MIXEDMAP | VM_DONTEXPAND;
+ if (!((bo->mem.placement & TTM_PL_MASK_MEM) & TTM_PL_FLAG_TT))
+ vma->vm_flags |= VM_IO;
+ vma->vm_page_prot = vma_get_vm_prot(vma->vm_flags);
return 0;
o...
2019 Sep 17
0
[PATCH v2 02/11] drm/shmem: switch shmem helper to &drm_gem_object_funcs.mmap
...= to_drm_gem_shmem_obj(obj);
ret = drm_gem_shmem_get_pages(shmem);
if (ret) {
@@ -545,9 +536,10 @@ int drm_gem_shmem_mmap(struct file *filp, struct vm_area_struct *vma)
return ret;
}
- /* VM_PFNMAP was set by drm_gem_mmap() */
- vma->vm_flags &= ~VM_PFNMAP;
- vma->vm_flags |= VM_MIXEDMAP;
+ vma->vm_flags |= VM_IO | VM_MIXEDMAP | VM_DONTEXPAND | VM_DONTDUMP;
+ vma->vm_page_prot = pgprot_writecombine(vm_get_page_prot(vma->vm_flags));
+ vma->vm_page_prot = pgprot_decrypted(vma->vm_page_prot);
+ vma->vm_ops = &drm_gem_shmem_vm_ops;
/* Remove the fake offset */...
2019 Sep 19
0
[PATCH v3 02/11] drm/shmem: switch shmem helper to &drm_gem_object_funcs.mmap
...= to_drm_gem_shmem_obj(obj);
ret = drm_gem_shmem_get_pages(shmem);
if (ret) {
@@ -545,9 +536,10 @@ int drm_gem_shmem_mmap(struct file *filp, struct vm_area_struct *vma)
return ret;
}
- /* VM_PFNMAP was set by drm_gem_mmap() */
- vma->vm_flags &= ~VM_PFNMAP;
- vma->vm_flags |= VM_MIXEDMAP;
+ vma->vm_flags |= VM_IO | VM_MIXEDMAP | VM_DONTEXPAND | VM_DONTDUMP;
+ vma->vm_page_prot = pgprot_writecombine(vm_get_page_prot(vma->vm_flags));
+ vma->vm_page_prot = pgprot_decrypted(vma->vm_page_prot);
+ vma->vm_ops = &drm_gem_shmem_vm_ops;
/* Remove the fake offset */...
2019 Oct 16
0
[PATCH v4 02/11] drm/shmem: switch shmem helper to &drm_gem_object_funcs.mmap
...= to_drm_gem_shmem_obj(obj);
ret = drm_gem_shmem_get_pages(shmem);
if (ret) {
@@ -545,9 +536,10 @@ int drm_gem_shmem_mmap(struct file *filp, struct vm_area_struct *vma)
return ret;
}
- /* VM_PFNMAP was set by drm_gem_mmap() */
- vma->vm_flags &= ~VM_PFNMAP;
- vma->vm_flags |= VM_MIXEDMAP;
+ vma->vm_flags |= VM_IO | VM_MIXEDMAP | VM_DONTEXPAND | VM_DONTDUMP;
+ vma->vm_page_prot = pgprot_writecombine(vm_get_page_prot(vma->vm_flags));
+ vma->vm_page_prot = pgprot_decrypted(vma->vm_page_prot);
+ vma->vm_ops = &drm_gem_shmem_vm_ops;
/* Remove the fake offset */...
2019 Sep 11
0
[PATCH 1/4] mm/hmm: make full use of walk_page_range()
...struct vm_area_struct *vma = walk->vma;
+
+ /* If range is no longer valid, force retry. */
+ if (!range->valid)
+ return -EBUSY;
+
+ /*
+ * Skip vma ranges that don't have struct page backing them or
+ * map I/O devices directly.
+ */
+ if (vma->vm_flags & (VM_IO | VM_PFNMAP | VM_MIXEDMAP))
+ return -EFAULT;
+
+ /*
+ * If the vma does not allow read access, then assume that it does not
+ * allow write access either. HMM does not support architectures
+ * that allow write without read.
+ */
+ if (!(vma->vm_flags & VM_READ)) {
+ (void) hmm_pfns_fill(start, end, range, HMM...
2016 Jun 16
2
[PATCH v6v3 02/12] mm: migrate: support non-lru movable page migration
...ts not using the function directly, I just re-iterated the sequence of functions
above. (do_set_pte -> page_add_file_rmap) gets called after we grab the page from
driver through (__do_fault->vma->vm_ops->fault()).
> Do you use vm_insert_pfn?
> What type is your vma? VM_PFNMAP or VM_MIXEDMAP?
I don't use vm_insert_pfn(). Here is the sequence of events for how the user space
VMA gets the non-LRU pages from the driver.
- Driver registers a character device with 'struct file_operations' binding
- Then the 'fops->mmap()' just binds the incoming 'struct vma' with a &...
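A rough sketch of that sequence in C, using the modern fault signature. drv_pages, drv_fault() and drv_mmap() are hypothetical names; the only points taken from the thread are that the pages reach the VMA through vm_ops->fault() (so the core mm maps them via do_set_pte()/page_add_file_rmap()) and that neither vm_insert_pfn() nor VM_PFNMAP/VM_MIXEDMAP is involved.

#include <linux/fs.h>
#include <linux/mm.h>
#include <linux/module.h>

#define DRV_NR_PAGES 16
static struct page *drv_pages[DRV_NR_PAGES];	/* filled by the driver */

static vm_fault_t drv_fault(struct vm_fault *vmf)
{
	if (vmf->pgoff >= DRV_NR_PAGES || !drv_pages[vmf->pgoff])
		return VM_FAULT_SIGBUS;

	get_page(drv_pages[vmf->pgoff]);
	vmf->page = drv_pages[vmf->pgoff];	/* core mm maps it via do_set_pte() */
	return 0;
}

static const struct vm_operations_struct drv_vm_ops = {
	.fault = drv_fault,
};

static int drv_mmap(struct file *file, struct vm_area_struct *vma)
{
	vma->vm_ops = &drv_vm_ops;	/* "binds the incoming vma" */
	return 0;
}

static const struct file_operations drv_fops = {
	.owner = THIS_MODULE,
	.mmap  = drv_mmap,
};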
2016 Jun 27
2
[PATCH v6v3 02/12] mm: migrate: support non-lru movable page migration
...re-iterated the sequence of functions
>> above. (do_set_pte -> page_add_file_rmap) gets called after we grab the page from
>> driver through (__do_fault->vma->vm_ops->fault()).
>>
>>> Do you use vm_insert_pfn?
>>> What type is your vma? VM_PFNMAP or VM_MIXEDMAP?
>>
>> I don't use vm_insert_pfn(). Here is the sequence of events for how the user space
>> VMA gets the non-LRU pages from the driver.
>>
>> - Driver registers a character device with 'struct file_operations' binding
>> - Then the 'fops->mmap()' j...
2019 Jul 26
13
[PATCH v2 0/7] mm/hmm: more HMM clean up
Here are seven more patches for things I found to clean up.
This was based on top of Christoph's seven patches:
"hmm_range_fault related fixes and legacy API removal v3".
I assume this will go into Jason's tree since there will likely be
more HMM changes in this cycle.
Changes from v1 to v2:
Added AMD GPU to hmm_update removal.
Added 2 patches from Christoph.
Added 2 patches as
2016 Jun 15
2
[PATCH v6v3 02/12] mm: migrate: support non-lru movable page migration
On 06/15/2016 08:02 AM, Minchan Kim wrote:
> Hi,
>
> On Mon, Jun 13, 2016 at 03:08:19PM +0530, Anshuman Khandual wrote:
>> > On 05/31/2016 05:31 AM, Minchan Kim wrote:
>>> > > @@ -791,6 +921,7 @@ static int __unmap_and_move(struct page *page, struct page *newpage,
>>> > > int rc = -EAGAIN;
>>> > > int page_was_mapped = 0;
2019 Oct 03
0
DANGER WILL ROBINSON, DANGER
...pte (inside the process that is mirroring the other process) as special,
which is the case either because insert_pfn() marks the pte as special, or
the kvm device driver which controls the vm_operations struct sets a
find_special_page() callback that always returns NULL, or the vma has
either VM_PFNMAP or VM_MIXEDMAP set (which is the case with insert_pfn).
So you can keep the existing kvm code unmodified.
Cheers,
Jérôme