Displaying 20 results from an estimated 37 matches for "vm_pfnmap".
2011 Mar 07
6
[PATCH] xen/gntdev,gntalloc: Remove unneeded VM flags
The only time when granted pages need to be treated specially is when
using Xen's PTE modification for grant mappings owned by another domain.
Otherwise, the area does not require VM_DONTCOPY and VM_PFNMAP, since it
can be accessed just like any other page of RAM.
Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
---
drivers/xen/gntalloc.c | 14 ++++++++++++--
drivers/xen/gntdev.c | 16 ++++++++++++++--
2 files changed, 26 insertions(+), 4 deletions(-)
diff --git a/drivers/xen/gnt...
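For illustration, a minimal sketch of the rule this patch implements: apply the restrictive VM flags only when the grant mapping is owned by another domain and needs Xen's PTE modification. The example_priv structure and its foreign field are hypothetical stand-ins, not the actual gntdev/gntalloc internals.

#include <linux/fs.h>
#include <linux/mm.h>

struct example_priv {
        bool foreign;   /* grant owned by another domain? (assumed field) */
};

static int example_gnt_mmap(struct file *file, struct vm_area_struct *vma)
{
        struct example_priv *priv = file->private_data;

        if (priv->foreign)
                /* Xen PTE tricks: hide the pages from fork and rmap */
                vma->vm_flags |= VM_DONTCOPY | VM_PFNMAP;
        /* otherwise the area is ordinary RAM and needs no special flags */
        return 0;
}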
2012 Apr 03
2
[PATCH] xen/gntdev: do not set VM_PFNMAP
Since we are using the m2p_override, it is no longer true that the
mmap'ed area has no corresponding struct pages.
Removing the VM_PFNMAP flag makes get_user_pages work on the mmap'ed user vma.
An example test case would be using a Xen userspace block backend
(QDISK) on a file on NFS using O_DIRECT.
The patch should be backported back to 2.6.38.
Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---...
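The gup-side check that motivates this change is worth spelling out. Below is a simplified rendition of the vma test that get_user_pages() applies (cf. check_vma_flags() in mm/gup.c); it is illustrative, not the exact upstream code.

#include <linux/mm.h>

/*
 * get_user_pages() refuses vmas it believes have no struct pages
 * behind them. With m2p_override, gntdev's pages do have struct
 * pages, so leaving VM_PFNMAP set made O_DIRECT (which pins user
 * pages via get_user_pages) fail on the mapped area.
 */
static bool gup_would_reject(struct vm_area_struct *vma)
{
        return vma->vm_flags & (VM_IO | VM_PFNMAP);
}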
2012 Apr 03
0
[PATCH v2] xen/gntdev: do not set VM_PFNMAP
Since we are using the m2p_override we do have struct pages
corresponding to the user vma mmap'ed by gntdev.
Removing the VM_PFNMAP flag makes get_user_pages work on that vma.
An example test case would be using a Xen userspace block backend
(QDISK) on a file on NFS using O_DIRECT.
The patch should be backported back to 2.6.38.
Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
drivers/xen/gntdev....
2019 Oct 02
5
DANGER WILL ROBINSON, DANGER
On 02/10/19 19:04, Jerome Glisse wrote:
> On Wed, Oct 02, 2019 at 06:18:06PM +0200, Paolo Bonzini wrote:
>>>> If the mapping of the source VMA changes, mirroring can update the
>>>> target VMA via insert_pfn. But what ensures that KVM's MMU notifier
>>>> dismantles its own existing page tables (so that they can be recreated
>>>> with the new
2019 Oct 03
0
DANGER WILL ROBINSON, DANGER
...s
> > > special, which is the case either because insert_pfn() marks the pte as
> > > special, or the kvm device driver which controls the vm_operations struct
> > > sets a
> > > find_special_page() callback that always returns NULL, or the vma has
> > > either VM_PFNMAP or VM_MIXEDMAP set (which is the case with
> > insert_pfn).
> > >
> > > So you can keep the existing kvm code unmodified.
> >
> > Great, thanks. And KVM is already able to handle
> > VM_PFNMAP/VM_MIXEDMAP, so that should work.
>
> This means settin...
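The rule being discussed here can be condensed as follows; a rough sketch in the spirit of vm_normal_page() in mm/memory.c, assuming an architecture that supports special ptes (CONFIG_ARCH_HAS_PTE_SPECIAL).

#include <linux/mm.h>
#include <linux/pgtable.h>

/*
 * A pte counts as "special" -- not backed by a refcounted struct
 * page that rmap may touch -- if the pte itself is marked special
 * (as insert_pfn() does) or the whole vma is VM_PFNMAP/VM_MIXEDMAP.
 */
static bool mapping_is_special(struct vm_area_struct *vma, pte_t pte)
{
        return pte_special(pte) ||
               (vma->vm_flags & (VM_PFNMAP | VM_MIXEDMAP));
}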
2006 Feb 08
3
[PATCH] direct_remap_pfn_range vm_flags fix
direct_remap_pfn_range() does not properly mark vma with VM_PFNMAP.
This triggers improper reference counting on what rmap thought was
a normal page, and a subsequent BUG() such as:
Eeek! page_mapcount(page) went negative! (-1)
page->flags = 414
page->count = 1
page->mapping = 00000000
------------[ cut here ]------------
kernel BUG at /home/chrisw...
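A sketch of the shape of the fix, with a simplified caller; the real patch touches Xen's direct_remap_pfn_range(), this merely shows the invariant being restored.

#include <linux/mm.h>

/*
 * Mark the vma VM_PFNMAP (and VM_IO) when installing raw-pfn ptes,
 * so rmap/vm_normal_page() never refcounts what it would otherwise
 * mistake for ordinary pages -- the source of the mapcount BUG above.
 * remap_pfn_range() sets these flags itself; direct_remap_pfn_range()
 * had to be taught to do the same.
 */
static int remap_with_flags(struct vm_area_struct *vma, unsigned long addr,
                            unsigned long pfn, unsigned long size)
{
        vma->vm_flags |= VM_IO | VM_PFNMAP;
        return remap_pfn_range(vma, addr, pfn, size, vma->vm_page_prot);
}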
2019 Sep 17
0
[PATCH 2/8] drm/shmem: switch shmem helper to &drm_gem_object_funcs.mmap
Hi,
> > - /* VM_PFNMAP was set by drm_gem_mmap() */
> > - vma->vm_flags &= ~VM_PFNMAP;
> > - vma->vm_flags |= VM_MIXEDMAP;
> > + vma->vm_flags |= (VM_MIXEDMAP|VM_DONTEXPAND);
>
> I'm finding this a bit hard to follow - but I think here we've lost
> VM_IO and VM_DONTDUMP wh...
2019 Jul 26
0
[PATCH v2 5/7] mm/hmm: make full use of walk_page_range()
...>vma;
+
+ /* If range is no longer valid, force retry. */
+ if (!range->valid)
+ return -EBUSY;
+
+ /*
+ * Skip vma ranges that don't have struct page backing them or
+ * map I/O devices directly.
+ * TODO: handle peer-to-peer device mappings.
+ */
+ if (vma->vm_flags & (VM_IO | VM_PFNMAP | VM_MIXEDMAP))
+ return -EFAULT;
+
+ if (is_vm_hugetlb_page(vma)) {
+ if (huge_page_shift(hstate_vma(vma)) != range->page_shift &&
+ range->page_shift != PAGE_SHIFT)
+ return -EINVAL;
+ } else {
+ if (range->page_shift != PAGE_SHIFT)
+ return -EINVAL;
+ }
+
+ /*
+ * I...
2019 Sep 13
0
[PATCH 2/8] drm/shmem: switch shmem helper to &drm_gem_object_funcs.mmap
...vma);
- if (ret)
- return ret;
-
- shmem = to_drm_gem_shmem_obj(vma->vm_private_data);
+ shmem = to_drm_gem_shmem_obj(obj);
ret = drm_gem_shmem_get_pages(shmem);
if (ret) {
@@ -545,9 +536,8 @@ int drm_gem_shmem_mmap(struct file *filp, struct vm_area_struct *vma)
return ret;
}
- /* VM_PFNMAP was set by drm_gem_mmap() */
- vma->vm_flags &= ~VM_PFNMAP;
- vma->vm_flags |= VM_MIXEDMAP;
+ vma->vm_flags |= (VM_MIXEDMAP|VM_DONTEXPAND);
+ vma->vm_ops = &drm_gem_shmem_vm_ops;
/* Remove the fake offset */
vma->vm_pgoff -= drm_vma_node_start(&shmem->base.vma_no...
2019 Sep 17
0
[PATCH v2 02/11] drm/shmem: switch shmem helper to &drm_gem_object_funcs.mmap
...vma);
- if (ret)
- return ret;
-
- shmem = to_drm_gem_shmem_obj(vma->vm_private_data);
+ shmem = to_drm_gem_shmem_obj(obj);
ret = drm_gem_shmem_get_pages(shmem);
if (ret) {
@@ -545,9 +536,10 @@ int drm_gem_shmem_mmap(struct file *filp, struct vm_area_struct *vma)
return ret;
}
- /* VM_PFNMAP was set by drm_gem_mmap() */
- vma->vm_flags &= ~VM_PFNMAP;
- vma->vm_flags |= VM_MIXEDMAP;
+ vma->vm_flags |= VM_IO | VM_MIXEDMAP | VM_DONTEXPAND | VM_DONTDUMP;
+ vma->vm_page_prot = pgprot_writecombine(vm_get_page_prot(vma->vm_flags));
+ vma->vm_page_prot = pgprot_decrypted(...
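Condensed, the mmap path this series converges on looks roughly like the following; an illustrative sketch following the quoted diff, not the exact upstream helper. The shmem pages are genuine struct pages, so the vma trades drm_gem_mmap()'s VM_PFNMAP for VM_MIXEDMAP while keeping VM_IO and VM_DONTDUMP, answering the review comment on the first version.

#include <drm/drm_gem.h>
#include <linux/mm.h>
#include <linux/pgtable.h>

static int shmem_mmap_sketch(struct drm_gem_object *obj,
                             struct vm_area_struct *vma)
{
        /* struct-page-backed: VM_MIXEDMAP instead of VM_PFNMAP */
        vma->vm_flags |= VM_IO | VM_MIXEDMAP | VM_DONTEXPAND | VM_DONTDUMP;
        vma->vm_page_prot = pgprot_writecombine(vm_get_page_prot(vma->vm_flags));
        vma->vm_page_prot = pgprot_decrypted(vma->vm_page_prot);
        return 0;
}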
2019 Sep 19
0
[PATCH v3 02/11] drm/shmem: switch shmem helper to &drm_gem_object_funcs.mmap
...vma);
- if (ret)
- return ret;
-
- shmem = to_drm_gem_shmem_obj(vma->vm_private_data);
+ shmem = to_drm_gem_shmem_obj(obj);
ret = drm_gem_shmem_get_pages(shmem);
if (ret) {
@@ -545,9 +536,10 @@ int drm_gem_shmem_mmap(struct file *filp, struct vm_area_struct *vma)
return ret;
}
- /* VM_PFNMAP was set by drm_gem_mmap() */
- vma->vm_flags &= ~VM_PFNMAP;
- vma->vm_flags |= VM_MIXEDMAP;
+ vma->vm_flags |= VM_IO | VM_MIXEDMAP | VM_DONTEXPAND | VM_DONTDUMP;
+ vma->vm_page_prot = pgprot_writecombine(vm_get_page_prot(vma->vm_flags));
+ vma->vm_page_prot = pgprot_decrypted(...
2019 Oct 16
0
[PATCH v4 02/11] drm/shmem: switch shmem helper to &drm_gem_object_funcs.mmap
...vma);
- if (ret)
- return ret;
-
- shmem = to_drm_gem_shmem_obj(vma->vm_private_data);
+ shmem = to_drm_gem_shmem_obj(obj);
ret = drm_gem_shmem_get_pages(shmem);
if (ret) {
@@ -545,9 +536,10 @@ int drm_gem_shmem_mmap(struct file *filp, struct vm_area_struct *vma)
return ret;
}
- /* VM_PFNMAP was set by drm_gem_mmap() */
- vma->vm_flags &= ~VM_PFNMAP;
- vma->vm_flags |= VM_MIXEDMAP;
+ vma->vm_flags |= VM_IO | VM_MIXEDMAP | VM_DONTEXPAND | VM_DONTDUMP;
+ vma->vm_page_prot = pgprot_writecombine(vm_get_page_prot(vma->vm_flags));
+ vma->vm_page_prot = pgprot_decrypted(...
2019 Sep 11
0
[PATCH 1/4] mm/hmm: make full use of walk_page_range()
...>range;
+ struct vm_area_struct *vma = walk->vma;
+
+ /* If range is no longer valid, force retry. */
+ if (!range->valid)
+ return -EBUSY;
+
+ /*
+ * Skip vma ranges that don't have struct page backing them or
+ * map I/O devices directly.
+ */
+ if (vma->vm_flags & (VM_IO | VM_PFNMAP | VM_MIXEDMAP))
+ return -EFAULT;
+
+ /*
+ * If the vma does not allow read access, then assume that it does not
+ * allow write access either. HMM does not support architectures
+ * that allow write without read.
+ */
+ if (!(vma->vm_flags & VM_READ)) {
+ (void) hmm_pfns_fill(start, e...
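As a usage note, the same up-front rejection can be expressed as a test_walk callback for walk_page_range(); a minimal sketch, assuming the modern struct mm_walk_ops interface from linux/pagewalk.h.

#include <linux/pagewalk.h>

/*
 * Returning an error here aborts the walk for vmas that map I/O
 * devices directly or otherwise lack struct pages, mirroring the
 * VM_IO | VM_PFNMAP | VM_MIXEDMAP check in the hmm code above.
 */
static int reject_unbacked(unsigned long start, unsigned long end,
                           struct mm_walk *walk)
{
        if (walk->vma->vm_flags & (VM_IO | VM_PFNMAP | VM_MIXEDMAP))
                return -EFAULT;
        return 0;
}

static const struct mm_walk_ops sketch_ops = {
        .test_walk = reject_unbacked,
};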
2020 Jun 03
2
[PATCH 4/6] vhost_vdpa: support doorbell mapping via mmap
...>
> int remap_pfn_range(struct vm_area_struct *vma, unsigned long addr,
> unsigned long pfn, unsigned long size, pgprot_t prot)
> {
> if (addr != (pfn << PAGE_SHIFT))
> return -EINVAL;
>
> vma->vm_flags |= VM_IO | VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP;
> return 0;
> }
> EXPORT_SYMBOL(remap_pfn_range);
>
>
> So things aren't going to work if you have a fixed PFN,
> which is the case for the hardware device.
Looking at the implementation of some drivers, e.g. mtd_char: if I read
the co...
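One way around the nommu-style stub quoted above is to defer the doorbell mapping to fault time. A minimal sketch, assuming a hypothetical doorbell_pfn() lookup helper; the real vhost_vdpa code differs.

#include <linux/mm.h>

/* Hypothetical: however the driver resolves its doorbell page. */
static unsigned long doorbell_pfn(void *priv)
{
        return (unsigned long)priv;     /* stand-in for a real lookup */
}

/*
 * Install the hardware page lazily: vmf_insert_pfn() works on a
 * VM_PFNMAP vma and sidesteps the addr == pfn << PAGE_SHIFT
 * restriction of the nommu remap_pfn_range() quoted above.
 */
static vm_fault_t doorbell_fault(struct vm_fault *vmf)
{
        struct vm_area_struct *vma = vmf->vma;

        return vmf_insert_pfn(vma, vmf->address,
                              doorbell_pfn(vma->vm_private_data));
}

static const struct vm_operations_struct doorbell_vm_ops = {
        .fault = doorbell_fault,
};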
2019 Jul 26
13
[PATCH v2 0/7] mm/hmm: more HMM clean up
Here are seven more patches for things I found to clean up.
This was based on top of Christoph's seven patches:
"hmm_range_fault related fixes and legacy API removal v3".
I assume this will go into Jason's tree since there will likely be
more HMM changes in this cycle.
Changes from v1 to v2:
Added AMD GPU to hmm_update removal.
Added 2 patches from Christoph.
Added 2 patches as
2020 Jun 02
2
[PATCH 4/6] vhost_vdpa: support doorbell mapping via mmap
On 2020/6/2 12:56, Michael S. Tsirkin wrote:
> On Tue, Jun 02, 2020 at 03:22:49AM +0800, kbuild test robot wrote:
>> Hi Jason,
>>
>> I love your patch! Yet something to improve:
>>
>> [auto build test ERROR on vhost/linux-next]
>> [also build test ERROR on linus/master v5.7 next-20200529]
>> [if your patch is applied to the wrong git tree, please drop
2019 Oct 03
0
DANGER WILL ROBINSON, DANGER
...ll see those
ptes (inside the process that is mirroring the other process) as special,
which is the case either because insert_pfn() marks the pte as special, or
the kvm device driver which controls the vm_operations struct sets a
find_special_page() callback that always returns NULL, or the vma has
either VM_PFNMAP or VM_MIXEDMAP set (which is the case with insert_pfn).
So you can keep the existing kvm code unmodified.
Cheers,
Jérôme