Displaying 20 results from an estimated 52 matches for "foll_writ".
2020 May 29
6
[PATCH 0/2] vhost, docs: convert to pin_user_pages(), new "case 5"
Hi,
It recently became clear to me that there are some get_user_pages*()
callers that don't fit neatly into any of the four cases that are so
far listed in pin_user_pages.rst. vhost.c is one of those.
Add a Case 5 to the documentation, and refer to that when converting
vhost.c.
Thanks to Jan Kara for helping me (again) in understanding the
interaction between get_user_pages() and page
2016 Oct 26
2
CVE-2016-5195 DirtyCOW : Critical Linux Kernel Flaw
...FOLL_COW))
+ return page && PageAnon(page) && !PageKsm(page);
+
+ return false;
+}
+
/*
* Do a quick page-table lookup for a single page.
*/
@@ -1266,10 +1284,11 @@ split_fallthrough:
migration_entry_wait(mm, pmd, address);
goto split_fallthrough;
}
- if ((flags & FOLL_WRITE) && !pte_write(pte))
- goto unlock;
-
page = vm_normal_page(vma, address, pte);
+ if ((flags & FOLL_WRITE) && !can_follow_write_pte(pte, page, flags)) {
+ pte_unmap_unlock(ptep, ptl);
+ return NULL;
+ }
if (unlikely(!page)) {
if ((flags & FOLL_DUMP) ||
!is_...
2019 Mar 14
2
[RFC PATCH V2 0/5] vhost: accelerate metadata access through vmap()
On 2019/3/14 at 6:42 PM, Michael S. Tsirkin wrote:
>>>>> Which means after we fix vhost to add the flush_dcache_page after
>>>>> kunmap, Parisc will get a double hit (but it also means Parisc
>>>>> was the only one of those archs needed explicit cache flushes,
>>>>> where vhost worked correctly so far... so it kind of proves your
2016 Oct 25
5
CVE-2016-5195 DirtyCOW : Critical Linux Kernel Flaw
On Tue, 25 Oct 2016 10:06:12 +0200
Christian Anthon <anthon at rth.dk> wrote:
> What is the best approach on CentOS 6 to mitigate the problem until it is
> officially patched? As far as I can tell CentOS 6 is vulnerable to
> attacks using ptrace.
I can confirm that c6 is vulnerable, we're running a patched kernel
(local build) using a rhel6 adaptation of the upstream fix.
Ask
2019 Mar 14
0
[RFC PATCH V2 0/5] vhost: accelerate metadata access through vmap()
...mu notifier could call set_page_dirty and
then proceed in try_to_free_buffers or page_mkclean and then the
concurrent mmu notifier that arrives second, then must not call
set_page_dirty a second time.
With KVM sptes mappings and vhost mappings you would call
set_page_dirty (if you invoked gup with FOLL_WRITE) only when
effectively tearing down any secondary mapping (you've got pointers in
both cases for the mapping). So there's no way to risk a double
set_page_dirty from concurrent mmu notifier invalidate because the
invalidate takes a lock when it has to teardown the mapping and so
set_page_d...
2020 May 29
0
[PATCH 2/2] vhost: convert get_user_pages() --> pin_user_pages()
...rs/vhost/vhost.c b/drivers/vhost/vhost.c
index 21a59b598ed8..596132a96cd5 100644
--- a/drivers/vhost/vhost.c
+++ b/drivers/vhost/vhost.c
@@ -1762,15 +1762,14 @@ static int set_bit_to_user(int nr, void __user *addr)
int bit = nr + (log % PAGE_SIZE) * 8;
int r;
- r = get_user_pages_fast(log, 1, FOLL_WRITE, &page);
+ r = pin_user_pages_fast(log, 1, FOLL_WRITE, &page);
if (r < 0)
return r;
BUG_ON(r != 1);
base = kmap_atomic(page);
set_bit(bit, base);
kunmap_atomic(base);
- set_page_dirty_lock(page);
- put_page(page);
+ unpin_user_pages_dirty_lock(&page, 1, true);
return...
2019 Mar 07
0
[RFC PATCH V2 5/5] vhost: access vq metadata through kernel virtual address
...Well it is safe only if the page was
> map with write permission prior to the callback so here i assume
> nothing stupid is going on and that you only vmap page with write
> if they have a CPU pte with write and if not then you force a write
> page fault.
So if the GUP doesn't set FOLL_WRITE, set_page_dirty simply shouldn't
be called in such case. It only ever makes sense if the pte is
writable.
On a side note, the reason the write bit on the pte enabled avoids the
need of the _lock suffix is because of the stable page writeback
guarantees?
> Basically from mmu notifier callbac...
2019 Mar 07
3
[RFC PATCH V2 5/5] vhost: access vq metadata through kernel virtual address
On Thu, Mar 07, 2019 at 02:38:38PM -0500, Andrea Arcangeli wrote:
> On Thu, Mar 07, 2019 at 02:09:10PM -0500, Jerome Glisse wrote:
> > I thought this patch was only for anonymous memory ie not file back ?
>
> Yes, the other common usages are on hugetlbfs/tmpfs that also don't
> need to implement writeback and are obviously safe too.
>
> > If so then set dirty is
2019 Mar 08
2
[RFC PATCH V2 5/5] vhost: access vq metadata through kernel virtual address
...y if the page was
>> map with write permission prior to the callback so here i assume
>> nothing stupid is going on and that you only vmap page with write
>> if they have a CPU pte with write and if not then you force a write
>> page fault.
> So if the GUP doesn't set FOLL_WRITE, set_page_dirty simply shouldn't
> be called in such case. It only ever makes sense if the pte is
> writable.
>
> On a side note, the reason the write bit on the pte enabled avoids the
> need of the _lock suffix is because of the stable page writeback
> guarantees?
>
>&...
2020 Jun 01
3
[PATCH v2 0/2] vhost, docs: convert to pin_user_pages(), new "case 5"
This is based on Linux 5.7, plus one prerequisite patch:
"mm/gup: update pin_user_pages.rst for "case 3" (mmu notifiers)" [1]
Changes since v1: removed references to set_page_dirty*(), in response to
Souptick Joarder's review (thanks!).
Cover letter for v1, edited/updated slightly:
It recently became clear to me that there are some get_user_pages*()
callers that
2020 Sep 10
0
[PATCH] vhost-vdpa: fix memory leak in error path
...letions(-)
>
> diff --git a/drivers/vhost/vdpa.c b/drivers/vhost/vdpa.c
> index 3fab94f88894..6a9fcaf1831d 100644
> --- a/drivers/vhost/vdpa.c
> +++ b/drivers/vhost/vdpa.c
> @@ -609,8 +609,10 @@ static int vhost_vdpa_process_iotlb_update(struct vhost_vdpa *v,
> gup_flags |= FOLL_WRITE;
>
> npages = PAGE_ALIGN(msg->size + (iova & ~PAGE_MASK)) >> PAGE_SHIFT;
> - if (!npages)
> - return -EINVAL;
> + if (!npages) {
> + ret = -EINVAL;
> + goto free_page;
> + }
>
> mmap_read_lock(dev->mm);
>
> @@ -666,6 +668,8 @@ s...
2020 Nov 03
0
[PATCH 1/2] Revert "vhost-vdpa: fix page pinning leakage in error path"
...e_first(iotlb, msg->iova,
> msg->iova + msg->size - 1))
> return -EEXIST;
>
> + page_list = (struct page **) __get_free_page(GFP_KERNEL);
> + if (!page_list)
> + return -ENOMEM;
> +
> if (msg->perm & VHOST_ACCESS_WO)
> gup_flags |= FOLL_WRITE;
>
> @@ -608,86 +610,61 @@ static int vhost_vdpa_process_iotlb_update(struct vhost_vdpa *v,
> if (!npages)
> return -EINVAL;
>
> - page_list = kvmalloc_array(npages, sizeof(struct page *), GFP_KERNEL);
> - vmas = kvmalloc_array(npages, sizeof(struct vm_area_struct...
2020 Oct 01
0
[PATCH] vhost-vdpa: fix page pinning leakage in error path
...ng pinned;
int ret = 0;
if (vhost_iotlb_itree_first(iotlb, msg->iova,
msg->iova + msg->size - 1))
return -EEXIST;
- page_list = (struct page **) __get_free_page(GFP_KERNEL);
- if (!page_list)
- return -ENOMEM;
-
if (msg->perm & VHOST_ACCESS_WO)
gup_flags |= FOLL_WRITE;
@@ -614,61 +614,86 @@ static int vhost_vdpa_process_iotlb_update(struct vhost_vdpa *v,
if (!npages)
return -EINVAL;
+ page_list = kvmalloc_array(npages, sizeof(struct page *), GFP_KERNEL);
+ vmas = kvmalloc_array(npages, sizeof(struct vm_area_struct *),
+ GFP_KERNEL);
+ if (!page...
2020 Oct 01
0
[PATCH v2] vhost-vdpa: fix page pinning leakage in error path
...ng pinned;
int ret = 0;
if (vhost_iotlb_itree_first(iotlb, msg->iova,
msg->iova + msg->size - 1))
return -EEXIST;
- page_list = (struct page **) __get_free_page(GFP_KERNEL);
- if (!page_list)
- return -ENOMEM;
-
if (msg->perm & VHOST_ACCESS_WO)
gup_flags |= FOLL_WRITE;
@@ -614,61 +614,86 @@ static int vhost_vdpa_process_iotlb_update(struct vhost_vdpa *v,
if (!npages)
return -EINVAL;
+ page_list = kvmalloc_array(npages, sizeof(struct page *), GFP_KERNEL);
+ vmas = kvmalloc_array(npages, sizeof(struct vm_area_struct *),
+ GFP_KERNEL);
+ if (!page...
2008 Jun 19
0
[PATCH] ia64/xen: implement the arch specific part of xencomm.
...<< IA64_MAX_PHYS_BITS))
+ ia64_tpa(vaddr);
+
+ /* kernel address */
+ return __pa(vaddr);
+ }
+
+ /* XXX double-check (lack of) locking */
+ vma = find_extend_vma(current->mm, vaddr);
+ if (!vma)
+ return ~0UL;
+
+ /* We assume the page is modified. */
+ page = follow_page(vma, vaddr, FOLL_WRITE | FOLL_TOUCH);
+ if (!page)
+ return ~0UL;
+
+ return (page_to_pfn(page) << PAGE_SHIFT) | (vaddr & ~PAGE_MASK);
+}
diff --git a/include/asm-ia64/xen/xencomm.h b/include/asm-ia64/xen/xencomm.h
new file mode 100644
index 0000000..2ef31ae
--- /dev/null
+++ b/include/asm-ia64/xen/xencomm.h...
2020 Jul 03
0
[RFC]: mm,power: introduce MADV_WIPEONSUSPEND
...unsigned long max_pages_per_loop = ARRAY_SIZE(pages);
> +
> + /* Only care about states >= S3 */
> + if (state < PM_SUSPEND_MEM)
> + return;
> +
> + rcu_read_lock();
> + for_each_process(p) {
> + int gup_flags = FOLL_WRITE;
> +
> + mm = p->mm;
> + if (!mm)
> + continue;
> +
> + down_read(&mm->mmap_sem);
Blocking actions, such as locking semaphores, are forbidden in RCU
read-side critical sections. Also, from a more high-leve...