search for: set_page_dirty

Displaying 13 results from an estimated 119 matches for "set_page_dirty".

2019 Mar 07
3
[RFC PATCH V2 5/5] vhost: access vq metadata through kernel virtual address
...ash the kernel with this, and it wouldn't be nice to fail if > somebody decides to put VM_SHARED ext4 (we could easily allow vhost > ring only backed by anon or tmpfs or hugetlbfs to solve this of > course). > > It sounds like we should at least optimize away the _lock from > set_page_dirty if it's anon/hugetlbfs/tmpfs, would be nice if there > was a clean way to do that. > > Now assuming we don't nak the use on ext4 VM_SHARED and we stick to > set_page_dirty_lock for such case: could you recap how that > __writepage ext4 crash was solved if try_to_free_buffers...
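The optimization mentioned in this thread would look roughly like the sketch below; vhost_mark_dirty() and the file_backed flag are invented names for illustration, not actual vhost code. The point is that the _lock variant is only needed when a filesystem such as VM_SHARED ext4 may be writing the page back concurrently:

#include <linux/mm.h>

/* Sketch only: use the cheaper non-locking variant when the ring
 * can only be backed by anon, tmpfs, or hugetlbfs memory. */
static void vhost_mark_dirty(struct page *page, bool file_backed)
{
	if (file_backed)
		set_page_dirty_lock(page);	/* e.g. VM_SHARED ext4 */
	else
		set_page_dirty(page);		/* anon/tmpfs/hugetlbfs */
}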
2019 Mar 14
2
[RFC PATCH V2 0/5] vhost: accelerate metadata access through vmap()
On 2019/3/14 6:42, Michael S. Tsirkin wrote: >>>>> Which means after we fix vhost to add the flush_dcache_page after >>>>> kunmap, Parisc will get a double hit (but it also means Parisc >>>>> was the only one of those archs needed explicit cache flushes, >>>>> where vhost worked correctly so far.. so it kinds of proofs your
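A minimal sketch of the fix under discussion, with invented names (vq_write_page() is not vhost code): write through the kernel mapping, then flush the data cache so virtually indexed caches such as parisc's stay coherent with the user mapping; the flush is a no-op on x86.

#include <linux/highmem.h>
#include <linux/string.h>

static void vq_write_page(struct page *page, unsigned int offset,
			  const void *src, size_t len)
{
	void *addr = kmap(page);

	memcpy(addr + offset, src, len);
	kunmap(page);
	flush_dcache_page(page);	/* no-op on x86, needed on parisc */
}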
2019 Mar 14
0
[RFC PATCH V2 0/5] vhost: accelerate metadata access through vmap()
...separate model just for 32bit. I really wouldn't care about the performance of 32bit with >700MB of RAM if that would cause any maintenance burden. Let's focus on the best 64bit implementation that will work equally optimally on 32bit with <= 700M of RAM. Talking to Jerome about the set_page_dirty issue, he raised the point of what happens if two thread calls a mmu notifier invalidate simultaneously. The first mmu notifier could call set_page_dirty and then proceed in try_to_free_buffers or page_mkclean and then the concurrent mmu notifier that arrives second, then must not call set_page_dir...
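The race Jerome raises can be illustrated with a guard flag; every name below is hypothetical, a sketch of the invariant rather than code from the series. Whichever invalidation runs first dirties the page; the one that arrives second must see the flag and skip set_page_dirty(), since try_to_free_buffers() or page_mkclean() may already have run:

#include <linux/mm.h>
#include <linux/spinlock.h>

struct vq_map {
	spinlock_t lock;
	struct page *page;
	bool dirtied;		/* set by the first invalidation */
};

static void vq_map_invalidate(struct vq_map *map)
{
	spin_lock(&map->lock);
	if (!map->dirtied) {
		map->dirtied = true;
		set_page_dirty(map->page);
	}
	spin_unlock(&map->lock);
}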
2019 Mar 08
1
[RFC PATCH V2 5/5] vhost: access vq metadata through kernel virtual address
...> > back only if it is a blocking allow callback (there is a flag passdown > > with the invalidate_range_start callback if you are not allow to block > > then return EBUSY and the invalidation will be aborted). > > > > > > > That's a separate issue from set_page_dirty when memory is file backed. > > If you can access file back page then i suggest using set_page_dirty > > from within a special version of vunmap() so that when you vunmap you > > set the page dirty without taking page lock. It is safe to do so > > always from within an mmu n...
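The "special version of vunmap()" suggested here could be as simple as the sketch below (vunmap_dirty() is an invented name): drop the kernel alias first, then dirty the pages with the non-locking variant, which per the thread is safe from an mmu notifier callback when the pages were mapped writable.

#include <linux/mm.h>
#include <linux/vmalloc.h>

static void vunmap_dirty(void *addr, struct page **pages, int npages)
{
	int i;

	vunmap(addr);
	for (i = 0; i < npages; i++)
		set_page_dirty(pages[i]);	/* no page lock taken */
}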
2019 Mar 07
0
[RFC PATCH V2 5/5] vhost: access vq metadata through kernel virtual address
Hello Jerome, On Thu, Mar 07, 2019 at 03:17:22PM -0500, Jerome Glisse wrote: > So for the above the easiest thing is to call set_page_dirty() from > the mmu notifier callback. It is always safe to use the non locking > variant from such callback. Well it is safe only if the page was > map with write permission prior to the callback so here i assume > nothing stupid is going on and that you only vmap page with write > if...
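Shaped as code, the approach reads like this sketch; struct vq_meta and its fields are invented, only the mmu_notifier hook itself is the real API of the 5.0-era kernels discussed here:

#include <linux/kernel.h>
#include <linux/mm.h>
#include <linux/mmu_notifier.h>

struct vq_meta {
	struct mmu_notifier mn;
	struct page *page;
	bool mapped_writable;	/* vmapped with write permission */
};

static int vq_invalidate_range_start(struct mmu_notifier *mn,
				     const struct mmu_notifier_range *range)
{
	struct vq_meta *meta = container_of(mn, struct vq_meta, mn);

	/* Per the thread, the non-locking variant is safe here. */
	if (meta->mapped_writable)
		set_page_dirty(meta->page);
	/* ... tear down the kernel mapping before returning ... */
	return 0;
}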
2016 Jun 30
1
[PATCH v6v3 02/12] mm: migrate: support non-lru movable page migration
...map(struct file *filp, struct vm_area_struct *vma)
> {
> 	filp->f_mapping->a_ops = &test_aops;
> 	vma->vm_ops = &test_vm_ops;
> 	vma->vm_private_data = filp->private_data;
> 	return 0;
> }

Okay.

> test_aops should have *set_page_dirty* overriding.
>
> static int test_set_page_dirty(struct page *page)
> {
> 	if (!PageDirty(page))
> 		SetPageDirty(page);
> 	return 0;
> }
>
> Otherwise, it goes BUG_ON during radix tree operation because
> currently try_to_unmap is designed...
2019 Mar 08
2
[RFC PATCH V2 5/5] vhost: access vq metadata through kernel virtual address
On 2019/3/8 5:27, Andrea Arcangeli wrote: > Hello Jerome, > > On Thu, Mar 07, 2019 at 03:17:22PM -0500, Jerome Glisse wrote: >> So for the above the easiest thing is to call set_page_dirty() from >> the mmu notifier callback. It is always safe to use the non locking >> variant from such callback. Well it is safe only if the page was >> map with write permission prior to the callback so here i assume >> nothing stupid is going on and that you only vmap page wit...
2019 Mar 07
5
[RFC PATCH V2 5/5] vhost: access vq metadata through kernel virtual address
...n however do so under the invalidate_range_start callback only if it is a blocking allow callback (there is a flag passdown with the invalidate_range_start callback if you are not allow to block then return EBUSY and the invalidation will be aborted). > > That's a separate issue from set_page_dirty when memory is file backed. If you can access file back page then i suggest using set_page_dirty from within a special version of vunmap() so that when you vunmap you set the page dirty without taking page lock. It is safe to do so always from within an mmu notifier callback if you had the page ma...
2005 Apr 25
3
BUG: xend oopses on munmap of /proc/xen/privcmd
...nd does the following:

1) mmap /proc/xen/privcmd
2) call an ioctl to populate the mmap
3) munmap the mapping created in (1)

During the munmap, the dom0 kernel oopses, as follows:

CPU:    0
EIP:    0061:[<c01505ed>]    Not tainted VLI
EFLAGS: 00010282   (2.6.11-1.1261_FC4.rielxen0)
EIP is at set_page_dirty+0x1d/0x60
eax: 8b04ec83   ebx: c13da920   ecx: c13da920   edx: c025e1d0
esi: d4e0f730   edi: 3dd49067   ebp: b79cc000   esp: da503ebc
ds: 007b   es: 007b   ss: 0069
Process python (pid: 2662, threadinfo=da502000 task=dc8fd550)
Stack: db2b41c0 c015a487 00000000 00040004 da4d6b78 b79cd000 b79cd000 b7...
2016 Jun 27
2
[PATCH v6v3 02/12] mm: migrate: support non-lru movable page migration
On 06/16/2016 11:07 AM, Minchan Kim wrote: > On Thu, Jun 16, 2016 at 09:12:07AM +0530, Anshuman Khandual wrote: >> On 06/16/2016 05:56 AM, Minchan Kim wrote: >>> On Wed, Jun 15, 2016 at 12:15:04PM +0530, Anshuman Khandual wrote: >>>> On 06/15/2016 08:02 AM, Minchan Kim wrote: >>>>> Hi, >>>>> >>>>> On Mon, Jun 13, 2016 at
2019 Jul 23
2
[PATCH 5/6] vhost: mark dirty pages during map uninit
...; }
>
> +static void vhost_set_map_dirty(struct vhost_virtqueue *vq,
> +				struct vhost_map *map, int index)
> +{
> +	struct vhost_uaddr *uaddr = &vq->uaddrs[index];
> +	int i;
> +
> +	if (uaddr->write) {
> +		for (i = 0; i < map->npages; i++)
> +			set_page_dirty(map->pages[i]);
> +	}
> +}
> +
>  static void vhost_uninit_vq_maps(struct vhost_virtqueue *vq)
>  {
>  	struct vhost_map *map[VHOST_NUM_ADDRS];
> @@ -315,8 +327,10 @@ static void vhost_uninit_vq_maps(struct vhost_virtqueue *vq)
>  	for (i = 0; i < VHOST_NUM_ADDRS; i++)...
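For context, the unpin path this patch completes looks roughly like the sketch below (names invented, not from the series): pages pinned writable for the ring must be dirtied before they are released, otherwise the guest's writes can be lost when the pages are reclaimed or written back.

#include <linux/mm.h>

static void map_release_pages(struct page **pages, int npages, bool write)
{
	int i;

	for (i = 0; i < npages; i++) {
		if (write)
			set_page_dirty(pages[i]);	/* what this patch adds */
		put_page(pages[i]);
	}
}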
2019 Mar 08
0
[RFC PATCH V2 5/5] vhost: access vq metadata through kernel virtual address
...the invalidate_range_start callback only if it is a blocking allow callback (there is a flag passdown > with the invalidate_range_start callback if you are not allow to block > then return EBUSY and the invalidation will be aborted). > > >> That's a separate issue from set_page_dirty when memory is file backed. > If you can access file back page then i suggest using set_page_dirty > from within a special version of vunmap() so that when you vunmap you > set the page dirty without taking page lock. It is safe to do so > always from within an mmu notifier callback if...
2016 Jun 28
0
[PATCH v6v3 02/12] mm: migrate: support non-lru movable page migration
...ps with my address_space_operations.

int test_mmap(struct file *filp, struct vm_area_struct *vma)
{
	filp->f_mapping->a_ops = &test_aops;
	vma->vm_ops = &test_vm_ops;
	vma->vm_private_data = filp->private_data;
	return 0;
}

test_aops should have *set_page_dirty* overriding.

static int test_set_page_dirty(struct page *page)
{
	if (!PageDirty(page))
		SetPageDirty(page);
	return 0;
}

Otherwise, it goes BUG_ON during radix tree operation because currently try_to_unmap is designed for file-lru pages which live in page cache so it...
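Wiring that override into the ops table would be a one-liner; a sketch against the 2016-era struct layout, reusing the test_* names from the mail:

#include <linux/fs.h>

static const struct address_space_operations test_aops = {
	.set_page_dirty	= test_set_page_dirty,
};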