search for: __writepage

Displaying 10 results from an estimated 10 matches for "__writepage".

2011 Jan 19
0
Bug#603727: xen-hypervisor-4.0-amd64: i386 Dom0 crashes after doing some I/O on local storage (software Raid1 on SAS-drives with mpt2sas driver)
..._bit_lock+0x6b/0x77
[163440.614972] [<ffffffff81065d38>] ? wake_bit_function+0x0/0x23
[163440.614978] [<ffffffff81110157>] ? __block_write_full_page+0x159/0x2ac
[163440.614984] [<ffffffff8110ef54>] ? end_buffer_async_write+0x0/0x13b
[163440.614990] [<ffffffff810bb3a2>] ? __writepage+0xa/0x25
[163440.614996] [<ffffffff810bba29>] ? write_cache_pages+0x20b/0x327
[163440.615001] [<ffffffff810bb398>] ? __writepage+0x0/0x25
[163440.615008] [<ffffffff81108b56>] ? writeback_single_inode+0xe7/0x2da
[163440.615014] [<ffffffff8110985c>] ? writeback_inodes_wb+0...
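For orientation: __writepage is not a filesystem entry point but a tiny adapter in mm/page-writeback.c that write_cache_pages() calls once per dirty page, which is why the two symbols sit next to each other here (and in the ext3 trace further down). Condensed, roughly as it read in kernels of this vintage:

    static int __writepage(struct page *page, struct writeback_control *wbc,
                           void *data)
    {
            struct address_space *mapping = data;
            int ret = mapping->a_ops->writepage(page, wbc);
            mapping_set_error(mapping, ret);
            return ret;
    }

    int generic_writepages(struct address_space *mapping,
                           struct writeback_control *wbc)
    {
            if (!mapping->a_ops->writepage)
                    return 0;   /* chardevs etc. have no ->writepage */
            return write_cache_pages(mapping, wbc, __writepage, mapping);
    }

All of the real writeout logic therefore lives in the filesystem's ->writepage() method (ext3's ordered-mode writepage in the 2007 trace, ext4's in the 2019 discussion); __writepage only forwards to it and records errors.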
2019 Mar 07
0
[RFC PATCH V2 5/5] vhost: access vq metadata through kernel virtual address
...). It sounds like we should at least optimize away the _lock from set_page_dirty if it's anon/hugetlbfs/tmpfs; it would be nice if there was a clean way to do that. Now assuming we don't nak the use on ext4 VM_SHARED and we stick to set_page_dirty_lock for such a case: could you recap how that __writepage ext4 crash was solved if try_to_free_buffers() runs on a pinned GUP page (in our vhost case try_to_unmap would have gotten rid of the pins through the mmu notifier and the page would have been freed just fine). The first two things that come to mind are that we can easily forbid the try_to_free_buff...
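For readers reconstructing the context: the pattern being debated is pin pages with GUP, write to them from the kernel, then transfer the dirty state back. A minimal sketch of that pattern follows; this is not the vhost code, the helper name is invented, and the get_user_pages_fast() signature shown (gup_flags variant) differs in older kernels, where the third argument was a plain write flag:

    #include <linux/mm.h>
    #include <linux/highmem.h>

    /* Sketch only: pin one user page, scribble on it, mark it dirty. */
    static int touch_user_page(unsigned long uaddr)
    {
            struct page *page;

            if (get_user_pages_fast(uaddr, 1, FOLL_WRITE, &page) != 1)
                    return -EFAULT;

            memset(kmap(page), 0, PAGE_SIZE);   /* write through the pin */
            kunmap(page);

            /*
             * set_page_dirty_lock() takes the page lock so the dirty bit
             * cannot race with writeback; the thread above is about when
             * the cheaper, lockless set_page_dirty() would suffice.
             */
            set_page_dirty_lock(page);
            put_page(page);
            return 0;
    }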
2007 Dec 09
2
centos 5.1 kernel crash on 2.6.23.9
...k_page_buffers+0x65/0x8b
Dec 8 22:23:58 devcentos5x64 kernel: [<ffffffff8804ffb4>] :ext3:journal_dirty_data_fn+0x0/0xe
Dec 8 22:23:58 devcentos5x64 kernel: [<ffffffff880521da>] :ext3:ext3_ordered_writepage+0x108/0x18c
Dec 8 22:23:58 devcentos5x64 kernel: [<ffffffff8106bf9a>] __writepage+0xa/0x23
Dec 8 22:23:58 devcentos5x64 kernel: [<ffffffff8106c4b8>] write_cache_pages+0x17b/0x2aa
Dec 8 22:23:58 devcentos5x64 kernel: [<ffffffff8106bf90>] __writepage+0x0/0x23
Dec 8 22:23:58 devcentos5x64 kernel: [<ffffffff8106c62a>] do_writepages+0x27/0x2d
Dec 8 22:23:58 d...
2019 Mar 07
3
[RFC PATCH V2 5/5] vhost: access vq metadata through kernel virtual address
On Thu, Mar 07, 2019 at 10:45:57AM +0800, Jason Wang wrote:
>
> On 2019/3/7 12:31, Michael S. Tsirkin wrote:
> > > +static void vhost_set_vmap_dirty(struct vhost_vmap *used)
> > > +{
> > > +	int i;
> > > +
> > > +	for (i = 0; i < used->npages; i++)
> > > +		set_page_dirty_lock(used->pages[i]);
> > This seems to rely on
2019 Mar 07
1
[RFC PATCH V2 5/5] vhost: access vq metadata through kernel virtual address
On Thu, Mar 07, 2019 at 10:34:39AM -0500, Michael S. Tsirkin wrote:
> On Thu, Mar 07, 2019 at 10:45:57AM +0800, Jason Wang wrote:
> >
> > On 2019/3/7 12:31, Michael S. Tsirkin wrote:
> > > > +static void vhost_set_vmap_dirty(struct vhost_vmap *used)
> > > > +{
> > > > +	int i;
> > > > +
> > > > +	for (i = 0; i <
2019 Mar 08
0
[RFC PATCH V2 5/5] vhost: access vq metadata through kernel virtual address
...tection of the vq mutex when it can block. Then invalidate_range_end() can clear this flag. An issue is that blockable is always false for range_end().
> >> That's a separate issue from set_page_dirty when memory is file backed.
> Yes. I don't yet know why the ext4 internal __writepage cannot
> re-create the bh if they've been freed by the VM and why such a race
> where the bh are freed for a pinned VM_SHARED ext4 page doesn't even
> exist for transient pins like O_DIRECT (does it work by luck?), but
> with mmu notifiers there are no long term pins anyway, so th...
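A rough sketch of the flag scheme described in that last message; the struct fields (mn, vq_mutex, map_invalidated) are invented for illustration, and range->blockable follows the 5.0-era mmu_notifier_range layout (later kernels moved this behind a helper):

    #include <linux/mmu_notifier.h>
    #include <linux/mutex.h>

    static int vhost_inv_range_start(struct mmu_notifier *mn,
                                     const struct mmu_notifier_range *range)
    {
            struct vhost_dev *dev = container_of(mn, struct vhost_dev, mn);

            if (range->blockable)
                    mutex_lock(&dev->vq_mutex);
            else if (!mutex_trylock(&dev->vq_mutex))
                    return -EAGAIN;         /* must not sleep here */

            dev->map_invalidated = true;    /* workers must re-pin */
            mutex_unlock(&dev->vq_mutex);
            return 0;
    }

    static void vhost_inv_range_end(struct mmu_notifier *mn,
                                    const struct mmu_notifier_range *range)
    {
            struct vhost_dev *dev = container_of(mn, struct vhost_dev, mn);

            /* No blockable flag and no error return here -- exactly the
             * "blockable is always false for range_end()" issue above. */
            mutex_lock(&dev->vq_mutex);
            dev->map_invalidated = false;
            mutex_unlock(&dev->vq_mutex);
    }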
2019 Mar 07
3
[RFC PATCH V2 5/5] vhost: access vq metadata through kernel virtual address
...t if you use gup_fast after use_mm() it's similar. For vhost the invalidate would be really fast, there are no IPIs to deliver at all; the problem is just the mutex.
> That's a separate issue from set_page_dirty when memory is file backed.
Yes. I don't yet know why the ext4 internal __writepage cannot re-create the bh if they've been freed by the VM and why such a race where the bh are freed for a pinned VM_SHARED ext4 page doesn't even exist for transient pins like O_DIRECT (does it work by luck?), but with mmu notifiers there are no long term pins anyway, so this works normally an...
2019 Mar 07
3
[RFC PATCH V2 5/5] vhost: access vq metadata through kernel virtual address
On Wed, Mar 06, 2019 at 02:18:12AM -0500, Jason Wang wrote:
> +static const struct mmu_notifier_ops vhost_mmu_notifier_ops = {
> +	.invalidate_range = vhost_invalidate_range,
> +};
> +
> void vhost_dev_init(struct vhost_dev *dev,
> 		    struct vhost_virtqueue **vqs, int nvqs, int iov_limit)
> {

I also wonder here: when page is write protected then it does not look like
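To make the quoted fragment concrete: using those ops means embedding a struct mmu_notifier in the device and registering it against the owner's mm. A hedged sketch, not the posted patch; the dev->mn field, the registration helper, and the empty callback body are assumptions:

    #include <linux/mmu_notifier.h>
    #include <linux/sched.h>

    static void vhost_invalidate_range(struct mmu_notifier *mn,
                                       struct mm_struct *mm,
                                       unsigned long start, unsigned long end)
    {
            /* Drop or refresh any vmap'd vq metadata overlapping
             * [start, end); body elided in this sketch. */
    }

    static const struct mmu_notifier_ops vhost_mmu_notifier_ops = {
            .invalidate_range = vhost_invalidate_range,
    };

    static int vhost_register_mn(struct vhost_dev *dev)
    {
            dev->mn.ops = &vhost_mmu_notifier_ops;
            return mmu_notifier_register(&dev->mn, current->mm);
    }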
2019 Mar 07
3
[RFC PATCH V2 5/5] vhost: access vq metadata through kernel virtual address
...t least optimize away the _lock from
> set_page_dirty if it's anon/hugetlbfs/tmpfs; it would be nice if there
> was a clean way to do that.
>
> Now assuming we don't nak the use on ext4 VM_SHARED and we stick to
> set_page_dirty_lock for such a case: could you recap how that
> __writepage ext4 crash was solved if try_to_free_buffers() runs on a
> pinned GUP page (in our vhost case try_to_unmap would have gotten rid
> of the pins through the mmu notifier and the page would have been
> freed just fine).

So for the above the easiest thing is to call set_page_dirty() from the m...
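A sketch of that suggestion: transfer the dirty state from inside the invalidation callback, while the notifier protocol still guarantees the pages are mapped and not yet freed. struct vhost_vmap and its fields (uaddr, npages, pages) are invented for illustration:

    /* Called from the mmu-notifier invalidation path for [start, end).
     * Per the suggestion above, the pages cannot be freed while the
     * callback runs, so the plain set_page_dirty() should be usable
     * here instead of set_page_dirty_lock(). */
    static void vhost_dirty_pages_in_range(struct vhost_vmap *map,
                                           unsigned long start,
                                           unsigned long end)
    {
            unsigned long addr = map->uaddr;
            int i;

            for (i = 0; i < map->npages; i++, addr += PAGE_SIZE) {
                    if (addr + PAGE_SIZE <= start || addr >= end)
                            continue;       /* outside invalidated range */
                    set_page_dirty(map->pages[i]);
            }
    }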
2010 Nov 08
89
Re: DM-CRYPT: Scale to multiple CPUs v3 on 2.6.37-rc* ?
On Sun, Nov 07 2010 at 6:05pm -0500, Andi Kleen <andi@firstfloor.org> wrote:
> On Sun, Nov 07, 2010 at 10:39:23PM +0100, Milan Broz wrote:
> > On 11/07/2010 08:45 PM, Andi Kleen wrote:
> > >> I read about barrier-problems and data getting to the partition when
> > >> using dm-crypt and several layers so I don't know if that could be
> >