Archive search results for "try_to_free_buffers":
2019 Mar 07
0
[RFC PATCH V2 5/5] vhost: access vq metadata through kernel virtual address
...optimize away the _lock from
set_page_dirty if it's anon/hugetlbfs/tmpfs, would be nice if there
was a clean way to do that.
Now assuming we don't nak the use on ext4 VM_SHARED and we stick to
set_page_dirty_lock for such case: could you recap how that
__writepage ext4 crash was solved if try_to_free_buffers() ran on a
pinned GUP page (in our vhost case try_to_unmap would have gotten rid
of the pins through the mmu notifier and the page would have been
freed just fine).
The first two things that come to mind are that we can easily forbid
the try_to_free_buffers() if the page might be pinned by GUP, it...
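A rough sketch of the kind of guard being proposed here; try_to_free_buffers_unless_pinned() is a made-up wrapper, and page_maybe_dma_pinned() is the helper later kernels grew for exactly this kind of check (it is not available in the kernels this thread was written against):

#include <linux/mm.h>           /* page_maybe_dma_pinned() */
#include <linux/buffer_head.h>  /* try_to_free_buffers() */

/*
 * Hypothetical wrapper, assuming the classic try_to_free_buffers(struct page *)
 * signature: refuse to strip buffer heads from a page that may still be
 * pinned for DMA by GUP/pin_user_pages().
 */
static int try_to_free_buffers_unless_pinned(struct page *page)
{
        if (page_maybe_dma_pinned(page))
                return 0;       /* behave as if the buffers were busy */
        return try_to_free_buffers(page);
}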
2019 Mar 07
1
[RFC PATCH V2 5/5] vhost: access vq metadata through kernel virtual address
On Thu, Mar 07, 2019 at 10:34:39AM -0500, Michael S. Tsirkin wrote:
> On Thu, Mar 07, 2019 at 10:45:57AM +0800, Jason Wang wrote:
> >
> > On 2019/3/7 12:31, Michael S. Tsirkin wrote:
> > > > +static void vhost_set_vmap_dirty(struct vhost_vmap *used)
> > > > +{
> > > > + int i;
> > > > +
> > > > + for (i = 0; i <
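The loop is cut off in this excerpt; the fuller quotes of the same hunk later on this page show the whole helper, which amounts to the following (the comment is added here and is not part of the posted patch):

static void vhost_set_vmap_dirty(struct vhost_vmap *used)
{
        int i;

        /* Dirty every page backing the vmap'ed used ring so writeback
         * cannot lose the stores done through the kernel mapping. */
        for (i = 0; i < used->npages; i++)
                set_page_dirty_lock(used->pages[i]);
}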
2019 Mar 14
2
[RFC PATCH V2 0/5] vhost: accelerate metadata access through vmap()
On 2019/3/14 6:42, Michael S. Tsirkin wrote:
>>>>> Which means after we fix vhost to add the flush_dcache_page after
>>>>> kunmap, Parisc will get a double hit (but it also means Parisc
>>>>> was the only one of those archs that needed explicit cache flushes,
>>>>> where vhost worked correctly so far... so it kind of proves your
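For readers following along, the access pattern under discussion is roughly the one below. The function name and call site are made up for illustration; only kmap(), kunmap() and flush_dcache_page() are real kernel APIs, and the flush-after-kunmap ordering matches what the quoted message describes:

#include <linux/highmem.h>      /* kmap(), kunmap(), flush_dcache_page() */
#include <linux/string.h>

/* Illustrative helper: write into a guest page through a temporary kernel
 * mapping, then flush the D-cache so the store is visible through the
 * userspace mapping.  The flush is a no-op on x86 but required on
 * architectures like Parisc. */
static void copy_to_guest_page(struct page *page, unsigned int offset,
                               const void *src, size_t len)
{
        void *vaddr = kmap(page);

        memcpy(vaddr + offset, src, len);
        kunmap(page);
        flush_dcache_page(page);
}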
2019 Mar 07
3
[RFC PATCH V2 5/5] vhost: access vq metadata through kernel virtual address
On Thu, Mar 07, 2019 at 10:45:57AM +0800, Jason Wang wrote:
>
> On 2019/3/7 12:31, Michael S. Tsirkin wrote:
> > > +static void vhost_set_vmap_dirty(struct vhost_vmap *used)
> > > +{
> > > + int i;
> > > +
> > > + for (i = 0; i < used->npages; i++)
> > > + set_page_dirty_lock(used->pages[i]);
> > This seems to rely on
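For context on set_page_dirty_lock(): it is essentially set_page_dirty() bracketed by the page lock, which is why it is the variant used in teardown paths that do not already hold the lock. Schematically (a sketch of its behaviour, not code to copy):

#include <linux/mm.h>
#include <linux/pagemap.h>      /* lock_page(), unlock_page() */

/* Schematic of what set_page_dirty_lock() does internally; see
 * mm/page-writeback.c for the real implementation. */
static int set_page_dirty_lock_sketch(struct page *page)
{
        int ret;

        lock_page(page);
        ret = set_page_dirty(page);
        unlock_page(page);
        return ret;
}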
2019 Mar 14
0
[RFC PATCH V2 0/5] vhost: accelerate metadata access through vmap()
...implementation that will work equally
optimally on 32bit with <= 700M of RAM.
Talking to Jerome about the set_page_dirty issue, he raised the point
of what happens if two threads call an mmu notifier invalidate
simultaneously. The first mmu notifier could call set_page_dirty and
then proceed into try_to_free_buffers or page_mkclean, and the
concurrent mmu notifier that arrives second must not call
set_page_dirty a second time.
With KVM sptes mappings and vhost mappings you would call
set_page_dirty (if you invoked gup with FOLL_WRITE) only when
effectively tearing down any secondary mapping (you...
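One way to get that "only whoever actually tears down the secondary mapping dirties the pages" behaviour is an atomic hand-off. Everything below (structure layout and names) is an illustrative sketch, not code from this thread:

#include <linux/atomic.h>
#include <linux/mm.h>

/* Illustrative structure only; the RFC's real struct vhost_vmap differs. */
struct vmap_tracker {
        atomic_t        mapped;         /* 1 while the secondary mapping is live */
        struct page     **pages;
        int             npages;
};

static void vmap_teardown_once(struct vmap_tracker *map)
{
        int i;

        /* Atomically claim the teardown so two racing invalidate
         * callbacks cannot both dirty and unpin the pages. */
        if (!atomic_xchg(&map->mapped, 0))
                return;

        for (i = 0; i < map->npages; i++) {
                set_page_dirty_lock(map->pages[i]);
                put_page(map->pages[i]);
        }
}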
2019 Mar 07
3
[RFC PATCH V2 5/5] vhost: access vq metadata through kernel virtual address
...> set_page_dirty if it's anon/hugetlbfs/tmpfs, would be nice if there
> was a clean way to do that.
>
> Now assuming we don't nak the use on ext4 VM_SHARED and we stick to
> set_page_dirty_lock for such case: could you recap how that
> __writepage ext4 crash was solved if try_to_free_buffers() ran on a
> pinned GUP page (in our vhost case try_to_unmap would have gotten rid
> of the pins through the mmu notifier and the page would have been
> freed just fine).
So for the above the easiest thing is to call set_page_dirty() from
the mmu notifier callback. It is always safe to u...
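In code, that suggestion amounts to something like the sketch below. The structure layout and names are invented here; only the mmu_notifier .invalidate_range signature and the set_page_dirty() call correspond to what is being discussed, and whether the non-locking set_page_dirty() is sufficient in that context is exactly what the rest of the thread debates:

#include <linux/kernel.h>
#include <linux/mm.h>
#include <linux/mmu_notifier.h>

/* Invented layout: a notifier embedded next to the pinned metadata pages. */
struct vq_meta {
        struct mmu_notifier     mn;
        struct page             **pages;
        int                     npages;
};

static void vq_meta_invalidate_range(struct mmu_notifier *mn,
                                     struct mm_struct *mm,
                                     unsigned long start, unsigned long end)
{
        struct vq_meta *m = container_of(mn, struct vq_meta, mn);
        int i;

        /* Dirty the pages while the invalidation still serializes us
         * against the primary mapping going away. */
        for (i = 0; i < m->npages; i++)
                set_page_dirty(m->pages[i]);
}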
2001 Nov 06
2
2.2.14 integration with lvs
...deactivate_page(page);
+ /*deactivate_page(page);
+ */
page_cache_release(page);
fail:
return -1;
and
--- fs/buffer.c.orig Tue Oct 30 11:15:43 2001
+++ fs/buffer.c Tue Oct 30 11:03:31 2001
@@ -2529,8 +2529,8 @@ busy_buffer_page:
return 0;
}
EXPORT_SYMBOL(try_to_free_buffers);
-EXPORT_SYMBOL(buffermem_pages);
-
+/* EXPORT_SYMBOL(buffermem_pages);
+*/
/* ================== Debugging =================== */
void show_buffers(void)
This makes it all compile and work, basically.
Since I am not a real kernel hacker, I don't really know what the
impact of these modifications (...
2004 Aug 06
1
ices crash
...ing request at virtual address 00090020
Feb 6 14:26:47 bauhaus kernel: current->tss.cr3 = 01a61000, %%cr3 = 01a61000
Feb 6 14:26:47 bauhaus kernel: *pde = 00000000
Feb 6 14:26:47 bauhaus kernel: Oops: 0000
Feb 6 14:26:47 bauhaus kernel: CPU: 0
Feb 6 14:26:47 bauhaus kernel: EIP: 0010:[try_to_free_buffers+16/180]
Feb 6 14:26:47 bauhaus kernel: EFLAGS: 00010296
Feb 6 14:26:47 bauhaus kernel: eax: 00009470 ebx: c037fc18 ecx: 0000cbc0 edx: 00000000
Feb 6 14:26:47 bauhaus kernel: esi: 00090000 edi: 00090000 ebp: c037fc18 esp: c1c67e1c
Feb 6 14:26:47 bauhaus kernel: ds: 0018 es: 0018...
2001 Jul 29
1
2.2.19/0.0.7a: bonnie -> VM problems
SYSTEM:
RH 6.x based system, 2.2.19-6.2.7 RH errata kernel + 0.0.7a patch; I rebuilt the rpm
for i686. Celeron 466, 64MB RAM, PIIX4.
The root fs is on software RAID1 ext2, plus 6 additional filesystems on software RAID1 ext2.
There's a 3rd HD, not mirrored, which is mounted ext3.
EXT3-fs: mounted filesystem with ordered data mode.
I enabled journal with tune2fs -j with unmounted fs.
The 3 HDs are tuned with
2019 Mar 08
0
[RFC PATCH V2 5/5] vhost: access vq metadata through kernel virtual address
...doesn't even
> exist for transient pins like O_DIRECT (does it work by luck?), but
> with mmu notifiers there are no long term pins anyway, so this works
> normally and it's like the memory isn't pinned. In any case I think
> that's a kernel bug in either __writepage or try_to_free_buffers, so I
> would ignore it considering qemu will only use anon memory or tmpfs or
> hugetlbfs as backing store for the virtio ring. It wouldn't make sense
> for qemu to risk triggering I/O on a VM_SHARED ext4, so we shouldn't
> even be exposed to what seems to be an orthogonal ker...
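If one wanted to enforce that "anon, tmpfs or hugetlbfs only" assumption in code rather than rely on qemu's behaviour, a setup-time check could look roughly like the sketch below; vmap_backing_is_safe() is a made-up wrapper, while vma_is_anonymous(), vma_is_shmem() and is_file_hugepages() are existing kernel helpers:

#include <linux/mm.h>
#include <linux/hugetlb.h>      /* is_file_hugepages() */

/* Made-up helper: accept only backing stores where the ext4
 * __writepage/try_to_free_buffers interaction cannot occur. */
static bool vmap_backing_is_safe(struct vm_area_struct *vma)
{
        if (vma_is_anonymous(vma))      /* anon memory */
                return true;
        if (vma_is_shmem(vma))          /* tmpfs / shmem */
                return true;
        if (vma->vm_file && is_file_hugepages(vma->vm_file))
                return true;            /* hugetlbfs */
        return false;                   /* e.g. a VM_SHARED ext4 mapping */
}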
2019 Mar 07
3
[RFC PATCH V2 5/5] vhost: access vq metadata through kernel virtual address
...VM_SHARED ext4 page doesn't even
exist for transient pins like O_DIRECT (does it work by luck?), but
with mmu notifiers there are no long term pins anyway, so this works
normally and it's like the memory isn't pinned. In any case I think
that's a kernel bug in either __writepage or try_to_free_buffers, so I
would ignore it considering qemu will only use anon memory or tmpfs or
hugetlbfs as backing store for the virtio ring. It wouldn't make sense
for qemu to risk triggering I/O on a VM_SHARED ext4, so we shouldn't
even be exposed to what seems to be an orthogonal kernel bug.
I suppose...
2019 Mar 07
3
[RFC PATCH V2 5/5] vhost: access vq metadata through kernel virtual address
On Wed, Mar 06, 2019 at 02:18:12AM -0500, Jason Wang wrote:
> +static const struct mmu_notifier_ops vhost_mmu_notifier_ops = {
> + .invalidate_range = vhost_invalidate_range,
> +};
> +
> void vhost_dev_init(struct vhost_dev *dev,
> struct vhost_virtqueue **vqs, int nvqs, int iov_limit)
> {
I also wonder here: when the page is write protected, then
it does not look like
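For readers unfamiliar with the API, the ops table quoted above gets attached to the owning mm roughly as in the sketch below; the mmu_notifier field of struct vhost_dev is an assumption about the RFC's layout, and error handling is omitted:

#include <linux/mmu_notifier.h>
#include <linux/sched.h>        /* current */

/* Sketch only: "mmu_notifier" as a field of struct vhost_dev is assumed
 * here, not quoted from the posted patch. */
static int vhost_register_notifier(struct vhost_dev *dev)
{
        dev->mmu_notifier.ops = &vhost_mmu_notifier_ops;

        /* After this, the MM invokes .invalidate_range whenever it
         * unmaps or write-protects pages of current->mm, which is how
         * vhost learns its pinned/vmap'ed pages are going away. */
        return mmu_notifier_register(&dev->mmu_notifier, current->mm);
}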