search for: vm_share

Displaying 16 results from an estimated 31 matches for "vm_share".

2019 Mar 07
0
[RFC PATCH V2 5/5] vhost: access vq metadata through kernel virtual address
...o implement writeback and are obviously safe too. > If so then set dirty is mostly useless it would only be used for swap > but for this you can use an unlocked version to set the page dirty. It's not a practical issue but a security issue perhaps: you can change the KVM userland to run on VM_SHARED ext4 as guest physical memory, you could do that with the qemu command line that is used to place it on tmpfs or hugetlbfs for example and some proprietary KVM userland may do so for other reasons. In general it shouldn't be possible to crash the kernel with this, and it wouldn't be nice to f...
2019 Mar 07
3
[RFC PATCH V2 5/5] vhost: access vq metadata through kernel virtual address
...obviously safe too. > > > If so then set dirty is mostly useless it would only be used for swap > > but for this you can use an unlocked version to set the page dirty. > > It's not a practical issue but a security issue perhaps: you can > change the KVM userland to run on VM_SHARED ext4 as guest physical > memory, you could do that with the qemu command line that is used to > place it on tmpfs or hugetlbfs for example and some proprietary KVM > userland may do so for other reasons. In general it shouldn't be possible > to crash the kernel with this, and it would...
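To make the scenario concrete, "running on VM_SHARED ext4" here just means the userland backs guest physical memory with a MAP_SHARED mapping of a regular file instead of tmpfs or hugetlbfs. A minimal userspace sketch of such a backing allocation (path and error handling are illustrative, not from the thread):

    #include <fcntl.h>
    #include <sys/mman.h>
    #include <unistd.h>

    /* Back "guest RAM" with a MAP_SHARED file on ext4. Any long-term GUP
     * pin of these pages taken by the kernel is then a pin of file-backed
     * VM_SHARED memory, which is the hazardous case discussed above. */
    static void *alloc_guest_ram(const char *path, size_t size)
    {
        int fd = open(path, O_RDWR | O_CREAT, 0600);
        void *ram;

        if (fd < 0)
            return MAP_FAILED;
        if (ftruncate(fd, size) < 0) {
            close(fd);
            return MAP_FAILED;
        }
        ram = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        close(fd); /* the mapping survives the close */
        return ram;
    }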
2011 Aug 12
2
sr_backened_failure_73
Hi, I have created the NFS storage. I installed XCP on two systems, and to create the shared storage I made one system the server and the other the client; from the client I am able to mount the shared storage. Server: 10.10.34.133, client: 10.10.33.220. From the client, showmount -e 10.10.34.133 shows the exports list, localhost.localdomain, and
2019 Mar 07
3
[RFC PATCH V2 5/5] vhost: access vq metadata through kernel virtual address
On Thu, Mar 07, 2019 at 10:45:57AM +0800, Jason Wang wrote: > > On 2019/3/7 12:31, Michael S. Tsirkin wrote: > > > +static void vhost_set_vmap_dirty(struct vhost_vmap *used) > > > +{ > > > + int i; > > > + > > > + for (i = 0; i < used->npages; i++) > > > + set_page_dirty_lock(used->pages[i]); > > This seems to rely on
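Reassembled from the quoted diff (same content, readable form), the hunk under discussion is:

    static void vhost_set_vmap_dirty(struct vhost_vmap *used)
    {
        int i;

        for (i = 0; i < used->npages; i++)
            set_page_dirty_lock(used->pages[i]);
    }

Every page of the vmap'd used ring is dirtied at teardown; the thread's concern is whether doing that at that point is safe for file-backed pages.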
2019 Mar 07
1
[RFC PATCH V2 5/5] vhost: access vq metadata through kernel virtual address
On Thu, Mar 07, 2019 at 10:34:39AM -0500, Michael S. Tsirkin wrote: > On Thu, Mar 07, 2019 at 10:45:57AM +0800, Jason Wang wrote: > > > > On 2019/3/7 12:31, Michael S. Tsirkin wrote: > > > > +static void vhost_set_vmap_dirty(struct vhost_vmap *used) > > > > +{ > > > > + int i; > > > > + > > > > + for (i = 0; i <
2019 Mar 08
0
[RFC PATCH V2 5/5] vhost: access vq metadata through kernel virtual address
...e for range_end(). > >> That's a separate issue from set_page_dirty when memory is file backed. > Yes. I don't yet know why the ext4 internal __writepage cannot > re-create the bh if they've been freed by the VM and why such a race > where the bh are freed for a pinned VM_SHARED ext4 page doesn't even > exist for transient pins like O_DIRECT (does it work by luck?), but > with mmu notifiers there are no long term pins anyway, so this works > normally and it's like the memory isn't pinned. In any case I think > that's a kernel bug in either __wr...
2019 Mar 07
3
[RFC PATCH V2 5/5] vhost: access vq metadata through kernel virtual address
...ver at all, the problem is just the mutex. > That's a separate issue from set_page_dirty when memory is file backed. Yes. I don't yet know why the ext4 internal __writepage cannot re-create the bh if they've been freed by the VM and why such a race where the bh are freed for a pinned VM_SHARED ext4 page doesn't even exist for transient pins like O_DIRECT (does it work by luck?), but with mmu notifiers there are no long term pins anyway, so this works normally and it's like the memory isn't pinned. In any case I think that's a kernel bug in either __writepage or try_to_fr...
2019 Mar 07
3
[RFC PATCH V2 5/5] vhost: access vq metadata through kernel virtual address
On Wed, Mar 06, 2019 at 02:18:12AM -0500, Jason Wang wrote: > +static const struct mmu_notifier_ops vhost_mmu_notifier_ops = { > + .invalidate_range = vhost_invalidate_range, > +}; > + > void vhost_dev_init(struct vhost_dev *dev, > struct vhost_virtqueue **vqs, int nvqs, int iov_limit) > { I also wonder here: when a page is write-protected then it does not look like
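For readability, the quoted hunk reassembles to (the function body is cut off in the excerpt):

    static const struct mmu_notifier_ops vhost_mmu_notifier_ops = {
        .invalidate_range = vhost_invalidate_range,
    };

    void vhost_dev_init(struct vhost_dev *dev,
                        struct vhost_virtqueue **vqs, int nvqs, int iov_limit)
    {
        /* ... body not shown in the excerpt ... */
    }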
2020 May 29
1
[PATCH 4/6] vhost_vdpa: support doorbell mapping via mmap
...const struct vdpa_config_ops *ops = vdpa->config; > + struct vdpa_notification_area notify; > + int index = vma->vm_pgoff; > + > + if (vma->vm_end - vma->vm_start != PAGE_SIZE) > + return -EINVAL; > + if ((vma->vm_flags & VM_SHARED) == 0) > + return -EINVAL; > + if (vma->vm_flags & VM_READ) > + return -EINVAL; > + if (index > 65535) > + return -EINVAL; > + if (!ops->get_vq_notification) > + return -ENOTSUPP; > + ...
2024 Jan 24
1
[PATCH] mm: Remove double faults once write a device pfn
...n that this problem only exists on the arm64 platform. >> >> Ok, that makes at least a little bit more sense. >> >> >> >>> Because on the arm64 platform the PTE_RDONLY is automatically attached >> >>> to the userspace pte entries even with VM_WRITE + VM_SHARED. >> >>> The PTE_RDONLY needs to be cleared in vmf_insert_pfn_prot. However >> >>> vmf_insert_pfn_prot does not make the pte writable, passing false >> >>> @mkwrite to insert_pfn. >> >> Question is why is arm64 doing this? As far as I can see th...
2024 Jan 23
2
[PATCH] mm: Remove double faults once write a device pfn
...hat? How did you get to this conclusion? > Sorry. I did not mention that this problem only exists on the arm64 platform. Ok, that makes at least a little bit more sense. > Because on the arm64 platform the PTE_RDONLY is automatically attached to > the userspace pte entries even with VM_WRITE + VM_SHARED. > The PTE_RDONLY needs to be cleared in vmf_insert_pfn_prot. However > vmf_insert_pfn_prot does not make the pte writable, passing false @mkwrite > to insert_pfn. Question is why is arm64 doing this? As far as I can see they must have some hardware reason for that. The mkwrite parameter...
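The call path at issue, sketched as a hypothetical driver fault handler (my_drv_pfn_for() is an invented helper; the vmf_insert_pfn_prot() signature is the mainline one): vmf_insert_pfn_prot() reaches insert_pfn() with mkwrite == false, so the installed PTE is not made writable, and on arm64 PTE_RDONLY stays set even for a VM_WRITE | VM_SHARED mapping, forcing a second fault on the first write.

    static vm_fault_t my_drv_fault(struct vm_fault *vmf)
    {
        struct vm_area_struct *vma = vmf->vma;
        unsigned long pfn = my_drv_pfn_for(vmf->pgoff); /* hypothetical helper */

        /* insert_pfn() runs with mkwrite == false, so on arm64 the entry
         * keeps PTE_RDONLY; the next write to this address faults again
         * (the "double fault" the patch title refers to). */
        return vmf_insert_pfn_prot(vma, vmf->address, pfn, vma->vm_page_prot);
    }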
2024 Jan 24
2
[PATCH] mm: Remove double faults once write a device pfn
...>>> Sorry. I did not mention that this problem only exists on the arm64 platform. >> Ok, that makes at least a little bit more sense. >> >>> Because on the arm64 platform the PTE_RDONLY is automatically attached to >>> the userspace pte entries even with VM_WRITE + VM_SHARED. >>> The PTE_RDONLY needs to be cleared in vmf_insert_pfn_prot. However >>> vmf_insert_pfn_prot does not make the pte writable, passing false >>> @mkwrite to insert_pfn. >> Question is why is arm64 doing this? As far as I can see they must have some >> hardware...
2020 May 29
0
[PATCH 4/6] vhost_vdpa: support doorbell mapping via mmap
...;vm_file->private_data; + struct vdpa_device *vdpa = v->vdpa; + const struct vdpa_config_ops *ops = vdpa->config; + struct vdpa_notification_area notify; + int index = vma->vm_pgoff; + + if (vma->vm_end - vma->vm_start != PAGE_SIZE) + return -EINVAL; + if ((vma->vm_flags & VM_SHARED) == 0) + return -EINVAL; + if (vma->vm_flags & VM_READ) + return -EINVAL; + if (index > 65535) + return -EINVAL; + if (!ops->get_vq_notification) + return -ENOTSUPP; + + /* To be safe and easily modelled by userspace, we only + * support the doorbell which sits on the page bounda...
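Reassembled from the quoted patch for readability (the enclosing function signature is inferred from context; the excerpt cuts off inside the final comment):

    static int vhost_vdpa_mmap(struct file *file, struct vm_area_struct *vma)
    {
        struct vhost_vdpa *v = vma->vm_file->private_data;
        struct vdpa_device *vdpa = v->vdpa;
        const struct vdpa_config_ops *ops = vdpa->config;
        struct vdpa_notification_area notify;
        int index = vma->vm_pgoff;

        if (vma->vm_end - vma->vm_start != PAGE_SIZE)
            return -EINVAL;
        if ((vma->vm_flags & VM_SHARED) == 0)
            return -EINVAL;
        if (vma->vm_flags & VM_READ)
            return -EINVAL;
        if (index > 65535)
            return -EINVAL;
        if (!ops->get_vq_notification)
            return -ENOTSUPP;

        /* To be safe and easily modelled by userspace, we only
         * support the doorbell which sits on the page bounda...
         * (rest of the excerpt is cut off) */
    }

Note the mapping must be exactly one page, shared, and write-only: VM_SHARED is required and VM_READ rejected, which fits a doorbell register that userspace only ever writes.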
2020 May 29
0
[PATCH 4/6] vhost_vdpa: support doorbell mapping via mmap
...nst struct vdpa_config_ops *ops = vdpa->config; >> + struct vdpa_notification_area notify; >> + int index = vma->vm_pgoff; >> + >> + if (vma->vm_end - vma->vm_start != PAGE_SIZE) >> + return -EINVAL; >> + if ((vma->vm_flags & VM_SHARED) == 0) >> + return -EINVAL; >> + if (vma->vm_flags & VM_READ) >> + return -EINVAL; >> + if (index > 65535) >> + return -EINVAL; >> + if (!ops->get_vq_notification) >> + return -ENOTSUPP; >> + >>...
2019 Mar 07
0
[RFC PATCH V2 5/5] vhost: access vq metadata through kernel virtual address
...et_page_dirty from the mmu notifier (but not after). > > Hence why my suggestion is a special vunmap that calls set_page_dirty > on the page from the mmu notifier. Agreed, that will solve all issues in vhost context with regard to set_page_dirty, including the case where the memory is backed by VM_SHARED ext4. Thanks! Andrea
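A minimal sketch of the agreed direction, assuming the vhost_vmap layout quoted earlier in the thread (the function and field names are illustrative): tear down the kernel alias and dirty the pages while still inside the mmu notifier invalidation, not after it returns.

    /* Invoked from the mmu notifier invalidation path, where dirtying the
     * pages is still safe; doing it after the notifier returns is not. */
    static void vhost_vunmap_set_dirty(struct vhost_vmap *map)
    {
        int i;

        vunmap(map->addr);                 /* drop the kernel alias first */
        for (i = 0; i < map->npages; i++)
            set_page_dirty(map->pages[i]); /* per the suggestion quoted above */
    }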
2019 Mar 08
2
[RFC PATCH V2 5/5] vhost: access vq metadata through kernel virtual address
...he mmu notifier (but not after). >> >> Hence why my suggestion is a special vunmap that calls set_page_dirty >> on the page from the mmu notifier. > Agreed, that will solve all issues in vhost context with regard to > set_page_dirty, including the case where the memory is backed by VM_SHARED ext4. > > Thanks! > Andrea