search for: contigious

Displaying 20 results from an estimated 67 matches for "contigious".

2016 Apr 28 (2) [RFC PATCH V2 2/2] vhost: device IOTLB API
...the IOTLB invalidation of IOMMU IOTLB and use > >> VHOST_UPDATE_IOTLB to invalidate the possible entry in vhost. > > There's one problem here, and that is that VQs still do not undergo > > translation. In theory VQ could be mapped in such a way > > that it's not contigious in userspace memory. > > I'm not sure I get the issue, current vhost API support setting > desc_user_addr, used_user_addr and avail_user_addr independently. So > looks ok? If not, looks not a problem to device IOTLB API itself. The problem is that addresses are all HVA. Without a...
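The excerpt above turns on translating I/O virtual addresses into host virtual addresses through a per-device IOTLB, since a virtqueue mapped behind an IOMMU need not be contiguous in userspace memory. The following is a hypothetical, userspace-only illustration of that translation step; the struct fields and the linear lookup are placeholders, not the vhost UAPI or the posted patch.

#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

/* Placeholder entry, NOT the vhost UAPI: one cached mapping from a guest
 * I/O virtual address range to the host virtual address backing it. */
struct iotlb_entry {
	uint64_t iova;   /* start of the range in I/O virtual address space */
	uint64_t size;   /* length of the range in bytes */
	uint64_t uaddr;  /* host virtual address backing the range */
};

/* Linear lookup over a tiny table; a real cache would presumably use an
 * interval tree and report misses back to userspace instead. */
static uint64_t iotlb_translate(const struct iotlb_entry *tlb, size_t n,
                                uint64_t iova)
{
	for (size_t i = 0; i < n; i++) {
		if (iova >= tlb[i].iova && iova - tlb[i].iova < tlb[i].size)
			return tlb[i].uaddr + (iova - tlb[i].iova);
	}
	return 0; /* miss */
}

int main(void)
{
	/* Two ranges adjacent in IOVA space but not in HVA space, i.e. a
	 * ring that is not contiguous in userspace memory. */
	struct iotlb_entry tlb[] = {
		{ 0x100000, 0x1000, 0x7f0000000000ULL },
		{ 0x101000, 0x1000, 0x7f0000200000ULL },
	};

	printf("%#llx\n", (unsigned long long)iotlb_translate(tlb, 2, 0x100010));
	printf("%#llx\n", (unsigned long long)iotlb_translate(tlb, 2, 0x101010));
	return 0;
}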
2016 Apr 29 (0) [RFC PATCH V2 2/2] vhost: device IOTLB API
...nvalidation of IOMMU IOTLB and use >>>> VHOST_UPDATE_IOTLB to invalidate the possible entry in vhost. >>> There's one problem here, and that is that VQs still do not undergo >>> translation. In theory VQ could be mapped in such a way >>> that it's not contigious in userspace memory. >> I'm not sure I get the issue, current vhost API support setting >> desc_user_addr, used_user_addr and avail_user_addr independently. So >> looks ok? If not, looks not a problem to device IOTLB API itself. > The problem is that addresses are all HVA....
2003 Nov 20 (1) Large RAM (> 4G) and rsync still dies?
Hello. Hopefully someone can shed some light on this: We've got a production server with _LOTS_ of files on it. The system is a dual XEON with 4GB of RAM. During the evening, the load is very low. Linux shows (via 'top') that approximately 3G of RAM is cached RAM, and 500M is buffered. That, if my understanding is correct, should leave us with 3.5G of available memory to draw
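The writer's estimate treats cached and buffered pages as reclaimable. A quick way to check that arithmetic on such a box is to sum MemFree, Buffers and Cached from /proc/meminfo; this is only a rough proxy (not every cached page is reclaimable) and a sketch rather than a diagnosis of the rsync issue.

#include <stdio.h>
#include <string.h>

int main(void)
{
	FILE *f = fopen("/proc/meminfo", "r");
	char line[128], key[64];
	unsigned long kb, total_kb = 0;

	if (!f)
		return 1;
	while (fgets(line, sizeof(line), f)) {
		if (sscanf(line, "%63s %lu", key, &kb) != 2)
			continue;
		if (!strcmp(key, "MemFree:") || !strcmp(key, "Buffers:") ||
		    !strcmp(key, "Cached:"))
			total_kb += kb;   /* values are reported in kB */
	}
	fclose(f);
	printf("free + buffers + cached: %lu MB\n", total_kb / 1024);
	return 0;
}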
2016 Apr 27 (2) [RFC PATCH V2 2/2] vhost: device IOTLB API
...ace is also in charge of > snooping the IOTLB invalidation of IOMMU IOTLB and use > VHOST_UPDATE_IOTLB to invalidate the possible entry in vhost. There's one problem here, and that is that VQs still do not undergo translation. In theory VQ could be mapped in such a way that it's not contigious in userspace memory. > Signed-off-by: Jason Wang <jasowang at redhat.com> What limits amount of entries that kernel keeps around? Do we want at least a mod parameter for this? > --- > drivers/vhost/net.c | 6 +- > drivers/vhost/vhost.c | 301 +++++++++++++++++++...
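The reviewer's question about bounding the number of entries the kernel keeps around is the kind of thing a module parameter handles. Purely as a sketch of that plumbing (the module, the name and the default below are made up, and none of this is from the patch under review), the standard module_param() mechanism would look like:

#include <linux/init.h>
#include <linux/module.h>
#include <linux/moduleparam.h>

/* Hypothetical cap: name, default value and the module itself are made up. */
static unsigned int iotlb_max_entries = 2048;
module_param(iotlb_max_entries, uint, 0444);
MODULE_PARM_DESC(iotlb_max_entries,
		 "Upper bound on cached device IOTLB entries (illustrative)");

static int __init iotlb_cap_init(void)
{
	/* A real cache would consult this on insertion and evict an old
	 * entry once the count reaches the cap instead of growing freely. */
	pr_info("iotlb cap: %u entries\n", iotlb_max_entries);
	return 0;
}

static void __exit iotlb_cap_exit(void)
{
}

module_init(iotlb_cap_init);
module_exit(iotlb_cap_exit);
MODULE_LICENSE("GPL");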
2016 Jul 28 (2) [PATCH v2 repost 4/7] virtio-balloon: speed up inflate/deflate process
...bitmap is too small, it means we have > to traverse a long list for many times, and it's bad for performance. > > Thanks! > Liang There are all your implementation decisions though. If guest memory is so fragmented that you only have order 0 4k pages, then allocating a huge 1M contigious chunk is very problematic in and of itself. Most people rarely migrate and do not care how fast that happens. Wasting a large chunk of memory (and it's zeroed for no good reason, so you actually request host memory for it) for everyone to speed it up when it does happen is not really an option...
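To make the size trade-off in that excerpt concrete, here is a back-of-the-envelope check assuming one bit per 4 KiB guest page (an assumption about the bitmap layout, not a detail quoted from the patch): a 1 MiB bitmap covers 32 GiB of guest RAM, but obtaining it needs an order-8 physically contiguous allocation, which is exactly what a fragmented guest cannot satisfy.

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	const uint64_t page_size   = 4096;      /* assumed 4 KiB guest pages */
	const uint64_t bitmap_size = 1 << 20;   /* the 1 MiB chunk discussed */
	uint64_t covered = bitmap_size * 8 * page_size;  /* one bit per page */
	uint64_t order = 0;

	/* Allocation order of the chunk itself: 1 MiB / 4 KiB = 256 pages. */
	for (uint64_t pages = bitmap_size / page_size; pages > 1; pages >>= 1)
		order++;

	printf("1 MiB bitmap covers %llu GiB of guest RAM\n",
	       (unsigned long long)(covered >> 30));
	printf("but needs an order-%llu contiguous allocation\n",
	       (unsigned long long)order);
	return 0;
}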
2016 Apr 21 (4) [PATCH V2 RFC] fixup! virtio: convert to use DMA api
...gmane.comp.emulators.qemu/403467 virtio: convert to use DMA api Tested with patchset http://article.gmane.org/gmane.linux.kernel.virtualization/27545 virtio-pci: iommu support (note: bit number has been kept at 34 intentionally to match posted guest code. a non-RFC version will renumber bits to be contigious). changes from v1: drop PASSTHROUGH flag The interaction between virtio and DMA API is messy. On most systems with virtio, physical addresses match bus addresses, and it doesn't particularly matter whether we use the DMA API. On some systems, including Xen and any system with a physical...
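The cover letter's point is that whether buffer addresses go through the DMA API is gated on a device feature bit. Below is a hypothetical, userspace-style sketch of that gating only; the bit number 34 is the placeholder the RFC itself says will be renumbered, and none of the names come from the posted patch.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Placeholder bit from the cover letter; the RFC notes it will move. */
#define HYPOTHETICAL_F_IOMMU_PLATFORM 34

/* Only translate through the DMA API when the device says the platform
 * really has an IOMMU in front of it; otherwise hand over guest physical
 * addresses directly, as most virtio setups do. */
static bool use_dma_api(uint64_t device_features)
{
	return device_features & (1ULL << HYPOTHETICAL_F_IOMMU_PLATFORM);
}

int main(void)
{
	uint64_t features = 1ULL << HYPOTHETICAL_F_IOMMU_PLATFORM;

	printf("translate through DMA API: %s\n",
	       use_dma_api(features) ? "yes" : "no");
	printf("translate through DMA API: %s\n",
	       use_dma_api(0) ? "yes" : "no");
	return 0;
}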
2016 Apr 21 (1) [PATCH V2 RFC] fixup! virtio: convert to use DMA api
...api > > > > Tested with patchset > > http://article.gmane.org/gmane.linux.kernel.virtualization/27545 > > virtio-pci: iommu support (note: bit number has been kept at 34 > > intentionally to match posted guest code. a non-RFC version will > > renumber bits to be contigious). > > > > changes from v1: > > drop PASSTHROUGH flag > > > > The interaction between virtio and DMA API is messy. > > > > On most systems with virtio, physical addresses match bus addresses, > > and it doesn't particularly matter whether we...
2010 Sep 14 (1) [PATCH] vhost: max s/g to match qemu
Qemu supports up to UIO_MAXIOV s/g so we have to match that because guest drivers may rely on this. Allocate indirect and log arrays dynamically to avoid using too much contigious memory and make the length of hdr array to match the header length since each iovec entry has a least one byte. Test with copying large files w/ and w/o migration in both linux and windows guests. Signed-off-by: Jason Wang <jasowang at redhat.com> --- drivers/vhost/net.c | 2 +- drive...
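To put a number on the contiguous-memory concern in that changelog: UIO_MAXIOV is 1024, so a single iovec array of that size is 16 KiB on a 64-bit host, and embedding several such arrays per virtqueue in one structure forces a correspondingly large contiguous allocation. A minimal sketch of the arithmetic and of allocating the arrays separately at setup time instead (simplified types and names, not the patch itself):

#include <stdio.h>
#include <stdlib.h>
#include <sys/uio.h>            /* struct iovec */

#ifndef UIO_MAXIOV
#define UIO_MAXIOV 1024         /* value used by the kernel UAPI */
#endif

int main(void)
{
	struct iovec *indirect, *log;

	printf("one UIO_MAXIOV iovec array: %zu bytes\n",
	       (size_t)UIO_MAXIOV * sizeof(struct iovec));

	/* Allocated per queue on demand rather than embedded in one big
	 * structure; the real log array uses a different element type,
	 * iovec is used here only to keep the sketch short. */
	indirect = calloc(UIO_MAXIOV, sizeof(struct iovec));
	log = calloc(UIO_MAXIOV, sizeof(struct iovec));
	if (!indirect || !log)
		return 1;

	free(indirect);
	free(log);
	return 0;
}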
2016 Jul 28 (3) [virtio-dev] Re: [PATCH v2 repost 4/7] virtio-balloon: speed up inflate/deflate process
...it's bad > > for performance. > > > > > > Thanks! > > > Liang > > > > There are all your implementation decisions though. > > > > If guest memory is so fragmented that you only have order 0 4k pages, then > > allocating a huge 1M contigious chunk is very problematic in and of itself. > > > > The memory is allocated in the probe stage. This will not happen if the driver is > loaded when booting the guest. > > > Most people rarely migrate and do not care how fast that happens. > > Wasting a large chunk...
2016 Jul 28 (0) [virtio-dev] Re: [PATCH v2 repost 4/7] virtio-balloon: speed up inflate/deflate process
...e to traverse a long list for many times, and it's bad > for performance. > > > > Thanks! > > Liang > > There are all your implementation decisions though. > > If guest memory is so fragmented that you only have order 0 4k pages, then > allocating a huge 1M contigious chunk is very problematic in and of itself. > The memory is allocated in the probe stage. This will not happen if the driver is loaded when booting the guest. > Most people rarely migrate and do not care how fast that happens. > Wasting a large chunk of memory (and it's zeroed for...
2007 Nov 15 (0) [patch 14/19] xfs: eagerly remove vmap mappings to avoid upsetting Xen
-stable review patch. If anyone has any objections, please let us know. ------------------ From: Jeremy Fitzhardinge <jeremy@goop.org> patch ace2e92e193126711cb3a83a3752b2c5b8396950 in mainline. XFS leaves stray mappings around when it vmaps memory to make it virtually contigious. This upsets Xen if one of those pages is being recycled into a pagetable, since it finds an extra writable mapping of the page. This patch solves the problem in a brute force way, by making XFS always eagerly unmap its mappings. [ Stable: This works around a bug in 2.6.23. We may come up with...
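The changelog above is about tearing temporary mappings down eagerly. As a generic sketch of that pattern only (a toy module, not the XFS patch; the vmap()/vunmap() pairing is the point), the idea is to map the scattered pages, use the virtually contiguous view, and drop it immediately so no stale writable alias of the pages survives for a hypervisor like Xen to trip over.

#include <linux/init.h>
#include <linux/module.h>
#include <linux/gfp.h>
#include <linux/mm.h>
#include <linux/vmalloc.h>
#include <linux/string.h>

#define NPAGES 4

static int __init eager_vmap_demo_init(void)
{
	struct page *pages[NPAGES];
	void *va;
	int i;

	for (i = 0; i < NPAGES; i++) {
		pages[i] = alloc_page(GFP_KERNEL);
		if (!pages[i])
			goto free_pages;
	}

	/* Map the scattered pages so they appear contiguous... */
	va = vmap(pages, NPAGES, VM_MAP, PAGE_KERNEL);
	if (!va)
		goto free_pages;

	memset(va, 0, NPAGES * PAGE_SIZE);   /* ...use the mapping... */
	vunmap(va);                          /* ...and drop it right away. */

free_pages:
	while (--i >= 0)
		__free_page(pages[i]);
	return 0;
}

static void __exit eager_vmap_demo_exit(void)
{
}

module_init(eager_vmap_demo_init);
module_exit(eager_vmap_demo_exit);
MODULE_LICENSE("GPL");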
2009 Oct 04 (1) INDIRECT and NEXT
Hi! I note that chaining INDIRECT descriptors with NEXT currently is broken in lguest, because current ring index gets overwritten. It does not matter with current virtio in guest, but I think it's worth fixing: for example for when we have trouble locating a request in a physically contigious region. Makes sense? -- MST
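The bug described is about a descriptor that is both INDIRECT and chained with NEXT: the walker has to remember where the outer chain resumes before it descends into the indirect table, rather than letting the indirect walk clobber the current ring index. A stripped-down, hypothetical walker (simplified flags and layout, not the lguest or virtio source) that saves the continuation first:

#include <stdint.h>
#include <stdio.h>

#define F_NEXT     1u
#define F_INDIRECT 4u

struct desc {
	uint32_t id;      /* stand-in for addr/len, just to show the order */
	uint16_t flags;
	uint16_t next;
};

static void walk(const struct desc *ring, const struct desc *indirect,
		 unsigned int indirect_len, unsigned int head)
{
	unsigned int i = head;

	for (;;) {
		const struct desc *d = &ring[i];
		/* Save the outer continuation BEFORE processing the entry. */
		unsigned int resume = (d->flags & F_NEXT) ? d->next : UINT16_MAX;

		if (d->flags & F_INDIRECT) {
			for (unsigned int j = 0; j < indirect_len; j++)
				printf("indirect buffer %u\n", indirect[j].id);
		} else {
			printf("direct buffer %u\n", d->id);
		}

		if (resume == UINT16_MAX)
			break;
		i = resume;   /* continue the outer chain, not the indirect one */
	}
}

int main(void)
{
	struct desc indirect[] = { {10, 0, 0}, {11, 0, 0} };
	struct desc ring[] = {
		{0, F_INDIRECT | F_NEXT, 1},  /* indirect AND chained onward */
		{1, 0, 0},
	};

	walk(ring, indirect, 2, 0);
	return 0;
}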