search for: contigiously

Displaying results from an estimated 67 matches for "contigiously".

2016 Apr 28 (2): [RFC PATCH V2 2/2] vhost: device IOTLB API
On Thu, Apr 28, 2016 at 02:37:16PM +0800, Jason Wang wrote: > > > On 04/27/2016 07:45 PM, Michael S. Tsirkin wrote: > > On Fri, Mar 25, 2016 at 10:34:34AM +0800, Jason Wang wrote: > >> This patch tries to implement an device IOTLB for vhost. This could be > >> used with for co-operation with userspace(qemu) implementation of DMA > >> remapping. >

2016 Apr 29 (0): [RFC PATCH V2 2/2] vhost: device IOTLB API
On 04/28/2016 10:43 PM, Michael S. Tsirkin wrote: > On Thu, Apr 28, 2016 at 02:37:16PM +0800, Jason Wang wrote: >> >> On 04/27/2016 07:45 PM, Michael S. Tsirkin wrote: >>> On Fri, Mar 25, 2016 at 10:34:34AM +0800, Jason Wang wrote: >>>> This patch tries to implement an device IOTLB for vhost. This could be >>>> used with for co-operation with

2003 Nov 20 (1): Large RAM (> 4G) and rsync still dies?
Hello. Hopefully someone can shed some light on this: We've got a production server with _LOTS_ of files on it. The system is a dual XEON with 4GB of RAM. During the evening, the load is very low. Linux shows (via 'top') that approximately 3G of RAM is cached RAM, and 500M is buffered. That, if my understanding is correct, should leave us with 3.5G of available memory to draw
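
The poster's estimate (3G cached + 500M buffered + free, roughly 3.5G available) is the classic pre-MemAvailable accounting: page-cache and buffer pages are reclaimable, so they count as available. A minimal sketch, not from this thread, that reproduces the arithmetic from /proc/meminfo:

    #include <stdio.h>

    /* Sum MemFree + Buffers + Cached from /proc/meminfo: the old rule
     * of thumb for "available" memory used in the post above. */
    int main(void)
    {
        FILE *f = fopen("/proc/meminfo", "r");
        char line[256];
        long long kb, avail = 0;

        if (!f)
            return 1;
        while (fgets(line, sizeof(line), f)) {
            if (sscanf(line, "MemFree: %lld", &kb) == 1 ||
                sscanf(line, "Buffers: %lld", &kb) == 1 ||
                sscanf(line, "Cached: %lld", &kb) == 1)
                avail += kb;
        }
        fclose(f);
        printf("roughly available: %lld MB\n", avail / 1024);
        return 0;
    }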

2016 Apr 27 (2): [RFC PATCH V2 2/2] vhost: device IOTLB API
On Fri, Mar 25, 2016 at 10:34:34AM +0800, Jason Wang wrote: > This patch tries to implement an device IOTLB for vhost. This could be > used with for co-operation with userspace(qemu) implementation of DMA > remapping. > > The idea is simple. When vhost meets an IOTLB miss, it will request > the assistance of userspace to do the translation, this is done > through: > >
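
The excerpt is cut off before the interface list, so the following is only a toy model of the miss path it describes, in the spirit of the patch rather than its actual API; every name here is made up for illustration:

    #include <stddef.h>
    #include <errno.h>

    /* Toy IOTLB: a device-side cache of IOVA -> userspace-VA mappings.
     * All names are hypothetical, not the vhost patch's API. */
    struct iotlb_entry {
        unsigned long long iova, uaddr, size;
    };

    #define CACHE_SLOTS 16
    static struct iotlb_entry cache[CACHE_SLOTS];

    /* On a hit, translate within the cached range. On a miss the real
     * code asks userspace (qemu) for the mapping and retries once an
     * update message installs the entry; here we just signal -EAGAIN. */
    static int translate(unsigned long long iova, unsigned long long *uaddr)
    {
        for (size_t i = 0; i < CACHE_SLOTS; i++) {
            struct iotlb_entry *e = &cache[i];
            if (e->size && iova >= e->iova && iova - e->iova < e->size) {
                *uaddr = e->uaddr + (iova - e->iova);
                return 0;
            }
        }
        return -EAGAIN; /* IOTLB miss: request userspace assistance */
    }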

2016 Jul 28 (2): [PATCH v2 repost 4/7] virtio-balloon: speed up inflate/deflate process
On Thu, Jul 28, 2016 at 01:13:35AM +0000, Li, Liang Z wrote: > > Subject: Re: [PATCH v2 repost 4/7] virtio-balloon: speed up inflate/deflate > > process > > > > On 07/26/2016 06:23 PM, Liang Li wrote: > > > + vb->pfn_limit = VIRTIO_BALLOON_PFNS_LIMIT; > > > + vb->pfn_limit = min(vb->pfn_limit, get_max_pfn()); > > > + vb->bmap_len =
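
The quoted fragment stops mid-expression, so the sizing below is only a reconstruction of the likely shape of the truncated bmap_len line (one bit per page frame up to pfn_limit, rounded up to whole longs), not the patch's exact code:

    /* Guess at the sizing logic behind the truncated bmap_len line. */
    #define LONG_BITS (8 * sizeof(unsigned long))

    static unsigned long bmap_len_for(unsigned long pfn_limit)
    {
        unsigned long nlongs = (pfn_limit + LONG_BITS - 1) / LONG_BITS;
        return nlongs * sizeof(unsigned long);   /* bytes to allocate */
    }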

2016 Apr 21 (4): [PATCH V2 RFC] fixup! virtio: convert to use DMA api
This adds a flag to enable/disable bypassing the IOMMU by virtio devices. This is on top of patch http://article.gmane.org/gmane.comp.emulators.qemu/403467 virtio: convert to use DMA api Tested with patchset http://article.gmane.org/gmane.linux.kernel.virtualization/27545 virtio-pci: iommu support (note: bit number has been kept at 34 intentionally to match posted guest code. a non-RFC version
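
On the guest side, a flag like this boils down to a feature-bit check before deciding whether to route buffers through the DMA API. A sketch under the RFC's stated semantics (bit 34 = device bypasses the IOMMU); the macro name is illustrative, and only virtio_has_feature() is the real kernel helper:

    #include <linux/virtio_config.h>

    #define VIRTIO_F_IOMMU_BYPASS 34  /* illustrative name; bit number from the RFC */

    /* If the device negotiated the bypass bit, it sees guest-physical
     * addresses directly, so the vring code can skip the DMA API. */
    static bool vring_use_dma_api(struct virtio_device *vdev)
    {
        return !virtio_has_feature(vdev, VIRTIO_F_IOMMU_BYPASS);
    }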

2016 Apr 21 (1): [PATCH V2 RFC] fixup! virtio: convert to use DMA api
On Thu, Apr 21, 2016 at 03:56:53PM +0100, Stefan Hajnoczi wrote: > On Thu, Apr 21, 2016 at 04:43:45PM +0300, Michael S. Tsirkin wrote: > > This adds a flag to enable/disable bypassing the IOMMU by > > virtio devices. > > > > This is on top of patch > > http://article.gmane.org/gmane.comp.emulators.qemu/403467 > > virtio: convert to use DMA api > >

2010 Sep 14 (1): [PATCH] vhost: max s/g to match qemu
Qemu supports up to UIO_MAXIOV s/g so we have to match that because guest drivers may rely on this. Allocate indirect and log arrays dynamically to avoid using too much contigious memory and make the length of hdr array to match the header length since each iovec entry has a least one byte. Test with copying large files w/ and w/o migration in both linux and windows guests. Signed-off-by: Jason
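
The change the log message describes, in outline: growing the per-virtqueue indirect and log arrays to UIO_MAXIOV entries makes them too large to embed in the structure, so they move to the heap. Type and field names below follow the description, not the actual patch:

    #include <linux/slab.h>
    #include <linux/uio.h>          /* UIO_MAXIOV */

    struct vhost_log_sketch { u64 addr; u64 len; };

    struct vq_sketch {
        struct iovec *indirect;     /* was a fixed in-struct array */
        struct vhost_log_sketch *log;
    };

    /* Allocate the UIO_MAXIOV-sized arrays dynamically instead of
     * embedding them, avoiding one huge contiguous allocation for
     * the whole virtqueue structure. */
    static int vq_alloc_arrays(struct vq_sketch *vq)
    {
        vq->indirect = kcalloc(UIO_MAXIOV, sizeof(*vq->indirect), GFP_KERNEL);
        vq->log = kcalloc(UIO_MAXIOV, sizeof(*vq->log), GFP_KERNEL);
        if (!vq->indirect || !vq->log) {
            kfree(vq->indirect);
            kfree(vq->log);
            return -ENOMEM;
        }
        return 0;
    }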

2016 Jul 28 (3): [virtio-dev] Re: [PATCH v2 repost 4/7] virtio-balloon: speed up inflate/deflate process
On Thu, Jul 28, 2016 at 06:36:18AM +0000, Li, Liang Z wrote: > > > > This ends up doing a 1MB kmalloc() right? That seems a _bit_ big. > > > > How big was the pfn buffer before? > > > > > > Yes, it is if the max pfn is more than 32GB. > > > The size of the pfn buffer use before is 256*4 = 1024 Bytes, it's too > > > small, and

2016 Jul 28 (0): [virtio-dev] Re: [PATCH v2 repost 4/7] virtio-balloon: speed up inflate/deflate process
> > > This ends up doing a 1MB kmalloc() right? That seems a _bit_ big. > > > How big was the pfn buffer before? > > > > Yes, it is if the max pfn is more than 32GB. > > The size of the pfn buffer use before is 256*4 = 1024 Bytes, it's too > > small, and it's the main reason for bad performance. > > Use the max 1MB kmalloc is a balance
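
For reference, the arithmetic both sides are invoking: with 4 KiB pages, 32 GiB of guest memory is 8M page frames, and at one bit per frame the bitmap is 8M bits = 1 MiB, hence the 1 MB kmalloc() being questioned. The old scheme sent pfns through a 256-entry array of 32-bit values, i.e. 256 * 4 = 1024 bytes per pass, which is why it needed so many round trips.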

2007 Nov 15 (0): [patch 14/19] xfs: eagerly remove vmap mappings to avoid upsetting Xen
-stable review patch. If anyone has any objections, please let us know. ------------------ From: Jeremy Fitzhardinge <jeremy@goop.org> patch ace2e92e193126711cb3a83a3752b2c5b8396950 in mainline. XFS leaves stray mappings around when it vmaps memory to make it virtually contigious. This upsets Xen if one of those pages is being recycled into a pagetable, since it finds an extra writable
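
For context on the mechanism (generic kernel API, not the XFS patch itself): vmap() is what makes scattered pages virtually contiguous, and lazily retained kernel mappings left after use are the stale aliases that upset Xen:

    #include <linux/vmalloc.h>
    #include <linux/mm.h>

    /* Map n physically scattered pages into one virtually contiguous
     * kernel range; this is the kind of mapping XFS creates for large
     * buffers. */
    static void *map_contig(struct page **pages, unsigned int n)
    {
        return vmap(pages, n, VM_MAP, PAGE_KERNEL);
    }

    static void unmap_contig(void *addr)
    {
        vunmap(addr);
        /* The patch's point: tear such mappings down eagerly, so no
         * stale writable alias survives when a page is later handed
         * to Xen as a pagetable (which must never be writable). */
    }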

2009 Oct 04 (1): INDIRECT and NEXT
Hi! I note that chaining INDIRECT descriptors with NEXT currently is broken in lguest, because current ring index gets overwritten. It does not matter with current virtio in guest, but I think it's worth fixing: for example for when we have trouble locating a request in a physically contigious region. Makes sense? -- MST
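
The situation being described, sketched with the standard legacy vring layout (the virtio 1.0 spec later forbade setting both flags together, which is part of why implementations diverged): an indirect descriptor that also chains onward, so the walker must preserve the ring index across the indirect table:

    #include <linux/virtio_ring.h>

    /* A top-level descriptor that is INDIRECT (addr points at a table
     * of further descriptors) and also chained via NEXT. The lguest
     * bug mentioned above: walking the indirect table overwrote the
     * current ring index, losing the ->next link in the main ring. */
    static void make_indirect_chained(struct vring_desc *ring, __u16 head,
                                      __u64 table, __u32 table_bytes,
                                      __u16 next_head)
    {
        ring[head].addr  = table;        /* guest-physical addr of table */
        ring[head].len   = table_bytes;  /* multiple of sizeof(vring_desc) */
        ring[head].flags = VRING_DESC_F_INDIRECT | VRING_DESC_F_NEXT;
        ring[head].next  = next_head;    /* must survive the table walk */
    }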