search for: balloning

Displaying 15 results from an estimated 113 matches for "balloning".

Did you mean: ballooning
2019 Jul 18
1
[PATCH v3 2/2] balloon: fix up comments
On Thursday, July 18, 2019 8:24 PM, Michael S. Tsirkin wrote:
> /*
>  * balloon_page_alloc - allocates a new page for insertion into the balloon
> - * page list.
> + * page list.
>  *
> - * Driver must call it to properly allocate a new enlisted balloon page.
> - * Driver must call balloon_page_enqueue before definitively removing it from
> - * the guest
2013 Nov 14
4
[PATCH] xen/arm: Allow ballooning to work with 1:1 memory mapping
With the lack of an IOMMU, dom0 must have a 1:1 memory mapping for all its guest physical addresses. When the balloon decides to give back a page to the kernel, this page must have the same address as before. Otherwise, we will lose the 1:1 mapping and break DMA-capable devices. Signed-off-by: Julien Grall <julien.grall@linaro.org> CC: Keir Fraser <keir@xen.org> CC: Jan Beulich
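A minimal sketch of the constraint being described, modeled on the increase_reservation() path in drivers/xen/balloon.c (not the patch from this thread): in a 1:1 dom0, gfn == mfn, so the balloon must ask the hypervisor for exactly the frame it released rather than any free frame.

#include <linux/mm.h>
#include <xen/interface/memory.h>
#include <asm/xen/hypercall.h>

/* Sketch only: hand back the exact frame so gfn == mfn still holds. */
static int reclaim_exact_frame(struct page *page)
{
        xen_pfn_t frame = page_to_pfn(page);    /* 1:1 mapping: gfn == mfn */
        struct xen_memory_reservation reservation = {
                .extent_order = 0,
                .domid        = DOMID_SELF,
        };

        set_xen_guest_handle(reservation.extent_start, &frame);
        reservation.nr_extents = 1;

        /* Returns the number of extents populated; 1 on success. */
        return HYPERVISOR_memory_op(XENMEM_populate_physmap, &reservation);
}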
2019 Jul 18
2
[PATCH v3 1/2] mm/balloon_compaction: avoid duplicate page removal
From: Wei Wang <wei.w.wang at intel.com> A #GP is reported in the guest when requesting balloon inflation via virtio-balloon. The reason is that the virtio-balloon driver has removed the page from its internal page list (via balloon_page_pop), but balloon_page_enqueue_one also calls "list_del" to do the removal. This is necessary when it's used from balloon_page_enqueue_list,
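A self-contained userspace illustration of why the duplicate removal faults (simplified from the kernel's list primitives, poison values as in include/linux/poison.h): the first list_del() poisons the node's links, so a second one dereferences a non-canonical poison pointer, which the guest reports as a #GP.

#include <stdio.h>

struct list_head { struct list_head *next, *prev; };

#define LIST_POISON1 ((struct list_head *)0xdead000000000100)
#define LIST_POISON2 ((struct list_head *)0xdead000000000122)

/* Simplified list_del(): unlink, then poison the node's pointers. */
static void list_del(struct list_head *entry)
{
        entry->next->prev = entry->prev;   /* 2nd call: poison dereference */
        entry->prev->next = entry->next;
        entry->next = LIST_POISON1;
        entry->prev = LIST_POISON2;
}

int main(void)
{
        struct list_head head = { &head, &head };
        struct list_head page = { &head, &head };

        head.next = head.prev = &page;

        list_del(&page);        /* fine: page unlinked, links poisoned */
        /* list_del(&page); */  /* the duplicate removal: faults on poison */
        printf("next=%p prev=%p\n", (void *)page.next, (void *)page.prev);
        return 0;
}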
2007 Jan 17
18
mongrel memory usage ballooning and process stomping
Hi, I have mongrel 0.3.14, with ruby 1.8.5, rails 1.1.6 and mongrel cluster 0.2.1, on Debian Sarge 3.1 with apache 2.0, and fastthread 0.6.1. I am load balancing 3 mongrel processes using the random port trick. When I start mongrel the processes use about 60MB, but after some hours of usage the memory balloons up to more than 180MB and the site becomes terribly slow, forcing me to restart mongrel
2013 Jul 22
2
Xen kernel fixes
Hi, I have made the for-centos-v5 branch available on git://xenbits.xen.org/people/dvrabel/linux.git This is based on 3.4.54 and includes the following additional fixes (since the for-centos-v4 branch).

* x86/xen: during early setup, only 1:1 map the ISA region
  Fixes a boot failure if tboot is used.
* xen/evtchn: avoid a deadlock when unbinding an event channel
  Fixes a potential deadlock by
2016 Mar 04
2
[Qemu-devel] [RFC qemu 0/4] A PV solution for live migration optimization
> On Fri, Mar 04, 2016 at 01:52:53AM +0000, Li, Liang Z wrote: > > > I wonder if it would be possible to avoid the kernel changes by > > > parsing /proc/self/pagemap - if that can be used to detect > > > unmapped/zero mapped pages in the guest ram, would it achieve the > same result? > > > > Only detecting the unmapped/zero-mapped pages is not enough.
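For reference, a minimal sketch of the pagemap-parsing idea: each 8-byte entry in /proc/self/pagemap describes one virtual page, with bit 63 meaning "present in RAM" and bit 62 "swapped". As the reply says, this alone is not enough: a mapped page full of zeroes still reads as present here.

#include <fcntl.h>
#include <stdint.h>
#include <unistd.h>

/* Return 1 if the page containing vaddr is resident, 0 if not, -1 on error. */
static int page_is_resident(uintptr_t vaddr)
{
        long page_size = sysconf(_SC_PAGESIZE);
        uint64_t entry = 0;
        int fd = open("/proc/self/pagemap", O_RDONLY);

        if (fd < 0)
                return -1;
        /* One 64-bit entry per virtual page, indexed by page number. */
        if (pread(fd, &entry, sizeof(entry),
                  (off_t)(vaddr / page_size) * sizeof(entry)) != sizeof(entry)) {
                close(fd);
                return -1;
        }
        close(fd);
        return (int)((entry >> 63) & 1);   /* bit 63: page present */
}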
2020 Feb 11
2
problems with understanding of the memory parameters in the xml file
Hi guys, despite reading hours and hours on the internet I'm still struggling with "memory", "currentMemory" and "maxMemory". Maybe you can help me sort it out. My idea is that a guest has an initial amount of memory (which "memory" seems to be) when booting. We have some Windows 10 guests which calculate some stuff and I would like to increase
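For what it's worth, the three elements in a libvirt domain XML relate roughly like this (values illustrative): <memory> is the boot-time allocation, <currentMemory> is the balloon target the guest actually has at the moment, and <maxMemory> is the runtime ceiling reachable only via memory hotplug (it requires the slots attribute and a NUMA topology).

<domain type='kvm'>
  ...
  <maxMemory slots='4' unit='GiB'>16</maxMemory>   <!-- hotplug ceiling -->
  <memory unit='GiB'>8</memory>                    <!-- boot-time allocation -->
  <currentMemory unit='GiB'>4</currentMemory>      <!-- balloon target -->
  ...
</domain>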
2019 Dec 04
5
[PATCH] virtio-balloon: fix managed page counts when migrating pages between zones
In case we have to migrate a balloon page to a new page of another zone, the managed page counts of both zones are wrong. Paired with memory offlining (which will adjust the managed page count), we can trigger kernel crashes and all kinds of different symptoms. One way to reproduce:
1. Start a QEMU guest with 4GB, no NUMA
2. Hotplug a 1GB DIMM and online the memory to ZONE_NORMAL
3. Inflate the balloon
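The idea behind the fix, sketched from the description above (a fragment for the balloon driver's migratepage callback, not necessarily the exact patch): inflation subtracted the page from its zone's managed count, so a cross-zone migration has to credit the old zone and debit the new one.

/* Sketch: fixup when the ballooned page moves to a different zone. */
if (page_zone(page) != page_zone(newpage)) {
        adjust_managed_page_count(page, 1);      /* old zone gets its page back */
        adjust_managed_page_count(newpage, -1);  /* new zone now holds ballooned page */
}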
2006 Dec 12
1
[PATCH] Fix e820 mapping limit
The changeset '12803:df5fa63490f4da7b65c56087a68783dbcb7944f8' added a new hypercall XENMEM_set_memory_map for specifying the e820 mapping. There was a small bug in the userspace side of this change though - the e820 mapping was specified based on the 'memory_static_min' domain info parameter which means the memory map size is clamped to the value
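A hypothetical two-line illustration of the class of bug described (field names are illustrative; the real code lived in the xend userspace tools): sizing the e820 map from the domain's minimum reservation clamps it, so it should come from the maximum instead.

/* Illustrative only -- not the actual tools code. */
uint64_t e820_limit_wrong = dominfo.memory_static_min;  /* clamps the map */
uint64_t e820_limit_fixed = dominfo.memory_static_max;  /* covers all of RAM */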
2011 Mar 11
2
[GIT PULL stable-2.6.32.x] PV on HVM fixes
Hi Jeremy, I backported the branch I have in linux-next plus some older PV on HVM fixes to stable-2.6.32.x. Please pull: git://xenbits.xen.org/people/sstabellini/linux-pvhvm.git stable-2.6.32-pvhvm

Ian Campbell (11):
  xen: do not respond to unknown xenstore control requests
  xen: use new schedop interface for suspend
  xen: switch to new schedop hypercall by default.
  xen:
2016 Mar 04
2
[RFC qemu 0/4] A PV solution for live migration optimization
> Subject: Re: [RFC qemu 0/4] A PV solution for live migration optimization > > * Liang Li (liang.z.li at intel.com) wrote: > > The current QEMU live migration implementation marks all the > > guest's RAM pages as dirtied in the ram bulk stage; all these pages > > will be processed and that takes quite a lot of CPU cycles. > > > > From guest's
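A hypothetical sketch of the proposed optimization (names are illustrative, not QEMU's actual API): before the bulk copy, clear the dirty bits for pages the guest reported as free, so the migration loop neither reads nor sends them.

#include <limits.h>
#include <stdint.h>

#define BITS_PER_LONG (sizeof(unsigned long) * CHAR_BIT)

static inline int test_bit_ul(const unsigned long *map, uint64_t bit)
{
        return (int)((map[bit / BITS_PER_LONG] >> (bit % BITS_PER_LONG)) & 1UL);
}

static inline void clear_bit_ul(unsigned long *map, uint64_t bit)
{
        map[bit / BITS_PER_LONG] &= ~(1UL << (bit % BITS_PER_LONG));
}

/* Drop guest-reported free pages from the migration dirty bitmap. */
static void filter_free_pages(unsigned long *dirty_bitmap,
                              const unsigned long *guest_free_bitmap,
                              uint64_t nr_pages)
{
        for (uint64_t pfn = 0; pfn < nr_pages; pfn++)
                if (test_bit_ul(guest_free_bitmap, pfn))
                        clear_bit_ul(dirty_bitmap, pfn);
}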
2010 Jul 14
2
2.6.32.16 - pv_ops kernel compile error
Hi, Ubuntu 10.04 x64, installing Xen 4.0-testing: when compiling kernel 2.6.32.16 from Jeremy's git I get this error:

WARNING: modpost: Found 7 section mismatch(es).
To see full details build your kernel with:
'make CONFIG_DEBUG_SECTION_MISMATCH=y'
  GEN     .version
  CHK     include/linux/compile.h
  UPD     include/linux/compile.h
  CC      init/version.o
  LD
2019 Jul 18
0
[PATCH v3 2/2] balloon: fix up comments
Lots of comments bitrotted. Fix them up.

Fixes: 418a3ab1e778 (mm/balloon_compaction: List interfaces)
Signed-off-by: Michael S. Tsirkin <mst at redhat.com>
---
 mm/balloon_compaction.c | 73 +++++++++++++++++++++++------------------
 1 file changed, 41 insertions(+), 32 deletions(-)

diff --git a/mm/balloon_compaction.c b/mm/balloon_compaction.c
index d25664e1857b..9cb03da5bcea 100644
---
2016 Mar 04
5
[Qemu-devel] [RFC qemu 0/4] A PV solution for live migration optimization
> Subject: Re: [Qemu-devel] [RFC qemu 0/4] A PV solution for live migration > optimization > > On Fri, Mar 04, 2016 at 09:08:44AM +0000, Li, Liang Z wrote: > > > On Fri, Mar 04, 2016 at 01:52:53AM +0000, Li, Liang Z wrote: > > > > > I wonder if it would be possible to avoid the kernel changes > > > > > by parsing /proc/self/pagemap - if that
2019 Dec 05
2
[PATCH v2] virtio-balloon: fix managed page counts when migrating pages between zones
In case we have to migrate a balloon page to a new page of another zone, the managed page counts of both zones are wrong. Paired with memory offlining (which will adjust the managed page count), we can trigger kernel crashes and all kinds of different symptoms. One way to reproduce:
1. Start a QEMU guest with 4GB, no NUMA
2. Hotplug a 1GB DIMM and online the memory to ZONE_NORMAL
3. Inflate the balloon