search for: ballon

Displaying 15 results from an estimated 113 matches for "ballon".

Did you mean: balloon
2019 Jul 18
1
[PATCH v3 2/2] balloon: fix up comments
...> spin_lock_irqsave(&b_dev_info->pages_lock, flags); > @@ -230,8 +239,8 @@ int balloon_page_migrate(struct address_space *mapping, > > /* > * We can not easily support the no copy case here so ignore it as it ["cannot"] > - * is unlikely to be use with ballon pages. See include/linux/hmm.h for > - * user of the MIGRATE_SYNC_NO_COPY mode. > + * is unlikely to be used with ballon pages. See include/linux/hmm.h for ["ballon" -> "balloon"] > + * a user of the MIGRATE_SYNC_NO_COPY mode. ["for the usage of"...]
2013 Nov 14
4
[PATCH] xen/arm: Allow balooning working with 1:1 memory mapping
With the lack of an iommu, dom0 must have a 1:1 memory mapping for all its guest physical addresses. When the ballon decides to give back a page to the kernel, this page must have the same address as before. Otherwise, we will lose the 1:1 mapping and break DMA-capable devices. Signed-off-by: Julien Grall <julien.grall@linaro.org> CC: Keir Fraser <keir@xen.org> CC: Jan Beulich <jbeulich@s...
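A minimal userspace sketch of the invariant that commit message describes, assuming a direct-mapped dom0 where gpfn == mfn; the lookup table, frame numbers, and helper names are invented for illustration:

#include <assert.h>
#include <stdio.h>

/* Direct-mapped (1:1) dom0: guest frame number == machine frame number. */
static unsigned long gpfn_to_mfn[8] = {0, 1, 2, 3, 4, 5, 6, 7};
#define INVALID_MFN (~0UL)

static void balloon_out(unsigned long gpfn)
{
    gpfn_to_mfn[gpfn] = INVALID_MFN;   /* frame handed back to Xen */
}

static void balloon_in(unsigned long gpfn, unsigned long mfn)
{
    gpfn_to_mfn[gpfn] = mfn;
    /* the invariant the patch preserves: the same frame must come back,
     * or a device doing DMA to machine address gpfn misses the page */
    assert(gpfn_to_mfn[gpfn] == gpfn);
}

int main(void)
{
    balloon_out(5);
    balloon_in(5, 5);   /* ok: 1:1 mapping intact */
    printf("1:1 mapping preserved\n");
    balloon_in(5, 6);   /* any other frame trips the assert: DMA would break */
    return 0;
}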
2019 Jul 18
2
[PATCH v3 1/2] mm/balloon_compaction: avoid duplicate page removal
From: Wei Wang <wei.w.wang at intel.com> A #GP is reported in the guest when requesting balloon inflation via virtio-balloon. The reason is that the virtio-balloon driver has removed the page from its internal page list (via balloon_page_pop), but balloon_page_enqueue_one also calls "list_del" to do the removal. This is necessary when it's used from balloon_page_enqueue_list,
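A minimal userspace sketch of why the double removal faults, with list_del() reduced to what matters here; the poison values mimic the kernel's LIST_POISON pointers (simplified and made up), and the second list_del() dereferences them, which is what surfaces as the #GP:

#include <stdio.h>

struct node {
    struct node *next, *prev;
};

/* stand-ins for the kernel's LIST_POISON1/2: invalid, non-NULL pointers */
#define POISON1 ((struct node *)0x100)
#define POISON2 ((struct node *)0x122)

static void list_del(struct node *n)
{
    n->prev->next = n->next;   /* faults if n->prev is already poison */
    n->next->prev = n->prev;
    n->next = POISON1;         /* poison so stale users crash loudly */
    n->prev = POISON2;
}

int main(void)
{
    struct node head = { &head, &head }, a;

    /* insert a after head */
    a.next = head.next; a.prev = &head;
    head.next->prev = &a; head.next = &a;

    list_del(&a);   /* first removal (balloon_page_pop in the report): fine */
    printf("first list_del ok\n");
    list_del(&a);   /* second removal (the extra list_del): writes through poison */
    return 0;
}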
2007 Jan 17
18
mongrel memory usage ballooning and process stomping
...I have mongrel 0.3.14, with ruby 1.8.5, rails 1.1.6 and mongrel cluster 0.2.1, on debian sarge 3.1 with apache 2.0, and fastthread 0.6.1. I am load balancing 3 mongrel processes using the random port trick. When I start mongrel the processes have about 60MB, but after some hours of usage the memory ballons up to more than 180MB and the site becomes terribly slow, forcing me to restart mongrel cluster. Also, it reports 9 mongrel processes instead of three. I am not able to understand why that's happening. Are 9 mongrel processes really started instead of three? Please help. Thanks. ssin...
2013 Jul 22
2
Xen kernel fixes
Hi, I have made the for-centos-v5 branch available on git://xenbits.xen.org/people/dvrabel/linux.git This is based on 3.4.54 and includes the following additional fixes (since the for-centos-v4 branch). * x86/xen: during early setup, only 1:1 map the ISA region Fixes a boot failure if tboot is used. * xen/evtchn: avoid a deadlock when unbinding an event channel Fixes a potential deadlock by
2016 Mar 04
2
[Qemu-devel] [RFC qemu 0/4] A PV solution for live migration optimization
...to think you can safely assume there's no free memory in the guest, so > there's little point optimizing for it. If this is true, we should not inflate the balloon either. > OTOH it makes perfect sense optimizing for the unmapped memory that's > made up, in particular, by the ballon, and consider inflating the balloon right > before migration unless you already maintain it at the optimal size for other > reasons (like e.g. a global resource manager optimizing the VM density). > Yes, I believe the current balloon works and it's simple. Do you take the performance...
2020 Feb 11
2
problems with understanding of the memory parameters in the xml file
Hi guys, despite reading for hours and hours on the internet I'm still struggling with "memory", "currentmemory" and "maxMemory". Maybe you can help me sort it out. My idea is that a guest has an initial amount of memory (which "memory" seems to be) when booting. We have some Windows 10 guests which calculate some stuff and I would like to increase
2019 Dec 04
5
[PATCH] virtio-balloon: fix managed page counts when migrating pages between zones
In case we have to migrate a ballon page to a newpage of another zone, the managed page count of both zones is wrong. Paired with memory offlining (which will adjust the managed page count), we can trigger kernel crashes and all kinds of different symptoms. One way to reproduce: 1. Start a QEMU guest with 4GB, no NUMA 2. Hotplug a 1...
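A toy model of the accounting above (zone sizes are invented; the real driver adjusts counts via adjust_managed_page_count()): inflating hides a page from its zone's managed count, so when migration moves a balloon page to another zone, the old zone must be credited and the new zone debited, otherwise both counts stay permanently skewed:

#include <stdio.h>

int main(void)
{
    long managed[2] = {1000, 1000};   /* managed page counts: zone 0, zone 1 */

    managed[0] -= 1;                  /* inflate: balloon page hidden in zone 0 */

    /* buggy migration to zone 1: counters untouched, both zones now wrong */
    printf("buggy:   zone0=%ld zone1=%ld\n", managed[0], managed[1]);

    /* fixed migration: credit the old zone, debit the new one */
    managed[0] += 1;                  /* old page goes back to zone 0 */
    managed[1] -= 1;                  /* newpage is now hidden in zone 1 */
    printf("fixed:   zone0=%ld zone1=%ld\n", managed[0], managed[1]);

    managed[1] += 1;                  /* deflate in zone 1 */
    printf("deflate: zone0=%ld zone1=%ld\n", managed[0], managed[1]);
    return 0;
}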
2006 Dec 12
1
[PATCH] Fix e820 mapping limit
...ased on 'maxmem'. Booting with this patch I can verify that the guest kernel sees an 800 MB e820 map, so this fixes the initial problem. There appears to be a second issue somewhere in the stack though, because even if I do 'xm mem-set <domid> 600', while the ballon driver in the guest will see the 600 MB target, it will still never try to allocate its increased target. eg # grep mem /etc/xen/demo memory = 410 maxmem = 800 # xm create demo In the guest 'dmesg' shows: BIOS-provided physical RAM map: Xen: 0000000000000000 -...
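A toy of the original symptom (before the e820 fix), assuming the driver simply cannot populate frames beyond the boot-time map; all numbers are made up to match the report:

#include <stdio.h>

int main(void)
{
    long boot_e820_mb = 410;   /* map built from 'memory=' instead of 'maxmem=' */
    long target_mb    = 600;   /* raised later via 'xm mem-set' */
    long current_mb   = 410;

    printf("target %ld MB, current %ld MB\n", target_mb, current_mb);

    /* the driver sees the new target but can only use frames the
     * boot-time e820 map exposed, so it never grows past it */
    while (current_mb < target_mb && current_mb < boot_e820_mb)
        current_mb++;

    printf("reached %ld MB (stuck at the boot e820 limit)\n", current_mb);
    return 0;
}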
2011 Mar 11
2
[GIT PULL stable-2.6.32.x] PV on HVM fixes
...not initialize PV timers on HVM if !xen_have_vector_callback xen: no need to delay xen_setup_shutdown_event for hvm guests anymore xen: do not use xen_info on HVM, set pv_info name to "Xen HVM" xen-blkfront: handle Xen major numbers other than XENVBD xen: make the ballon driver work for hvm domains xen: PV on HVM: support PV spinlocks and IPIs xen: fix compile issue if XEN is enabled but XEN_PVHVM is disabled arch/ia64/xen/suspend.c | 9 +-- arch/x86/include/asm/xen/hypercall.h | 15 +++- arch/x86/xen/enlighten.c | 6 +...
2016 Mar 04
2
[RFC qemu 0/4] A PV solution for live migration optimization
> Subject: Re: [RFC qemu 0/4] A PV solution for live migration optimization > > * Liang Li (liang.z.li at intel.com) wrote: > > The current QEMU live migration implementation marks all the > > guest's RAM pages as dirty in the RAM bulk stage; all these pages > > will be processed, and that takes quite a lot of CPU cycles. > > > > From guest's
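A hedged sketch of the optimization being proposed: in the bulk stage every page starts out dirty, and if the guest reports which of its pages are free, migration can clear those bits and skip them. The bitmap layout and the guest report are invented for illustration:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define NPAGES 64

int main(void)
{
    uint8_t dirty[NPAGES / 8];
    memset(dirty, 0xff, sizeof(dirty));           /* bulk stage: all pages dirty */

    /* made-up list of page numbers the guest reported as free */
    unsigned guest_free[] = {3, 10, 11, 12, 40};
    for (size_t i = 0; i < sizeof(guest_free) / sizeof(*guest_free); i++) {
        unsigned p = guest_free[i];
        dirty[p / 8] &= (uint8_t)~(1u << (p % 8)); /* skip: no need to send */
    }

    unsigned to_send = 0;
    for (unsigned p = 0; p < NPAGES; p++)
        if (dirty[p / 8] & (1u << (p % 8)))
            to_send++;
    printf("pages to send: %u of %u\n", to_send, NPAGES);
    return 0;
}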
2010 Jul 14
2
2.6.32.16 - pv_ops kernel compile error
Hi, Ubuntu 10.04 x64, installing xen 4.0 testing; when compiling kernel 2.6.32.16 from Jeremy's git I get this error: WARNING: modpost: Found 7 section mismatch(es). To see full details build your kernel with: 'make CONFIG_DEBUG_SECTION_MISMATCH=y' GEN .version CHK include/linux/compile.h UPD include/linux/compile.h CC init/version.o LD
2019 Jul 18
0
[PATCH v3 2/2] balloon: fix up comments
...p while attempting to release all its pages. */ spin_lock_irqsave(&b_dev_info->pages_lock, flags); @@ -230,8 +239,8 @@ int balloon_page_migrate(struct address_space *mapping, /* * We can not easily support the no copy case here so ignore it as it - * is unlikely to be use with ballon pages. See include/linux/hmm.h for - * user of the MIGRATE_SYNC_NO_COPY mode. + * is unlikely to be used with ballon pages. See include/linux/hmm.h for + * a user of the MIGRATE_SYNC_NO_COPY mode. */ if (mode == MIGRATE_SYNC_NO_COPY) return -EINVAL; -- MST
2016 Mar 04
5
[Qemu-devel] [RFC qemu 0/4] A PV solution for live migration optimization
...mory, i.e. not free but cheap to > reclaim. > What do you mean by "available" memory? If they are not free, I don't think it's cheap. > > > OTOH it makes perfect sense optimizing for the unmapped memory > > > that's made up, in particular, by the ballon, and consider inflating > > > the balloon right before migration unless you already maintain it at > > > the optimal size for other reasons (like e.g. a global resource manager > optimizing the VM density). > > > > > > > Yes, I believe the current balloon w...
2019 Dec 05
2
[PATCH v2] virtio-balloon: fix managed page counts when migrating pages between zones
In case we have to migrate a ballon page to a newpage of another zone, the managed page count of both zones is wrong. Paired with memory offlining (which will adjust the managed page count), we can trigger kernel crashes and all kinds of different symptoms. One way to reproduce: 1. Start a QEMU guest with 4GB, no NUMA 2. Hotplug a 1...