Are there any known reasons why ballooning might not work under Xen 3.1.x for a 32-bit Windows DomU? I've implemented ballooning in GPLPV and it works under Windows 2008 x32 in my testing, but one user is reporting problems. I don't know much about the details yet, but if it just doesn't work under 3.1.x then upgrading is the only option.

I'm using XENMEM_decrease_reservation to give pages back to Xen, and XENMEM_populate_physmap to fill the 'holes' back in with real memory when the domU wants to balloon up again.

Thanks

James

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel
On 23/05/2010 10:11, "James Harper" <james.harper@bendigoit.com.au> wrote:

> Are there any known reasons why ballooning might not work under Xen
> 3.1.x for a 32 bit Windows DomU?

32-bit HVM guest memory ballooning on 64-bit Xen is not supported until Xen 3.3.x. Earlier versions of Xen should be giving a useful warning on Xen's logging console.

 -- Keir
Aravindh Puthiyaparambil
2010-May-23 18:23 UTC
RE: [Xen-devel] GPLPV memory ballooning and x32
> 32-bit HVM guest memory ballooning on 64-bit Xen is not supported
> until Xen 3.3.x. Earlier versions of Xen should be giving a useful
> warning on Xen's logging console.

Keir,

I am running on CentOS 5.4:

xen_major : 3
xen_minor : 1
xen_extra : .2-194.3.1.el5

I have memory=8G and maxmem=8G in my HVM config file and I am trying to balloon up and down between 1G and 8G. This works with x64 Windows. With x32 I see the change only within Windows (i.e. free memory in Task Manager); xentop does not reflect the change. BTW, this is with the latest PV driver from James.

What is the warning message that I should be seeing on the console? I am not seeing anything on the console or in xend.log.

Thanks,
Aravindh
On 23/05/2010 19:23, "Aravindh Puthiyaparambil" <aravindh@gogrid.com> wrote:

> I am running on CentOS 5.4
> xen_major : 3
> xen_minor : 1
> xen_extra : .2-194.3.1.el5
>
> I have memory=8G and maxmem=8G in my HVM config file and I am trying to
> balloon up and down between 1G and 8G. This works with x64 Windows.
> With x32 I see the change only within Windows, i.e. free memory in Task
> Manager. Xentop does not reflect the change. BTW, this is with the
> latest PV driver from James.
>
> What is the warning message that I should be seeing on the console? I
> am not seeing anything on the console or in xend.log.

You should see lines saying things like "memory_op 1" and "memory_op 6" on the Xen console (e.g., via 'xm dmesg'), so the warnings are not actually very helpful. But anyway, the warnings are somewhat beside the point: 32-bit HVM ballooning will absolutely definitely not work on 64-bit Xen 3.1. It's not going to be fixed. You need at least Xen 3.3 for that functionality.

 -- Keir
> Keir,
>
> I have memory=8G and maxmem=8G in my HVM config file and I am trying to
> balloon up and down between 1G and 8G. This works with x64 Windows.
> With x32 I see the change only within Windows, i.e. free memory in Task
> Manager. Xentop does not reflect the change. BTW, this is with the
> latest PV driver from James.

GPLPV will allocate the memory from Windows and try to give it to Xen, but as Keir has said, that doesn't work in your configuration. GPLPV doesn't detect this, though, so the memory is now basically leaked.

Additionally, I assume that you are creating an 8GB domain and ballooning down to 1GB as a test. I doubt it's a good idea to balloon that much in a production system. Windows sizes various parts of the system based on the amount of physical memory, and may behave badly if you tinker with that too much.

James
On Sun, May 23, 2010 at 08:55:53PM +0100, Keir Fraser wrote:

> You should see lines saying things like "memory_op 1" and "memory_op 6"
> on the Xen console (e.g., via 'xm dmesg'). So actually not very helpful
> warning messages. But anyway, the warnings are somewhat beside the
> point: 32-bit HVM ballooning will absolutely definitely not work on
> 64-bit Xen 3.1. It's not going to be fixed. You need at least Xen 3.3
> for that functionality.

Then again, Red Hat has many backports from newer Xen releases in their 3.1.2 version. I don't know if this feature is backported; probably not.

-- Pasi
Aravindh Puthiyaparambil
2010-May-24 06:19 UTC
RE: [Xen-devel] GPLPV memory ballooning and x32
Yes, I was only ballooning it down to 1G as a test. I wouldn't be doing that on a production system. I am now planning to try the GPLPV driver with Xen 3.4 / 4.0.

Thanks,
Aravindh

________________________________________
From: James Harper [james.harper@bendigoit.com.au]
Sent: Sunday, May 23, 2010 5:33 PM
To: Aravindh Puthiyaparambil; Keir Fraser; xen-devel@lists.xensource.com
Subject: RE: [Xen-devel] GPLPV memory ballooning and x32

> GPLPV will allocate the memory from Windows and try to give it to Xen,
> but as Keir has said, that doesn't work in your configuration. GPLPV
> doesn't detect this, though, so the memory is now basically leaked.
>
> Additionally, I assume that you are creating an 8GB domain and
> ballooning down to 1GB as a test. I doubt it's a good idea to balloon
> that much in a production system. Windows sizes various parts of the
> system based on the amount of physical memory, and may behave badly if
> you tinker with that too much.
>
> James
On 24/05/2010 07:03, "Pasi Kärkkäinen" <pasik@iki.fi> wrote:

>> You should see lines saying things like "memory_op 1" and "memory_op 6"
>> on the Xen console (e.g., via 'xm dmesg'). So actually not very helpful
>> warning messages. But anyway, the warnings are somewhat beside the
>> point: 32-bit HVM ballooning will absolutely definitely not work on
>> 64-bit Xen 3.1. It's not going to be fixed. You need at least Xen 3.3
>> for that functionality.
>
> .. then again Redhat has many backports from newer Xen releases in
> their 3.1.2 version.. I don't know if this feature is backported..
> probably not.

I think we have evidence here that they did not.

 -- Keir
On Sun, May 23, 2010 at 11:19:43PM -0700, Aravindh Puthiyaparambil wrote:

> Yes, I was only ballooning it down to 1G as a test. I wouldn't be doing
> that on a production system. I am now planning to try the GPLPV driver
> with Xen 3.4 / 4.0.

Btw, there are RPMs at http://gitco.de/repo/ for EL5.

-- Pasi
Aravindh Puthiyaparambil
2010-May-24 22:23 UTC
RE: [Xen-devel] GPLPV memory ballooning and x32
> Btw, there are RPMs at http://gitco.de/repo/ for EL5.

I tried the Xen 3.4.2 from the Gitco repo. I am unable to bring up any domain if I specify the maxmem option to be greater than memory. The respective qemu-dm processes for the domains are at 95-100% CPU utilization. I tried this with x64 Linux and Windows domains. The Linux CentOS domains stayed at the "Booting 'CentOS'" screen. The Windows domain died with a GPF that was displayed in the VNC window. I have attached the screenshots. I did not find anything of note in xend.log. The qemu logs are shown below. The last line in "xm dmesg" is:

(XEN) io.c:199:d5 MMIO emulation failed @ 0008:4013c8: 90 a6 9f 2d 08 83

Any idea why this is occurring?

Thanks,
Aravindh

CentOS-x64
----------
domid: 4
qemu: the number of cpus is 1
config qemu network with xen bridge for tap4.0 xenbr0
Watching /local/domain/0/device-model/4/logdirty/next-active
Watching /local/domain/0/device-model/4/command
char device redirected to /dev/pts/3
qemu_map_cache_init nr_buckets = 10000 size 4194304
shared page at pfn feffd
buffered io page at pfn feffb
Guest uuid = a2ae0d0e-380c-d3a1-bec4-01980c986d66
Time offset set 0
populating video RAM at ff000000
mapping video RAM from ff000000
Register xen platform.
Done register platform.
xs_read(/vm/a2ae0d0e-380c-d3a1-bec4-01980c986d66/log-throttling): read error
platform_fixed_ioport: changed ro/rw state of ROM memory area. now is rw state.
xs_read(/local/domain/0/device-model/4/xen_extended_power_mgmt): read error
xs_read(): vncpasswd get error. /vm/a2ae0d0e-380c-d3a1-bec4-01980c986d66/vncpasswd.
medium change watch on `hdc' (index: 1): /home/gold/isos/CentOS-5.3-x86_64-bin-DVD_ks_floppy.iso
I/O request not ready: 0, ptr: 0, port: 0, data: 0, count: 0, size: 0
cirrus vga map change while on lfb mode
mapping vram to f0000000 - f0400000
platform_fixed_ioport: changed ro/rw state of ROM memory area. now is rw state.
platform_fixed_ioport: changed ro/rw state of ROM memory area. now is ro state.

W2k3-x64
--------
domid: 5
qemu: the number of cpus is 2
config qemu network with xen bridge for tap5.0 xenbr0
Watching /local/domain/0/device-model/5/logdirty/next-active
Watching /local/domain/0/device-model/5/command
qemu_map_cache_init nr_buckets = 10000 size 4194304
shared page at pfn feffd
buffered io page at pfn feffb
Guest uuid = 41de22fc-a638-4d34-5237-1a111d81e263
Time offset set 0
populating video RAM at ff000000
mapping video RAM from ff000000
Register xen platform.
Done register platform.
xs_read(/vm/41de22fc-a638-4d34-5237-1a111d81e263/log-throttling): read error
platform_fixed_ioport: changed ro/rw state of ROM memory area. now is rw state.
xs_read(/local/domain/0/device-model/5/xen_extended_power_mgmt): read error
xs_read(): vncpasswd get error. /vm/41de22fc-a638-4d34-5237-1a111d81e263/vncpasswd.
medium change watch on `hdc' (index: 1): /home/gold/isos/WindowsPVJH.iso
I/O request not ready: 0, ptr: 0, port: 0, data: 0, count: 0, size: 0
I/O request not ready: 0, ptr: 0, port: 0, data: 0, count: 0, size: 0
cirrus vga map change while on lfb mode
mapping vram to f0000000 - f0400000
platform_fixed_ioport: changed ro/rw state of ROM memory area. now is rw state.
platform_fixed_ioport: changed ro/rw state of ROM memory area. now is ro state.
On 24/05/2010 23:23, "Aravindh Puthiyaparambil" <aravindh@gogrid.com> wrote:

> I tried the Xen 3.4.2 from the Gitco repo. I am unable to bring up any
> domain if I specify the maxmem option to be greater than memory. The
> respective qemu-dm processes for the domains are at 95-100% CPU
> utilization. I tried this with x64 Linux and Windows domains. The
> Linux CentOS domains stayed at the "Booting 'CentOS'" screen. The
> Windows domain died with a GPF that was displayed in the VNC window. I
> have attached the screenshots. I did not find anything of note in
> xend.log. The qemu logs are shown below. The last line in "xm dmesg" is:
>
> (XEN) io.c:199:d5 MMIO emulation failed @ 0008:4013c8: 90 a6 9f 2d 08 83
>
> Any idea why this is occurring?

Perhaps a bug in populate-on-demand, which I guess is what gets enabled when you specify the maxmem parameter for an HVM domain. The domain gets allocated its basic memory parameter initially, and extra memory gets allocated when the HVM guest first writes to it, up to the maxmem limit. Or that's the intent, anyway.

This is not a regression from 3.1, presumably (3.1 does not implement populate-on-demand at all)?

 -- Keir
> Perhaps a bug in populate-on-demand, which I guess is what gets enabled
> when you specify the maxmem parameter for an HVM domain. It gets
> allocated its basic memory parameter initially, and extra memory gets
> allocated when the HVM guest first writes to it, up to the maxmem
> limit. Or that's the intent anyway.
>
> This is not a regression from 3.1 presumably (3.1 does not implement
> populate-on-demand at all)?

On a similar subject, is it now possible to start an HVM domain in a 'ballooned down' state (via PoD perhaps) and then have PV drivers detect the 'unpopulated' pages and turn them into ballooned pages?

For that to work, I would need to be able to do the following:
. detect the unpopulated PoD pages via some hypercall(s)
. allocate specific pages in Windows (MmAllocatePagesForMdl has Low and High address parameters which suggest this sort of ability...)
. make sure Windows doesn't touch those pages when I allocate them (I guess it doesn't anyway, but I can't look at the source to check...)
. change the pages from PoD to 'empty' via some hypercall(s) - or maybe this isn't necessary... I can just allocate them to balloon down, and then 'touch' each page (to make Xen populate it) then free them, as long as I remember which pages are PoD and which are 'empty'

James
On 25/05/2010 08:18, "James Harper" <james.harper@bendigoit.com.au> wrote:

> On a similar subject, is it now possible to start an HVM domain in a
> 'ballooned down' state (via PoD perhaps) and then have PV drivers
> detect the 'unpopulated' pages and turn them into ballooned pages?

Yeah, this is all implemented in the Citrix drivers. Someone involved in that may be able to help.

 -- Keir
>>> On 25.05.10 at 09:18, "James Harper" <james.harper@bendigoit.com.au> wrote:

> On a similar subject, is it now possible to start an HVM domain in a
> 'ballooned down' state (via PoD perhaps) and then have PV drivers
> detect the 'unpopulated' pages and turn them into ballooned pages?

Yes, that's (afaik) the sole purpose of PoD. And having a functional balloon driver in the guest is a requirement then.

> For that to work, I would need to be able to do the following:
> . detect the unpopulated PoD pages via some hypercall(s)

See linux-2.6.18's c/s 989 and 1011. I'd be curious if you can come up with a better mechanism. Unfortunately, XENMEM_get_pod_target can (so far) only be called from Dom0, otherwise that might be helpful too.

> . allocate specific pages in Windows (MmAllocatePagesForMdl has Low and
>   High address parameters which suggest this sort of ability...)
> . make sure Windows doesn't touch those pages when I allocate them (I
>   guess it doesn't anyway, but I can't look at the source to check...)
> . change the pages from PoD to 'empty' via some hypercall(s) - or maybe
>   this isn't necessary... I can just allocate them to balloon down, and
>   then 'touch' each page (to make Xen populate it) then free them, as
>   long as I remember which pages are PoD and which are 'empty'

Jan
> For that to work, I would need to be able to do the following:
> . detect the unpopulated PoD pages via some hypercall(s)
> . allocate specific pages in Windows (MmAllocatePagesForMdl has Low
>   and High address parameters which suggest this sort of ability...)
> . make sure Windows doesn't touch those pages when I allocate them (I
>   guess it doesn't anyway, but I can't look at the source to check...)
> . change the pages from PoD to 'empty' via some hypercall(s) - or
>   maybe this isn't necessary...

The position of invalid entries in the P2M is not important. IIRC all entries start PoD. If Windows can allocate without zeroing the memory (for which you'll need MmAllocatePagesForMdlEx, so 2k3 SP1+), then the entry will remain PoD until the decrease reservation makes it invalid. Otherwise, there will be a populate followed by an immediate invalidation, which will clearly slow things down a little but is not disastrous. Providing the total number of populated pages does not reach the dynamic-max threshold, everything is fine.

The only caveat with Windows is that it is good to balloon early, because allocating enough guest pages to fulfill a balloon-down gets harder as the myriad of Windows kernel modules can quite aggressively land-grab, in my experience.

Paul
> The position of invalid entries in the P2M is not important. IIRC all
> entries start PoD. If Windows can allocate without zeroing the memory
> (for which you'll need MmAllocatePagesForMdlEx, so 2k3 SP1+), then the
> entry will remain PoD until the decrease reservation makes it invalid.
> Otherwise, there will be a populate followed by an immediate
> invalidation, which will clearly slow things down a little but is not
> disastrous. Providing the total number of populated pages does not
> reach the dynamic-max threshold, everything is fine.

That was my next question. So decrease_reservation on an unpopulated PoD page is okay. I think I have all the pieces of the puzzle then, and I don't think I have to do anything different to what I'm doing now (except maybe some accounting changes), which is always nice :)

> The only caveat with Windows is that it is good to balloon early,
> because allocating enough guest pages to fulfill a balloon-down gets
> harder as the myriad of Windows kernel modules can quite aggressively
> land-grab, in my experience.

It's also pretty good at letting go of memory when under pressure. My balloon-down runs in a loop with a timer on it - if it fails to get enough memory to balloon down, it just waits a bit (1 second I think) then tries to grab some more, and there is usually enough available by then. Unless you are getting silly with the amount of ballooning, it's normally straightforward. My test 2K3 DomU starts at 512MB and I can balloon it down to about 300MB shortly after boot before MmAllocatePagesForMdlEx starts failing to allocate memory, and then it takes another 20 seconds of going around the loop before I get down to about 200MB. I haven't tried to go lower than that, but I suspect it wouldn't end well if I tried :)

My balloon-up is the same - it just keeps trying to get memory until Xen has some free. I should probably put a backoff in there though.

Thanks for the info!

James
Aravindh Puthiyaparambil
2010-May-25 21:27 UTC
RE: [Xen-devel] GPLPV memory ballooning and x32
> > Any idea why this is occurring?
>
> Perhaps a bug in populate-on-demand, which I guess is what gets enabled
> when you specify the maxmem parameter for an HVM domain. It gets
> allocated its basic memory parameter initially, and extra memory gets
> allocated when the HVM guest first writes to it, up to the maxmem
> limit. Or that's the intent anyway.
>
> This is not a regression from 3.1 presumably (3.1 does not implement
> populate-on-demand at all)?

No, this is not a regression from 3.1. With 3.1 the domHVM would come up, but you would only be able to balloon up and down between N and what was specified in the memory option. The maxmem option had no effect. However, as you pointed out, ballooning was only kind of working with 3.1.

Is there a workaround for this issue in 3.4.2?

Thanks,
Aravindh
On 25/05/2010 22:27, "Aravindh Puthiyaparambil" <aravindh@gogrid.com> wrote:

> No, this is not a regression from 3.1. With 3.1 the domHVM would come
> up, but you would only be able to balloon up and down between N and
> what was specified in the memory option. The maxmem option had no
> effect. However, as you pointed out, ballooning was only kind of
> working with 3.1.
>
> Is there a workaround for this issue in 3.4.2?

Avoid the configuration option that causes the crash? ;-)

Someone involved in PoD, e.g. George Dunlap, will have to help you debug it, I guess.

 -- Keir
On Wed, May 26, 2010 at 7:40 AM, Keir Fraser <keir.fraser@eu.citrix.com> wrote:

> Avoid the configuration option that causes the crash? ;-)
>
> Someone involved in PoD, e.g. George Dunlap, will have to help you
> debug it, I guess.

Um, can someone summarize for me what the problem is that may be related to PoD? I couldn't figure out from the thread what the bug is.

 -George
On Fri, Jun 18, 2010 at 12:14:07PM +0100, George Dunlap wrote:

> Um, can someone summarize for me what the problem is that may be
> related to PoD? I couldn't figure out from the thread what the bug is.

Specify:

maxmem = 8192
memory = 2048

.. and the guest will crash. Some people have also reported host/Xen crashes in this scenario.

-- Pasi
On Fri, Jun 18, 2010 at 12:22 PM, Pasi Kärkkäinen <pasik@iki.fi> wrote:

> Specify:
> maxmem = 8192
> memory = 2048
>
> .. and the guest will crash.

For HVM guests, this is expected behavior if the guest does not have a balloon driver installed.

> Some people have also reported host/Xen crashes in this scenario.

Obviously this is a problem; I think we've fixed a bunch of these kinds of issues, so if you see any more, try to get a crash dump.

 -George