Hi,

I have noticed that, in the code of linux/drivers/xen/balloon.c, there is a snippet like this:

static int __init balloon_init(void)
{
	unsigned long pfn;
	struct page *page;

	if (!xen_pv_domain())
		return -ENODEV;
	.....
}

Does this mean the driver will not work in HVM? If so, where is the HVM-enabled code for that?

2010-11-16

Rui Chu

On Tue, 16 Nov 2010, Chu Rui wrote:
> Does this mean the driver will not work in HVM? If so, where is the
> HVM-enabled code for that?

Not yet, even though I have a patch ready to enable it:

git://xenbits.xen.org/people/sstabellini/linux-pvhvm.git 2.6.36-rc7-pvhvm-v1

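For illustration, a minimal sketch of how that init check might be relaxed for PV-on-HVM, assuming the xen_domain() and xen_hvm_domain() helpers from include/xen/xen.h; this is only a sketch, not necessarily what the pvhvm branch above actually does:

static int __init balloon_init(void)
{
	/* accept both PV and PV-on-HVM domains instead of bailing
	 * out on anything that is not PV */
	if (!xen_domain())
		return -ENODEV;

	if (xen_hvm_domain()) {
		/* HVM-specific setup would go here, e.g. discovering
		 * the initial reservation without the PV-only p2m
		 * machinery */
	}

	/* ... common initialization continues as before ... */
	return 0;
}
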
Oh, it's strange, the old version did not have this limitation.

At 2010-11-16 19:35:50, "Stefano Stabellini" <stefano.stabellini@eu.citrix.com> wrote:
> Not yet, even though I have a patch ready to enable it:
>
> git://xenbits.xen.org/people/sstabellini/linux-pvhvm.git 2.6.36-rc7-pvhvm-v1

2010/11/16 牛立新 <topperxin@126.com>:
> Oh, it's strange, the old version did not have this limitation.

No; unfortunately a great deal of functionality present in "classic xen" has been lost in the process of getting the core dom0 support into the pvops kernel. I think the plan is, once we have the necessary changes to non-xen code pushed upstream, we can start working on getting feature parity with classic xen.

Thank you for your kind reply, George.

I am interested in PoD memory. As I understand it, PoD mainly works in the system initialization stage: before the balloon driver begins to work, it can limit the memory consumption of the guests. However, after a while the guest OS will commit more memory, and at that point PoD cannot reclaim anything, even when the committed pages are IO cache, while the balloon keeps working all of the time.

Would you please tell me whether my understanding is correct?

Actually, in my opinion, the guest IO cache is mostly useless, since Dom0 will cache the IO operations anyway; such a double cache wastes memory. Is there any good idea for that, like Transcendent Memory, that works with HVM?

On 16 November 2010 20:56, George Dunlap <dunlapg@umich.edu> wrote:
> No; unfortunately a great deal of functionality present in "classic
> xen" has been lost in the process of getting the core dom0 support
> into the pvops kernel. I think the plan is, once we have the
> necessary changes to non-xen code pushed upstream, we can start
> working on getting feature parity with classic xen.

FYI, Transcendent Memory does work with HVM, with a recent Xen and the proper Linux guest-side patches (including Stefano's PV-on-HVM patchset). There is extra overhead in an HVM guest for each tmem call due to vmenter/vmexit, and I have not measured performance, but this overhead should not be too large on newer processors. Also, of course, Transcendent Memory will not work with Windows guests (or any guests that do not have tmem patches), while PoD is primarily intended to work with Windows (because, IIRC, Windows zeroes all of memory).

I agree that guest IO caching is mostly useless for CLEAN pages if the dom0 page cache is large enough for all guests (or if tmem is working). For dirty pages, using dom0 caching risks data integrity problems (e.g. the guest believes a transaction to disk is complete but the data is in a dom0 cache that has not been flushed to disk).

Dan

From: Chu Rui [mailto:ruichu@gmail.com]
Sent: Tuesday, November 16, 2010 8:37 AM
To: George Dunlap; Xen-devel@lists.xensource.com
Subject: Re: Re: [Xen-devel] Balloon driver for Linux/HVM

> Actually, in my opinion, the guest IO cache is mostly useless, since
> Dom0 will cache the IO operations anyway; such a double cache wastes
> memory. Is there any good idea for that, like Transcendent Memory,
> that works with HVM?

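To make "each tmem call" concrete, here is a rough sketch of the guest-side idea. The helper names (tmem_put_page/tmem_get_page) are hypothetical, not the exact API of the tmem patches; each such call is one hypercall, which is where the vmenter/vmexit cost in HVM comes from:

/* Illustrative sketch only; helper names are hypothetical and the
 * real tmem patches differ in detail.  When the guest is about to
 * drop a clean page-cache page, it first offers a copy to the
 * hypervisor; on a later miss it may get the page back without any
 * disk IO.  The hypervisor is free to discard "put" pages at any
 * time, so correctness never depends on a hit. */

static void evict_clean_page(int pool_id, unsigned long index,
			     struct page *page)
{
	/* copy-based put: one hypercall (vmenter/vmexit in HVM) */
	tmem_put_page(pool_id, index, page_to_pfn(page));
}

static int refault_page(int pool_id, unsigned long index,
			struct page *page)
{
	/* returns 0 and fills the page on a hit, nonzero on a miss */
	return tmem_get_page(pool_id, index, page_to_pfn(page));
}
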
Thank you, Dan.

It is a pity that tmem cannot be used for Windows guests. But can we disable the guest Windows caching? If so, the guest OS is no longer a memory hog (as referred to in your talk), and maybe we can manage its memory consumption on demand, as we would for a ring3 application.

BTW, as far as I am concerned, Windows XP does NOT zero all of memory at the startup stage. Actually, even memory allocated in a ring3 application is not committed until it is really accessed. So PoD memory may work well in that case.

On 17 November 2010 01:10, Dan Magenheimer <dan.magenheimer@oracle.com> wrote:
> FYI, Transcendent Memory does work with HVM, with a recent Xen and the
> proper Linux guest-side patches (including Stefano's PV-on-HVM patchset).
> [...] Also, of course, Transcendent Memory will not work with Windows
> guests (or any guests that do not have tmem patches), while PoD is
> primarily intended to work with Windows (because, IIRC, Windows zeroes
> all of memory).

PoD is a mechanism designed for exactly one purpose: to allow a VM to "boot ballooned". It's designed to allow the guest to run on less than the amount of memory it thinks it has until the balloon driver loads. After that, its job is done. So you're right, it is designed to work for the system initialization stage.

Regarding disk caching: I disagree about the guest IO cache. I'd say if one cache is to go, it should be the dom0 cache. There are lots of reasons for this:
* It's more fair: if you did all caching in dom0, then VM A might be able to use almost the entire cache, leaving VM B without. If each guest does its own caching, then it's using its own resources and not impacting someone else.
* I think the guest OS has a better idea which blocks need to be cached and which don't. It's much better to let that decision happen locally than to try to guess it from dom0, where we don't know anything about processes, disk layout, &c.
* As Dan said, for write caching there's a consistency issue; better to let the guest decide when it's safe not to write a page.
* If dom0 memory isn't being used for something else, it doesn't hurt to have duplicate copies of things in memory. But ideally guest disk caching shouldn't take away from anything else on the system.

My $0.02. :-)

-George

2010/11/16 Chu Rui <ruichu@gmail.com>:
> I am interested in PoD memory. As I understand it, PoD mainly works in
> the system initialization stage: before the balloon driver begins to
> work, it can limit the memory consumption of the guests. However, after
> a while the guest OS will commit more memory, and at that point PoD
> cannot reclaim anything, even when the committed pages are IO cache,
> while the balloon keeps working all of the time.
>
> Would you please tell me whether my understanding is correct?

I should point out also that the balloon driver will most likely (indirectly) pull memory from the guest's IO cache. The balloon driver asks the guest OS for a page, and the guest OS decides which page is the least useful at this point. If it doesn't have any free pages, it will most likely either take a page from the buffer cache, or page out a not-recently-used application memory page. The guest is really in the best position to know which choice will have the least impact on performance at that point.

Also, making dom0's buffer cache tiny and giving all the memory to the guests allows the guests to use memory the way they see fit as well. If the guest OS thinks having a larger buffer cache will be advantageous, it can do that; OTOH, if it thinks giving almost all the memory to processes will be more advantageous, it can do that too. Having memory set aside for a dom0 guest-disk cache doesn't give the guest that choice.

-George

2010/11/17 George Dunlap <George.Dunlap@eu.citrix.com>:
> PoD is a mechanism designed for exactly one purpose: to allow a VM to
> "boot ballooned". It's designed to allow the guest to run on less
> than the amount of memory it thinks it has until the balloon driver
> loads. After that, its job is done.

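To make George's "the balloon driver asks the guest OS for a page" concrete, here is a simplified sketch of the inflation step, modelled on drivers/xen/balloon.c. Error handling and the PV-specific p2m/m2p bookkeeping are omitted and the GFP flags are abbreviated, so treat it as a sketch rather than the driver as shipped:

static int balloon_give_up_one_page(void)
{
	struct xen_memory_reservation reservation = {
		.extent_order = 0,
		.domid        = DOMID_SELF,
	};
	unsigned long frame;
	struct page *page;

	/* The guest kernel, not Xen, picks the victim page: if
	 * nothing is free, the allocator reclaims from the page
	 * cache or swaps something out. */
	page = alloc_page(GFP_HIGHUSER | __GFP_NOWARN | __GFP_NORETRY);
	if (!page)
		return -ENOMEM;

	/* Hand the frame back to the hypervisor; the guest's
	 * current reservation shrinks by one page. */
	frame = page_to_pfn(page);
	set_xen_guest_handle(reservation.extent_start, &frame);
	reservation.nr_extents = 1;

	return HYPERVISOR_memory_op(XENMEM_decrease_reservation,
				    &reservation);
}
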
You are right, so the balloon is an important tool for adjusting the capacity of the buffer caches among the guests. But ballooning is usually criticized for its long reaction time. Would you please tell me how slow it is? Can we temporarily suspend the guest when the balloon cannot deflate as fast as required?

Furthermore, with HVM, the balloon does not work when the guest is short of memory and swapping, even when the host has a lot of surplus at that time. Besides promising a large size to the booting guest, is there any better way? Maybe the dom0 cache could reduce the cost of swapping, since the swap IO is also cached.

Anyway, PoD is a very cool contribution :-)

On 17 November 2010 17:53, George Dunlap <George.Dunlap@eu.citrix.com> wrote:
> I should point out also that the balloon driver will most likely
> (indirectly) pull memory from the guest's IO cache. The balloon
> driver asks the guest OS for a page, and the guest OS decides which
> page is the least useful at this point. If it doesn't have any free
> pages, it will most likely either take a page from the buffer cache,
> or page out a not-recently-used application memory page. The guest is
> really in the best position to know which choice will have the least
> impact on performance at that point.

Hello,

On Wed, Nov 17, 2010 at 07:50:18PM +0800, Chu Rui wrote:
> You are right, so the balloon is an important tool for adjusting the
> capacity of the buffer caches among the guests. But ballooning is usually
> criticized for its long reaction time. Would you please tell me how slow
> it is? Can we temporarily suspend the guest when the balloon cannot
> deflate as fast as required?
> Furthermore, with HVM, the balloon does not work when the guest is short
> of memory and swapping, even when the host has a lot of surplus at that
> time. Besides promising a large size to the booting guest, is there any
> better way?

Yes, there is: memory hotplug. It is under development now. Currently, I am waiting for comments on a new version of the patch. I will make it public when I receive those reviews.

Daniel

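For background, guest-side memory hotplug builds on the kernel's add_memory() interface. The fragment below is a rough, hypothetical sketch of how a Xen driver might register a new physical range; region sizing, the matching XENMEM_populate_physmap call, and onlining are glossed over, and Daniel's actual patches will differ:

#include <linux/memory_hotplug.h>

/* Hypothetical sketch, not Daniel's patch: instead of being limited
 * to re-inflating pages it gave up earlier (the balloon limit Chu
 * complains about), the guest announces a brand-new physical range
 * to the kernel and grows past its boot-time size. */
static int xen_hotplug_region(u64 start_paddr, u64 size)
{
	int rc;

	/* a real driver would first populate the range from Xen,
	 * e.g. via XENMEM_populate_physmap; omitted here */

	rc = add_memory(0 /* nid */, start_paddr, size);
	if (rc)
		return rc;

	/* the new memory section still has to be onlined, typically
	 * from userspace via /sys/devices/system/memory/memoryX/state */
	return 0;
}
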
Nice to hear that!

As far as I know, memory hotplug only works in a PV guest. Did you figure out an HVM version? If so, could you tell me the method you are using? Maybe binary patching with an HVM driver?

2010/11/17 Daniel Kiper <dkiper@net-space.pl>:
> Yes, there is: memory hotplug. It is under development now. Currently,
> I am waiting for comments on a new version of the patch. I will make
> it public when I receive those reviews.

Hello,

On Wed, Nov 17, 2010 at 10:05:40PM +0800, Chu Rui wrote:
> Nice to hear that!
> As far as I know, memory hotplug only works in a PV guest.

Which kernel version are you using?

> Did you figure out an HVM version? If so, could you tell
> me the method you are using? Maybe binary patching with an HVM driver?

Check the git://git.kernel.org/pub/scm/linux/kernel/git/jeremy/xen.git repository, xen/memory-hotplug head. It contains a working PV/HVM version (and some bugs which have been removed in the newest version, which will be published after verification).

Daniel

As George pointed out in a separate branch of this email thread, disabling a guest's caching is probably a bad idea in general.

The goal of tmem is to explore whether physical memory utilization can be improved when the guest is aware that it is running as a guest and when the guest kernel can be modified (slightly) for that case. This implies that Windows would have to be modified to use tmem, though it has been suggested that a Windows kernel expert might be able to somehow interpose binary code to do a similar thing. Since I know nothing about Windows, someone else will have to explore that.

From: Chu Rui [mailto:ruichu@gmail.com]
Sent: Tuesday, November 16, 2010 7:28 PM
To: Dan Magenheimer; xen-devel@lists.xensource.com; George Dunlap
Subject: Re: Re: [Xen-devel] Balloon driver for Linux/HVM

> It is a pity that tmem cannot be used for Windows guests. But can we
> disable the guest Windows caching? If so, the guest OS is no longer a
> memory hog (as referred to in your talk), and maybe we can manage its
> memory consumption on demand, as we would for a ring3 application.

> From: Daniel Kiper [mailto:dkiper@net-space.pl]
> Sent: Wednesday, November 17, 2010 6:04 AM
>
> Yes, there is: memory hotplug. It is under development now. Currently,
> I am waiting for comments on a new version of the patch. I will make
> it public when I receive those reviews.

Hi Daniel --

If I am not misunderstanding, memory hotplug (whether it works in an HVM guest or not) doesn't solve Chu's issue, because memory hotplug either (1) requires operator intervention or (2) creates denial-of-service conditions, such as a guest maliciously hot-plugging as much memory as it can.

Chu's stated issue is that ballooning is not responsive enough when memory demand increases unexpectedly. Since future memory demand can never be accurately predicted (only poorly guessed), some compensating mechanism must be in place to handle the poorly predicted cases. That's essentially what tmem is good for.

Dan

Exactly, Dan.

Since most of our guests are running Windows, it is hard to make a binary patch on Windows to get tmem working (although tmem solves the guest-swapping problem perfectly). I am afraid memory hotplug does not work in Windows guests either. Thus the only choice left for me is the balloon, is it?

2010/11/18 Dan Magenheimer <dan.magenheimer@oracle.com>:
> Chu's stated issue is that ballooning is not responsive enough when
> memory demand increases unexpectedly. Since future memory demand
> can never be accurately predicted (only poorly guessed), some
> compensating mechanism must be in place to handle the poorly predicted
> cases. That's essentially what tmem is good for.

Hi,

> > On Wed, Nov 17, 2010 at 07:50:18PM +0800, Chu Rui wrote:
> > > Furthermore, with HVM, the balloon does not work when the guest is short
> > > of memory and swapping, even when the host has a lot of surplus at that
> > > time. Besides promising a large size to the booting guest, is there any
> > > better way?
> >
> > Yes, there is: memory hotplug. It is under development now.
>
> If I am not misunderstanding, memory hotplug (whether it works in
> an HVM guest or not) doesn't solve Chu's issue, because memory hotplug
> either (1) requires operator intervention or (2) creates denial-of-service
> conditions, such as a guest maliciously hot-plugging as much memory
> as it can.

As I understand it (if I am wrong please correct me), in the last sentence above Chu stated that the balloon driver cannot expand memory above the limit declared at boot (which is true). That is why I mentioned memory hotplug, which is a solution for the problem described above.

1) That is true for now; however, it is easy to change (I sent you a proposal which we discussed a bit).

2) That is true only if maxmem is set above the memory limit available on the host. If it is set below that limit, it does not pose a problem for the host, because the guest cannot allocate more memory than maxmem.

> Chu's stated issue is that ballooning is not responsive enough when
> memory demand increases unexpectedly. Since future memory demand
> can never be accurately predicted (only poorly guessed), some
> compensating mechanism must be in place to handle the poorly predicted
> cases. That's essentially what tmem is good for.

As we agreed earlier, sometimes it is better to return memory to the system "directly" than to allocate it as a backend for swap. I have written a PoC (with some suggestions from you) which works and copes very well with high memory demands. However, it is a long way (to some extent) to a fully working implementation containing all the features (ballooning, tmem and memory hotplug). I could do that. However, at the beginning we should agree on which kernel version I should start development from. To me it looks like the git://git.kernel.org/pub/scm/linux/kernel/git/jeremy/xen.git repository, xen/balloon head, is the best. What do you think about that?

Daniel

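To illustrate the maxmem point: a minimal xm domain-config fragment (values are arbitrary, for illustration only). The guest boots with the "memory" amount, and neither ballooning nor memory hotplug can grow it past "maxmem", which is what keeps a malicious guest bounded:

# illustrative fragment of an xm/xend domain config; values arbitrary
memory = 1024	# boot with 1024 MiB
maxmem = 4096	# hard cap: balloon/hotplug may grow the guest to
		# at most 4096 MiB, regardless of what it requests
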
> As we agreed earlier, sometimes it is better to return memory to the
> system "directly" than to allocate it as a backend for swap. I have
> written a PoC (with some suggestions from you) which works and copes
> very well with high memory demands. [...] However, at the beginning we
> should agree on which kernel version I should start development from.
> To me it looks like the
> git://git.kernel.org/pub/scm/linux/kernel/git/jeremy/xen.git
> repository, xen/balloon head, is the best. What do you think about that?

Sorry, I've had to refocus my energy on other non-virtualization tmem-related issues* for a while, so I haven't had a chance to work on the future balloon driver features you are proposing.

What is the difference between the 2.6.36 balloon driver and the head of Jeremy's xen/balloon tree? I think ideally the kernel version for development should be the latest (2.6.36).

* Search for cleancache in http://lwn.net/Articles/412687/

Sorry for the misleading statements before.

The balloon, from my point of view, has the following problems:
1. It cannot deflate to a "negative" size, so guest swapping becomes a problem.
2. Long reaction time.
Its advantage is that it does not need any guest OS modification.

tmem solves problems 1 and 2 well, and memory hotplug at least solves the former. Unfortunately, both of them need OS modification, so I have to select the balloon.

I wonder if we can reduce the impact of guest swapping by using dom0/host caching to cache the swapped pages, if dom0/host has extra memory. The reaction time, if not too long, may be tolerable if we temporarily suspend a guest that has both a big balloon and an instant memory requirement.

Sincere thanks to all of you.

2010/11/19 Daniel Kiper <dkiper@net-space.pl>:
> As I understand it (if I am wrong please correct me), in the last
> sentence above Chu stated that the balloon driver cannot expand memory
> above the limit declared at boot (which is true). That is why I
> mentioned memory hotplug, which is a solution for the problem described
> above.

Hi,

On Thu, Nov 18, 2010 at 12:32:35PM -0800, Dan Magenheimer wrote:
> What is the difference between the 2.6.36 balloon driver and the head of
> Jeremy's xen/balloon tree? I think ideally the kernel version for
> development should be the latest (2.6.36).

I know that the best practice is to base new work on the latest stable kernel version; however, I am not sure that is the case here. The xen/balloon head contains most of the newest fixes and improvements, which have not been merged into the current stable kernel yet. That is why I think we should start from that and later merge it with current stable. However, Jeremy may have a better view of which version would be better for the start of development.

Daniel

Hello,

On Sat, Nov 20, 2010 at 12:53:17AM +0800, Chu Rui wrote:
> Sorry for the misleading statements before.
> The balloon, from my point of view, has the following problems:
> 1. It cannot deflate to a "negative" size, so guest swapping becomes a problem.
> 2. Long reaction time.
> Its advantage is that it does not need any guest OS modification.

No problem; however, could you tell me which type of system you are using (Linux, Windows, ...), its version, and the exact kernel version if applicable?

Daniel

Many thanks.

Currently we are using Xen 4.0. Dom0 is RHEL 5.4 (kernel 2.6.31-13); most of the DomUs are running Windows 2003, and some of them are also RHEL 5.4. NONE of the DomUs is patched.

2010/11/22 Daniel Kiper <dkiper@net-space.pl>:
> No problem; however, could you tell me which type of system you are
> using (Linux, Windows, ...), its version, and the exact kernel version
> if applicable?

Hi,

On Wed, Nov 24, 2010 at 03:38:14PM +0800, Chu Rui wrote:
> Many thanks.
> Currently we are using Xen 4.0. Dom0 is RHEL 5.4 (kernel 2.6.31-13);
> most of the DomUs are running Windows 2003, and some of them are also
> RHEL 5.4. NONE of the DomUs is patched.

Thx. Are you using xenballoond for self-ballooning?

Daniel

Hi,

Jeremy, could you comment on my e-mail below?

On Mon, Nov 22, 2010 at 10:51:46AM +0100, Daniel Kiper wrote:
> I know that the best practice is to base new work on the latest stable
> kernel version; however, I am not sure that is the case here. The
> xen/balloon head contains most of the newest fixes and improvements,
> which have not been merged into the current stable kernel yet. That is
> why I think we should start from that and later merge it with current
> stable. However, Jeremy may have a better view of which version would
> be better for the start of development.

Daniel

On 11/22/2010 01:51 AM, Daniel Kiper wrote:
> I know that the best practice is to base new work on the latest stable
> kernel version; however, I am not sure that is the case here. The
> xen/balloon head contains most of the newest fixes and improvements,
> which have not been merged into the current stable kernel yet.

The big difference between xen/balloon and current mainline is hugepage support, which is not yet in an upstreamable state. Aside from that, the current upstream balloon driver (in current git, post 2.6.37-rc3) is equivalent, and would make a good base for development.

The main difference between .37-rc and .36 is support for boot-time ballooning, but the changes to the balloon driver to support that are small, so .36 would probably make a reasonable base as well.

J

Hi,

Sorry for the late reply; I am very busy now.

On Wed, Nov 24, 2010 at 10:53:58AM -0800, Jeremy Fitzhardinge wrote:
> The big difference between xen/balloon and current mainline is hugepage
> support, which is not yet in an upstreamable state. Aside from that, the
> current upstream balloon driver (in current git, post 2.6.37-rc3) is
> equivalent, and would make a good base for development.
>
> The main difference between .37-rc and .36 is support for boot-time
> ballooning, but the changes to the balloon driver to support that are
> small, so .36 would probably make a reasonable base as well.

Thx. I will prepare patches for 2.6.36.

Daniel