Dan Magenheimer
2008-May-16 16:23 UTC
RE: [Xen-devel] [PATCH] balloon: selfballooning and post memory info via xenbus,
(Your reply came to me but not to the list... not sure why. So I've attached your full reply below.)

> ah ok, that is my failure, I need a bigger swapdisk ;)

Yes, definitely. If you are creating the swapdisk on an ext3 filesystem, you might try using sparse files. They won't take up much disk space unless/until they get swapped-to. There might be some performance ramifications though. (My testing has been with the swap disk as a logical volume, so I can't try sparse.)

> Ok, our plan is to have a high-availability xen farm. We're beginning
> with 2 Sun X2200s, each with 16GB RAM. The reason we'd like to use
> selfballooning is peak traffic on a server: normally a server needs
> about 256MB, but when it needs more, it shouldn't be a problem to
> give it 4GB. The idea is not to overbook the memory, but to have the
> ability to ride out memory failures caused by peaks.

Exactly what it is intended for!

I'd be interested in how it works for guests with memory=4096 and higher. All of my testing so far has been on a machine with only 2GB of physical memory, so I can test lots of guests but no large guests.

Thanks,
Dan

> -----Original Message-----
> From: viets@work.de [mailto:viets@work.de]
> Sent: Friday, May 16, 2008 9:49 AM
> To: dan.magenheimer@oracle.com; xen-devel-bounces@lists.xensource.com
> Subject: Re: [Xen-devel] [PATCH] balloon: selfballooning and post memory
> info via xenbus,
>
> Dan Magenheimer wrote:
> >> thanks for the patch, I was waiting for this feature.
> >
> > Thanks very much for the testing and feedback!  Could you
> > comment on what you plan to use it for?  (Keir hasn't accepted
> > it yet, so I am looking for user support ;-)
>
> Ok, our plan is to have a high-availability xen farm. We're beginning
> with 2 Sun X2200s, each with 16GB RAM. The reason we'd like to use
> selfballooning is peak traffic on a server: normally a server needs
> about 256MB, but when it needs more, it shouldn't be a problem to
> give it 4GB. The idea is not to overbook the memory, but to have the
> ability to ride out memory failures caused by peaks.
>
> > First question: Do you have a swap (virtual) disk configured and,
> > if so, how big is it?  (Use "swapon -s"; the size shows in KB.)
> > Selfballooning shouldn't be run in a domain with no swap disk.
> > Also, how big is your "memory=" in your vm.cfg file?
>
> #kernel = "/boot/xen-3.2.0/vmlinuz-2.6.18.8-xenU"
> #kernel = "/boot/vmlinuz-2.6.18.8-xenU"
> kernel = "/boot/vmlinuz-selfballooning"
> memory = 256
> maxmem = 8192
> vcpu = 4
> name = "test.work.de"
> vif = [ 'bridge=xenvlan323' ]
> disk = [ 'phy:/dev/sda,hda,w', 'file:/var/swap.img,hdb,w' ]
> root = "/dev/hda ro"
> extra = 'xencons=tty'
>
> swap_size = 256M
>
> > I'm not able to reproduce your dd failure at all, even with
> > bs=2047M (dd doesn't permit larger values for bs).
> > Your program (I called it "mallocmem") does eventually fail for
> > me, but not until i==88.  However, I have a 2GB swap disk configured.
>
> ah ok, that is my failure, I need a bigger swapdisk ;)
>
> > I think both tests are really measuring the total virtual memory
> > space configured, e.g. the sum of physical memory (minus kernel
> > overhead) and configured swap space.  I think you will find that
> > both will fail similarly with ballooning off, and even on a physical
> > system, just at different points in virtual memory usage.
> > Indeed, by adding additional output to mallocmem, I can see that
> > it fails exactly when it attempts to malloc memory larger than
> > the CommitLimit value in /proc/meminfo.  I expect the same is
> > true for the dd test.
> >
> > Note that CommitLimit DOES go down when memory is ballooned-out
> > from a guest.  So your test does point out to me that I should
> > include a warning in the documentation not only that a swap disk
> > should be configured, but also that the swap disk should be
> > configured larger for a guest if selfballooning will be turned on.
> >
> > Thanks,
> > Dan
> >
> >> -----Original Message-----
> >> From: xen-devel-bounces@lists.xensource.com
> >> [mailto:xen-devel-bounces@lists.xensource.com]On Behalf Of
> >> viets@work.de
> >> Sent: Friday, May 16, 2008 3:36 AM
> >> To: xen-devel@lists.xensource.com
> >> Subject: RE: [Xen-devel] [PATCH] balloon: selfballooning and post memory
> >> info via xenbus,
> >>
> >> Hello,
> >>
> >> thanks for the patch, I was waiting for this feature.
> >>
> >> I've tried this patch and I've seen that if I malloc a large
> >> amount of memory at once, it fails, but if I malloc a small
> >> amount first and then grow it slowly, it works.
> >>
> >> This is the highly sophisticated (:p) program I use to test the ballooning:
> >>
> >> #include <stdio.h>
> >> #include <stdlib.h>   /* malloc(), free(), system() */
> >> #include <unistd.h>   /* sleep() */
> >>
> >> int main () {
> >>     void *v;
> >>     int i;
> >>     for (i = 40; i < 50; ++i) {
> >>         v = malloc(i * 32 * 1024 * 1024);
> >>         printf("%i\n", i);
> >>         if (v != NULL) {
> >>             system("cat /proc/xen/balloon");
> >>             sleep(1);
> >>             free(v);
> >>         }
> >>     }
> >>     return 0;
> >> }
> >>
> >> I get the same effect if I change the blocksize in a dd:
> >>
> >> works:        dd if=/dev/zero of=/test.img count=1 bs=32M
> >> doesn't work: dd if=/dev/zero of=/test.img count=1 bs=256M
> >>
> >> Don't know whether this is the right test for this...
> >>
> >> greetings
> >> Torben Viets
> >>
> >> Dan Magenheimer wrote
> >>> OK, here's the promised patch.  The overall objective of the
> >>> patch is to enable limited memory load-balancing capabilities
> >>> as a step toward allowing limited memory overcommit.  With
> >>> this and some other minor hackery, I was able to run as
> >>> many as 15 lightly loaded 512MB domains on a 2GB system
> >>> (yes, veerrrryyy slooowwwlly).
> >>>
> >>> Review/comments appreciated.
> >>>
> >>> With this patch, balloon.c communicates (limited) useful
> >>> memory usage information via xenbus.  It also implements
> >>> "selfballooning", which applies the memory information
> >>> locally to immediately adjust the balloon, giving up memory
> >>> when it is not needed and asking for it back when it is needed,
> >>> implementing a first-come-first-served system-wide ballooning
> >>> "policy".  When a domain asks for memory but none is available,
> >>> it must use its own configured swap disk, resulting in
> >>> (potentially severe) performance degradation.  Naturally,
> >>> it is not recommended to turn on selfballooning in a domain
> >>> that has no swap disk, or if performance is more important
> >>> than increasing the number of VMs runnable on a physical machine.
> >>>
> >>> A key assumption is that the Linux variable vm_committed_space
> >>> is a reasonable first approximation of the memory needed by a domain.
> >>> This approximation will probably improve over time, but is
> >>> a good start for now.  The variable is bounded on the lower end
> >>> by the recently submitted minimum_target() algorithm patch;
> >>> thus O-O-M conditions should not occur.
> >>> The code is a bit complicated in a couple of places because of
> >>> race conditions involving xenstored startup relative to
> >>> turning on selfballooning locally.  Because the key variable
> >>> (vm_committed_space) is not exported by Linux, I implemented
> >>> a horrible hack which still allows the code to work in a
> >>> module; however, I fully expect that this part of the patch
> >>> will not be accepted (which will limit the functionality to
> >>> pvm domains only... probably OK for now).
> >>>
> >>> Existing balloon functionality which is unchanged:
> >>> - Set target for VM from domain0
> >>> - Set target inside VM by writing to /proc/xen/balloon
> >>> Existing balloon info on xenbus which is unchanged:
> >>> - /local/domain/X/memory/target
> >>> To turn on selfballooning:
> >>> - Inside a VM:  "echo 1 > /proc/xen/balloon"
> >>> - From domain0: "xenstore-write /local/domain/X/memory/selfballoon 1"
> >>> To turn off selfballooning:
> >>> - Inside a VM:  "echo 0 > /proc/xen/balloon"
> >>> - From domain0: "xenstore-write /local/domain/X/memory/selfballoon 0"
> >>> New balloon info now on xenbus:
> >>> - /local/domain/X/memory/selfballoon [0 or 1]
> >>> - /local/domain/X/memory/actual [kB] *
> >>> - /local/domain/X/memory/minimum [kB] *
> >>> - /local/domain/X/memory/selftarget [kB] * (only valid if selfballoon==1)
> >>> * writeable only by balloon driver in X when either
> >>>   selfballooning is first enabled, or target is changed
> >>>   by domain0
> >>>
> >>> Thanks,
> >>> Dan
> >>>
> >>> =================================
> >>> Thanks... for the memory
> >>> I really could use more / My throughput's on the floor
> >>> The balloon is flat / My swap disk's fat / I've OOM's in store
> >>> Overcommitted so much
> >>> (with apologies to the late great Bob Hope)
> >>
> >> --
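A minimal sketch of the sparse swap-disk setup Dan suggests above, assuming the backing file lives in dom0 on an ext3 filesystem and is exported to the guest as hdb as in Torben's config; the path and the 2GB size are only illustrative:

  # in dom0: create a 2GB sparse backing file (no blocks allocated until used)
  dd if=/dev/zero of=/var/swap.img bs=1M count=0 seek=2048

  # the guest config exports it unchanged, e.g.:
  #   disk = [ 'phy:/dev/sda,hda,w', 'file:/var/swap.img,hdb,w' ]

  # inside the guest: format it, enable it, and confirm the size in KB
  mkswap /dev/hdb
  swapon /dev/hdb
  swapon -s

  # the commit limit that the malloc/dd tests run into can be watched here
  grep -E 'CommitLimit|Committed_AS' /proc/meminfo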
Torben Viets
2008-May-16 16:50 UTC
Re: [Xen-devel] [PATCH] balloon: selfballooning and post memory info via xenbus,
Dan Magenheimer wrote:
> (Your reply came to me but not to the list... not sure why.
> So I've attached your full reply below.)

thanks, hope this time it works....

> Exactly what it is intended for!
>
> I'd be interested in how it works for guests with memory=4096
> and higher.  All of my testing so far has been on a machine with
> only 2GB of physical memory, so I can test lots of guests but
> no large guests.

I'll test it on Monday; now I'm going into my weekend ;)  I don't think
I was able to get more than 2GB of RAM allocated, but I will test it
again on Monday.

PS: In my first mail I attached my whole signature; I removed it
because I get enough spam ;)

Thanks
Torben Viets
Dan Magenheimer
2008-May-21 20:23 UTC
RE: [Xen-devel] [PATCH] balloon: selfballooning and post memory info via xenbus,
>> memory = 256
>> maxmem = 8192

By the way, I'm not sure if you knew this, but the above two
lines don't work as you might want.  The maxmem is ignored.
The domain is launched (in this example) with 256MB of
memory and (at least without hot-plug memory support in the
guest) memory can only be decreased from there, not increased.

So to run a guest which adjusts between 256MB and 8192MB
of memory, you must launch it with 8192MB and balloon it
down to 256MB.  If Xen does not have 8192MB free at
launch, launching the domain will fail.

Dan
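A minimal sketch of the launch-large-then-balloon-down workaround Dan describes here; the domain name is taken from Torben's config, the sizes are illustrative, and it assumes Xen really has 8192MB free at launch:

  # vm.cfg: launch with the largest size the guest should ever grow to
  #   memory = 8192

  xm create test.work.de          # fails if Xen does not have 8192MB free

  # immediately balloon the guest down to its normal working size
  xm mem-set test.work.de 256

  # when a load peak hits, give memory back (never above the launch size)
  xm mem-set test.work.de 4096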
Torben Viets
2008-May-21 21:52 UTC
Re: [Xen-devel] [PATCH] balloon: selfballooning and post memory info via xenbus,
hey,

thanks for the tip, I already have memory hotplug activated.  Now it
works fine with 7 domains, but none of them uses more than 256MB...
I'd like to test the ballooning with more than 2GB of memory, but at
the moment I don't have a live machine which needs that much memory...

but with maxmem and hotplug, that defines the maximum, right?

greetings
Torben Viets

Dan Magenheimer wrote:
>>> memory = 256
>>> maxmem = 8192
>
> By the way, I'm not sure if you knew this, but the above two
> lines don't work as you might want.  The maxmem is ignored.
> The domain is launched (in this example) with 256MB of
> memory and (at least without hot-plug memory support in the
> guest) memory can only be decreased from there, not increased.
>
> So to run a guest which adjusts between 256MB and 8192MB
> of memory, you must launch it with 8192MB and balloon it
> down to 256MB.  If Xen does not have 8192MB free at
> launch, launching the domain will fail.
>
> Dan
Chris Lalancette
2008-May-22 07:15 UTC
Re: [Xen-devel] [PATCH] balloon: selfballooning and post memory info via xenbus,
Dan Magenheimer wrote:
>>> memory = 256
>>> maxmem = 8192
>
> By the way, I'm not sure if you knew this, but the above two
> lines don't work as you might want.  The maxmem is ignored.
> The domain is launched (in this example) with 256MB of
> memory and (at least without hot-plug memory support in the
> guest) memory can only be decreased from there, not increased.

Assuming we are talking about PV guests, I think this is wrong.  My
knowledge is a little dated (mostly 3.1.x series knowledge), but unless
it has changed, this should work.  As I understand it, what happens is
that if you specify like the above, dom0 gets ballooned down by 256MB,
and then your domain is started with 256MB.  From there, you should be
able to use "xm mem-set <domid> <MB>" to set the amount of memory in the
guest, up to maxmem.

But there is a big caveat, which trips people up all of the time.  The
xm mem-set command will *not* automatically balloon down dom0 for you.
So if you allocated all memory to dom0 on bootup (say, 4GB), then
started just this one guest (so now dom0 == 3.75GB, domU == 256MB), and
try to xm mem-set, you will fail.  If you then "xm mem-set 0 3000" (or
something), you would then be able to balloon up the domU an additional
.75GB.

Chris Lalancette
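A minimal sketch of the sequence Chris describes; the domain ID 5 and the megabyte values are illustrative:

  # dom0 was booted holding nearly all memory; xm mem-set will not
  # balloon dom0 down for you, so free memory from it explicitly first
  xm mem-set 0 3000

  # now the PV guest can be ballooned up, as long as it stays <= its maxmem
  xm mem-set 5 1024

  # check how much memory the hypervisor still has free
  xm info | grep free_memory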
Keir Fraser
2008-May-22 07:18 UTC
Re: [Xen-devel] [PATCH] balloon: selfballooning and post memory info via xenbus,
On 22/5/08 08:15, "Chris Lalancette" <clalance@redhat.com> wrote:

>> By the way, I'm not sure if you knew this, but the above two
>> lines don't work as you might want.  The maxmem is ignored.
>> The domain is launched (in this example) with 256MB of
>> memory and (at least without hot-plug memory support in the
>> guest) memory can only be decreased from there, not increased.
>
> Assuming we are talking about PV guests, I think this is wrong.  My
> knowledge is a little dated (mostly 3.1.x series knowledge), but unless
> it has changed, this should work.  As I understand it, what happens is
> that if you specify like the above, dom0 gets ballooned down by 256MB,
> and then your domain is started with 256MB.  From there, you should be
> able to use "xm mem-set <domid> <MB>" to set the amount of memory in the
> guest, up to maxmem.
>
> But there is a big caveat, which trips people up all of the time.  The
> xm mem-set command will *not* automatically balloon down dom0 for you.
> So if you allocated all memory to dom0 on bootup (say, 4GB), then
> started just this one guest (so now dom0 == 3.75GB, domU == 256MB), and
> try to xm mem-set, you will fail.  If you then "xm mem-set 0 3000" (or
> something), you would then be able to balloon up the domU an additional
> .75GB.

This is certainly how it is supposed to still work!

 -- Keir
Dan Magenheimer
2008-May-22 17:31 UTC
RE: [Xen-devel] [PATCH] balloon: selfballooning and post memory info via xenbus,
I stand corrected... it does work for PV guests... but not for HVM
guests (at least not the ones I've tested with).  Also, changing the PV
guest's balloon target will work, instead of mem-set.

Interesting...  What is the mechanism behind this?  Is memory-hotplug
used?  (It appears not to be configured in my pv guest.)  Or is the pv
kernel actually booted with maxmem MB and the balloon driver immediately
steals memory down to "memory="?  (I'm wondering, for example, if the
number of "struct page"s to handle maxmem is allocated at guest kernel
boot, or increased when memory is increased.)

A pointer to the code implementing this mechanism would also be helpful!

Thanks,
Dan
viets@work.de
2008-Jun-30 14:25 UTC
Re: [Xen-devel] [PATCH] balloon: selfballooning and post memory info via xenbus,
Hello,

are there any plans to bring this feature into the xen kernel?

Perhaps as a feature marked experimental?

I have been using it for about a month without any trouble on live
systems.

greetings
Viets
Dan Magenheimer
2008-Jun-30 16:04 UTC
RE: [Xen-devel] [PATCH] balloon: selfballooning and post memory info via xenbus,
Hi Viets --

I'm cleaning up a scripts-only version (no balloon driver changes) this
week for submission to xen-devel before the 3.3 functionality freeze.
For more info, see:

http://www.xen.org/files/xensummitboston08/MemoryOvercommit-XenSummit2008.pdf
http://wiki.xensource.com/xenwiki/Open_Topics_For_Discussion?action=AttachFile&do=get&target=Memory+Overcommit.pdf

Thanks,
Dan

> -----Original Message-----
> From: viets@work.de [mailto:viets@work.de]
> Sent: Monday, June 30, 2008 8:25 AM
> To: xen-devel@lists.xensource.com; dan.magenheimer@oracle.com
> Subject: Re: [Xen-devel] [PATCH] balloon: selfballooning and post memory
> info via xenbus,
>
> Hello,
>
> are there any plans to bring this feature into the xen kernel?
>
> Possibly as a feature marked experimental?
>
> I have been using it for about a month without any trouble on live systems.
>
> greetings
> Viets
>
> Viets wrote:
> > hey,
> >
> > thanks for the tip, I already have memory hotplug activated. Now it
> > works fine with 7 domains, but none of them uses more than 256MB...
> > I'd like to test the ballooning with more than 2GB of memory, but at
> > the moment I don't have a live machine that needs that much memory...
> >
> > but with maxmem and hotplug, this defines the maximum, right?
> >
> > greetings
> > Torben Viets
> >
> > Dan Magenheimer wrote:
> >>>> memory = 256
> >>>> maxmem = 8192
> >>
> >> By the way, I'm not sure if you knew this, but the above two
> >> lines don't work as you might want. The maxmem is ignored.
> >> The domain is launched (in this example) with 256MB of
> >> memory and (at least without hot-plug memory support in the
> >> guest) memory can only be decreased from there, not increased.
> >>
> >> So to run a guest which adjusts between 256MB and 8192MB
> >> of memory, you must launch it with 8192MB and balloon it
> >> down to 256MB. If Xen does not have 8192MB free at
> >> launch, launching the domain will fail.
> >>
> >> Dan
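To make the launch-high-then-balloon-down approach described above concrete, here is a minimal sketch. It assumes the classic xm toolstack of this era (xm create / xm mem-set); the config path /etc/xen/test.cfg is illustrative, and the domain name is taken from the vm.cfg quoted earlier in the thread.

# vm.cfg: boot the guest with the full allocation, since without
# memory hotplug it can only be ballooned down from its boot size
memory = 8192
maxmem = 8192

# from domain0, right after the guest starts:
xm create /etc/xen/test.cfg      # fails if Xen has less than 8192MB free
xm mem-set test.work.de 256      # balloon the guest down to its idle working size (MB)
xm list test.work.de             # confirm the new memory target

# during a peak, raise the target again -- but never above the
# 8192MB the domain was launched with:
xm mem-set test.work.de 4096

With selfballooning enabled inside the guest, the balloon driver performs the "balloon down" step on its own, so domain0 only has to provide the large launch allocation.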
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel